
DevOps

AWS Components: A (Not So) Brief Introduction - Part 1

Amazon Cloud: 60 seconds of history

If you're a so-called normal Internet user like me, there is a big chance that you have used the Amazon online store to purchase something, where "something" expands to almost anything that can be sold and shipped directly to your accepting hands (not including military aircraft, atomic bombs, Bengal tigers and other dangerous things. Oh well).

Amazon started out as an online bookstore back in 1994 and expanded to many other items and services beyond books circa 1998, but it was only around 2006 that Amazon started to sell cloud services to common mortals like ourselves. Below, we'll talk about the AWS cloud components and their typical uses in modern infrastructure deployments. Note that the AWS cloud includes a lot of services, and Amazon adds new ones and improves the already available ones every year. We'll cover the most important and notable ones here.

Object Storage services: AWS S3

S3, which stands for Simple Storage Service, was one of the very first services offered by the Amazon cloud. It offers a way to store files (aka objects) that can be retrieved or streamed using HTTPS endpoints. You can store any kind of file (text, documents, pictures, videos, whatever) and use them as part of your website. The native way to access any file stored on S3 is a web request.

S3 can serve your files using static HTTPS URLs, but it can also stream them. This allows you to use S3 as the backend for any video you want to stream. One of the most notable users of S3 streaming is... Netflix! Also, some training portals (acloud.guru being another good example) store and serve their course videos using S3.

Another common use of S3 in cloud-based systems is backup storage: think of a database server that produces a daily dump. You can compress that dump and send it to an S3 storage space, also known as a bucket.
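A rough sketch of that backup step, assuming Python with boto3, valid AWS credentials, and a hypothetical bucket named my-backups-bucket:

```python
import boto3

s3 = boto3.client("s3")

# Push a compressed database dump into a backups bucket.
# The bucket name, local path, and key are hypothetical examples.
s3.upload_file(
    Filename="/var/backups/db-dump-2017-11-20.sql.gz",
    Bucket="my-backups-bucket",
    Key="mysql/db-dump-2017-11-20.sql.gz",
)
```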

S3 is very reliable: standard S3 availability is 99.99%, with a durability of 99.999999999% (yes, that was 11 nines) for your stored objects. S3 also offers several storage classes with different prices and durability/availability trade-offs.

Other very useful S3 features are versioning and expiration. You can keep different versions of the same file, and you can create expiration rules that let you better manage your storage space and delete old objects. Use less space, pay less money. A very sensible business model.
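A minimal sketch of such an expiration rule, again with boto3 and the same hypothetical bucket (the prefix and retention periods are made-up values):

```python
import boto3

s3 = boto3.client("s3")

# Delete objects under "mysql/" after 90 days and, with versioning enabled,
# drop non-current versions after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backups-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-backups",
                "Filter": {"Prefix": "mysql/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```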

S3 also supports ACLs and authentication based on the Amazon cloud. You can serve certain objects to the entire world, or only to a specific set of origins.
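Two common ways to express that "everyone vs. a chosen few" distinction with boto3 (bucket and key names are hypothetical): make an object world-readable via its ACL, or keep the bucket private and hand out time-limited pre-signed URLs instead.

```python
import boto3

s3 = boto3.client("s3")

# Option 1: mark a single object as publicly readable.
s3.put_object_acl(Bucket="my-assets-bucket", Key="img/logo.png", ACL="public-read")

# Option 2: keep everything private and generate a pre-signed URL
# that is only valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-assets-bucket", "Key": "reports/q3.pdf"},
    ExpiresIn=3600,
)
print(url)
```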

Without a doubt, S3 is one of the most convenient services AWS offers, and it was the first big application of the "object storage" concept that is now part of most modern clouds (public or private).

A typical view from the AWS console showing an S3 bucket used for storing backups:

AWS1.png

 

Compute services part 1: AWS EC2

 

EC2 (Elastic Compute Cloud) was the next very important service offered by AWS. Simplifying the matter, EC2 is essentially virtual machines. You could say: "Ahh, but this is something other hosting providers do!" Absolutely and definitively not!

The "elastic" in the name is not decoration; it is a term that has modeled and defined the way modern cloud computing works. First, the machines are served in a very easy-to-use way: the end user does not need to go through typical operating-system installation steps, nor networking and storage configuration. Everything is provisioned by the cloud in an automated way, and the end user only needs to select how much power s/he needs (the instance type) and which base operating system s/he wants (the Amazon Machine Image, or AMI).

The user can take snapshots of any already-working machine (including any applications installed by the user), convert that snapshot into an AMI, and deploy new machines with a few clickety-clicks, avoiding the need to repeat post-installation steps.
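The same bake-an-AMI step can also be scripted; a minimal boto3 sketch, assuming a hypothetical instance ID of an already-configured web server:

```python
import boto3

ec2 = boto3.client("ec2")

# Bake an AMI from a running, already-configured instance.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",    # hypothetical instance ID
    Name="webserver-golden-image-v1",
    Description="Web server with the application pre-installed",
    NoReboot=True,                       # snapshot without stopping the instance
)
print(resp["ImageId"])                   # the new AMI ID, ready for future launches
```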

Another way to fully automate things is the use of user data, or a bootstrap script. At creation time, the user can pass a script (a simple bash or sh shell script) with specific instructions that will run the first time the virtual machine (aka instance) boots. Those instructions can include application installation steps, or even restore application files from object storage (another very frequent use of S3).
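A rough sketch of launching an instance with such a bootstrap script via boto3; the AMI ID, bucket, and package choices are hypothetical, and the script assumes an Amazon Linux-style image with the AWS CLI available:

```python
import boto3

ec2 = boto3.client("ec2")

# User data: runs once, on first boot, as root.
user_data = """#!/bin/bash
yum -y install httpd
aws s3 cp s3://my-backups-bucket/app/index.html /var/www/html/index.html
service httpd start
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```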

Note: Amazon's hypervisor layer is based on a highly customized version of Xen, one of the first hypervisors used in the open-source world.

The block storage service used by EC2 instances is also elastic (block storage = disks or volumes). EBS (Elastic Block Store) provides virtual volumes for your instances. The nicest feature here is the ability to create new volumes and attach them at will to your instances. Need more disk space? Create an EBS volume and attach it to your instance. There are also different volume types with different I/O characteristics, which lets you better control your deployed application and provide the right I/O to the right workload.
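A minimal sketch of that create-and-attach flow with boto3 (the availability zone, size, and instance ID are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB general-purpose volume in the same AZ as the instance.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")

# Wait until the volume is ready, then attach it as an additional disk.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/xvdf",
)
```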

A view of the EC2 administration panels, and a running EC2 instance:

AWS2.png

 

Compute services part 2: Elasticity expanded with Load Balancing and Auto Scaling

AWS3.png

EC2 by itself can be combined with additional services provided by the AWS networking layer, supported by the monitoring component of AWS: CloudWatch.

First, let's talk about ELB (Elastic Load Balancer). The cloud can provide you with an HTTP/HTTPS load balancer. If you want real redundancy, you can deploy EC2 instances in different AWS data centers, or availability zones, within an AWS region, then create an ELB load balancer and put your HTTP/HTTPS instances behind it. Your site (www.yoursite.com) will be available and reachable through your ELB, which monitors the instances distributed across your availability zones. If a specific AZ goes down along with your instances, the ELB will simply send the traffic to the surviving machines in the remaining AZs.
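A rough boto3 sketch of that setup using an application load balancer; all names, subnet/VPC IDs, and instance IDs are hypothetical, and the two subnets are assumed to live in different availability zones:

```python
import boto3

elb = boto3.client("elbv2")

# An internet-facing load balancer spread across two subnets (two AZs).
lb = elb.create_load_balancer(
    Name="bookstore-lb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Scheme="internet-facing",
)

# A target group with a health check, so failed instances are taken out of rotation.
tg = elb.create_target_group(
    Name="bookstore-web",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-01234567",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Listen on port 80 and forward traffic to the target group.
elb.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Put the existing web instances behind the load balancer.
elb.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa1111bbbb22223"}, {"Id": "i-0ccc3333dddd44445"}],
)
```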

Second, let's talk about Auto Scaling. You can define a set of rules that allow your applications to grow and shrink horizontally when the load changes. Let's explain this with a more practical example: you have an ELB with 3 web servers behind it, and your application is an online bookstore. Normally, you only need 2 or 3 servers for your daily load. Then something happens: a new book about cats starts selling at an alarming rate, and your 3 servers begin to suffer under extreme load. What is the AWS solution for this? You can place your servers in an Auto Scaling group (ASG) and define the following rules (a rough sketch in code follows the list):

  • If the CPU usage inside my servers goes over 75% for 5 continuous minutes, add more servers, up to a maximum of 10 in the ASG
  • If the CPU usage inside my servers falls below 10% for 15 continuous minutes, remove servers from the ASG until only 3 servers remain
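Here is a minimal boto3 sketch of the scale-out half of those rules: an Auto Scaling group between 3 and 10 instances, a simple scaling policy, and a CloudWatch alarm that fires after 5 minutes above 75% CPU. The names and the launch configuration are hypothetical, and the scale-in rule would be a mirror image of the alarm shown here.

```python
import boto3

asg = boto3.client("autoscaling")
cw = boto3.client("cloudwatch")

# The ASG: never fewer than 3 web servers, never more than 10.
asg.create_auto_scaling_group(
    AutoScalingGroupName="bookstore-asg",
    LaunchConfigurationName="bookstore-web-lc",   # hypothetical, created beforehand
    MinSize=3,
    MaxSize=10,
    DesiredCapacity=3,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# A simple policy: each time it is triggered, add 2 instances.
policy = asg.put_scaling_policy(
    AutoScalingGroupName="bookstore-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# The CloudWatch alarm: average CPU above 75% for 5 consecutive minutes.
cw.put_metric_alarm(
    AlarmName="bookstore-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "bookstore-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```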

When the extreme load begins (thanks to the new book about cats, or a Black Friday), the AWS cloud monitoring component (CloudWatch) will notice that the CPU usage is above the indicated threshold, and with no further intervention it will begin to create new web servers, in a fully automated way, and add them to the ELB. The system will stop adding new servers only when the load stabilizes or when it reaches the ASG's maximum number of instances.

When nobody wants to purchase the cat book any more (or when Black Friday ends) and the load falls below the indicated threshold, AWS will delete the extra servers (in order to keep your costs at bay) until you are back at the minimum number of servers in the ASG.

This way of reacting to changes in load is the true elasticity concept in the cloud. Just try to do that in a non-cloud system!

The ASG creation panel on the AWS GUI:

AWS3-1.png

Notification services and the decoupled model

 

Cloud computing brought new concepts and paradigms used by DevOps and SysOps when deploying cloud-aware applications: microservices and the decoupled model.

The decoupled model on modern IT platforms uses a lot of mission-specific microservices that perform specific tasks and communicate with each other using REST and messaging services. Those messaging services are normally based on lightweight protocols and components (like RabbitMQ and message queues). This is where the very first service offered by Amazon, SQS, enters the scene, along with the notification service also available in the cloud: SNS.

SQS (Simple Queue Service) allows those microservices to exchange simple messages with each other in a fast and efficient way.
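A minimal producer/consumer sketch with boto3 (the queue name and message body are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# A producer microservice drops a job message on the queue...
queue = sqs.create_queue(QueueName="video-jobs")
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"video": "s3://my-videos/incoming/cats.mp4"}',
)

# ...and a worker microservice polls for it, processes it, and deletes it.
resp = sqs.receive_message(
    QueueUrl=queue["QueueUrl"],
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,
)
for msg in resp.get("Messages", []):
    print("got job:", msg["Body"])
    sqs.delete_message(QueueUrl=queue["QueueUrl"], ReceiptHandle=msg["ReceiptHandle"])
```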

SNS (Simple Notification Service) provides publish/subscribe delivery of messages to different components in the Amazon cloud, and it can also send SMS and SMTP (email) messages to external systems.

A typical application is sending SMTP messages (aka mail) when auto-scaling events occur, or when a variable monitored by CloudWatch exceeds a threshold. SNS can also interact with SQS, publishing messages into queues so other components can consume them.
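A small boto3 sketch of that pattern: a hypothetical topic with an email subscription, which could also be referenced as an AlarmAction of a CloudWatch alarm or as a notification target of an ASG.

```python
import boto3

sns = boto3.client("sns")

# Create a topic and subscribe an operator's mailbox to it
# (the address must confirm the subscription before deliveries start).
topic = sns.create_topic(Name="ops-alerts")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="ops@example.com",   # hypothetical address
)

# Publish a notification; every subscriber of the topic receives it.
sns.publish(
    TopicArn=topic["TopicArn"],
    Subject="Auto Scaling event",
    Message="bookstore-asg scaled out: 2 new web servers were added.",
)
```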

Another typical use of those notification/messaging services is with S3. For example, you can define an SQS/SNS integration that informs a specific microservice in your deployment whenever something or someone puts a file in an S3 bucket. That file could be a video that needs to be transcoded to a common format; you can then combine SQS/SNS with Amazon Elastic Transcoder (yes, another AWS service) in order to transform the video from format X to format Y.
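A minimal sketch of the first half of that pipeline, assuming the bucket and queue from the earlier examples already exist and that the queue's access policy allows S3 to send to it:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to drop a message on the "video-jobs" queue whenever a new object
# appears under the "incoming/" prefix of the bucket.
s3.put_bucket_notification_configuration(
    Bucket="my-videos",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:video-jobs",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "incoming/"}]}
                },
            }
        ]
    },
)
```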

A typical sample of the decoupled model in action:

AWS4-1.png

In the next part we'll cover everything else, digging into anything from serverless to advanced networking services and cloud automation. Stay tuned. 

Meet the Loom team @AWS re:invent 2017! Click here to schedule an F2F meeting and a private demo.

reinvent-signature-1.jpg

 

 

Loom Systems delivers an AIOps-powered log analytics solution, Sophie, to predict and prevent problems in the digital business. Loom collects logs and metrics from the entire IT stack, continually monitors them, and gives a heads-up when something is likely to deviate from the norm. When it does, Loom sends out an alert and recommended resolution so DevOps and IT managers can proactively attend to the issue before anything goes down.