Information Technology Blog

AWSome Day Boston, 2018

Posted On: Friday, May 25, 2018 Posted By: Brian Zimmel Tags: Information Technology, Innovation

Amazon recently held a free, one-day training event in Boston to introduce the core services of its cloud platform, Amazon Web Services, better known as AWS.

As a noob when it comes to AWS, I attended to get an understanding of the resources AWS has to offer its customers. I was joined by my teammate Tim Kontos, who has been working with AWS to get our first web application hosted on the platform.

After a word from the event’s sponsors, speaker Scott Jones gave us a brief history of AWS and an overview of its purpose, presenting in a way that made me feel I was sitting in a college class with one of the fun professors. In fact, he is a Technical Training Specialist with AWS and was excellent at explaining not only what the resources can do but also how they can benefit us as customers.

Cloud computing is all about spinning up resources as they are needed and releasing them when they are no longer needed. When an application sees a surge in data requests or user activity, additional CPUs or memory can be allocated to handle the demand. In a self-hosted data center, that would mean racking additional servers, running cables, and installing software to get up and running; AWS eliminates this physical task. When demand is low, the resources can be released. Amazon charges for the use of its services, so the less you use, the better. Scott explained it best by reminding us what our parents would always say: “Turn off the lights when you leave the room.” Not only does AWS make it easy to turn off the lights and save money, but with its triad of services – load balancing, auto scaling, and log analysis – this can be done automatically.
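To make the “turn off the lights” idea concrete, here is a minimal sketch using boto3, the AWS SDK for Python. The instance ID is a made-up placeholder, not a real resource:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Turn off the lights": stop an instance when demand is low.
# Stopped instances do not accrue compute charges.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])

# ...and turn them back on when traffic picks up again:
# ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])
```

In practice, an auto scaling group would make these decisions for you based on metrics, but the point stands: capacity is an API call, not a trip to the server room.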

[Photo: AWSome Day, Boston, May 2018]

The global infrastructure of AWS is what makes this all possible. Amazon currently has 18 regions around the world, each hosting multiple Availability Zones; each zone is a cluster of data centers, with zones typically some 50 km apart. This separation allows traffic to be routed automatically to another zone in the case of an outage or heavy traffic load.
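For anyone curious what that infrastructure looks like from code, boto3 can list the regions and Availability Zones available to an account; a quick sketch:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List every region currently available to the account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# List the Availability Zones within the client's own region.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```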

It is within these regions that you set up resources to mimic the data center you would otherwise have self-hosted. The idea, as mentioned in the training, is to “write the infrastructure to suit the application.” The foundational services of AWS include Elastic Compute Cloud (EC2), a Linux or Windows server; storage in the form of Simple Storage Service (S3) for hosting web content, media files, and data lakes, or Elastic Block Store (EBS) volumes for databases; and a Virtual Private Cloud (VPC) to tie it all together in a 3-tier architecture, with the ability to create subnets, firewalls, and gateways that allow or restrict traffic to secure your resources.
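As a rough boto3 sketch of how those pieces fit together (the AMI ID and bucket name are hypothetical placeholders, and a real deployment would also need route tables, gateways, and security groups):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve out a private network (the VPC) and a subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
subnet = ec2.create_subnet(
    VpcId=vpc["Vpc"]["VpcId"],
    CidrBlock="10.0.1.0/24",
)

# Launch a single EC2 server into that subnet.
# AMI IDs vary by region; this one is a placeholder.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)

# An S3 bucket for static web content or media files.
# Bucket names must be globally unique.
s3 = boto3.client("s3")
s3.create_bucket(Bucket="my-example-web-content-bucket")
```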

Which brings up a concern I often hear, including at the training while enjoying the free boxed lunch provided to us: people are afraid of the cloud and worry about it getting hacked. The cloud is nothing more than a data center owned by a vendor such as Amazon instead of the organization itself. Amazon’s data centers are nondescript, fenced-in, restricted-access buildings in locations selected to mitigate environmental risks. The infrastructure is constantly updated with the latest patches and is built with the redundancy necessary to provide its services with little downtime.

Amazon considers itself responsible for security “of the cloud,” while its customers are responsible for security “in the cloud” through firewalls, encryption, and access management. People can be given user credentials to log into the console and manage the resources available to them, based on the policies set for their account or IAM group.
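A small example of that access management in boto3. The user name here is hypothetical; the read-only S3 policy is one of AWS’s managed policies:

```python
import boto3

iam = boto3.client("iam")

# Create a user (hypothetical name for illustration).
iam.create_user(UserName="example-developer")

# Attach an AWS managed policy granting read-only S3 access,
# so the user can view objects but not modify or delete them.
iam.attach_user_policy(
    UserName="example-developer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```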

The same model applies to programmatic access, such as API calls authenticated with an access key and secret key. Roles can be assigned, or assumed for the temporary access needed when spinning up additional resources. AWS CloudTrail logs every API call made, including calls that exceed the privileges granted to the account, which helps with tuning security policies and alerting on malicious activity. AWS also helps its customers meet compliance requirements by providing workbooks on how to set up services for certification, along with Amazon Inspector to detect gaps in security. In a self-hosted environment, it could take hours, days, even weeks of inspecting and documenting the infrastructure to satisfy an audit; Amazon gives us the materials needed to better manage this task.
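Here is what assuming a role for temporary credentials might look like with boto3; the role ARN, account number, and session name are all placeholders:

```python
import boto3

sts = boto3.client("sts")

# Assume a role to obtain short-lived credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleDeployRole",
    RoleSessionName="temporary-deploy-session",
)["Credentials"]

# Use the temporary credentials for a scoped-down client; they
# expire automatically, so access is not left lying around.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```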

The biggest takeaway I had from this training was the sheer amount of resources, configurations, and materials Amazon provides to get an application up and running. AWS built Quick Starts full of scripts designed to deploy resources based on best practices for high security and availability. AWS supports multiple types of relational and NoSQL databases to suit the needs of the application. AWS provides replication and redundancy to ensure the application is always running. AWS offers management tools to stop the application and tear down the resources when they are no longer needed. As the speaker Scott put it, “if we do this right, we give Jeff Bezos as little money as possible.”
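Quick Starts are delivered as CloudFormation templates, so standing a whole stack up and tearing it back down can each be a single API call. A sketch with boto3, using a placeholder template URL:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Stand the stack up from a template (URL is a hypothetical
# placeholder; Quick Starts publish their templates in S3).
cfn.create_stack(
    StackName="example-quickstart",
    TemplateURL="https://example-bucket.s3.amazonaws.com/template.yaml",
)

# ...and tear everything down again when the application
# is no longer needed, so the meter stops running.
cfn.delete_stack(StackName="example-quickstart")
```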

Want to learn more about AWS resources? There are numerous resources out on the web, including Qwiklabs, which provides quests and labs on AWS. If you join and search for “introduction,” you can find many free self-paced labs that create temporary AWS accounts, allowing you to build your own stack of resources at no cost!

Brian Zimmel is a Senior Software Engineer at the University of Massachusetts Medical School.