When studying to be a Solutions Architect on AWS and other cloud services, it is important to know the infrastructure you are building on, both for the exams and for real-life scenarios. There have been many breaches recently, and I believe they were caused, at least in part, by Engineers and Architects who were either misinformed or not trained properly in the use of security features. When using AWS, the main concept to know is the shared responsibility model.
The shared responsibility model describes what AWS is responsible for and what we, as customers, are responsible for. While studying security best practices on AWS, I came across some interesting aspects that I would like to share.
Note that I am not writing this to help with a specific exam. These concepts go deeper than anything the Associate-level AWS exams cover. I am sharing this as a means to help Architects out there conceptualize what they are responsible for with regard to securing their infrastructure.
As stated in the documentation, AWS is responsible for security “of the cloud.” The customer is responsible for securing what is placed “in the cloud.” Depending on which services your organization uses, there are actually three different versions of the shared responsibility model, and a good Solutions Architect needs to be familiar with all of them.
Shared Responsibility in Infrastructure Services
First is what AWS calls Infrastructure Services. With these services, AWS handles the bare metal servers, hypervisors, storage, and core networking (basically all the foundational items), as well as securing the Global Infrastructure (Regions, Availability Zones, and Edge locations). This pertains to services like VPC, EC2, EBS, and Auto Scaling. The customer is responsible for securing the user accounts (IAM policies), VPC access control, operating systems, applications, data integrity, and encryption.
There are a lot of variables to think about and, hopefully, your organization has requirements and structures in place so you can be clear on what needs to be done. Some examples of the pitfalls that have been seen in Infrastructure Services security include:
- Weak passwords (both users and on servers)
- Open network ports that aren’t necessary (e.g., telnet)
- Lack of encryption leaving data accessible to attackers
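The open-port pitfall above is easy to audit for. Below is a minimal sketch (the rule shape and helper name are my own, not an AWS API) of flagging security-group-style ingress rules that expose risky ports such as telnet to the whole internet:

```python
# Hypothetical helper: flag ingress rules (shaped like simplified EC2
# security group entries) that open risky ports to 0.0.0.0/0.
RISKY_PORTS = {21: "ftp", 23: "telnet", 3389: "rdp"}

def risky_open_ports(ingress_rules):
    """Return (port, service) pairs that are open to the entire internet."""
    findings = []
    for rule in ingress_rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
            findings.append((rule["port"], RISKY_PORTS[rule["port"]]))
    return findings

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # HTTPS to the world: expected
    {"port": 23, "cidr": "0.0.0.0/0"},    # telnet to the world: flag it
    {"port": 22, "cidr": "10.0.0.0/8"},   # SSH from an internal range only
]
print(risky_open_ports(rules))  # → [(23, 'telnet')]
```

In a real environment you would pull the live rules (for example via the EC2 API) rather than hard-coding them, but the filtering logic is the same.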
There is a huge opportunity for human error to make your organization vulnerable at this stage. For instance, if a public-facing web server is left unpatched against a known vulnerability and gets exploited, AWS will not be responsible.
Shared Responsibility in Container Services
Second is what AWS calls Container Services, though this doesn’t necessarily refer only to systems that use what we traditionally think of as “containers.” Examples here are more “managed” than the first category and include Relational Database Service (RDS), Elastic MapReduce (EMR), and EC2 Container Service (ECS). AWS has all the responsibility from before, plus the platforms, applications, base operating systems, and networking configurations. Here, the customer is still responsible for IAM, customer data integrity, data encryption, and network traffic protection.
As with Infrastructure Services, weak passwords are still an issue, as is lack of encryption. Because of the managed nature of these services, AWS takes on more responsibility, and the risk of human error decreases slightly.
For instance, suppose a customer is using RDS as their managed database solution. AWS handles replication to read replicas and Multi-AZ standbys, as well as replacing hosts when needed. But it is still the customer who writes and reads the data; responsibility for securing that data, its transport, and the application accessing it lies with the customer.
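One concrete piece of that transport responsibility is refusing unencrypted database connections. A minimal sketch, assuming a PostgreSQL-style client and a hypothetical hostname and CA bundle path, of building connection settings that cannot silently fall back to plaintext:

```python
# Hedged sketch: force TLS with certificate verification for an RDS
# connection. The host, user, and CA bundle path here are placeholders.
def rds_connection_params(host, user, ca_bundle="rds-ca-bundle.pem"):
    """Return client settings that require encrypted, verified transport.

    'verify-full' (libpq semantics) rejects both plaintext connections
    and servers presenting a certificate for the wrong hostname.
    """
    return {
        "host": host,
        "user": user,
        "sslmode": "verify-full",   # never downgrade to unencrypted
        "sslrootcert": ca_bundle,   # the AWS-published RDS CA bundle
    }

params = rds_connection_params("mydb.example.rds.amazonaws.com", "app_user")
print(params["sslmode"])  # → verify-full
```

The point is that encryption in transit is opt-in from the client side; AWS provides the capability, but the customer has to use it.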
Shared Responsibility in Abstracted Services
The third type is what AWS refers to as Abstracted Services. Examples of these services are DynamoDB, S3, and CloudFront. AWS becomes responsible for almost all of the infrastructure; however, the customer is still responsible for IAM, data integrity, and data encryption. These services offer far fewer opportunities for human error to compromise security, although security mistakes still happen frequently. For example, if the customer leaves public read permissions open on an S3 bucket, that data is fully accessible on the open internet, and AWS is not to blame.
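That S3 mistake is detectable in the bucket’s access grants. A sketch (the grant shape is simplified from what the S3 API returns, and the helper is my own) of spotting a READ grant to the public “AllUsers” group:

```python
# Hedged sketch: detect the classic S3 misconfiguration of granting
# READ to the AllUsers group, i.e., everyone on the internet.
PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

def has_public_read(grants):
    """True if any ACL-style grant gives READ to the public group."""
    return any(
        g.get("grantee") == PUBLIC_GROUP and g.get("permission") == "READ"
        for g in grants
    )

grants = [
    {"grantee": "arn:aws:iam::123456789012:root", "permission": "FULL_CONTROL"},
    {"grantee": PUBLIC_GROUP, "permission": "READ"},  # the misconfiguration
]
print(has_public_read(grants))  # → True
```

In practice you would also turn on the bucket-level public access blocking that S3 offers, so this state cannot be reached in the first place.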
My background in network operations and infrastructure reminds me of the term demarc. In networking, the demarc refers to the point where ownership of a connection or circuit transitions from the telephone company to the customer. This is what I think of when dealing with these different scenarios.
In designing and implementing security for your AWS environment, the first question that needs answering is: Where is the demarc between AWS and your organization? The answer can vary widely depending on the services your organization uses. Then, on your organization’s side of that demarc, use Multi-Factor Authentication wherever possible, encrypt all data (at rest, and in transit where possible), grant users the fewest permissions they need, and block every network port that is not necessary to your application.
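Two of those practices, least privilege and MFA, can live in a single IAM policy. A minimal sketch (the bucket name is hypothetical, and whether MFA belongs in this particular policy depends on your workload) of a policy that grants only the actions an application needs and requires MFA to use them:

```python
import json

# Hedged sketch of a least-privilege IAM policy document.
# "example-app-bucket" is a placeholder, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],  # only what's needed
        "Resource": "arn:aws:s3:::example-app-bucket/*",
        "Condition": {
            # only allow callers who authenticated with MFA
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```

Starting from an empty policy and adding actions as needs arise keeps you on your side of the demarc deliberately, rather than by accident.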