During the last month or so I’ve started and continued a tradition: I want to sit all of the AWS exams and write an overview-style blog post for each, to help people know what to expect before they arrive. It’s designed to supplement quality online training such as that provided by LinuxAcademy.com.

Please share this post!

I want to help as many people in the community as possible with this article. You can help me do this by:

  • Sharing the URL to the post on social media such as Twitter or LinkedIn. You can post the URL yourself, or retweet or share the post where you saw this article. If I’m not tagged, then please add me :)
  • Additionally, you can post this article on any online communities you use, such as Reddit, or forums such as Hacker News, Quora, Experts Exchange or Medium.

But that’s enough of that - let’s dive in!

General Exam Thoughts

  • The exam is focused on the operations aspect of AWS. There are almost no architecture-style questions (maybe one or two); instead it’s about how to correctly implement things, the constraints and features of services, and diagnosing their faults and issues.
  • It is much cleaner in the sense that architecture and developer question styles have been removed. The SysOps exam never had any real overlap with the SA/Developer associate exams and this separation has been maintained.
  • Without a doubt it’s still the most challenging of the associate-level exams - it does feel slightly less challenging though. I remember there being a substantial gap between the old SA & Developer exams and the legacy SysOps; this gap may have narrowed a tiny amount, but not much.

In terms of the specifics, I’ve split the rest of the information into some broad high level categories - these don’t align with the exam domains, but are instead based loosely on AWS Product groupings.

Virtual Private Cloud (VPC) and Networking

  • IPv6 is featured in the new SysOps exam. There is an expectation that you know how IPv6 addressing, versus IPv4, impacts ingress to and egress from an instance and VPC.
  • You should understand how IPv4 works - the difference between a NAT Gateway and an Internet Gateway and when to use each.
  • Likewise, from an IPv6 perspective - how does this change things? Can you control IPv6 instances so that egress is allowed, but not ingress? How? Which VPC entity allows this?
  • You should understand VPCs, Subnets, Route Tables, Routes, NACLs, Security Groups, NAT Gateways, Internet Gateways, VPC Peering and all other VPC entities from an operational perspective. How can VPCs be made to allow communication? How is VPC peering configured, and what are its features and limitations? How does VPC peering impact DNS resolution of an instance? How do you configure VPC Peering step by step? Are there any cross-account or cross-region limitations?
  • Be 100% comfortable with private IPs, public IPs and Elastic IPs. How do they change, when are they assigned, and are they ever removed?
  • The ability to diagnose networking problems as it relates to EC2 is essential. Be able to look at all of the networking/VPC related configuration in an account and identify why an EC2 instance has no internet access and what corrective actions are required.
  • You should really understand DNS - everyone should really understand DNS :)
  • Specifically, learn how Route53 works — what features it offers, both traditional DNS features and enhancements: the different routing methods, health-checks and failover. It’s important to know this area at an operational level and be able to use it to implement things.
  • Learn how Route53 can be used for weighted traffic distribution to support development teams.
  • Learn how Route53 can be used for performance improvements and control — latency based and geolocation based. When would you use each, know their similarities and differences.
  • Be comfortable with what a DNS Zone is - how would you override public DNS with VPC-based DNS for internal AWS entities? Can you make an EC2 instance see different DNS values inside a VPC than if it was a public machine on the internet? If so, how?
  • What is N-Tier architecture? Understand the rationale behind using N-Tier, and be able to implement this within AWS. How do AWS offerings such as Load Balancers (public and private) and managed databases fit into this? Given an N-Tier requirement, be comfortable knowing how many ELBs are required and where to place them.
  • Understand the differences between a Security Group (SG) and a Network Access Control List (NACL). Can SGs block traffic flow? Can they allow it? Can NACLs block? Can they allow? What are SGs and NACLs attached to, or associated with?
  • What is stateful filtering? Are SGs and/or NACLs stateful?
  • The exam expects you to understand how to configure a hardware VPN into a VPC within AWS. Learn the process step-by-step.
  • How do static and dynamic VPNs differ?
  • Learn the architecture of Direct Connect (DX) and under what circumstances you would use DX vs VPN. When wouldn’t or couldn’t you use DX vs VPN - what are the major limitations and pros/cons of DX.
  • Learn how to identify when something is present without it being explicitly stated. If an EC2 instance has a public IP and can communicate with the internet — there is an Internet Gateway attached to the VPC it’s in. This ability will help on diagnostic-style questions.
  • Be 100% familiar with the process required to provide EC2 Instances and Subnets with Public Access.
  • What’s a private subnet? What’s a public subnet?
  • Are you comfortable with how to implement an Internet Gateway? What about a NAT Gateway? What requirements does each have? Routes, SGs & NACLs.
  • How does DNS work inside a VPC? Is DNS enabled by default in a VPC? Can it be enabled specifically on a per-VPC basis? What about on a per-subnet basis?
  • What are nested security groups and when are they used? What capabilities do they offer? What limitations do they have?
  • You should be comfortable with IP routing in general. Understand routing notation: 0.0.0.0/0 is the default route, 1.1.1.1/32 is a specific IP, and X.X.X.X/24 is a subnet of a certain size.
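The routing notation above can be explored with Python’s ipaddress module. This is just an illustrative sketch (the prefixes are examples, not AWS defaults), including the longest-prefix-match rule that route tables apply:

```python
import ipaddress

# 0.0.0.0/0 matches every IPv4 address - the default route
default_route = ipaddress.ip_network("0.0.0.0/0")

# A /32 is a single host
single_host = ipaddress.ip_network("1.1.1.1/32")

# A /24 subnet holds 256 addresses (AWS reserves 5 of them per subnet)
subnet = ipaddress.ip_network("10.0.1.0/24")

print(default_route.num_addresses)  # 4294967296
print(single_host.num_addresses)    # 1
print(subnet.num_addresses)         # 256
print(ipaddress.ip_address("10.0.1.50") in subnet)  # True

# Route tables pick the most specific (longest-prefix) matching route
routes = [default_route, subnet]
dest = ipaddress.ip_address("10.0.1.50")
best = max((r for r in routes if dest in r), key=lambda r: r.prefixlen)
print(best)  # 10.0.1.0/24
```

Both routes match 10.0.1.50, but the /24 wins because it is more specific - exactly why a local route beats the 0.0.0.0/0 pointing at an Internet Gateway.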

Security, Accounts & Permissions

  • Being able to read and understand IAM policy documents is essential. It’s not enough just to be able to parse one and think you understand it. You will probably see 2-3 questions which involve policy documents, and those policy documents may have multiple parts. It will be beneficial to be 100% comfortable with how ALLOW and DENY work, and which takes priority.
  • How and when is iam:PassRole used?
  • What is the Shared Responsibility Model? Which parts of an AWS environment are you as an Operations engineer responsible for, and which parts do AWS manage? For products like ELB/RDS/EC2, be fully aware of which bits AWS expects you to operationally manage.
  • In terms of ELB - understand the concept of a Cipher and how to make an ELB (Classic or App) work with older browsers.
  • Spend some time looking at IAM reporting — what reports are available? MFA Report, Roles Report, Credentials Report. Which are real & which aren’t. Be very clear with the information exposed in each of the reports.
  • How would you implement the ability for Active Directory (AD) logins to the AWS console? What services are involved? What steps are involved?
  • It’s worth spending time looking through the AWS Organisations feature for controlling multi-account scenarios. There are two aspects, consolidated billing and permissions/login management - be comfortable with both. You need to be able to review a scenario and suggest if AWS Organisations are a valid ‘fit’ for that scenario.
  • Be very very clear on when you would use IAM Users, Groups and Roles.
  • Can you log in to a user, group and/or role?
  • Can you assume a user/group/role?
  • Can users/groups/roles be used in the console/API/CLI? Are there limitations?
  • Can access keys be used to log in to the console?
  • Can a username/password be used with the CLI/API?
  • You will need to have a high-level understanding of what web identity federation is - and how it’s implemented within AWS - what products will you need to use?
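The ALLOW/DENY priority rules above can be modelled in a few lines of Python. This is a deliberately simplified sketch (it ignores conditions, resources, wildcards and NotAction), but it captures the evaluation order: an explicit Deny always wins, then any Allow, otherwise the implicit deny applies:

```python
def evaluate(statements, action):
    """Simplified IAM evaluation for a single action:
    explicit Deny beats everything, then any Allow, else implicit deny."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "ExplicitDeny"  # Deny overrides any Allow
            decision = "Allow"
    return decision

# A made-up two-statement policy for illustration
policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny",  "Action": ["s3:PutObject"]},
]

print(evaluate(policy, "s3:GetObject"))     # Allow
print(evaluate(policy, "s3:PutObject"))     # ExplicitDeny - Deny wins
print(evaluate(policy, "s3:DeleteObject"))  # ImplicitDeny - never mentioned
```

Note that s3:PutObject is both allowed and denied, and the deny takes priority - the exact behaviour multi-part policy questions tend to probe.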

S3 and CloudFront

S3 and CloudFront featured heavily in the exam, in many cases paired together. There is an expectation that you understand both from an operational perspective — you should be comfortable implementing, operating and fault-finding.

  • S3 will use HTTP error codes at various times; you should study and be comfortable with the error codes used and how they map to S3 states. The S3 error responses documentation is a good place to start. Focus on any codes which relate to missing files, permissions issues and any performance management or throttling.
  • Learn in-depth all you can about S3 permissions. You should understand the best way to assign permissions to S3 buckets & objects, or just objects.
  • What methods can be used to grant access to buckets while avoiding operational overhead?
  • What about delegating rights to another entity? What about using pre-signed URLs?
  • Who, by default, owns objects in an S3 bucket? Does this change with delegation?
  • When would you use a role to allow S3 access, and if a role is assumed by another AWS account, who owns any objects created?
  • Do you understand that S3 buckets are flat and don’t really have folders?
  • How is the name of an object presented as a folder structure?
  • For a given S3 bucket — what performance improvement options do you have? CloudFront? Changing the object name format? Do you understand S3 partitions, and at what transactions-per-second level performance management is really needed?
  • What is MFA Delete? When would you use it? What does enabling it do? And once enabled, which operations require an MFA one-time code?
  • Be able to implement the most appropriate way to share S3 buckets with the public or other AWS accounts. Understand the options you have available and the ways to secure this access. The concept of bucket and object ownership should be second nature before sitting the exam.
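The flat-namespace point above is worth internalising: S3 has no folders, only keys, and the console synthesises a folder view by listing with a delimiter. A rough sketch of that behaviour (the keys are made-up examples, and this mimics only the basic ListObjects prefix/delimiter logic):

```python
# S3 buckets are flat: "folders" are just a key-naming convention.
keys = [
    "logs/2019/01/app.log",
    "logs/2019/02/app.log",
    "images/cat.png",
    "readme.txt",
]

def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Mimic a ListObjects call: return (objects, common_prefixes)
    directly under a prefix, grouping deeper keys into 'folders'."""
    objects, prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything past the next delimiter collapses into one prefix
            prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(prefixes)

print(list_with_delimiter(keys))
# (['readme.txt'], ['images/', 'logs/'])
print(list_with_delimiter(keys, prefix="logs/"))
# ([], ['logs/2019/'])
```

The “folders” images/ and logs/ exist only because of the delimiter grouping; delete every key under logs/ and the folder vanishes with them.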

Databases

Understanding Databases from an operational perspective is critical for the new exam. I remember DBs featuring in the older exam, but they feature much more in the new one.

  • What is Amazon Athena, and when and how would you use it?
  • It’s really essential for this exam to understand the types of things this product allows you to do.
  • Can it query objects on S3? What type of objects? How fast is it? What are its limitations?
  • Can it be used to query logs in some way? CloudWatch?
  • For all of the AWS Database types — learn how resilient they are by default. Can they withstand an AZ failure by default?
  • Understand how to improve the availability (so being able to add HA) of all of the databases.
  • Learn the terms OLTP and OLAP as they relate to databases.
  • Which AWS Database products are OLAP and which are OLTP? DDB, RDS, EMR, Redshift and others.
  • What benefits does a read-replica provide? Performance? HA? Can they be in the same region, or in different regions? In different accounts?
  • Do all AWS Database engines support read-replicas?
  • Understanding of how Databases within AWS can be backed-up and restored is absolutely essential.
  • Review DB Backups/Snapshots - how are automatic and manual backups/snapshots different. What happens to backups/snapshots when you delete a DB instance? How long are backups/snapshots retained for?
  • It’s worth understanding the step-by-step process in restoring a DB from a snapshot. Actually do this in an AWS environment.
  • Become familiar with Multi-AZ for RDS databases. What benefits does it provide, and how does it work? What are the limits? You should be comfortable in picking whether to implement Multi-AZ, or read-replicas, or both for a given scenario.

Integration

  • You should have a general understanding of, and the ability to implement, Amazon SNS, SQS and DynamoDB, plus a high-level understanding of Kinesis.
  • Practice diagnosing issues involving the above services. If you had an auto scaling group which scaled based on CPU load, and that autoscaling group was servicing an SQS queue - how could you enhance this to scale based on the amount of work in the queue?
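The queue scenario above is usually solved by scaling on backlog rather than CPU: publish the queue depth (e.g. the ApproximateNumberOfMessagesVisible SQS attribute) as a custom metric and size the group to clear it. A sketch of the arithmetic, with illustrative numbers rather than real metrics:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance_per_min,
                     min_size=1, max_size=20):
    """Size an Auto Scaling group from queue backlog: enough instances
    to drain the queue at an acceptable rate, clamped to group limits."""
    needed = math.ceil(queue_depth / msgs_per_instance_per_min)
    return min(max(needed, min_size), max_size)

# 1,200 queued messages, each instance clears ~100 per minute
print(desired_capacity(1200, 100))   # 12
print(desired_capacity(0, 100))      # 1  - never below the group minimum
print(desired_capacity(50000, 100))  # 20 - capped at the group maximum
```

CPU-based scaling misses work sitting in the queue while instances are idle between polls; backlog-per-instance tracks the actual demand.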

Monitoring & Operations

There were many questions on understanding your AWS infrastructure - using AWS tools and services to gain insight into and visibility of your infrastructure - plus questions on general day-to-day management and migrations into AWS.

  • Pick some data volumes, 1GB, 100GB, 1TB, 500TB and beyond. For each of those data volumes, and given a deadline and internet speeds, you should understand how best to get that data into AWS.
  • Would this process be an upload over VPN, an upload to S3, an upload over direct connect, shipping the data to AWS in some way?
  • Specifically, focus on understanding the capabilities of Snowball/Snowball Edge. Research the process end-to-end for using both of these products. Pretend you are doing an actual migration of data from 1, 10 or 100 servers/workstations/file stores - how would you perform this migration?
  • Given operational information (VPC, Subnets, Routes, NACL, SG, Instance Roles) be able to diagnose why an instance can’t access S3, or DynamoDB, or another VPC.
  • Spend some time to really get to know and understand Amazon Inspector. You should understand its features, its limitations, when you would and - maybe more importantly - when you wouldn’t use it. Do you always have access to this product, or are there limitations/requirements?
  • What is the AWS Personal Health Dashboard - as with the Inspector above, learn its features and limitations. Understand who gets access, is it freely available, or are there requirements to gaining access? At an operational level, you should be able to use it, and be confident doing so.
  • For an EC2 instance without any agent software installed, what metrics are visible in CloudWatch? If you install an agent, how does this set change? Focus on things like Network In, Network Drops, Memory Utilisation, CPU Utilisation, CPU Ready and others. A good understanding of what comes as standard with EC2 is crucial for a few exam questions.
  • AWS accounts have limits … some can be changed, some can’t. It’s worth understanding these at a high level; get a feeling for which can be changed. Do you know how to determine how you compare to these limits for a given account? If you were logged into an AWS console now, could you check how far off a given service limit you are? You should learn how.
  • AWS offer a number of support plans; compare them and understand the differences. Know what you gain access to at each level.
  • Research what the AWS Trusted Advisor is - what does it allow you to do? Does it come as part of AWS, available to all customers? Are there multiple versions? What does each version provide versus the others?
  • Spend time becoming comfortable with AWS Resource Tagging. Why would you use resource tagging, and how would you use it? What additional capabilities does it offer? Permissions, billing, operations?
  • What is AWS Config used for? What are its features? Can it be used to check for open ports on an EC2 instance, or an SG/NACL? Can this behaviour (if it exists) be adjusted for different ports?
  • Review how logging works for Classic and Application Load Balancers. Where are the logs stored? What information is exposed in those logs? Can the logging be adjusted in any way? If so, how? Can you obtain retroactive logs in any way?
  • Learn how CloudTrail works from an operational perspective - how regional and global services are presented in CloudTrail. Learn the pitfalls of using a multi-trail implementation and how to work around those pitfalls.

Compute and Storage

There was a heavy emphasis placed on compute and storage - a real understanding of the specifics of all products in these categories is essential - more so than any other AWS exam.

  • Learn what RAID 0, RAID 1 and RAID 5 are.
  • What benefits do they provide, which can be implemented in AWS, and when would and wouldn’t you use each?
  • It’s worth spending some time understanding how EBS snapshots work, both for single volumes and volumes which may be part of a Software Raid set defined in an instance operating system.
  • You should understand the difference between instance store volumes and EBS volumes. I’d go so far as to suggest knowing the architecture at a physical level.
  • Instance store volumes are attached to the host, whereas EBS volumes are network attached. More importantly, when an instance is shut down, rebooted or terminated, how are instance store volumes impacted?
  • What options are available to recover or replace SSH keys on EC2 instances. Assume you have lost the private key part… what now? Are the options available influenced by EBS root volumes vs instance store? If so how?
  • You should have a good understanding of how Storage Gateway works - how to implement it on-premises and within AWS.
  • How you would implement it to extend the life of a legacy backup system.
  • Understand what types of Storage Gateway there are, and how Storage Gateway volumes are presented to hosts. What protocols? What limitations? What features?
  • There are various EC2 limit errors which you may receive during normal operations - RequestLimitExceeded, InstanceLimitExceeded. It’s worth spending some time getting familiar with these.
  • Really understand and be comfortable with the EBS Volume Types.
  • In this exam, more than any other associate exam, the numbers matter. Knowing the performance ranges for IOPS and throughput is important, as is knowing when you would use io1 vs gp2.
  • When would you use HDD vs SSD?
  • Do instances place any limitations on what performance a volume can deliver?
  • Diagnosis of storage performance features in the new exam. Given a scenario of sub-par performance, being able to diagnose and suggest fixes is critical.
  • Can the volume type of an EBS volume be changed? If so, how? What about the size - can it be increased? Decreased? Can this be done with the EC2 instance online?
  • It’s important to be familiar with all of the AWS Storage products in terms of their usage scenarios. So for S3, EBS, Instance Store and EFS - how can they be used with EC2? Can they be mounted? Are they block or object storage? Which products are shared, i.e. can be used with multiple EC2 instances at the same time?
  • Become really familiar with EC2 Systems Manager. Know its capabilities, when to use it, and when not to. Can it be used to run commands? To perform software updates? To change SSH keys? What else?
  • Do any AWS products exist which can validate if EC2 instances have an appropriate set of software updates applied? If so, which product.
  • There are lots of EC2 instance types - C, R, M, T - learn the pros and cons of all of them.
  • Focus specifically on the T instance type. Do you understand how they are different from all the other classes? The assumed overcommit on host CPU? Learn how CPU credits work in detail, and the various options that exist for increasing and managing credits. What happens when CPU credits run out?
  • Learn how changing from one instance type to another works. And what capabilities this adds or removes. e.g. T2 -> M, C->M, M->R, C->T2. This is from a CPU, Disk, Network and cost perspective.
  • How do Spot Instances work, and when would you implement them? For a given scenario involving cost, risk and the ability to interrupt workloads — you should be comfortable suggesting whether spot instances are appropriate or not.
  • Spend some time learning and really understanding the different types of reservations available, specifically scheduled reservations and region- or AZ-scoped reservations. Do they all reduce cost and reserve capacity, or not?
  • Look at the architecture of Lambda. What options do you have to upload functions, how does the security work, and in what ways can Lambda functions be invoked? Are you comfortable with ‘event-driven architecture’, and do you know what AWS products are used for scheduled Lambda executions?
  • Gain some experience of Amazon Machine Images (AMIs) - how they work, how they are used, how they are made, and how they are shared publicly or with specific AWS accounts.
  • Learn how to offer paid AMIs, including paid support on AMIs.
  • For Storage Gateway — what is a ‘file gateway’ and how does it work? How are the ‘things’ it provides presented? What protocol(s) is/are used?
  • For Storage Gateway - what is VTL mode? How are the ‘things’ it provides presented? What protocol(s) is/are used?
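The T-class credit mechanics mentioned above come down to simple arithmetic: one CPU credit is one vCPU at 100% for one minute, credits accrue at a fixed hourly rate, and sustained load above the baseline drains the balance. A sketch using t2.micro-like numbers as assumptions (roughly 6 credits earned per hour, a 10% baseline):

```python
def credit_balance(balance, earn_rate_per_hour, cpu_percent, hours):
    """Model a T2 credit balance hour by hour. One credit = one vCPU at
    100% for one minute, so running at cpu_percent burns
    (cpu_percent * 60 / 100) credits per hour."""
    for _ in range(hours):
        burned = cpu_percent * 60 / 100
        balance = max(balance + earn_rate_per_hour - burned, 0.0)
    return balance

# Light load (5% CPU): 6 earned vs 3 burned per hour, so the balance grows
print(credit_balance(0, 6, 5, 10))     # 30.0
# Flat out (100% CPU): 60 burned vs 6 earned - the balance drains to zero,
# after which the instance is throttled back towards its baseline
print(credit_balance(100, 6, 100, 3))  # 0.0
```

The baseline is just the CPU level where earn and burn balance (6 credits/hour earned equals 10% of 60 credits/hour burned) - which is why a T2 can burst briefly but not sustain high CPU.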

Automation & Infrastructure-as-code

In the other associate exams, and even the older SysOps associate, a basic understanding of automation/infrastructure-as-code was enough; the expectations in the new SysOps have increased!

  • It’s worth understanding CloudFormation (cfn) end to end.
  • Templates, Stacks, Stack Sets, Change Sets, Nested Templates/Stacks, cross-stack references, stack roles, custom resources and others.
  • Become familiar with how CloudFormation custom resources work. What do they do, when would you use them, how do you implement them.
  • Learn what data cfn passes to custom resources, and how.
  • Learn how custom resources signal back to cfn if they succeed or fail.
  • What is cfn-signal? When is it used? How is it used? What are the networking requirements for an EC2 instance to be able to use cfn-signal?
  • Given a cfn stack — what is the least risky and most efficient way to see how changes to a cfn template will change the stack and its resources?
  • Learn how cfn ChangeSets work and when you would use them.
  • What is a StackRole — when would you use one? What permissions are required to apply a stack role? What permissions are required to update a stack with a stack role?
  • You need to have a high-level understanding of what Chef and Puppet are. What do they do, how do they work. Are they the same basic product? Or do they have major differences in the way they do things.
  • What AWS products allow you to use Chef and/or Puppet within AWS in a managed way. I’d suggest you get comfortable with using any of these services at an operational/practical level.
  • Given a scenario — be able to answer the best way to implement a stack architecture: isolated stacks, nested stacks, stack references. So if a number of stacks needed access to a shared S3 bucket for data access - how would you implement that with cfn?
  • Learn how you can adjust CloudFormation’s actions when a CreateStack or UpdateStack fails. Should it delete? Roll back? Do nothing? What’s the difference, and when and why would you change this?
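On the custom resource signalling point above: a Lambda-backed custom resource receives a pre-signed ResponseURL in its event, and must PUT a JSON body back to it with a SUCCESS or FAILED status before the stack operation can proceed. A hedged sketch of that contract (the event values are made-up, and a real handler needs error handling around the PUT):

```python
import json
import urllib.request

def build_response(event, status, data=None, reason=""):
    """Assemble the JSON body CloudFormation expects a custom resource
    to PUT back to the pre-signed ResponseURL from the request event."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or "See CloudWatch Logs for details",
        "PhysicalResourceId": event.get("PhysicalResourceId", "custom-resource"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def send(event, status, data=None):
    """PUT the signal; CloudFormation waits on this (or eventually times out)."""
    body = json.dumps(build_response(event, status, data)).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body,
                                 method="PUT", headers={"Content-Type": ""})
    urllib.request.urlopen(req)

# Abridged, made-up example of the event CloudFormation sends on Create
event = {"RequestType": "Create",
         "ResponseURL": "https://example.invalid/presigned",
         "StackId": "arn:aws:cloudformation:eu-west-1:111122223333:stack/demo/abc",
         "RequestId": "req-1", "LogicalResourceId": "MyCustomThing"}
print(build_response(event, "SUCCESS")["Status"])  # SUCCESS
```

Forgetting to send this signal (or failing to on an error path) is the classic custom resource pitfall - the stack hangs until the operation times out.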

Parting Thoughts

A post like this is no substitute for quality training material. Whether you learn via the AWS provided documentation, in-person training or via an online training provider; there is no substitute for studying to learn and not just to pass the exam.

The SysOps associate exam is the hardest associate exam available for AWS, and arguably it’s one of the harder associate style certifications available today. Preparation is key, don’t expect to be able to walk into the exam and just pass; not unless you have previous experience.

Now full disclosure - I work for LinuxAcademy.com as a Training Architect; I create courses, training materials and live practical based labs to help people learn. It would mean a great deal if you could stop by and see if any of the subscription packages would be of value.

I’m biased because I work here, but I’ve had a LA subscription since they became available; I’ve used them for my own training and recommended them to friends and family.


What most people don’t realise is that it’s not just training. With other providers you buy or subscribe to courses and then you’re responsible for AWS costs while you study. With LinuxAcademy it’s different: we provide access to a huge number of live practical environments and your very own cloud servers to use.


If that wasn’t enough… something that I’m really proud of is the commitment to keep content up to date and valid. Every month we strive to do better than before. At the time of writing this, we’ve got a massive month of 150 updates coming.


So stop by, have a look around, and give me a shout on Twitter or email if you have any questions.

/Adrian