Back in March 2018 I sat the new Associate Solutions Architect exam; I even blogged about it here. I already had all the associate certs, but I wanted to experience the new architecture focus of the exam. I was so impressed with how AWS had improved the new test over the old one that I wanted to experience the same for the refresh of the Developer - Associate exam.

In this article, I’ll detail my thoughts together with hints and tips on how to achieve a solid pass mark.

Share this post!

Before I start - I have a favour to ask.

I want to help as many people in the community as possible with this article. You can help me do this by:

  • Sharing the URL to the post on social media such as Twitter or LinkedIn. You can post the URL yourself, or retweet or share the post where you saw this article. If I'm not tagged, then please add me :)
  • Additionally, you can post this article on any online communities you use, such as Reddit, Hacker News, Quora, Experts Exchange or Medium.

But that's enough of that - let's dive into it!

High-Level Thoughts - What's Changed

In my previous post I wrote about how, in my opinion, AWS had changed the scope of the SA Associate exam away from being a general AWS certification and toward being entirely about architecture. I'm happy to report that the Developer Associate exam has received the same treatment. In the old version, the SA Associate and Developer exams had a lot of overlap; Developer was very similar to SA Associate, just with a larger emphasis placed on SQS and DynamoDB. The new developer exam still doesn't test your ability to write code - but it is more focussed on development practices and how AWS tools and services will be used by a developer.

Don't let it scare you - the new exam is much easier to study for because of the improved focus. It's great if you're a developer, because it will play much more to your strengths. For any non-developers looking to pass the exam, don't worry: the narrower scope means that teaching the skills needed to pass is much easier.

OK, so enough of my general thoughts - let's get down to the specifics:

Programming Languages

The exam doesn't appear to have any requirements for specific programming languages, but the ability to understand code flow is an advantage, so research pseudo-code for examples of the level of understanding you'll need.

DynamoDB (DDB)

For any of the AWS certifications it really pays to know the basics of DynamoDB. This is especially important in this exam which focusses on how to USE the product in the real-world.

  • Know the basics - what are TABLES, what are ITEMS, what is an ATTRIBUTE
  • Understand the key structure of a table: PARTITION (PK) and SORT (SK) keys.
  • Know which DDB operations can occur on one PK, which on a range of PKs, and which can filter based on one or more SKs or a range of SKs.
  • How does security work in DDB? Can access be restricted only at a table level, or can you restrict at a more fine-grained level? If you can be more granular with permissions, what can you restrict on?
  • Get used to the outputs of the various DDB commands. get-item, query, scan - what does it look like if they return data, what if they don’t?
  • Know that using the SCAN operation is almost NEVER preferred, but understand when it's needed. What are its problems? (hint: performance). How can you minimise issues with the SCAN operation?
  • Can everything related to DDB be done with the CLI/API and GUI? Is there anything which can't?
  • Understand performance of a DDB table - WCU, RCU and PARTITIONS. How do partitions impact performance? What causes additional partitions to be created?
  • Rapid changes of WCU and RCU are almost NEVER a good idea. Understand WHY!
  • Related to the above point - understand why DDB tends not to be ideal for sudden bursts of reads/writes, and know how to mitigate against bursts. How can DAX help? What about SQS? What about ElastiCache?
  • Be comfortable with the READ and WRITE size in DDB. Know how eventual and immediate consistency for reads impacts the RCU needed. Be able to calculate how many RCU and WCU are needed for a given number of reads or writes of a certain size and how eventual/immediate consistency changes this calculation. I’m going to have a blog post dedicated to this soon.
  • Be very comfortable with Global (GSI) and Local (LSI) secondary indexes for DDB. What's the difference? Which one allows an alternative SK and PK, and which one doesn't? Are there limits to how many you can have? Are there limits on WHEN they can be created? How is performance allocated and controlled for both?
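The RCU/WCU calculation mentioned above can be sketched in a few lines. This is a minimal Python sketch of the standard DynamoDB capacity rules (one RCU covers a strongly consistent read/sec of up to 4 KB, one WCU a write/sec of up to 1 KB, and eventual consistency halves the read cost); the function names are my own:

```python
import math

def read_capacity_units(reads_per_sec, item_size_kb, eventually_consistent=False):
    """RCU needed: each RCU covers one strongly consistent read/sec of an
    item up to 4 KB; eventually consistent reads cost half."""
    units_per_read = math.ceil(item_size_kb / 4)   # round item size up to 4 KB blocks
    rcu = reads_per_sec * units_per_read
    if eventually_consistent:
        rcu = math.ceil(rcu / 2)
    return rcu

def write_capacity_units(writes_per_sec, item_size_kb):
    """WCU needed: each WCU covers one write/sec of an item up to 1 KB."""
    return writes_per_sec * math.ceil(item_size_kb)

# 10 strongly consistent reads/sec of 6 KB items -> 2 units each -> 20 RCU
# The same workload eventually consistent -> 10 RCU
# 10 writes/sec of 2.5 KB items -> 3 units each -> 30 WCU
```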

EC2 & Instance Roles

  • Understand how EC2 instance roles work. When are they applied? Can they be changed? Does the instance need to be stopped, restarted or terminated?
  • What's the AWS recommended method to obtain the EC2 internal, external and elastic IPs from inside the instance, and how exactly is it accomplished?
  • Inside an EC2 instance, in what order are credentials processed - roles, profiles etc.?
  • If permissions are changed on a role, do they change immediately? If not, how long does it take? If it's not immediate, can it be forced, and if so, how?

Lambda & API Gateway

It's worth understanding serverless and event-driven architecture for this exam. Likewise the basics of Lambda: how it works, limits on execution time, how to scale, how well it scales, how it integrates with other AWS services, how it logs, how it's secured and how it uses roles.

  • Be comfortable with the different ways a function can be uploaded. Inline, is S3 an option? is uploading a ZIP an option? what about using CloudFormation?
  • Understand the structure of all of the ways a function’s code is provided.
  • Is there a maximum runtime for a Lambda function? If so, what is it? If there is and you need something to run for longer, how can you achieve this?
  • What are step functions? What do they do? How are they different from normal functions?
  • Is there a way to improve the speed of DB connections using Lambda by persisting them, i.e. keeping them open between executions of many functions? If so, how is this accomplished?
  • Learn the architecture of API gateway, how it works, what its components are and do.
  • Is API gateway resilient? at what level? AZ, Region, Globally?
  • Could you talk step by step about how to deploy something to API Gateway?
  • Understand how API Gateway and Lambda interact.
  • How can you roll back within API Gateway and/or Lambda?
  • What are TAGS? What are they used for? What functionality do they offer?
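The connection-persistence point above comes down to where you create the connection. This is a simulated sketch (no real database client - FakeDbConnection is a stand-in of my own): objects created at module level, outside the handler, survive across warm invocations of the same execution environment, so the connection is reused rather than rebuilt on every call.

```python
import uuid

class FakeDbConnection:
    """Hypothetical stand-in for a real database client."""
    def __init__(self):
        self.id = str(uuid.uuid4())  # unique per connection object

# Created once per execution environment (container), NOT once per invocation.
connection = FakeDbConnection()

def handler(event, context):
    # Warm invocations reuse the module-level connection created above.
    return {"connection_id": connection.id}
```

Calling the handler twice returns the same connection ID, which is the behaviour you rely on when keeping DB connections open between executions.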

KMS & Encryption

  • Learn about envelope encryption & how it’s used in AWS.
  • Understand what each different key type in AWS is, and exactly the function of each.
  • Are there any limits on using KMS encryption which could impact high-volume usage? Any maximum operations per second on KMS encryption/decryption/key management usage?
  • How does 're-encryption' work for various AWS services when using KMS?
  • Understand how S3 encryption works and what keys are used.
  • Understand how RDS encryption works - what components are encrypted and when.
  • What AWS services can help with encryption without adding additional COMPUTE requirements.
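As a rough illustration of the envelope encryption pattern mentioned above: a per-object data key encrypts the data, and the master key only ever encrypts the small data key, so bulk data never has to travel to the key service. The XOR "cipher" below is a TOY stand-in for a real algorithm such as AES-GCM, and all names are my own - this is a sketch of the pattern, not a real implementation:

```python
import os

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR keystream - a TOY stand-in for a real cipher like AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, master_key: bytes):
    data_key = os.urandom(32)                       # fresh data key per object
    ciphertext = toy_cipher(plaintext, data_key)    # data encrypted with the data key
    wrapped_key = toy_cipher(data_key, master_key)  # data key encrypted with the master key
    return ciphertext, wrapped_key                  # store both together; master key stays in KMS

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes, master_key: bytes) -> bytes:
    data_key = toy_cipher(wrapped_key, master_key)  # unwrap the data key first
    return toy_cipher(ciphertext, data_key)         # then decrypt the data
```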

SQS, SNS & Kinesis

I always tend to look at SQS and Kinesis together when it comes to exams. They are actually radically different in terms of features, but there is some overlap.

  • Be comfortable with how a Queue works, specifically SQS
  • Know the difference between FIFO and non-FIFO queues.
  • Know how SQS can scale, and how FIFO impacts that.
  • Understand the concept of an item on the queue.
  • What's the difference between short polling and long polling? Does long polling impact costs in any way?
  • What’s the function of a dead letter queue.
  • What’s the function of the SQS visibility timeout?
  • How can you avoid double-processing of items in a queue.
  • Understand how Kinesis is architected - what are shards?
  • Is Kinesis FIFO? Does single-shard or multi-shard make a difference?
  • When would you use Kinesis vs SQS - single consumers, multiple consumers?
  • Understand the differences between SQS and SNS. Which one would you use to notify individuals or teams of people? Which one would allow integration of a logging/management application into AWS?
  • Be comfortable with how to configure SES and how to use SES from a developer perspective. Understand the limits of SES and where you WOULD and WOULDN'T use it.
  • Research Data Pipeline and understand its role in the AWS ecosystem for a developer. Why would you use it, how, and what can it integrate with?
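The visibility timeout and double-processing points above can be modelled with a tiny in-memory queue. This is NOT the SQS API - just my own sketch of the semantics: a received message becomes invisible to other consumers for the timeout period, and must be deleted after successful processing or it becomes visible again and is re-delivered.

```python
import time

class TinyQueue:
    """In-memory sketch of SQS-style visibility timeout semantics."""
    def __init__(self, visibility_timeout=2.0):
        self.visibility_timeout = visibility_timeout
        self.messages = {}      # id -> [body, invisible_until]
        self._next_id = 0

    def send(self, body):
        self.messages[self._next_id] = [body, 0.0]  # immediately visible
        self._next_id += 1

    def receive(self):
        now = time.monotonic()
        for msg_id, entry in self.messages.items():
            if entry[1] <= now:                          # visible right now
                entry[1] = now + self.visibility_timeout # hide from other consumers
                return msg_id, entry[0]
        return None  # nothing visible

    def delete(self, msg_id):
        # Consumers delete after processing; otherwise the message
        # reappears when the visibility timeout expires.
        self.messages.pop(msg_id, None)
```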

IAM & Security

  • Understand the limits of IAM, how many users, how many groups.
  • When to use USERS, GROUPS and ROLES.
  • When is federation a requirement?
  • What can IAM federate with?
  • Understand the IAM policy document structure, be able to read policies.
  • ARNs, wildcards, fine-grained policies.
  • Are you comfortable with the best practice ways of storing keys, passwords and other sensitive information and making it available to other AWS services such as EC2?
  • What are the best practices for authenticating large numbers of users - say development teams for AWS - IAM accounts, using roles, using federation to another ID store.
  • Be comfortable with Roles - TRUST and ACCESS policies.
  • Know how sts:AssumeRole functions from a developer perspective.
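To get comfortable reading policy documents, it helps to build one. Below is a hypothetical policy showing the standard structure (Version, Statement, Effect, Action, Resource, Condition), including a fine-grained DynamoDB condition using `dynamodb:LeadingKeys` to restrict a federated user to items under their own partition key. The account ID and table name are made up for illustration:

```python
import json

# Hypothetical fine-grained policy: the user may only read items whose
# partition key matches their own federated identity.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        }
    }]
}

document = json.dumps(policy, indent=2)  # what you'd paste into the console/CLI
```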

S3

  • A good understanding of the architecture of S3 is essential
  • Buckets, objects, metadata.
  • Can you add custom metadata to an object? Does it need a specific format? Is there a specific name which needs to be used?
  • Is an S3 bucket flat or does it have folders? (hint: the folders aren't real)
  • Does S3 have partitions? How does S3 determine which partition to store objects in?
  • Does S3 have performance limits? How can you influence these, and how is your object naming related to this?
  • Does S3 allow encryption? How does this work? What options are there for encryption? Does S3 use envelope encryption? Which encryption should be selected if a customer wants to control keys?
  • How does key rotation work in S3? What's its function?
  • How can you prevent an S3 bucket storing objects which aren't encrypted? Is there an AWS best practice way of handling this?
  • S3 performance management is a big deal, it would be really good to be 100% comfortable with object naming and how it relates to performance.
  • Understand S3 billing - it's based on space, transfer and operations. Understand how to optimise costs for using S3 by choosing a storage class, and know which operations to use. Learn the limits at a high level for GET and PUT.
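On the object-naming and performance points above: at the time of writing, the common advice for S3 at high request rates is to avoid sequential key names (such as date-based prefixes) by introducing a short hashed prefix, which spreads objects across partitions. A minimal sketch of the idea (the function name is my own):

```python
import hashlib

def prefixed_key(key: str) -> str:
    """Prepend a short hash of the key so objects spread across S3
    partitions instead of clustering under one sequential prefix."""
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return f"{prefix}/{key}"

# A date-based key like "2018/03/01/log.txt" would otherwise land on the
# same partition as every other key for that day; the hash prefix spreads them.
```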

CodeCommit/CodePipeline/CodeDeploy

Elastic Beanstalk

  • Understand the architecture of elastic beanstalk.
  • It's a PAAS product - when WOULD you use it, and when WOULDN'T you? What types of apps does it suit, and what types doesn't it work well with?
  • How is access to a DB provided via Elastic Beanstalk?
  • How does deployment work in EB, different rollout methods, their pros and cons.
  • How does scaling work? what scaling options exist.
  • How can EB be customised, what are the various files which can be used.
  • How is software deployed into an EB environment?
  • Where is that software stored pre-deployment, during deployment and afterwards?
  • What about application versions?
  • What about environment swaps? How do they work?
  • Really ensure you understand deployment methods - e.g. ensuring a certain capacity is maintained.
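The customisation files mentioned above usually mean `.ebextensions` - YAML config files placed in the application source bundle. A hypothetical example (the option values and hostname are illustrative, not recommendations):

```yaml
# .ebextensions/options.config - hypothetical example
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 6
  aws:elasticbeanstalk:application:environment:
    DB_HOST: mydb.example.internal
```

Files like this are applied when the environment is created or updated, which is how EB environments are customised without clicking through the console.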

CloudFormation

  • Be comfortable knowing how to use CloudFormation to deploy things into AWS.
  • Specifically, how to deploy serverless applications into AWS using CloudFormation.
  • Understand templates & stacks, and how resources are updated and replaced based on template changes.
  • From a development perspective, understand change sets.
  • Be comfortable interacting with CloudFormation from the API and CLI.
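Deploying serverless applications via CloudFormation generally means the Serverless Application Model (SAM) transform. A minimal hypothetical template (handler path, runtime and code location are illustrative):

```yaml
# Hypothetical SAM template deploying a single Lambda function
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.6
      CodeUri: ./src
```

The `Transform` line is what turns the short `AWS::Serverless::Function` resource into the full set of Lambda and IAM resources at deploy time.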

Databases

  • Be comfortable at a high level with the architectural basics for RDS/Aurora and DDB
  • Understand how read replicas can be used not only to add resilience, but also to allow read operations to scale beyond a single server.
  • Understand the best practice ways of providing the read replica server details to an application. Hostnames? IPs? How do you obtain these at scale, across many application servers?

Monitoring & Tracing

  • X-Ray and the ability to trace have a major place in this exam. Study and understand how X-Ray works and how it can be beneficial to a developer.
  • Know what X-Ray works with, and how it can be integrated.
  • Understand its limits and main capabilities.
  • Be comfortable with CloudTrail - how it works globally and regionally. Understand how to use it to diagnose errors you may see in your applications. What if an application experiences a 'bucket not found' error - how could you use CloudTrail to diagnose this?
  • Know how services log to CloudWatch - for instance, if Lambda isn't logging to CloudWatch even though you know it's running, be able to diagnose why!

ElastiCache

  • Understand at a foundational level what ‘in memory caching’ is. What it does, how it works, the advantages and disadvantages.
  • Research ElastiCache as a product - including its two variants: a) the Redis engine and b) the Memcached engine.
  • Know the sizes AWS offers, and the capabilities and limitations of each engine. Sharding, clustering, backups, replication, speed, customisability.
  • Can ElastiCache be used with DDB? Can it be used for user sessions?
  • Can it be used to cache S3? What about SQL databases like RDS and Aurora?

General (topics that don’t fit anywhere else)

  • Be comfortable with how to implement (from a development perspective) good global performance and resiliency. When to use ELBs, R53, multiple buckets or multiple DynamoDB tables with replication? When is a CDN preferable over having content locally?
  • From a developer perspective, understand how to access Load Balancer logs to determine the real IP address of clients connecting to your services.
  • Research how to fault-find with Load Balancers. For the various different HTTP error codes, how would you determine the root cause and rectify it?
  • Research the basics of containers, docker and how to utilise ECS.
  • Understand the best ways, and more importantly the best practice ways, to give specific containers access to AWS services - can roles be used? Are these assigned to a container host or to containers themselves?
  • What is a launch configuration, how is it used?
  • What is an autoscaling group, how is it used, how does it relate to a launch configuration. Can you scale using an autoscaling group based on number of users? if so, how would you accomplish that?
  • Do you understand when and where storage services in AWS are appropriate? can S3 be mounted to an EC2 instance? if not, which service can be?
  • If you needed to make application data available to many EC2 instances… what storage service would be the most appropriate?
  • Be able to diagnose the reasons for slow application performance based on given data. Understand the various steps, login, profile access, API access, data access.
  • Research how Cognito works. What does it do? How does it help provide sync services to mobile apps/devices?

Parting Thoughts

Overall, I'm a super-fan of the new developer focus. For me, it was a fun experience and I enjoyed the whole process much more than the older developer exam. I'll be doing a series of articles on how to effectively study for, and pass, all of the AWS exams - the three associate, two professional and three specialty exams. That being said, these posts won't be enough on their own; you should look at some formal training if you are serious about passing the exam.

I've made courses on some of the above subject matter myself, but I strongly recommend that if you are studying for one or more of the AWS exams you give Linux Academy a look. Their annual subscriptions are great value - you can snag one for $37.42 p/m if you sign up for a year.


Full disclosure - I’m now working with Linux Academy, so expect to hear my voice on some training coming soon :). That being said, I’ve had my own subscription, paid for by my own money for as long as they have been available.

If you want any more exam-related information, then keep an eye on my blog, or better still subscribe to the RSS feed or follow @adriancantrill on Twitter for new article announcements.