AWS Certified Solutions Architect - Associate (SAA-C03) Ultimate Cheat Sheet
Your Quick Reference Study Guide
This cheat sheet covers the core concepts, terms, and definitions you need to know for the AWS Certified Solutions Architect - Associate (SAA-C03). We've distilled the most important domains, topics, and critical details to help your exam preparation.
💡 Note: While this study guide highlights essential concepts, it's designed to complement—not replace—comprehensive learning materials. Use it for quick reviews, last-minute prep, or to identify areas that need deeper study before your exam.
About This Cheat Sheet: This study guide covers core concepts for AWS Certified Solutions Architect - Associate (SAA-C03). It highlights key terms, definitions, common mistakes, and frequently confused topics to support your exam preparation.
Use this as a quick reference alongside comprehensive study materials.
Design Secure Architectures (30%)
AWS IAM — Identity, Roles & Trust
Global authN/authZ: users, roles, policies and trust relationships for cross-account least-privilege.
Key Insight
AssumeRole gives an identity; effective access requires the role's permissions plus any resource policy — trust policy alone grants no actions.
Common Mistakes
- Thinking sts:AssumeRole alone removes the need for resource-based policies.
- Believing a trust policy itself grants permissions (it only allows assumption).
- Assuming centralizing identities eliminates per-account resource permission management.
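To make the trust-vs-permissions split concrete, here is a minimal pair of policies (account ID and bucket name are hypothetical; the wrapper object exists only to show both documents side by side). The trust policy only controls who may assume the role; a separate permissions policy grants the actual actions:

```json
{
  "TrustPolicy": {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }]
  },
  "PermissionsPolicy": {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }]
  }
}
```

With only the trust policy attached, a caller from account 111122223333 can assume the role but perform no actions; the permissions policy (and, for some services, a matching resource policy) supplies the effective access.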
IAM Policies — Identity vs Resource
JSON policies (identity-based, resource-based, and account-level) are evaluated together; an explicit Deny always wins. Choose the policy type by access pattern — identity policies for same-account grants, resource policies for cross-account sharing.
Key Insight
All applicable policies combine: an explicit Deny always blocks; access requires at least one Allow and no Deny; KMS key policies can be authoritative.
Common Mistakes
- Thinking an Allow anywhere overrides an explicit Deny.
- Expecting an identity policy alone to grant cross-account access without a resource policy or cross-account role.
- Believing all AWS services support resource-based policies the same way.
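The evaluation rule in the Key Insight can be sketched as a tiny function. This is a deliberate simplification — it ignores conditions, SCPs, permission boundaries, and session policies — but it captures the ordering that exam questions test:

```python
def evaluate(statements):
    """Simplified IAM evaluation: explicit Deny wins; otherwise
    access requires at least one Allow; the default is implicit deny."""
    effects = [s["Effect"] for s in statements]
    if "Deny" in effects:
        return "Deny"          # explicit Deny always blocks
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"      # no matching statement at all

# An Allow anywhere cannot override an explicit Deny:
print(evaluate([{"Effect": "Allow"}, {"Effect": "Deny"}]))  # Deny
print(evaluate([{"Effect": "Allow"}]))                      # Allow
print(evaluate([]))                                         # ImplicitDeny
```

Note that "no statement matched" (implicit deny) and an explicit Deny behave the same for the caller, but only the explicit Deny can block an Allow granted elsewhere.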
ALB — L7 Routing, TLS & Session Controls
Layer‑7 load balancer: host/path routing, TLS termination or TLS‑bridge (re‑encrypt to targets), target‑group health checks, and sticky‑session cookies.
Key Insight
TLS termination at ALB can be followed by backend re‑encryption; health checks, stickiness, and authentication are configured per listener/target group and must be enabled explicitly.
Common Mistakes
- Assuming TLS termination at ALB removes the need to encrypt backend traffic
- Expecting ALB authentication to be automatically applied to all listeners
- Thinking TLS termination only works on port 443 or eliminates certificate management
VPC Networking — Subnets, Routes & Controls
Isolated virtual network: subnets + route tables control reachability; IGW/NAT provide internet egress; SGs, NACLs, and VPC endpoints filter and scope traffic.
Key Insight
Internet reachability is route‑table driven — a private subnet alone doesn't block egress; security groups are stateful, NACLs are stateless; VPC endpoints keep traffic to AWS services off the public internet.
Common Mistakes
- Treating security groups as stateless like NACLs
- Believing a default VPC is fully private and blocks internet access
- Assuming placing an instance in a private subnet alone prevents internet egress
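As an illustration of route‑table‑driven reachability, a hypothetical private subnet's route table with NAT egress might look like this (the NAT gateway ID is made up):

```text
Destination      Target
10.0.0.0/16      local                   # intra-VPC traffic
0.0.0.0/0        nat-0a1b2c3d4e5f6a7b8   # outbound internet via NAT gateway
```

Remove the 0.0.0.0/0 route and the subnet loses internet egress entirely, regardless of how permissive its security groups are; point it at an IGW instead (plus a public IP) and the subnet is effectively public.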
AWS KMS — CMKs & Envelope Encryption
Managed CMKs that never expose key material; CMKs wrap data keys used to encrypt bulk data (envelope encryption).
Key Insight
KMS never returns CMK key material — use CMKs to encrypt (wrap) data keys; access enforced by key policy + IAM/grants.
Common Mistakes
- Trying to extract CMK key material from KMS — impossible
- Using a CMK to encrypt large objects directly; use envelope encryption with data keys
- Assuming KMS 'on' auto-encrypts service data — you must configure each service (SSE-KMS, EBS, etc.)
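The envelope pattern itself can be sketched without AWS at all. The stand-in "KMS" below never reveals its master key; callers get a plaintext data key for local bulk encryption plus a wrapped copy to store alongside the ciphertext. The XOR-keystream "cipher" is a toy placeholder for illustration only — not real cryptography, and not how KMS wraps keys internally:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class ToyKMS:
    """Holds the master key; callers only ever see wrapped data keys."""
    def __init__(self):
        self._master = secrets.token_bytes(32)   # never leaves this object

    def generate_data_key(self):
        plaintext_key = secrets.token_bytes(32)
        wrapped = _keystream_xor(self._master, plaintext_key)
        return plaintext_key, wrapped            # analogous to GenerateDataKey

    def unwrap(self, wrapped: bytes) -> bytes:
        return _keystream_xor(self._master, wrapped)   # analogous to Decrypt

kms = ToyKMS()
data_key, wrapped_key = kms.generate_data_key()
ciphertext = _keystream_xor(data_key, b"bulk data encrypted locally")
# Store ciphertext + wrapped_key together; discard the plaintext data key.
# Later, ask "KMS" to unwrap the data key and decrypt locally:
recovered = _keystream_xor(kms.unwrap(wrapped_key), ciphertext)
print(recovered)  # b'bulk data encrypted locally'
```

This mirrors why large objects never pass through KMS: only the small data key crosses the boundary, and the bulk data is encrypted and decrypted locally.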
Amazon RDS — Managed Backups & PITR
Managed relational DBs with automated continuous backups stored in S3 for point-in-time recovery; manual snapshots are retained until you delete them.
Key Insight
Automated continuous backups are stored in S3 enabling PITR, but they are not automatically cross‑region; manual snapshots are separate and managed by you.
Common Mistakes
- Believing automated backups give instant PITR to any second without proper retention/config
- Thinking continuous backups stay on the DB host or EBS rather than S3-managed storage
- Assuming backups are unencrypted even if the source DB is encrypted
Design Resilient Architectures (26%)
HA & Fault‑Tolerant Design — RTO/RPO Rule
Map topology, replication, and failover to RTO/RPO: multi‑AZ for AZ loss, multi‑Region for region loss.
Key Insight
Sync replication = low RPO but higher write latency; async = lower latency with possible data loss — choose by RPO/RTO.
Common Mistakes
- Treating Multi‑AZ as protection against Region failures.
- Assuming read replicas provide synchronous/strong consistency.
- Believing more replicas always improves write performance or removes failover design needs.
Well‑Architected Pillars — Trade‑offs First
Five pillars to judge designs: balance security, reliability, performance, cost, and operations per workload.
Key Insight
Every design choice shifts multiple pillars — explicitly state the trade‑offs (e.g., cheaper => more operational effort or lower availability).
Common Mistakes
- Treating pillars as independent; ignoring cross‑pillar impacts.
- Picking the newest or managed service by default without validating workload fit.
- Optimizing only for cost and skipping availability or security trade‑offs.
ELB — Multi‑AZ High Availability (ALB / NLB / CLB)
Distributes traffic across AZs; choose ALB/NLB/CLB by layer and use health checks + Auto Scaling/WAF for HA.
Key Insight
ELB balances traffic but does NOT replicate application state — combine with Auto Scaling, shared storage, and correct health checks for true HA.
Common Mistakes
- Believing ELB alone ensures app-level HA if only one AZ or no Auto Scaling
- Assuming ELB replicates application state across AZs — it doesn't
- Thinking NLB can't preserve client source IP (NLB preserves; ALB uses X-Forwarded-For)
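Because an ALB proxies connections, backends see the load balancer's IP and must recover the original client from the X-Forwarded-For header. A minimal parser (the left-most entry is the client, which is only trustworthy when every appending hop is a trusted proxy):

```python
def client_ip(xff_header: str) -> str:
    """Return the left-most X-Forwarded-For entry (the original client).
    Trust this only when all proxies that append to the header are trusted,
    since clients can spoof an initial value."""
    return xff_header.split(",")[0].strip()

# ALB appends each hop, so the chain reads client -> proxies, left to right:
print(client_ip("203.0.113.7, 10.0.1.5"))  # 203.0.113.7
```

An NLB needs no such header for TCP listeners because it can preserve the client source IP end to end.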
RDS Multi‑AZ HA vs Read Replica
Synchronous primary + standby across AZs for automated failover (HA); read replicas use asynchronous replication for read scaling.
Key Insight
Multi‑AZ = high availability and automated failover (no read scaling); read replicas = read scaling and manual promotion with possible lag/data loss.
Common Mistakes
- Thinking read replicas auto-failover like Multi‑AZ
- Assuming read replicas are always up-to-date — they can lag
- Believing Multi‑AZ removes the need for backups/snapshots/PITR
Design High-Performing Architectures (24%)
Amazon S3 — Object Storage & Glacier Archive
Durable object storage (buckets/objects) with classes, lifecycle, versioning; Glacier tiers trade cost for retrieval lag
Key Insight
Objects are immutable units — choose storage class/lifecycle for access pattern; Glacier adds minutes-to-hours retrieval latency
Common Mistakes
- Treating S3 like a POSIX filesystem — no in-place edits or byte-range locking
- Expecting immediate access from Glacier/Deep Archive — retrieval can take minutes to hours
- Assuming S3 is eventually consistent for all ops — modern S3 provides strong consistency for PUT/GET/LIST
Amazon EBS — AZ-bound Block Storage (SSD/HDD, IOPS)
Block-level, AZ-attached persistent volumes for EC2 with volume types for throughput/IOPS and snapshot backups
Key Insight
EBS is tied to a single AZ and attached to instances; snapshots are incremental (stored in S3) and performance depends on instance + volume config
Common Mistakes
- Assuming EBS is globally accessible like S3 — volumes are bound to one AZ
- Thinking snapshots are continuous backups — snapshots are point-in-time and must be created
- Believing provisioned IOPS always delivers full performance regardless of instance or config
Auto Scaling (ASG — EC2 Auto Scaling / AWS Auto Scaling)
Automatically adjusts EC2 capacity with Auto Scaling Groups, launch templates, and scaling policies.
Key Insight
Scaling has latency—account for launch/boot/warm‑up, use target‑tracking, cooldowns, and multi‑AZ health checks.
Common Mistakes
- Treating scaling as instantaneous — ignore launch, boot, and app warm‑up time
- Assuming an ASG alone guarantees cross‑AZ HA without LB and health checks
- Believing ASGs always reduce cost; misconfigured policies can overscale
EC2 Instances (On‑Demand, Reserved, Spot, Savings Plans)
Virtual servers with families, sizes, storage options, and purchase models that trade cost vs reliability.
Key Insight
Match instance type to CPU/memory/IO needs, choose EBS vs instance‑store correctly, and pick the right buying model for cost vs availability.
Common Mistakes
- Assuming that stopping an instance stops EBS, snapshot, or Elastic IP charges — it doesn't
- Using Spot as the default 'cheapest' for production without fallback/interrupt handling
- Assuming all instances provide persistent storage (instance store is ephemeral)
DynamoDB — Partitioned Serverless NoSQL
Serverless NoSQL with partition keys, GSIs/LSIs, capacity modes and hot‑partition risks.
Key Insight
Partition key distribution controls usable throughput; on‑demand/adaptive help but don't cure a single hot key.
Common Mistakes
- On‑demand doesn't remove need for a well‑distributed partition key; hot keys still throttle.
- Adaptive Capacity helps but won't absorb indefinite uneven traffic or guarantee unlimited capacity.
- Adding a sort (range) key doesn't spread writes across physical partitions — the partition (hash) key does.
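A quick way to see why the partition (hash) key, not the sort key, spreads load: hash each item's partition key and bucket it. The hashing below is illustrative (DynamoDB's internal scheme is not public), but the distribution effect is the same:

```python
import hashlib
from collections import Counter

def partition_for(key: str, n_partitions: int = 4) -> int:
    """Map a partition-key value to a physical partition (illustrative)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_partitions

# Well-distributed keys spread writes across all partitions...
spread = Counter(partition_for(f"user-{i}") for i in range(1000))

# ...but one hot key lands every write on a single partition,
# no matter how many distinct sort-key values it carries:
hot = Counter(partition_for("popular-item") for _ in range(1000))

print(dict(spread))   # roughly even counts across the 4 buckets
print(dict(hot))      # all 1000 writes in one bucket
```

This is why on-demand mode and adaptive capacity can't rescue a single hot key: the per-partition throughput ceiling still applies to the one partition that key hashes to.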
High‑Performing DB Design
Pick engine, schema (normalize vs denormalize), consistency, replication and caching to meet latency and scale.
Key Insight
Match the data model to access patterns: transactional → normalized + strong consistency; read‑heavy → denormalize, cache, and replicas.
Common Mistakes
- Assuming NoSQL means no schema — implicit schemas and constraints still affect correctness and scale.
- Believing relational DBs can't scale horizontally — sharding, clustering and replicas enable horizontal scale.
- Treating denormalization as 'wrong' — it's a deliberate trade‑off for read latency and throughput.
Load Balancers: ALB vs NLB vs GWLB
ALB = L7 content routing; NLB = L4 high-throughput/static IPs; GWLB = steer/encapsulate flows to inspection appliances.
Key Insight
ALB handles HTTP/S content routing; NLB handles raw TCP/UDP (and TLS offload); GWLB encapsulates traffic (GENEVE, UDP 6081) and forwards it to inspection appliances.
Common Mistakes
- Treat ALB and NLB as interchangeable — they differ L7 vs L4, TLS/session features.
- Assume GWLB performs L7 inspection or TLS termination — it only encapsulates/forwards via GENEVE.
- Attach GWLB to an ASG or skip GENEVE — register appliance instances in a GWLB target group; appliances must support GENEVE/UDP 6081.
High-Performing Network Design: Latency vs Throughput
Place services close to users, choose edge/load strategies, and measure correctly: ping/jitter for latency, iPerf3 for raw throughput.
Key Insight
Latency ≠ throughput — use the right test: ping/jitter for latency, iPerf3 for bandwidth (account for CPU, TCP window, instance limits); test across AZs and Regions.
Common Mistakes
- Assume edge services always add latency.
- Think increasing bandwidth alone will fix latency problems.
- Rely on a single ping or one iPerf3 run to represent real application performance.
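One reason "more bandwidth" doesn't fix latency problems: a single TCP flow is capped at roughly window / RTT, independent of link capacity. A quick calculation makes the gap between intra-AZ and cross-Region paths obvious:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Single-flow ceiling: throughput <= TCP window / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A 64 KiB window (an unscaled-window worst case) over an 80 ms
# cross-Region RTT versus a 1 ms intra-AZ RTT:
print(round(max_tcp_throughput_mbps(64 * 1024, 80), 1))   # ~6.6 Mbps
print(round(max_tcp_throughput_mbps(64 * 1024, 1), 1))    # ~524.3 Mbps
```

The same window that saturates a fast local link crawls across a long RTT — which is why latency must be attacked with placement and edge services, not just bigger pipes.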
High‑Performance Ingestion & ETL/ELT Tradeoffs
Select streaming vs batch and ETL vs ELT by latency, ordering, throughput, governance, and total cost.
Key Insight
Pick by SLA: streaming (Kinesis) for low‑latency ordered events; batch (S3/DataSync/Snowball) for bulk throughput. ETL = transform before load (pre‑load quality); ELT = transform after load in the target store.
Common Mistakes
- Assume ELT is always cheaper — ignores transform compute and later query/storage costs.
- Assume ETL always gives lower latency — batch windows and pre‑processing can add delay.
- Pick one ingestion service for every workload — streaming, bulk transfer, and offline migrations differ.
Athena • Lake Formation • QuickSight — Query, Govern, Visualize
Athena = serverless SQL over S3; Lake Formation = catalog + fine‑grained access/governance; QuickSight = BI/visualization on top of those datasets.
Key Insight
Athena queries data in place (no clusters, pay‑per‑query); Lake Formation manages, catalogs, and enforces access on S3/Glue; QuickSight is visualization, not storage or ETL.
Common Mistakes
- Think Athena stores or ingests data — it runs serverless queries over S3.
- Treat Lake Formation as storage — it's a management/catalog and access layer over S3/Glue.
- Expect QuickSight to replace ETL — it's for visualization and light dataset prep only.
Design Cost-Optimized Architectures (20%)
CloudFront — Edge Caching & Origin Egress
Global CDN that caches content at edge locations to cut latency and origin egress; tune cache/TLS/origins.
Key Insight
Cache misses, invalidations and preloads still hit the origin and incur egress—maximize hit ratio with TTLs and behaviors.
Common Mistakes
- Assuming CloudFront removes all origin egress — misses, invalidations, and preloads still bill.
- Relying on default cache settings — no Cache‑Control/TTL or behaviors => low hit rates and higher costs.
- Skipping origin TLS/config — CloudFront doesn't auto-provision HTTPS on your origin.
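A back-of-envelope calculation shows why hit ratio dominates origin egress. The figures below are purely illustrative (substitute your real request volume, object size, and current per-GB rates):

```python
def origin_egress_gb(requests: int, object_mb: float, hit_ratio: float) -> float:
    """Only cache misses return to the origin and incur origin egress."""
    misses = requests * (1 - hit_ratio)
    return misses * object_mb / 1024

# 10M requests for a 2 MB object at different cache hit ratios:
for hr in (0.50, 0.90, 0.99):
    gb = origin_egress_gb(10_000_000, 2, hr)
    print(f"hit ratio {hr:.0%}: {gb:,.0f} GB served from origin")
```

Moving from a 50% to a 99% hit ratio cuts origin egress by ~50x — which is why tuning Cache-Control headers, TTLs, and cache behaviors usually beats any pricing negotiation.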
EBS Types: gp2 vs gp3 vs io1/io2 vs st1/sc1
Block storage choices: gp3 decouples IOPS/throughput from size; gp2 uses size-based burst credits; pick by IOPS, latency, and throughput needs.
Key Insight
gp3 lets you provision IOPS/throughput independently of size; use io1/io2 for sustained high IOPS and avoid HDDs for random I/O.
Common Mistakes
- Thinking gp3 IOPS scale with volume size like gp2 — gp3 IOPS are provisioned separately.
- Assuming gp3 is always the cheapest/best — high IOPS workloads may need io1/io2.
- Using st1/sc1 for low-latency random I/O — HDD types are for large sequential throughput only.
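The size-vs-IOPS difference in numbers: gp2 baseline is 3 IOPS per GiB (floor 100, cap 16,000), while gp3 provides a 3,000-IOPS baseline regardless of size, with more provisionable separately:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2: 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS cap."""
    return min(max(3 * size_gib, 100), 16_000)

GP3_BASELINE_IOPS = 3_000  # independent of volume size; provision more if needed

for size in (30, 100, 1_000, 6_000):
    print(f"{size:>5} GiB  gp2={gp2_baseline_iops(size):>6}  gp3={GP3_BASELINE_IOPS}")
```

Note the exam-favorite consequence: to get 3,000 baseline IOPS on gp2 you must buy a 1,000 GiB volume, while a small gp3 volume gets it for free — but a workload needing sustained IOPS above gp3's provisionable range still belongs on io1/io2.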
Regions & AZs — Cost, Latency, Compliance Tradeoffs
Region/AZ choice affects pricing, latency, availability, data‑residency, feature set and transfer costs.
Key Insight
Lowest per‑unit Region price can be negated by cross‑Region transfer, latency, missing features, or compliance.
Common Mistakes
- Assuming identical service pricing and features across Regions
- Picking the cheapest Region without adding cross‑Region transfer, latency or compliance costs
- Expecting resources/data to be auto‑replicated across Regions
Compute Cost Models — On‑Demand, Reserved, Savings, Spot
Map workload pattern to purchase model: steady→RIs/Savings, unpredictable→On‑Demand, interruptible batch→Spot.
Key Insight
Match interruption tolerance and utilization: use Spot for checkpointable jobs, RIs/Savings for predictable steady state.
Common Mistakes
- Always choosing the lowest hourly price without checking performance or interruption risk
- Treating Savings Plans and Reserved Instances as identical coverage/flexibility
- Using Spot for critical stateful workloads that can't tolerate interruptions
Read Replicas — Read-Scale with Lag Risk
Read-only async copies to offload reads; can lag and break read-after-write guarantees.
Key Insight
Asynchronous replication = possible stale reads; use for eventual-consistency reads, not immediate read-after-write.
Common Mistakes
- Assuming replicas are immediately consistent; they can lag and serve stale data.
- Using replicas to scale writes — they only increase read throughput, not primary writes.
- Expecting automatic, lossless promotion — promotion can require manual steps and risk lost in-flight writes.
DynamoDB Capacity: Provisioned vs On-Demand
Provisioned uses RCUs/WCUs (autoscale, reserved discounts); on-demand is pay-per-request — pick by predictable load.
Key Insight
Autoscaling is reactive (cooldowns, min/max bounds) and won't instantly stop throttling; reserved discounts only apply to provisioned throughput.
Common Mistakes
- Believing Auto Scaling prevents all throttling — it reacts with delays and has limits.
- Assuming on-demand is always cheaper for bursts — sustained heavy traffic often favors provisioned + reserved.
- Thinking TTL instantly frees cost/space — deletions are asynchronous and savings appear later.
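A sketch of the provisioned-vs-on-demand break-even. All rates below are placeholders, not current DynamoDB pricing — plug in the real per-request and per-capacity-unit-hour rates for your region before drawing conclusions:

```python
def on_demand_cost(requests_millions: float, rate_per_million: float) -> float:
    """Pay-per-request: cost scales directly with traffic."""
    return requests_millions * rate_per_million

def provisioned_cost(capacity_units: int, rate_per_unit_hour: float,
                     hours: float = 730) -> float:
    """Pay for provisioned capacity whether or not it is used."""
    return capacity_units * rate_per_unit_hour * hours

OD_RATE = 1.25        # $ per million write requests (hypothetical)
PROV_RATE = 0.00065   # $ per WCU-hour (hypothetical)

# A fully utilized 100-WCU table does ~100 * 3600 * 730 writes per month:
writes_m = 100 * 3600 * 730 / 1e6
print(f"sustained load: on-demand ${on_demand_cost(writes_m, OD_RATE):.0f} "
      f"vs provisioned ${provisioned_cost(100, PROV_RATE):.2f}")
```

At high, steady utilization, provisioned capacity wins by a wide margin under these placeholder rates; spiky or idle-most-of-the-time tables flip the comparison toward on-demand, because provisioned capacity bills around the clock.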
VPC Endpoints — Gateway vs Interface (PrivateLink)
Private AWS connectivity: Gateway endpoints (S3/DynamoDB) are free; Interface endpoints (PrivateLink) use ENIs and bill hourly plus per‑GB of data processed.
Key Insight
Use gateway endpoints for heavy S3/DynamoDB traffic (no endpoint fees); use interface endpoints when service-level isolation or PrivateLink security justifies the hourly + per‑GB cost.
Common Mistakes
- Treating interface endpoints as free like gateway endpoints
- Assuming PrivateLink eliminates all data-transfer charges
- Believing endpoint creation overrides IAM/resource policies
Cost‑Optimized Network Design — CDN, Routing, Placement
Cut billable data transfer with placement, routing, CDNs and caching; estimate using AWS per‑GB and per‑hour pricing.
Key Insight
Cross‑AZ and some inter‑service paths incur per‑GB fees; Direct Connect wins only with sustained bandwidth to amortize port/hour costs.
Common Mistakes
- Assuming all intra‑region traffic is free
- Believing private IPs always avoid transfer charges
- Thinking adding a CDN removes origin egress costs