CompTIA SecurityX (CAS-005) Certification Exam Ultimate Cheat Sheet
Your Quick Reference Study Guide
This cheat sheet covers the core concepts, terms, and definitions you need to know for the CompTIA SecurityX (CAS-005) Certification Exam. We've distilled the most important domains, topics, and critical details to help your exam preparation.
💡 Note: While this study guide highlights essential concepts, it's designed to complement—not replace—comprehensive learning materials. Use it for quick reviews, last-minute prep, or to identify areas that need deeper study before your exam.
About This Cheat Sheet: This study guide covers core concepts for CompTIA SecurityX (CAS-005) Certification Exam. It highlights key terms, definitions, common mistakes, and frequently confused topics to support your exam preparation.
Use this as a quick reference alongside comprehensive study materials.
Governance, Risk, and Compliance
20% of exam weight
Security Program Lifecycle
Ongoing cycle to plan, implement, measure, and improve security across people, processes, and technology; aligns risk to the business.
Key Insight
Prioritize remediation by risk and business impact — policies need owners, KPIs, enforcement, and iterative improvement.
Common Mistakes
- Treating the security program as a one-time project
- Relying on written policies without owners, KPIs, or enforcement
- Fixing every finding immediately regardless of risk or business impact
GRC Platforms — Mapping & Monitoring
Centralized platforms that map controls to frameworks, automate evidence/workflows, track posture, and enable continuous monitoring.
Key Insight
GRC tools speed evidence and reporting but don't remove owner decisions — correct mappings, sensor coverage, and ops are required.
Common Mistakes
- Expecting GRC tools to fully automate compliance and remove owner responsibility
- Treating control mapping as proof of compliance without evidence or remediation
- Assuming continuous monitoring finds every issue regardless of sensor coverage/configuration
Control Selection & Justification
Pick, classify, and sequence controls using measurable effectiveness (coverage, MTTD/MTTR) to reduce residual risk.
Key Insight
Prioritize controls by measurable risk-reduction per cost; justify sequence using residual risk and detection/response metrics.
Common Mistakes
- Selecting controls solely to satisfy compliance
- Assuming one control eliminates the risk; ignoring residual risk
- Always favoring technical controls over administrative/physical ones
Quant vs Qual Risk Assessment
Apply numeric models (ALE, probabilities) or categorical scales (risk matrices); combine methods when data or context is limited.
Key Insight
Quantitative outputs are only as good as their inputs—use qualitative scoring to fill gaps and validate rankings.
Common Mistakes
- Believing quantitative scores are objectively correct
- Treating qualitative methods as unusable for prioritization
- Forcing a single method; never combine quantitative and qualitative
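The standard quantitative model can be sketched in a few lines: Single Loss Expectancy (SLE) is asset value times exposure factor, and Annualized Loss Expectancy (ALE) is SLE times the Annualized Rate of Occurrence (ARO). The dollar figures below are illustrative, not from the exam objectives.

```python
# Quantitative risk: SLE = asset value x exposure factor; ALE = SLE x ARO.

def sle(asset_value: float, exposure_factor: float) -> float:
    """Single Loss Expectancy: expected cost of one occurrence."""
    return asset_value * exposure_factor

def ale(single_loss: float, aro: float) -> float:
    """Annualized Loss Expectancy: expected yearly loss (SLE x ARO)."""
    return single_loss * aro

# Example: $500,000 asset, 20% damaged per incident, 0.5 incidents/year.
loss_per_event = sle(500_000, 0.20)
annual_loss = ale(loss_per_event, 0.5)
print(loss_per_event, annual_loss)
```

An ALE of $50,000/year gives a defensible ceiling for annual control spend on that risk; when inputs like ARO are guesses, fall back to qualitative scales.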
Regulatory Alignment & Evidence Flow
Translate laws into controls, collect technical evidence, and run impact‑based updates to keep compliance current.
Key Insight
Compliance proves obligation coverage; true security needs risk‑based controls, testable evidence, and continuous monitoring.
Common Mistakes
- Equating audit pass with being secure — audits show coverage, not risk elimination
- Applying identical controls across units without risk‑based tailoring
- Handing compliance only to legal/compliance teams instead of cross‑functional owners
Control Mapping: NIST↔ISO Crosswalk
Turn requirements into traceable, testable controls and maintain cross‑framework mappings to reduce audit effort.
Key Insight
Mappings are living traceability: one requirement may need multiple controls and mapping ≠ implementation or effectiveness.
Common Mistakes
- Treating mapping as a one‑time documentation task instead of ongoing maintenance
- Assuming a mapped control automatically proves it's implemented and effective
- Believing identical control labels across frameworks mean identical test scopes
Threat Modeling — SDLC Attack‑Path Analysis
Systematic identification of assets, adversaries, attack surfaces and likely attack paths to fix design flaws early in the SDLC.
Key Insight
Iterate across the SDLC: model attackers, assets and attack paths, then prioritize mitigations by impact and exploitability.
Common Mistakes
- Running threat modeling once at design and never re-evaluating after changes
- Focusing only on external attackers; ignoring insiders and supply‑chain threats
- Assuming threat modeling replaces SAST/DAST or code reviews
STRIDE — S/T/R/I/D/E Threat Categories
Mnemonic to map system components to Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege for threat enumeration.
Key Insight
STRIDE categorizes threat *types*, not risk — use it to find issues, then quantify likelihood/impact and pick controls.
Common Mistakes
- Treating STRIDE categories as mutually exclusive
- Using STRIDE output as final risk scores instead of feeding it to a scoring process
- Equating Information Disclosure only with network breaches (ignores logs, misconfigs, insiders)
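Each STRIDE category maps to the security property it violates, which is how findings feed into control selection. A minimal sketch of that mapping (a single finding can land in several categories at once):

```python
# STRIDE category -> security property it threatens.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def violated_properties(categories: list[str]) -> set[str]:
    """Return the security properties threatened by a set of STRIDE findings."""
    return {STRIDE[c] for c in categories}

# A forged token that also leaks data hits two categories at once.
props = violated_properties(["Spoofing", "Information Disclosure"])
print(sorted(props))
```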
Security Architecture
27% of exam weight
Zero Trust Architecture (ZTA)
Security model that denies implicit trust — continuous verification, least privilege, and contextual access.
Key Insight
Access decisions are based on identity, device posture and context — not network location.
Common Mistakes
- Mistaking ZTA for perimeter removal — it augments perimeters with continuous verification.
- Equating MFA alone with Zero Trust — MFA is necessary but insufficient for ZTA.
- Treating ZTA as only tech — policy, processes, and org changes are required.
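A Zero Trust policy decision can be sketched as a deny-by-default function over identity, device posture, and context signals; note that network location never appears as an input. Signal names and the risk threshold are illustrative.

```python
# Minimal sketch of a Zero Trust policy decision point: every signal must
# pass, and the default answer is deny. Threshold value is illustrative.

def allow_access(identity_verified: bool,
                 device_compliant: bool,
                 mfa_passed: bool,
                 risk_score: int) -> bool:
    """Grant access only when all signals pass; deny by default."""
    return identity_verified and device_compliant and mfa_passed and risk_score < 50

# A valid user on a non-compliant laptop is still denied.
print(allow_access(True, False, True, risk_score=10))   # False
print(allow_access(True, True, True, risk_score=10))    # True
```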
Data Loss Prevention (DLP)
Policies and controls to detect and prevent unauthorized exposure of sensitive data at rest, in use, or in transit.
Key Insight
Effective DLP requires accurate classification + tuned policies + enforcement across endpoint, network, and cloud.
Common Mistakes
- Assuming DLP stops all leaks out of the box — it needs classification, policy tuning, and governance.
- Believing encryption removes the need for DLP — encryption doesn't stop authorized plaintext exfiltration.
- Limiting DLP to network-only — modern DLP must include endpoints, cloud/API integration, and data discovery.
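The content-inspection half of DLP boils down to pattern matching against classified data types. A hedged sketch of such a rule engine (the regexes are deliberately simplified; production DLP uses validated detectors with checksums and context):

```python
import re

# Simplified DLP-style detectors; real engines validate matches (e.g. Luhn
# checks for card numbers) and weigh surrounding context.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the sensitive-data labels detected in a piece of text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

hits = classify("Customer SSN 123-45-6789 on file")
print(hits)  # {'ssn'}
```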
BC/DR — RPO, RTO & Replication Tradeoffs
Policies and designs to keep services running and restore data — choose RPO/RTO, replication, backups, or DRaaS by risk.
Key Insight
RPO = acceptable data loss; RTO = acceptable downtime. Pick technology and contracts that explicitly meet both and test them.
Common Mistakes
- Treating RPO and RTO as the same metric; they demand different solutions.
- Assuming replication alone replaces independent backups, retention, and formal DR plans.
- Assuming DRaaS or cross-region replication guarantees zero loss; ignore SLAs and shared dependencies at your peril.
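The RPO/RTO distinction reduces to two separate checks: worst-case data loss equals the backup (or replication) interval, while restore time must fit inside the downtime budget. Numbers below are illustrative.

```python
# RPO bounds data loss; RTO bounds downtime. They demand different checks
# and usually different technologies.

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the backup interval."""
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_time_hours: float, rto_hours: float) -> bool:
    """Restore must finish inside the allowed downtime window."""
    return restore_time_hours <= rto_hours

print(meets_rpo(4, rpo_hours=1))   # False: 4-hourly backups can't meet a 1h RPO
print(meets_rto(2, rto_hours=8))   # True: a 2h restore fits an 8h window
```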
Container Security — Image-to-Run Controls
Defend container workloads from build to runtime: signed images, continuous scanning, registries, RBAC, network/pod policies, and runtime protections.
Key Insight
Image scanning is necessary but insufficient — enforce supply-chain signing, continuous scans, runtime defenses, least-privilege RBAC, and network/pod policies.
Common Mistakes
- Relying on a single build-time image scan and skipping continuous scanning/runtime protections.
- Granting broad cluster-admin or service-account roles to apps instead of least-privilege RBAC.
- Assuming default Kubernetes provides network isolation; network/pod policies must be defined.
Risk Triage & Residual Risk
Identify, assess, and prioritize enterprise risks; calculate residual risk and align treatments to strategy.
Key Insight
Residual risk = inherent risk minus control effect; prioritize using likelihood, impact, and asset criticality.
Common Mistakes
- Treating residual risk as inherent risk—ignore controls' effect.
- Expecting residual risk to be zero after treatment; acceptance/tracking is normal.
- Prioritizing only likelihood or only impact—omit asset value/business context.
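The residual-risk formula can be sketched under a simple multiplicative model: residual = inherent × (1 − control effectiveness), then rank by that score weighted by asset criticality. Field names and scores are illustrative.

```python
# Residual risk triage under a simple multiplicative model; scales are
# illustrative (inherent risk 0-10, effectiveness and criticality 0-1).

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Risk remaining after controls; effectiveness is 0.0-1.0."""
    return inherent * (1.0 - control_effectiveness)

risks = [
    {"name": "ransomware", "inherent": 9.0, "eff": 0.6, "criticality": 1.0},
    {"name": "phishing",   "inherent": 8.0, "eff": 0.3, "criticality": 0.8},
    {"name": "dos",        "inherent": 5.0, "eff": 0.2, "criticality": 0.5},
]
for r in risks:
    r["score"] = residual_risk(r["inherent"], r["eff"]) * r["criticality"]

ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
print([r["name"] for r in ranked])
```

Note the ordering: phishing outranks ransomware despite lower inherent risk, because its weak controls leave more residual risk, which is exactly why triage can't look at inherent risk alone.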
Key Management: KMS vs HSM & Custody
Manage cryptographic keys' lifecycle—generation, storage, rotation, revocation, backup—across KMS, HSM, BYOK/CMK models.
Key Insight
Key custody defines trust: HSMs isolate key material; BYOK reduces provider control but doesn't remove metadata or backup exposure.
Common Mistakes
- Assuming encryption alone guarantees compliance or prevents breaches.
- Believing BYOK means the cloud provider has zero access to key material.
- Assuming it's safe to back up keys alongside the ciphertext or on the same server.
Security Engineering
31% of exam weight
Secrets Management — Vaults & Lifecycle
Securely store, distribute, rotate, and retire credentials; exam focus: automation, IAM integration, and auditability.
Key Insight
Treat secrets as ephemeral assets: vault + RBAC + automated rotation/revocation + audit = exam-grade control.
Common Mistakes
- Storing secrets in private repos or config files — assume exposure; use a vault with access controls.
- Using environment variables as a long-term secret store — require vault-backed encryption and RBAC.
- Relying on rotation alone — still enforce least privilege, automated revocation and audit logging.
MAC — Message Authentication Code (HMAC / CMAC)
A symmetric tag proving integrity and authenticity with a shared key; use when non-repudiation isn't required.
Key Insight
MACs authenticate messages between key holders only — they verify integrity/authenticity but do NOT provide non-repudiation or confidentiality.
Common Mistakes
- Assuming a MAC provides non-repudiation like a digital signature — it does not.
- Comparing MACs with normal string compare — use constant-time verification to avoid timing attacks.
- Placing MAC and encryption incorrectly — prefer encrypt-then-MAC (or use AE) to avoid subtle flaws.
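The constant-time verification point is worth seeing in code. Python's standard library provides `hmac.compare_digest` for exactly this; the key and message below are illustrative.

```python
import hashlib
import hmac

# HMAC gives integrity/authenticity between holders of the shared key.
# Verify with hmac.compare_digest, never ==, to avoid timing attacks.
key = b"shared-secret-key"        # illustrative; load from a vault in practice
message = b"amount=100&to=alice"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, received_tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(key, message, tag))                    # True: untampered
print(verify(key, b"amount=999&to=mallory", tag))   # False: tampered message
```

Note that anyone holding `key` can produce a valid tag, which is precisely why a MAC cannot provide non-repudiation the way a digital signature does.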
PKI: Architecture & Certificate Lifecycle
CAs, trust anchors and the enroll/issue/renew/revoke lifecycle — plus operational controls and automation.
Key Insight
Trust rests on anchor custody and CA key control; revocation is NOT instantaneous—use short‑lived certs, OCSP stapling, and automation.
Common Mistakes
- Assuming PKI only covers TLS — it also enables code signing, S/MIME, VPN and device/user auth.
- Treating self-signed certs as trusted by default instead of explicitly adding to a trust store.
- Believing revocation is instantaneous — CRLs/OCSP have propagation, availability, and windowed risk.
PQC: Algorithms, Tradeoffs & Migration
Public-key algorithms designed to resist quantum attacks — know families, performance costs, and migration strategy.
Key Insight
PQC planning is urgent: 'harvest-now, decrypt-later' risks plus large variance in key/cipher sizes, latency and CPU costs by algorithm family.
Common Mistakes
- Deferring PQC because quantum hardware isn't yet widespread — ignores long-lived data risk.
- Equating PQC with QKD — software algorithm migration vs physical-layer quantum links.
- Assuming all PQC schemes have similar keys/performance — lattice, code-based, hash-based, isogeny differ widely.
Policy-as-Code: Shift-Left Governance (OPA/Gatekeeper)
Encode controls as code; enforce in IaC, CI/CD and runtime; combine tagging with CSPM/CWPP for continuous compliance.
Key Insight
PaC must be versioned, tested and enforced across pipeline + runtime; tags map policies to cloud assets for continuous checks.
Common Mistakes
- Treating PaC as IaC: PaC defines the rules, IaC creates the resources.
- Assuming a policy file committed to a repo equals enforcement: without CI tests and runtime checks, misconfigs slip through.
- Assuming policies are static or cloud-portable: they need maintenance and provider-specific mappings.
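The idea behind policy-as-code can be sketched as rules evaluated against resource definitions before deployment. Real enforcement would use OPA/Rego or a cloud-native equivalent; this Python sketch (rule names and resource fields are illustrative) only shows the shape of a policy check that a CI pipeline could gate on.

```python
# Policy-as-code in spirit: each rule inspects a resource definition and
# reports a violation; a non-empty result fails the pipeline.

def check(resource: dict) -> list[str]:
    """Return the policy violations for one (illustrative) cloud resource."""
    violations = []
    if resource.get("public", False):
        violations.append("resource must not be publicly accessible")
    if "owner" not in resource.get("tags", {}):
        violations.append("resource must carry an 'owner' tag")
    if not resource.get("encrypted", False):
        violations.append("storage must be encrypted at rest")
    return violations

bucket = {"public": True, "tags": {}, "encrypted": False}
print(check(bucket))  # three violations -> block the deploy
print(check({"public": False, "tags": {"owner": "sec"}, "encrypted": True}))  # []
```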
IoT Security: Inventory, Segmentation & Lifecycle
Networked sensors/actuators that expand the attack surface; require accurate inventory, network segmentation and long-term firmware support.
Key Insight
Treat IoT as persistent enterprise assets: enforce segmentation, gateway/NAC controls, vendor firmware SLAs and compensating controls.
Common Mistakes
- Assuming IoT devices can't be secured: segmentation, gateways, and secure boot all reduce risk.
- Relying on a single perimeter firewall to contain compromised IoT devices.
- Treating IoT as short-lived: ignoring lifecycle planning, vendor support and firmware update processes.
Security Operations
22% of exam weight
SOAR — Security Orchestration, Automation, Response
Platform that integrates SIEM, EDR/XDR, ITSM to enrich alerts, automate workflows, and execute response playbooks.
Key Insight
SOAR augments analysts—automate routine work, but gate risky actions for human approval and audit trail.
Common Mistakes
- Believing SOAR replaces analysts—human approvals remain necessary for high‑risk actions.
- Treating SOAR as just a SIEM that only collects logs and correlates events.
- Automating every alert without gating or testing, which increases false positives and operational risk.
IR Playbooks & Integrations — Triggers, Gates, Actions
Encoded incident procedures: triggers, decision logic, manual checkpoints, escalations, error handling, and tool/API integrations.
Key Insight
Parameterize playbooks by incident type, include manual checkpoints for high‑impact steps, and log/retry every automated action.
Common Mistakes
- Treating playbooks as static—skip regular testing, tabletop exercises, or revisions.
- Removing human gates and letting automation make all incident decisions.
- Embedding plaintext credentials or permanent secrets in playbook actions.
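The gating pattern above can be sketched directly: low-impact actions run automatically, high-impact actions wait for human approval, and every outcome lands in an audit log. Function and action names are illustrative, not from any specific SOAR product.

```python
# Playbook step with a manual gate and an audit trail (illustrative names).
audit_log = []

def run_action(name: str, high_impact: bool, approved: bool = False) -> str:
    """Execute a playbook action, gating high-impact steps on human approval."""
    if high_impact and not approved:
        audit_log.append((name, "pending_approval"))
        return "pending_approval"
    audit_log.append((name, "executed"))
    return "executed"

print(run_action("enrich_ip_reputation", high_impact=False))        # runs freely
print(run_action("isolate_host", high_impact=True))                 # waits for a human
print(run_action("isolate_host", high_impact=True, approved=True))  # now executes
```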
Continuous Monitoring — Telemetry → Triage
Collect, normalize, and analyze telemetry to produce prioritized alerts for fast incident response.
Key Insight
Collection alone is useless — parsing, correlation, detection rules, and prioritization turn telemetry into actionable alerts.
Common Mistakes
- Treating monitoring as only log collection; skipping normalization/parsing.
- Expecting automation to replace analysts instead of augmenting investigations.
- Assuming more telemetry automatically improves detection; it often increases noise.
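The normalization step is where raw telemetry becomes usable: a raw log line is parsed into structured fields that correlation rules can consume, and anything that fails to parse is effectively invisible to detection. The log format and field names below are illustrative.

```python
import re

# Normalize a syslog-style auth-failure line into a structured event.
LINE = r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+) port (?P<port>\d+)"

def normalize(raw: str):
    """Parse one raw line into a normalized event dict, or None if unparsed."""
    m = re.search(LINE, raw)
    if not m:
        return None  # unparsed telemetry never reaches detection rules
    event = m.groupdict()
    event["event_type"] = "auth_failure"
    return event

evt = normalize("sshd[311]: Failed password for root from 203.0.113.7 port 52144")
print(evt)
```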
SIEM — Security Info & Event Management
Centralizes, normalizes, correlates, and enriches logs to detect threats, generate alerts, and support investigations.
Key Insight
SIEM needs quality telemetry, parsing/correlation rules, tuning, and threat intel; it's for detection/enrichment, not prevention.
Common Mistakes
- Believing SIEM will auto-detect all attacks (including zero‑days) without tuning and good data.
- Treating SIEM as a preventative control and removing endpoint/IDS protections.
- Expecting accurate, low-noise alerts immediately after deployment without rule tuning.
EDR — Endpoint Detection & Host Forensics
Agent-based host telemetry, alerting and remediation plus forensic artifacts to detect, validate, and contain endpoint threats.
Key Insight
Deep host telemetry + response actions — EDR detects and aids forensics but needs tuning, correlation, and other controls; alerts aren't confirmations.
Common Mistakes
- Treating every EDR alert as a confirmed compromise
- Assuming EDR replaces AV or other endpoint protections
- Expecting EDR to provide full network-wide visibility
Incident Response (IR) Lifecycle & Roles
Phased, cross-functional process—prepare, identify, contain, eradicate, recover, lessons learned—to limit impact and preserve evidence.
Key Insight
Phases overlap; containment can be partial and trades off evidence preservation vs business impact — involve IT, legal, and comms early
Common Mistakes
- Leaving IR solely to security without IT, legal, or business owners
- Choosing fastest containment that destroys evidence or breaks operations
- Conflating eradication (remove cause) with recovery (restore operations)
Malware Detonation & IoC Extraction (Sandboxing)
Run samples in instrumented sandboxes to capture telemetry, extract vetted IOCs, and produce detection- and TIP-ready artifacts.
Key Insight
Sandbox results are partial and environment-dependent — always validate IOCs across instrumentations, live hosts, and EDR logs
Common Mistakes
- Treating single-sandbox output as full malware behavior
- Promoting sandbox-only artifacts as IOCs without cross-validation
- Assuming sandboxes fully prevent escapes or never alter malware behavior
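The cross-validation rule can be sketched as a simple corroboration count: promote an indicator only when it appears in more than one independent source (a second sandbox, EDR telemetry, and so on). The indicator values below are illustrative.

```python
from collections import Counter

# IOC candidates from independent sources (illustrative values).
sandbox_a = {"evil.example.com", "198.51.100.9", "a1b2c3hash"}
sandbox_b = {"evil.example.com", "198.51.100.9"}
edr_logs  = {"198.51.100.9", "benign-update.example.com"}

def vetted_iocs(*sources, min_sources: int = 2) -> set:
    """Keep only indicators seen in at least min_sources independent sources."""
    counts = Counter(ioc for source in sources for ioc in source)
    return {ioc for ioc, n in counts.items() if n >= min_sources}

# Single-source artifacts are dropped until corroborated.
print(sorted(vetted_iocs(sandbox_a, sandbox_b, edr_logs)))
```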
Config Management: Idempotent Baselines & Drift Control
Define idempotent secure baselines, detect and remediate drift, and govern changes with staging, approvals, and rollback
Key Insight
Idempotence + staged deployment = safe scale; plan inventory/credentials for agentless tools and use asset-specific baselines
Common Mistakes
- Assuming any remediation script is idempotent
- Auto-remediating production without staged testing or approval gates
- Applying a single baseline to every asset type
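Idempotence has a concrete test: applying the same baseline twice must leave the state unchanged, with the second run reporting zero changes. The baseline settings and host state below are illustrative.

```python
# Idempotent baseline remediation: converge state toward the baseline and
# report what changed. Settings and values are illustrative.
BASELINE = {"ssh_root_login": "no", "password_min_length": 14, "telnet": "disabled"}

def apply_baseline(state: dict, baseline: dict) -> list[str]:
    """Converge state toward the baseline; return the keys that were changed."""
    changed = []
    for key, desired in baseline.items():
        if state.get(key) != desired:
            state[key] = desired
            changed.append(key)
    return changed

host = {"ssh_root_login": "yes", "telnet": "enabled"}
print(apply_baseline(host, BASELINE))  # first run remediates the drift
print(apply_baseline(host, BASELINE))  # second run: [] -- idempotent
```

A remediation script that appends to files or toggles settings unconditionally fails this two-run test, which is the quick way to catch the first "common mistake" above.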