
IAPP Certified Artificial Intelligence Governance Professional (AIGP) Exam Ultimate Cheat Sheet

5 Domains • 30 Concepts • Approx. 4 pages

Your Quick Reference Study Guide

This cheat sheet covers the core concepts, terms, and definitions you need to know for the IAPP Certified Artificial Intelligence Governance Professional (AIGP) Exam. We've distilled the most important domains, topics, and critical details to help your exam preparation.

💡 Note: While this study guide highlights essential concepts, it's designed to complement—not replace—comprehensive learning materials. Use it for quick reviews, last-minute prep, or to identify areas that need deeper study before your exam.


About This Cheat Sheet: This study guide covers core concepts for IAPP Certified Artificial Intelligence Governance Professional (AIGP) Exam. It highlights key terms, definitions, common mistakes, and frequently confused topics to support your exam preparation.

Use this as a quick reference alongside comprehensive study materials.


Understanding the foundations of AI governance (18% of exam)

Data Quality — Fit, Traceable, Lawful

Make training/test data measurable, representative, lawful and lineage-traceable for valid models and audits.

Key Insight

Quality = measurable fitness (accuracy, completeness, representativeness, label quality) + provenance/lineage for auditability.

Often Confused With

Data Governance • Bias Mitigation • Data Privacy

Common Mistakes

  • Assuming 'more data' automatically fixes bias or representativeness.
  • Treating publicly available data as automatically lawful to use.
  • Believing de‑identification removes the need for legal review or data‑subject protections.
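
These fitness dimensions are directly measurable. A minimal Python/pandas sketch; the 'label' and 'group' column names are illustrative assumptions:

  import pandas as pd

  def data_quality_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
      # Measurable fitness indicators: completeness, label balance, subgroup shares
      return {
          "completeness": df.notna().mean().to_dict(),   # non-null share per column
          "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
          "group_shares": df[group_col].value_counts(normalize=True).to_dict(),
          "n_rows": len(df),                             # pin the row count for lineage records
      }

  # Toy data; in practice point this at the actual training/test sets
  df = pd.DataFrame({"label": [0, 1, 1, 0], "group": ["a", "a", "b", None], "x": [1.0, None, 3.0, 4.0]})
  print(data_quality_report(df, "label", "group"))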

Accountability — Roles, Oversight & Transparency

Define who is answerable for AI outcomes and enforce oversight, reporting, logging and remedial action across the AI lifecycle.

Key Insight

Accountability = answerability + enforceable oversight: assigned roles, policies, logs, disclosures and remediation powers.

Often Confused With

Responsibility • Transparency • Compliance

Common Mistakes

  • Expecting full source code/raw training data publication to satisfy transparency.
  • Treating a single public report as complete, ongoing accountability.
  • Equating responsibility (task owner) with accountability (answerable and subject to oversight).

Tiered Governance — Tailor by Mission & Risk

Design tiered, risk-based controls sized to mission, sector, maturity, and explicit risk tolerance.

Key Insight

Scale control scope and intensity by multiple dimensions (mission, values, sector, maturity, tolerance).

Often Confused With

AI classification model for risk-based governance • One-size governance frameworks

Common Mistakes

  • Applying one risk tolerance across all models and use cases
  • Assuming scaled controls mean weaker protections
  • Relying solely on technical controls; ignoring policy/legal/ops

AI Risk-Tier Classification — Tiers to Controls

Assign AI systems to measurable risk tiers (data sensitivity, impact, autonomy) and map tiers to required controls.

Key Insight

Classify using deployment context and impact, not model capability alone; the model informs controls but doesn't enforce them.

Often Confused With

Tailoring governance to industry, risk appetite, and process differences • Capability-based risk assessment

Common Mistakes

  • Scoring risk by model capability alone
  • Treating autonomy as automatic high risk
  • Ignoring non-personal data and inference harms
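
One way to make tiering concrete is to score a few declared dimensions and map the resulting tier to a control set. A simplified sketch; the dimensions, cutoffs and control lists are illustrative, not drawn from any particular standard:

  from dataclasses import dataclass

  @dataclass
  class SystemProfile:
      data_sensitivity: int   # 0-3: public .. special-category personal data
      decision_impact: int    # 0-3: advisory .. legally/safety significant
      autonomy: int           # 0-3: human decides .. fully automated

  CONTROLS_BY_TIER = {
      "low":    ["model card", "basic logging"],
      "medium": ["model card", "fairness tests", "access controls", "monitoring"],
      "high":   ["model card", "fairness tests", "access controls", "monitoring",
                 "human oversight", "conformity evidence", "incident response plan"],
  }

  def classify(p: SystemProfile) -> str:
      # Deployment context and impact drive the tier, not raw model capability
      score = p.data_sensitivity + p.decision_impact + p.autonomy
      return "high" if score >= 7 else ("medium" if score >= 4 else "low")

  tier = classify(SystemProfile(data_sensitivity=3, decision_impact=3, autonomy=1))
  required_controls = CONTROLS_BY_TIER[tier]   # the tier maps to controls; people enforce them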

EU AI Act Risk Tiers (Prohibited / High / Limited / Minimal)

Four-tier risk model — prohibited, high‑risk, transparency/limited, minimal — for mapping obligations and controls.

Key Insight

Risk is set by intended use, context and harm potential — not only by technical capability.

Often Confused With

GDPR • General-purpose AI (GPAI) • Sector-specific regulation

Common Mistakes

  • Assuming high‑risk = ban; high‑risk is permitted but subject to strict obligations.
  • Classifying by technical capability alone — ignore intended use, context, and impact.
  • Applying the EU taxonomy unchanged or assuming EU adoption equals global legal compliance.

AI Access Controls — RBAC, SoD & Immutable Audit Trails

Use RBAC, segregation‑of‑duties, MFA and immutable logs to control and record access to models, data, pipelines and rollbacks.

Key Insight

Combine preventive controls (RBAC, SoD, MFA) with detective controls (immutable logs, alerts) and include service accounts/CI/CD.

Often Confused With

MFA (Multi-Factor Authentication) • Audit Logging • Encryption

Common Mistakes

  • Believing RBAC alone fully protects models and data.
  • Treating audit logs as a substitute for preventive controls.
  • Ignoring service accounts and CI/CD agents when enforcing SoD and approvals.
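
A minimal sketch of the preventive side; the role names, actions and approval rule are illustrative, and a real deployment would sit on an IAM platform with the immutable log as the detective layer:

  ROLE_PERMISSIONS = {
      "data_scientist": {"train_model", "read_features"},
      "ml_engineer":    {"deploy_model", "rollback_model"},
      "auditor":        {"read_audit_logs"},
  }

  def authorize(actor: str, roles: set[str], action: str, approver: str | None = None) -> bool:
      # Preventive RBAC check: the actor's roles must grant the action
      allowed = any(action in ROLE_PERMISSIONS.get(r, set()) for r in roles)
      # Segregation of duties: deploys/rollbacks need a second, distinct approver
      if action in {"deploy_model", "rollback_model"} and approver in (None, actor):
          return False
      return allowed

  # Service accounts and CI/CD agents go through the same check
  ok = authorize("bob", {"ml_engineer"}, "deploy_model", approver="alice")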

Understanding how laws, standards and frameworks apply to AI (21% of exam)

Privacy in AI Governance (DPIA, Controllers/Processors)

How data‑protection duties (lawful basis, controller/processor roles, DPIAs) apply across the AI model lifecycle.

Key Insight

Privacy duties follow the data and the decision — controllers retain legal responsibility even if processing is 'technical' or outsourced.

Often Confused With

Anonymization • Controller vs Processor • DPIA vs Security Assessment

Common Mistakes

  • Assuming privacy laws don't apply because AI is a 'technical' system.
  • Thinking anonymized/aggregated training data always removes obligations.
  • Believing a controller can completely delegate legal duty to a processor by contract.

Purpose Limitation & Necessity for AI Reuse

Document legitimate purposes and test whether secondary AI use is necessary and compatible, and whether it needs a new lawful basis or new/updated notices.

Key Insight

If secondary use fails necessity or compatibility tests, you must obtain a new lawful basis (or consent) and update notices.

Often Confused With

Legitimate interest • Anonymization • Consent

Common Mistakes

  • Treating 'legitimate interest' as blanket approval for secondary AI reuse.
  • Believing anonymization/pseudonymization automatically frees you from notice or lawful‑basis duties.
  • Relying only on internal policy updates without updating privacy notices or getting fresh consent.

Human–AI Oversight & Handoffs (HITL / HOTL)

Define when humans decide versus when AI only advises, set intervention thresholds, and keep records to meet liability and transparency duties.

Key Insight

A human's presence alone doesn't make AI safe—oversight must have authority, training, and timely ability to intervene.

Often Confused With

Human-in-the-Loop (HITL) • Human-on-the-Loop (HOTL) • Automated Decision-Making

Common Mistakes

  • Assuming any human presence makes the system compliant or safe
  • Believing assigning oversight removes the organization's liability
  • Interpreting HITL as 'human must act on every model decision'
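
Intervention thresholds are often implemented as a simple confidence gate: below the threshold the model only advises and a human decides. A sketch; the threshold value is an assumption to be set per use case:

  def route_decision(model_score: float, threshold: float = 0.90) -> dict:
      # Below the threshold the model only advises and a human decides (HITL);
      # above it the model acts, subject to after-the-fact review (HOTL).
      decider = "model" if model_score >= threshold else "human"
      # Return a record of why, to support liability and transparency duties
      return {"decider": decider, "score": model_score, "threshold": threshold}

  decision = route_decision(0.62)   # -> routed to human review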

AI Supply‑Chain Risk Triage & Provenance

Map and tier vendors, components and upstream dependencies by impact, data sensitivity, provenance and emergent risk.

Key Insight

Transitive dependencies and emergent interactions cause most AI supply‑chain risk—continuous upstream monitoring is required.

Often Confused With

Traditional Vendor Risk Management • Open-Source Software Risk • Third-Party SLAs

Common Mistakes

  • Treating AI supply‑chain like traditional vendor risk (ignores transitive dependencies)
  • Assuming open‑source components are inherently low-risk
  • Relying on a one-time inventory instead of continuous monitoring

Conformity Assessment — Proof, Paths & Pitfalls

Evidence-based process (tests, docs, marks) proving an AI system meets mandatory or voluntary regulatory rules.

Key Insight

Conformance proves process and evidence, not perfection — routes differ (self‑assessment, notified body, 3rd party) and obligations persist post‑cert.

Often Confused With

Vendor self-attestation • Voluntary labelling • Post-market surveillance

Common Mistakes

  • Relying on one benchmark score as proof of full regulatory conformance.
  • Accepting vendor self-attestation without independent evidence or documentation.
  • Assuming a certificate removes ongoing obligations like monitoring and incident reporting.

Model Transparency — Labels, Model Cards & Limits

Stakeholder-facing disclosures (labels, model cards, notices) that state capabilities and limits, and when transparency suffices.

Key Insight

A terse 'This is AI' or a lone model card rarely meets the duty: disclosures must be proportionate, clear on limits/risks, and paired with controls where appropriate.

Often Confused With

Model explainability • Privacy notices • Limited-risk vs high-risk classification

Common Mistakes

  • Thinking a simple 'This is AI' label alone satisfies legal transparency duties.
  • Believing a model card alone discharges all regulatory disclosure obligations.
  • Assuming transparency requires publishing model weights or full algorithmic explainability.

AI Lifecycle Policy & RACI

Translate trustworthy‑AI principles into enforceable lifecycle policies with scope, roles (RACI), measurable controls and escalation paths.

Key Insight

Policy = roles + delegated decisions + escalation. Central sign‑off doesn't replace role‑level responsibilities or RACI.

Often Confused With

High-level AI principles • RACI matrix • MLOps governance

Common Mistakes

  • Naming a single owner or board as 'accountability' — accountability must map to role duties.
  • Treating high‑level principles as sufficient; skipping enforceable policies and measurable controls.
  • Letting Legal own all governance and sidelining data stewards/MLOps decisions.

Content Provenance & Lineage

Tamper‑evident lineage linking datasets, code, parameters, model versions and deployment artifacts for audits and reproducibility.

Key Insight

Provenance must link artifacts (hashes, timestamps, access controls, chain-of-trust). Simple tags or filenames fail audits.

Often Confused With

Data versioning • Audit logs

Common Mistakes

  • Tracking only datasets and ignoring code, config, parameters and model artifacts.
  • Assuming simple version tags or filenames provide sufficient provenance.
  • Relying on ingest logs alone without integrity, retention and access controls.
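
Tamper‑evident lineage usually comes down to hashing each artifact and binding the hashes into one record. A standard‑library sketch; the file paths are placeholders:

  import hashlib, json, time

  def sha256_file(path: str) -> str:
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              h.update(chunk)
      return h.hexdigest()

  def lineage_record(dataset: str, code: str, model: str) -> str:
      # Hashes bind the exact artifacts together; tags or filenames alone would not
      return json.dumps({
          "timestamp": time.time(),
          "dataset_sha256": sha256_file(dataset),
          "code_sha256": sha256_file(code),
          "model_sha256": sha256_file(model),
      })

  # lineage_record("train.csv", "train.py", "model.bin")  # placeholder paths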

Understanding how to govern AI development (24% of exam)

Traceable Audit Trail — Model, System & Logs

Time‑stamped model/system cards, datasheets and audit logs linking purpose, provenance, performance, approvals and retention.

Key Insight

Docs must be verifiable and linked across the lifecycle (artifact + metadata + retention) to support audits and investigations.

Often Confused With

Model cards • System cards • Technical documentation

Common Mistakes

  • Relying on a one‑time model card as permanent proof of governance
  • Assuming any documentation equals compliance—poor or unverifiable docs fail audits
  • Skipping system‑level records because 'model card is enough'

AI Quality & Evaluation — Benchmarks, Metrics, Thresholds

Operational benchmarking: choose datasets/baselines/metrics, set acceptance thresholds, and detect regressions and generalization gaps.

Key Insight

Use multiple, protocol‑matched benchmarks and operational thresholds; build‑time scores don't replace monitoring or context tests.

Often Confused With

Leaderboards • Benchmarking • Production monitoring

Common Mistakes

  • Treating a single metric or leaderboard rank as proof of production fitness
  • Re‑running published datasets without matching protocol/preprocessing
  • Skipping post‑deployment monitoring because build‑time tests passed
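
Acceptance thresholds can be enforced as a release gate that fails when any metric crosses its limit. A sketch; the metric names and limits are illustrative:

  THRESHOLDS = {"accuracy": 0.90, "subgroup_min_recall": 0.80, "latency_p95_ms": 200}

  def release_gate(metrics: dict) -> list[str]:
      # Returns the violated thresholds; an empty list means the gate passes
      failures = []
      for name, limit in THRESHOLDS.items():
          value = metrics[name]
          bad = value > limit if name.endswith("_ms") else value < limit  # ceiling vs floor
          if bad:
              failures.append(f"{name}={value} vs limit {limit}")
      return failures

  print(release_gate({"accuracy": 0.93, "subgroup_min_recall": 0.78, "latency_p95_ms": 150}))
  # -> ['subgroup_min_recall=0.78 vs limit 0.8']: block the release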

Fairness Testing — Group vs Individual

Measure disparate outcomes with multiple formal metrics at dataset and model levels to guide governance.

Key Insight

No single metric fits every use-case; dataset bias ≠ model bias — test both group and individual outcomes.

Often Confused With

Bias mitigation • Dataset auditing • Calibration testing

Common Mistakes

  • Using one fairness metric as a universal test
  • Believing removing protected attributes removes bias
  • Treating dataset-level stats as proof of model fairness
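
The group‑level metrics are straightforward to compute from predictions and group membership. A numpy sketch of two common ones; as the insight above says, neither is universally 'the' fairness test:

  import numpy as np

  def demographic_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
      # Max gap in positive-prediction rate across groups (0 = parity)
      rates = [y_pred[group == g].mean() for g in np.unique(group)]
      return float(max(rates) - min(rates))

  def equal_opportunity_diff(y_true, y_pred, group) -> float:
      # Max gap in true-positive rate across groups (assumes each group has positives)
      tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
      return float(max(tprs) - min(tprs))

  y_true = np.array([1, 1, 0, 1]); y_pred = np.array([1, 0, 0, 1])
  group = np.array(["a", "a", "b", "b"])
  print(demographic_parity_diff(y_pred, group), equal_opportunity_diff(y_true, y_pred, group))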

Dev-time Testing & Benchmarking (Governance Controls)

Integrate testing and benchmarks during training to find safety, bias, security, and performance risks before deployment

Key Insight

Iterative dev-phase gates + traceable benchmarks catch issues early — include subgroup, adversarial, and explainability tests.

Often Confused With

Post-deployment monitoring • Unit/integration testing

Common Mistakes

  • Assuming overall accuracy implies no subgroup harms
  • Only testing after final training; late fixes are costly
  • Treating a single benchmark pass as sufficient for release

Post-Deployment Monitoring (PMM) & Red Teaming

Continuous telemetry, KPIs, alerts, user feedback and red‑team tests to catch drift, regressions and safety/fairness issues, and to drive remediation.

Key Insight

Combine automated telemetry + human review + red‑team scenarios: thresholds flag, humans adjudicate, remediation closes the loop.

Often Confused With

Pre-deployment validation • A/B testing • Automated alerting

Common Mistakes

  • Treating PMM as only technical telemetry and ignoring user complaints and real‑world outcome data.
  • Running a one-time post-release check; PMM must be continuous and recalibrate thresholds.
  • Relying solely on automated alerts or assuming alerts auto-resolve; human triage and escalation are required.
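
Drift alarms are often built on a distribution‑shift statistic such as the Population Stability Index (PSI); the 0.2 flag level below is a common rule of thumb, not a mandated threshold:

  import numpy as np

  def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
      # Population Stability Index between a baseline sample and a live sample
      edges = np.histogram_bin_edges(expected, bins=bins)
      e = np.histogram(expected, bins=edges)[0] / len(expected)
      a = np.histogram(actual, bins=edges)[0] / len(actual)
      e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
      return float(np.sum((a - e) * np.log(a / e)))

  rng = np.random.default_rng(0)
  baseline = rng.normal(0.0, 1.0, 5000)     # stand-in for build-time score sample
  live = rng.normal(0.3, 1.0, 5000)         # stand-in for production scores
  needs_review = psi(baseline, live) > 0.2  # the threshold flags; humans adjudicate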

Model Card / Factsheet — Purpose, Limits & Safe Use

Timestamped summary of model purpose, data, performance, limitations, safe‑use rules, and linked incident updates for auditability.

Key Insight

Model cards are living audit artifacts: update after incidents, data or performance changes and link detailed incident logs for traceability.

Often Confused With

Dataset datasheet • Release notes • Model risk assessment

Common Mistakes

  • Treating the model card as a one-time document; it must be updated after model, data, usage or incident changes.
  • Skipping updates for 'minor' incidents; any material effect on performance or limits requires an update.
  • Providing only high-level incident summaries; audits demand timelines, root cause and mitigation details.
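
Keeping the card 'living' is easier when changes are appended with timestamps rather than overwritten. A minimal sketch; the field names are illustrative:

  from datetime import datetime, timezone

  model_card = {
      "model": "credit-scorer-v3",      # hypothetical model name
      "purpose": "advisory credit risk triage",
      "limitations": ["not validated for thin-file applicants"],
      "history": [],                    # append-only update trail
  }

  def record_update(card: dict, reason: str, incident_ref: str | None = None) -> None:
      card["history"].append({
          "at": datetime.now(timezone.utc).isoformat(),
          "reason": reason,             # e.g. retrain, data change, incident
          "incident_ref": incident_ref, # link to the detailed incident log
      })

  record_update(model_card, "post-incident recalibration", incident_ref="INC-042")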

Understanding how to govern AI deployment and use (24% of exam)

Where to Run AI: Edge · On‑Prem · Cloud

Choose edge/on‑prem/cloud by balancing latency, data residency, security, ops cost and model‑sourcing tradeoffs.

Key Insight

Tradeoff: edge = lowest latency but limited compute; on‑prem = control/compliance; cloud = scale/managed services. Model‑sourcing (fine‑tune vs RAG) is a separate tradeoff layered on top.

Often Confused With

Hybrid deployment • Data residency • Model sourcing: fine‑tuning vs RAG

Common Mistakes

  • Assuming cloud is inherently more secure than on‑prem without validating controls
  • Assuming on‑prem automatically meets regulatory or residency needs without added controls
  • Believing fine‑tuning always outperforms RAG or that RAG removes the need to update data/models

AI DPIA — ICO / CNIL Templates

AI‑adapted DPIA: document purpose, data types, risks, necessity/proportionality and concrete privacy mitigations (e.g., anonymization, pseudonymization).

Key Insight

DPIA must link each AI risk to concrete mitigations, monitoring and justification; pseudonymization ≠ anonymization; reassess on major changes.

Often Confused With

Standard DPIA • GDPR compliance assessment

Common Mistakes

  • Thinking a DPIA is only needed for new projects—ignore secondary uses or major changes at your peril
  • Treating the DPIA as paperwork and not implementing or monitoring the listed mitigations
  • Assuming pseudonymization equals full anonymization or that anonymization has no analytic tradeoffs

AI Impact Assessments (AIA / FRIA)

Sector-aware pre-deployment and continuous review of harms, stakeholders, data flows, legal/privacy, safety and operational risks.

Key Insight

Use a DPIA for GDPR data risks; run a FRIA when fundamental‑rights harms (profiling, expression, dignity) are possible; log mitigations, owners, and metrics.

Often Confused With

DPIA (Data Protection Impact Assessment) • FRIA (Fundamental Rights Impact Assessment) • Model Risk Assessment

Common Mistakes

  • Thinking AIA = privacy only — it must cover safety, fairness, legal, reputational and operational risks
  • Relying on a DPIA alone to meet AI‑specific AIA/FRIA obligations
  • Running one-off assessments and skipping mitigation records, decision logs, or monitoring triggers

AI Incident Response (Detect → Contain → Learn)

Operational process to detect, triage, contain, investigate, remediate, report and learn from AI incidents, with evidence preservation.

Key Insight

AI incidents include model failures, biased/harmful outputs and privacy/safety events — preserve evidence, contain impact, then remediate and record lessons learned.

Often Confused With

Security incident response • Business continuity / Disaster Recovery • Model monitoring & observability

Common Mistakes

  • Counting only cyber breaches as incidents — model drift, unsafe outputs, and privacy harms qualify
  • Treating rollback/kill‑switch as always sufficient — deactivation can cause downstream harm and not remove root causes
  • Using RCA to assign blame instead of identifying systemic process and control failures

Foreseeable Misuse & Function‑Creep Analysis

Map how models can be repurposed, assess impact paths, and pick design, contractual, and monitoring controls.

Key Insight

For each misuse pathway, assign concrete controls (design limits, monitoring, deactivation, contracts); controls are complementary, not interchangeable.

Often Confused With

Threat modeling • Privacy Impact Assessment (PIA) • Model validation and testing

Common Mistakes

  • Treating downstream harms as only technical model errors
  • Assuming vendor disclaimers shift full responsibility
  • Relying on contracts alone without active monitoring/enforcement

Role‑Based AI Governance Training

Targeted, competency‑based training with practical assessments, escalation paths, and auditable evidence.

Key Insight

Use role-specific practical assessments tied to controls and repeat on role/risk changes — attendance alone ≠ competence

Often Confused With

Security Awareness Training • General Compliance Training • Onboarding Orientation

Common Mistakes

  • Assuming one‑off training is sufficient
  • Treating completion records as proof of competence
  • Treating training as a substitute for technical or process controls

Bloom’s Taxonomy-based cognitive skill emphasis (13% of exam)

Bias Mitigation Playbook

Define contextual fairness; detect sampling, label, measurement, historical and algorithmic bias; pick metrics; and apply pre‑, in‑ and post‑processing mitigations.

Key Insight

No single metric fits all — bind a fairness definition to the specific harm, document tradeoffs, then monitor residuals.

Often Confused With

Fairness definitions • Model interpretability • Data preprocessing

Common Mistakes

  • Assuming that dropping protected attributes stops proxy or historical bias.
  • Assuming one fairness metric covers all stakeholder harms.
  • Relying only on technical fixes without governance, documentation, or monitoring.

Governance Audit Logging (tamper‑evident)

Tamper‑evident records of risk scores, mitigation plans, inputs, versions, approvals and actor attribution for audits.

Key Insight

Audit value = context + integrity: timestamps, actor, model/version, inputs and retention/access controls — not just a final score.

Often Confused With

Operational logs • Data provenance • Retention policies

Common Mistakes

  • Recording only final risk scores — omits inputs, versions and approvals needed for investigations.
  • Storing logs without tamper‑evidence — unprotected logs can be altered and lose evidentiary value.
  • Assuming 'we have logs' equals compliance — you still need retention, access controls and review procedures.
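
Tamper evidence is commonly achieved by chaining each entry's hash to its predecessor, so any later edit breaks verification. A standard‑library sketch:

  import hashlib, json, time

  def append_entry(log: list, payload: dict, actor: str) -> None:
      body = {"ts": time.time(), "actor": actor, "payload": payload,
              "prev": log[-1]["hash"] if log else "genesis"}
      # Each entry's hash covers its content and the previous hash (a chain)
      body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
      log.append(body)

  def verify(log: list) -> bool:
      # Recompute every hash; any altered entry or broken link fails
      prev = "genesis"
      for e in log:
          body = {k: e[k] for k in ("ts", "actor", "payload", "prev")}
          digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
          if e["prev"] != prev or e["hash"] != digest:
              return False
          prev = e["hash"]
      return True

  log: list = []
  append_entry(log, {"risk_score": 0.72, "model": "credit-scorer-v3"}, actor="jdoe")
  assert verify(log)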

Dataset Governance — Collect • Store • Process

Roles, policies and controls to ensure lawful, auditable dataset sourcing, quality, labeling, retention, access and use.

Key Insight

Governance is cross‑functional and must scale by system risk tier — higher risk needs stricter provenance, labeling, retention, access.

Often Confused With

Privacy & data protection • Data quality management • Model risk management

Common Mistakes

  • Treating governance as IT‑only and skipping business, legal or data‑owner approvals
  • Applying rules only to training data while ignoring test/validation/synthetic sets
  • Assuming pseudonymization or public availability removes legal/compliance checks

Explainability — Intrinsic vs Post‑hoc (Global vs Local)

Methods and controls to make AI behavior understandable; choose approach by model type, audience and decision risk.

Key Insight

Match method to risk/audience: prefer inherently interpretable models for high‑stakes; use validated local post‑hoc tools for individual decisions.

Often Confused With

Interpretability • Model validation • Feature attribution

Common Mistakes

  • Trusting post‑hoc explanations as faithful evidence of internal model reasoning
  • Interpreting feature importance scores as causal effects
  • Relying on one global explanation to justify every individual prediction
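
A widely used model‑agnostic, global post‑hoc method is permutation importance: shuffle one feature and measure the score drop. A sketch assuming any fitted estimator exposing a predict method:

  import numpy as np

  def permutation_importance(model, X: np.ndarray, y: np.ndarray, score_fn) -> np.ndarray:
      # Score drop when each feature is shuffled; bigger drop = more important
      base = score_fn(y, model.predict(X))
      rng = np.random.default_rng(0)
      drops = np.empty(X.shape[1])
      for j in range(X.shape[1]):
          Xp = X.copy()
          rng.shuffle(Xp[:, j])   # break the feature's relationship to the target
          drops[j] = base - score_fn(y, model.predict(Xp))
      return drops

  # drops = permutation_importance(fitted_model, X_val, y_val,
  #                                score_fn=lambda y, p: (y == p).mean())

Note that the resulting scores measure association with the model's output, not causal effect, which is exactly the mistake flagged above.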


Certification Overview

Duration: 180 min
Questions: 100
Passing: 60%
Level: Intermediate

Cheat Sheet Content

30 Key Concepts
5 Exam Domains
