Autonomous Systems
The Answer
Formal proof of algorithmic fairness.
Applies Z3-based formal verification to AI model decision logs to prove or disprove that protected attributes causally influence outcomes under defined constraints. Produces machine-verifiable fairness certificates for EU AI Act Article 10, Canada's AIDA, and related federal AI governance mandates.
§ SPECIFICATION
Input
- AI model decision log (input features + output decisions)
- Protected attribute definitions (PIPEDA / AIDA / EU AI Act)
- Matched cohort definition for causal analysis
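For concreteness, the three inputs above might be represented as follows. This is a sketch only; every field name here is hypothetical, not the engine's actual intake schema.

```python
# Hypothetical decision-log records: input features plus the model's decision.
decision_log = [
    {"race": "African-American", "priors": 2, "age": 31, "decision": "HIGH"},
    {"race": "Caucasian",        "priors": 2, "age": 31, "decision": "LOW"},
]

# Protected attribute definitions (per PIPEDA / AIDA / EU AI Act).
protected_attributes = ["race"]

# Matched cohort definition: records must agree on every non-protected feature.
cohort_features = ["priors", "age"]
```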
Constraints Verified
- Causal sufficiency — protected attribute is causally sufficient for decision difference
- Proxy independence — no legitimate feature mediates the causal path
- Matched cohort validity — identical non-protected features
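The matched-cohort constraint can be sketched in plain Python (the engine itself encodes it as a Z3 query; the function and field names here are hypothetical): find record pairs that agree on every non-protected feature yet differ in both the protected attribute and the decision. An empty result plays the role of an UNSAT certificate for that log.

```python
def matched_cohort_counterexamples(log, protected, features, decision="decision"):
    """Return pairs of records that are identical on all non-protected
    features but differ in the protected attribute and the decision.
    An empty list is the matched-cohort analogue of an UNSAT verdict."""
    hits = []
    for i, a in enumerate(log):
        for b in log[i + 1:]:
            same_cohort = all(a[f] == b[f] for f in features)
            if (same_cohort
                    and a[protected] != b[protected]
                    and a[decision] != b[decision]):
                hits.append((a, b))
    return hits
```

A pairwise scan is quadratic in the log size; the Z3 encoding sidesteps enumeration by asking the solver whether any such pair can exist under the stated constraints.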
Output
- Z3 causal proof or UNSAT certificate
- Causal pathway from attribute to decision
- Fairness certificate (machine-verifiable)
- AIDA / EU AI Act compliance report
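One way a machine-verifiable certificate can work is by binding the verdict to a hash of the decision log, so any party can re-check that the artifact matches the data it was issued for. The sketch below assumes this design; the structure and names are illustrative, not the engine's actual certificate format.

```python
import hashlib
import json

def issue_certificate(log, verdict, pathway=None):
    """Bind a verdict ('SAT' = bias proven, 'UNSAT' = fairness certified)
    to a SHA-256 digest of the canonicalized decision log."""
    digest = hashlib.sha256(
        json.dumps(log, sort_keys=True).encode()).hexdigest()
    return {
        "log_sha256": digest,
        "verdict": verdict,
        "causal_pathway": pathway or [],
    }

def verify_certificate(cert, log):
    """Recompute the digest and confirm the certificate covers this log."""
    digest = hashlib.sha256(
        json.dumps(log, sort_keys=True).encode()).hexdigest()
    return cert["log_sha256"] == digest
```

Because verification only recomputes a hash, the certificate can be checked without rerunning the solver.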
§ SAMPLE PROOF ARTIFACT
ARTIFACT // ANSWER-COMPAS-01 · FAILURE DETECTED
// SAMPLE PROOF — THE ANSWER ENGINE
COMPAS recidivism — racial bias formal proof
Target: Northpointe COMPAS v2.0
Condition: race = African-American → risk_score HIGH (n = 6172, controlled priors)
Verdict: SAT
Summary: Causal bias proven. Path: race → proxy features → HIGH risk. AIDA s.12 breach.
Status: Review-ready
§ FIELD VALIDATION
| # | Target | Vulnerability Class | Status |
|---|---|---|---|
| 01 | COMPAS (Northpointe) | Causal Bias | Certified |
Run The Answer on your system.
Formal engagement starts with a technical intake. We scope, configure, and deliver a proof artifact within the agreed SLA.