# The verification spectrum
Every security claim about a TEE deployment reduces to one question: who can independently verify this?
Not "does it work?" Not "is it signed?" Not "does the vendor say so?" — who can independently verify it? The answer determines where you sit on the verification spectrum, and what your attestation is actually worth.
The spectrum runs from weakest to strongest. Each rung adds something verifiable. Each also reveals what the rung below left unverified.
## Rung 1 — Signed builds
Someone compiled the software and signed the artifact. You verify the signature and trust the signing key.
What it proves: The artifact hasn't been tampered with since it was signed.
What it doesn't prove: Anything about what was compiled, how, or from what source. The signer is a trust anchor. If the signer is compromised, or their build environment was compromised, the signature is meaningless. You're trusting a person or organization, not a process.
Who can independently verify this? Anyone with the public key — but only that the binary matches the signature. Not that the binary is what the source says it is.
## Rung 2 — Reproducible application binaries
The application is built deterministically: given the same source and build environment, anyone can produce an identical binary. The hash is published. Anyone can rebuild and compare.
What it proves: The binary corresponds to the published source. No undisclosed modifications, no hidden backdoors inserted at build time.
What it doesn't prove: Anything about the OS, runtime, or firmware the application runs on. The application binary is clean; the substrate it sits on is still opaque.
Who can independently verify this? Anyone who can run the build. This is a meaningful bar: without reproducibility, the open-source promise stops at the source; with it, the promise extends to the binary you actually run.
Reproducible builds prove the application. They say nothing about the environment. A clean binary running on compromised firmware is still compromised.
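The rebuild-and-compare step is mechanical. A minimal sketch in Python, assuming the deterministic rebuild has already run and the publisher's hash is known (both the artifact path and the published digest here are hypothetical):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hash an artifact in chunks so large binaries aren't loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    """A reproducible build passes only if the local rebuild is bit-identical."""
    return sha256_file(path) == published_hex.lower()
```

A mismatch doesn't tell you which side diverged; the next step is diffing build inputs (toolchain versions, timestamps, locale) until the builds converge.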
## Rung 3 — Reproducible OS + firmware
The entire stack — kernel, init system, and UEFI firmware — is built from source with deterministic outputs. Expected measurements are computed from source before deployment and compared against what the TEE reports at boot.
What it proves: The full software stack, from firmware to application, corresponds to audited, published source. No component was silently swapped.
What it doesn't prove: Anything below the firmware. The CPU microcode, the hardware itself, and the TEE implementation are still trusted without independent verification.
Who can independently verify this? Anyone who can run the builds. In practice, almost no one does this end-to-end on cloud infrastructure today — the cloud provider controls the firmware, and most don't publish reproducible builds of it.
AWS is the closest: their UEFI firmware is open-source with Nix reproducible builds, and sev-snp-measure can compute expected launch measurements. But they still modify the guest OS image with a closed-source tool before launch.
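Conceptually, "computing expected measurements from source" means building a hash chain over the stack's components in load order. This is a simplified model: real TDX MRTD and SEV-SNP launch digests are SHA-384 over vendor-specified structures, which tools like sev-snp-measure implement exactly. The sketch shows the property that matters, that the result is deterministic and order-sensitive:

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """TPM/TEE-style extend: new = H(old || H(component)).
    Order matters, so swapping firmware and kernel changes the result."""
    return hashlib.sha384(measurement + hashlib.sha384(component).digest()).digest()

def expected_measurement(components: list[bytes]) -> bytes:
    """Fold every component into one digest, starting from a zeroed register."""
    m = b"\x00" * 48  # initial measurement-register state
    for c in components:
        m = extend(m, c)
    return m
```

Because the chain is deterministic, anyone holding the same component binaries computes the same expected value offline, before ever contacting the TEE.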
## Rung 4 — Hardware attestation (TEEs)
The CPU itself generates a signed measurement of what booted. The signature traces to the silicon vendor's root CA (Intel, AMD). A verifier checks the signature and the measurements.
What it proves: That specific measurements were observed by genuine TEE hardware at boot time, and that the hardware is authentic.
What it doesn't prove: That those measurements correspond to anything you've reviewed. A valid TDX quote for an unknown binary is still a valid TDX quote. The attestation doesn't know whether the measurement is good or bad — you do, if you've registered expected values.
Who can independently verify this? Anyone with access to Intel's or AMD's public certificate chain. The hardware root of trust is the one thing you genuinely don't have to trust a vendor's word on — you can verify the signature chain yourself.
Hardware attestation proves authenticity of the measurement, not correctness of the software. "The CPU signed this quote" and "this is the right software" are two different claims. You need both.
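The "two different claims" point can be made concrete: a verifier must check hardware authenticity and measurement correctness as separate predicates, and rejecting for the right reason matters. A sketch, where `chain_ok` stands in for a real certificate-chain check against Intel's or AMD's root CA (that check itself is out of scope here):

```python
def verify_attestation(quote_measurement: bytes,
                       chain_ok: bool,
                       expected: set[bytes]) -> str:
    # Claim 1: genuine TEE hardware signed this quote.
    if not chain_ok:
        return "reject: quote does not chain to vendor root"
    # Claim 2: the measured software is one we rebuilt and reviewed.
    if quote_measurement not in expected:
        return "reject: genuine hardware, unknown software"
    return "accept"
```

The second reject branch is the one rung 4 alone cannot populate: `expected` only exists if you registered values computed at rungs 2–3.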
## Rung 5 — Reproducible builds + TEE attestation
The strongest practical guarantee: the full software stack is reproducible from source, expected measurements are computed before deployment, and a TEE quote proves those exact measurements ran on genuine hardware.
A remote party can:
- Rebuild the entire stack from source
- Compute the expected MRTD / MEASUREMENT / PCR values
- Obtain a fresh attestation quote
- Confirm the quote was signed by real hardware
- Confirm the measurements match what they built
This closes the loop. No single party's word is required at any step.
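The five steps above compose into one loop. A sketch with each step left as an injectable callable, since the real implementations (a Nix rebuild, sev-snp-measure, a vendor cert-chain verifier) vary by platform; all names here are illustrative:

```python
from typing import Callable

def independently_verify(
    rebuild: Callable[[], list[bytes]],                # 1. rebuild stack from source
    compute_expected: Callable[[list[bytes]], bytes],  # 2. expected measurement
    fetch_quote: Callable[[], tuple[bytes, bytes]],    # 3. (measurement, signature)
    chain_verifies: Callable[[bytes], bool],           # 4. vendor cert-chain check
) -> bool:
    components = rebuild()
    expected = compute_expected(components)
    measurement, signature = fetch_quote()
    if not chain_verifies(signature):
        return False                 # step 4 failed: not genuine hardware
    return measurement == expected   # step 5: the software we built, nothing else
```

No callable in the loop takes a vendor's published "expected value" as input; every comparison operand is either computed locally or signed by silicon.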
What it doesn't prove: Anything about the silicon itself. You still trust that Intel's TDX implementation is correct, that the CPU hasn't been physically tampered with, and that the hardware vendor hasn't introduced silicon-level backdoors. This is the residual trust that hardware-rooted security genuinely cannot eliminate.
Who can independently verify this? Anyone. That's what makes it the gold standard.
## Where most deployments actually sit
| Rung | Achievable today on cloud? |
|---|---|
| Signed builds | Yes, universally |
| Reproducible app binaries | Yes, with effort (Nix, reproducible-builds.org tooling) |
| Reproducible OS + firmware | Partially — AWS firmware only; GCP/Azure firmware not reproducible |
| Hardware attestation | Yes, on all major TEE platforms |
| Reproducible builds + TEE attestation | Partially — AWS firmware + app layer; full stack not achievable on GCP or Azure |
Most production deployments combine rungs 2, 4, and partial 3: reproducible application builds, hardware attestation, and golden measurements for the firmware layer (trusting the provider's signed endorsement rather than independent reproduction).
That's a reasonable position. The point is to know it's a choice, not an accident.
For each component in your stack, ask: if this binary were replaced with a malicious one, would my attestation catch it? Work backwards from the answer to find the gaps in your verification.
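That question can be turned into a mechanical audit: walk the stack and flag every component whose replacement would not change any measurement you actually check. Component names and coverage flags below are illustrative, matching the typical deployment described above:

```python
def verification_gaps(stack: dict[str, bool]) -> list[str]:
    """stack maps component name -> 'do I hold a registered expected
    measurement that would change if this component were swapped?'.
    Anything False is a gap: a malicious replacement would still attest."""
    return [name for name, measured in stack.items() if not measured]

stack = {
    "application":   True,   # reproducible build, hash registered
    "kernel":        True,   # measured at boot, expected value registered
    "uefi_firmware": False,  # trusting the provider's endorsement only
    "cpu_microcode": False,  # below the attestation boundary
}

print(verification_gaps(stack))  # -> ['uefi_firmware', 'cpu_microcode']
```

The output is your residual trust, written down. Shrinking it means moving components up the rungs; accepting it means knowing exactly whose word you're taking.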