# What are you protecting, and from whom?
Imagine you could compile everything from source: the OVMF firmware, the Linux kernel, the init system, your application. You compile it twice on independent machines. You get identical bytes both times. You boot it inside a TDX Trust Domain, request an attestation quote, and the MRTD in the quote matches the hash you computed locally from source.
No one told you to trust anything. You verified it yourself.
This is the gold standard of confidential computing: deterministic, reproducible, independently verifiable from firmware to application. Any remote party can rebuild your entire stack, compare hashes against the attestation measurements, and know with hardware-rooted certainty that what's running is exactly what was published.
It is largely impossible on any major cloud provider today. But it is the right place to start — because it makes explicit exactly what you give up as you move toward the deployable.
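The ideal reduces to a simple check: two independent builds must produce byte-identical artifacts before their hash means anything. A minimal sketch of that check, assuming two local build-output paths (the paths and function names here are illustrative, not part of any real toolchain):

```python
import hashlib
from pathlib import Path

def artifact_hash(path: Path) -> str:
    """SHA-384 of a build artifact (TDX measurement registers use SHA-384)."""
    return hashlib.sha384(path.read_bytes()).hexdigest()

def builds_reproducible(build_a: Path, build_b: Path) -> bool:
    """Two independent builds only count as verification if they are
    byte-identical; otherwise neither hash is a trustworthy reference."""
    return artifact_hash(build_a) == artifact_hash(build_b)
```

Only once this returns true for builds from independent machines does comparing the hash against an attestation measurement prove anything about the source.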
## The trust cursor
Between "compile everything yourself" and "trust the cloud provider completely" lies a spectrum. Where you place what we'll call the trust cursor determines what a remote party can independently verify about your system, what attack surface you're silently accepting, and what "attestation" actually proves in your specific setup.
| Position | What you verify | What you trust |
|---|---|---|
| Absolute | Everything: firmware, OS, app — reproduced from source | Nothing except your own build toolchain |
| Pragmatic ideal | App layer via reproducible builds + SLSA; firmware via vendor-signed golden measurements | Cloud provider's firmware build pipeline |
| Middle ground | App layer only; everything below via vendor endorsements | Cloud provider firmware + build pipeline |
| Baseline | SLSA provenance on app; quote signature validity | Almost everything below your app |
Most production deployments land at pragmatic ideal or middle ground. Neither is wrong — but you must know which one you're at, and why. If you can't answer that question, you don't yet know what your attestation is actually proving.
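One way to force the "which position, and why" answer is to write the cursor position down as an explicit policy object. The sketch below is not a real API; the layer names and position mirror the table above, and it exists only to make the unverified remainder impossible to ignore:

```python
from dataclasses import dataclass

# Stack layers from the table above, bottom to top.
LAYERS = ["firmware", "kernel", "init", "application"]

@dataclass(frozen=True)
class CursorPosition:
    name: str
    verified_from_source: frozenset  # layers you can rebuild and hash yourself
    trusted_parties: tuple           # who you take on faith for everything else

# "Pragmatic ideal" from the table: app layer reproduced from source,
# firmware trusted via the provider's build pipeline.
PRAGMATIC_IDEAL = CursorPosition(
    name="pragmatic ideal",
    verified_from_source=frozenset({"kernel", "init", "application"}),
    trusted_parties=("cloud provider's firmware build pipeline",),
)

def unverified_layers(pos: CursorPosition) -> list:
    """The attack surface silently accepted at this cursor position."""
    return [layer for layer in LAYERS if layer not in pos.verified_from_source]
```

At the pragmatic ideal, `unverified_layers` returns only the firmware layer; sliding the cursor toward baseline grows that list, and every entry needs a named trusted party.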
## Three questions before touching any tooling
### 1. What code runs inside the TEE?
List every binary: firmware, kernel, init system, your application. Each one is attack surface. Each one needs a story — either "I can verify this from source" or "I trust the party that built it, and here's why."
### 2. Who operates the infrastructure?
A cloud provider, a bare-metal host, your own hardware? The operator's position in the trust model changes entirely depending on the answer. A compromised cloud provider cannot read your TDX memory — but they can shut you down, starve you of resources, or substitute firmware between deployments. They are not your adversary in the typical deployment. But they are in your trust model, whether you name them or not.
### 3. Who verifies the attestation?
If no one checks the measurements in the quote against known-good values, attestation is a signing ceremony with no audience. Verification requires knowing the expected values before you start. If you don't have a reference set of measurements, you have quotes but no verification.
"We use TDX" does not answer any of these questions. Hardware attestation proves that some code ran on genuine TDX hardware. It says nothing about which code, unless you pin the measurements and check them.
## The reproducibility gap
Here is the gap that catches teams unprepared:
You can receive a valid TDX quote — cryptographically correct, signed by Intel's root CA, structurally sound — for a VM running firmware you cannot audit and did not build. The quote proves a measurement. It does not prove that measurement corresponds to code you've reviewed.
- GCP uses a custom OVMF build and has stated it will not be open-sourced. You can verify that a binary hash matches a measurement via Google's signed endorsements, but you cannot audit the source that produced the binary.
- Azure TDX runs on OpenHCL (open-source and readable), but the production build pipeline is not reproducible — you can read the code, you cannot prove the deployed binary was built from it.
- AWS has the best firmware story: open-source OVMF with Nix reproducible builds. But they modify the guest OS image with a closed-source tool before launch, and that modification is not captured in the launch measurement.
On every major cloud provider, there is at least one link in the chain between source and attestation that an external auditor cannot independently close. Know which link it is for your platform before you design your verification strategy around it.
## Your first practical exercise
Before writing a single line of attestation code, fill in this table for your deployment:
| Layer | Who controls it | Verifiable from source? | If not, trusted via |
|---|---|---|---|
| CPU / TDX module | Intel | No | Intel's hardware guarantee |
| UEFI / OVMF firmware | Cloud provider | Platform-dependent | Vendor-signed golden measurements |
| Paravisor / hypervisor | Cloud provider | Azure: partially (OpenHCL) | Provider's build pipeline |
| OS kernel | You | Yes, with reproducible builds | Your build system |
| Application | You | Yes | Your build system + SLSA provenance |
Every cell in the last column is a risk you're accepting. Name them. Decide whether they're acceptable for your threat model. Then choose your cursor position deliberately.
If you can't fill in this table for your deployment, do that before evaluating any attestation tooling. The tooling answers "how do I generate and verify quotes." This table answers "what am I actually verifying and what am I not."
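The exercise can also be made machine-checkable. A sketch, with the table above encoded as tuples (the row values are examples, not a complete or authoritative fill-in for any platform): every layer must either be verifiable from source or name the party trusted instead, and a row with neither is a gap in your trust model rather than a tooling problem.

```python
# (layer, controlled_by, verifiable_from_source, trusted_via)
TRUST_TABLE = [
    ("CPU / TDX module",     "Intel",          False, "Intel's hardware guarantee"),
    ("UEFI / OVMF firmware", "cloud provider", False, "vendor-signed golden measurements"),
    ("OS kernel",            "you",            True,  None),
    ("application",          "you",            True,  None),
]

def unnamed_risks(table):
    """Rows that are neither verifiable from source nor explicitly
    trusted via a named party -- the cells you have not yet accounted for."""
    return [layer for (layer, _owner, verifiable, trusted_via) in table
            if not verifiable and trusted_via is None]
```

If `unnamed_risks` returns anything, you are not ready to evaluate attestation tooling: you have an unaccounted-for layer, not a missing library.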
The rest of this handbook is about moving up that table — closer to verifiable — in ways that are practical given the platforms and constraints you're working with.