# Trusted Execution Environments (TEE)

This section explains how secure hardware guarantees that data was processed correctly and privately, and why this is a critical part of the system's security.

### **The Problem: Who Watches the Server?**

When a server processes your data, you're trusting the operator. You trust that they're running the right software, that they haven't modified it, and that they aren't peeking at your private data.

But servers get hacked. Insiders go rogue. Operators make mistakes. And in high-stakes financial applications, "just trust us" isn't good enough.

What if the hardware itself could guarantee integrity — independent of the operator?

### **What Is a TEE?**

A **Trusted Execution Environment** is a special, isolated area inside a processor. Code running inside a TEE is completely sealed off from everything else — including the server's operating system, other applications, and even the cloud provider's own infrastructure.

**Think of it like a bank vault with a window.** You can put documents into the vault, and the vault processes them according to pre-set rules. You can see the results that come out. But you cannot open the vault, peek inside, or change the rules after it's sealed. The vault also stamps every result with a tamper-evident seal proving it came from inside.

That's essentially what a TEE does, but with cryptographic guarantees instead of physical locks.

TEE technology has been adopted across industries that handle sensitive data — from banking and healthcare to government and defense. Major cloud providers offer TEE solutions, and the technology is considered mature and production-ready for high-security applications.

### **TEE Technologies in the Landscape**

Several TEE technologies exist today, each with different architectures but the same core promise: isolated, verifiable computation.

| Technology                              | Provider | Approach                                                   |
| --------------------------------------- | -------- | ---------------------------------------------------------- |
| Nitro Enclaves                          | AWS      | Dedicated security chips, separate VM isolation            |
| SGX (Software Guard Extensions)         | Intel    | CPU-level enclaves with hardware memory encryption         |
| SEV (Secure Encrypted Virtualization)   | AMD      | Encrypted virtual machines at the hypervisor level         |
| TrustZone                               | ARM      | Processor-level separation for mobile and embedded devices |
| CCA (Confidential Compute Architecture) | ARM      | Next-gen confidential computing for cloud workloads        |

While the implementations differ, they all share the same fundamental properties: code and data are isolated from the rest of the system, the hardware produces cryptographic proof of what ran, and no external party — including the infrastructure operator — can observe or tamper with the execution.

This system is designed to work with TEE technology broadly. The architecture is TEE-agnostic at its core — what matters is the attestation guarantee, not which specific hardware provides it.

### **How This System Uses TEEs**

Here's the flow of how data moves through the TEE:

<figure><img src="https://3912034821-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEPdvkoJHpBF3QkBeBWkM%2Fuploads%2FxNLfWRbn1Exb1iB60Rc6%2Fdoc11.png?alt=media&#x26;token=fe6406a7-a2c0-4522-8a90-43b596863d27" alt=""><figcaption></figcaption></figure>

#### **1. The Enclave Is Created**

When the system starts up, an enclave is created from a pre-built image — a package containing the exact code that will run. This image is **measured** (hashed) at creation time, producing a set of values that act as the code's fingerprint.

These measurement values are locked in. If anyone modifies even a single line of code, the measurements change. Think of it like a tamper-evident seal on a medicine bottle — you can tell immediately if it's been opened.

Different TEE platforms call these measurements different things (PCRs, MREnclave, launch digests, etc.), but the concept is universal: a cryptographic fingerprint of the code that the hardware will enforce.
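The fingerprint idea can be sketched in a few lines. This is a deliberately simplified illustration, not any platform's actual measurement scheme: real platforms (Nitro's PCRs, SGX's MRENCLAVE) measure the image in stages, but the principle is the same hash-based one shown here.

```python
import hashlib

def measure(image_bytes: bytes) -> str:
    """Return a hex fingerprint of an enclave image (illustrative)."""
    return hashlib.sha384(image_bytes).hexdigest()

original = measure(b"enclave code v1.0")
tampered = measure(b"enclave code v1.0 + backdoor")

# Even a one-byte change produces a completely different measurement.
assert original != tampered
```

Because the hardware records this value at launch and embeds it in every attestation, there is no way to swap in modified code after the fact without the mismatch being visible to verifiers.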

#### **2. Data Enters the Enclave**

Private financial data (reserves, liabilities) is sent into the enclave through a secure, direct channel. This channel connects the host system to the enclave without going through the public network. Data enters the sealed environment and is no longer accessible to the outside world.

The specifics of the communication channel vary by TEE platform, but the security property is the same: data enters the enclave through a controlled, isolated path, and once inside, it's protected by hardware isolation.

#### **3. All Processing Happens Inside**

Everything that matters happens inside the enclave:

* Salts are derived from the master secret
* Each value is committed (fingerprinted)
* The Merkle tree is built
* Totals are computed
* The attestation is generated
* ZK proofs are created

At no point does the raw data leave the enclave. The operator, the host server, and even the cloud provider itself cannot see what's being processed. This is enforced by hardware, not by policy — there is no "admin override" that can bypass the isolation.
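The in-enclave pipeline above can be sketched as follows. All function names, the salt-derivation scheme, and the commitment layout are illustrative assumptions, not the system's actual implementation; the point is that every step consumes only data already inside the enclave.

```python
import hashlib
import hmac

def derive_salt(master_secret: bytes, entry_id: str) -> bytes:
    # Per-entry salt derived from the master secret (never leaves the enclave).
    return hmac.new(master_secret, entry_id.encode(), hashlib.sha256).digest()

def commit(value: int, salt: bytes) -> bytes:
    # Salted commitment ("fingerprint") of a single value.
    return hashlib.sha256(salt + value.to_bytes(16, "big")).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Fold leaf commitments pairwise into a single root.
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

secret = b"master-secret-held-only-inside-the-enclave"
entries = {"acct-1": 500, "acct-2": 1250, "acct-3": 75}

leaves = [commit(v, derive_salt(secret, k)) for k, v in entries.items()]
root = merkle_root(leaves)
total = sum(entries.values())  # totals computed alongside the tree
```

The `root` and `total` are what eventually leave the enclave (inside the attestation); the secret, salts, and individual values never do.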

#### **4. The Hardware Produces an Attestation**

This is the key step. The enclave's hardware security module generates a signed **attestation document** — a cryptographic statement saying:

> "I am genuine hardware. I was running this specific code (here are the measurements). The computation produced this exact result (here is the data hash)."

This attestation is signed using a key that is embedded in the hardware itself. It cannot be extracted, copied, or used by software. The signature is proof that genuine, unmodified hardware produced this result.

Think of it like a notary stamp that's physically built into the vault. The stamp can't be removed, forged, or used outside the vault. When you see the stamp on a document, you know the vault produced it.
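A toy version of such a document might look like the sketch below. Every field name is illustrative, and an HMAC with a placeholder key stands in for the hardware's non-extractable signing key; real formats (for example, Nitro's CBOR/COSE-encoded documents) differ in encoding but carry the same three claims: identity, code measurements, and a hash binding the result.

```python
import hashlib
import hmac
import json

# Stand-in for the signing key fused into the hardware (not extractable in reality).
HARDWARE_KEY = b"placeholder-for-non-extractable-hardware-key"

def attest(measurements: dict, result: bytes) -> dict:
    """Build a toy attestation document over a computation result."""
    doc = {
        "module_id": "enclave-i-0123",  # illustrative hardware identity
        "measurements": measurements,    # the code fingerprints
        "user_data": hashlib.sha256(result).hexdigest(),  # binds the result
    }
    payload = json.dumps(doc, sort_keys=True).encode()
    doc["signature"] = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return doc

attestation = attest({"pcr0": "a1b2c3"}, result=b"merkle-root-and-totals")
```

A verifier who trusts the hardware key can recompute the signature and, if it checks out, trust every claim inside the document.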

#### **5. The Signature Chains to a Root of Trust**

The enclave's signature is backed by a **certificate chain** — a sequence of digital certificates where each one vouches for the next, leading all the way up to a trusted **root certificate authority** maintained by the hardware provider.

<figure><img src="https://3912034821-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEPdvkoJHpBF3QkBeBWkM%2Fuploads%2FuHHlkOR79C3MLTOHYull%2Fdoc12.png?alt=media&#x26;token=4326b1e9-62f8-4481-bd8b-867934dde317" alt=""><figcaption></figcaption></figure>

This chain works the same way HTTPS certificates work for websites. When you visit your bank's website, your browser verifies that the site's certificate chains to a trusted root authority. Here, the verifier checks that the attestation's certificate chains back to the hardware provider's root certificate.

The root certificate is public — the provider publishes it, and it's included in every proof payload. Anyone can verify the chain without needing special access or permission.
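Conceptually, chain verification means checking that each link was signed by the key above it, terminating at the published root. The sketch below models this with MACs standing in for the asymmetric X.509 signatures real chains use; structure and names are illustrative.

```python
import hashlib
import hmac

def sign(parent_key: bytes, subject: str, child_key: bytes) -> bytes:
    # Toy "signature": parent key vouches for (subject, child key).
    return hmac.new(parent_key, subject.encode() + child_key, hashlib.sha256).digest()

def verify_chain(chain: list, root_key: bytes) -> bool:
    """chain: list of (subject, key, signature), leaf first."""
    keys = [cert[1] for cert in chain] + [root_key]
    return all(
        hmac.compare_digest(sig, sign(keys[i + 1], subject, key))
        for i, (subject, key, sig) in enumerate(chain)
    )

root_key = b"published provider root key"
inter_key, leaf_key = b"intermediate key", b"enclave leaf key"

chain = [
    ("enclave", leaf_key, sign(inter_key, "enclave", leaf_key)),
    ("intermediate", inter_key, sign(root_key, "intermediate", inter_key)),
]
assert verify_chain(chain, root_key)

# A forged link anywhere breaks the whole chain.
bad_chain = [("enclave", leaf_key, b"\x00" * 32)] + chain[1:]
assert not verify_chain(bad_chain, root_key)
```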

### **What TEE Attestation Proves**

When you verify a TEE attestation from this system, you know five things for certain:

#### **1. Genuine Hardware**

The certificate chain traces to a recognized root certificate authority. This is not a software signature that could be faked by compromising a server. It originates from purpose-built security hardware that is manufactured and controlled by the provider.

An attacker who compromises a server — gaining full root access to the operating system — still cannot forge a valid attestation. The signing keys live in hardware that the operating system cannot reach.

#### **2. Correct Code**

The measurement values in the attestation are fingerprints of the code that ran. By comparing them to the known-good values published at deployment time, you can confirm the enclave ran exactly the right software — not a modified version with backdoors, data exfiltration, or altered logic.

This is a powerful guarantee. Even if a malicious actor gains control of the deployment infrastructure, any code changes would produce different measurements, and the attestation would fail verification against the published values.

#### **3. Untampered Results**

The attestation includes a hash of the computation's output (the Merkle root, totals, and metadata). This hash is signed by the hardware. If anyone modified the results after the enclave computed them, the hash wouldn't match.

This prevents a "man in the middle" attack where someone intercepts the enclave's output and replaces it with different data before publishing. The hardware-signed hash makes any such modification detectable.

#### **4. Isolation During Processing**

The TEE architecture guarantees that during processing, nothing outside the enclave can access the data inside. Not the operating system, not the hypervisor, not other processes on the same machine, and not the cloud provider's infrastructure.

This means sensitive data — individual reserve positions, custodian breakdowns, the master secret — is protected by hardware barriers, not just software permissions. There's no admin account that can bypass the isolation.

#### **5. Data Binding**

Here's what makes this system's approach particularly strong: the Merkle root isn't signed separately from the attestation. It goes directly into the attestation's data field. When the hardware signature verifies, you've proven the hardware itself computed that exact root.

Many systems compute data in one place and then sign it somewhere else. That creates a gap — you trust the signer, but can't prove the computation was correct. Here, computation and attestation happen in the same sealed environment. They're cryptographically bound together.

This binding is the difference between "someone signed off on these numbers" and "these numbers were computed inside verified, tamper-proof hardware." The latter is a fundamentally stronger guarantee.
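The verifier's side of this binding is a single comparison: hash the proof's published data and check it against the value the hardware signed. The field name `user_data` and the payload layout below are illustrative assumptions.

```python
import hashlib

def binding_holds(proof_data: bytes, attested_user_data: str) -> bool:
    # The attestation's signed hash must match a fresh hash of the proof data.
    return hashlib.sha256(proof_data).hexdigest() == attested_user_data

data = b"merkle_root=abc123|total=1825"
signed_hash = hashlib.sha256(data).hexdigest()  # arrived inside the attestation

assert binding_holds(data, signed_hash)
assert not binding_holds(b"merkle_root=forged|total=9999", signed_hash)
```

Because `signed_hash` is covered by the hardware signature, substituting different data after the fact is always detectable.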

### **What the Enclave Cannot Do**

The enclave's isolation works both ways. The enclave:

* **Cannot access the internet** — no outbound network connections
* **Cannot access the disk** — no reading or writing to the server's storage
* **Cannot talk to other processes** — only communicates through its dedicated secure channel
* **Cannot persist data across restarts** — if the enclave restarts, its memory is wiped clean

This extreme isolation is a feature, not a limitation. It means there's no way for data to leak out through side channels, and no way for external interference to affect the computation.

It also means the enclave can't be used as a general-purpose server. It does one thing — process data, produce proofs — and it does it in complete isolation. This limited attack surface is a security advantage.

### **Code Measurements: The Software Fingerprint**

When the enclave image is built, the hardware computes measurements of the code — cryptographic fingerprints that capture exactly what software is loaded. These measurements typically cover:

| What's Measured       | What It Captures                                             |
| --------------------- | ------------------------------------------------------------ |
| The application image | The complete enclave package — all the code and dependencies |
| The system layer      | The kernel, boot configuration, and runtime environment      |
| The application layer | The specific application logic within the enclave            |

These measurements are included in every attestation. They serve as an unforgeable fingerprint of exactly what code ran.

**How verifiers use these measurements:** When a new version of the enclave is deployed, the expected measurement values are published. Verifiers compare the values in the attestation against the published values. If they match, the enclave ran the expected code. If they don't, something was changed — and the attestation should not be trusted.

This is like checking the serial number on a sealed product. The manufacturer publishes the expected serial number. If it doesn't match, the product may have been tampered with. Except here, the "serial number" is a cryptographic hash that covers every byte of the code — it's impossible to make a meaningful change without the hash changing.
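The verifier-side comparison is a strict equality check against the published values. The register names (`pcr0`, etc.) and values here are placeholders; real platforms name these fields PCR0/PCR1/PCR2, MRENCLAVE, and so on.

```python
# Expected measurements published at deployment time (illustrative values).
PUBLISHED = {"pcr0": "a1b2", "pcr1": "c3d4", "pcr2": "e5f6"}

def measurements_match(attested: dict) -> bool:
    # Every published register must be present and identical.
    return all(attested.get(name) == value for name, value in PUBLISHED.items())

assert measurements_match({"pcr0": "a1b2", "pcr1": "c3d4", "pcr2": "e5f6"})
assert not measurements_match({"pcr0": "TAMPERED", "pcr1": "c3d4", "pcr2": "e5f6"})
```

Any mismatch, in any register, is grounds to reject the attestation outright.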

### **TEE + ZK: Why Both?**

You might wonder: if the hardware guarantees the computation, why also include zero-knowledge proofs?

The answer is **defense in depth**. No single security mechanism is perfect:

* **TEE alone** means you're trusting hardware. If a theoretical vulnerability in the hardware were ever discovered, the guarantees would break. History has shown that hardware vulnerabilities do get found — Spectre, Meltdown, and various SGX attacks have demonstrated this.
* **ZK alone** means you're trusting mathematics and the proof setup process. If the setup were ever compromised, false proofs could be created. While the mathematics is sound, the implementation and setup ceremony introduce practical trust assumptions.
* **Both together** means an attacker would need to simultaneously break hardware security AND mathematical proofs — two completely independent systems with different attack surfaces. The probability of both failing at the same time is astronomically lower than either failing alone.

This layered approach is the same principle used in critical infrastructure: defense doesn't rely on a single barrier, no matter how strong that barrier is. Banks have vaults, cameras, guards, alarms, and insurance — not because any one of those is insufficient, but because the combination is far stronger than any individual measure.

### **The Broader Confidential Computing Movement**

TEEs are part of a larger trend called **Confidential Computing** — an industry movement to protect data not just at rest (stored encrypted) and in transit (encrypted during transfer), but also **in use** (encrypted during processing).

The Confidential Computing Consortium, backed by major technology companies, is driving standardization and adoption of these technologies. This system aligns with that movement, using TEE attestation as a foundational building block for verifiable, privacy-preserving computation.

As TEE technology evolves — with new platforms, stronger isolation models, and broader hardware support — this system's architecture is designed to adopt improvements without changing the fundamental security model. The attestation guarantee is the constant; the specific hardware is the variable.

### **In Practice**

From a user's or verifier's perspective, TEE attestation is straightforward:

1. Receive a proof payload
2. Extract the attestation document
3. Verify the certificate chain leads to the provider's root certificate
4. Check the code measurements match the expected version
5. Confirm the data hash in the attestation matches the proof's data

If all checks pass, you have hardware-grade assurance that the computation was genuine. The verification process is documented in [Verification](https://docs.afiprotocol.xyz/proof-of-reserve-network/verification).
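The five steps above can be sketched end to end. Everything here is an illustrative simplification: the payload shape, field names, and the string-equality "chain check" stand in for the real certificate and signature verification described earlier.

```python
import hashlib

# Expected code fingerprint published at deployment time (illustrative).
PUBLISHED_MEASUREMENTS = {"pcr0": "expected-fingerprint"}

def verify(payload: dict, root_ca: str) -> bool:
    att = payload["attestation"]                              # step 2: extract
    chain_ok = att["cert_chain"][-1] == root_ca               # step 3 (simplified)
    code_ok = att["measurements"] == PUBLISHED_MEASUREMENTS   # step 4: code check
    data_ok = (hashlib.sha256(payload["data"].encode()).hexdigest()
               == att["user_data"])                           # step 5: data binding
    return chain_ok and code_ok and data_ok

payload = {                                                   # step 1: received
    "data": "merkle_root=abc|total=1825",
    "attestation": {
        "cert_chain": ["leaf", "intermediate", "provider-root"],
        "measurements": {"pcr0": "expected-fingerprint"},
        "user_data": hashlib.sha256(b"merkle_root=abc|total=1825").hexdigest(),
    },
}
assert verify(payload, root_ca="provider-root")
```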
