Remote ATtestation procedureS (RATS) Architecture

H. Birkholz, Fraunhofer SIT (henk.birkholz@sit.fraunhofer.de)
D. Thaler, Microsoft (dthaler@microsoft.com)
M. Richardson, Sandelman Software Works (mcr+ietf@sandelman.ca)
N. Smith, Intel Corporation (ned.smith@intel.com)
W. Pan, Huawei Technologies (william.panwei@huawei.com)
Area: sec
Workgroup: rats

Abstract

In network protocol exchanges, it is often useful for one end of a
communication to know whether the other end is in an intended operating state.
This document provides an architectural overview of the entities involved
that make such tests possible through the process of generating,
conveying, and evaluating evidentiary Claims. It provides a model that is neutral toward
processor architectures, the content of Claims, and protocols.

Introduction

The question of how one system can know that another system can be trusted has found new interest and relevance in a world where trusted computing elements are maturing in processor architectures.

Systems that have been attested and verified to be in a good state
(for some value of "good") can improve overall system posture.
Conversely, systems that cannot be attested and verified to be in a
good state can be given reduced access or privileges, taken out of
service, or otherwise flagged for repair. For example:
A bank backend system might refuse to transact with another system
that is not known to be in a good state.
A healthcare system might refuse to transmit electronic healthcare
records to a system that is not known to be in a good state.
In Remote ATtestation procedureS (RATS), one peer (the "Attester")
produces believable information about itself ("Evidence") to enable
a remote peer (the "Relying Party") to decide whether or not to consider that
Attester a trustworthy peer.
Remote attestation procedures are facilitated by an additional vital party (the "Verifier").The Verifier appraises Evidence via appraisal policies and creates
the Attestation Results to support Relying Parties in their decision
process.
This document defines a flexible architecture consisting of attestation roles
and their interactions via conceptual messages. Additionally, this document defines a universal set of terms that can be mapped to various existing and emerging remote attestation procedures.
Common topological patterns and the sequence of data flows associated with them, such as
the "Passport Model" and the "Background-Check Model", are illustrated.
The purpose is to define useful terminology for remote attestation and enable readers to map
their solution architecture to the canonical attestation architecture provided here.
Having a common terminology that provides well-understood meanings for common themes,
such as roles, device composition, topological patterns, and appraisal procedures, is vital for
semantic interoperability across solutions and platforms involving multiple vendors and providers.

Amongst other things, this document is about trust and trustworthiness.
Trust is a choice one makes about another system. Trustworthiness is a
quality about the other system that can be used in making one's decision to
trust it or not. This is a subtle difference; being familiar with the
difference is crucial for using this document. Additionally, the concepts of freshness and trust relationships are specified to enable implementers to choose appropriate solutions
to compose their remote attestation procedures.

Reference Use Cases

This section covers a number of representative and generic use cases
for remote attestation, independent of specific solutions. The
purpose is to provide motivation for various aspects of the architecture
presented in this document. Many other use cases exist; this document
does not contain a complete list. It only
illustrates a set of use cases that collectively cover all the
functionality required in the architecture.Each use case includes a description followed by an additional summary of the
Attester and Relying Party roles derived from the use case.

Network Endpoint Assessment

Network operators want trustworthy reports that include identity
and version information about the hardware and software on the
machines attached to their network.
Examples of reports include purposes (such as inventory summaries), audit
results, and anomaly notifications (which typically include the maintenance of log
records or trend reports).
The network operator may also want a policy by which full access is only
granted to devices that meet some definition of hygiene, and so wants to get
Claims about such information and verify its validity. Remote attestation is
desired to prevent vulnerable or compromised devices from getting access to
the network and potentially harming others.Typically, a solution starts with a specific component (sometimes referred to as a "root of trust") that often
provides a trustworthy device identity and performs a series of operations that enables trustworthiness appraisals for other components.
Such components perform operations that help determine the trustworthiness of yet other components
by collecting, protecting, or signing measurements.
Measurements that have been signed by such components comprise Evidence that either supports or refutes a claim of trustworthiness when evaluated.
Measurements can describe a variety of attributes of system components, such as hardware, firmware, BIOS, software, etc., and how they are hardened.
Attester:
A device desiring access to a network.
Relying Party:
Network equipment (such as a router, switch, or access point) that is
responsible for admission of the device into the network.
Confidential Machine Learning Model Protection

A device manufacturer wants to protect its intellectual property.
The intellectual property's scope primarily encompasses the machine learning (ML) model that is deployed in the devices purchased by its customers.
The protection goals include preventing attackers, potentially
the customer themselves, from seeing the details of the model.Typically, this works by having some protected environment
in the device go through a remote attestation with some manufacturer service
that can assess its trustworthiness. If remote attestation succeeds,
then the manufacturer service releases either the model or a key to decrypt
a model already deployed on the Attester in encrypted form to the requester.
Attester:
A device desiring to run an ML model.
Relying Party:
A server or service holding ML models it desires to protect.
Confidential Data Protection

This is a generalization of the ML model use case above where
the data can be any highly confidential data, such as health data
about customers, payroll data about employees, future business plans, etc.
As part of the attestation procedure, an assessment is made against a set
of policies to evaluate the state of the system that is requesting
the confidential data. Attestation is desired to prevent leaking data via
compromised devices.
Attester:
An entity desiring to retrieve confidential data.
Relying Party:
An entity that holds confidential data for release to authorized entities.
Critical Infrastructure Control

Potentially harmful physical equipment (e.g., power grid, traffic
control, hazardous chemical processing, etc.) is connected to a
network in support of critical infrastructure. The organization
managing such infrastructure needs to ensure that only authorized code
and users can control corresponding critical processes, and that these
processes are protected from unauthorized manipulation or other
threats.
When a protocol operation can affect a critical system component of the
infrastructure, devices attached to that critical component require some
assurances depending on the security context, including assurances that a requesting
device or application has not been compromised and the requesters and actors
act on applicable policies. As such, remote attestation can be used to only
accept commands from requesters that are within policy.
Attester:
A device or application wishing to control physical equipment.
Relying Party:
A device or application connected to potentially dangerous physical
equipment (hazardous chemical processing, traffic control, power grid,
etc.).
Trusted Execution Environment Provisioning

A Trusted Application Manager (TAM) server is responsible
for managing the applications running in a Trusted Execution Environment (TEE) of a client device, as described in .
To achieve its purpose, the TAM needs to assess the state of a TEE or applications
in the TEE of a client device. The TEE conducts remote attestation
procedures with the TAM, which can
then decide whether the TEE is already in compliance with the TAM's latest
policy. If not, the TAM has to uninstall, update, or install approved
applications in the TEE to bring it back into compliance with the TAM's policy.
Attester:
A device with a TEE capable of
running trusted applications that can be updated.
Relying Party:
A TAM.
Hardware Watchdog

There is a class of malware that holds a device hostage and does
not allow it to reboot to prevent updates from being applied.
This can be a significant problem because it allows a fleet of devices to be held hostage for ransom.A solution to this problem is a watchdog timer implemented in a protected
environment, such as a Trusted Platform Module (TPM),
as described in Section 43.3 of .
If the watchdog does not receive regular and fresh Attestation Results regarding the system's health, then it forces a reboot.
Attester:
The device that should be protected from being held hostage for
a long period of time.
Relying Party:
A watchdog capable of triggering a procedure that resets a device into
a known, good operational state.
FIDO Biometric Authentication

In the Fast IDentity Online (FIDO) protocol , the device in the user's hand authenticates the
human user, whether by biometrics (such as fingerprints) or by PIN and
password. FIDO authentication puts a large amount of trust in the device
compared to typical password authentication because it is the device that
verifies the biometric, PIN, and password inputs from the user, not the
server. For the Relying Party to know that the authentication is
trustworthy, the Relying Party needs to know that the Authenticator part
of the device is trustworthy. The FIDO protocol employs remote
attestation for this.The FIDO protocol supports several remote attestation protocols and a mechanism by which new ones can be registered and added; thus, remote attestation defined by the RATS architecture is a candidate for use in the FIDO protocol.
Attester:
FIDO Authenticator.
Relying Party:
Any website, mobile application backend, or service that relies on authentication data based on biometric information.
Architectural Overview

The figure below depicts the data that flows between different roles, independent of protocol or use case.

Figure: Conceptual Data Flow. The Attester conveys Evidence to the Verifier, and the Verifier conveys Attestation Results to the Relying Party; the Verifier also receives Endorsements, Reference Values, and an Appraisal Policy for Evidence, while the Relying Party receives an Appraisal Policy for Attestation Results.

The text below summarizes the activities conducted by the roles illustrated in the figure above.
Roles are assigned to entities. Entities are often system components , such as devices. As the term "device" is typically more intuitive than the term "entity" or "system component", device is often used as an illustrative synonym throughout this document.The Attester role is assigned to entities that create Evidence that is conveyed to a Verifier.The Verifier role is assigned to entities that use the Evidence, any Reference Values from Reference Value Providers, and any Endorsements from Endorsers
by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester.
This procedure is called the "appraisal of Evidence".Subsequently, the Verifier role generates Attestation Results for use by Relying Parties.The Appraisal Policy for Evidence might be obtained from the Verifier Owner via some protocol mechanism, configured into the Verifier by the Verifier Owner, programmed into the Verifier,
or obtained via some other mechanism.The Relying Party role is assigned to an entity that uses Attestation Results by applying its own
appraisal policy to make application-specific decisions, such as authorization decisions.
This procedure is called the "appraisal of Attestation Results".The Appraisal Policy for Attestation Results might be obtained from the Relying Party Owner via some protocol mechanism, configured into the Relying Party by the Relying Party Owner, programmed into the Relying Party, or obtained via some other mechanism.See for further discussion of the conceptual messages shown in . provides a more complete definition of all RATS roles.Two Types of Environments of an AttesterAs shown in , an Attester consists of at least one Attesting Environment and at least one
Target Environment co-located in one entity.
In some implementations, the Attesting and Target Environments might be combined into one environment.
Other implementations might have multiple Attesting and Target Environments,
such as in the examples described in more detail in Sections
and . Other examples may exist. All compositions of Attesting and Target Environments discussed in this architecture can be combined into more complex implementations.

Figure: Two Types of Environments within an Attester. An Attester contains one or more Attesting Environments that collect Claims from one or more co-located Target Environments.

Claims are collected from Target Environments.
That is, Attesting Environments collect the values and the information to be represented in Claims by reading system registers and variables, calling into subsystems, and taking measurements on code, memory, or other relevant assets of the Target Environment.
Attesting Environments then format the Claims appropriately; typically, they
use key material and
cryptographic functions, such as signing or cipher algorithms, to
generate Evidence.
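
As a hedged illustration of this flow (none of the names, structures, or keys below are defined by the RATS architecture; a real Attesting Environment would use hardware-protected, typically asymmetric keys and an interoperable serialization), a minimal Python sketch of Claims collection and Evidence generation might look like this:

    import hashlib
    import hmac
    import json
    import time

    # Hypothetical key material provisioned to the Attesting Environment
    # (e.g., by a manufacturer); a constant is used here only to keep the sketch runnable.
    ATTESTING_ENV_KEY = b"example-attestation-key"

    def collect_claims(target_env: dict) -> dict:
        """Read values of the Target Environment and express them as Claims."""
        return {
            "firmware_version": target_env["firmware_version"],
            "kernel_digest": hashlib.sha256(target_env["kernel_image"]).hexdigest(),
            "secure_boot_enabled": target_env["secure_boot_enabled"],
            "collected_at": int(time.time()),
        }

    def generate_evidence(claims: dict, key: bytes = ATTESTING_ENV_KEY) -> dict:
        """Format the Claims and protect them with the Attesting Environment's key."""
        payload = json.dumps(claims, sort_keys=True).encode()
        mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "mac": mac}

    # Stand-in for the Target Environment's registers, variables, and code to be measured.
    target_env = {
        "firmware_version": "1.2.3",
        "kernel_image": b"example kernel image bytes",
        "secure_boot_enabled": True,
    }
    evidence = generate_evidence(collect_claims(target_env))
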
There is no limit or requirement on the types of hardware or software environments that can be used to implement an Attesting Environment; for example, TEEs, embedded Secure Elements
(eSEs), TPMs, or BIOS firmware.

An arbitrary execution environment may not, by default, be capable of Claims collection for a given Target Environment.
Execution environments that are designed specifically to be capable of Claims collection are referred to in this document as "Attesting Environments".
For example, a TPM doesn't actively collect Claims itself. Instead, it
requires another component to feed various values to the TPM.
Thus, an Attesting Environment in such a case would be the combination
of the TPM together with whatever component is feeding it the measurements.

Layered Attestation Environments

By definition, the Attester role generates Evidence. An Attester
may consist of one or more nested environments (layers). The bottom
layer of an Attester has an Attesting Environment that is typically
designed to be immutable or difficult to modify by malicious code. In
order to appraise Evidence generated by an Attester, the Verifier
needs to trust various layers, including the bottom Attesting
Environment. Trust in the Attester's layers, including the bottom
layer, can be established in various ways, as discussed in .In layered attestation, Claims can be collected from or about each
layer beginning with an initial layer. The corresponding Claims can
be structured in a nested fashion that reflects the nesting of the
Attester's layers. Normally, Claims are not self-asserted. Rather, a
previous layer acts as the Attesting Environment for the next layer.
Claims about an initial layer are typically asserted by an
Endorser.

The example device illustrated in the figure below includes
(A) a BIOS stored in read-only memory, (B) a bootloader, and (C) an
operating system kernel.

Figure: Layered Attester. The ROM (A) contains the initial Attesting Environment, which collects Claims about and produces Evidence for the bootloader (B); the bootloader in turn acts as the Attesting Environment that collects Claims about the kernel (C). The layered Evidence for the bootloader and kernel is conveyed to the Verifier, and the Reference Value Provider(s) and Endorser supply Reference Values and Endorsements covering the ROM, bootloader, and kernel.
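
The staging shown in the figure can be sketched as follows (an illustrative Python fragment only; the key handling and message layout are invented assumptions, and a real ROM or bootloader would measure the next layer in firmware rather than in Python). Each layer measures and signs Claims about the next layer before starting it, and the resulting Claim sets together form the layered Evidence.

    import hashlib
    import hmac
    import json

    # Illustrative keys: in a real device the ROM's key is rooted in hardware, and the
    # bootloader's key is itself covered by the Claims the ROM signed about the bootloader.
    ROM_KEY = b"rom-example-key"
    BOOTLOADER_KEY = b"bootloader-example-key"

    def measure_and_sign(key: bytes, layer_name: str, image: bytes) -> dict:
        """One attestation stage: measure the next layer and sign the resulting Claims."""
        claims = {"layer": layer_name, "digest": hashlib.sha256(image).hexdigest()}
        payload = json.dumps(claims, sort_keys=True).encode()
        return {"claims": claims, "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}

    # Stage 1: the ROM (A) measures the bootloader (B) before handing control to it.
    evidence_for_bootloader = measure_and_sign(ROM_KEY, "bootloader", b"bootloader image")

    # Stage 2: the measured bootloader now acts as the Attesting Environment for the kernel (C).
    evidence_for_kernel = measure_and_sign(BOOTLOADER_KEY, "kernel", b"kernel image")

    # The layered Evidence conveyed to the Verifier contains one Claim set per layer.
    layered_evidence = [evidence_for_bootloader, evidence_for_kernel]
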
The first Attesting Environment (the ROM in this example)
has to ensure the integrity of the bootloader (the first Target Environment).
There are
potentially multiple kernels to boot; the decision is up to the bootloader.
Only a bootloader with intact integrity will make an appropriate decision.
Therefore, the Claims relating to the integrity of the bootloader have to be measured securely.
At this stage of the boot cycle of the
device, the Claims collected typically cannot be composed into Evidence.After the boot sequence is started, the BIOS conducts the most
important and defining feature of layered attestation: the
successfully measured bootloader now becomes (or contains) an
Attesting Environment for the next layer. This procedure in layered
attestation is sometimes called "staging". It is important that the
bootloader not be able to alter any Claims about itself that were
collected by the BIOS. This can be ensured by having those Claims be
either signed by the BIOS or stored in a tamper-proof manner by the
BIOS.Continuing with this example, the bootloader's Attesting
Environment is now in charge of collecting Claims about the next
Target Environment. In this example, it is the kernel to be booted.
The final Evidence thus contains two sets of Claims: one set about the
bootloader as measured and signed by the BIOS and another set of Claims about the
kernel as measured and signed by the bootloader.This example could be extended further by making the kernel become another
Attesting Environment for an application as another Target Environment.
This would result in a third set of Claims in the Evidence pertaining to that application.The essence of this example is a cascade of staged environments. Each
environment has the responsibility
of measuring the next environment before the next environment is started.
In general, the number of layers may vary by device or implementation,
and an Attesting Environment might even have multiple Target Environments
that it measures, rather than only one as shown by example in .

Composite Device

A composite device is an entity composed of multiple sub-entities such that its
trustworthiness has to be determined by the appraisal of all these sub-entities.Each sub-entity has at least one Attesting Environment collecting the Claims
from at least one Target Environment. Then, this sub-entity generates Evidence
about its trustworthiness; therefore, each sub-entity can be called an "Attester".
Among all the Attesters, there may be only some that have the ability to communicate
with the Verifier while others do not.For example, a carrier-grade router consists of a chassis and multiple slots.
The trustworthiness of the router depends on all its slots' trustworthiness.
Each slot has an Attesting Environment, such as a TEE, collecting the
Claims of its boot process, after which it generates Evidence from the Claims.Among these slots, only a "main" slot can communicate with the Verifier
while other slots cannot. However, other slots can communicate with the main
slot by the links between them inside the router.
The main slot collects the Evidence of other slots, produces the final Evidence of the whole router, and conveys the final Evidence to the Verifier.
Therefore, the router is a composite
device, each slot is an Attester, and the main slot is the lead Attester.Another example is a multi-chassis router composed of multiple single carrier-grade routers.
Multi-chassis router setups create redundancy groups that provide higher throughput by interconnecting
multiple routers in these groups, which can be treated as one logical router for simpler management.
A multi-chassis router setup provides a management point that connects to the Verifier.
Typically, one router in the group is designated as the main router.
Other routers in the multi-chassis setup are connected to the main router only via physical network links; therefore, they are managed and appraised via the main router's help.
Consequently, a multi-chassis router setup is a composite device,
each router is an Attester, and the main router is the lead Attester.

The figure below depicts the conceptual data flow for a composite device.

Figure: Composite Device. Attesters B, C, and others inside the composite device convey their Evidence over internal links or network connections to lead Attester A, whose Attesting Environment collects Claims from its own Target Environment(s); the lead Attester then conveys the Evidence for the whole composite device to the Verifier.
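
Under the assumption of a simple in-device message format (all names and values below are invented for illustration), the lead Attester's aggregation step can be sketched as:

    import hashlib
    import hmac
    import json

    LEAD_ATTESTER_KEY = b"lead-attester-example-key"  # illustrative key material

    def aggregate_evidence(sub_evidence: list) -> dict:
        """Lead Attester: combine sub-Attester Evidence with Claims about the whole device."""
        layout_claims = {"slots_present": [e["claims"]["slot"] for e in sub_evidence]}
        payload = json.dumps({"layout": layout_claims, "sub": sub_evidence},
                             sort_keys=True).encode()
        mac = hmac.new(LEAD_ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
        return {"layout_claims": layout_claims, "sub_evidence": sub_evidence, "mac": mac}

    # Evidence received from sub-Attesters over internal links; the contents stand in for
    # whatever each slot's own Attesting Environment generated and protected.
    sub_evidence = [
        {"claims": {"slot": "slot-1", "boot_digest": "ab12..."}, "mac": "..."},
        {"claims": {"slot": "slot-2", "boot_digest": "cd34..."}, "mac": "..."},
    ]

    # The lead Attester conveys the combined Evidence for the whole device to the Verifier.
    composite_evidence = aggregate_evidence(sub_evidence)
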
In a composite device, each Attester generates its own Evidence by its
Attesting Environment(s) collecting the Claims from its Target Environment(s).
The lead Attester collects Evidence from other Attesters and conveys it to a Verifier.
Collection of Evidence from sub-entities may itself be a form of Claims collection that results in Evidence asserted by the lead Attester.
The lead Attester generates Evidence about the layout of the whole composite device, while sub-Attesters generate Evidence about their respective (sub-)modules.

In this scenario, the trust model described in can also be applied to an inside Verifier.

Implementation Considerations

An entity can take on multiple RATS roles (e.g., Attester, Verifier, Relying
Party, etc.) at the same time.
Multiple entities can cooperate to implement a single RATS role as well.
In essence, the combination of roles and entities can be arbitrary.
For example, in the composite device scenario, the entity inside
the lead Attester can also take on the role of a Verifier and the
outer entity of Verifier can take on the role of a Relying Party.
After collecting the Evidence of other Attesters, this inside Verifier uses
Endorsements and appraisal policies (obtained the same way as by any other
Verifier) as part of the appraisal procedures that generate Attestation Results.
The inside Verifier then conveys the Attestation Results of other Attesters to the outside Verifier,
whether in the same conveyance protocol as part of the Evidence or not.As explained in , there are a variety of roles in the RATS architecture; they are defined by a unique combination of artifacts they produce and consume.
Conversely, artifacts are also defined by the roles that produce or consume them.
To produce an artifact means that a given role introduces it into the RATS architecture.
To consume an artifact means that a given role has responsibility for processing it in the RATS architecture.
Roles also have the ability to perform additional actions, such as caching or forwarding artifacts as opaque data.
As depicted in , these additional actions can be performed by several roles.

Terminology

has defined a number of terms that are also used in this document.
Some of those terms are close to, but not exactly the same as, the terms used in this document.
Where the terms are similar, they are noted below with references.
As explained in , when this document says "Compare:", the terminology used in this document differs significantly from the definition in the reference.

This document uses the terms in the subsections that follow.

Roles
Attester:
A role performed by an entity (typically a device) whose Evidence must be appraised in order to infer the extent to which the Attester is considered trustworthy, such as when deciding whether it is authorized to perform some operation.
Produces:
Evidence
Relying Party:
A role performed by an entity that depends on the validity of information about an Attester for purposes of reliably applying application-specific actions. Compare: relying party .
Consumes:
Attestation Results, Appraisal Policy for Attestation Results
Verifier:
A role performed by an entity that appraises the validity of Evidence about an Attester
and produces Attestation Results to be used by a Relying Party.
Consumes:
Evidence, Reference Values, Endorsements, Appraisal Policy for Evidence
Produces:
Attestation Results
Relying Party Owner:
A role performed by an entity (typically an administrator) that is authorized to configure an Appraisal Policy for Attestation Results in a Relying Party.
Produces:
Appraisal Policy for Attestation Results
Verifier Owner:
A role performed by an entity (typically an administrator) that is authorized to configure an Appraisal Policy for Evidence in a Verifier.
Produces:
Appraisal Policy for Evidence
Endorser:
A role performed by an entity (typically a manufacturer) whose Endorsements may help Verifiers appraise the authenticity of Evidence and infer further capabilities of the Attester.
Produces:
Endorsements
Reference Value Provider:
A role performed by an entity (typically a manufacturer) whose Reference Values help Verifiers appraise Evidence to determine if acceptable known Claims have been recorded by the Attester.
Produces:
Reference Values
Artifacts
Claim:
A piece of asserted information, often in the form of a name/value pair. Claims make up the usual structure of Evidence and other RATS conceptual messages.
Compare: claim .
Endorsement:
A secure statement that an Endorser vouches for the integrity of an Attester's various capabilities, such as Claims collection and Evidence signing.
Consumed By:
Verifier
Produced By:
Endorser
Evidence:
A set of Claims generated by an Attester to be appraised by a Verifier.
Evidence may include configuration data, measurements, telemetry, or inferences.
Consumed By:
Verifier
Produced By:
Attester
Attestation Result:
The output generated by a Verifier, typically including information about an Attester, where the Verifier vouches for the validity of the results.
Consumed By:
Relying Party
Produced By:
Verifier
Appraisal Policy for Evidence:
A set of rules that
a Verifier uses to evaluate the validity of information
about an Attester. Compare: security policy .
Consumed By:
Verifier
Produced By:
Verifier Owner
Appraisal Policy for Attestation Results:
A set of rules that direct how a Relying Party uses the Attestation Results regarding an Attester generated by the Verifiers. Compare: security policy .
Consumed by:
Relying Party
Produced by:
Relying Party Owner
Reference Values:
A set of values against which values of Claims can be compared as part of
applying an Appraisal Policy for Evidence. Reference Values are sometimes
referred to in other documents as "known-good values", "golden measurements",
or "nominal values". These terms typically assume comparison for
equality, whereas here, Reference Values might be more general and be used
in any sort of comparison.
Consumed By:
Verifier
Produced By:
Reference Value Provider
Topological Patterns

The figure in the Architectural Overview shows a data flow diagram for communication between an Attester,
a Verifier, and a Relying Party. The Attester conveys its Evidence to the Verifier
for appraisal and the Relying Party receives the Attestation Result from the Verifier.
This section refines the data-flow diagram by describing two
reference models, as well as one example composition thereof. The
discussion that follows is for illustrative purposes only and does
not constrain the interactions between RATS roles to the presented models.

Passport Model

The Passport Model is so named because of its resemblance to how nations issue
passports to their citizens. The nature of the Evidence that an individual needs
to provide to its local authority is specific to the country involved. The citizen
retains control of the resulting passport document and presents it to other entities
when it needs to assert a citizenship or identity Claim, such as at an airport immigration
desk. The passport is considered sufficient because it vouches for the citizenship and
identity Claims and it is issued by a trusted authority.Thus, in this immigration desk analogy,
the citizen is the Attester,
the passport-issuing agency is a Verifier,
and the passport application and identifying information (e.g., birth certificate) is the Evidence.
The passport is an Attestation Result and the immigration desk is a Relying Party.In this model, an Attester conveys Evidence to a Verifier that compares
the Evidence against its appraisal policy.
The Verifier then gives back an Attestation Result that the Attester treats as opaque data.The Attester does not consume the Attestation Result, but it might cache it.
The Attester can then present the Attestation Result (and possibly additional Claims)
to a Relying Party, which then compares this information against its own
appraisal policy.
The Attester may also present the same Attestation Result to other Relying Parties.

There are three ways in which the process may fail:
First, the Verifier may not issue a positive Attestation Result due to the Evidence not passing the Appraisal Policy for Evidence.
The second way in which the process may fail is when the Attestation Result
is examined by the Relying Party, and based upon the Appraisal Policy for
Attestation Results, the result does not comply with the policy.
The third way is when the Verifier is unreachable or unavailable.
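
The successful Passport Model exchange can be reduced to three steps, sketched below in Python (an abstraction only: the function names, the toy policy, and the direct calls stand in for remote attestation and resource access protocols, none of which are defined here):

    def verifier_appraise(evidence: dict) -> dict:
        """Verifier: appraise Evidence against its Appraisal Policy for Evidence."""
        compliant = evidence.get("secure_boot_enabled") is True  # toy policy for illustration
        return {"attester_id": evidence.get("attester_id"), "compliant": compliant}

    def relying_party_decide(attestation_result: dict) -> bool:
        """Relying Party: appraise the Attestation Result against its own appraisal policy."""
        return attestation_result.get("compliant", False)

    evidence = {"attester_id": "device-42", "secure_boot_enabled": True}

    # Step 1: the Attester conveys Evidence to the Verifier and receives an Attestation
    # Result, which it treats as opaque data and may cache.
    attestation_result = verifier_appraise(evidence)

    # Step 2: the Attester presents the cached Attestation Result to a Relying Party.
    # Step 3: the Relying Party appraises it and makes its authorization decision.
    access_granted = relying_party_decide(attestation_result)
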
As with any other information needed by the Relying Party to make an authorization decision,
an Attestation Result can be carried in a resource access protocol between the Attester and Relying Party.
In this model, the details of the resource access protocol
constrain the serialization format of the Attestation Result. On the other hand, the
format of the Evidence is only constrained by the
Attester-Verifier remote attestation protocol.
This implies that interoperability and standardization are more relevant for Attestation Results than they are for Evidence.

Figure: Passport Model. The Attester conveys Evidence to the Verifier, which appraises it against its appraisal policy and returns an Attestation Result; the Attester then presents the Attestation Result to the Relying Party, which compares it against its own appraisal policy.

Background-Check Model

The Background-Check Model is so named because of the resemblance
of how employers and volunteer organizations perform background
checks. When a prospective employee provides Claims about education or
previous experience, the employer will contact the respective
institutions or former employers to validate the Claim. Volunteer
organizations often perform police background checks on volunteers in
order to determine the volunteer's trustworthiness. Thus, in this
analogy, a prospective volunteer is an Attester, the organization is
the Relying Party, and the organization that issues a report is a
Verifier.In this model, an Attester conveys Evidence to a Relying Party,
which treats it as opaque and simply forwards it on to a Verifier.
The Verifier compares the Evidence against its appraisal policy and
returns an Attestation Result to the Relying Party. The Relying Party
then compares the Attestation Result against its own appraisal
policy.The resource access protocol between the Attester and Relying Party
includes Evidence rather than an Attestation Result, but that Evidence
is not processed by the Relying Party.Since the Evidence is merely forwarded on to a trusted Verifier, any serialization format can be used for Evidence because the Relying Party does not need a parser for it. The only requirement is that the Evidence can be encapsulated in the format
required by the resource access protocol between the Attester and Relying Party.However, as seen in the Passport Model, an Attestation Result is still consumed by the
Relying Party. Code footprint and attack surface area can be minimized by
using a serialization format for which the Relying Party already needs a
parser to support the protocol between the Attester and Relying Party,
which may be an existing standard or widely deployed resource access protocol.
Such minimization is especially important if the Relying Party is a
constrained node.

Figure: Background-Check Model. The Attester conveys Evidence to the Relying Party, which forwards it to the Verifier; the Verifier appraises the Evidence against its appraisal policy and returns an Attestation Result, which the Relying Party compares against its own appraisal policy.

Combinations

One variation of the Background-Check Model is where the Relying Party
and the Verifier are on the same machine, performing both functions together.
In this case, there is no need for a protocol between the two.It is also worth pointing out that the choice of model depends on the use case and that different Relying Parties may use different topological patterns.The same device may need to create Evidence for different Relying Parties and/or different use cases.
For instance, it would use one model to provide Evidence to a network infrastructure device to gain access to the network and
the other model to provide Evidence to a server holding confidential data to gain access to that data.
As such, both models may simultaneously be in use by the same device. shows another example of a combination where Relying Party 1 uses the
Passport Model, whereas Relying Party 2 uses an extension of the Background-Check Model.
Specifically, in addition to the basic functionality shown in , Relying Party 2
actually provides the Attestation Result back to the Attester, allowing the Attester to
use it with other Relying Parties. This is the model that the TAM
plans to support in the TEEP architecture .

Figure: Example Combination. The Attester conveys Evidence to Relying Party 2 (Background-Check Model), which obtains an Attestation Result from the Verifier and returns it to the Attester; the Attester then presents that Attestation Result to Relying Party 1 (Passport Model), and each Relying Party compares the Attestation Result against its own appraisal policy.

Roles and Entities

An entity in the RATS architecture includes at least one of the roles defined
in this document.An entity can aggregate more than one role into itself, such as being both
a Verifier and a Relying Party or being both a Reference Value Provider and
an Endorser.
As such, any conceptual messages (see for more
discussion) originating from such roles might also be combined. For example,
Reference Values might be conveyed as part of an appraisal policy if the
Verifier Owner and Reference Value Provider roles are combined. Similarly,
Reference Values might be conveyed as part of an Endorsement if the Endorser
and Reference Value Provider roles are combined.Interactions between roles aggregated into the same entity do not necessarily use the
Internet Protocol.
Such interactions might use a loopback device or other IP-based
communication between separate environments, but they do not have to.
Alternative channels to convey conceptual messages include function calls, sockets, General-Purpose Input/Output (GPIO)
interfaces, local buses, or hypervisor calls. This type of conveyance is typically found
in composite devices. Most importantly, these conveyance methods are
out of scope of the RATS architecture, but they are presumed to exist in order to convey
conceptual messages appropriately between roles.In essence, an entity that combines more than one role creates and consumes
the corresponding conceptual messages as defined in this document.

Trust Model

Relying Party

This document covers scenarios for which a Relying Party
trusts a Verifier that can appraise the trustworthiness of
information about an Attester. Such trust
is expressed by storing one or more "trust anchors" in a secure location
known as a "trust anchor store".As defined in :
A trust anchor represents an authoritative entity via a public
key and associated data. The public key is used to verify digital
signatures, and the associated data is used to constrain the types
of information for which the trust anchor is authoritative.
The trust anchor may be a certificate or it may be a raw public key
along with additional data if necessary, such as its public key
algorithm and parameters.
In the context of this document, a trust anchor may also be a symmetric key, as
in , or the symmetric mode described in .

Thus, trusting a Verifier might be expressed by having the Relying
Party store the Verifier's key or certificate in its trust anchor store. It might also
be expressed by storing the public key or certificate of an entity (e.g., a Certificate Authority) that is
in the Verifier's certificate path.
For example, the Relying Party can verify that the Verifier is an expected one by out-of-band establishment of key material combined with a protocol like TLS to communicate.
There is an assumption that the Verifier has not been compromised between the establishment of the trusted key material and the creation of the Evidence.For a stronger level of security, the
Relying Party might require that the Verifier first provide
information about itself that the Relying Party can use to assess
the trustworthiness of the Verifier before accepting its Attestation Results.
Such a process would provide a stronger level of confidence in the correctness of
the information provided, such as a belief that the authentic Verifier has
not been compromised by malware.For example, one explicit way for a Relying Party "A" to establish
such confidence in the correctness of a Verifier "B" would be for B to first act as an Attester
where A acts as a combined Verifier/Relying Party. If A then accepts B as
trustworthy, it can choose to accept B as a Verifier for other Attesters.Similarly, the Relying Party also needs to trust the Relying Party Owner
for providing its Appraisal Policy for Attestation Results, and,
in some scenarios, the Relying Party might even require that the
Relying Party Owner go through a remote attestation procedure with it before the Relying Party will accept
an updated policy. This can be done in a manner similar to how a Relying Party
could establish trust in a Verifier as discussed above, i.e., verifying credentials against a trust anchor store
and optionally requiring Attestation Results from the Relying Party Owner.

Attester

In some scenarios, Evidence might contain sensitive information, such as
Personally Identifiable Information (PII) or system identifiable information.
Thus, an Attester must trust the entities to which it conveys Evidence to not
reveal sensitive data to unauthorized parties.
The Verifier might share this information with other
authorized parties according to a governing policy that addresses the
handling of sensitive information (potentially included in Appraisal
Policies for Evidence).
In the Background-Check Model, this Evidence may also be revealed to Relying Parties.When Evidence contains sensitive information, an Attester
typically requires that a Verifier authenticates itself (e.g., at TLS session establishment) and might even request a remote attestation before the Attester
sends the sensitive Evidence. This can be done by having the
Attester first act as a Verifier/Relying Party and the Verifier act as its
own Attester, as discussed above.

Relying Party Owner

The Relying Party Owner might also require that the
Relying Party first act as an Attester by providing Evidence that the Owner
can appraise before the Owner would give the Relying Party an updated
policy that might contain sensitive information.
In such a case, authentication or attestation in both directions might
be needed. Typically, one side's Evidence must be considered safe to
share with an untrusted entity in order to bootstrap the sequence.
See for more discussion.

Verifier

The Verifier trusts (or more specifically, the Verifier's security
policy is written in a way that configures the Verifier to trust) a
manufacturer or the manufacturer's hardware so as to be able to
appraise the trustworthiness of that manufacturer's devices. Such trust
is expressed by storing one or more trust anchors in the Verifier's
trust anchor store.In a typical solution, a Verifier comes to trust an Attester
indirectly by having an Endorser (such as a manufacturer) vouch for
the Attester's ability to securely generate Evidence through
Endorsements (see ). Endorsements might
describe the ways in which the Attester resists attacks, protects
secrets, and measures Target Environments. Consequently, the
Endorser's key material is stored in the Verifier's trust anchor store
so that Endorsements can be authenticated and used in the Verifier's
appraisal process.In some solutions, a Verifier might be
configured to directly trust an Attester by having the Verifier
possess the Attester's key material (rather than the Endorser's) in
its trust anchor store.Such direct trust must first be established at the time of trust anchor
store configuration either by checking with an Endorser at that
time or by conducting a security analysis of the specific device.
Having the Attester directly in the trust anchor store narrows
the Verifier's trust to only specific devices rather than all devices
the Endorser might vouch for, such as all devices manufactured by the
same manufacturer in the case that the Endorser is a manufacturer.Such narrowing is often important since physical possession of a device
can also be used to conduct a number of attacks, and so a device in
a physically secure environment (such as one's own premises) may be
considered trusted, whereas devices owned by others would not be.
This often results in a desire either to have the owner run their
own Endorser that would only endorse devices one owns or to use
Attesters directly in the trust anchor store. When there are many
Attesters owned, the use of an Endorser enables better scalability.That is, a Verifier might appraise the trustworthiness of an application component, operating
system component, or service under the assumption that information
provided about it by the lower-layer firmware or software is true.
A stronger level of assurance of security comes when information can be vouched
for by hardware or by ROM code, especially if such hardware is
physically resistant to hardware tampering.
In most cases, components that have to be vouched for via Endorsements (because no Evidence is generated about them) are referred to as "roots of trust".The manufacturer having arranged for an Attesting Environment to be provisioned with key material with which to sign Evidence, the Verifier is then provided with
some way of verifying the signature on the Evidence. This may be in the form of an appropriate trust anchor or the Verifier may be provided with a database of public keys (rather than certificates) or even carefully curated and secured lists of symmetric keys.The nature of how the Verifier manages to validate the signatures produced by the Attester is critical to the secure operation of a remote attestation system but is not the subject of standardization within this architecture.A conveyance protocol that provides authentication and integrity protection can be used
to convey Evidence that is otherwise unprotected (e.g., not signed). Appropriate conveyance of unprotected Evidence (e.g., ) relies on the following conveyance protocol's protection capabilities:
The key material used to authenticate and integrity protect the conveyance channel is trusted by the Verifier to speak for the Attesting Environment(s) that collected Claims about the Target Environment(s).
All unprotected Evidence that is conveyed is supplied exclusively by the Attesting Environment that has the key material that protects the conveyance channel.
A trusted environment protects the conveyance channel's key material, which may depend on other Attesting Environments with equivalent strength protections.
As illustrated in , an entity that receives unprotected Evidence via a trusted conveyance channel always takes on the responsibility of vouching for the Evidence's authenticity and freshness.
If protected Evidence is generated, the Attester's Attesting Environments take on that responsibility.
In cases where unprotected Evidence is processed by a Verifier, Relying Parties have to trust that the Verifier is capable of handling Evidence in a manner that preserves the Evidence's authenticity and freshness.
Generating and conveying unprotected Evidence always creates significant risk and the benefits of that approach have to be carefully weighed against potential drawbacks.
See for discussion on security strength.

Endorser, Reference Value Provider, and Verifier Owner

In some scenarios, the Endorser, Reference Value Provider, and Verifier Owner may need to trust the Verifier
before giving the Endorsement, Reference Values, or appraisal policy to it. This can be done in a
similar manner to how a Relying Party might establish trust in a Verifier.As discussed in , authentication or attestation in both directions might be
needed. Typically, one side's identity or
Evidence in this case must be considered safe to share with an untrusted entity
in order to bootstrap the sequence.
See for more discussion.

Conceptual Messages

The Conceptual Data Flow figure above illustrates the flow of conceptual messages between various roles.
This section provides additional elaboration and implementation considerations.
It is the responsibility of protocol specifications to define the actual data format
and semantics of any relevant conceptual messages.

Evidence

Evidence is a set of Claims about the Target Environment that reveal operational
status, health, configuration, or construction that have security relevance.
Evidence is appraised by a Verifier to establish its relevance, compliance, and timeliness.
Claims need to be collected in a manner that is reliable such that a Target Environment cannot lie to the Attesting Environment about its trustworthiness properties.
Evidence needs to be securely associated with the Target Environment
so that the Verifier cannot be tricked into accepting Claims originating
from a different environment (that may be more trustworthy).
Evidence also must be protected from an active on-path attacker who may observe,
change, or misdirect Evidence as it travels from the Attester to the Verifier.
The timeliness of Evidence can be captured using Claims that pinpoint the time
or interval when changes in operational status, health, and so forth occur.

Endorsements

An Endorsement is a secure statement that some entity (e.g., a
manufacturer) vouches for the integrity of the device's various
capabilities, such as Claims collection, signing, launching code,
transitioning to other environments, storing secrets, and more. For
example, if the device's signing capability is in hardware, then an
Endorsement might be a manufacturer certificate that signs a public
key whose corresponding private key is only known inside the device's
hardware. Thus, when Evidence and such an Endorsement are used
together, an appraisal procedure can be conducted based on appraisal
policies that may not be specific to the device instance but are merely
specific to the manufacturer providing the Endorsement. For example,
an appraisal policy might simply check that devices from a given
manufacturer have information matching a set of Reference Values. An appraisal policy might also have a set of more complex logic on how to
appraise the validity of information.However, while an appraisal policy that treats all devices from a
given manufacturer the same may be appropriate for some use cases, it
would be inappropriate to use such an appraisal policy as the sole
means of authorization for use cases that wish to constrain
which compliant devices are considered authorized for some
purpose. For example, an enterprise using remote attestation for
Network Endpoint Assessment (NEA) may not wish to
let every healthy laptop from the same manufacturer onto the network.
Instead, it may only want to let devices that it legally owns onto the
network. Thus, an Endorsement may be helpful information in
authenticating information about a device, but is not necessarily
sufficient to authorize access to resources that may need
device-specific information, such as a public key for the device or
component or user on the device.

Reference Values

Reference Values used in appraisal procedures come from a Reference
Value Provider and are then used by the Verifier to compare to
Evidence. Reference Values with matching Evidence produce acceptable
Claims. Additionally, an appraisal policy may play a role in determining
the acceptance of Claims.

Attestation Results

Attestation Results are the input used by the Relying Party to
decide the extent to which it will trust a particular Attester and
allow it to access some data or perform some operation.Attestation Results may carry a boolean value indicating compliance
or non-compliance with a Verifier's appraisal policy or may carry a
richer set of Claims about the Attester, against which the Relying
Party applies its Appraisal Policy for Attestation Results.The quality of the Attestation Results depends upon the ability of the Verifier to evaluate the Attester.
Different Attesters have a different Strength of Function, which results in the Attestation Results being qualitatively different in strength.An Attestation Result that indicates non-compliance can be used by an Attester (in the Passport Model) or
a Relying Party (in the Background-Check Model) to indicate that the Attester
should not be treated as authorized and may be in need of remediation. In some cases,
it may even indicate that the Evidence itself cannot be authenticated as being correct.By default, the Relying Party does not believe the Attester to be compliant.
Upon receipt of an authentic Attestation Result and given the Appraisal Policy for Attestation Results is satisfied,
the Attester is allowed to perform the prescribed actions or access.
The simplest such appraisal policy might authorize granting the Attester full access or control over the resources guarded by the Relying Party.
A more complex appraisal policy might involve using the information
provided in the Attestation Result to compare against expected values or to apply complex analysis
of other information contained in the Attestation Result.Thus, Attestation Results can contain detailed information about an
Attester, which can include privacy sensitive information as discussed
in . Unlike Evidence, which is
often very device- and vendor-specific, Attestation Results can be
vendor-neutral, if the Verifier has a way to generate vendor-agnostic
information based on the appraisal of vendor-specific information in
Evidence. This allows a Relying Party's appraisal policy to be
simpler, potentially based on standard ways of expressing the
information, while still allowing interoperability with heterogeneous
devices.Finally, whereas Evidence is signed by the device (or indirectly by
a manufacturer if Endorsements are used), Attestation Results are
signed by a Verifier, allowing a Relying Party to only need a trust
relationship with one entity rather than a larger set of entities
for purposes of its appraisal policy.

Appraisal Policies

The Verifier (when appraising Evidence) or the Relying Party (when
appraising Attestation Results) checks the values of matched Claims
against constraints specified in its appraisal policy.
Examples of such constraint checks include the following (a sketch follows this list):
Comparison for equality against a Reference Value.
A check for being in a range bounded by Reference Values.
Membership in a set of Reference Values.
A check against values in other Claims.
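
The four kinds of checks listed above might look as follows in a Verifier or Relying Party (a sketch only; the Claim names, Reference Values, and policy are invented for illustration):

    # Illustrative Claims taken from Evidence (or from an Attestation Result).
    claims = {
        "pcr0": "ab12cd34",
        "boot_mode": "secure",
        "measurement_count": 7,
        "expected_measurements": 7,
    }

    checks = [
        # Comparison for equality against a Reference Value.
        claims["pcr0"] == "ab12cd34",
        # A check for being in a range bounded by Reference Values.
        1 <= claims["measurement_count"] <= 16,
        # Membership in a set of Reference Values.
        claims["boot_mode"] in {"secure", "measured"},
        # A check against values in other Claims.
        claims["measurement_count"] == claims["expected_measurements"],
    ]

    # Claims that satisfy the policy constraints are accepted as input toward the
    # Attestation Result (Verifier) or the authorization decision (Relying Party).
    claims_accepted = all(checks)
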
Upon completing all appraisal policy constraints, the remaining Claims are accepted
as input toward determining Attestation Results (when appraising Evidence)
or as input to a Relying Party (when appraising Attestation Results).

Claims Encoding Formats

The figure below illustrates a relationship to which remote attestation is desired to be added:

Figure: Typical Resource Access. The Attester requests access to some resource from the Relying Party, which checks the request against its security policy.

In this diagram, the protocol between the Attester and a Relying Party
can be any new or existing protocol (e.g., HTTP(S), CoAP(S),
Resource-Oriented Lightweight Information Exchange (ROLIE) ,
802.1x, OPC UA , etc.) depending on the use case.Typically, such protocols already have mechanisms for passing security information for authentication and authorization purposes.
Common formats include JSON Web Tokens (JWTs) , CWTs , and X.509 certificates.Retrofitting already-deployed protocols with remote attestation requires
adding RATS conceptual messages to the existing data flows. This must be
done in a way that does not degrade the security properties of the systems involved
and should use extension mechanisms provided by the underlying
protocol. For example, if a TLS handshake is to be extended with
remote attestation capabilities, attestation Evidence may be embedded
in an ad hoc X.509 certificate extension (e.g., ) or into a new
TLS Certificate Type (e.g., ).Especially for constrained nodes, there is a desire to minimize
the amount of parsing code needed in a Relying Party in order to both
minimize footprint and the attack surface. While
it would be possible to embed a CWT inside a JWT, or a JWT inside an
X.509 extension, etc., there is a desire to encode the information
in a format that is already supported by the Relying Party.This motivates having a common "information model" that describes
the set of remote attestation related information in an encoding-agnostic
way and allows multiple encoding formats (CWT, JWT, X.509, etc.)
that encode the same information into the Claims format needed by the
Relying Party. illustrates that Evidence and
Attestation Results might be expressed via multiple potential encoding
formats so that they can be conveyed by various existing protocols. It
also motivates why the Verifier might also be responsible for accepting
Evidence that encodes Claims in one format while issuing Attestation
Results that encode Claims in a different format.

Figure: Multiple Attesters and Relying Parties with Different Formats. Attesters A through E convey Evidence to the Verifier as CWTs, JWTs, X.509 certificates, TPM-specific formats, or other formats, and the Verifier issues Attestation Results to Relying Parties V through Z, each in the format that Relying Party supports.
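
As a hedged illustration of "same information, different encodings", the sketch below serializes one Claim set as an HMAC-signed JWT (HS256); the same Claim set could instead be encoded as a CWT (CBOR/COSE) or carried in an X.509 extension, which is the kind of translation a Verifier may perform. The claim names and key are invented, and a real deployment would normally use asymmetric signatures.

    import base64
    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"example-hs256-key"  # illustrative only

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def encode_as_jwt(claims: dict, key: bytes) -> str:
        """Encode a Claim set as a JSON Web Token signed with HMAC-SHA256 (HS256)."""
        header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
        payload = b64url(json.dumps(claims).encode())
        signature = b64url(hmac.new(key, f"{header}.{payload}".encode(),
                                    hashlib.sha256).digest())
        return f"{header}.{payload}.{signature}"

    # One encoding-agnostic Claim set; a CWT would carry the same information in CBOR/COSE.
    attestation_result_claims = {"iss": "verifier.example", "sub": "device-42", "compliant": True}
    token = encode_as_jwt(attestation_result_claims, SIGNING_KEY)
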
Freshness

A Verifier or Relying Party might need to learn the point in time
(i.e., the "epoch") an Evidence or Attestation Result has been produced. This is essential in deciding whether the included Claims can be considered fresh,
meaning they still reflect the latest state of the Attester, and that any
Attestation Result was generated using the latest Appraisal Policy for Evidence, Endorsements, and Reference Values.This section provides a number of details.
However, it does not define any protocol formats and the interactions shown are abstract.
This section is intended for those creating protocols and solutions to understand the options available to ensure freshness.
The way in which freshness is provisioned in a protocol is an architectural decision.
Provisioning of freshness has an impact on the number of round trips needed in a protocol; therefore, this decision must be made very early in the design.
Different decisions will have significant impacts on resulting interoperability,
which is why this section goes into sufficient detail such that choices in freshness will be compatible across interacting protocols, such as depicted in .Freshness is assessed based on the Appraisal Policy for Evidence or
Attestation Results that compares the estimated epoch against an
"expiry" threshold defined locally to that policy. There is, however,
always a race condition possible in that the state of the Attester and
the appraisal policies might change immediately after the Evidence or
Attestation Result was generated. The goal is merely to narrow their
recentness to something the Verifier (for Evidence) or Relying Party
(for Attestation Result) is willing to accept. Some flexibility on the
freshness requirement is a key component for enabling caching and reuse
of both Evidence and Attestation Results, which is especially valuable
in cases where their computation uses a substantial part of the resource
budget (e.g., energy in constrained devices).

There are three common approaches for determining the epoch of Evidence or an Attestation Result.

Explicit Timekeeping Using Synchronized Clocks

The first approach is to rely on synchronized and trustworthy clocks and
include a signed timestamp (see ) along with the
Claims in the Evidence or Attestation Result. Timestamps can also be added on a
per-Claim basis to distinguish the time of generation of Evidence or Attestation
Result from the time that a specific Claim was generated. The clock's
trustworthiness can generally be established via Endorsements and typically requires additional Claims about the signer's time
synchronization mechanism.

However, a trustworthy clock might not be available in some use cases. For
example, in many TEEs today, a clock is only
available outside the TEE; thus, it cannot be trusted by the TEE.

Implicit Timekeeping Using Nonces

A second approach
places the onus of timekeeping solely on the Verifier (for Evidence)
or the Relying Party (for Attestation Results). For example, this
approach might be suitable in case the Attester does not have a
trustworthy clock or time synchronization is otherwise impaired.
In
this approach, an unpredictable nonce is sent by the appraising
entity and the nonce is then signed and included along with the Claims
in the Evidence or Attestation Result. After checking that the sent
and received nonces are the same, the appraising entity knows that the
Claims were signed after the nonce was generated. This allows
associating a "rough" epoch to the Evidence or Attestation Result. In
this case, the epoch is said to be rough because:

   1.  The epoch applies to the entire Claim set instead of a more granular association, and

   2.  The time between the creation of Claims and the collection of Claims is indistinguishable.
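A minimal sketch of this nonce exchange from the appraising entity's side follows. This is an illustration only, using the Python standard library: the Evidence is shown as a plain dictionary, and verification of its signature is elided.

   import secrets
   import time

   class Appraiser:
       def __init__(self, threshold_seconds: float):
           self.threshold = threshold_seconds
           self.pending = {}          # nonce -> local time at which it was sent

       def new_nonce(self) -> str:
           nonce = secrets.token_hex(16)           # unpredictable to the Attester
           self.pending[nonce] = time.monotonic()  # appraiser's own clock only
           return nonce

       def appraise(self, evidence: dict) -> bool:
           sent_at = self.pending.pop(evidence.get("nonce"), None)
           if sent_at is None:
               return False                        # unknown or reused nonce
           # The Claims were signed after the nonce was generated, so their
           # "rough" epoch is at most this old by the appraiser's clock:
           return (time.monotonic() - sent_at) < self.threshold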
Implicit Timekeeping Using Epoch IDs

A third approach relies on having epoch identifiers (IDs)
periodically sent to both the sender and receiver of Evidence or
Attestation Results by some "epoch ID distributor".

Epoch IDs are different from nonces as they can be used more than once and
can even be used by more than one entity at the same time.
Epoch IDs are different from timestamps as they do not have to convey information about a point in time, i.e., they are not necessarily monotonically increasing integers.

Like the nonce approach, this allows associating a "rough" epoch without
requiring a trustworthy clock or time synchronization in order to generate or
appraise the freshness of Evidence or Attestation Results. Only the
epoch ID distributor requires access to a clock so it can periodically send
new epoch IDs.
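A minimal sketch of such a distributor follows. This is an assumption-laden illustration: it uses Python with the pyca/cryptography package, it uses a signed counter as the epoch ID (one of the options discussed below for handling epoch transitions), and the broadcast() callback is hypothetical.

   import time
   from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

   EPOCH_DURATION = 60  # seconds; a deployment-specific choice

   def run_distributor(broadcast):
       key = Ed25519PrivateKey.generate()
       counter = 0
       while True:
           counter += 1
           epoch_id = counter.to_bytes(8, "big")
           # Receivers verify the signature with the distributor's public key.
           broadcast(epoch_id, key.sign(epoch_id))
           time.sleep(EPOCH_DURATION)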
The most recent epoch ID is included in the produced Evidence or Attestation Results, and the appraising entity can compare the epoch ID in received
Evidence or Attestation Results against the latest epoch ID it received from
the epoch ID distributor to determine if it is within the current epoch.
An actual solution also needs to take into account race conditions
when transitioning to a new epoch, such as by using a counter signed
by the epoch ID distributor as the epoch ID, by including both the current and
previous epoch IDs in messages and/or checks, by requiring retries
in case of mismatching epoch IDs, or by buffering incoming messages
that might be associated with an epoch ID that the receiver has not yet
obtained.

More generally, in order to prevent an appraising entity from generating false
negatives (e.g., discarding Evidence that is deemed stale even if it is
not), the appraising entity should keep an "epoch window" consisting of the
most recently received epoch IDs. The depth of such an epoch window is directly proportional to the maximum network propagation delay between the first entity to receive the epoch ID and the last entity to receive the epoch ID, and it is inversely proportional to the epoch duration.
The appraising entity shall compare the
epoch ID carried in the received Evidence or Attestation Result with the epoch IDs
in its epoch window to find a suitable match.

Whereas the nonce approach typically requires the appraising entity
to keep state for each nonce generated, the epoch ID approach minimizes
the state kept to be independent of the number of Attesters or Verifiers
from which it expects to receive Evidence or Attestation Results as long
as all use the same epoch ID distributor.
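A minimal sketch of the epoch-window check described above follows. It is illustrative only and uses the Python standard library; epoch IDs are treated as opaque byte strings, and authentication of the distributor's messages is elided.

   from collections import deque

   class EpochWindow:
       def __init__(self, depth: int):
           self.recent = deque(maxlen=depth)   # most recently received epoch IDs

       def on_epoch_id(self, epoch_id: bytes):
           self.recent.append(epoch_id)        # delivered by the epoch ID distributor

       def is_fresh(self, epoch_id_in_message: bytes) -> bool:
           # Accept Evidence or Attestation Results whose epoch ID falls inside
           # the window; this avoids false negatives around an epoch transition.
           return epoch_id_in_message in self.recent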
Discussion

Implicit and explicit timekeeping can be combined into hybrid mechanisms. For example, if clocks exist within the Attesting Environment and are considered trustworthy (tamper-proof) but are not synchronized, a nonce-based exchange may be used to determine the (relative) time offset between the involved peers, followed by any number of timestamp-based exchanges.

It is important to note that the actual values in Claims might have been
generated long before the Claims are signed. If so, it is the signer's
responsibility to ensure that the values are still fresh when they are
signed. For example, values generated at boot time might have been saved to
secure storage until network connectivity is established to the remote Verifier
and a nonce is obtained.

A more detailed discussion with examples appears in the appendix on Time Considerations.

For a discussion on the security of epoch IDs, see the Security Considerations below.

Privacy Considerations

The conveyance of Evidence and the resulting Attestation Results
reveal a great deal of information about the internal state of a
device as well as potentially any users of the device.

In many cases, the whole point of attestation procedures is
to provide reliable information about the type of the device and the
firmware/software that the device is running.

This information might be particularly interesting to many attackers.
For example, knowing that a device is
running a weak version of firmware provides a way to aim attacks better.

In some circumstances, an attacker that becomes aware of Endorsements, Reference Values, or appraisal policies could gain insight into defensive mitigations. It is recommended that attention be paid to the confidentiality of such information.

Additionally, Evidence, Attestation Results, and appraisal policies potentially contain Personally Identifying Information (PII) depending on the end-to-end use case of
the remote attestation procedure.
Remote attestation that includes containers and applications, e.g., a blood pressure monitor, may further
reveal details about specific systems or users.

In some cases, an attacker may be able to make inferences about the contents of Evidence
from the resulting effects or timing of the processing.
For example, an attacker might be able to infer the value of specific Claims if it knew that only certain values were accepted by the Relying Party.

Conceptual messages (see ) carrying sensitive or confidential information are expected to be integrity protected (i.e., either via signing or a secure channel) and optionally might be confidentiality protected via encryption.
If conceptual messages are not themselves confidentiality protected, the underlying conveyance protocol should provide these protections.

As Evidence might contain sensitive or confidential information,
Attesters are responsible for only sending such Evidence to trusted
Verifiers.
Some Attesters might want a stronger level of assurance of
the trustworthiness of a Verifier before sending Evidence to it.
In such cases,
an Attester can first act as a Relying Party and ask for the Verifier's own
Attestation Result, appraising it just as a Relying Party would appraise an Attestation Result for any other purpose.

Another approach to deal with Evidence is to remove PII from the Evidence
while still being able to verify that the Attester is one of a large set.
This approach is often called "Direct Anonymous Attestation". See
Section 6.2 of and for more discussion.

Security Considerations

This document provides an architecture for doing remote attestation.
No specific wire protocol is documented here.
Without a specific proposal to compare against, it is impossible to know if the security threats listed below have been mitigated well.

The security considerations below should be read as being, essentially, requirements against realizations of the RATS architecture.
Some threats apply to protocols, and some are against implementations (code) and physical infrastructure (such as factories).

The fundamental purpose of the RATS architecture is to allow a Relying Party to establish a basis for trusting the Attester.

Attester and Attestation Key Protection

Implementers need to pay close attention to the protection of the Attester and the manufacturing processes for provisioning attestation key material. If either of these is compromised, the intended levels of assurance for remote attestation procedures are compromised because attackers can forge Evidence or manipulate the Attesting Environment.
For example, a Target Environment should not be able to tamper with the Attesting Environment that measures it; this can be ensured by isolating the two environments from each other in some way.

Remote attestation applies to use cases with a range of security requirements. The protections discussed here range from low to high security: low security may be limited to application or process isolation by the device's operating system, and high security may involve specialized hardware to defend against physical attacks on a chip.

On-Device Attester and Key Protection

It is assumed that an Attesting Environment is sufficiently isolated from the
Target Environment it collects Claims about and that it signs the resulting Claims set with an attestation
key so that the Target Environment cannot forge Evidence about itself. Such
an isolated environment might be provided by a process, a dedicated chip,
a TEE, a virtual machine, or another secure mode of operation.
The Attesting Environment must be protected from unauthorized modification to ensure it behaves correctly. Confidentiality protection of the Attesting Environment's signing key is vital so it cannot be misused to forge Evidence.

In many cases, the user or owner of a device that includes the role of Attester must not be able to modify or extract keys from the Attesting Environments to prevent creating forged Evidence.
Some common examples include the user of a mobile phone or FIDO authenticator.

Measures for a minimally protected system might include process or application isolation provided by a high-level operating system and restricted access to root or system privileges. In contrast, for really simple single-use devices that don't use a protected-mode operating system (like a Bluetooth speaker), the only factual isolation might be the sturdy housing of the device.

Measures for a moderately protected system could include a special restricted operating environment, such as a TEE. In this case, only security-oriented software has access to the Attester and key material.

Measures for a highly protected system could include specialized hardware that is used to provide protection against chip decapping attacks, power supply and clock glitching, fault injection, and RF and power side-channel attacks.

Attestation Key Provisioning Processes

Attestation key provisioning is the process that occurs in the factory or elsewhere to establish signing key material on the device and the validation key material off the device.
Sometimes, this procedure is referred to as "personalization" or "customization".

When generating keys off-device in the factory or in the device, the use of a cryptographically strong sequence () needs consideration.

Off-Device Key Generation

One way to provision key material is to first generate it external to the device and then copy the key onto the device.
In this case, confidentiality protection of the generator and the path over which the key is provisioned is necessary.
The manufacturer needs to take care to protect corresponding key material with measures appropriate for its value.

The degree of protection afforded to this key material can vary by the intended
function of the device and the specific practices of the device manufacturer or integrator.
The confidentiality protection is fundamentally based upon some amount of physical protection. While encryption is often used to provide confidentiality when a key is conveyed across a factory, the key must be available in an unencrypted form where it is created or applied.
The physical protection can therefore vary from situations where the key is unencrypted only within carefully controlled secure enclaves within silicon to situations where an entire facility is considered secure
by the simple means of locked doors and limited access.

The cryptography that is used to enable confidentiality protection of the attestation key comes with its own requirements to be secured. This results in recursive problems, as the key material used to provision attestation keys must again somehow have been provisioned securely beforehand (requiring an additional level of protection and so on).

Commonly, a combination of some physical security measures and some cryptographic measures is used to establish confidentiality protection.

On-Device Key Generation

When key material is generated within a device and the secret part of it never leaves the device, the problem may lessen.
For public-key cryptography, it is not necessary to maintain confidentiality
of the public key. However, integrity of the chain of custody of the
public key is necessary in order to avoid attacks where an attacker is able
to get a key endorsed that the attacker controls.
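A minimal sketch of on-device key generation under these constraints follows. It is an illustration only, using Python with the pyca/cryptography package standing in for a hardware-backed key store; in a real Attester the private key would never be visible to ordinary software.

   from cryptography.hazmat.primitives import serialization
   from cryptography.hazmat.primitives.asymmetric import ec

   def generate_attestation_key():
       private_key = ec.generate_private_key(ec.SECP256R1())  # stays on the device
       public_bytes = private_key.public_key().public_bytes(
           encoding=serialization.Encoding.PEM,
           format=serialization.PublicFormat.SubjectPublicKeyInfo,
       )
       # Only the public part leaves the device, over a channel whose integrity
       # is protected so an attacker cannot substitute a key they control.
       return private_key, public_bytes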
To summarize, attestation key provisioning must ensure that only valid attestation key material is established in Attesters.

Conceptual Message Protection

Any solution that conveys information in any conceptual message (see ) must support end-to-end integrity protection
and replay attack prevention. It often also needs to support additional
security properties, including:
end-to-end encryption,
denial-of-service protection,
authentication,
auditing,
fine-grained access controls, and
logging.
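A minimal sketch of end-to-end integrity protection combined with a nonce for replay prevention follows. It is an illustration only, using Python with the pyca/cryptography package; a real deployment would more likely rely on an established format such as COSE or JOSE rather than ad hoc signing.

   import json
   import secrets
   from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

   signing_key = Ed25519PrivateKey.generate()   # e.g., the Attesting Environment's key
   verify_key = signing_key.public_key()        # known to the appraising entity

   nonce = secrets.token_hex(16)                # from the appraiser (replay prevention)
   message = json.dumps({"claims": {"fw": "1.2.3"}, "nonce": nonce},
                        sort_keys=True).encode()
   signature = signing_key.sign(message)        # end-to-end integrity protection

   verify_key.verify(signature, message)        # raises InvalidSignature on tampering
   assert json.loads(message)["nonce"] == nonce # rejects replayed messages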
The section on Freshness above discusses ways in which freshness can be used in this architecture to protect against replay attacks.

To assess the security provided by a particular appraisal policy, it
is important to understand the strength of the root of trust, e.g.,
whether it is mutable software or firmware that is read-only after
boot or immutable hardware/ROM.

It is also important that the appraisal policy itself was obtained securely. If an attacker can configure or modify appraisal policies, Endorsements, or Reference Values for a Relying Party or a Verifier, then the integrity of the process is compromised.

Security protections in the RATS architecture may be applied at different layers, whether by a conveyance protocol or an information encoding format.
This architecture expects conceptual messages to be end-to-end protected based on the role interaction context.
For example, if an Attester produces Evidence that is relayed through some other entity that doesn't implement the Attester or the intended Verifier roles, then the relaying entity should not expect to have access to the Evidence.

The RATS architecture allows for an entity to function in multiple roles
() and for composite devices ().
Implementers need to evaluate their designs to ensure that the assumed security properties of the individual components and roles still hold despite the lack of separation and that emergent risk is not introduced. The specifics of this evaluation will depend on the implementation and the use case; hence, they are out of scope for this document.
Isolation mechanisms in software or hardware that separate Attesting Environments and Target Environments () can support an implementer's evaluation and resulting design decisions.

Attestation Based on Epoch ID

Epoch IDs, described in the Freshness section above, can be tampered with, replayed, dropped, delayed, and
reordered by an attacker.

An attacker could either be external or belong to the distribution group (for example, if one of the Attester entities has been compromised).

An attacker who is able to tamper with epoch IDs can potentially lock all the
participants in a certain epoch of choice forever, effectively freezing time.
This is problematic since it destroys the ability to ascertain freshness of
Evidence and Attestation Results.

To mitigate this threat, the transport should be at least integrity protected
and provide origin authentication.

Selective dropping of epoch IDs is equivalent to pinning the victim node to a past epoch.
An attacker could drop epoch IDs to only some entities and not others, which will typically result in a denial of service due to the permanent staleness of the Attestation Result or Evidence.

Delaying or reordering epoch IDs is equivalent to manipulating the victim's
timeline at will. This ability could be used by a malicious actor (e.g., a
compromised router) to mount a confusion attack. For example, a Verifier
can be tricked into accepting Evidence coming from a past epoch as fresh, while, in
the meantime, the Attester has been compromised.

Reordering and dropping attacks are mitigated if the transport provides the ability to detect reordering and drops. However, the delay attack described above can't be thwarted in this manner.

Trust Anchor Protection

As noted in , Verifiers and Relying Parties have trust anchor stores
that must be secured.
contains more discussion of trust anchor store requirements
for protecting public keys.
Section 6 of contains a comprehensive treatment of the
topic, including the protection of symmetric key material.
Specifically, a trust anchor store must resist unauthorized insertion, deletion, and modification.
Additionally, if the trust anchor is a symmetric key, the trust anchor store must
not allow unauthorized read.

If certificates are used as trust anchors, Verifiers and Relying Parties are also
responsible for validating the entire certificate path up to the trust anchor,
which includes checking for certificate revocation.
For an example of such a procedure, see .

IANA Considerations

This document has no IANA actions.

References

Normative References

Informative References

   OPC Unified Architecture Specification, Part 2: Security Model, Release 1.03. OPC Foundation.
   DICE Attestation Architecture. Trusted Computing Group. Version 1.00, Revision 0.23.
   Trusted Execution Environment Provisioning (TEEP) Architecture.
   Trusted Platform Module Library, Part 1: Architecture. Trusted Computing Group.
   Web Authentication: An API for accessing Public Key Credentials Level 1. W3C.
   Client to Authenticator Protocol (CTAP). FIDO Alliance.
   Strength of Function. NIST.
   A Technical Analysis of Confidential Computing. Confidential Computing Consortium. Version 1.3.
   Symmetric Identity Based Device Attestation. Trusted Computing Group. Version 1.0, Revision 0.95.
   Recommendation for Key Management: Part 1 - General. NIST.
   Arm's Platform Security Architecture (PSA) Attestation Token.

Time Considerations

The section on Freshness discussed various issues and requirements around freshness of Evidence and
summarized three approaches that might be used by different solutions to address them.
This appendix provides more details with examples to help illustrate potential approaches and inform those creating specific solutions.

The table below defines a number of relevant events with an ID that
is used in subsequent diagrams. The times of said events might be
defined in terms of an absolute clock time, such as the Coordinated Universal Time timescale,
or might be defined relative to some other timestamp or timeticks counter, such as a clock resetting its epoch each time it is powered on.
Relevant Events over Time

   ID  Event                 Explanation of event
   --  --------------------  --------------------------------------------------
   VG  Value generated       A value to appear in a Claim was created. In some cases, a value may have technically existed before an Attester became aware of it, but the Attester might have no idea how long it has had that value. In such a case, the value created time is the time at which the Claim containing the copy of the value was created.
   NS  Nonce sent            A nonce not predictable to an Attester (recentness and uniqueness) is sent to an Attester.
   NR  Nonce relayed         A nonce is relayed to an Attester by another entity.
   IR  Epoch ID received     An epoch ID is successfully received and processed by an entity.
   EG  Evidence generation   An Attester creates Evidence from collected Claims.
   ER  Evidence relayed      A Relying Party relays Evidence to a Verifier.
   RG  Result generation     A Verifier appraises Evidence and generates an Attestation Result.
   RR  Result relayed        An Attestation Result is relayed to a Relying Party.
   RA  Result appraised      The Relying Party appraises Attestation Results.
   OP  Operation performed   The Relying Party performs some operation requested by the Attester via a resource access protocol as depicted in , e.g., across a session created earlier at time(RA).
   RX  Result expiry         An Attestation Result should no longer be accepted, according to the Verifier that generated it.
Using the table above, a number of hypothetical examples of how a solution might be built are illustrated below.
This list is not intended to be complete;
it is just representative enough to highlight various timing considerations.

All times are relative to the local clocks, indicated by an "_a" (Attester),
"_v" (Verifier), or "_r" (Relying Party) suffix.Times with an appended Prime (') indicate a second instance of the same event.How and if clocks are synchronized depends upon the model.In the figures below, curly braces indicate containment.
For example, the notation Evidence{foo} indicates that 'foo' is contained in the Evidence; thus, it is covered by its signature.

Example 1: Timestamp-Based Passport Model

The figure below illustrates a hypothetical Passport Model
solution that uses timestamps and requires roughly synchronized
clocks between the Attester, Verifier, and Relying Party, which
depends on using a secure clock synchronization mechanism.
As a result, the receiver of a conceptual message containing a
timestamp can directly compare it to its own clock and timestamps.

Timestamp-Based Passport Model

   .----------.                     .----------.    .---------------.
   | Attester |                     | Verifier |    | Relying Party |
   '----------'                     '----------'    '---------------'
     time(VG_a)                          |                   |
        |                                |                   |
        ~                                ~                   ~
        |                                |                   |
     time(EG_a)                          |                   |
        |                                |                   |
        +------Evidence{time(EG_a)}----->|                   |
        |                                |                   |
        |                           time(RG_v)               |
        |                                |                   |
        |<-----Attestation Result--------+                   |
        |      {time(RG_v),time(RX_v)}   |                   |
        ~                                ~                   ~
        |                                                    |
        +--Attestation Result{time(RG_v),time(RX_v)}--->time(RA_r)
        |                                                    |
        ~                                                    ~
        |                                                    |
        |                                                 time(OP_r)

The Verifier can check whether the Evidence is fresh when appraising
it at time(RG_v) by checking time(RG_v) - time(EG_a) < Threshold, where the
Verifier's threshold is large enough to account for the maximum
permitted clock skew between the Verifier and the Attester.If time(VG_a) is included in the Evidence along with the Claim value
generated at that time, and the Verifier decides that it can trust the
time(VG_a) value, the Verifier can also determine whether the Claim value is
recent by checking time(RG_v) - time(VG_a) < Threshold.
The threshold is decided by the Appraisal Policy for Evidence and, again, needs to take
into account the maximum permitted clock skew between
the Verifier and the Attester.

The Attester does not consume the Attestation Result but might cache it.

The Relying Party can check whether the Attestation Result is fresh
when appraising it at time(RA_r) by checking the time(RA_r) - time(RG_v) < Threshold,
where the Relying Party's threshold is large enough to account for the
maximum permitted clock skew between the Relying Party and the Verifier.
The result might then be used for some time (e.g., throughout the lifetime
of a connection established at time(RA_r)). However, the Relying Party must be
careful not to allow continued use beyond the period for which
it deems the Attestation Result to remain fresh enough. Thus,
it might allow use (at time(OP_r)) as long as time(OP_r) - time(RG_v) < Threshold.
However, if the Attestation Result contains an expiry time time(RX_v), then
it could explicitly check time(OP_r) < time(RX_v).
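A minimal sketch of these checks follows (illustrative Python only; how the times are obtained and the threshold values are assumptions of this sketch, not part of the model).

   def verifier_accepts(time_RG_v, time_EG_a, threshold):
       # Evidence freshness; the threshold allows for permitted clock skew.
       return (time_RG_v - time_EG_a) < threshold

   def relying_party_allows_use(time_OP_r, time_RG_v, threshold, time_RX_v=None):
       if time_RX_v is not None:          # explicit expiry in the Attestation Result
           return time_OP_r < time_RX_v
       return (time_OP_r - time_RG_v) < threshold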
Example 2: Nonce-Based Passport Model

The figure below illustrates a hypothetical Passport Model solution that uses nonces instead of timestamps. Compared to the
timestamp-based example, it requires an extra round trip
to retrieve a nonce and requires that the Verifier and Relying Party
track state to remember the nonce for some period of time.

The advantage is that it does not require that any clocks
are synchronized.
As a result, the receiver of a conceptual message containing a
timestamp cannot directly compare it to its own clock or timestamps.
Thus, we use a suffix ("a" for Attester, "v" for Verifier, and "r" for Relying Party) on the IDs below indicating which clock generated them since times from different clocks cannot be compared.
Only the delta between two events from the sender can be used by the receiver.

Nonce-Based Passport Model

   .----------.                     .----------.    .---------------.
   | Attester |                     | Verifier |    | Relying Party |
   '----------'                     '----------'    '---------------'
     time(VG_a)                          |                   |
        |                                |                   |
        ~                                ~                   ~
        |                                |                   |
        |<--Nonce1------------------time(NS_v)               |
        |                                |                   |
     time(EG_a)                          |                   |
        |                                |                   |
        +---Evidence-------------------->|                   |
        | {Nonce1, time(EG_a)-time(VG_a)}|                   |
        |                                |                   |
        |                           time(RG_v)               |
        |                                |                   |
        |<--Attestation Result-----------+                   |
        |   {time(RX_v)-time(RG_v)}      |                   |
        ~                                ~                   |
        |                                                    |
        |<--Nonce2--------------------------------------time(NS_r)
        |                                                    |
     time(RR_a)                                              |
        |                                                    |
        +--[Attestation Result{time(RX_v)-time(RG_v)}, --->time(RA_r)
        |   Nonce2, time(RR_a)-time(EG_a)]                   |
        |                                                    |
        ~                                                    ~
        |                                                    |
        |                                                 time(OP_r)

In this example solution, the Verifier can check whether the Evidence is
fresh at time(RG_v) by verifying that time(RG_v)-time(NS_v) < Threshold.

However, the Verifier cannot simply rely on a Nonce to
determine whether the value of a Claim is recent since the Claim value
might have been generated long before the nonce was sent by the Verifier.
Nevertheless, if the Verifier decides that the Attester can be trusted to
correctly provide the delta time(EG_a)-time(VG_a), then it can determine recency
by checking time(RG_v)-time(NS_v) + time(EG_a)-time(VG_a) < Threshold.

Similarly, if, based on an Attestation Result from a Verifier it trusts, the
Relying Party decides that the Attester can be trusted to correctly provide
time deltas, then it can determine whether the Attestation Result is fresh by
checking time(OP_r)-time(NS_r) + time(RR_a)-time(EG_a) <
Threshold. Although the Nonce2 and time(RR_a)-time(EG_a) values
cannot be inside the Attestation Result, they might be signed by the Attester
such that the Attestation Result vouches for the Attester's signing
capability.

However, the Relying Party must still be careful not to allow continued
use beyond the period for which it deems the Attestation Result to remain
valid. Thus, if the Attestation Result sends a validity lifetime
in terms of time(RX_v)-time(RG_v), then the Relying Party can check
time(OP_r)-time(NS_r) < time(RX_v)-time(RG_v).
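A minimal sketch of these delta-based checks follows (illustrative Python only; the deltas are those reported by the Attester, and all other values are read from the local clock of the party named in the suffix).

   def verifier_claim_is_recent(time_RG_v, time_NS_v, delta_EG_VG_a, threshold):
       # time(RG_v)-time(NS_v) + time(EG_a)-time(VG_a) < Threshold
       return (time_RG_v - time_NS_v) + delta_EG_VG_a < threshold

   def relying_party_result_is_fresh(time_OP_r, time_NS_r, delta_RR_EG_a, threshold):
       # time(OP_r)-time(NS_r) + time(RR_a)-time(EG_a) < Threshold
       return (time_OP_r - time_NS_r) + delta_RR_EG_a < threshold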
Example 3: Passport Model Based on Epoch ID

The example in the figure below illustrates a hypothetical Passport Model solution that uses epoch IDs instead of nonces or timestamps.

The epoch ID distributor broadcasts epoch ID I, which starts a new
epoch E for a protocol participant upon reception at time(IR).

The Attester generates Evidence incorporating epoch ID I and conveys it to the
Verifier.

The Verifier appraises that the received epoch ID I is "fresh" according to the
definition provided in whereby retries are required in the case of mismatching epoch IDs; then the Verifier generates an Attestation Result. The
Attestation Result is conveyed to the Attester.

After the transmission of epoch ID I', a new epoch E' is
established when I' is received by each protocol participant. The Attester
relays the Attestation Result obtained during epoch E (associated with epoch ID
I) to the Relying Party using the epoch ID for the current epoch I'.
If the Relying Party had not yet received I', then the Attestation Result would be rejected. The Attestation Result is received in this example.

In the figure below, the epoch ID for relaying an Attestation Result to the Relying Party is current, while a previous epoch ID was used to generate the Verifier-evaluated Evidence.
This indicates that at least one epoch transition has occurred and the Attestation Results may only be as fresh as the previous epoch.
If the Relying Party remembers the previous epoch ID I during an epoch window
as discussed in , and the message is received during
that window, the Attestation Result is accepted as fresh; otherwise, it is
rejected as stale.

Epoch ID-Based Passport Model

   .----------.  .-------------.    .----------.    .---------------.
   | Attester |  |  Epoch ID   |    | Verifier |    | Relying Party |
   |          |  | Distributor |    |          |    |               |
   '----------'  '-------------'    '----------'    '---------------'
     time(VG_a)         |                |                   |
        |               |                |                   |
    time(IR_a) <----I---o---I-----> time(IR_v) ------> time(IR_r)
        |               |                |                   |
     time(EG_a)         |                |                   |
        |               |                |                   |
        +---Evidence-------------------->|                   |
        |   {I,time(EG_a)-time(VG_a)}    |                   |
        |               |                |                   |
        |               |           time(RG_v)               |
        |               |                |                   |
        |<--Attestation Result-----------+                   |
        |   {I,time(RX_v)-time(RG_v)}    |                   |
        |               |                |                   |
    time(IR'_a) <---I'--o--I'-----> time(IR'_v) -----> time(IR'_r)
        |                                |                   |
        +---[Attestation Result------------------------> time(RA_r)
        |    {I,time(RX_v)-time(RG_v)},I']                   |
        |                                |                   |
        ~                                ~                   ~
        |                                |                   |
        |                                |               time(OP_r)

Example 4: Timestamp-Based Background-Check Model

The figure below illustrates a hypothetical Background-Check Model
solution that uses timestamps and requires roughly synchronized
clocks between the Attester, Verifier, and Relying Party.
The Attester conveys Evidence to the Relying Party, which treats it as opaque and simply forwards it on to the Verifier.

Timestamp-Based Background-Check Model

   .----------.    .---------------.                  .----------.
   | Attester |    | Relying Party |                  | Verifier |
   '----------'    '---------------'                  '----------'
     time(VG_a)            |                               |
        |                  |                               |
        ~                  ~                               ~
        |                  |                               |
     time(EG_a)            |                               |
        |                  |                               |
        +---Evidence------>|                               |
        |   {time(EG_a)}   |                               |
        |             time(ER_r) ---Evidence{time(EG_a)}-->|
        |                  |                               |
        |                  |                          time(RG_v)
        |                  |                               |
        |             time(RA_r) <---Attestation Result----+
        |                  |         {time(RX_v)}          |
        ~                  ~                               ~
        |                  |                               |
        |             time(OP_r)                           |

The time considerations in this example are equivalent to those
discussed under Example 1.

Example 5: Nonce-Based Background-Check Model

The figure below illustrates a hypothetical Background-Check Model
solution that uses nonces; thus, it does not require that any clocks
be synchronized.
In this example solution, a nonce is generated by a Verifier at the request of a Relying Party when the Relying Party needs to send one to an Attester.

Nonce-Based Background-Check Model

   .----------.    .---------------.                  .----------.
   | Attester |    | Relying Party |                  | Verifier |
   '----------'    '---------------'                  '----------'
     time(VG_a)            |                               |
        |                  |                               |
        ~                  ~                               ~
        |                  |                               |
        |                  |<-------Nonce-------------time(NS_v)
        |<---Nonce----time(NR_r)                           |
        |                  |                               |
     time(EG_a)            |                               |
        |                  |                               |
        +--Evidence{Nonce}-->|                             |
        |                  |                               |
        |             time(ER_r) ---Evidence{Nonce}------->|
        |                  |                               |
        |                  |                          time(RG_v)
        |                  |                               |
        |             time(RA_r) <---Attestation Result----+
        |                  |    {time(RX_v)-time(RG_v)}    |
        ~                  ~                               ~
        |                  |                               |
        |             time(OP_r)                           |

The Verifier can check whether the Evidence is fresh and a Claim
value is recent, which is the same as in Example 2.

However, unlike in Example 2, the Relying
Party can use the Nonce to determine whether the Attestation Result is
fresh by verifying that time(OP_r)-time(NR_r) <
Threshold.

However, the Relying Party must still be careful not to allow continued
use beyond the period for which it deems the Attestation Result to remain
valid. Thus, if the Attestation Result sends a validity lifetime
in terms of time(RX_v)-time(RG_v), then the Relying Party can check
time(OP_r)-time(ER_r) < time(RX_v)-time(RG_v).
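A minimal sketch of these two Relying Party checks follows (illustrative Python only; variable names mirror the event notation above).

   def result_is_fresh(time_OP_r, time_NR_r, threshold):
       return (time_OP_r - time_NR_r) < threshold       # nonce-based freshness

   def result_still_valid(time_OP_r, time_ER_r, validity_lifetime):
       # validity_lifetime corresponds to time(RX_v)-time(RG_v) from the Verifier
       return (time_OP_r - time_ER_r) < validity_lifetime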
Acknowledgments

The authors would like to thank the following people for their input: , , , , , , , , , , , , , , , , , , , , , and .

Contributors

 created initial versions of the terminology section in collaboration with . provided the conceptual
separation between Attestation Provision Flows and Attestation Evidence
Flows.
was a key author of a document that was merged to create
this document.
provided many of the motivational building
blocks with respect to the Internet Threat Model. contributed critical review
feedback as part of the weekly design team meetings that added precision
and depth to several sections.