Securing over-the-air updates on IoT devices with verifiable credentials
Securing firmware updates for embedded devices without costly PKI infrastructure: using decentralized identifiers (DIDs) and SIOPv2, a secure, verifiable update process can be built, with Verifiable Presentations streamlining authentication and authorization in a cost-effective way. The whole process can be implemented seamlessly with filancore Sentinel, which simplifies identity and credential management for IoT updates.

After their last adventure designing an auditable door lock, Alice and Bob are now tasked with implementing an update process on these simple embedded devices. The customer did not mention that requirement beforehand but, fortunately, production is only just starting, so most parts of the process can still be influenced.
They were given a rough outline of what is expected:
- There is an isolated end-of-line process that flashes the initial firmware to the device.
- The firmware and its updates are proprietary and their distribution should fit into the existing processes.
- Updates must be signed by the company and verifiable for the device.
- Licensing costs for public key infrastructure (PKI) should be avoided if possible.
Version zero: deploying firmware
Alice is intrigued: she has always wanted to sneak into the hardware department and get a look at its processes. Unfortunately, when the customer said isolated end-of-line process, she quickly caught on that the hardware folks took that seriously and really meant isolated. After fetching the sources of a firmware release, the whole build process runs sandboxed and with limited connectivity.
She was also told by a friendly guy named Fred that the devices all receive the same firmware and only get a unique identity after being turned on for the first time, still in an isolated environment. Apparently, at the moment they only run a connectivity check, during which each device identifies itself with its MAC address, from which its unique serial number is later derived.
With this understanding of the end-of-line process, she decided to return to her office and hit Bob up for some brainstorming; that had worked quite well the last time around. After listening to Alice's introduction to the process, Bob, the spoilsport, of course immediately pointed out that an isolated process with nothing but a connectivity check looked really hostile to creating an account for the device on their download center and getting API credentials onto the device before it is shipped.
However, Alice had stopped by the coffee maker on her way back to the office, so she was in a far better mood and started drafting a solution: “What if we generated one-time access codes for our download center and put them on the connectivity check server? The MAC address check would then respond with such a code instead of a ‘no content’ response, and the device would use it once in the field to get proper credentials.”
Bob didn’t really like it and foresaw some nasty problems with these one-time codes but he didn’t have a better idea yet, so he decided he’d roll with it for now to kickstart the design debate.
Verifying updates: the threat of file serving
Now that Alice and Bob agreed on a way to get access credentials onto the device, the next step was designing an update check. Fortunately, their download center’s current API already had them covered: there is a POST endpoint that, given the device’s own firmware version and authentication details, returns a list of files to download, and a GET endpoint to actually fetch files from. So much for the easy part.
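For illustration, here is a minimal sketch of how the device could talk to these two endpoints. The base URL, paths, request fields, and the bearer-token header are assumptions made up for this example; the actual shape is whatever the existing download center API defines.

# Sketch of the device-side update check. All endpoint paths, field names, and the
# bearer-token header are assumptions for illustration only.
import requests

DOWNLOAD_CENTER = "https://download.company.com"  # hypothetical base URL

def check_for_updates(current_version: str, access_token: str) -> list:
    """POST the current firmware version and get back the list of files to download."""
    response = requests.post(
        f"{DOWNLOAD_CENTER}/api/updates",
        json={"firmware_version": current_version},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def download_file(url: str, access_token: str, destination: str) -> None:
    """GET a single update artifact and stream it to disk."""
    with requests.get(url, headers={"Authorization": f"Bearer {access_token}"},
                      stream=True, timeout=300) as response:
        response.raise_for_status()
        with open(destination, "wb") as f:
            for chunk in response.iter_content(chunk_size=65536):
                f.write(chunk)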
This time Alice wanted to be the spoilsport and remarked that the current system had one major omission with respect to the customer’s wishes: files are not inherently signed; there is only a checksum file available. Bob tried hand-waving this time and suggested that they simply do it like most package managers and add another file containing a signature of the hash.
He then noticed himself that this is only part of the solution: the device would still need a trusted way of verifying the signature. That means either a bundled certificate or a trusted way to fetch the public key to verify against. Fortunately, their adventures with filancore streams taught them about self-sovereign identities and their relationship to key material and signatures.
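To make the package-manager analogy concrete, here is a minimal sketch of such a detached signature over the checksum file, assuming an Ed25519 key pair and Python’s cryptography package; key handling and file layout are illustrative only.

# Sketch of the "sign the hash" idea: the release pipeline signs the checksum file,
# the device verifies that signature with a trusted public key. Ed25519 and the
# cryptography package are assumptions for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_checksums(private_key: Ed25519PrivateKey, checksum_file: bytes) -> bytes:
    """Release side: produce the detached signature shipped next to the checksum file."""
    return private_key.sign(checksum_file)

def verify_checksums(public_key: Ed25519PublicKey, checksum_file: bytes, signature: bytes) -> bool:
    """Device side: only trust the checksums if the signature matches."""
    try:
        public_key.verify(signature, checksum_file)
        return True
    except InvalidSignature:
        return False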
So instead of diving into the expensive public key infrastructure the customer wanted to avoid, they decide to trust their company’s domain and its SSL certificate and settle for a did:web identity to serve the key. Thus, they resolve the identity did:web:company.com to a JSON document containing the public key material to verify the signature files with. And while it does not add much on top of the did:web identity itself, the corresponding domain linkage assertion makes them feel reasonably confident about the served identity. The resolved DID document and the domain linkage credential look roughly like this:
{
  "@context": ["https://www.w3.org/ns/did/v1", …],
  "id": "did:web:company.com",
  "verificationMethod": [
    {
      "id": "did:web:company.com#0",
      "controller": "did:web:company.com",
      "type": "JsonWebKey2020",
      "publicKeyJwk": {…}
    }
  ],
  "authentication": ["did:web:company.com#0"],
  "assertionMethod": ["did:web:company.com#0"],
  "keyAgreement": ["did:web:company.com#0"],
  "service": []
}
{
  "@context": "https://identity.foundation/.well-known/did-configuration/v1",
  "linked_dids": [
    {
      "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://identity.foundation/.well-known/did-configuration/v1"
      ],
      "issuer": "did:web:company.com",
      "issuanceDate": "1970-01-01T01:01:01Z",
      "expirationDate": "2099-01-01T01:01:01Z",
      "type": ["VerifiableCredential", "DomainLinkageCredential"],
      "credentialSubject": {
        "id": "did:web:company.com",
        "origin": "https://company.com"
      },
      "proof": {
        "type": "Ed25519Signature2018",
        "created": "1970-01-01T01:01:01Z",
        "jws": "…",
        "proofPurpose": "assertionMethod",
        "verificationMethod": "did:web:company.com#0"
      }
    }
  ]
}
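For a rough idea of what resolution looks like on the device, here is a sketch that maps the bare-domain identifier did:web:company.com to its well-known DID document URL and picks out a verification method. Path and port handling of the full did:web method are omitted.

# Sketch of resolving a bare-domain did:web identity and looking up the key named in
# a signature. Only the simple did:web:<domain> form is handled here.
import requests

def resolve_did_web(did: str) -> dict:
    """did:web:company.com resolves to https://company.com/.well-known/did.json."""
    domain = did.removeprefix("did:web:")
    response = requests.get(f"https://{domain}/.well-known/did.json", timeout=30)
    response.raise_for_status()
    return response.json()

def find_verification_method(did_document: dict, key_id: str) -> dict:
    """Find e.g. 'did:web:company.com#0' among the document's verification methods."""
    for method in did_document.get("verificationMethod", []):
        if method["id"] == key_id:
            return method
    raise KeyError(f"no verification method {key_id} in DID document")

# Usage:
# document = resolve_did_web("did:web:company.com")
# jwk = find_verification_method(document, "did:web:company.com#0")["publicKeyJwk"]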
This design now comes with a few trust assumptions, the most important one being that the company domain’s SSL certificate is always up to date and that neither the certificate nor the server is compromised. As a number of different components on the device already make this assumption, Alice and Bob decide to proceed, but they note it down for the whitepaper describing the update flow.
SIOPv2: hardening the login procedure
One nice flowchart done; that calls for a coffee break to find the sore spots in this plan. It worked the last time, why not again? On their way to the coffee maker, they passed one of their company’s “great employer” certificates. Bob stopped. “Wait a minute,” he said, “such a certificate can be used by our company to prove to others how satisfied the employees are. Maybe we can use the same approach for login instead of the brittle one-time token approach. Surely one of the industry standards like OAuth or OpenID Connect (OIDC) has a way to log us in if the device says ‘trust me, I’ve been produced by the company’.”
Alice liked the idea. Given that they use a decentralized did:web identity for update verification, it seemed quite reasonable to look into self-sovereign certificates, aka Verifiable Credentials. Instead of “great employer”, the device would be certified to be “produced by us” using a signature verifiable with the decentralized identity. So instead of the one-time access code, the end-of-line process will issue such a credential for the device so that the device is able to present it when logging in.
Now that we have something to present, both Alice and Bob were wondering how to proceed. Basically they wanted to build some kind of challenge-response authentication using this credential. A short while of digging through OIDC standards later, they found that actually there is such a thing as being a self-issued identity provider, i.e., “login with device” instead of “login with Google”. This SIOPv2 standard provides various flows for authentication without username and password.
For that to work, Alice and Bob have to revisit one major assumption: the device needs a crypto chip with a private and public key. Fortunately, the door lock already signs things and fulfills this requirement. And the public key is something that can be made known to the update server by including it in the end-of-line process’ information gathering that already saves the MAC address. So far so good, the device can sign requests, the update server knows the public keys to verify them. Now, they needed to understand SIOPv2.
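In the flow below, the device identifies itself with a did:key derived from this public key. As a rough illustration of what that derivation looks like, here is a sketch assuming an Ed25519 device key and the base58 package; other key types use different multicodec prefixes.

# Sketch of deriving a did:key identifier from a device's raw public key, assuming an
# Ed25519 key for illustration. Requires the base58 package.
import base58

ED25519_MULTICODEC = bytes([0xED, 0x01])  # varint-encoded multicodec for ed25519-pub

def did_key_from_ed25519(public_key: bytes) -> str:
    """Encode a raw 32-byte Ed25519 public key as a did:key identifier."""
    if len(public_key) != 32:
        raise ValueError("expected a raw 32-byte Ed25519 public key")
    multibase = "z" + base58.b58encode(ED25519_MULTICODEC + public_key).decode()
    return f"did:key:{multibase}"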
As the update server and the device are two separate entities, the relevant flow from the SIOPv2 specification is the cross-device flow. The basic process for logging in then is as follows:
- The device initiates a login request telling the update server who it claims to be (by means of a decentralized identifier, in this case the device’s public key encoded as did:key).
- The update server responds to the login request with a SIOPv2 URL like siopv2://?client_id=did:key:…&request_uri=https://auth.company.com/request/<request-id>
- The device fetches the additional parameters of the login request, like below, and processes the request, sending a signed response to the update server.
{"alg": "ES256", "kid": "did:web:company.com#0", "typ": "JWT"}
.
{
  "client_id": "did:key:…",
  "response_type": "id_token",
  "redirect_uri": "https://auth.company.com/redirect",
  "scope": "openid",
  "client_metadata": {
    "subject_syntax_types_supported": ["did:key", "did:web"],
    "id_token_signed_response_alg": "ES256"
  },
  "nonce": "n-0S6_WzA2Mj"
}
.
[signature]
- The update server checks this response for authenticity (by checking the list of allowed public keys for the public key that just completed the request) and decides whether to let the device in with its self-issued ID token.
{
  "iss": "did:key:…",
  "sub": "did:key:…",
  "aud": "did:key:…",
  "nonce": "n-0S6_WzA2Mj",
  "exp": 1311281970,
  "iat": 1311280970
}
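How the device could assemble and sign such a self-issued id_token is sketched below, using PyJWT and an ES256 (P-256) key as requested by the client_metadata above. The key loading, the kid fragment, and the token lifetime are illustrative assumptions.

# Sketch of the device building and signing the self-issued id_token as a JWT with
# PyJWT and an ECDSA P-256 key (ES256). Claim values, the kid fragment, and the token
# lifetime are illustrative.
import time
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization

def build_id_token(device_did: str, audience: str, nonce: str, private_key_pem: bytes) -> str:
    private_key = serialization.load_pem_private_key(private_key_pem, password=None)
    now = int(time.time())
    claims = {
        "iss": device_did,  # self-issued: issuer and subject are both the device
        "sub": device_did,
        "aud": audience,    # the client_id from the request object
        "nonce": nonce,     # echoes the challenge from the request
        "iat": now,
        "exp": now + 600,
    }
    # The kid should reference a verification method of the device's DID; "#0" is a placeholder.
    return jwt.encode(claims, private_key, algorithm="ES256", headers={"kid": f"{device_did}#0"})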
Alice and Bob were satisfied. That kind of authentication looked a lot more reasonable than the ad-hoc one-time access code. And it even uses an open standard with only one tiny adjustment to the end-of-line process. Unfortunately, it did not even need their nice idea of presenting a verifiable credential yet.
Having abandoned their earlier trip to the coffee maker in favor of researching SIOPv2, Alice and Bob finally reached it on another attempt. Charlie from ops was there too and complained loudly about the customer login not working on the support page; what a coincidence. In the heat of the discussion about customer authorization failures, Alice realized that their SIOPv2 approach provides a nice way to authenticate the device, but the authorization is basically a good old allowlist. There had to be a more elegant solution.
An hour of rekindled standards research later, she emerged from the internet with another concept: Verifiable Presentations. After finding Bob submerged in documenting their current approach, she stopped him and suggested he look at the following flow:
- Prerequisite: The update server expects a certain kind of verifiable credential. Let’s say it needs the fields produced-by: us, production-date: <date>, and serial-number: <number>, signed by our very own did:web:company.com.
- Prerequisite: The device actually holds such a credential because that’s what the end-of-line provisioning sends to the device instead of one-time access codes.
- The device initiates a login request claiming to be did:key:<public-key> (like in the pure SIOPv2 case).
- The update server responds to the login request with a SIOPv2 URL acknowledging the request. In its response, the server not only requests an id_token but also a vp_token for a verifiable presentation of the credential from the provisioning process, proving that it was produced-by: us etc.
{
  "client_id": "did:key:…",
  "response_uri": "https://auth.company.com/redirect",
  "response_type": "vp_token id_token",
  "response_mode": "direct_post",
  "presentation_definition": {
    "id": "vp token example",
    "input_descriptors": [
      {
        "id": "ProductionCredential",
        "format": {…},
        "constraints": {
          "fields": [{
            "path": ["$.produced-by"],
            "filter": {
              "type": "string",
              "pattern": "us"
            }
          }]
        }
      }
    ]
  },
  "nonce": "n-0S6_WzA2Mj",
  …
}
- The device fetches additional parameters of the login request and processes the request, sending a signed response to the update server that includes a verifiable presentation of the credential.
- The update server checks this response for authenticity and decides whether to let the device in solely based on the validity of the provided verifiable presentation without having to check an allowlist.
Bob was intrigued: not only does this remove a communication step between the isolated provisioning environment and the update server, it also provides a reliable way of authenticating and authorizing in one flow, so you only have to provide eligibility criteria for authorization, like production-date in a certain range, instead of serial-number from this list.
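On the server side, the authorization decision then boils down to evaluating the presentation_definition against the credential carried in the vp_token. Below is a much-simplified sketch of that check, assuming all signatures have already been verified and handling only the flat “$.field” paths used in the example above.

# Sketch of evaluating an input descriptor from the presentation_definition against a
# credential's subject. Signature checks, the surrounding VP parsing, and full JSONPath
# support are intentionally left out; only flat "$.<field>" paths are handled.
import re

def resolve_path(credential_subject: dict, json_path: str):
    """Resolve a flat '$.<field>' path against the credential subject."""
    if not json_path.startswith("$."):
        raise ValueError(f"unsupported path: {json_path}")
    return credential_subject.get(json_path[2:])

def satisfies_descriptor(credential: dict, input_descriptor: dict) -> bool:
    subject = credential.get("credentialSubject", {})
    for field in input_descriptor["constraints"]["fields"]:
        # A field matches if any of its paths resolves to a value passing the filter.
        values = [resolve_path(subject, path) for path in field["path"]]
        value = next((v for v in values if v is not None), None)
        if value is None:
            return False
        flt = field.get("filter", {})
        if "pattern" in flt and not re.search(flt["pattern"], str(value)):
            return False
    return True

# e.g. satisfies_descriptor(production_credential,
#                           request["presentation_definition"]["input_descriptors"][0])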
Credentials everywhere
Now that they have been hooked by the world of verifiable credentials and presentations, they pondered whether they really need all that fuss with multiple files in and around the update: a file for the update, a file for its hash, and a file for its signature, tied to a predetermined (not self-contained) identity that signed the update. That sounds like such a prime use case for credentials that they simply had to take a stab at redesigning it.
First off, they would have to integrate the hash and the signature into a credential, so they needed the credential basics first. The issuer of the update, and therefore also of the attestation that makes up the credential, is did:web:company.com. Check. The credential subject is basically the file, so that gets a bit hairy. We identify the file by its hash, so let’s just add a hash property to the subject (basically, the subject represents the credential’s claims) and be done with it. By convention, subjects are also assigned an ID, usually a URL; we don’t strictly need one here, but it’s probably a good idea to use the URL the update will be retrieved from, so something like https://download.company.com/firmware/vx.y.z/archive.tar.gz.
At this point, Alice and Bob have basically replaced the separate hash file with verification metadata that provides the hash and names the file it is supposed to be making claims about. The only thing still missing is the signature. Fortunately, there is a standard for that too, so we simply add a proper cryptographic proof to the credential that identifies did:web:company.com as the signing identity holding the keys, and we’ve got our verifiable credential for the update package.
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "issuer": "did:web:company.com",
  "issuanceDate": "1970-01-01T01:01:01Z",
  "type": ["VerifiableCredential", "VerifiableFirmwareUpdate"],
  "credentialSubject": {
    "id": "https://download.company.com/firmware/v1.2.3/archive.tar.gz",
    "hash": "…"
  },
  "proof": {
    "type": "Ed25519Signature2018",
    "created": "1970-01-01T01:01:01Z",
    "jws": "…",
    "proofPurpose": "assertionMethod",
    "verificationMethod": "did:web:company.com#0"
  }
}
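To give a feeling for how the build pipeline could produce such a credential, here is a sketch that hashes the archive, fills in the subject, and attaches a proof. The jws value here is a simplified stand-in (an Ed25519 signature over the canonically serialized JSON) rather than a spec-compliant Ed25519Signature2018 Linked Data proof; a real pipeline would delegate that part to a VC library or the filancore SDK.

# Sketch of issuing the firmware update credential: hash the archive, fill in the
# subject, and sign. The proof is a simplified stand-in (Ed25519 over canonical JSON),
# not a full Ed25519Signature2018 Linked Data proof.
import base64
import hashlib
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def issue_firmware_credential(archive_path: str, download_url: str,
                              issuer_key: Ed25519PrivateKey) -> dict:
    issued = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "issuer": "did:web:company.com",
        "issuanceDate": issued,
        "type": ["VerifiableCredential", "VerifiableFirmwareUpdate"],
        "credentialSubject": {"id": download_url, "hash": sha256_of_file(archive_path)},
    }
    payload = json.dumps(credential, sort_keys=True, separators=(",", ":")).encode()
    signature = issuer_key.sign(payload)
    credential["proof"] = {
        "type": "Ed25519Signature2018",
        "created": issued,
        "jws": base64.urlsafe_b64encode(signature).decode().rstrip("="),
        "proofPurpose": "assertionMethod",
        "verificationMethod": "did:web:company.com#0",
    }
    return credential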
Alice pondered whether they should discuss further ideas, like putting the credential directly into the archive. But she figured that, given the referential guarantees of credentials, the separate file is sufficient. As a separate (plain text / JSON) file, it is also the more responsible choice: nobody has to download and extract an unknown archive, in a potentially faulty archive format that could brick the device, before being able to verify the signature of what has been downloaded. And if they needed to regenerate the credentials because the identity rotated its keys, they would not have to regenerate the images.
Content with the creation process of the update and its corresponding credential, Alice and Bob decide to pitch the idea to the firmware folks around Fred and call it a day.
Performing the update
Fred looked at the charts provided by Alice and Bob. “This is something new”, he thought. “So let’s see if we can integrate your process into our existing update solution SWUpdate”.
“First off, I really appreciate you guys thinking this through. Unfortunately, I may have expressed myself poorly. We cannot use the customer-facing download center for firmware serving, we will have to make do with our current image provider infrastructure for compatibility reasons.” After seeing a sad look on Alice’s face, he continued, “Still, I like the SIOPv2 approach for other requests we make, we have a few server connections on the device that we should be able to use it for. And we will surely migrate to a server that supports it if we change systems.”
He invited Alice and Bob to the hardware department’s self-engineered coffee maker and a whiteboard to talk it over. Fortunately, the current process does not involve much authentication, so they skip ahead to the credential verification of the update. It would have been too easy if Fred didn’t have another request for that too: “Guys, I know you explained that you wanted the credential to be hosted separately but would you mind if we included it in the image?”
After a short discussion, it seemed that untrusted extraction of the downloaded .swu image was not part of the threat model, and not having to fetch the credential separately definitely has its merits. It would still provide a hash over the actual update content and a signature, so its purpose would remain intact. The device will still have to resolve did:web:company.com for the public key needed to verify it. Alice tried to object but realized that her previous point about possibly regenerating the credentials due to key changes is practically moot, because credentials can reference any key in the did:web:company.com identity, so the identity can add new keys without breaking old ones.
Bob ventured one of his open questions: “We now have a verification flow, but what does the actual verification? I’m not sure SWUpdate will be able to do that on its own, right?” Fred agreed but already had a solution: SWUpdate provides a pre-update hook that is executed after downloading the update but before installing it. This hook would verify the credential from the image, checking the image data for integrity and the signature for verifiability, and abort the update if there is a problem. Incidentally, the hook was actually introduced to aid creating alternative firmware verification modes. So the image basically needs two changes: a config change for the pre-update hook and a small binary to run there that verifies the image.
Great, so the update process already works and the image has been adjusted. They quickly agreed that the surrounding tooling would be rather easy to write, i.e., storing the response of the connectivity test as a verifiable credential on the device and writing a verifier binary for the pre-update hook. Fortunately, they have known filancore and its SDK since the access log part of the project, so they can finish this quickly.
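A sketch of what such a verifier could boil down to is shown below: recompute the image hash, compare it with the credential, and check the proof against the company key (passed in here as raw Ed25519 key bytes, for example taken from the resolved did:web document). The file locations, how the credential is extracted from the .swu image, and the simplified proof check (mirroring the issuance sketch above) are assumptions; a production verifier would use a proper VC library or the filancore SDK.

# Sketch of the pre-update verifier: a non-zero return value signals the hook to abort
# the update. Paths, credential extraction from the .swu image, and the simplified
# proof check are illustrative assumptions.
import base64
import hashlib
import json
import sys
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_proof(credential: dict, issuer_key: Ed25519PublicKey) -> bool:
    """Check the simplified proof from the issuance sketch (not a full LD-proof check)."""
    unsigned = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(unsigned, sort_keys=True, separators=(",", ":")).encode()
    jws = credential["proof"]["jws"]
    signature = base64.urlsafe_b64decode(jws + "=" * (-len(jws) % 4))
    try:
        issuer_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

def main(image_path: str, credential_path: str, issuer_key_raw: bytes) -> int:
    with open(credential_path) as f:
        credential = json.load(f)
    if sha256_of_file(image_path) != credential["credentialSubject"]["hash"]:
        print("image hash does not match credential", file=sys.stderr)
        return 1
    if not verify_proof(credential, Ed25519PublicKey.from_public_bytes(issuer_key_raw)):
        print("credential proof does not verify", file=sys.stderr)
        return 1
    return 0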
Speaking of the access log, as the devices now have their credentials, they can revamp the authentication on that front by allowing new devices to log in with SIOPv2 so they can fade out the allowlist of devices that can access the server without breaking anything. Bob was relieved. While the whole SIOPv2 flow hasn’t proven as helpful as hoped for the update process, they have still improved the overall setup with it so they didn’t waste all that reading of complicated standards.
After this rather interesting design session, Fred congratulated Alice and Bob for the nice preparation and discussion and wished them a nice day. Both left the hardware department with the good feeling of a successful collaboration.
Using filancore Sentinel for OIDC
Alice and Bob completed their next adventure and dipped their toes into the world of OIDC and self-sovereign identity standards. Next time around, they could simply use filancore Sentinel, which:
- reliably handles identity and credential management without PKI costs,
- supports all common identity-based OIDC flows like SIOPv2, OID4VCI, and OID4VP,
- allows integrating these flows with existing identity providers like Keycloak,
- provides ready-made integration with industry-grade OTA update solutions like SWUpdate, and
- provides an IoT-ready SDK for issuing and verifying credentials.