It’s summertime, people are wandering off to the world of vacations, and Fred has the hardware department to himself for a change. Since his adventure with the software folks Alice and Bob securing OTA updates, he has spent quite a few off-minutes thinking about the company’s next projects and common pitfalls he might want to address in his next designs.

One problem repeatedly surfaced: Fred and his team are building small sensor devices, usually the start of a chain of data processing, maintenance monitoring, or other uses of the devices’ output. But at the moment, the chip just dumps data to a third party that collects it. And this data collector trusts each chip to provide valid data and not to claim to be some other chip.

While this kind of trust is fine for isolated sensor networks, the problem of actually verifying the origin of data, so that no sensory “identity crisis” stays undetected, is an obstacle for any production system that is or will be audited, especially as supply chain regulation keeps increasing maintenance and verification duties.

The problem of persistence and remote verifiability

So Fred does what every engineer likes to do: not close his eyes and hope the problem goes away, but start imagining the next project he might have to work on, so he can come up with a nice, not-too-overengineered solution upfront. As a start, he considers the following simple scenario: there is an IoT device with an image sensor that has to send its data to some kind of collection service, potentially a gateway to a whole series of processing steps.

It’s not as if Fred particularly liked imaging sensors. But there is a reason for this choice: image data are comparatively large. There is no way the sensor device can store all the data it ever created, let alone in some kind of trusted storage. So the data collector can’t perform spot checks when it starts doubting the origin of the data. With that constraint, Fred has cut himself off from some very convenient shortcuts, like just putting in more memory.

As soon as the sensor goes into mass production, there will probably be many sensors pushing images to the data collector. One idea is to encode a proof of origin in the image metadata. But then that proof has to be stored somewhere for potential spot checking, too. Sadly, not a real solution.

The inherent conflict at this point is between storing proofs of origin and verifying the origin of data remotely. Fortunately, IT security has a solution for this problem: cryptographic signatures based on public-key cryptography. As long as the data collector knows the sensor’s public key, the sensor can simply sign its data with its private key, and the data collector can check both the integrity and the origin of the data by verifying the cryptographic signature.
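The sign-then-verify flow can be sketched in a few lines. This is only an illustration of the principle, assuming the PyCA `cryptography` library and Ed25519 signatures; any asymmetric signature scheme works the same way.

```python
# Sketch of the sign-then-verify flow using Ed25519 signatures
# (assumes the PyCA "cryptography" library is installed).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the sensor: generate a key pair and sign the outgoing data.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # shared with the data collector

image_data = b"...sensor frame bytes..."
signature = private_key.sign(image_data)

# On the data collector: verify origin and integrity in one step.
try:
    public_key.verify(signature, image_data)
    print("signature valid: data came from this sensor, unmodified")
except InvalidSignature:
    print("signature invalid: wrong origin or tampered data")
```

Note that the collector only needs the public key and the signature; neither the sensor nor the collector has to store the image itself for later spot checks.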

Software keys and weak links in the verification process

At this point, Fred needed a key pair. On his commodity laptop, Fred could have asked the trusted platform module (TPM) for a key and been done. IoT devices don’t come with such a module by default. Still, they usually have enough compute power to run a software-based key generation algorithm.

So Fred decided to use a software implementation of a key pair generation algorithm. Seconds later, he realized that he had just swapped one storage problem for another. If he needed a persistent public key that would not change even after a power outage, he also needed a way to save the private key on the device. And a private key needs trusted storage. Not much of it, but still: you cannot let anyone with access to the device simply read the private key, or else every signature loses its authenticity.

So he pondered his options. Simple file-based storage of the private key was out of the question. But which guarantees would he be lacking? With file-based storage, the data sent to third parties are only as authentic as the file access on the device is secure. If the data collector needs to trust the data to produce a statement based on them, it can verify the signature. But this relies on the assumption that nothing but the internal signature process can access the private key. So how would he restore the necessary trust guarantees by read-protecting the key?

At this point, he could not come up with a software-based solution that could not be circumvented when given physical access to the device. As long as your userspace signing program needs to read the private key to sign, anyone with access to the file system can read the private key. If your supply chain starts with such a weak link, it can’t become more reliable or verifiable than this.
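The weak link is easy to demonstrate. In this sketch (again assuming the PyCA `cryptography` library; the file name is made up), the “attacker” runs exactly the same code path as the legitimate signing routine, because both only need read access to the key file.

```python
# Why file-based key storage fails: anyone who can read the file can sign.
# Assumes the PyCA "cryptography" library; "device.key" is an invented path.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import (
    Encoding, NoEncryption, PrivateFormat, load_pem_private_key)

# Device provisioning: the private key lands in a plain file.
key = Ed25519PrivateKey.generate()
with open("device.key", "wb") as f:
    f.write(key.private_bytes(Encoding.PEM, PrivateFormat.PKCS8, NoEncryption()))

def sign(data: bytes) -> bytes:
    """The legitimate userspace signing routine."""
    with open("device.key", "rb") as f:
        k = load_pem_private_key(f.read(), password=None)
    return k.sign(data)

# Anyone with file system access does exactly the same and forges signatures:
stolen = load_pem_private_key(open("device.key", "rb").read(), password=None)
forged = stolen.sign(b"data that never came from this sensor")
key.public_key().verify(forged, b"data that never came from this sensor")
# The verification passes: the data collector cannot tell the difference.
```

Encrypting the key file only moves the problem: the decryption secret then has to live somewhere readable on the same device.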

Hardware solutions and why they matter

Fred wouldn’t work in the hardware department if he didn’t try to come up with a hardware solution when software let him down again. So back to the point where he would have asked his laptop’s TPM for a key: what does it do to protect its keys, and what would the most basic TPM for an IoT device look like? As it turns out, the TPM spec (ISO/IEC 11889) is impressively complicated, so he drew inspiration from some less convoluted crypto chip designs.

First off, his to-be-designed chip has the primary purpose of read-protecting a private key. So it must never expose that part of the key pair. Hence, it needs to implement all operations that require access to the private key itself: signing and verification of signatures, and potentially asymmetric encryption and decryption. Given the computations involved, that alone would make for a non-trivial chip.

Now that the chip protects the private key, it actually also needs to store the key. So it needs some kind of protected storage that an external entity has no access to. If it didn’t have one, we would be back to new key pairs at each device boot. But if it stores the key, it also needs a way to generate it. So some kind of initialization and, ideally, write-locking the memory afterwards to avoid tampering with the key material. Fred was increasingly annoyed: we just went from a simple “read-protected storage” to a chip design with some serious business logic and security properties of its own.
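The contract Fred is converging on can be captured as an interface sketch. This is a software mock for illustration only (the class name is invented, and a Python object obviously cannot enforce anything against physical access; a real chip enforces these rules in silicon): the private key never crosses the boundary, and provisioning is a one-time, write-locked operation.

```python
# Software mock of the crypto chip's contract (illustration only, assuming
# the PyCA "cryptography" library; a real secure element enforces this in
# hardware).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

class SecureElementMock:
    """Mimics the chip interface: the private key never leaves the object."""

    def __init__(self):
        self._key = None      # protected storage, empty from the factory
        self._locked = False  # write lock, set after provisioning

    def provision(self) -> None:
        """One-time initialization: generate the key pair, then write-lock."""
        if self._locked:
            raise PermissionError("key slot is write-locked")
        self._key = Ed25519PrivateKey.generate()
        self._locked = True

    def public_key(self) -> Ed25519PublicKey:
        """The only key material ever exposed."""
        return self._key.public_key()

    def sign(self, data: bytes) -> bytes:
        """All private-key operations happen inside the element."""
        return self._key.sign(data)

element = SecureElementMock()
element.provision()                                 # factory / first boot
sig = element.sign(b"sensor frame")
element.public_key().verify(sig, b"sensor frame")   # raises if invalid
```

The point of the sketch is the shape of the API: there is a `sign` operation and a `public_key` getter, but deliberately no way to read the private key back out, and no way to overwrite it once provisioned.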

And there are even more caveats here. The chip needs a way to expose the public key belonging to its private key so that others can actually be sure that this device is what it claims to be. Furthermore, the chip should be somewhat tamper-resistant: if it detects a major change in its hardware environment, it should definitely invalidate its keys. As if that wasn’t bad enough, it would probably also need some kind of certification so that the data collector and later parts of the supply chain have no reason whatsoever to doubt the origin of the data when seeing a signature by such a chip.

At this point, Fred gave up his theoretical evaluation of the situation. A crypto chip solves so many problems at once that it was no use designing his own hardware for it. Being out of his depth here, he looked for readily available crypto chip solutions that would help him make his devices the best possible start of a data supply chain.

Managing device identities with crypto chips and filancore Sentinel

Fred had already heard Alice and Bob talk about filancore’s identity solutions. So he decided to check whether they had something in store for him:

  • With the Tropic01 chip from their partner tropicsquare, they recommend a secure element that provides far more hardware support for common verification cases than Fred came up with on his own.
  • filancore ankrypt allows accessing secure elements like these on your IoT device, helping you create and verify signatures and manage your whole device identity.
  • And for managing all the device identities across your fleet and collecting data from your devices, filancore Sentinel and filancore streams have got you covered.