About Miracle:

The SCARV Miracle study aims to provide a rigorous and systematic evaluation of micro-architectural power side-channel leakage effects found in common embedded CPUs and micro-controllers.

  • The Targets page lists the set of target devices we have analysed so far as part of the study.
  • The Experiments page lists each experiment, and the targets for which we have results.
  • All of the infrastructure and experiment code used in the study is available on GitHub.

If you use our work in papers or reports, please consider letting your readers know:

@MISC{scarv:miracle,
    author       = {Ben Marshall and Daniel Page and James Webb},
    title        = {Miracle: Micro-architectural Leakage Evaluation},
    howpublished = {\url{miracle.scarv.org}, \url{github.com/scarv/miracle-experiments}}
}

What we hope to contribute:

Cryptographic engineering is hard. Writing side-channel resistant software is extremely hard. We want to make it easier.

  • There is a lot of literature on algorithms which, if implemented correctly, we are reasonably confident will behave robustly under leakage analysis.
  • We have a weak notion of "correctness" with respect to leakage resilience.
  • Attacks and detection techniques are improving all the time, and their results, and the exactness or strength of their claims, are easily misunderstood.
  • Algorithms which are provably secure in theory do not always stay secure once implemented.
  • It is rarely obvious exactly where leakage is coming from or why. There are many device-specific pitfalls which one can encounter when writing leakage resistant code.
  • There is lots of literature on abstract masking algorithms, and on implementation effects which give rise to leakage. However, these two bodies of literature rarely seem to interact.
  • There is a need for practical guidance and information for engineers: How to approach writing leakage resistant code for a given device? How to design a new device with leakage resilience in mind?
  • Different devices are often hard or impossible to compare based on their leakage characteristics. For example, the ARM Cortex-M0 core is one of the most studied devices in the world from a leakage perspective, yet finding comparable data sets, or lists of characteristics and implementation "gotchas", is almost impossible.
  • Recent work tries to create provably secure masked implementations by modelling the micro-architectural behaviour of target devices. Without access to the underlying engineering designs, these models must be built systematically and empirically using experiment sets like ours.

Arguably, a bigger contribution is being able to organise each of these new effects, and devise experiments to test for them quantitatively. Clear examples of new, interesting, or existing effects then act as a useful contribution in their own right, as well as a proof of concept for the usefulness of the tooling and methodology.

It is reasonable to assert that there are potentially hundreds or thousands of different combinations of device, micro-architectural leakage source, and method of exploitation. The value of finding any given one of these is hence small relative to the total problem space (this follows from the existing literature, where many papers appear, each detailing a small effect). The real value then comes from making it easy to explore the problem space, and to organise the results of that exploration.

Qualitative contributions:

  • Raise awareness of implementation difficulties across the literature, and bridge the gap between more "theoretical" leakage papers and empirical studies of leakage effects.
  • Understand how the same *code* can leak differently across different devices.
  • Understand how the same *action* (syntactically different but semantically identical code) can leak differently across different devices; a minimal sketch follows this list.
  • Understand what the most important questions are about a device from the perspective of its behaviour under leakage analysis.
  • Enable a third party to contribute information on a device, or to pose a new, relevant experiment.
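
To make the "same action" point concrete, the sketch below shows two C functions which are semantically identical (both refresh a 2-share Boolean masking of a secret byte with a fresh random byte), but which form their intermediate values differently. The names are illustrative only, and whether either version actually leaks on a given device depends on the compiler's register allocation and the device's micro-architecture; it is exactly this kind of difference the study tries to expose:

#include <stdint.h>

typedef struct { uint8_t s0, s1; } shares_t;   /* secret x = s0 ^ s1 */

/* Version A: remask each share independently. */
shares_t remask_a(shares_t in, uint8_t r) {
    shares_t out;
    out.s0 = in.s0 ^ r;
    out.s1 = in.s1 ^ r;
    return out;
}

/* Version B: semantically identical, but both shares pass through the
   same temporary; if the compiler keeps t in one register, the
   transition from s0 to s1 can leak their Hamming distance, i.e. the
   unmasked secret x. */
shares_t remask_b(shares_t in, uint8_t r) {
    shares_t out;
    uint8_t  t = in.s0;
    out.s0 = t ^ r;
    t      = in.s1;     /* t transitions from s0 to s1 here */
    out.s1 = t ^ r;
    return out;
}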

Tooling:

  • Create a set of micro-benchmarks which probe *specific* pieces of functionality across a range of devices.
  • Ideally, each benchmark yields a yes/no answer to a question of the form "Does device X exhibit leakage when executing this particular code idiom?" (a sketch of one such idiom follows this list).
  • Care should be taken to avoid questions which tempt comparisons of the form "device X leaks *more/less* than device Y"; the leakage detection methods currently used in the literature do not support such comparisons.
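
As a sketch of what such a micro-benchmark might look like (names hypothetical; this is not the actual Miracle experiment code), the kernel below exercises a single idiom, overwriting one value with another, so that a host-side fixed-versus-random statistical test over recorded power traces can answer the yes/no question for a specific device. In practice, kernels of this kind are typically written in assembly so that register allocation is controlled exactly:

#include <stdint.h>

volatile uint32_t sink;                  /* keeps the result observable   */

/* Hypothetical kernel: the host chooses a and b, triggers the kernel,
   records a power trace, and repeats for fixed and random input
   classes.  Leakage appears if overwriting the cell holding a with b
   causes data-dependent power consumption (e.g. Hamming distance). */
void overwrite_kernel(uint32_t a, uint32_t b) {
    volatile uint32_t t;                 /* volatile so both writes occur */
    t    = a;                            /* cell first holds a ...        */
    t    = b;                            /* ... then is overwritten by b  */
    sink = t;
}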

Methodology:

  • A tool flow that is totally separable from the devices and experiments.
  • Create a standardised flow for running said benchmarks on different devices and collating their results *in an actionable way*.
  • It should be *trivial* for individuals / organisations to add both new devices and new benchmarks.
  • Present results such that the community can add to them over time.

Systematisation of Knowledge:

  • Given a set of benchmarks and a standardised flow for executing them, create a database of as many different devices and their analysis results as possible.
  • Engineers and researchers can then query the database, for example:
  • For a given benchmark, how do all devices behave under it?
  • For a given device, what key leakage phenomena should I be aware of when writing leakage resilient code for it?
  • Does this device behave in ways that transparently undermine my proof of security?
  • Given some unexpected leakage in my implementation, which known effects in this device might be the cause?
  • How can, or how is, this source of leakage exploited? Do I, as an engineer, need to worry about it?
  • Which parts of the academic literature are relevant to this device or effect?