koeberl14entropy

Title of the paper: Entropy Loss in PUF-based Key Generation Schemes: The Repetition Code Pitfall

Available at: https://ieeexplore.ieee.org/document/6855566

Abstract

One of the promising usages of Physically Unclonable Functions (PUFs) is to generate cryptographic keys from PUFs for secure storage of key material. This usage has attractive properties such as physical unclonability and enhanced resistance against hardware attacks. In order to extract a reliable cryptographic key from a noisy PUF response, a fuzzy extractor is used to convert non-uniform random PUF responses into nearly uniform randomness. Bosch et al. in 2008 proposed a fuzzy extractor suitable for efficient hardware implementation using two-stage concatenated codes, where the inner stage is a conventional error correcting code and the outer stage is a repetition code. In this paper we show that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully. For example, PUFs with min-entropy lower than 66% may yield zero leftover entropy in the generated key for some repetition code configurations. In addition, we find that many of the fuzzy extractor designs in the literature are too optimistic with respect to entropy estimation. For high security applications, we recommend a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors and present parameters for generating 128-bit keys from memory-based PUFs.

 

Introduction

Most, if not all, cryptographic schemes require a key and are underpinned by the requirement to store this key material securely. Conventional secure key storage schemes use nonvolatile memory (NVM) technologies to store cryptographic secrets, which can be vulnerable to attacks such as readout and cloning. Key generation and storage solutions based on Physically Unclonable Functions (PUFs) are emerging that promise high levels of assurance at low cost.

Since PUF responses are inherently noisy and may not be uniformly random, a post-processing function is needed to condition the raw PUF responses into high-quality cryptographic keys. This post-processing function is known as the fuzzy extractor or helper data algorithm in the literature.

Recent soft decision fuzzy extractors are based on concatenated codes, where the inner stage is a conventional error correcting code, such as a BCH, Reed-Muller, or Golay code, and the outer stage is a repetition code.

In this paper, we show that fuzzy extractors based on repetition codes are not without risk, particularly when the PUF response has low entropy. If the min-entropy of the PUF response is lower than 66%, then repetition code based fuzzy extractors could result in zero leftover entropy and should be avoided.

Even if the PUF response has excellent entropy, e.g., 95% min-entropy, we need a larger PUF to extract a 128-bit key than estimated in "Efficient helper data key extractor on FPGAs" [3].

Among the memory-based PUFs considered, the SRAM PUF has the highest min-entropy, at 87%. Although most SRAM PUFs have good entropy, certain SRAM devices have been reported with low entropy [10].

Related Work

Dodis et al. [6], [5] provided a formal definition of fuzzy extractors, which convert noisy data into cryptographic keys. Their work provides two fuzzy extractor constructions for the Hamming distance metric: the code-offset construction and the syndrome construction. The code-offset construction is frequently used in the PUF literature, e.g., in [3], [17], [23].

Maes et al. [17] proposed a fuzzy extractor using soft decision error correction for memory-based PUFs. In soft decision fuzzy extractors, the error probability of each PUF cell is stored in the helper data, so that better decisions can be made during error correction. It has been shown in [17] that soft decision fuzzy extractors could reduce the PUF size by up to 58% when generating a 128-bit cryptographic key.
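
The soft decision idea can be illustrated on a single repetition block (this is only a sketch, not the actual decoder of [17]; the per-cell error probabilities would come from the helper data, and the values below are made up): instead of a plain majority vote over hard bits, the decoder weights each received bit by its reliability and decides on the sign of the summed log-likelihood ratios.

    # Illustrative sketch only: hard vs. soft decision decoding of one repetition block.
    import math

    def hard_decode(bits):
        # Plain majority vote over the received bits.
        return int(sum(bits) > len(bits) / 2)

    def soft_decode(bits, p_err):
        # Sum log-likelihood ratios; reliable cells (small p_err) get more weight.
        llr = sum((1 if b else -1) * math.log((1 - p) / p) for b, p in zip(bits, p_err))
        return int(llr > 0)

    received = [1, 0, 0]            # enrolled block was all-1s; two bits flipped
    p_err    = [0.01, 0.40, 0.40]   # the two flipped cells are known to be unreliable
    print(hard_decode(received))            # 0 -> majority vote fails
    print(soft_decode(received, p_err))     # 1 -> soft decision recovers the bit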

All three fuzzy extractors [3], [17], [23] use repetition codes.

Review of Fuzzy Extractors

In this section, the authors introduce:

  • terminology

  • definitions of min-entropy, average min-entropy and statistical distance

  • formal definition of fuzzy extractor

Notation: K denotes the extracted secret key.

The security property states that the extracted key K is very close to a randomly chosen l-bit key, with only a negligible statistical difference. To extract a 128-bit cryptographic key from a fuzzy extractor with 128-bit security, one can choose l = 128 and ε = 2^-128 as parameters. The choices of n, m, and t will depend on the min-entropy and error rate of the PUF response. We shall discuss how to choose these parameters in Section IV for high security applications.
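
As a quick reference for the definitions listed above, here is a minimal sketch (ours, not the paper's) of min-entropy and average min-entropy for toy discrete distributions given as Python dictionaries:

    import math

    def min_entropy(dist):
        # H_inf(X) = -log2(max_x Pr[X = x]); dist maps outcome -> probability
        return -math.log2(max(dist.values()))

    def avg_min_entropy(joint):
        # H~_inf(X | Y) = -log2( sum_y max_x Pr[X = x, Y = y] )
        # joint maps (x, y) -> Pr[X = x, Y = y]
        ys = {y for (_, y) in joint}
        total = sum(max(p for (x, yy), p in joint.items() if yy == y) for y in ys)
        return -math.log2(total)

    # Example: a PUF bit biased to 1 with probability 0.8 has only ~0.32 bits of min-entropy.
    print(min_entropy({0: 0.2, 1: 0.8}))    # ~0.3219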

Fuzzy Extractor based on Code-Offset Construction

Two fuzzy extractor constructions have been provided for the Hamming distance metric [5]: the code-offset construction and the syndrome construction. Both constructions have similar efficiency.

entropy extractor = hash function

In many practical implementations of fuzzy extractors [3], [17], [23], universal hash functions are replaced with an LFSR-based Toeplitz hash [11] or other cryptographic functions (e.g., SHA or HMAC).
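
For reference, a binary Toeplitz hash can be sketched in a few lines (a generic textbook version under our own assumptions, not the exact extractor of [3], [17], or [23]; the LFSR-based variant of [11] generates the diagonal bits with an LFSR instead of storing them all):

    import secrets

    def toeplitz_hash(x_bits, seed_bits, out_len):
        # Multiply the input by a random binary Toeplitz matrix over GF(2).
        n = len(x_bits)
        assert len(seed_bits) == n + out_len - 1
        out = []
        for i in range(out_len):
            row = seed_bits[i:i + n][::-1]                           # row i of the Toeplitz matrix
            out.append(sum(r & x for r, x in zip(row, x_bits)) & 1)  # GF(2) inner product
        return out

    n, l = 16, 4                                              # toy sizes
    seed = [secrets.randbits(1) for _ in range(n + l - 1)]    # public extractor seed
    x    = [secrets.randbits(1) for _ in range(n)]            # (part of a) PUF response
    key  = toeplitz_hash(x, seed, l)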

  • l: key length

  • m: min-entropy per sequence

  • n − k: loss due to ECC

  • L: loss due to hash
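
Putting these pieces together, l ≤ m − (n − k) − L. A minimal sketch of the code-offset construction under simplifying assumptions of our own (a single R[5,1,5] repetition block as the ECC, and plain SHA-256 standing in for the universal hash; the actual designs use concatenated codes):

    import hashlib, secrets

    N = 5                                        # repetition length, R[5,1,5]

    def enroll(w):                               # w: list of N PUF response bits
        k_bit = secrets.randbits(1)              # random message bit
        c = [k_bit] * N                          # repetition codeword
        h = [wi ^ ci for wi, ci in zip(w, c)]    # public helper data (code offset)
        key = hashlib.sha256(bytes(w)).digest()  # extract the key from w
        return h, key

    def reproduce(w_noisy, h):
        c_noisy = [wi ^ hi for wi, hi in zip(w_noisy, h)]
        k_bit = int(sum(c_noisy) > N / 2)        # majority-vote decoding
        w = [hi ^ k_bit for hi in h]             # reconstruct the enrolled response
        return hashlib.sha256(bytes(w)).digest()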

Review of Repetition Code based Fuzzy Extractors

When instantiated with a single conventional ECC, such as a repetition code or a BCH code, the code-offset construction may be inefficient.

As argued by Bosch et al. [3], using a repetition code alone requires large PUF responses, a Golay code suffers from a high error probability, and a BCH code has high decoding complexity. As a result, they proposed using a concatenated code to address the tradeoff between PUF size requirements and decoding complexity.

When used in a fuzzy extractor, intuitively, the outer code brings down the error rate, and the inner code eliminates the remaining errors.
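
A short sketch of that error-rate intuition (numbers illustrative, not taken from the paper): a length-n repetition block decoded by majority vote fails only when a majority of its bits flip, so the other code stage sees a much smaller effective error rate.

    from math import comb

    def majority_fail_prob(n, p):
        # Probability that more than half of the n bits are in error (n odd).
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range((n + 1) // 2, n + 1))

    print(majority_fail_prob(5, 0.15))   # ~0.027: a 15% raw bit error rate drops to ~2.7%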

Entropy loss in repetition code based fuzzy extractors

If the min-entropy in w is low, e.g., 65%, then there is zero leftover entropy from the code-offset.
Even if the min-entropy in w is high, e.g., 95%, the leftover entropy decreases quickly if we use a large repetition code.
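
A back-of-the-envelope check of both claims, using the conservative per-block accounting m − (n − k) for the code-offset construction (for R[n,1,n] the helper data costs n − 1 bits per block; the numbers are illustrative):

    def conservative_leftover(n, rho):
        # rho: per-bit min-entropy rate of w; one R[n,1,n] block consumes n PUF bits.
        # Break-even point is rho = (n - 1)/n, e.g. ~66.7% for n = 3.
        return max(0.0, rho * n - (n - 1))

    print(conservative_leftover(3, 0.65))   # 0.0  -> 65% min-entropy, R[3,1,3]: nothing left
    print(conservative_leftover(3, 0.95))   # 0.85 -> only 0.85 of the 1 key bit per block
    print(conservative_leftover(7, 0.95))   # 0.65 -> larger repetition codes lose more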

The authors give two concrete scenarios where entropy loss is a concern:

  1. suppose the first bit of the PUF response w is fixed → zero leftover entropy (for a repetition code, a known response bit lets an attacker compute the corresponding codeword bit from the helper data, and every bit of a repetition codeword equals the encoded key bit)

  2. suppose the PUF response is strongly biased → even if the PUF response has good entropy (say 80% or even 97%), severe entropy loss is possible in a repetition-code based fuzzy extractor.

 

The methods from [3] are hard decision fuzzy extractors and the methods from [17], [18], [23] are soft decision fuzzy extractors.

Observe that if the PUF response is uniformly random, then there are at least 168 bits of leftover entropy. However, if the PUF response has 95% min-entropy, half of the methods have 0 leftover entropy and none of them has more than 128 bits of leftover entropy. If the PUF response has 75% min-entropy, no leftover entropy remains in any of them.

[3], [17], [23] are too optimistic in entropy estimation: they assume that the PUF response is uniformly random during the secure sketch procedure, and then consider entropy loss only during the randomness extraction process (termed privacy amplification in these papers).

Parameters for high security applications

In this section, we examine how to construct fuzzy extractors for memory-based PUFs in high security applications. As discussed earlier, many existing fuzzy extractor proposals [3], [17], [23] are over-optimistic in their entropy estimation and do not have enough entropy for extracting a 128-bit key, even if the PUF min-entropy is 95%. These fuzzy extractors may be suitable for low-cost, resource-constrained devices such as RFID tags or smart cards, but for high security or high assurance applications we recommend a conservative estimate of entropy loss based on the theory of Dodis et al. [5], so that we have high confidence in the leftover entropy of the extracted cryptographic key.

VERY IMPORTANT SECTION → check the paper for the technical details

For 128-bit computational security, we can set L = 128 based on the generalized leftover hash lemma (LHL) in [2]. In other words, to extract a 128-bit key from the PUF, we need m + k − n (the leftover entropy of the PUF response given the helper data h) to be greater than or equal to 256 bits.
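
The condition can be checked mechanically for candidate codes. A sketch under our own illustrative parameters (the concrete code choices below are examples, not the paper's recommended configurations): concatenating a conventional [n1, k1] code with a repetition code of length n2 consumes n1·n2 PUF bits, the helper data costs n1·n2 − k1 bits, and the remainder must be at least 256 bits.

    def conservative_leftover_bits(rho, n1, k1, n2):
        # rho: PUF min-entropy rate; leftover = m + k - n with m = rho*n1*n2, k = k1, n = n1*n2.
        n_total = n1 * n2
        return rho * n_total + k1 - n_total

    # Example: a BCH-like [255, 131] code concatenated with R[5,1,5], 95% min-entropy.
    rho, n1, k1, n2 = 0.95, 255, 131, 5
    print(n1 * n2)                                       # 1275 PUF bits consumed
    print(conservative_leftover_bits(rho, n1, k1, n2))   # ~67.25 bits -> far short of 256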

The repetition code R[5, 1, 5] seems to be the most efficient among all possible repetition codes.

Observe that if the min-entropy of the PUF is high enough, repetition code based fuzzy extractors are still effective.

Also observe that soft decision fuzzy extractors are slightly more efficient than hard decision fuzzy extractors, with smaller ECC complexity and smaller PUF requirements.

However, soft decision fuzzy extractors require multiple PUF measurements at the generation phase and require much larger helper data, which may be unacceptable for some applications.

With low PUF entropy, a repetition code would result in zero leftover entropy in the key. The soft decision fuzzy extractor from [17] may still work, but significant modification is needed since repetition codes cannot be used, likely resulting in a larger and more costly soft-decision maximum-likelihood decoder.

For PUFs with a high error rate and low entropy, we propose using the dark bits method (pre-selection).
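
A minimal sketch of dark-bit pre-selection as we understand it (thresholds and data layout are illustrative): measure each PUF cell several times at enrollment, mark cells that ever flip as dark, and store the resulting mask as public helper data so that only stable cells feed the fuzzy extractor.

    def select_stable_cells(measurements):
        # measurements: T enrollment readouts, each a list of bits (one per cell).
        n_cells = len(measurements[0])
        mask = []
        for i in range(n_cells):
            observed = {m[i] for m in measurements}
            mask.append(1 if len(observed) == 1 else 0)   # 1 = stable, 0 = dark bit
        return mask

    def apply_mask(response, mask):
        # Keep only the cells marked stable during enrollment.
        return [b for b, keep in zip(response, mask) if keep]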
