OpenSSL and LibreSSL PRNG, what’s different?

In July, a blog post from Andrew Ayer described the new, unsafe behaviour of portable LibreSSL 2.0.1. While it is right to say that it is unsafe, it is still safer than baseline OpenSSL's behaviour, and portable LibreSSL 2.0.2 has since fixed the issue. That's what I'll explain in this blog post.

OpenSSL

In March 2014, I published two CVEs affecting OpenSSL consumers: stunnel (CVE-2014-0016) and libssh (CVE-2014-0017). I also wrote a paper about it in the French magazine MISC, issue 74. Unfortunately the paper is in French and not yet released under CC-BY-NC, so here are the major points:

    • OpenSSL's RAND_bytes() entropy pool can be shared by two related processes, e.g. after a fork().
    • OpenSSL mitigates that problem by adding the result of getpid() in the entropy pool during a RAND_bytes() call. That means that two processes that share the same entropy pool and end up with the same PID will generate the same pseudorandom numbers.
    • That’s what happens in stunnel in fork mode: a master process initializes the entropy pool and spawns children. As children die, PIDs are recycled, until eventually a new child reuses the PID of an earlier one and starts generating the same sequences of pseudorandom numbers.
    • Fortunately, OpenSSL calls RAND_seed() with the current Unix time (time(NULL)) on every SSL handshake, so there is only a one-second window to exploit that weakness before having to start from scratch again. OSes with sequential PID generation are therefore less vulnerable than OSes with random PIDs (AFAIK only OpenBSD), because on OpenBSD it is likely that two different children will end up with the same reused PID within the same second.
    • In the end, the exploit I wrote for it didn’t work, because the OpenSSL ECDSA_sign() function calls RAND_seed() with the content of the message to be signed, so the secret number k is different every time, mitigating the exploit:

int ECDSA_sign_ex(int type, const unsigned char *dgst, int dlen,
    unsigned char *sig, unsigned int *siglen, const BIGNUM *kinv,
    const BIGNUM *r, EC_KEY *eckey)
{
    ECDSA_SIG *s;

    /* the message digest is stirred into the entropy pool before signing */
    RAND_seed(dgst, dlen);
    s = ECDSA_do_sign_ex(dgst, dlen, kinv, r, eckey);
    if (s == NULL)
    {
        *siglen = 0;
        return 0;
    }
    *siglen = i2d_ECDSA_SIG(s, &sig);
    ECDSA_SIG_free(s);
    return 1;
}

I wrote exploit code for this. It doesn’t manage to extract the private key (with OpenSSL), but it does detect when a PRNG collision happens. That is a weakness in OpenSSL which I suggested fixing with register_atfork(). Things are different in LibreSSL.
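For illustration, here is a minimal sketch of the reseeding pattern described in the list above. This is not OpenSSL's actual source code, just my own summary of the behaviour, with a made-up function name: the inherited pool is only perturbed with getpid() and the current second, so a recycled PID inside the same one-second window reproduces the parent's pseudorandom stream.

/*
 * Sketch only, not OpenSSL source: the inherited entropy pool is merely
 * perturbed with the PID and the current second, so a child that recycles
 * a PID within the same second ends up with an identical PRNG stream.
 */
#include <time.h>
#include <unistd.h>
#include <openssl/rand.h>

static void reseed_like_openssl(void)
{
    pid_t pid = getpid();
    time_t now = time(NULL);

    RAND_add(&pid, sizeof(pid), 0.0); /* PID mixed in, no entropy credited */
    RAND_seed(&now, sizeof(now));     /* one-second granularity */
}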

LibreSSL 2.0.1

Portable LibreSSL 2.0.1 is based on LibreSSL 2.0.1, which changed all that code to rely on arc4random(). On Linux, the portable implementation of this function works more or less the same way as the OpenSSL PRNG did, with an entropy pool initialized at startup. We see the following properties:

  • fork() is detected by a change of PID (using getpid()). The PID is not merely stirred into the pool with RAND_add(); the pool is reinitialized instead, so it is not possible for two children to share the same pool content. However, as described by Ayer, a child may end up with the same PID as its grandparent, in which case the PID change is not detected (see the sketch after this list).
  • RAND_add()/RAND_seed() were removed from the source code: they are now no-ops, and calls to them were removed everywhere in libssl and libcrypto. That includes the TLS session management and the ECDSA_sign() functions.
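Here is a rough sketch of what that PID-based fork check looks like; this is a simplification for illustration, not the actual LibreSSL source, and the names (rs_pid, rs_check_fork, rs_rekey) are mine:

#include <sys/types.h>
#include <unistd.h>

static pid_t rs_pid = -1;

static void rs_check_fork(void)
{
    pid_t pid = getpid();

    /* only re-key when the PID differs from the one recorded earlier */
    if (rs_pid == -1 || rs_pid != pid) {
        rs_pid = pid;
        /* rs_rekey();  -- reinitialize the internal cipher state */
    }
    /* if a grandchild gets the grandparent's PID back, the check passes
     * and the inherited pool state is silently reused */
}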

In order to test the strength of LibreSSL against the CVE-2014-0016 bug, I compiled stunnel 4.56 on Linux with a little addition:

/* force every process to report the same PID, simulating PID reuse */
int getpid() { return 31337; }

The exploit worked like a charm: LibreSSL didn’t detect the PID change and reused the same entropy pool content across two sessions. This is of course a lab test, but I did the same with OpenSSL and couldn’t extract the private key, because of the mitigation in ECDSA_sign():

ECDSA/DSA weakness

Without going too far into the mathematical details, it is important to know that both ECDSA and DSA rely on a random big integer, k, to perform the signature operation. This number should be perfectly random and never reused, because its knowledge (or reuse) would balance the ECDSA/DSA equations and make the private key easy to calculate.
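To make that concrete, here is the textbook nonce-reuse computation in standard (EC)DSA notation (my addition, not part of the original exploit): given two signatures (r, s1) and (r, s2) over message hashes h1 and h2 that were produced with the same k, anyone can recover k and then the private key x:

\[
s_i = k^{-1}(h_i + x\,r) \bmod q
\;\Rightarrow\;
k = (h_1 - h_2)(s_1 - s_2)^{-1} \bmod q,
\quad
x = (s_1 k - h_1)\,r^{-1} \bmod q
\]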
OpenSSL did RAND_add() on the actual message to be signed. This message is most likely public, but it is at least different on every attack attempt. This simple RAND_add()/RAND_seed() call made my exploit impossible, as the value k would be chosen differently every time.
On the contrary, LibreSSL removed that statement. The developers decided that the only source of randomness should be the PRNG.
While I agree that having a strong PRNG is a requirement for strong crypto, the choice of the secret value is a very well-known weakness of ECDSA/DSA. DJB explains why, in Ed25519, they chose not to use an RNG at all for the selection of k:

Avoiding RNGs in signatures. It would be better if the signature system didn’t need new randomness for each signature in the first place. The RNG would be used much less frequently, and could afford to be designed more conservatively. We wouldn’t have to worry about the RNG taking entropy from malicious sources. We could move a long-term secret key into a highly protected hardware device for signing, without having to pay for an RNG in the hardware device and without having to worry about the quality of that RNG. We could have test vectors for signing functions, because those functions wouldn’t have a hidden RNG input.

This is exactly what was accomplished by a very simple modification to DSA suggested by George Barwood in February 1997: namely, “a prng is used to generate k, keyed with d (the private key), and seeded with e (the message digest).” Essentially the same idea was suggested by John Wigley in March 1997: “Why not derive the random number from the message to be signed therefore removing the need for a hardware rng (smartcards) or a complex and potentially fallible software attempt. … 1)Hash the message to be signed as usual and use that as the message in the signature algorithm. 2)hash the private key using the chaining variable state left by the previous hash, and use that as the random number.”

I think LibreSSL should implement a workaround such as this one to protect ECDSA against problems just like the ones we discussed, and have the secret value k generated as a function of the PRNG output, the message and the private key. It probably wouldn’t pass FIPS certification, but who cares 🙂
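As a thought experiment, here is a minimal sketch of such a derivation, k = hash(PRNG output || private key || message digest), built on OpenSSL/LibreSSL primitives. This is my own illustration of the idea, not code from either project, and the function name derive_nonce_k is made up; a real implementation would also have to handle k = 0 and reduce bias properly.

#include <openssl/bn.h>
#include <openssl/rand.h>
#include <openssl/sha.h>

/* derive k from PRNG output, private key and message digest;
 * returns 1 on success, 0 on failure */
static int derive_nonce_k(BIGNUM *k, const BIGNUM *privkey,
                          const unsigned char *dgst, size_t dlen,
                          const BIGNUM *order, BN_CTX *ctx)
{
    unsigned char rnd[32], key[512], md[SHA256_DIGEST_LENGTH];
    SHA256_CTX sha;
    int keylen;

    if (BN_num_bytes(privkey) > (int)sizeof(key))
        return 0;
    if (!RAND_bytes(rnd, sizeof(rnd)))       /* the PRNG output */
        return 0;
    keylen = BN_bn2bin(privkey, key);        /* the private key */

    SHA256_Init(&sha);
    SHA256_Update(&sha, rnd, sizeof(rnd));
    SHA256_Update(&sha, key, keylen);
    SHA256_Update(&sha, dgst, dlen);         /* the message digest */
    SHA256_Final(md, &sha);

    if (!BN_bin2bn(md, sizeof(md), k))
        return 0;
    /* reduce into [0, order-1]; a real implementation would retry on 0
     * and draw more hash output than the bit length of the order */
    return BN_nnmod(k, k, order, ctx);
}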

LibreSSL 2.0.2

The LibreSSL team fixed that bug by registering an atfork handler (register_atfork()), a callback that gets called after every fork(). The same mechanism is exposed by POSIX as pthread_atfork(), so it is portable enough, and better, it ensures that every fork() is detected reliably. This is not perfect (as Andrew wrote) and an explicit way to reset the entropy pool would still be welcome.
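For illustration, here is a minimal sketch of that mechanism using the standard POSIX pthread_atfork() interface. The names rng_forked, rng_check_fork and rng_reseed are mine; this is not LibreSSL's actual code:

#include <pthread.h>
#include <signal.h>

static volatile sig_atomic_t rng_forked;

/* runs in the child after every fork(), no getpid() comparison needed */
static void rng_atfork_child(void)
{
    rng_forked = 1;
}

static void rng_init(void)
{
    pthread_atfork(NULL, NULL, rng_atfork_child);
}

/* before producing output, the PRNG checks the flag and reseeds */
static void rng_check_fork(void)
{
    if (rng_forked) {
        rng_forked = 0;
        /* rng_reseed();  -- wipe the pool and pull fresh entropy */
    }
}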

Summary

Here is a little summary of the different ways of handling fork() in the PRNG and of generating the ECDSA/DSA nonce, as discussed in this post:

name                    | handling PID change          | DSA/ECDSA
------------------------|------------------------------|---------------------------------------
OpenSSL                 | RAND_add(getpid())           | RAND_add(message); k=random()
portable LibreSSL 2.0.1 | if(getpid()!=pid) reset()    | k=random()
portable LibreSSL 2.0.2 | onfork() callback            | k=random()
My ideal                | onfork() callback            | k=hash(random() + privkey + message)

To conclude, I would say that the bug that was found is not as bad as it looks, mainly because OpenSSL has had a much worse behaviour for years and nobody ever complained. The way they solved it is the best I can think of, even though I believe something should be done to protect the ECDSA and DSA signature process against future bugs in the PRNG.

Finally, I still think it is a little too soon to use portable LibreSSL in production, as OpenSSL was chopped down heavily to fit OpenBSD's security model, and the portable LibreSSL team may not be aware of all the traps left in the port. The behaviour of arc4random() on Linux was one of them.

5 Replies to “OpenSSL and LibreSSL PRNG, what’s different?”

  1. k=hash(random() + privkey + message) makes no sense.

    If the random data is indeed random, there is no benefit to appending any other data.

    Where does this particular idea come from?

  2. Hi Theo,

    I knew that this claim would bring some discussion; I should have justified this part a little better, but the blog post was already long. So thanks for your comment.

    The idea globally comes from DJB’s document that I cited in the post: http://blog.cr.yp.to/20140323-ecdsa.html



    “More possibilities for the hash input. I’ve heard many stories from DSA/ECDSA implementors who generate k as a hash of various inputs. All of the inputs I’ve heard about fall into four categories:

    P: a string from some PRNG.
    X: the DSA secret key, or some better-separated secret.
    N: the number of messages signed.
    M: the message, or a collision-resistant hash of the message.

    It’s important for k to be unguessable. M and N are guessable, so it’s critical to use at least one of P, X. There’s an easy argument that using X is sufficient here: if X were guessable then the whole system would be broken no matter how k is generated. Note that the same argument doesn’t apply to P.

    It’s also important for k to be different for each message. X doesn’t vary from one message to another, so it’s critical to use at least one of P, N, M. Using M is obviously sufficient here. Using N is also obviously sufficient. Using P might be sufficient but requires a serious PRNG audit.

    It’s also helpful for signing to be stateless. This means that it’s bad to use N or P.

    All of these criteria are satisfied by simply hashing (X,M), as Barwood and Wigley proposed. (P,X,M), (X,N,M), and (P,X,N,M) lose statelessness, and I have no idea why they’re supposed to be better than (X,M). There’s a very small speed argument for omitting M in favor of N, producing (X,N) or (P,X,N), but these lose statelessness and are certainly more difficult to get right. I’m not aware of any argument for omitting X.”

    My argument is the following: I think that P alone is not secure enough. We’re speaking of DSA/ECDSA, algorithms that leak their entire private key the moment two messages are signed with the same nonce k. We could argue that the PRNG is secure, but that is a very strong requirement and a difficult one to prove. What happens if a snapshot of the VM is taken a few instants before a message is signed? That is probably outside the OS’s control.

    In the context of an all-purpose library, N is probably impractical.

    DJB then claims that X and M should be used alone. I agree with him, but in that case the algorithm is not DSA anymore. That’s why I’m thinking of (P,X,M) as a trade-off between (P) and (X,M).
    Note that OpenSSL is indirectly using (P,M).

    Aris

  3. Well, I don’t buy it. unknown + known is unknown. hash(unknown + known) is no better.

    Seems the primary complaint is about poor PRNGs. The solution is to build excellent PRNGs which are always available, especially if we can make stream-cipher variants have some backtracking resistance and such via shared consumption. This goal is achievable.

    I do love how VMs keep entering the discussion. Obviously some attacker will peek inside the VM to look at the PRNG state, but I guess they won’t look at the keys…

    But thanks for your posting making it clear that OpenSSL is not doing things better. This was known to us, and guided us down this road. The problem starts by making PRNG reseeding the responsibility of the applications.

    Since PRNG maintenance is not the primary purpose of an application, the authors will forget to request reseeding. Oct 2013 in OpenBSD we removed the arc4random_stir() and arc4random_addrandom() APIs from our libc. We were unable to find applications using them correctly, and realized that the library must take responsibility for this. Our kernel+libc are improving to achieve this, then we will help the other systems. The same policy is now being pushed in libressl. Let’s see where it goes.

  4. > Well, I don’t buy it. unknown + known is unknown. hash(unknown + known) is no better.
    In fact it would be f(unknown + secret + variable). The combination function doesn’t have to be a secure hash.

    > Seems the primary complaint is about poor PRNGs
    Exactly. It’s very difficult to build an excellent PRNG (as you well know) and it’s often the weakest part of many cryptosystems. Reducing the impact of a broken PRNG, just in case, is IMHO a good step towards “defense in depth”.

    > I do love how VMs keep entering the discussion. Obviously some attacker will peek inside the VM to look at the PRNG state, but I guess they won’t look at the keys…
    I wasn’t thinking of a deliberately malicious scenario. Imagine a production server that serves thousands of signed DNS-like requests. Every request requires a few random bytes for the signature process.
    Now, an admin takes a snapshot of the VM before doing some critical maintenance, while still in production. Unfortunately the maintenance goes wrong and he rolls back to the previous snapshot.
    How can you guarantee, from within the OS, that the next request to be signed will use a unique random value?

    Agreed about PRNG maintenance in applications, but to remove these interfaces you must have very strong confidence in your PRNG, and this on every platform supported by the toolkit. I see a lot going in that direction in Linux nowadays, and it’s certainly pushed by you, so thanks for your work 🙂

    Aris

  5. Hi Theo,

    Does my proposition to use P + M instead of P still look ridiculous in light of the latest FreeBSD PRNG bug?

    But I guess the answer is to go back 3 months in time and fix this “excellent” PRNG.
