[PATCH] getrandom: new applet

Rich Felker dalias at libc.org
Tue Jul 12 02:26:39 UTC 2016


On Mon, Jul 11, 2016 at 08:19:29PM +0200, Denys Vlasenko wrote:
> On Mon, Jul 11, 2016 at 8:58 AM, Rob Landley <rob at landley.net> wrote:
> > Anyway, the point is you can't voluntarily collect _more_ entropy from
> > these sources. They run when there's data to gather; calling the
> > routines when there isn't new data to gather doesn't help much.
> 
> And yet, interestingly, if you try to solve the opposite problem,
> predictable and accurate benchmarking, you suddenly discover that
> the damn thing just can't be made to run completely deterministically.

The fact that your benchmark timings vary from run to run doesn't have
anything to do with whether they're deterministic or predictable. The
variation is mostly a function of the full state of the kernel (and of
all running processes) -- aside from asynchronous hardware events, the
main factors which influence the timings are the points at which the
scheduler interrupts the process, the state of the cache, the page
tables, the TLB, etc.

On modern x86 machines there's quite a lot of variation, for
complicated reasons that basically all stem from (1) unpredictable
asynchronous peripheral hardware and (2) internal unpredictability due
to the tricks involved in running at such high clock speeds. On small
embedded systems with little or no peripheral hardware, though, things
are quite predictable. I don't have figures, but my (educated, based
on kernel debugging) guess would be that it's common for the number of
possible states when entering init to be on the order of tens, or at
most hundreds.

> Let's boot in single user, and run just one process, you say.
> Let's pin it to CPU0, you say. Switch off all dynamic CPU scaling,
> you say. No way. Still not totally deterministic.

Your logic is badly^H^H^H^H^Hdisastrously wrong here. Absolute or even
near determinism is not needed for there to be a major vulnerability.
Even if there are on the order of 2^32 possible initial states,
computing the resulting key for each of them and trying them all is a
trivial computation even for individual attackers. With modern cloud
computing you don't need to be a nation-state to do this kind of brute
forcing.
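
To make the cost concrete, here's a rough sketch (nothing that belongs
in the busybox patch; derive_key() is a made-up stand-in for whatever
deterministic path leads from a given boot-time state to a generated
key). The point is only that walking all 2^32 candidates is a single
loop -- seconds to minutes on one modern core for a toy derivation
like this, slower for a real one, but trivially parallelizable across
cheap cloud machines:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for "initial state N deterministically yields this key". */
static uint64_t derive_key(uint32_t initial_state)
{
	uint64_t x = initial_state ^ 0x9e3779b97f4a7c15ULL;
	x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
	x ^= x >> 27; x *= 0x94d049bb133111ebULL;
	x ^= x >> 31;
	return x;
}

int main(void)
{
	uint64_t target = derive_key(123456789u); /* key observed on the wire */

	/* Exhaust every 32-bit candidate initial state. */
	for (uint64_t s = 0; s <= 0xffffffffULL; s++) {
		if (derive_key((uint32_t)s) == target) {
			printf("recovered initial state: %llu\n",
			       (unsigned long long)s);
			break;
		}
	}
	return 0;
}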

> I tried timing the CPU cycle counter delta across a CPUID(0) instruction.
> That alone had some jitter. I collected a few megabytes of it and ran it
> through an entropy estimator, which said there is ~0.9 bit of randomness
> per byte. (I'm not claiming this is true on all CPUs. I'm only sharing
> my experience on my CPU.)
> 
> I tried sending signals to my own process and timing how long that takes.
> Also varies.

Your interactive experiments on an interactive system are meaningless
because you have the biggest entropy input of all: the timing of your
own human-generated input. For them to be meaningful you need to start
from a cold boot and script the measurements to run automatically.
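
For concreteness, the sort of thing I mean is a tiny standalone
program run straight from an init script on a cold boot, with no human
anywhere in the loop. A rough sketch (x86-only, assuming GCC or clang
with <cpuid.h> and <x86intrin.h>) that logs the TSC delta across a
CPUID(0):

#include <stdio.h>
#include <stdint.h>
#include <cpuid.h>
#include <x86intrin.h>

int main(void)
{
	enum { N = 100000 };
	unsigned a, b, c, d;

	for (int i = 0; i < N; i++) {
		uint64_t t0 = __rdtsc();
		__cpuid(0, a, b, c, d);	/* serializing, like in your test */
		uint64_t t1 = __rdtsc();
		printf("%llu\n", (unsigned long long)(t1 - t0));
	}
	return 0;
}

Feed its output into the same entropy estimator and compare the result
against what you measured from an interactive session.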

Rich

