For many decades, the term “random numbers” meant “pseudo-random numbers” to anyone who thought much about the issue and understood that computers simply were not equipped to produce anything that was truly random.
Manufacturers did what they could, grabbing some signals from the likes of mouse movement, keyboard activity, system interrupts, and packet collisions just to get a modest sampling of random data to improve the security of their cryptographic processes.
And the bad guys worked at breaking the encryption.
We used longer keys and better algorithms.
And the bad guys kept at it. And life went on.
Something recently changed all that. No, not yesterday or last week, but back in November of last year, something called the Entropy Engine won an "Oscar of Innovation" award for collaborators Los Alamos National Laboratory and Whitewood Security. The Entropy Engine can deliver as much as 350 Mbps of true random numbers, sufficient to supply an entire data center with enough random data to dramatically improve all of its cryptographic processes.
Tapping into the quantum physics of matter and light to provide a source of entropy, the Entropy Engine produces numbers that:
- Cannot be predicted … ever
- Are based on the random behavior of photons
- Can be produced by hardware as small as a piece of Starburst candy sitting on a circuit board that looks, well, like a fairly standard circuit board
There was a problem with pseudo-random numbers?
Yes, there was and still is a problem. Pseudo-random numbers are barely sufficient for serious cryptography. When it comes to cryptography, entropy is key: any cryptographic system is only as strong as the source of randomness it employs. Worse, the amount of entropy that systems can gather on their own rarely keeps up with demand. There simply isn't enough random data available.
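To see why weak randomness matters in practice, consider a toy demonstration. The sketch below (my own contrived example, not anyone's real key generator) derives a "key" from a pseudo-random generator seeded with the system clock; an attacker who knows roughly when the key was created can simply re-run the generator for every plausible seed:

```python
import random
import time

def weak_key(seed):
    # A contrived key generator: a PRNG seeded from a tiny, guessable space.
    rng = random.Random(seed)
    return rng.getrandbits(128)

# The "victim" generates a key using the current clock as the seed.
key = weak_key(int(time.time()))

# The attacker knows only that the key was made within the past hour,
# so at most a few thousand seeds need to be tried.
now = int(time.time())
for guess in range(now - 3600, now + 1):
    if weak_key(guess) == key:
        print("recovered seed:", guess)
        break
```

No amount of algorithmic strength in the cipher saves you here; the key space collapsed the moment the seed became predictable.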
To make matters worse, hacker tools are getting better. Predictability is the opposite of randomness and the bane of cryptography. Entropy starvation is of special concern in virtual machines, clouds, and containers, where there is no direct user activity and, therefore, the usual sources of entropy are not present. For virtual machines running on site or in Amazon Web Services (AWS), it's impossible to know how much entropy is really available to each instance or what the basis of that entropy really is.
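On a Linux system, at least, you can watch for starvation directly. Here's a minimal sketch that reads the kernel's own entropy estimate; the 256-bit alert threshold is my own arbitrary choice, not an official figure:

```python
def entropy_available():
    # The kernel's running estimate, in bits, of the entropy in its pool.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

avail = entropy_available()
print("entropy available: %d bits" % avail)
if avail < 256:  # arbitrary illustrative threshold
    print("warning: possible entropy starvation")
```

Run that on a freshly booted VM and the number is often unsettlingly low.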
The Internet of Things (IoT) is no better off. Small and low-powered, IoT devices have little ability to generate entropy on their own.
Several hardware random number generators are available. Some are small and slow. Some, dangling from USB ports, might raise a little suspicion. These devices use a number of techniques for generating entropy, such as atmospheric noise, reverse-biased semiconductors, and beam splitting.
Few, however, comply with both NIST SP800-22 (A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications) and NIST SP800-90B (Recommendation for the Entropy Sources Used for Random Bit Generation).
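To give a flavor of what those suites check, here is a sketch of the first and simplest check in SP800-22, the frequency (monobit) test, which asks whether ones and zeros occur in roughly equal numbers:

```python
import math
import os

def monobit_test(bits):
    # SP800-22 frequency (monobit) test: map bits to +/-1, sum them,
    # normalize by sqrt(n), and convert to a p-value via erfc.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Test 10,000 bits from the OS; p >= 0.01 passes at the suite's
# default significance level.
data = os.urandom(1250)
bits = [(byte >> i) & 1 for byte in data for i in range(8)]
print("p-value: %.4f" % monobit_test(bits))
```

Passing this one test proves very little on its own, of course; the full suites run a battery of far more demanding checks.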
How the Entropy Engine works
Randomness based on the activity of photons is a stretch for most of us to imagine, but it appears to be well-documented in scientific research papers.
I’m not referring to the activity of single photons, but to groups of photons and a process referred to as “photon bunching,” in which photons compete to occupy the same state or mode and, in doing so, create fundamentally unpredictable changes in power that can be measured and digitized. The overall effect is the generation of true randomness, and it happens at significant speed.
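Raw physical measurements like these are rarely used as-is; hardware generators typically run them through a conditioning step to remove bias. Whitewood hasn't described the Entropy Engine's internals to me, so as a stand-in, here is the classic von Neumann extractor, a textbook way to whiten a biased bit stream:

```python
def von_neumann_extract(raw_bits):
    # Examine non-overlapping pairs of raw bits: emit the first bit of
    # each (0,1) or (1,0) pair and discard (0,0) and (1,1) pairs. If the
    # raw bits are independent, the output is unbiased, at the cost of
    # throwing away at least half of the input.
    return [a for a, b in zip(raw_bits[::2], raw_bits[1::2]) if a != b]

print(von_neumann_extract([1, 1, 0, 1, 1, 0, 0, 0]))  # -> [0, 1]
```

The appeal of the approach is that you don't need to know exactly how biased the source is, only that successive measurements are independent.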
Delivered as a network appliance, the Entropy Engine can generate enough entropy to meet the needs of an entire data center. And, judging by the network appliances I've helped acquire for data centers over my career, it does so at a reasonable cost.
For Linux servers, the service would keep the kernel's entropy pool, whose level is reported in /proc/sys/kernel/random/entropy_avail, at its 4,096-bit maximum, replenishing it often enough to keep it full of truly random (i.e., unpredictable) data.
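For the curious, this is roughly how a userspace daemon (rngd from rng-tools, for example) credits fresh entropy to the kernel pool. The sketch assumes you already have random bytes in hand from some external source; fetch_from_appliance is a made-up placeholder, and the ioctl requires root:

```python
import fcntl
import struct

# _IOW('R', 0x03, int[2]) from <linux/random.h>; this is the x86-64 value.
RNDADDENTROPY = 0x40085203

def credit_entropy(random_bytes):
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
    # entropy_count is in bits; here we credit the bytes as full entropy.
    payload = struct.pack("ii%ds" % len(random_bytes),
                          len(random_bytes) * 8, len(random_bytes),
                          random_bytes)
    with open("/dev/random", "wb") as dev:
        fcntl.ioctl(dev, RNDADDENTROPY, payload)

# credit_entropy(fetch_from_appliance(64))  # hypothetical byte source
```

Simply writing bytes to /dev/random mixes them into the pool but credits no entropy; the ioctl is what actually raises entropy_avail.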
What is needed?
The answer to this question is pretty clear: enough entropy to feed all of our systems.
What can we expect?
Randomness is all around us, but not in any form that we can easily capture and deliver to our computers. The remarkable work by Los Alamos National Laboratory, in collaboration with Whitewood Security, is a genuine breakthrough and will likely dramatically improve the security of cryptographic processes.
How do I know all this?
I was fortunate to interview Richard Moulds, general manager at Whitewood Security, and then I read a number of online articles describing the generation of entropy. I am no expert in this area—just a sysadmin who is very excited about what truly random data will do to improve security for the organizations and systems I care about.
Interesting ending thoughts
Think about this for a minute. No one owns entropy. It’s free, but harvesting it remains a significant challenge.
More important, randomness is hard to prove and can’t be fixed retrospectively. There are no alarms that go off when random data (and, therefore, keys) become predictable. Randomness and entropy need to be fixed proactively. The correct response is a focus on architectural design and not on an entropy Band-Aid.
Source: http://bit.ly/2h7Gy6n