Efficient use of entropy in cloud environments

Secure communication requires entropy — unpredictable input to the encryption algorithms that convert your message into what seems like a string of gibberish. Entropy is particularly important when generating keypairs, encrypting filesystems, and encrypting communication between processes. Computers use a variety of inputs to provide entropy: network jitter, keyboard and mouse input, purpose-built hardware, and so on. Frequently drawing from the pool of entropy can reduce it to the point where communications are blocked waiting for sufficient entropy.

Generally speaking, entropy has two aspects: quality (i.e. how random is the value you get?) and quantity (how much is available). The quality of entropy can be improved by seeding from a high-quality source of entropy; higher-quality entropy makes better seed material for the Linux pseudo-random number generator (LinuxPRNG). The Ubuntu project offers a publicly available entropy server. The quantity of entropy (i.e. the value of /proc/sys/kernel/random/entropy_avail) grows over time as the kernel gathers input, and shrinks as processes draw from the pool.
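As a quick sketch, you can check the kernel's current estimate yourself by reading that file. The helper name below is my own, and the file only exists on Linux, so the function returns None elsewhere:

```python
def entropy_available(path='/proc/sys/kernel/random/entropy_avail'):
    """Return the kernel's current entropy estimate in bits,
    or None if the file is absent or unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, IOError, ValueError):
        return None

# On a Linux host this prints the current estimate; elsewhere it prints None.
print(entropy_available())
```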

It is worth noting that virtual machines in the cloud are not quite “normal” computers with regard to entropy. Cloud instances lack many of the entropy inputs a physical machine has: they have no keyboards or mice attached, and the hypervisor absorbs much of the random jitter of the underlying hardware. Further, the Xen (Amazon Web Services), KVM (Google Cloud), and Hyper-V (Microsoft Azure) hypervisors virtualize hardware access to varying degrees, which can result in diminished entropy.

You need to be aware of the entropy available on your instances and how your code affects it. When writing code, it’s important to minimize calls to /dev/random, as it blocks until sufficient entropy is available. /dev/urandom is non-blocking and can be used for faster operations, except in cases where better randomness is critical (for example, in keypair generation). Unfortunately, some programs make that choice for you; a notable example is older builds of OpenSSL. This script shows the impact of reading from /dev/random:

import os
import subprocess
import timeit

def read_dev_random():
    # Read 1 KiB from /dev/random via dd, discarding dd's own output.
    with open(os.devnull, 'w') as devnull:
        subprocess.check_call(
            ['dd', 'if=/dev/random', 'of=mytmp', 'bs=1k', 'count=1'],
            stdout=devnull, stderr=devnull)

iterations = 10
iteration_times = []
for i in range(iterations):
    iteration_time = timeit.timeit(read_dev_random, number=1)
    print("Period for iteration %d is %.3f" % (i, iteration_time))
    iteration_times.append(iteration_time)

average = sum(iteration_times) / float(iterations)
print("Average time reading random bytes from /dev/random %.3f seconds over %d iterations"
      % (average, iterations))

Here are the results of running the above script on an n1-standard-4 instance in Google Cloud:


Whoa! That run time gets large very quickly! Changing the script to use /dev/urandom has a real impact on the time.
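As a minimal sketch of that change, the variant below times reads from the non-blocking pool using os.urandom(), which draws from the same source as /dev/urandom on Linux (and so also runs on systems without the device file):

```python
import os
import timeit

def read_dev_urandom():
    # os.urandom() pulls from the kernel's non-blocking pool
    # (the same source as /dev/urandom), so it never stalls
    # waiting for the entropy estimate to recover.
    os.urandom(1024)

elapsed = timeit.timeit(read_dev_urandom, number=10)
print("10 reads of 1 KiB from the non-blocking pool took %.3f seconds" % elapsed)
```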


Keeping this in mind while developing your code can have a large impact on performance.
