SQS as an Entropy Delivery System
The Raspberry Pi has a good hardware random number generator, while virtual machines and containers can have very little entropy, especially shortly after boot. That makes the Pi a cheap entropy source. Amazon’s SQS provides a fully managed, reliable message delivery system, and it’s cheap, free at low volume. This seems like an excellent way to deliver entropy to all your virtual infrastructure.
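To see how starved a machine actually is, you can read the kernel’s entropy estimate. This small helper is my own illustration (not part of the scripts below), assuming a Linux-style /proc filesystem:

```python
#!/usr/bin/python3

def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's entropy estimate in bits, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

print(entropy_avail())
```

On an entropy-starved VM shortly after boot this number can be very low; a Pi with its hardware RNG feeding the pool stays comfortably high.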
Sending Entropy to the Queue
Once you’ve created an SQS queue, this Python code will periodically (every sleepfor seconds) fetch the queue length; if it’s below lowwatermark, it will add burstadd messages to the queue. Each message contains 512 bytes of base64-encoded entropy.
```python
#!/usr/bin/python3
import boto3
import pprint
import base64
import time

pp = pprint.PrettyPrinter(indent=4)

url = ''           # put your Queue url here
lowwatermark = 10  # Start adding to the queue when Q has less than this in it
burstadd = 10      # Add this many items to the queue each time
sleepfor = 60

sqs = boto3.client('sqs')

def send_ent():
    with open("/dev/hwrng", 'rb') as f:
        data = f.read(512)
    message = base64.b64encode(data)
    text = message.decode('ascii')
    send = sqs.send_message(
        QueueUrl=url,
        MessageBody=text
    )
    pp.pprint(send)

def get_qlength():
    response = sqs.get_queue_attributes(
        QueueUrl=url,
        AttributeNames=['ApproximateNumberOfMessages'],
    )
    return response['Attributes']['ApproximateNumberOfMessages']

while True:
    qlength = get_qlength()  # returns a string: good for printing, bad for numeric comparisons
    qlengthint = int(qlength)
    if qlengthint < lowwatermark:
        print("Adding to the Queue.")
        for _ in range(burstadd):
            send_ent()
    else:
        print("Queue already has " + qlength + " Messages in it")
    time.sleep(sleepfor)
```
Receiving Entropy from the Queue using python
Leave the above script running on one or more Pis and it will keep your queue topped up, ready for consumption. We can then fetch entropy at boot and stir it into the pool (by writing to /dev/random) using the Python below.
```python
#!/usr/bin/python3
import boto3
import base64

sqs = boto3.client('sqs')
url = ''  # put your Queue url here

response = sqs.receive_message(
    QueueUrl=url,
    AttributeNames=['All'],
    MaxNumberOfMessages=1,
    MessageAttributeNames=['All'],
    VisibilityTimeout=0,
    WaitTimeSeconds=0
)

# we only asked for one message so it's safe to take the first
try:
    message = response['Messages'][0]
except KeyError:
    print('No Messages in the Queue')
    raise SystemExit  # perhaps sleep and try again?

body = message['Body']
raw = base64.b64decode(body)
receipt_handle = message['ReceiptHandle']

# Messages should be deleted asap after receiving
sqs.delete_message(
    QueueUrl=url,
    ReceiptHandle=receipt_handle
)

print('Received and deleted message: %s' % message)
print('Message Body: %s' % body)

with open("/dev/random", 'wb') as f:
    f.write(raw)
```
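One caveat worth knowing: bytes written to /dev/random are mixed into the pool, but the kernel does not credit them towards its entropy estimate. Crediting requires the RNDADDENTROPY ioctl and root privileges. A minimal sketch, assuming Linux and the ioctl request number used on x86_64 and ARM (verify it for your platform):

```python
#!/usr/bin/python3
import fcntl
import struct

# _IOW('R', 0x03, int[2]) -- RNDADDENTROPY on x86_64/ARM Linux;
# check <linux/random.h> for your architecture.
RNDADDENTROPY = 0x40085203

def pack_rand_pool_info(raw):
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
    # entropy_count is in bits; we claim full entropy for hwrng output.
    return struct.pack("ii%ds" % len(raw), len(raw) * 8, len(raw), raw)

def credit_entropy(raw):
    # Requires root: mixes raw into the pool AND credits the estimate.
    with open("/dev/random", "wb") as f:
        fcntl.ioctl(f, RNDADDENTROPY, pack_rand_pool_info(raw))
```

For topping up a guest’s pool the plain write in the script above is often good enough; use the ioctl variant if you specifically need entropy_avail to rise.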
Receiving Entropy from the Queue using awscli
If you don’t want the trouble of installing Python and the needed libraries, you can run the following one-liner at boot instead, provided you have awscli installed and configured:
```shell
aws sqs receive-message --queue-url 'YOUR SQS URL HERE' | jq -r '.Messages[0].Body' | base64 -d > /dev/random
```
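Note that the one-liner never deletes the message, so once the visibility timeout expires the same entropy becomes available to the next consumer. A sketch that deletes after use (my own variant, assuming jq and a configured awscli):

```shell
#!/bin/sh
# Fetch one message, feed its entropy to the pool, then delete it
# so the same bytes are never handed out twice.
QUEUE_URL='YOUR SQS URL HERE'
MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL")
echo "$MSG" | jq -r '.Messages[0].Body' | base64 -d > /dev/random
RECEIPT=$(echo "$MSG" | jq -r '.Messages[0].ReceiptHandle')
aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
```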
Using ent you can confirm you’re getting good-quality entropy end to end:
```shell
aws sqs receive-message --queue-url 'YOUR SQS URL HERE' | jq -r '.Messages[0].Body' | base64 -d | ent
```
```
Entropy = 7.599510 bits per byte.

Optimum compression would reduce the size
of this 512 byte file by 5 percent.

Chi square distribution for 512 samples is 250.00, and randomly
would exceed this value 57.66 percent of the times.

Arithmetic mean value of data bytes is 126.2578 (127.5 = random).
Monte Carlo value for Pi is 3.294117647 (error 4.86 percent).
Serial correlation coefficient is -0.017790 (totally uncorrelated = 0.0).
```
For a better reading, feed ent at least 1k of data, or you could just pull two messages.