Last week saw a flurry of activity around RSA and an alleged cryptographic flaw in the algorithm, based on this report by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. RSA’s Sam Curry responded with a post here, and there were also posts by Dan Kaminsky, Nadia Heninger, and this New York Times article.

Encrypted stories, by FeatheredTar

As I read through this whole mess and worked to understand the technical issues at hand, I started thinking that the problem described, ultimately a lack of entropy in a particular implementation, is something the security industry has dealt with before. You don’t have to look far to find implementation problems that cause everything from minor blips to massive security issues.

If an encryption algorithm is tested and found to be based on solid mathematics, then it comes down to the keys. How are they generated? How are they protected? How long are they? But it all starts with key generation.

In order to generate strong keys, you need randomness (or entropy). Unfortunately, machines struggle with entropy and with generating random numbers. If you have ever created a PGP key, you might remember having to move your mouse around the screen or type on your keyboard to supply enough user-generated entropy to make sure that the base of your key uses numbers that are as close to truly random as possible.
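To illustrate the distinction, here is a hedged sketch in Python (no specific library is named in the post; this is just one common way to see the difference). A cryptographic generator draws from the operating system's entropy pool, while a deterministic generator with a known seed produces completely predictable output:

```python
import random
import secrets

# secrets draws from the OS CSPRNG (backed by the entropy pool) --
# this is the kind of source you want for key material.
key_material = secrets.token_bytes(32)  # 256 bits
assert len(key_material) == 32

# random uses a deterministic Mersenne Twister -- fine for simulations,
# dangerous for keys: the same seed always yields the same "random" bits.
random.seed(1234)
weak_bits_a = random.getrandbits(256)
random.seed(1234)
weak_bits_b = random.getrandbits(256)
assert weak_bits_a == weak_bits_b  # fully reproducible, hence predictable
```

If two machines end up seeding their generators from the same low-entropy state, they can produce the same "random" values, which is exactly the sort of failure that leads to weak keys.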

Asymmetric encryption systems like RSA are built on the fundamental principle that factoring the product of two large prime numbers (labeled p and q) is hard. If I know both p and q, I can reconstruct the private key and decrypt the message. If the p and q values that I generate are not random, or are shared with other keys, then key recovery becomes a possibility, removing the effective strength of your cryptosystem. When it comes to complex systems, implementation is everything.
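This sharing of values is exactly the failure mode at issue: if two RSA moduli happen to share one prime, a simple greatest-common-divisor computation recovers it, and both keys fall. A toy sketch (the tiny primes here are purely illustrative; real keys use primes of 1024 bits or more):

```python
from math import gcd

# Illustrative toy primes -- real RSA primes are hundreds of digits long.
p, q1, q2 = 101, 103, 107

# Two public moduli that accidentally share the prime p.
n1 = p * q1  # 10403
n2 = p * q2  # 10807

# Factoring n1 or n2 alone is the "hard" problem. But comparing them
# via gcd takes a fraction of a second and exposes the shared prime.
shared = gcd(n1, n2)
assert shared == p

# With p in hand, the other factor of each modulus is trivial division,
# and the private keys can be reconstructed.
assert n1 // shared == q1
assert n2 // shared == q2
```

No attack on the mathematics of RSA is needed; a lack of entropy during key generation does all the damage.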

This post originally appeared on