Thursday, February 18, 2016

TWO BITS, FOUR BITS, THE GENETIC CODE VERSUS COMPUTER CODE

I just read Evolution 2.0 by Perry Marshall and was intrigued by his argument that the genetic code is a true code, just like computer code. That got me thinking: if this is true, how does the information storage power of the genetic code stack up against computer code? This is what I came up with:

The building blocks of computer code are bits. A bit can only be either a 1 or a 0. Eight bits make a byte, and bytes are how we measure storage capacity--kilobytes, megabytes, gigabytes, and terabytes. If we work out all the different combinations of 1's and 0's in a byte, we come up with 256 different bytes (two choices per bit, multiplied across eight bits: 2^8 = 256).
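For anyone who wants to double-check that 256 figure, here's a quick sketch in Python (my choice of language, not anything from Marshall's book) that simply counts every possible byte:

```python
from itertools import product

# Every combination of eight bits, each bit being a 0 or a 1.
all_bytes = list(product([0, 1], repeat=8))

print(len(all_bytes))   # 256, i.e., 2 ** 8
print(all_bytes[0])     # (0, 0, 0, 0, 0, 0, 0, 0)
print(all_bytes[-1])    # (1, 1, 1, 1, 1, 1, 1, 1)
```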

Now I'm dumb as a box of rocks when it comes to any kind of math above 2+2, and even dumber than that when it comes to computer programming, but it seems to me that when we write a computer program, we've got 256 "letters" to put together into words to tell our story. We can cram a lot of information into a small space with an alphabet that large.

Now let's look at DNA code. The basic building blocks of DNA code are four bases: guanine (G), adenine (A), cytosine (C), and thymine (T). Guanine always pairs with cytosine to make a rung of the DNA ladder, and adenine always pairs with thymine. At first glance that makes it look like we've only got two choices per rung, just like computer code. But each rung can be put on with either partner first, like this: GC, CG, AT, TA. So where a bit of computer code has two possible states, each rung of DNA code has four: the equivalent of two bits packed into a single symbol.
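To make that "two bits per rung" idea concrete, here's a little sketch. The particular bit patterns assigned to each rung are my own arbitrary choice; any one-to-one assignment would make the same point:

```python
# Arbitrary two-bit label for each of the four rung orientations.
RUNG_TO_BITS = {
    "GC": "00",
    "CG": "01",
    "AT": "10",
    "TA": "11",
}

def rungs_to_bits(rungs):
    """Translate a sequence of DNA rungs into a string of bits."""
    return "".join(RUNG_TO_BITS[r] for r in rungs)

# Four rungs carry eight bits' worth of information.
print(rungs_to_bits(["GC", "AT", "TA", "CG"]))  # '00101101'
```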

With four possible states per symbol instead of two, we have an exponentially larger number of possible bytes: an eight-rung "byte" of DNA has 4^8 = 65,536 possible values, against the 256 of a computer byte. That's a huge alphabet for writing our genetic story. Put another way, each rung of DNA carries two bits of information where a computer bit carries one, so symbol for symbol, DNA stores exactly twice as much, and the number of possible combinations in a stretch of DNA is the square of the number in an equal-length stretch of binary. If I were a betting man (which I'm not), I'd bet that the possible combinations in a kilobyte's worth of DNA rungs dwarf the possible combinations in a kilobyte of computer code. A full megabyte of computer code would still win, though, simply because it has a thousand times as many symbols to work with.
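Here's a quick back-of-the-envelope check on those claims. Python is happy to work with the huge integers involved:

```python
import math

# Each rung has four states, which works out to two bits of information.
print(math.log2(4))   # 2.0

print(4 ** 8)         # 65536 values for an eight-rung "DNA byte"
print(2 ** 8)         # 256 values for an ordinary byte

# Equal symbol counts: 8,192 rungs (1,024 eight-rung "DNA bytes")
# versus 8,192 bits (1,024 ordinary bytes).
dna_kb = 4 ** 8192
binary_kb = 2 ** 8192
print(dna_kb == binary_kb ** 2)   # True: DNA squares the binary count

# But sheer length still matters: a megabyte of plain bits has far more
# combinations than a kilobyte's worth of DNA rungs.
binary_mb = 2 ** (8192 * 1024)
print(binary_mb > dna_kb)         # True
```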

Does what I am saying make sense? Does DNA's vastly larger "alphabet" make for more powerful coding than computer coding? I'd be glad to hear polite comments on my ruminations.
