Wednesday 26 March 2014

Why Moore’s Law will Blow Your Mind

Most of you will be very familiar with Moore’s Law, formulated by Gordon E. Moore, who went on to co-found Intel, way back in 1965.  Imagine, if you can, the state of electronic component technology back then.  Integrated circuits were in their infancy, and indeed few people today would look at the first-ever 1961 Fairchild IC and recognize it as such.  This was the state of the art when Moore formulated his law, which states that the number of transistors in an IC will double every two years.  Considering the infancy of the industry at the time Moore made his prediction, it is astonishing that his law continues to hold today.  In 1965, commercial ICs comprised up to a few hundred transistors.  Today, the biggest commercial ICs have transistor counts in the billions.  Also, every ten years or so, sage observers can be counted on to pronounce that Moore’s Law is bound to slow down over the coming decade due to [fill-in-the-blanks] technology limitations.  I can recall at least two such major movements, one in the early 1990s, and again about ten years later.  The movers and shakers in the global electronics industry, however, continue to base their long-range planning on the inexorable progress of Moore’s Law.
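
For the numerically inclined, the arithmetic behind that claim is simple enough to sketch in a few lines of Python.  The 1965 starting figure below is an illustrative assumption of my own (the text above only says “a few hundred”), but steady doubling every two years carries it comfortably into the billions by 2014:

# A minimal sketch of the arithmetic behind Moore's Law: start from a
# hypothetical 1965 chip with a few hundred transistors, double every two
# years, and see roughly where the count lands by 2014.  The 1965 baseline
# is an illustrative assumption, not a quoted datasheet value.

def projected_transistor_count(start_year, start_count, target_year,
                               doubling_period_years=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    baseline_1965 = 200  # assumed "a few hundred" transistors
    count_2014 = projected_transistor_count(1965, baseline_1965, 2014)
    print(f"Projected 2014 transistor count: {count_2014:,.0f}")
    # 49 years / 2 = 24.5 doublings -> roughly 4.7 billion transistors,
    # i.e. "billions", in line with the largest commercial ICs of 2014.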

Last night I attended a profoundly illuminating talk given by John La Grou, CEO of Millennia Media.  John showed how Moore’s Law applies in a similar vein to a number of core technologies that relate to the electronics industry.  He touched on the mechanisms that underlie these developments.  However, what was most impressive was how he expressed dry concepts such as transistor counts in more meaningful terms.  The one which particularly caught my attention was a chart that expressed the growth in computing power.  Its Y-axis has units like the brainpower of a flea, the brainpower of a rat, the brainpower of a human, and the combined brainpower of all humans on earth.  In his chart, today’s CPU has slightly more than the brainpower of a rat, but falls massively short of the brainpower of a human.  However, by 2050, which will be within the lifetimes of many of you reading this, your average computer workstation will be powered by something approaching the combined brainpower of every human being on earth.
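
Out of curiosity, here is a back-of-the-envelope version of that chart’s premise, sketched in Python.  I am using raw neuron counts as an admittedly crude stand-in for “brainpower”, and trying both the canonical two-year doubling and a faster one-year doubling; every number here is my own assumption for illustration, not a figure from La Grou’s chart:

# A toy calculation: start a computer at "rat level" in 2014 and see when
# steady doubling carries it past one human and past all humans combined.
# Neuron counts are a crude proxy for "brainpower"; the proxy and the
# doubling periods are assumptions for illustration only.

import math

RAT_NEURONS = 2.0e8            # ~200 million neurons in a rat brain
HUMAN_NEURONS = 8.6e10         # ~86 billion neurons in a human brain
WORLD_POPULATION_2014 = 7.2e9

def crossing_year(target_ratio, start_year=2014, doubling_period_years=2.0):
    """Year at which capability, doubling steadily, exceeds target_ratio x its start."""
    doublings_needed = math.log2(target_ratio)
    return start_year + doublings_needed * doubling_period_years

for period in (2.0, 1.0):      # two-year doubling vs a faster one-year assumption
    one_human = crossing_year(HUMAN_NEURONS / RAT_NEURONS,
                              doubling_period_years=period)
    all_humans = crossing_year(HUMAN_NEURONS / RAT_NEURONS * WORLD_POPULATION_2014,
                               doubling_period_years=period)
    print(f"doubling every {period:.0f} yr: one human ~{one_human:.0f}, "
          f"all humans ~{all_humans:.0f}")

With the faster one-year doubling my toy numbers put the all-humans threshold in the mid-2050s, at least in the neighbourhood of the 2050 figure from the talk; with the slower two-year doubling it slips toward the end of the century, which only goes to show how sensitive these projections are to the assumed rate and the assumed measure of brainpower.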

I wonder if, back in 1965, Gordon Moore ever paused to imagine the practical consequences of his law.  I wonder if he contemplated the possibility of having a 2014 Mac Pro on his office desk, a computer possessed of processing power equivalent to the sum total of every computer ever built up to the time Apple introduced their first ever PC.  Now Moore was a smart guy, so I’m sure he did the math, but if he did, I wonder if he ever asked himself what a person might ever DO with such a thing.  I don’t know if posterity records his conclusions.  In the same way, I wonder (and I most assuredly do not stand comparison to Moore) what a person might do in 2050 with a computer having at its disposal the combined brainpower of every human being on the planet.  As yet, posterity does not record my conclusions.

La Grou’s talk focussed on audio-related applications.  In particular he talked about what he referred to as immersive applications: in effect, wearable technology that would immerse the wearer in a virtual world of video and audio content.  He was very clear indeed that the technology roadmaps being followed by the industry would bring about the ability to achieve those goals within a remarkably short period of time.  He talked about 3D video technology with resolution indistinguishable from reality, and audio content to match.  He was adamant that he was not stretching the truth in any way to make these projections, and expressed a personal conviction that these things would come to fruition quite a lot faster than the already aggressive timescales he was presenting to the audience.  He showed some really cool video footage of unsuspecting subjects trying out the new Oculus Rift virtual reality headsets, made by the company acquired yesterday by Facebook.  I won’t attempt to describe it, but we watched people who could no longer stand upright.  La Grou has tried the Oculus Rift himself and spoke of its alarmingly convincing immersive experience.

At the start of La Grou’s talk, he played what he described as the first ever audio recording, made by a Frenchman (Édouard-Léon Scott de Martinville) nearly two decades before Edison.  Using an approach similar to Edison’s, he made his recording with a needle which scratched the waveform onto a piece of (presumably moving) inked paper.  This recording was made without the expectation that it would ever be replayed; in fact the object was never to listen to the recorded sound, but rather to examine the resultant waveforms under a microscope.  By digitizing the images, however, we can replay that recording today, more than 150 years after the fact.  We can hear the Frenchman humming rather tunelessly over a colossal background noise level.  One imagines he never rehearsed his performance, or even paused to consider what he might attempt to capture as history’s first ever recorded sound.  Anyway, the result is identifiable as a man humming tunelessly, but not much more than that.
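
The replay step itself is conceptually quite simple once the waveform has been traced out of the scanned images.  Here is a minimal Python sketch of that last step, assuming the hard part (extracting irregular time/amplitude pairs from the image) has already been done; the function name, the sample rate and the synthetic “hum” at the end are all my own illustrative choices, not a description of the actual restoration work:

# A minimal sketch of the "digitize and replay" idea: take irregular
# (time, amplitude) pairs traced from the scanned waveform, resample them
# onto a uniform grid, and write the result out as an ordinary WAV file.

import wave
import numpy as np

def trace_to_wav(times_s, amplitudes, out_path, sample_rate=8000):
    """Resample an irregularly spaced waveform trace and write it as 16-bit PCM."""
    times_s = np.asarray(times_s, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)

    # Resample onto a uniform time grid by linear interpolation.
    uniform_t = np.arange(times_s[0], times_s[-1], 1.0 / sample_rate)
    uniform_a = np.interp(uniform_t, times_s, amplitudes)

    # Normalise to the 16-bit integer range.
    peak = max(np.max(np.abs(uniform_a)), 1e-12)
    pcm = (uniform_a / peak * 32767).astype(np.int16)

    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(sample_rate)
        wav.writeframes(pcm.tobytes())

if __name__ == "__main__":
    # Synthetic stand-in trace: a low hum buried in a colossal amount of noise.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 2.0, 20000))      # irregular sample times
    hum = np.sin(2 * np.pi * 220 * t)              # tuneless-ish hum
    noise = rng.normal(0.0, 2.0, t.shape)          # background noise
    trace_to_wav(t, hum + noise, "phonautogram_sketch.wav")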

At the end of the talk we watched the results of an experiment in which researchers were imaging the brains of subjects while they (the subjects, that is, not the researchers) were watching movies and other visual stimuli.  They confined themselves to imaging only the visual cortex.  There was no pattern apparent to the eye in how particular images caused the various regions within the cortex to illuminate, but computers being the powerful things they are (i.e. smarter than the average rat), the researchers let one attempt to correlate the images being observed with the activity patterns being produced.  If I understand correctly, they then showed the subjects some quite unrelated images, and asked the computer to come up with a best guess for what each subject was seeing, based on the correlations previously established.  There is no doubt that the images produced by the computer corresponded quite remarkably with the images the subjects were actually looking at.  In fact, the computer’s reconstruction was about as faithful to the image the subject was looking at as the playback of the 150-year-old French recording is to what one might imagine the original performance to have been.
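
To make the idea concrete, here is a toy Python sketch of that correlate-then-guess procedure as I understood it, with everything synthetic: a linear “encoding model” learned from training pairs, and a best guess assembled from whichever candidate images predict activity closest to what was observed.  The linear model, the candidate-library trick and all the numbers are stand-ins of my own, not a description of the researchers’ actual method:

# A toy sketch of the decoding idea: learn, from training data, how image
# features map onto visual-cortex activity, then for a new activity pattern
# pick the candidate images whose predicted activity best matches what was
# actually recorded.  Everything here is synthetic and illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_train, n_pixels, n_voxels = 500, 64, 120

# Synthetic "ground truth": cortex activity is an unknown linear function of the image.
true_mapping = rng.normal(size=(n_pixels, n_voxels))
train_images = rng.normal(size=(n_train, n_pixels))
train_activity = train_images @ true_mapping + rng.normal(scale=0.5, size=(n_train, n_voxels))

# Fit a ridge-regularised linear encoding model: image -> predicted activity.
lam = 1.0
W = np.linalg.solve(train_images.T @ train_images + lam * np.eye(n_pixels),
                    train_images.T @ train_activity)

# A library of candidate images the decoder is allowed to guess from.
candidates = rng.normal(size=(2000, n_pixels))
predicted_activity = candidates @ W

# Show the subject an unrelated image and record (here: simulate) their activity.
unseen_image = rng.normal(size=n_pixels)
observed = unseen_image @ true_mapping + rng.normal(scale=0.5, size=n_voxels)

def corr(a, b):
    """Pearson correlation between two 1-D vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Score each candidate by how well its predicted activity matches the observed
# activity, then average the top few candidates as the "best guess" image.
scores = np.array([corr(p, observed) for p in predicted_activity])
best_guess = candidates[np.argsort(scores)[-10:]].mean(axis=0)

print("similarity of best guess to the unseen image:", corr(best_guess, unseen_image))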

I couldn’t help but think that it will be something less than - quite a lot less than - 150 years before this kind of technology advances to a practically useful level, one with literally mind-bending ramifications.