Northern Colorado Business Report
“A Supercomputing Future”
by Kai Staats
(This, my final column for the Northern Colorado Business Report, was not published and is therefore provided here in full.)
Today, November 18, was the closing day of SuperComputing 2011, the conference and trade show for high performance computing research, labs, and industry. For this week, the Washington State Convention Center in Seattle hosted the latest, greatest, and fastest computers in the world: an overwhelming array of blinking lights, whirring fans, and massive LCD, plasma, and projection screens demonstrating human brain power applied to the human quest to learn how all things work.
It was my first time attending since 2008, my ninth or tenth show in total, but because I had been away for three years, I experienced a jump in the otherwise relatively steady evolution of compute power and associated research results.
As in the movie “Minority Report,” there are now fully interactive touch screens the size of a wall. Up to four people may interact at once, moving, panning, zooming, and annotating documents, photos, and film. I was able not only to resize a movie while it played, but to rotate it 360 degrees with one hand, the motion never once hesitating. The immersive 3D worlds are faster, smoother, and of course much higher resolution. They remain a bit awkward for data visualization, but the flight simulators are amazing!
The challenge of building supercomputing clusters has in many respects remained the same: the balance between data storage, bandwidth, calculations per second, and visualization is an ongoing battle.
As CPUs get faster, they need to be fed data at a higher rate. The interconnect fabric (network) advances, from 10/100 Ethernet to gigabit, from InfiniBand to 10GbE and beyond. But then the memory bus is saturated and can’t keep up, so the speed and quantity of RAM and cache must increase too.
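The balance described above can be put in back-of-envelope terms. A minimal sketch, using purely illustrative numbers (the core counts, FLOP rates, and bandwidth figures below are assumptions for the sake of the arithmetic, not measurements of any real machine):

```python
# Back-of-envelope check: can the memory system feed the cores?
# All figures here are illustrative assumptions, not measurements.

def required_bandwidth_gbs(cores, gflops_per_core, bytes_per_flop):
    """GB/s of memory traffic needed to keep every core busy."""
    return cores * gflops_per_core * bytes_per_flop

# A hypothetical 8-core node sustaining 10 GFLOPS per core, where each
# floating-point operation touches 4 bytes of data on average:
need = required_bandwidth_gbs(cores=8, gflops_per_core=10, bytes_per_flop=4)
have = 25  # hypothetical memory-bus bandwidth, GB/s

print(f"need {need} GB/s, have {have} GB/s")  # need 320 GB/s, have 25 GB/s
```

Under these assumed numbers, the cores could consume data more than ten times faster than the bus can deliver it, which is exactly why faster CPUs force upgrades to memory and interconnect in turn.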
As CPU frequencies have for the most part stalled, Moore’s law is maintained by adding more cores: two, four, and eight on a single socket. But even this has its limits as we reach the boundary of how small we are able to manufacture a transistor and how effectively we can move heat without building quantum machines.
We add more processors in the form of GPGPUs, advanced accelerators which grew out of the graphics card industry. Nvidia is leading the charge. Ah! A new challenge is presented, for now we have 500 cores in a PCI slot and four slots to fill. But with 2,000 cores in a node and a million or more across an entire cluster, our programming models no longer hold up: the message passing interface (MPI) that moves fragments of a computational problem between nodes consumes ever more processing power, and the diminishing returns caused by fabric latency, OS jitter, and kernel interrupts are not easily solved.
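The diminishing returns described above are often illustrated with Amdahl's law: if some fixed fraction of a job is serial, or is lost to communication overhead, piling on cores eventually stops helping. A minimal sketch, with an assumed one percent overhead fraction chosen only for illustration:

```python
# Amdahl's law: the speedup on n cores when a fraction s of the work
# is serial (or lost to overhead such as message-passing latency).

def amdahl_speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With even 1% unavoidable overhead, a million cores cannot beat 100x:
for n in (4, 500, 2000, 1_000_000):
    print(f"{n:>9} cores -> {amdahl_speedup(n, 0.01):.1f}x speedup")
```

Run with these assumed numbers, 2,000 cores already deliver only about 95x, and a million cores top out just under 100x, which is why latency and jitter dominate the conversation once core counts explode.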
IBM builds a rack-mount node which takes four people to carry (let alone install). HP and Dell design higher density blades which require water cooling. Cray reinvents the wheel (it’s a very nice wheel). TI brings to market new digital signal processors while the ARM processor enters this industry with a many-core architecture, but the OS platform remains in its infancy, lacking industry support for compilers and management tools.
Tired yet? I have only just begun. Supercomputing is super confusing and yet somehow it works. The competition is fierce, new companies claiming to be the fastest and best in only their second year in the industry. Big guys buy up small guys as the small guys continue to innovate, racing to support the most advanced research in the world: bioinformatics, nuclear physics, brain mapping, three dimensional imaging of the earth beneath our feet, climate modeling, quantum interactions at the event horizon of a black hole.
We now understand more of the universe inside, immediately around, and far beyond ourselves than ever before. Our knowledge of how things work is growing at an exponential rate. We now compare the DNA of a newly discovered species with another, from wet lab to sequencing in a matter of hours, and we know how many millions of years separate the two in their evolutionary tree. We model with incredible accuracy the proteins that make up various parts of our body and the function of individual cells in the human brain. We better predict the movement of hurricanes up coastlines while the mathematical prediction of fluctuations on Wall Street continues uninterrupted.
I watched a 3D model of a protein-ligand interaction, the colors ranging from white to blue to red to represent the heat-energy in various parts of the system. It jumped, danced, and moved in apparently intelligent ways, an “arm” of the protein connecting to itself only to break again where the synthetic drug attempted to bond. The model from start to finish was over a minute in length, and yet it happens millions of times a second throughout our bodies. For a moment I felt alive in a way that is difficult to explain, picturing in my mind these molecular interactions inside of me at a scale I cannot fully comprehend.
I want to know how all of this works, all of it! But even in ten lifetimes it is impossible to gain this understanding, for the people who bring these discoveries to life are experts in increasingly narrow fields.
Next year I want to attend the show again and, as I have promised myself too many times before, read the posters and interview the grad students and professors who have traveled across the globe to present their latest findings, for their knowledge is our future, a future modeled in supercomputers.