Today, CERN’s Large Hadron Collider started up for the first time. It is not collecting data yet, but when it does, it will generate 300Gb/second, requiring a significant amount of computing resources. This raw input will be filtered locally into a more reasonable stream of 300Mb/second. That stream will be processed again at the CERN data center, and reduced once more before being distributed to research centers across the world. Even then, dedicated 10Gb/s links will be required in order to move around 40Tb of data every day.
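To get a feel for these figures, here is a quick back-of-the-envelope check (a sketch only: it assumes decimal units and takes the 40Tb/day and 10Gb/s figures above at face value):

```python
# Rough check: how long does it take to move 40Tb/day over a 10Gb/s link?
daily_volume_gb = 40 * 1000   # 40Tb per day, decimal units assumed
link_gb_per_s = 10            # one dedicated 10Gb/s link

transfer_seconds = daily_volume_gb / link_gb_per_s
print(f"{transfer_seconds:.0f} s, i.e. about {transfer_seconds / 3600:.1f} hours per day")
```

In other words, the daily volume fits in roughly an hour of a 10Gb/s link’s capacity, which leaves headroom for peaks and for serving several destinations.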
Even by today’s standards, these numbers are staggering: a 10Gb/s link remains about 1,000 times faster than the bandwidth we experience every day. But the really amazing thing is that the initial design of the computers that handle all this data at CERN was made 15 or 20 years ago, when the project started.
In 1988, I was an assistant in an American supercomputing research center, where some researchers were tasked with making initial designs for the computers that would handle the LHC’s American competitor, the SSC. Both the research center and the accelerator have since been shut down, but I found an interesting description of that work on an SSC fact sheet:
In concert with industry, the SSC Laboratory is designing ultra fast parallel computing systems capable of processing the equivalent of 10,000 floppy disks of data every second. This cooperative effort is expected to facilitate the entry of high performance electronics into the commercial marketplace.
10,000 floppy disks is around 14Gb, which means that the estimate was off by a factor of 20. That looks large, but it’s not that bad. Back then, this number was enormous: the size of hard drives was still measured in megabytes, and even our largest supercomputer (an ETA-10, the world’s fastest at the time) ran at a cool 200MHz. Despite this discrepancy between the resources available in 1988 and the target computer (then set for 2003, 15 years later), the scientists determined that, if Moore’s Law remained valid, it would be possible to handle such an enormous amount of data. They even provided blueprints for such a system.
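The arithmetic behind that factor is easy to reproduce. A sketch, assuming the standard 1.44MB capacity of a 3.5-inch floppy and the 300Gb/second raw rate quoted above:

```python
# SSC estimate: "10,000 floppy disks of data every second".
FLOPPY_MB = 1.44                          # standard 3.5" floppy capacity, assumed
ssc_gb_per_s = 10_000 * FLOPPY_MB / 1000  # 14.4, i.e. "around 14Gb" per second

lhc_gb_per_s = 300                        # LHC raw rate quoted earlier
factor = lhc_gb_per_s / ssc_gb_per_s      # roughly 20
print(f"Estimate: {ssc_gb_per_s:.1f}Gb/s, off by a factor of about {factor:.0f}")
```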
On a smaller scale, people who design standards often face the same issue. Between the time work starts on a standard and the time it becomes a mainstream product, several years have usually passed. Standard designers must take that delay into account from day one. This is a tough job, and it is even tougher in “extreme” industries, where the constraints are strong: supercomputing is at one end of the spectrum, and smart cards are at the other.
The trade-off is not that simple to make. If we underestimate the evolution of computing power, we limit the possible exploitation of the new technology, and hence its value. On the other hand, if we overestimate the evolution of computing power, we may end up with a technology that cannot be mapped into products (because whatever we designed does not fit on a smart card chip).
When we started the work on Java Card 3, I was afraid that we were in exactly that situation. I gradually changed my mind, although a little voice still tells me that we could have done a bit more on the optimization side. Well, as of today, the technology can be implemented, which is good news. However, it requires the largest chips available on the market (i.e., the most expensive). The software is not cheap, either, because Java Card 3 is the largest smart card system around (i.e., the most expensive).
The next step is to transform the technology into successful products. There are usually two ways to do that, but here, we will most likely have to combine them: first, we have to make the added value of the product visible, in order to convince our customers to pay more (at least for the software), and then we have to wait a little in order to let the price of chips go down.
This may look difficult, but it is definitely achievable. The history of smart cards includes plenty of products that were at some point too large, too expensive, or both. In many cases (Java Card, SIM Toolkit, and others), these technologies ended up in millions (or billions) of cards.
The hard part is that we now face a few years of hard work promoting Java Card 3, making good products, and finding good applications. Only then will there be another successful product on that list.