Evolution Of Supercomputers Will Continue
Technological and commercial considerations will slow the explosive growth in their speeds witnessed so far, but their evolution will continue
Photo: Bull Sequana X1310 supercomputer
A supercomputer is a machine whose processing power significantly exceeds that of a conventional computer. Its processing power is measured in floating point operations per second (FLOPS), and supercomputers are characterised by the use of a large number of microprocessors working in parallel. Conventional computers, by contrast, are evaluated on their ability to execute millions of instructions per second (MIPS), or on variants such as Dhrystone (DMIPS) and Whetstone (WMIPS), which exercise the microprocessor with a fixed set of instructions.
Today's supercomputers have speeds measured in petaflops (PFLOPS, 10^15 FLOPS). The fastest supercomputer at present, IBM's Summit, exceeds 100 PFLOPS. This performance is measured using the LINPACK benchmark, which involves solving a large dense system of linear equations. The figures are not directly comparable, but for reference a conventional computer delivers on the order of 100,000s of MIPS.
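To make the idea concrete, here is a minimal sketch of a LINPACK-style measurement in Python with NumPy: time the solution of a dense linear system and convert the elapsed time into FLOPS. The matrix size and the 2/3·n³ operation count are the usual approximations; this illustrates the principle and is not the official HPL benchmark.

```python
# Minimal LINPACK-style measurement: time the solution of a dense
# system A x = b and convert the elapsed time into GFLOPS.
# Illustrative sketch only, not the official HPL benchmark.
import time
import numpy as np

n = 4000                                  # problem size (assumption)
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorisation + solve
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK flop count
print(f"{flops / elapsed / 1e9:.1f} GFLOPS on this machine")
```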
Conventional supercomputers
Modern supercomputers achieve this orders-of-magnitude higher performance by using a very large number of microprocessors. Summit uses 9,216 CPUs with 22 cores each, plus 27,648 GPUs, whereas a conventional computer may have a single microprocessor with fewer than 100 cores. In a supercomputer, the connection between nodes, called the interconnect, must have high bandwidth and low latency.
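The principle of splitting one numerical job across many processing elements can be hinted at even on a single machine. The toy sketch below, assuming only Python's standard multiprocessing module, divides a summation across the available CPU cores; a supercomputer applies the same idea across thousands of nodes joined by the interconnect.

```python
# Toy illustration of data parallelism: split one large summation
# across all available CPU cores. A real supercomputer does the same
# across thousands of nodes joined by a high-bandwidth interconnect.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = cpu_count()
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"sum of squares below {n}: {total} ({workers} cores)")
```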
The large number of nodes means supercomputers are large and bulky and consume power in megawatts. Managing this power, which is dissipated as heat, is a major technological challenge. Their size allows water to be used as a coolant in addition to air, which in turn makes the unit more compact.
Grid computers
Grid computing is also considered a form of supercomputing. Its computing power is generally lower, though grid systems with speeds in PFLOPS do exist. The approach uses the spare computing power of many distributed, heterogeneous computers connected over ordinary networks such as Ethernet. It is cheaper than a conventional supercomputer, and its power comes from the sheer number of participating computers. The Berkeley Open Infrastructure for Network Computing (BOINC) is the most notable grid computing platform. The Large Hadron Collider (LHC), which studies elementary particle collisions and produces data at the rate of petabytes per year, uses grid computing to analyse the collision data.
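The pattern behind such platforms is simple: a coordinator cuts a job into independent work units and hands each one to whichever machine is free, merging the results at the end. The sketch below is a purely illustrative, single-machine simulation of that dispatch loop; it does not use the actual BOINC API.

```python
# Illustrative work-unit dispatch, the pattern behind volunteer/grid
# computing: independent units are handed to whichever worker is free,
# and results are merged by the coordinator. Not the BOINC API.
from queue import Queue
from threading import Thread

def crunch(unit):
    # stand-in for the real science kernel run on a volunteer machine
    return sum(x * x for x in unit)

def worker(tasks, results):
    while True:
        unit = tasks.get()
        if unit is None:
            break
        results.put(crunch(unit))

tasks, results = Queue(), Queue()
workers = [Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for w in workers:
    w.start()

data = list(range(1_000))
for i in range(0, len(data), 100):        # cut the job into work units
    tasks.put(data[i:i + 100])
for _ in workers:
    tasks.put(None)                        # poison pills to stop workers
for w in workers:
    w.join()

total = sum(results.get() for _ in range(len(data) // 100))
print("combined result:", total)
```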
Applications
Current
Supercomputers have several types of usage. First, many countries use them for military purposes, e.g. simulating nuclear weapons tests, since public opinion makes actual testing difficult; they are also used in cryptography. Secondly, governments use them in other areas, e.g. weather forecasting and forecasting of adverse marine conditions. Thirdly, supercomputers are widely used in research, e.g. molecular modelling, global climate modelling, simulation of the universe and genomics. Finally, corporations use supercomputers for aircraft design, drug effectiveness studies, industrial design and so on.
Emerging
In the next decade many new technologies will become common, e.g. IoT, AR/VR and driverless vehicles, which will send massive amounts of data to the cloud for processing. Genomes of various organisms will be decoded, and sequencing the genome of an individual human will become routine, allowing customised drugs to be created in a matter of days. Genetically modified plants with higher yields and resistance to droughts, pests and the like will be created. Artificial Intelligence will thrive with better natural language processing and recognition capabilities. These technologies, together with research in areas such as high-density 3D wave equations in oil exploration, protein folding in biology, mapping of the human brain and hypersonic flow simulations for vehicle re-entry, coupled with the quest for military supremacy, will drive the need for the next generation of supercomputers.
Technological challenges
Current technologies
Commercially available microprocessors have transistor sizes of tens of nanometres, and research is focused on transistors of a few nanometres. At that scale quantum effects can no longer be ignored: tunnelling can occur, whereby an electron passes through the barriers within a transistor, rendering it non-functional. The biggest issue, however, is heating, which depends both on the number of transistors and on the frequency of operation. Hence the size of transistors cannot shrink easily, nor can their operating frequency increase much. As a result the speed of individual microprocessors is rising only slowly, and they cannot evolve to cover the use cases of supercomputers.
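The heating referred to above is commonly approximated by the dynamic power relation P ≈ C·V²·f, i.e. proportional to the switched capacitance, the square of the supply voltage and the clock frequency. The figures in the sketch below are illustrative assumptions chosen only to show how power grows linearly with frequency.

```python
# Dynamic power of CMOS logic scales roughly as P = C * V^2 * f.
# The capacitance and voltage figures below are illustrative, not
# data for any real chip; the point is the linear growth with f.
C = 1e-9          # effective switched capacitance in farads (assumption)
V = 1.0           # supply voltage in volts (assumption)

for f_ghz in (1, 2, 3, 4, 5):
    f = f_ghz * 1e9
    power = C * V**2 * f
    print(f"{f_ghz} GHz -> {power:.1f} W for this block of logic")
```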
Emerging technologies
Among other types of computers, the most promising are quantum computers, which exploit the laws of quantum mechanics and use qubits that represent both binary states simultaneously with certain probabilities. They are commercially available with small numbers of qubits, and the key technical challenge is to have, say, 50 qubits working in coherence. The number 50 is significant because "quantum supremacy" could be achieved around that point: the state at which the computational power of a quantum computer can no longer be matched by classical machines.
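The significance of roughly 50 qubits can be seen with back-of-the-envelope arithmetic: simulating n qubits classically requires storing 2^n complex amplitudes, and at 16 bytes per double-precision amplitude the memory needed leaves ordinary machines behind well before n reaches 50.

```python
# Memory needed to hold the full state vector of an n-qubit register:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
for n in (30, 40, 50):
    gib = 2 ** n * 16 / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB of RAM just for the state vector")
```

The output grows from 16 GiB at 30 qubits to roughly 16 million GiB (about 16 PiB) at 50 qubits, beyond the memory of any classical machine.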
Another area of research is the use of DNA for computing, which has the advantage of millions of molecules working in parallel and also offers the benefits of miniaturisation. But reading the results can take days, the process needs human intervention, and it requires massive memory during the computation phase.
These and other emerging technologies will take time to compete with even a conventional computer.
Supercomputers
However, supercomputers with ever-increasing speeds continue to come to market. The fastest supercomputer in 1995 had a speed measured in gigaflops (10^9 FLOPS). In 2005, IBM's Blue Gene/L was about a thousand times faster at 280.6 TFLOPS. Today, Summit is about 500 times faster than Blue Gene/L. In the next five years, exascale supercomputers, with speeds in exaflops (EFLOPS, 10^18), are expected to arrive. However, there are multiple challenges to overcome.
Power efficiency is the biggest factor in the design of exascale supercomputers and shapes the design of everything else. An exascale machine needs a power efficiency of around 50 GFLOPS per watt to restrict power consumption to 20 MW, whereas the best figures today are around a third of that. Energy-aware system scheduling, low-power components and machine-room cooling can help here. One supercomputer, SuperMUC, uses hot water at around 40°C as a coolant.

The speed of the interconnect will need to increase at least fourfold from the 100 GB/s used in Summit, and optics will play a much bigger role in data transfer between nodes as well as between nodes and memory. Higher compute speeds will widen the divergence between processing power and memory bandwidth, often called the memory wall. 3D stacking of memory offers a solution to this and also eases the interconnect bandwidth challenge; memories based on HBM (High Bandwidth Memory) or HMC (Hybrid Memory Cube) are expected to cater to the high memory-bandwidth requirements.
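The 50 GFLOPS-per-watt target mentioned above follows directly from the stated numbers, as the quick check below shows; the Summit figures used as a reference point are approximate public values, included only for comparison.

```python
# Required efficiency for an exascale machine inside a 20 MW budget,
# compared with a petascale reference (all figures approximate).
target_flops = 1e18          # 1 EFLOPS
power_budget_w = 20e6        # 20 MW

required = target_flops / power_budget_w / 1e9
print(f"required efficiency: {required:.0f} GFLOPS per watt")

# Summit-class reference point (approximate public figures, assumption)
summit_flops = 148.6e15      # ~148.6 PFLOPS sustained LINPACK
summit_power_w = 10e6        # ~10 MW
current = summit_flops / summit_power_w / 1e9
print(f"petascale reference:  {current:.1f} GFLOPS per watt")
print(f"gap to close:         {required / current:.1f}x")
```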
With such a large number of components, the chance of transient or permanent failure of some component increases. The software will not only need to handle the increased concurrency and parallelism but also provide a fault-tolerant system.
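One common way of providing such fault tolerance (an illustrative choice here, not one specified in the text) is checkpoint/restart: the application periodically saves its intermediate state so a failed run can resume from the last checkpoint instead of starting over. A minimal sketch, with arbitrary file name and interval:

```python
# Minimal checkpoint/restart sketch: a long iterative computation
# periodically saves its state so it can resume after a failure.
# File name and interval are arbitrary illustrative choices.
import os
import pickle

CHECKPOINT = "state.pkl"
TOTAL_STEPS = 1_000_000
INTERVAL = 100_000

# Resume from the last checkpoint if one exists, else start fresh.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        step, acc = pickle.load(f)
else:
    step, acc = 0, 0.0

while step < TOTAL_STEPS:
    acc += step * 1e-6            # stand-in for the real computation
    step += 1
    if step % INTERVAL == 0:      # persist progress periodically
        with open(CHECKPOINT, "wb") as f:
            pickle.dump((step, acc), f)

print("result:", acc)
os.remove(CHECKPOINT)             # clean up once the run completes
```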
Future
There are multiple use cases for which supercomputers with even higher speeds are needed. Technological and commercial considerations will slow the explosive growth in their speeds witnessed so far, but their evolution will continue.
Disclaimer: The views expressed in the article above are those of the author and do not necessarily represent or reflect the views of this publishing house. Unless otherwise noted, the author is writing in his/her personal capacity. They are not intended and should not be thought to represent official ideas, attitudes, or policies of any agency or institution.
Sandeep K Chhabra
Sandeep K Chhabra is a software professional working as General Manager at Ericsson India Global Services Pvt Ltd (EGIL). He holds a B.Tech in Computer Science and Technology from IIT Delhi and has more than 24 years of experience in the IT industry. He is a digital/business transformation expert, startup mentor and an evangelist of emerging technologies.