We have all heard the term "supercomputer", but have you ever thought about what a supercomputer actually is, or how it differs from an ordinary personal PC?
So, first of all, let's find out what a supercomputer is, and then we will go into further detail.
A supercomputer is a fast, massively parallel processing machine that can perform on the order of trillions or even quadrillions of floating-point operations per second (FLOPS).
A supercomputer has processing speed that no ordinary i7 processor can match. A supercomputer generally contains more than one CPU, which is what makes it such a tremendously fast processing unit, and when I say more than one CPU, I do not mean two. I mean thousands.
Supercomputers also come with enormous memory capacity, enough to support large-scale engineering and scientific analysis and many military applications.
A supercomputer can be thought of as a cluster of computers packed into one, able to complete intensive computational tasks in a fraction of the time. To give you a concrete example: the fastest supercomputer as of 2016 is the Sunway TaihuLight, which has a LINPACK benchmark rating of 93 petaflops and a total of 10,649,600 CPU cores across the entire system.
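To put 93 petaflops in perspective, here is a back-of-the-envelope calculation. The desktop figure of 100 gigaflops is a rough illustrative assumption, not a measurement:

```python
# Rough comparison: how long a fixed floating-point workload takes
# at different FLOPS ratings (illustrative figures, not benchmarks).

PETA = 10**15
GIGA = 10**9

taihulight_flops = 93 * PETA   # Sunway TaihuLight LINPACK rating
desktop_flops = 100 * GIGA     # ballpark assumption for a desktop CPU

workload = 10**18              # a job requiring 10^18 operations

print(f"TaihuLight: {workload / taihulight_flops:.1f} seconds")        # ~10.8 s
print(f"Desktop:    {workload / desktop_flops / 86400:.0f} days")      # ~116 days
```

Roughly a ten-second job on TaihuLight would keep a desktop busy for months, which is why fields like climate modelling depend on these machines.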
For comparison, even a Core i7 Extreme processor has only six or eight cores, and that is what is considered powerful for a typical programmer or analyst.
Weather forecasting, large-scale data analysis, quantum mechanics, and other computation-intensive tasks that would take a personal PC months are performed efficiently by a supercomputer.
Coming to the software side, most supercomputers now run Linux as their operating system, all built on the Linux kernel. Apart from being open source and offering greater flexibility than Windows, Linux can be stripped down and tuned for a specific machine, which helps with both efficiency and security.
Power consumption is a major issue in supercomputers, as a huge amount of power is required to run and cool the infrastructure.
Here is a high-level view of the architecture of IBM's Sequoia supercomputer:
How supercomputers achieve such great speed:
Instead of sequential processing, supercomputers rely on parallel processing. Most of our local systems use sequential processing, in which processes and commands execute one after another. In a supercomputer, processing is done in parallel: a task is divided into several sub-tasks, each sub-task is sent to a separate core for execution, and at the end the partial results are combined into one consolidated result.
Supercomputers have a central system that is responsible for breaking a task down into a number of smaller tasks, which are then assigned to different clusters of processors for execution.
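The divide, execute, and combine pattern described above can be sketched in miniature with Python's multiprocessing module. This is only an analogy: real supercomputers coordinate thousands of nodes with message-passing systems such as MPI, not a single-machine process pool.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Sub-task: each worker sums the squares of its own slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # "Central system" step: break the task into one sub-task per worker.
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers - 1)]
    chunks.append(data[(workers - 1) * size:])  # last chunk takes the remainder
    # Execute the sub-tasks in parallel on separate processes (cores).
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into one consolidated result.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_sum_of_squares(data))
```

The same three phases, partition, parallel execution, reduction, appear at every scale of parallel computing, from a four-core laptop to a million-core machine.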
This parallel execution is one of the most significant reasons that supercomputers are capable of performing resource-intensive tasks such as analysing weather-forecast data.
So the number of cores, the number of processors, and the memory capacity are the main factors that make a supercomputer faster than its predecessor. Obviously, there are many other factors that determine its speed, but these are some of the major ones.
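One caveat to "more cores means more speed" is Amdahl's law: the portion of a program that cannot be parallelized caps the overall speedup, no matter how many cores are added. A quick illustration, where the 95% parallelizable fraction is a made-up figure for the example:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    where p is the parallelizable fraction and n the core count."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A program that is 95% parallelizable can never exceed a 20x speedup,
# even on a supercomputer-scale core count.
for cores in (8, 1_000, 1_000_000):
    print(f"{cores:>9} cores -> {amdahl_speedup(0.95, cores):.1f}x")
```

This is why supercomputer workloads like weather simulation are designed so that almost all of the work can run in parallel.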
The performance of a supercomputer:
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time.
Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g., a very complex weather simulation application.
You can check out the TOP500 list of the world's fastest supercomputers, with their specific configurations, at the TOP500 organization's site.