The Pawsey Supercomputing Centre, a government-supported high-performance computing facility in Australia, will reportedly provide researchers with NVIDIA graphics processing units for research on its Nimbus cloud. Twelve NVIDIA V100 GPUs with 16 GB of memory each will be deployed across six HPE SX40 servers, which are to be added to Nimbus, an Ocata-based OpenStack installation at the centre.
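In an OpenStack cloud such as Nimbus, GPUs are typically exposed to virtual machines through PCI passthrough, selected by flavor extra specs. A hypothetical sketch of how that could look (the alias name `v100`, the flavor sizes, and the image name are illustrative assumptions, not Pawsey's actual configuration):

```shell
# Hypothetical: define a GPU flavor whose instances receive one V100
# via PCI passthrough (requires a matching "v100" alias configured in
# nova.conf on the compute hosts).
openstack flavor create g1.v100 \
    --vcpus 8 --ram 65536 --disk 40 \
    --property "pci_passthrough:alias"="v100:1"

# Boot a VM on the GPU flavor; the guest then sees the card and can
# install NVIDIA drivers as on bare metal.
openstack server create --flavor g1.v100 --image ubuntu-16.04 gpu-vm
```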
The GPUs are expected to be used to accelerate AI, graphics, and high-performance computing workloads, giving researchers straightforward access to virtual machines with substantial computing power.
Pawsey officials have stated that the massively parallel GPUs, each with more than 5,000 CUDA cores and over 600 Tensor Cores, will significantly accelerate AI workloads. They added that a single GPU can deliver roughly the same effective performance as a hundred central processing units. Each NVIDIA V100 provides roughly 7 to 7.8 TFLOPS of double-precision performance, and 14 to 15.7 TFLOPS at single precision.
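Those figures are consistent with the V100's theoretical peak throughput, which can be estimated from core count and clock speed. A back-of-the-envelope sketch (the 1530 MHz boost clock and the FP64 unit count are assumptions based on the SXM2 variant of the V100, not figures stated in the article):

```python
# Rough check of the quoted TFLOPS figures for an NVIDIA V100 (SXM2),
# assuming a ~1530 MHz boost clock. Peak FLOPS = units x clock x 2,
# since one fused multiply-add counts as 2 floating-point operations.

CUDA_CORES_FP32 = 5120      # FP32 CUDA cores on a V100
FP64_UNITS = 2560           # FP64 units run at half the FP32 count
BOOST_CLOCK_HZ = 1530e6     # assumed boost clock (SXM2 V100)

def peak_tflops(units: int, clock_hz: float) -> float:
    """Theoretical peak throughput in TFLOPS: units * clock * 2."""
    return units * clock_hz * 2 / 1e12

fp32 = peak_tflops(CUDA_CORES_FP32, BOOST_CLOCK_HZ)
fp64 = peak_tflops(FP64_UNITS, BOOST_CLOCK_HZ)

print(f"single precision: {fp32:.1f} TFLOPS")  # ~15.7 TFLOPS
print(f"double precision: {fp64:.1f} TFLOPS")  # ~7.8 TFLOPS
```

The lower ends of the quoted ranges (7 and 14 TFLOPS) correspond to the PCIe variant of the card, which runs at a slightly lower boost clock.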
Commercial cloud providers such as IBM, Amazon, and Google already offer GPU-accelerated computing, and Microsoft provides GPUs on its Azure cloud service. Microsoft has reportedly also begun trialling FPGA (field-programmable gate array) accelerators for AI workloads.
Pawsey has reportedly already deployed HPE nodes with two GPUs in Nimbus, which offers AMD Opteron central processing units with more than 3,000 cores and over 200 TB of storage. The centre has also announced that its researchers can use the GPU nodes free of charge.