Nvidia pushes ARM supercomputing

(Image credit: Lawrence Berkeley National Laboratory [Public domain])

Graphics chip maker Nvidia is best known for consumer computing, vying with AMD’s Radeon line for framerates and eye candy. But the venerable giant hasn’t ignored the rise of GPU-powered applications that have little or nothing to do with gaming. In the early 2000s, UNC researcher Mark Harris began popularizing the term “GPGPU,” referring to the use of Graphics Processing Units for non-graphics tasks. But most of us didn’t really become aware of those non-graphics possibilities until GPU-powered bitcoin-mining code was released in 2010; shortly thereafter, strange boxes packed nearly solid with high-end gaming cards started popping up everywhere.

From digital currencies to supercomputing

The Association for Computing Machinery awards one or more $10,000 Gordon Bell Prizes every year to research teams that have made breakout achievements in performance, scale, or time-to-solution on challenging science and engineering problems. Five of the six entrants in 2018—including both winning teams, from Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory—used Nvidia GPUs in their supercomputing arrays; the Lawrence Berkeley team included six people from Nvidia itself.

In March of this year, Nvidia acquired Mellanox, maker of the high-performance network interconnect technology InfiniBand. (InfiniBand is frequently used as an alternative to Ethernet for massively high-speed connections between storage and compute stacks in the enterprise, with real-world throughput of up to 100 Gbps.) This is the same technology the LBNL/Nvidia team used in 2018 to win a Gordon Bell Prize with a project on deep learning for climate analytics.
