NVIDIA's EOS supercomputer, you say? … 3 times the speed, you say? … making use of the same 'infrastructure' without (too much) extra cost, you say? Check this out: https://www.engadget.com/nvidias-eos-supercomputer-just-broke-its-own-ai-training-benchmark-record-170042546.html

>>> … The impressive improvement in performance, granted, came from the fact that this recent round of tests … However, NVIDIA explains that despite tripling the number of GPUs, it managed to maintain 2.8x scaling in performance — a 93 percent efficiency rate — through the generous use of software optimization.

>>> "Scaling is a wonderful thing," Salvator said. "But with scaling, you're talking about more infrastructure, which can also mean things like more cost." An efficiently scaled increase means users are "making the best use of your infrastructure so that you can basically just get your work done as fast [as possible] and get the most value out of the investment that your organization has made."

>>> NVIDIA plans to apply these expanded compute abilities to a variety of tasks, including the company's ongoing work in foundational model development, AI-assisted GPU design, neural rendering, multimodal generative AI and autonomous driving systems.
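For the curious, that 93 percent figure is just the measured speedup divided by the ideal (linear) speedup you'd get if tripling the GPUs tripled throughput. A quick back-of-the-envelope check, using only the numbers quoted in the article:

```python
# Scaling efficiency: fraction of the ideal (linear) speedup actually achieved.
# Figures from the article: GPU count tripled, measured speedup was 2.8x.
gpu_multiplier = 3.0      # 3x the GPUs (ideal case: 3x the throughput)
measured_speedup = 2.8    # NVIDIA reports 2.8x actual performance scaling

efficiency = measured_speedup / gpu_multiplier
print(f"Scaling efficiency: {efficiency:.0%}")  # -> Scaling efficiency: 93%
```

Which is exactly why Salvator's point about efficient scaling matters: the closer that ratio sits to 100 percent, the less of the extra hardware spend is wasted on communication and coordination overhead.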