Hungry for AI? New supercomputer contains 16 dinner-plate-size chips

  • 15 November 2022
  • 0 replies


Exascale Cerebras Andromeda cluster packs more cores than 1,954 Nvidia A100 GPUs.

BENJ EDWARDS - 11/14/2022, 9:16 PM


On Monday, Cerebras Systems unveiled its 13.5 million-core Andromeda AI supercomputer for deep learning, reports Reuters. According to Cerebras, Andromeda delivers more than 1 exaflop (1 quintillion operations per second) of AI computational power at 16-bit half precision.



Andromeda is itself a cluster of 16 Cerebras CS-2 computers linked together. Each CS-2 contains one Wafer Scale Engine 2 chip (called the "WSE-2"), which is currently the largest silicon chip ever made, at about 8.5 inches square and packed with 2.6 trillion transistors organized into 850,000 cores.
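A quick back-of-the-envelope check ties these figures together. The per-GPU CUDA core count used for comparison below is an assumption on my part, not something stated in the article:

```python
# Sanity-check the core counts quoted in the article.
# Assumption (not from the article): one Nvidia A100 has 6,912 CUDA cores.
CORES_PER_WSE2 = 850_000   # cores per Wafer Scale Engine 2 chip
NUM_CS2 = 16               # CS-2 systems in the Andromeda cluster
A100_CUDA_CORES = 6_912    # assumed per-GPU core count

andromeda_cores = CORES_PER_WSE2 * NUM_CS2
print(f"Andromeda cores: {andromeda_cores:,}")              # 13,600,000
print(f"Cores in 1,954 A100s: {1_954 * A100_CUDA_CORES:,}") # 13,506,048
```

Under that assumption, 16 WSE-2 chips total 13.6 million cores, which is consistent with the article's "13.5 million core" figure and with the claim of more cores than 1,954 A100 GPUs.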

Cerebras built Andromeda at a data center in Santa Clara, California, for $35 million. It's tuned for applications like large language models and has already been in use for academic and commercial work. "Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX," writes Cerebras in a press release.

The phrase "near-perfect scaling" means that as Cerebras adds more CS-2 computer units to Andromeda, training time on neural networks is reduced in "near perfect proportion," according to Cerebras.
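The idea can be sketched in a few lines. Under perfect data-parallel scaling, training time drops in direct proportion to the number of units; the baseline time below is a made-up placeholder, purely for illustration:

```python
# Idealized data-parallel scaling: doubling the number of CS-2 units
# halves the training time. Real-world scaling is "near-perfect,"
# i.e., slightly worse than this ideal curve.
def ideal_training_time(baseline_hours: float, num_units: int) -> float:
    """Training time under perfect data-parallel scaling."""
    return baseline_hours / num_units

baseline = 160.0  # hypothetical single-CS-2 training time, in hours
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} units -> {ideal_training_time(baseline, n):6.1f} h")
```

With all 16 units, the ideal model predicts a 16x reduction in training time; "near-perfect" means the measured speedup comes close to that line.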

