New State of the Art in NLP: Beyond GPUs

8-RDU DataScale system

According to Olukotun, SambaNova’s platform is designed to scale from tiny electronic devices to enormous remote datacenters. “SambaNova’s innovations in machine learning algorithms and software-defined hardware will dramatically improve the performance and capability of intelligent applications,” Olukotun added. “The flexibility of the SambaNova technology will enable us to build a unified platform providing tremendous benefits for business intelligence, machine learning, and data analytics.”

One thing’s for certain: SambaNova’s founders are a decorated bunch. Olukotun, who recently received the IEEE Computer Society’s Harry H. Goode Memorial Award, is the leader of the Stanford Hydra Chip Multiprocessor (CMP) research project, which produced a chip design that pairs four specialized processors and their caches with a shared secondary cache. Ré, an associate professor in the Department of Computer Science at Stanford University’s InfoLab, is a MacArthur genius award recipient who’s also affiliated with the Statistical Machine Learning Group.

Groundbreaking Results, Validated in Our Research Laboratories

SambaNova has been working closely with many organizations over the past few months and has established a new state of the art in NLP. This advancement in NLP deep learning is illustrated by a GPU-crushing, world-record performance result achieved on SambaNova Systems’ Dataflow-optimized system. We used a new method, which we call ONE (Optimized Neural network Execution), to train multi-billion-parameter models. This result highlights orders-of-magnitude performance and efficiency improvements, achieved by using significantly fewer, more powerful systems compared to existing solutions.

Break Free from the GPU Handcuffs: Experience SambaNova DataScale

SambaNova Systems’ Reconfigurable Dataflow Architecture™ (RDA) enables massive models that previously required 1,000+ GPUs to run on a single system, while utilizing the same programming model as on a single SambaNova Systems Reconfigurable Dataflow Unit™ (RDU).

SambaNova RDA is designed to efficiently execute a broad range of applications. RDA eliminates the instruction-set bottlenecks that constrain conventional hardware today.

Run Large Model Architectures with a Single SambaNova Systems DataScale™ System

With GPU-based systems, developers have been forced to do complicated cluster programming for multiple racks of systems and to manually program data parallelism and workload orchestration.
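
To make the contrast concrete, the following is a minimal sketch of the per-GPU boilerplate such cluster programming typically involves. It uses standard PyTorch DistributedDataParallel with a placeholder model and synthetic data; it is purely illustrative and is not SambaNova code.

    # Minimal sketch of per-process GPU cluster boilerplate (illustrative only).
    # Uses standard PyTorch DistributedDataParallel; the model and data are toys.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # Every process must join the process group and claim its GPU by hand.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy stand-in for a large Transformer.
        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank),
                    device_ids=[local_rank])
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        # Data must be explicitly sharded so each rank sees a distinct slice.
        dataset = torch.utils.data.TensorDataset(torch.randn(4096, 1024))
        sampler = torch.utils.data.distributed.DistributedSampler(dataset)
        loader = torch.utils.data.DataLoader(dataset, batch_size=32, sampler=sampler)

        for (batch,) in loader:
            optimizer.zero_grad()
            loss = model(batch.cuda(local_rank)).pow(2).mean()
            loss.backward()          # gradients are all-reduced across all GPUs
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        # Launched once per GPU, e.g.:
        #   torchrun --nnodes=4 --nproc_per_node=8 train.py
        main()

Every piece of this (process groups, device pinning, samplers, launch scripts) has to be maintained and scaled by hand, and models too large for a single GPU's memory add model-parallel or pipeline-parallel partitioning on top of it.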

A single SambaNova DataScale System with petaflops of performance and terabytes of memory ran the 100-billion parameter ONE model with ease and efficiency, and with plenty of usable headroom. Based on our preliminary work and the results we achieved, we believe running a trillion-parameter model is quite conceivable.
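
For a sense of scale, here is a rough back-of-envelope calculation (our illustrative estimate, not a SambaNova benchmark; it assumes 16-bit weights for simply holding a model and roughly 16 bytes of state per parameter for mixed-precision Adam training) showing why terabytes of memory are the relevant unit at this size.

    # Rough memory-footprint estimate for very large models (illustrative only;
    # assumes 16-bit weights for inference and ~16 bytes/parameter for
    # mixed-precision Adam training: fp16 weights + gradients + fp32 optimizer state).
    BYTES_PER_PARAM_INFERENCE = 2
    BYTES_PER_PARAM_TRAINING = 16

    for params in (100e9, 1e12):  # 100-billion and 1-trillion parameters
        hold_tb = params * BYTES_PER_PARAM_INFERENCE / 1e12
        train_tb = params * BYTES_PER_PARAM_TRAINING / 1e12
        print(f"{params / 1e9:6.0f}B parameters: "
              f"~{hold_tb:.1f} TB to hold, ~{train_tb:.1f} TB to train")

By this estimate, simply holding a trillion-parameter model takes on the order of 2 TB, and training it closer to 16 TB, which is why a single memory-rich system changes the deployment picture.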

The proliferation of Transformer-based NLP models continues to stress the boundaries of GPU utility. Researchers are continuing to develop bigger models, and as a result the stress fractures on GPU-based deployments are also getting bigger. By maintaining the same simple programming model from one to many RDUs, organizations of all sizes can now run big models with ease and simplicity.

The sophistication of SambaNova Systems’ SambaFlow™ software stack paired with our Dataflow-optimized hardware eliminates overhead and maximizes performance to yield unprecedented results and new capabilities.

