We are now leaving an era of AI exploration and entering one of pervasive AI – accelerating every role, in every organization, across every industry. But wide deployment of AI demands a platform that provides accuracy, reliability, and privacy, without an exponential increase in cost and complexity.
SambaNova Suite – the first purpose-built, full stack large language model (LLM) platform with world class open-source models – is now powered by the revolutionary SN40L Reconfigurable Data Unit (RDU). The SN40L is SambaNova’s fourth generation chip, purpose-built for the most demanding LLM workloads – capable of both dense and sparse compute, and including both large and fast memory.
SambaNova Suite includes the latest, most powerful open-source models and is capable of serving up to 5 trillion parameter models, with 256k+ sequence length. This is only possible because SambaNova provides a fully integrated stack, delivering higher quality models, with higher accuracy, faster inference and training, all at a lower total cost of ownership.
Introducing SambaNova Suite, powered by SN40L
Watch the Composition of Experts demo to discover what SambaNova Suite can bring to your enterprise
Bigger models deliver better results
Larger modular models are more accurate and more versatile, able to serve multiple use cases across many lines of business. Training and running inference on very large models (1 trillion+ parameters) demand high-performance hardware and a fully integrated software stack to reliably deliver accuracy and performance.
Higher performance with lower TCO
Traditional systems require linear or greater-than-linear expansion in compute as model demands increase. SambaNova’s unique technology dramatically changes this equation. As a result, we deliver higher performance with a smaller footprint and at a lower total cost of ownership.
Own your model: Build an asset & avoid lock-in
AI model ownership means that the investments organizations make become a valuable asset and a platform for the future. SambaNova customers can retain ownership of any model adapted with their data. By owning their model, organizations can continuously fine-tune with their data, capture competitive advantages, ensure privacy and security, and benefit from full visibility into weights and training methods, making it easy to meet compliance requirements. Finally, owning your model means no vendor lock-in.
Truly full stack
AI places enormous demands on hardware and software. SambaNova Suite is the first full stack AI solution, from chip to models, purpose-built to power the largest models. Offering state-of-the-art open-source models out of the box, plus MLOps software to manage the platform end to end, SambaNova Suite is delivered as a complete, optimized AI platform on premises or in the cloud, reducing total cost of ownership and time to value.
Available, reliable, predictable compute
Buying dedicated systems means that you are no longer at the mercy of silicon availability and market price fluctuations – and you never pay a penalty for failed experiments or training runs. Pay a single price for guaranteed availability and supply.
Purpose built for AI: Training and inference on a single platform
The SN40L, SambaNova’s fourth-generation chip, is purpose-built for AI. It combines a three-tiered memory architecture – on-chip memory, high-bandwidth memory, and high-capacity memory – with a revolutionary dataflow architecture, enabling our SambaFlow software to optimize even the largest models for training and inference on the same platform. Taken together, this increases performance, accuracy, and efficiency – resulting in a smaller footprint, lower cost, and faster time to value.
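The tiered-memory idea above can be sketched in software. The following is a minimal, illustrative model only – the tier names, capacities, and eviction policy are assumptions for illustration, not SN40L specifics. A lookup walks the tiers from fastest to largest, hot data is promoted back to the fastest tier, and cold data spills downward:

```python
# Illustrative three-tier memory sketch (assumed behavior, not SN40L internals):
# tier 0 is smallest/fastest, later tiers are larger/slower. Each tier is an
# LRU-ordered dict; overflow from one tier spills into the next.
from collections import OrderedDict

class TieredStore:
    def __init__(self, capacities):
        # Fastest tier first, e.g. [on-chip, HBM, high-capacity].
        self.tiers = [OrderedDict() for _ in capacities]
        self.capacities = capacities

    def put(self, key, value):
        self._insert(0, key, value)

    def _insert(self, level, key, value):
        tier = self.tiers[level]
        tier[key] = value
        tier.move_to_end(key)
        # Evict least-recently-used entries down to the next, larger tier.
        while len(tier) > self.capacities[level]:
            old_key, old_val = tier.popitem(last=False)
            if level + 1 < len(self.tiers):
                self._insert(level + 1, old_key, old_val)

    def get(self, key):
        for level, tier in enumerate(self.tiers):
            if key in tier:
                value = tier.pop(key)
                self._insert(0, key, value)  # promote hot data to fastest tier
                return level, value
        return None, None
```

The design choice this sketches is simple: frequently used data stays in the fastest tier, while capacity pressure pushes colder data outward instead of failing outright – the same general pattern a large/fast memory hierarchy relies on.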
The latest open-source models, fine-tuned for you
SambaFlow software automatically optimizes your models across the software and hardware stack. SambaNova Suite includes a wide range of the latest, most powerful pre-trained open-source models, which can be adapted for each industry and use case and are easily deployable through an intuitive user interface. The models can be fine-tuned on your internal data and are then owned by your organization in perpetuity.
Composition of Experts (CoE) models deliver performance, efficiency, and security
As models become bigger and address more use cases, access control and security become even more important – for example, internal salary data should not be accessible to everyone. This can be addressed with a modular approach: building a large model as a composition of smaller expert models that can be continually improved, adapted, and extended. Using SambaNova hardware and software and their unique memory advantages, we can deliver massive models (up to 5 trillion parameters) without sacrificing inference throughput or security.
The largest models, with room to grow
The best models – in terms of accuracy, features, and capabilities – are typically also the largest. Currently, GPT-4 is the largest and best closed-source model and Falcon 180B is the largest and best open-source model. But this quality comes at a price: they are exceedingly complex and time-consuming to train and very costly to run. SambaNova uses a Composition of Experts (CoE) design that groups an unlimited number of smaller models together to create a single large model. This provides the benefits of massive models with 1 trillion parameters (and a pathway to 5 trillion) at the training cost and inference latency of a much smaller model.
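As a rough illustration of the CoE idea – not SambaNova's actual implementation, and with every class and function name here hypothetical – a router can dispatch each prompt to a smaller expert model, enforcing per-expert access control at routing time so that, for instance, a salary-data expert only answers authorized roles:

```python
# Minimal Composition of Experts (CoE) routing sketch. All names are
# illustrative assumptions, not SambaNova APIs.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Expert:
    name: str
    allowed_roles: set                 # simple per-expert access control
    respond: Callable[[str], str]      # stands in for a small expert model

class CoERouter:
    def __init__(self):
        self.experts: Dict[str, Expert] = {}

    def register(self, expert, keywords):
        # Map trigger keywords to an expert (a real router would use a
        # learned classifier rather than keyword matching).
        for kw in keywords:
            self.experts[kw] = expert

    def route(self, prompt, role):
        for kw, expert in self.experts.items():
            if kw in prompt.lower():
                if role not in expert.allowed_roles:
                    return "access denied"
                return expert.respond(prompt)
        return "no expert matched"
```

Because each expert is small and independently replaceable, one can be fine-tuned or swapped without retraining the whole composition – which is the efficiency and security argument the passage above makes.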
Schedule a meeting
Learn how SambaNova can advance your AI initiatives to help you achieve your impossible.