AI Innovation in the Foundation Model Era
The world is entering a new era of AI.
Twenty years ago, the emergence of machine learning ushered in the first era of modern AI, changing how AI systems operate by enabling them to learn from historical data to make predictions. In the 2010s, the deep learning revolution launched the second era of modern AI, powered by digital transformation, the availability of exponentially larger datasets, and new types of AI hardware.
Now, the third era of AI has emerged, powered by foundation models. Foundation models are defined by their massive scale, which enables them to unlock insights trapped in unstructured data; by powerful emergent capabilities, which support in-context learning from zero- and few-shot prompts; and by their versatility, with a single model able to solve dozens of different tasks.
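To make the in-context learning idea concrete, here is a minimal, generic sketch of few-shot prompting. It is not tied to any SambaNova API; it simply shows how a handful of labeled examples placed in the prompt can steer a general-purpose text model toward a new task (sentiment classification, in this illustration) without any retraining:

```python
# Few-shot prompting: steer a general-purpose model toward a new task by
# prepending labeled examples to the input. The assembled prompt text is
# illustrative; any text-completion model could consume it.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot sentiment-classification prompt.

    examples: list of (text, label) pairs shown to the model as demonstrations.
    query: the new input the model should label.
    """
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # Leave the final label blank so the model completes it.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The product exceeded my expectations.", "positive"),
    ("Shipping took weeks and support never replied.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Works exactly as advertised.")
print(prompt)
```

A zero-shot prompt is the same idea with an empty `examples` list: the model is asked to perform the task from the instruction alone.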
This combination of scale, versatility, and accuracy enables organizations to consolidate thousands of legacy models into a single foundation model, delivering unprecedented value for organizations and enterprises, including:
- Fundamentally transforming call center operations to drastically reduce average handle time, allowing call center representatives to focus on customer service, not administrative tasks
- Addressing organization-wide compliance at scale with greater accuracy than was previously possible
- Enabling energy organizations to analyze hundreds of cubic kilometers of 3D image data to identify multi-billion-dollar discoveries with greater speed and accuracy
However, despite this impressive capability and business value potential, foundation models are also defined by their cost, complexity, and massive size, often hundreds of billions or even trillions of parameters. These challenges make it impractical for most organizations to train and deploy these models themselves.
To overcome these challenges and realize the benefits of state-of-the-art foundation models, organizations need to invest in AI technology that delivers value and innovation across the full AI stack: hardware, software, systems, and even pre-trained models.
Today, at the 2022 AI Hardware Summit, SambaNova announced several new capabilities across our fully integrated foundation model platform, delivering new innovations at each layer of the AI stack to provide the “AI backbone” for the coming decade of AI innovation.
Highlights of the announcement included:
- New DataScale® SN30 system:
  - Better Performance: World-record GPT training, 6x faster than DGX A100 systems (GPT 13B)
  - New Processor: An enhanced Cardinal SN30™ Reconfigurable Dataflow Unit™
  - Support for the Largest Models: Terabytes of memory, 12.8x more memory capacity than DGX A100
  - Enhanced Full-Stack Support: Additional deployment automation and ease-of-scale capabilities
  - Customer Value-Based Pricing: Subscription pricing
To learn more about these exciting announcements, see our announcement page on the SambaNova website.
We believe that foundation models will continue to drive business transformation and value, and we look forward to sharing more exciting announcements in the future.