With the rapid growth in artificial intelligence (AI) and machine learning (ML) applications, the demand for compute power is expanding at an exponential pace. Even graphics processing units (GPUs), widely adopted as AI/ML compute workhorses, can struggle to keep up with increasing processing demands.
Fortunately, ground-breaking solutions are just around the corner. At Samsung, we’re developing the next generation of DRAM, DDR5, as a key technology for the next generation of data processing.
SambaNova Systems, a leader in the AI/ML space, provides vitally needed AI/ML-centric computing solutions for companies and organizations of all sizes. Their platform offerings, DataScale and Dataflow-as-a-Service, are expected to leverage the power of Samsung’s DDR5 to deliver the performance required for AI/ML applications.
I recently had the unique opportunity to speak with four key players at SambaNova Systems. SambaNova Co-founder and CEO Rodrigo Liang, Co-founder and Chief Technologist Kunle Olukotun, VP of Product Management Marshall Choy, and Chief Architect Sumti Jairath are true visionaries in the field.
I asked them how SambaNova’s innovative systems will leverage the power of DDR5 to allow AI and ML to reach their full potential.
Sarah Peach (SP):
What’s going on in the world of big data computing today that makes SambaNova’s products necessary?
Rodrigo Liang (RL):
Turning data into something truly valuable—that’s our generation’s official Gold Rush. But in order to truly realize the value of their data, companies and organizations need a computing system that is easy to use, and that allows them to efficiently access the data being collected. That’s where SambaNova comes in; we help to connect the demand for data to the supply of data.
Kunle Olukotun (KO):
We have developed next-generation computation based on dataflow architecture. Dataflow computing is well-suited to tackle problems in the AI/ML space, so we set out to create the world’s first scalable dataflow system.
RL:
We saw a need for a computing platform for AI that’s easy to use and provides capabilities not covered by existing solutions. From natural language processing (NLP) to high resolution video, there is a growing need for a high-capacity AI/ML platform. We figured out how to do that type of high-capacity compute more efficiently.
SP:
Many AI/ML platforms rely on GPUs for efficient compute. What’s different about SambaNova’s approach?
Sumti Jairath (SJ):
GPUs are essentially hardware built for video games, and AI/ML clearly needs more powerful platforms. Our higher-capacity platform starts to unlock the possibility of higher resolution images. This matters particularly in areas like medical imaging: we can push far beyond 4K in MRIs, for instance. Our platforms also enable state-of-the-art NLP models and recommendation engines.
RL:
In the past, in order to maximize the performance of GPUs, manufacturers would use HBM [High Bandwidth Memory] rather than DDR. While HBM provided greater bandwidth than DDR, the tradeoff was that it had a lower memory capacity. But our solutions combine high-capacity DDR with high bandwidth.
SP:
Samsung’s memory and storage technologies are helping companies like SambaNova make AI/ML a reality today. With DDR5 DRAM becoming available, how do you see this technology improving SambaNova’s offerings to its current and potential customer base?
RL:
Demand for compute power is outpacing supply. At SambaNova, we’re always looking for new technology solutions to try and bridge this gap between demand and supply. DDR5 is one such solution. DDR5 enables higher accuracy when running machine learning models, reduces the burden of labeling image data for ML, and retains the information in high resolution images. And with NLP, the greater memory capacity of DDR5 means more parameters and vastly improved performance.
SJ:
DDR5 influences two key factors: dataflow architecture and memory capacity. Higher resolution models require much more memory capacity, and DDR5 provides it.
SP:
What benefits do your high-capacity AI platforms deliver to end users, and to society?
RL:
For the everyday mobile or home user, more memory capacity from DDR means better recommendation engines in apps and on websites. More memory gives developers the ability to build larger embeddings. In retail, for instance, this will allow e-commerce sites to recommend related items more accurately, leading to better consumer experiences while increasing revenue for retailers.
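To see why embedding size drives memory demand, here is a back-of-the-envelope sketch. The numbers are purely illustrative (not SambaNova's or Samsung's): a recommendation model's embedding table grows with the number of catalog items times the embedding dimension, and even a modest configuration quickly exceeds the capacity of bandwidth-optimized memory on a single accelerator.

```python
def embedding_table_bytes(num_items: int, dim: int, bytes_per_param: int = 4) -> int:
    """Memory needed to hold one embedding table (fp32 parameters by default)."""
    return num_items * dim * bytes_per_param

# Hypothetical example: 100 million catalog items, 128-dimensional fp32 embeddings.
size_bytes = embedding_table_bytes(100_000_000, 128)
print(f"{size_bytes / 1e9:.1f} GB")  # prints "51.2 GB"
```

A single table of this illustrative size already outstrips the HBM attached to many individual accelerators, which is why higher-capacity DRAM matters for large embedding workloads.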
Marshall Choy (MC):
The power that AI and ML brings to medical research is accelerating drug research and drug trials processes, which could lead to the discovery of new drugs and antiviral compounds. AI is already helping to accelerate Covid-19 research by speeding up data analysis and modeling. The Lawrence Livermore Cancer Moonshot project is another example of AI being leveraged in medical research.
SJ:
Higher resolution image analysis will lead to improved cancer detection. Reading scan imagery, for example, might require high resolution over a wide area. By enabling this type of higher resolution, SambaNova’s efforts could lead to earlier detection and better patient outcomes.
SP:
Do SambaNova’s two product offerings make AI easier to use?
RL:
Yes. We do this by increasing the efficiency and ease of use of this kind of platform, and by extending access to the technology to smaller firms in addition to the big players.
Our first product, DataScale, is an integrated software and hardware systems platform that allows people to run cutting-edge AI applications at scale. DataScale’s software-defined-hardware approach delivers a high degree of efficiency across applications, from training and inference to data analytics and high-performance computing.
Our second offering is how we’ve democratized management of AI/ML modeling. Dataflow-as-a-Service is a subscription service that we developed based on customer requests. AI and ML are challenging, and most companies don’t have the expertise or manpower to manage AI/ML models at scale. Dataflow-as-a-Service takes SambaNova’s expertise and learning and adds that to the client’s team. The service allows our clients to run cutting-edge AI/ML applications without a large in-house team.
SP:
What makes you excited about the future with regards to AI? Next year, five years out, ten years out?
RL:
The transition to AI will be bigger than the advent of the Internet. It will touch every company in every industry, and create all sorts of opportunities. Yet when it comes to adopting AI, we’re basically in the first third of an inning of an extra-inning game. We can only imagine what the tech will look like in 5-10 years. By then, the majority of computers in datacenters will be running AI/ML tech like our products.
KO:
The center of gravity of AI/ML today is the big companies. But our technology will allow much smaller companies—and smaller systems—to run AI/ML.
RL:
Widespread technological adoption only happens when you make the tech easy for everyone to use. So this is our way of democratizing technology like AI/ML. Rather than exposing all of the complex technology to the end user, SambaNova delivers technology that the end user can plug in and learn within an hour.
SambaNova’s DataScale and Dataflow-as-a-Service provide vitally needed AI/ML-centric computing solutions for companies and organizations of all sizes. In addition, they make the technology easy to use, particularly for firms without large in-house teams. Their systems will leverage the power of DDR5 to provide the compute power required to realize the full potential of AI/ML. In so doing, they will enable amazing new technologies that will positively impact our lives and help shape our future.
Learn more about Samsung DDR5 technology at www.samsungsemiconductor-us.com/ddr5.