Last week, SambaNova’s Co-founder and Chief Technologist Kunle Olukotun connected with Alex Ratner, CEO of Snorkel, to discuss what he sees as the most exciting trends (and challenges) for generative AI in the enterprise. They covered a number of interesting topics, including how generative AI and foundation models have changed what it means to develop AI, why “good enough” isn’t enough for the enterprise when it comes to generative AI, and how SambaNova and Snorkel are partnering to solve the challenge of adapting and scalably deploying generative AI in the enterprise.
Below are three of my favorite insights from Kunle’s conversation with Alex. You can also watch the full discussion in the video above.
Generative AI and foundation models have shifted AI development from being model-centric to becoming data-centric
During the conversation, Alex made the point that generative AI and foundation models have changed what it means to do AI development, shifting the focus from being model-centric to being data-centric. Because these models can be thought of as “foundations” to build upon, data is no longer just an obstacle to get right before development can begin; it has become the primary lever for developing a model. For enterprises, this means they can use their own highly bespoke and complex data to adapt generative AI models so they are highly optimized for their specific business needs and workflows.
When it comes to generative AI in the enterprise, “good enough” just isn’t enough
Alex perfectly summarized the difference between consumer and enterprise AI: “When it is a research project on the internet you can get away with a lot more than you can when you are a bank or a hospital system or a government agency.” In the enterprise, generative AI needs to be accurate enough to be deployed in a business-critical production workflow. For this to happen, these models need to be adapted with the company’s own data in its own environment, in a secure and private way. This is a central focus of SambaNova and Snorkel’s partnership to solve the challenge of adapting and scalably deploying generative AI in the enterprise.
Longer sequence length will be critical for applying generative AI and foundation models to solve real world challenges
In simple terms, sequence length refers to the amount of data a generative AI model can process at one time. Why is this important? With a longer sequence length, a generative AI model can analyze and process larger pieces of information in a single pass. In an enterprise, this means the difference between analyzing a short email and understanding a lengthy, complex report. Longer-sequence-length models are also increasingly showing promise for other types of applications, such as genome sequencing.
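To make the idea concrete, here is a minimal sketch of why sequence length matters. The function and names are hypothetical and not from any particular framework; real models count subword tokens (not words), but the effect is the same: any input beyond the model's maximum sequence length must be truncated, so a model with a short context window literally cannot "see" the whole document.

```python
def truncate_to_context(tokens, max_seq_len):
    """Keep only the most recent tokens that fit in the model's context window.

    Anything earlier than the last `max_seq_len` tokens is dropped, which is
    why a short context window loses information from long documents.
    """
    if len(tokens) <= max_seq_len:
        return tokens
    return tokens[-max_seq_len:]


# Hypothetical example: a whitespace split stands in for a real tokenizer.
document = "quarterly revenue grew sharply due to strong enterprise demand".split()

short_context = truncate_to_context(document, max_seq_len=4)   # loses the beginning
long_context = truncate_to_context(document, max_seq_len=1024)  # sees everything
```

With a 4-token window the model only receives the end of the sentence and misses what actually grew; with a long window the full document fits, which is the practical difference between handling a short email and a lengthy report.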