CUSTOMER SPOTLIGHT

Hume AI delivers instant speech processing with SambaCloud


Alan Cowen, CEO of Hume AI, shares his vision for end-to-end speech LLMs and the importance of latency in delivering realistic outcomes. He expands on the challenges the industry faces today, as well as the potential applications and trust concerns of speech AI in the future.


“SambaCloud enables LLMs to be run more efficiently because the speech can be decoded faster as the model gains more predictive capabilities, resulting in larger batches at less cost.”

 

— Alan Cowen, CEO of Hume AI

Synthesizing the human voice with AI


CASE STUDY

Realistic voice AI in real time

With text-to-speech and speech-to-speech APIs that respond on the order of 100 ms to 300 ms, Hume AI marries hyperrealistic quality with human-like conversational latency.

Learn more →
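
To put those latency figures in context, the short Python sketch below times a single synthesis request end to end. The endpoint, payload, and authentication scheme are illustrative placeholders only, not Hume's actual API; consult Hume's documentation for real request formats. The point is simply how round-trip latency against a 100 ms to 300 ms target would be measured.

import time
import requests  # any HTTP client works; requests is assumed here

# Hypothetical endpoint and credentials for illustration -- not Hume's real API.
TTS_URL = "https://api.example.com/v0/tts"
API_KEY = "YOUR_API_KEY"

def measure_tts_latency(text: str) -> float:
    """Send one synthesis request and return wall-clock latency in milliseconds."""
    start = time.perf_counter()
    response = requests.post(
        TTS_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    latency = measure_tts_latency("Hello, how can I help you today?")
    # Conversational voice AI generally targets a few hundred milliseconds
    # of round-trip latency; the case study cites roughly 100-300 ms.
    print(f"Round-trip synthesis latency: {latency:.0f} ms")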

 

VIDEO

Meeting the needs of voice AI

Quality voice AI requires scalability, low cost, and low latency. Enterprises also value the private deployments that SambaNova offers.


DEMO

The Hume Playground

Powered by SambaNova, Hume offers the world's most realistic and instructible speech-to-speech foundation model.

Try Hume now →

SambaNova customers push the limits of AI

SambaNova customers talk innovation at Open Source Live

 

Databricks VP of AI, Naveen Rao, talks about the future of open source and AI infrastructure.

 

Rodrigo Liang, CEO of SambaNova, explores compute capacity as a scalability factor for AI with Tony Kim, Head of the Technology Team at BlackRock.

 

Milos Rusic, CEO and co-founder of Deepset, and Annie Weckesser, CMO of SambaNova, discuss the evolution of open source and AI.

 

Stefano Maffulli, executive director of the Open Source Initiative, explains to Sarah Novotny the biggest misconception about open source AI and safety.