CUSTOMER SPOTLIGHT

Hume AI delivers instant speech processing with SambaCloud


Alan Cowen, CEO of Hume AI, shares the company's vision for end-to-end speech LLMs and the importance of latency in delivering realistic outcomes. He expands on the challenges the industry faces today, as well as potential applications and trust concerns of speech AI in the future.


“SambaCloud enables LLMs to be run more efficiently because the speech can be decoded faster as the model gains more predictive capabilities, resulting in larger batches at less cost.”

 

— Alan Cowen, CEO, Hume AI

Synthesizing the human voice with AI


CASE STUDY

Realistic voice AI in real time

With text-to-speech and speech-to-speech APIs that respond on the order of 100 ms to 300 ms, Hume AI marries hyperrealistic quality with human-like conversational latency.

Learn more →
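A simple way to sanity-check response times like these is to measure time to first audio byte against a streaming text-to-speech endpoint. The sketch below is a generic illustration only; the endpoint URL, header names, and payload fields are hypothetical placeholders, not Hume's actual API.

    # Hypothetical sketch: time to first audio chunk from a streaming TTS API.
    # API_URL, API_KEY, and the JSON payload shape are placeholders.
    import time
    import requests

    API_URL = "https://api.example.com/v1/tts/stream"  # placeholder endpoint
    API_KEY = "YOUR_API_KEY"                           # placeholder credential

    def measure_tts_latency(text: str) -> float:
        """Return seconds from request start until the first audio chunk arrives."""
        start = time.perf_counter()
        with requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"text": text},
            stream=True,   # stream so we time the first chunk, not the full clip
            timeout=10,
        ) as resp:
            resp.raise_for_status()
            for chunk in resp.iter_content(chunk_size=4096):
                if chunk:  # first non-empty audio chunk
                    return time.perf_counter() - start
        raise RuntimeError("no audio received")

    if __name__ == "__main__":
        latency = measure_tts_latency("Hello there, how can I help you today?")
        print(f"time to first audio: {latency * 1000:.0f} ms")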

 

VIDEO

Meeting the needs of voice AI

Quality voice AI requires scalability, low cost, and low latency. Enterprises also value the private deployments that SambaNova offers.

 

DEMO

Hume LLM using DeepSeek R1 on SambaCloud 

The EVI 3 API creates the most realistic voice AI interactions, with ultrafast inference enabling optimization in real time.


EXPLORE

The Hume Playground

Powered by SambaNova, Hume offers the world's most realistic and instructible speech-to-speech foundation model. Try Hume now →

SambaNova customers push the limits of AI

SambaNova customers talk AI innovation

 

Maitai founder and CEO Christian Del Santo discusses the value of fine-tuning and faster inference for accuracy.

 

This demo of Blackbox CyberCoder, powered by SambaNova, illustrates how software coding is transformed.

 

Aion Labs shares how SambaNova contributes to LLM performance improvements.

 

OpenRouter CEO Chris Clark discusses the important role SambaNova plays in reducing total time to last token for larger prompts.