LiquidMetal AI builds high-performance apps on SambaCloud
Users go from concept to production-quality applications in just hours.

LiquidMetal AI empowers developers to build full-stack, agentic applications at scale using its Raindrop framework. Customers use plain English to create production-ready, interactive AI apps in hours, not weeks. The platform includes a global edge AI network with stateless workers, integrates with major cloud providers, and delivers infrastructure capable of supporting 100,000 users within five minutes of launch.
Through an intelligent MCP server, LiquidMetal AI orchestrates AI CLIs, such as Claude Code and Gemini, to transform user specifications into functional applications. The system dynamically selects the optimal model based on performance, cost, and accuracy needs (e.g., prioritizing speed for real-time interactions or cost efficiency for batch processing) while abstracting away the technical complexity.
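The routing idea described above can be sketched as a small selection function: filter models by an accuracy floor, then pick the fastest one for real-time work or the cheapest one for batch work. The model names, latency, cost, and accuracy figures below are illustrative assumptions, not LiquidMetal AI's actual routing logic or real benchmarks:

```python
# Hypothetical model-routing sketch. All model names and numbers are
# made-up illustrations, not real benchmarks or LiquidMetal AI's logic.
MODELS = {
    "small-7b":   {"latency_ms": 120, "cost_per_1k_tokens": 0.02, "accuracy": 0.78},
    "mid-70b":    {"latency_ms": 450, "cost_per_1k_tokens": 0.60, "accuracy": 0.90},
    "large-300b": {"latency_ms": 900, "cost_per_1k_tokens": 2.40, "accuracy": 0.96},
}

def pick_model(min_accuracy, prefer="speed"):
    """Filter models by an accuracy floor, then pick the fastest or cheapest."""
    candidates = {n: m for n, m in MODELS.items() if m["accuracy"] >= min_accuracy}
    key = "latency_ms" if prefer == "speed" else "cost_per_1k_tokens"
    return min(candidates, key=lambda n: candidates[n][key])

# Real-time chat: modest accuracy floor, lowest latency wins.
realtime = pick_model(0.75, prefer="speed")  # -> "small-7b"
# High-stakes batch job: strict accuracy floor, then cheapest.
batch = pick_model(0.95, prefer="cost")      # -> "large-300b"
```

In practice a router like this would also weigh context length, tool-use support, and live load, but the core trade-off is the same three-way balance of speed, cost, and accuracy.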
Challenge:
Supporting diverse customer use cases requires LiquidMetal AI to operate seamlessly across models ranging from 7B to 300B+ parameters. Delays in response directly impact user adoption and revenue, making latency and scalability non-negotiable.
Solution:
LiquidMetal AI partnered with SambaNova to deliver consistent high performance across all model sizes. SambaCloud’s OpenAI-compatible API enables effortless integration of open-source models, ensuring speed and reliability for every application.
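"OpenAI-compatible" means integration is just the familiar chat-completions payload pointed at a different base URL. Below is a minimal sketch using only the Python standard library; the base URL, model name, and API key are placeholders assumed for illustration, so check the SambaCloud documentation for the actual values:

```python
import json
import urllib.request

# Assumed base URL for illustration; verify against the SambaCloud docs.
BASE_URL = "https://api.sambanova.ai/v1"

def build_chat_request(model, prompt, api_key):
    """Build an OpenAI-style chat-completions request (constructed, not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# "DeepSeek-V3" and the key are placeholder values for this sketch.
req = build_chat_request("DeepSeek-V3", "Summarize our launch plan.", "YOUR_API_KEY")
# urllib.request.urlopen(req) would send it; any OpenAI-compatible SDK works too.
```

Because the request shape matches the OpenAI spec, existing OpenAI client libraries can target SambaCloud by overriding the base URL, which is what makes the integration effortless.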
Large, powerful models such as DeepSeek, running on SambaCloud, make it possible for LiquidMetal AI to quickly build production-ready applications from customer requirements. These are fully capable applications, ready for release in only hours.
Once the applications are built, SambaCloud's high-speed inference gives LiquidMetal AI customers the performance they need to power their applications with the best and largest open-source models, meet user expectations, and scale to any demand.
Most models get the right outcome the first time
Build applications that scale to over 100,000 users globally
Raindrop reduces development time from weeks to days
Challenge:
Hume specializes in building the most realistic voice AI models for developers and enterprises. These models are based on LLMs, so they understand both language and a person's voice at the same time. Hume's mission is to bring empathy to AI and to align AI with human well-being. To that end, the speech-LLMs it develops understand both the tone and the meaning of the spoken word. Applications include audio chatbots, customer service, and more.
Hume recently launched the highest-quality speech-LLMs for text-to-speech (Octave) and speech-to-speech (EVI 3). Much of that quality comes from the models' ability to understand language and adjust their tone of voice naturally in response to the input. This enables more natural conversation, which can improve the user experience.
Most voice systems today chain together separate text-to-speech, speech-to-text, transcription, and other models, because specialized models were once better at each individual task. With the latest advances in speech-language models, that is no longer the case. Moreover, each stage in such a pipeline adds latency: human conversational latency is around 200 ms, and anything longer than 1 second sounds less human. Hume AI and SambaNova have worked together to develop a solution that delivers the highest performance at the lowest possible latency.
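The latency argument is simple addition: a cascaded pipeline pays for every stage on every turn, while a unified speech-LLM pays once. The stage timings below are illustrative assumptions to make the arithmetic concrete, not measured numbers from either company:

```python
# Illustrative per-stage latencies in milliseconds for a cascaded voice
# pipeline; these figures are assumptions, not measurements.
cascaded_stages_ms = {
    "speech_to_text": 300,
    "llm_response": 400,
    "text_to_speech": 350,
}
unified_speech_llm_ms = 250  # one model, one inference pass (assumed)

# The cascade's stages sum past the 1-second mark where speech stops
# sounding human; the unified model stays near conversational latency.
pipeline_total_ms = sum(cascaded_stages_ms.values())  # 1050 ms
```

Even with generous per-stage numbers, the sum of three serial stages easily crosses the 1-second threshold, which is why collapsing the pipeline into a single speech-LLM matters.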
Solution:
Hume and SambaNova have worked together to deploy Hume’s speech-language models on SambaCloud, enabling the best speech-to-speech and text-to-speech models in the world to run at conversational latency without any reduction in quality. Together, Hume AI and SambaNova provide enterprises with access to text-to-speech and speech-to-speech APIs with response times on the order of 100 ms to 300 ms, marrying hyperrealistic quality with human-like conversation latency.
For many enterprises, deploying in private environments is critical. To meet this need, Hume and SambaNova provide Hume's text-to-speech and speech-to-speech models through private deployments.
100–300 ms response time
Highest quality speech LLMs
“SambaNova’s been a great partner...very helpful, the support team has been amazing, the sales team has been amazing. Everyone we’ve worked with is a good human and they want to solve real customer problems.”
— Geno Valente, Head of GTM and ENG, LiquidMetal AI