Insights & Information

Find what you need to accelerate your AI journey

Blog

SambaNova Cloud: The fastest inference and the best models - for free

SambaNova is opening up the full spectrum of Llama models for developers to create the next wave of AI innovation.

Blog

Advanced AI Apps Need Fast Inference. SambaNova Cloud Delivers It

By improving inference performance, SambaNova has unlocked the full potential of Llama 3.1 405B and enabled...

Blog

Why SambaNova's SN40L Chip Is the Best for Inference

Comparing the end-user inference performance of SambaNova's technology against that of Groq and Cerebras.

Blog

SubgoalXL: Pushing the Boundaries of LLMs in Formal Theorem Proving

SubgoalXL represents a significant step forward in the field of AI-powered theorem proving.

Blog

SambaNova Holds Speed Record on Llama 3.1 405B - 4X faster than the rest

Today, we’ve set a world performance record of 114 tokens per second on Llama 3.1 405B, independently verified by...

Blog

Three Predictions for the Upcoming Llama 3 405B Announcement

Three predictions on how Llama 3 405B could reshape the landscape for developers engaged in AI and machine learning.

Blog

Typhoon model adds Thai language to Samba-1

With the inclusion of the Typhoon Thai LLM, Samba-1 can now deliver generative AI capabilities in the Thai...

Blog

Does reduced precision hurt? A bit about losing bits.

Recent work has highlighted how quantization of Llama 3 models can lead to non-negligible decay in model...

Blog

SambaNova CEO explains why only one AI company wants a monopoly

Rodrigo Liang and veteran tech journalist Don Clark of The New York Times discussed how a full-stack approach to AI...