
Insights & Information

Find what you need to accelerate your AI journey

Blog

Does reduced precision hurt? A bit about losing bits.

Recent work has highlighted how quantization of the latest LLaMa 3 models can lead to non-negligible decay in model...

Blog

NAIRR: Government-funded AI Research Resources

The NAIRR pilot, in partnership with SambaNova, provides generative AI platforms for groundbreaking academic research...

Blog

Tokens Per Second is Not All You Need

In this post, we explore why tokens per second doesn't paint the full picture of enterprise LLM inference...

Blog

Samba-CoE v0.3: The Power of Routing ML Models at Scale

Samba-CoE-v0.3, our latest Composition of Experts, surpasses DBRX Instruct 132B and Grok-1 314B on the OpenLLM...

Blog

SambaLingo hits 15,000+ downloads, now integrated with Samba-CoE-v0.2

SambaLingo has been downloaded over 15,000 times and has achieved a remarkable 280 tokens/s inference...

Blog

SambaNova Delivers Accurate Models At Blazing Speed

Samba-CoE v0.2 is climbing the AlpacaEval leaderboard, outperforming all of the latest open-source models.

Blog

Using Mixed Precision on RDUs

SambaFlow 1.18 introduces support for mixed precision on RDUs, streamlining the experience for model developers and...

Blog

Benchmarking Samba-1

Benchmarking Samba-1 with the EGAI benchmark - a comprehensive collection of widely adopted benchmarks sourced from...

Blog

Samba-CoE v0.1 - Unlocking the power of routing to build a Composition of Experts

We're thrilled to unveil Samba-CoE-v0.1, a scaled-down version of Samba-1, our latest breakthrough model that...