Blog


June 20, 2024

Does reduced precision hurt? A bit about losing bits.

SambaNova and Groq recently achieved 1,000 tokens per second on their inference systems for Meta’s Llama 3 8B Instruct...

May 13, 2024

Introducing Fugaku-LLM in Composition of Experts

The Composition of Experts (CoE) architecture that the Samba-1 model is based upon has many features that make it...

May 6, 2024

Sovereign AI

Artificial intelligence has become vital to nations, governments, and large corporations. Many of these large...

May 6, 2024

NAIRR: Government-Funded AI Research Resources

Artificial intelligence (AI) is driving the next generation of technological innovation and scientific discovery....

May 1, 2024

Tokens Per Second is Not All You Need

In the fast-paced world of LLM inference, there's been a growing buzz around achieving high tokens per second...

April 11, 2024

Samba-CoE v0.3: The Power of Routing ML Models at Scale

*A Twitter user reported that CausalLM-34b-beta is suspected of MMLU contamination. On further investigation we do...

April 10, 2024

Responsible AI

Generative AI will be the defining technology of this century, fundamentally reshaping how businesses, governments,...

April 8, 2024

SambaLingo hits 15,000+ downloads, now integrated with Samba-CoE-v0.2

SambaLingo, our cutting-edge multilingual language expert series, surpassed 15k downloads and is now integrated into...

March 27, 2024

AI Power: Accurate Models at Blazing Speeds

In late February, we announced Samba-1, a CoE architecture that represents a paradigm shift and will ultimately become the...
