A Composition of Experts

The most flexible, trillion+ parameter model, capable of addressing every use case, with the performance of small models.

Large, trillion-parameter models have incredible accuracy for general knowledge, but struggle to adapt to information specific to an enterprise. Worse, if they are fine-tuned for one application, they can lose accuracy in others. Smaller models can be trained for a particular use case, but cannot address the broad needs of large organizations.

A Composition of Experts model solves this by combining the breadth and depth of knowledge found only in the largest models with the trainability, flexibility, and performance of small models. It is a “best of both worlds” solution, without any of the drawbacks.

Composition of Experts

  • Combines any number of smaller expert models into a single, large model
  • Expert models can be fine-tuned easily and quickly
  • A router model directs each user prompt to the appropriate expert model and enables role- and rule-based access control
  • Queries only the relevant expert model for inference, reducing costs by 10X
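The routing step above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`CoERouter`, keyword-based `route`), not SambaNova's implementation: a lightweight router picks one expert per prompt, enforces role-based access control before dispatch, and invokes only that expert, which is why inference cost scales with a single small model rather than the whole composition.

```python
# Hypothetical sketch of a Composition of Experts router.
# A real system would use a learned router model; here a keyword
# match stands in so the control flow is easy to follow.

from dataclasses import dataclass, field
from typing import Callable, Dict, Set

Expert = Callable[[str], str]  # each expert maps a prompt to a response


@dataclass
class CoERouter:
    experts: Dict[str, Expert]
    # role-based access control: which roles may reach which expert
    acl: Dict[str, Set[str]] = field(default_factory=dict)

    def route(self, prompt: str) -> str:
        """Pick an expert by keyword match (stand-in for a learned router)."""
        lowered = prompt.lower()
        for name in self.experts:
            if name in lowered:
                return name
        return "general"

    def query(self, prompt: str, role: str) -> str:
        name = self.route(prompt)
        if role not in self.acl.get(name, set()):
            return f"access denied: role '{role}' cannot use expert '{name}'"
        # Only the selected expert runs; the others stay idle.
        return self.experts[name](prompt)


router = CoERouter(
    experts={
        "legal": lambda p: "[legal expert] " + p,
        "finance": lambda p: "[finance expert] " + p,
        "general": lambda p: "[general expert] " + p,
    },
    acl={
        "legal": {"counsel"},
        "finance": {"analyst"},
        "general": {"counsel", "analyst"},
    },
)

print(router.query("Summarize this legal contract", role="counsel"))
print(router.query("Summarize this legal contract", role="analyst"))
```

The access check happens at routing time, before any expert is invoked, which is how a router can double as the enforcement point for role- and rule-based policies.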

Open Source

  • Take advantage of the latest open source models
  • Eliminate concerns about data privacy and ownership

Model Ownership

  • SambaNova customers always own the models fine-tuned on their data
  • Increase the ability to achieve compliance
  • Ensure data is always secure and private


Fine Tuning

  • Fine-tune models on private data quickly and easily, something that is cost-prohibitive for large, monolithic models
  • Perform continuous fine-tuning so models are always up to date

Ready to see more?

Schedule a meeting to see how SambaNova can help advance your AI initiatives.