In this blog post, we will demonstrate a practical application of the newly launched SambaNova Cloud: powering Continue, the leading open-source AI code assistant, to build custom autocomplete and chat experiences within VS Code or JetBrains.
SambaNova Cloud is the only provider to offer the best open-source model, Llama 3.1 405B, at a world record speed of over 100 tokens/s. Sign up today.
SambaNova Cloud makes the best open-source models available to all developers, unlocking new applications that demand both model quality and speed. In our previous blog post, we demonstrated how function calling enables LLMs to use tools to carry out users' requests. That demo, powered by Meta’s Llama 3.1 Instruct model, is a direct challenge to OpenAI’s recently released o1 model and a significant step forward in the race for enterprise AI infrastructure.
Large language models are very effective at writing, editing, and refactoring code from human instructions. LLMs have been shown to alleviate the cognitive burden of repetitive and uninteresting tasks, as demonstrated in this article about developer productivity.
Code is verbose: it spells out the detailed implementation of a solution. A simple three-word instruction like “sort the input” can translate into a quicksort algorithm of hundreds of characters, and after adding unit tests and documentation, the resulting code might reach a thousand characters. Coding at the speed of human thought therefore requires a system capable of generating thousands of characters per second. SambaNova Cloud offers the fastest 405B inference, combining model quality with speed, to power an efficient coding assistant.
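As a rough illustration of that expansion, the three-word instruction “sort the input” might become something like:

```python
def quicksort(items):
    """Return a new list with items sorted in ascending order."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    # Partition around the pivot, then sort each side recursively.
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```

Roughly 400 characters already, before any tests or docstrings beyond the one above.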
Ready to start coding on SambaNova Cloud with a useful application? It's easy! Our integration with the leading open-source AI code assistant, Continue, allows you to get started for free.
In Continue's config.json, add a SambaNova model entry (the provider and field names below follow Continue's configuration format; replace <your-API-key> with your key):

{
  "models": [
    {
      "title": "SambaNova Llama 3.1 405B",
      "provider": "sambanova",
      "model": "llama3-405b",
      "apiKey": "<your-API-key>"
    }
  ]
}
Continue suggests autocompletions as you type, following a code snippet or code comments. You can easily modify the autocomplete configuration in the window shown below, opened by clicking the Continue button at the bottom right of VS Code.
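If you want autocomplete to use a separate model, Continue supports a tabAutocompleteModel entry in config.json; a sketch, assuming the same provider and model identifier as the chat setup (the title here is illustrative):

```json
{
  "tabAutocompleteModel": {
    "title": "SambaNova autocomplete",
    "provider": "sambanova",
    "model": "llama3-405b",
    "apiKey": "<your-API-key>"
  }
}
```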
You can interact with the LLM directly by starting a session in a new extension window and asking the model your question.
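Under the hood, such a chat session boils down to a chat-completions request. Here is a minimal sketch, assuming SambaNova Cloud's OpenAI-compatible endpoint (the URL is an assumption; the model identifier follows the setup snippet):

```python
import json
import os
import urllib.request

# Assumed endpoint for SambaNova Cloud's OpenAI-compatible API.
API_URL = "https://api.sambanova.ai/v1/chat/completions"

def build_chat_request(prompt, model="llama3-405b"):
    """Build the JSON payload for a single-turn chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload, api_key):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("What does this regex match: ^\\d{4}-\\d{2}$")
# send(payload, os.environ["SAMBANOVA_API_KEY"])  # requires a valid API key
```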
You can ask the LLM about a selected snippet of code: select your text and press ⌘+L. This will open a Continue session in the top bar with the code snippet as context. From there, you can ask anything related to your code! You can also use the @ decorator in the main Continue window to add a defined function from your codebase as context.
You can ask your LLM to modify your code, add functionality, documentation, and more. First, select the code snippet you want the model to modify. Then, press ⌘+I. This will open an input bar at the top of your IDE. Write your desired changes and press Enter/Submit. The model will generate the modified code, and you can edit, accept, or reject the proposed changes.
You can ask the model to inspect your terminal error outputs to explain the error and give you some suggestions. After getting an error in your terminal, press ⌘+Shift+R. This will open a Continue session in a new extension window with the error explanation!
You can execute your custom commands/prompts by selecting a code snippet and pressing ⌘+L to open a new Continue session. Next, write /<yourCommand> followed by any further instructions.
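Custom commands are defined in Continue's config.json. A sketch of a hypothetical /check command (the name, description, and prompt here are illustrative; {{{ input }}} is Continue's placeholder for the selected code):

```json
{
  "customCommands": [
    {
      "name": "check",
      "description": "Review the selected code for bugs",
      "prompt": "{{{ input }}}\n\nReview the code above for bugs and suggest fixes."
    }
  ]
}
```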
By integrating SambaNova Cloud's state-of-the-art Llama 3.1 405B model with Continue, we are transforming the way developers engage with their code, as demonstrated in VS Code.
This powerful synergy delivers exceptional speed and accuracy to autocomplete, chat, and code modification features, establishing a new benchmark in open-source AI-assisted programming. Ready to elevate your coding efficiency and precision to match the speed of your thoughts and beyond? Sign up today to take your development workflow to the next level.