Given a piece of text, output a positive, negative, or neutral sentiment label.
Large Language Models (LLMs) like BERT and GPT are designed to help computers resolve ambiguous language in text by using the surrounding text to establish context. However, most open-source models are generic in nature and lack domain specificity. For instance, the BERT framework was pre-trained on text from Wikipedia and can be fine-tuned for various tasks, including sentiment analysis.
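As a minimal sketch of what such fine-tuning looks like, the snippet below attaches a three-way classification head to a general-purpose BERT checkpoint using the Hugging Face transformers library. The toy sentences, label set, and hyperparameters are illustrative assumptions, not the actual GPT_Banking training setup; a real domain model would be trained on a large labeled in-domain corpus.

```python
# Sketch: fine-tuning a generic BERT checkpoint for 3-way sentiment.
# Toy data and hyperparameters are illustrative only.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["negative", "neutral", "positive"]

# Tiny illustrative dataset; a real run would use labeled banking text.
texts = [
    "The bank reported record quarterly profits.",
    "Branch hours remain unchanged this week.",
    "Customers faced repeated outages in online banking.",
]
labels = [2, 1, 0]  # indices into LABELS

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

class SentimentDataset(Dataset):
    """Wraps tokenized text and labels for the Trainer API."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment-sketch",
                           num_train_epochs=3),
    train_dataset=SentimentDataset(texts, labels),
)
trainer.train()
```

Because the pre-training corpus here is general web and encyclopedia text, the resulting model tends to miss sentiment cues that are specific to financial language, which motivates the example below.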
The selected sentence has a positive sentiment, but general-purpose BERT predicted negative.
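One way to reproduce this kind of mismatch is to probe a general-purpose sentiment model on financial text. The sketch below uses the default Hugging Face sentiment-analysis pipeline; the example sentence is hypothetical, the default checkpoint is binary (POSITIVE/NEGATIVE rather than three-way), and actual predictions may vary by model version.

```python
# Probe a generic, non-banking sentiment model on financial text.
# The example sentence is hypothetical; predictions may vary.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # general-purpose checkpoint
print(classifier("The company beat earnings estimates despite rate headwinds."))
# A domain-tuned model should read this as positive; a generic one may not,
# since "beat" and "headwinds" carry different weight in financial language.
```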
Domain-Specific Workflow
We propose SambaNova GPT_Banking, a purpose-built, domain-specific language model for the banking and financial services industry.