Why the Defense Innovation Unit Considers AI Critical to the Public and Private Sector

Posted by SambaNova Systems on June 29, 2022

This is the sixth in a series of blog posts on the “AI is here.” podcast. Each post in the series highlights insights from a specific industry leader as they describe how their organization is deriving significant value from AI today.

In this edition of the “AI is here.” podcast, Dan Faggella, Founder and CEO of market research and publishing company Emerj, speaks with Jared Dunnmon, Technical Director for Artificial Intelligence/Machine Learning at the Defense Innovation Unit. Jared and Dan discuss how AI has changed the way data can be analyzed, delivering insights that were never before possible.

Dunnmon describes how using deep learning to analyze vast amounts of Publicly Available Information (PAI) and Commercially Available Information (CAI) has transformed how analysts make connections across huge volumes of data to obtain insights that would not otherwise be possible. He discusses the challenges of analyzing the massive amounts of data being created, which include text-based documents as well as images, videos, and more exotic content such as LiDAR and open-source radar. This is a challenge both in the volume of data that must be processed and in the magnitude of compute required to analyze it.

Dunnmon uses supply chain management as an example. Consider everything that must happen for a single complex product to be built, then scale that across hundreds or even thousands of products: it becomes almost impossible for a human to make all of the possible connections. The issue is that in the modern world we have gone from analyzing hundreds or thousands of documents to millions, or in some cases billions, of documents, images, and other media.

Dunnmon says that moving to an AI-based analysis solves these challenges.

They discuss how an analysis can be performed on a particular company. Using traditional methods, someone would have to read all the documents they could find on the company and, as a human, work out what the company does, who it transacts with, which suppliers it interacts with, and more before drawing a conclusion.

He gives the example where Company A is a supplier of a component that goes into a device. Company B suddenly invests in Company A. Company B may have a problematic relationship with Company C, but this is revealed only by making connections across multiple, disparate documents that a human would be unlikely to see. Neural networks can be trained to identify these kinds of anomalies, which is a fundamentally more scalable method of performing analysis than was possible before.
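
To make the idea concrete, here is a minimal sketch of that kind of anomaly flagging, assuming a relationship graph has already been extracted from documents. The companies, relation labels, and flagging rule are all hypothetical; this is not DIU's actual system.

```python
# Hypothetical sketch: companies as nodes, extracted relationships as edges.
# Flag investment edges whose investor can reach a known-problematic entity.
import networkx as nx

G = nx.DiGraph()
G.add_edge("Company B", "Company A", relation="invests_in")     # from a filing
G.add_edge("Company A", "Device Maker", relation="supplies")    # from trade data
G.add_edge("Company B", "Company C", relation="joint_venture")  # from a news story

flagged = {"Company C"}  # entities with known problematic relationships

for investor, target, data in G.edges(data=True):
    if data["relation"] != "invests_in":
        continue
    # Everything the investor is connected to, however indirectly.
    reachable = nx.descendants(G, investor)
    if reachable & flagged:
        print(f"Review: {investor} invests in {target}; ties to {reachable & flagged}")
```

A real system would learn which connection patterns are anomalous rather than hard-coding a rule, but the graph-traversal structure of the check is the same.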

Machine learning is beneficial in this use case because it extracts all the relevant data and performs an analysis that enables the human to make more informed decisions. The analysis moves up a level: instead of a person researching a handful of documents, a machine learning model can look for relationships at scale, in some cases across billions of documents. The model can then produce a data object that is human parsable and efficient, enabling the analyst to ask a different set of questions.

According to Dunnmon, there are three levels of ML models that are built (a sketch of how they chain together follows the list):

  1. Models that extract entities from text, images, or other sources
  2. Relationship extraction models
  3. Graph analysis models
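
Here is a minimal sketch of how these three levels might chain together, using spaCy for entity extraction, naive sentence co-occurrence as a stand-in for a trained relationship-extraction model, and networkx for graph analysis. The documents and logic are illustrative assumptions, not DIU's actual pipeline.

```python
# Illustrative pipeline only; a production system would use trained relation
# extractors, not co-occurrence. Requires: pip install spacy networkx, then
# python -m spacy download en_core_web_sm
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

docs = [
    "Acme Robotics signed a supply agreement with Orbital Devices.",
    "Northwind Capital announced an investment in Acme Robotics.",
]

G = nx.Graph()
for text in docs:
    doc = nlp(text)
    # Level 1: entity extraction - pull organizations out of raw text.
    orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
    # Level 2: relationship extraction - here, naive co-occurrence in a document.
    for a, b in itertools.combinations(sorted(orgs), 2):
        G.add_edge(a, b)

# Level 3: graph analysis - rank entities by how connected they are.
print(nx.degree_centrality(G))
```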

He then discusses why organizations need to invest in the right infrastructure to benefit from each of these models: for each one, organizations must constantly perform quality control to make sure their assumptions about the data are still true.
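
One common form of that quality control is checking for data drift. Here is a minimal sketch, with synthetic data and an arbitrary threshold standing in for whatever a team actually monitors:

```python
# Hypothetical drift check: compare an incoming batch of feature values against
# the training-time baseline and alert when the distribution has shifted.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # feature values seen at training time
incoming = rng.normal(0.4, 1.0, 1_000)  # feature values from today's documents

stat, p_value = ks_2samp(baseline, incoming)
if p_value < 0.01:  # illustrative threshold, not a recommendation
    print(f"Drift detected (KS statistic {stat:.3f}); re-validate the model.")
```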

Organizations will want machine learning models that are specific to their business. He talks about the challenges the vast majority of organizations face in building out the needed infrastructure on their own. They need to work with partners who can help them build the model and also perform continuous testing, integration, and deployment to ensure that the model is operating in the space it was designed for. Deploying models at scale is not something that can be done manually, and it is critical to developing a machine learning enabled process.
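
As one illustration of what that continuous testing might look like, here is a sketch of an automated pre-deployment gate; the model, holdout data, and accuracy floor are all made up for the example:

```python
# Hypothetical CI gate: every retrained model must clear a holdout-accuracy
# floor before it is allowed to deploy.
class StubModel:
    """Stand-in for a retrained model loaded from a model registry."""
    def predict(self, text: str) -> str:
        return "positive" if "good" in text else "negative"

def deployment_gate(model, holdout, floor=0.90) -> bool:
    correct = sum(model.predict(x) == y for x, y in holdout)
    return correct / len(holdout) >= floor

holdout = [("good product", "positive"), ("bad service", "negative")]
assert deployment_gate(StubModel(), holdout), "accuracy regressed; block deployment"
print("Gate passed; model cleared for deployment.")
```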

Finally, they discuss what needs to be done to enable human teams to implement a machine learning workflow, including how teams should be trained and resourced to effectively optimize the end-to-end workflow.

AI is here. Discover how you can fundamentally transform what is possible for your organization with the power of AI in weeks, not years. Powered by the industry’s most powerful full-stack deep learning platform, SambaNova is the industry leader for GPT and large language models, delivering the highest accuracy and performance while dramatically reducing the need for significant investment in infrastructure, personnel, and other resources.

Topics: business

Editor

AI is here. With SambaNova, customers are deploying the power of AI and deep learning in weeks rather than years to meet the demands of the AI-enabled world. SambaNova’s flagship offering, Dataflow-as-a-Service™, is a complete solution purpose-built for AI and deep learning that overcomes the limitations of legacy technology to power the large and complex models that enable customers to discover new opportunities, unlock new revenue and boost operational efficiency. For more information please visit us at sambanova.ai or contact us at info@sambanova.ai. Follow SambaNova Systems on LinkedIn.