Large language models (LLMs) are good at learning from unstructured data. But a lot of the proprietary value that enterprises hold is locked up inside relational databases, spreadsheets, and other structured file types.
Large enterprises have long used knowledge graphs to map the relationships between their data points, but these graphs are difficult to build and maintain: they demand sustained effort from developers, data engineers, and the subject matter experts who know what the data actually means.
Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. In theory, then, they'd be a great way to help LLMs understand the meaning of corporate data sets: companies could find relevant data to embed into queries more easily and efficiently, and the LLMs themselves would become faster and more accurate.
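To make the idea concrete, here is a minimal, purely illustrative sketch (not from any specific product or the article itself) of what that "connective tissue" looks like: a handful of flat records, a set of typed edges linking them, and a small traversal that gathers related facts to embed into an LLM prompt. All names here (`records`, `edges`, `retrieve_context`) are hypothetical.

```python
# Raw data store: flat records with no explicit relationships between them.
records = {
    "cust_17": {"name": "Acme Corp", "segment": "manufacturing"},
    "order_903": {"total": 48_000, "status": "delayed"},
    "plant_2": {"location": "Ohio", "capacity_pct": 71},
}

# The knowledge-graph layer adds the connective tissue: typed edges
# that say how the records relate to one another.
edges = [
    ("cust_17", "placed", "order_903"),
    ("order_903", "fulfilled_by", "plant_2"),
]

def retrieve_context(entity: str, hops: int = 2) -> list[str]:
    """Walk the graph outward from an entity, collecting related facts
    that could be embedded into an LLM query as context."""
    facts, frontier, seen = [], {entity}, set()
    for _ in range(hops):
        next_frontier = set()
        for src, rel, dst in edges:
            if src in frontier and dst not in seen:
                facts.append(f"{src} --{rel}--> {dst}: {records[dst]}")
                next_frontier.add(dst)
        seen |= frontier
        frontier = next_frontier
    return facts

# Context that would be prepended to a prompt asking about Acme Corp:
for fact in retrieve_context("cust_17"):
    print(fact)
```

The point of the sketch is that the raw records alone say nothing about why a delayed order matters to Acme Corp; the edges are what let a retrieval step surface the delayed order and the constrained plant behind it as relevant context.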