A new era of enterprise natural language search has arrived
With the explosive release of OpenAI's ChatGPT and the subsequent search-engine battle between Google and Microsoft, large language models (LLMs) and their applications have suddenly become a hot topic. ChatGPT and similar systems are reshaping how users experience and think about search: people can now interact with search engines in natural human language rather than relying on specific keywords or complex query syntax.
Question answering (QA) is one of the natural language processing (NLP) capabilities that LLMs can deliver, but QA systems have not always been a popular use case. Ryan Welsh, CEO of NLP search company Kyndi, recalls the difficulty he had explaining his company's approach to NLP search: "I remember raising money three years ago and everyone was like, 'Hey, cool, you're doing NLP, but search is not a good application for it.'"
Welsh said that the rise of ChatGPT has made far more people aware of the value of natural language capabilities, and the reception has completely changed: "I feel like ChatGPT achieved a decade of hype in 90-120 days."
Billions of dollars are now being invested in next-generation search technology. Suddenly there is real demand for QA systems that can quickly and accurately answer questions from stakeholders and external customers visiting a company's website or knowledge portal, as well as from internal employees searching company documents.
However, Welsh said that current chatbot technologies do not meet enterprise needs, and explainability, which is key to end-user trust, is often lacking. Enterprises require that the answers a large language model system generates be accurate and reliable, rather than polluted by the noise of training data scraped from the web. This is a problem faced by mainstream large models like ChatGPT. Because of the statistical nature of their underlying technology, these chatbots can produce a jumble of misinformation: they do not actually understand language, they simply predict the next most likely word. And the training data is often so extensive that it is nearly impossible to explain how the chatbot arrived at a given answer.
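The "next most likely word" mechanism Welsh describes is easy to illustrate with a toy model. The sketch below is a deliberately tiny bigram predictor in Python, not any vendor's actual system; the corpus and prompt are invented for illustration. It shows how purely statistical completion can produce fluent text with no notion of whether its claims are true.

```python
# Toy illustration of next-word prediction (not a real LLM): the model completes
# text by pattern frequency alone, with no source of truth to check against.
from collections import Counter, defaultdict

# Tiny hypothetical "training corpus"; real LLMs learn from billions of web documents.
corpus = (
    "the drug is approved for adults . "
    "the drug is approved for children . "
    "the drug is available in tablets . "
).split()

# Build bigram counts: for each word, how often each other word follows it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def continue_text(prompt, steps=5):
    """Greedily append the statistically most likely next word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The model "answers" by pattern completion, so it asserts "approved for adults"
# regardless of whether that is the right answer for the user's actual question.
print(continue_text("the drug is"))
```

Scaled up to billions of parameters the completions become far more convincing, but the underlying move is the same: choose a plausible continuation, not a verified fact.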
This "black box" approach to AI, which lacks explainability, is simply not suitable for many businesses. Welsh gave the example of a pharmaceutical company that answers questions from healthcare providers and patients visiting its drug website: the company must know, and be able to explain, every search result it serves to a questioner. So despite the recent surge in demand for systems like ChatGPT, Welsh said, adapting them to these stringent enterprise requirements is no easy task, and the need often goes unmet.
Welsh said his company has focused on these enterprise needs for years, learning from experience and from working directly with customers. Kyndi was founded in 2014 by Welsh, artificial intelligence expert Arun Majumdar, and computer scientist John Sowa, a knowledge graph expert who introduced a specific type known as conceptual graphs at IBM in 1976.
Kyndi's natural language search applications are built on advances in knowledge graphs and LLMs, using neuro-symbolic artificial intelligence, a semantic approach that complements statistical machine learning techniques. Rather than simply predicting the next most likely word in a text, the system creates a symbolic representation of the language, using vector and knowledge graph technology to map the relationships in the data. This lets the system understand the real intent behind an end user's question, helping it find context-specific answers while distinguishing among common synonyms, semantically equivalent terms, abbreviations, and misspellings.
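As a rough illustration of how a vector step and a knowledge-graph step can work together, here is a minimal Python sketch. The embeddings, triples, and the answer() helper are invented for this example and are not Kyndi's actual implementation; the point is that the graph triples behind an answer double as its explanation.

```python
# Minimal sketch (assumed design, not Kyndi's implementation) of pairing vector
# similarity with an explicit knowledge graph for traceable question answering.
import math

# Hypothetical pre-computed embeddings for a handful of passages.
passage_vectors = {
    "dosage":       [0.9, 0.1, 0.0],
    "side_effects": [0.1, 0.8, 0.2],
    "interactions": [0.0, 0.3, 0.9],
}

# A tiny knowledge graph: (subject, relation, object) triples tied to each passage.
knowledge_graph = [
    ("dosage", "recommends", "10 mg once daily"),
    ("side_effects", "includes", "headache"),
    ("interactions", "contraindicated_with", "warfarin"),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question_vector):
    # 1. Vector step: find the passage whose embedding best matches the question.
    topic = max(passage_vectors, key=lambda p: cosine(passage_vectors[p], question_vector))
    # 2. Symbolic step: return the graph triples for that passage; these facts are
    #    also the explanation of why this answer was chosen.
    facts = [t for t in knowledge_graph if t[0] == topic]
    return topic, facts

# A question embedding close to the "dosage" passage (in practice produced by an encoder).
topic, facts = answer([0.85, 0.15, 0.05])
print(topic, facts)  # -> dosage [('dosage', 'recommends', '10 mg once daily')]
```

Because every answer is backed by named triples rather than an opaque weight matrix, a reviewer can audit exactly which facts produced it, which is the kind of explainability enterprises are asking for.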
The technology requires almost no training data to work, which can relieve the bottlenecks caused by a shortage of labeled data and AI expertise. The high cost of data labeling makes training and fine-tuning an LLM prohibitively expensive for many enterprises, and this ease of tuning is another differentiator of Kyndi's neuro-symbolic approach. Welsh said many enterprise customers are already suffering from slow AI deployments. Before partnering with Kyndi, one large pharmaceutical company had six machine learning engineers and data scientists spend more than six months fine-tuning an LLM; Welsh said Kyndi needs only the help of a business analyst to train and tune its model in a day. In several other cases, Kyndi was able to take an AI project from demo through sandbox validation to deployment within two weeks.
"I think that sometime in the next 10 years, every search bar and every chat interface in every enterprise in the world will have an answer engine. This will be what we see in enterprise software The biggest shift,” Welsh said, comparing this moment to the shift from preprocessing to the cloud. "I don't think there is any vendor that is dominating this market right now."
Welsh predicts that in this new era of enterprise search, the winners will be the companies with the foresight to bring products to market first. Although competition is heating up, some of the newer entrants are already falling behind; he estimates they still have about two to three years and $30 million of engineering work ahead of them before they can succeed.