
The tool-integration problem holding back enterprise AI (and how CoTools solves it)

MONews



Researchers at Soochow University in China have introduced Chain-of-Tools (CoTools), a new framework designed to improve how large language models (LLMs) use external tools. CoTools aims to offer a more efficient and flexible approach than existing methods, allowing LLMs to use vast toolsets directly within the reasoning process, including tools they were never trained on.

For enterprises looking to build sophisticated AI agents, this capability could unlock more powerful and adaptable applications without the common drawbacks of current tool-integration techniques.

Modern LLMs excel at text generation, understanding, and complex reasoning, but many tasks require them to interact with external resources and tools such as databases or applications. Equipping LLMs with external tools (essentially functions or APIs the model can call) is crucial for extending their capabilities into practical applications.

However, current methods for enabling tool use face significant trade-offs. One common approach involves fine-tuning the LLM on examples of tool usage. While this can make the model proficient at calling the specific tools it saw during training, it often restricts the model to only those tools. Furthermore, the fine-tuning process itself can sometimes negatively impact the LLM's general abilities, such as chain-of-thought (CoT) reasoning, potentially diminishing the core strengths of the foundation model.

The alternative approach relies on in-context learning (ICL), where descriptions of the available tools and examples of how to use them are placed directly in the prompt. This method offers flexibility, allowing the model to potentially use tools it has never seen before. However, constructing such complex prompts becomes cumbersome, and the model's efficiency degrades sharply as the number of available tools grows, making ICL impractical for scenarios with large, dynamic toolsets.
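To see why prompt-based tool use breaks down at scale, consider a toy sketch. The tool names and prompt format below are purely illustrative (not from the paper): the point is simply that an ICL prompt must carry every tool description, so its length grows linearly with the size of the tool pool.

```python
# Illustrative sketch: the ICL prompt must embed every tool description,
# so prompt size grows linearly with the tool pool (hypothetical tools).

def build_icl_prompt(question, tools):
    """Pack every tool description and a usage example into one prompt."""
    lines = ["You can call the following tools:"]
    for name, desc in tools:
        lines.append(f"- {name}: {desc}")
        lines.append(f"  Example: {name}(...)")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

small = [(f"tool_{i}", "does something useful") for i in range(10)]
large = [(f"tool_{i}", "does something useful") for i in range(1000)]

p_small = build_icl_prompt("What is 2 + 2?", small)
p_large = build_icl_prompt("What is 2 + 2?", large)

# The large prompt consumes most of the context window before the model
# even reaches the question itself.
print(len(p_small), len(p_large))
```

With a thousand tools, the prompt is roughly two orders of magnitude longer than with ten, before the user's question has even been processed.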

As the researchers note in their paper, a massive number of new tools can appear every day in real-world application scenarios, so an ideal LLM agent must be able to efficiently manage large toolsets during CoT reasoning and make use of tools it has never seen before.

CoTools offers a compelling alternative by combining aspects of fine-tuning and semantic understanding while, crucially, keeping the core LLM "frozen." Instead of fine-tuning the entire model, CoTools trains lightweight, specialized modules that work alongside the LLM.

"The core idea of CoTools is to leverage the semantic representations of the frozen foundation model to determine where to call tools and which tools to call," the researchers said.

In essence, CoTools taps into the LLM's rich internal representations, often called "hidden states," which the model computes as it processes text and generates response tokens.
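As an illustration of how a lightweight module can be trained on top of frozen representations, here is a minimal sketch. Everything in it is synthetic and hypothetical, standing in for the paper's actual modules and data: the "hidden states" are random vectors, and the judge is a simple logistic-regression head whose weights are the only thing being trained.

```python
import numpy as np

# Sketch: training a lightweight tool-judge head on frozen hidden states.
# Illustrative only; the base model's weights are never touched -- we fit
# only a small linear head w on hidden-state vectors.

rng = np.random.default_rng(1)
DIM = 16

# Synthetic "hidden states": positions where a tool call is needed (label 1)
# vs. ordinary next-token positions (label 0).
direction = rng.normal(size=DIM)
X = rng.normal(size=(200, DIM))
y = (X @ direction > 0).astype(float)

w = np.zeros(DIM)                        # the only trainable parameters
lr = 0.5
for _ in range(300):                     # plain gradient descent on log loss
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= lr * X.T @ (p - y) / len(X)

acc = np.mean(((X @ w) > 0) == (y == 1))
print(f"judge training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: the expensive model stays frozen, and only a tiny head (here, 16 weights) is optimized, which is why the approach avoids degrading the base model's reasoning abilities.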

The CoTools architecture. Credit: arXiv

The CoTools framework comprises three main components that operate sequentially during the LLM's reasoning process:

Tool Judge: As the LLM generates its response token by token, the Tool Judge analyzes the hidden state associated with the upcoming token and decides whether calling a tool is appropriate at that specific point in the reasoning chain.

Tool Retriever: If the Judge determines a tool is needed, the Retriever selects the most suitable tool for the task. The Tool Retriever is trained to create an embedding of the query and compare it against embeddings of the available tools. This allows it to efficiently pick the most semantically relevant tool from the pool, including "unseen" tools (i.e., tools that were not part of the training data for the CoTools modules).

Tool Calling: Once the best tool is selected, CoTools uses an ICL prompt demonstrating how to fill in the tool's parameters based on the context. This focused use of ICL avoids the inefficiency of adding thousands of demonstrations to the prompt during the initial tool selection. Once the selected tool is executed, its result is inserted back into the LLM's response.
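The three stages can be sketched as a single decoding loop. The following is an illustrative mock, not the authors' implementation: the LLM's hidden states, the judge's scoring head, and the tool embeddings are all stand-in values, and the "generation" is a pre-scripted list of steps.

```python
import numpy as np

# Mock of the CoTools decoding loop (illustrative, not the authors' code):
# a Tool Judge scores the hidden state before each token, a Tool Retriever
# picks a tool by embedding similarity, and the tool's result is spliced
# back into the generated response.

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical tool pool: name -> (description embedding, implementation).
TOOLS = {
    "add": (rng.normal(size=DIM), lambda a, b: a + b),
    "mul": (rng.normal(size=DIM), lambda a, b: a * b),
}

def judge(hidden):                       # Tool Judge: "call a tool here?"
    w = np.ones(DIM) / DIM               # stand-in for a trained linear head
    return float(w @ hidden) > 0.5

def retrieve(query_emb):                 # Tool Retriever: cosine similarity
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(TOOLS, key=lambda name: cos(TOOLS[name][0], query_emb))

def generate(steps):
    """steps: list of (token, hidden_state, query_emb_or_None, args)."""
    out = []
    for token, hidden, query_emb, args in steps:
        if query_emb is not None and judge(hidden):
            name = retrieve(query_emb)
            result = TOOLS[name][1](*args)   # tool calling (parameters are
            out.append(str(result))          # filled via ICL in the paper)
            continue
        out.append(token)
    return " ".join(out)

# Scripted "generation" where the final position triggers a tool call.
steps = [
    ("2", np.zeros(DIM), None, None),
    ("+", np.zeros(DIM), None, None),
    ("3", np.zeros(DIM), None, None),
    ("=", np.zeros(DIM), None, None),
    ("?", np.ones(DIM), TOOLS["add"][0], (2, 3)),
]
print(generate(steps))   # the "?" placeholder is replaced by the tool result
```

Note how the expensive part (scanning the whole tool pool) happens only at retrieval time via embedding comparison, never inside the prompt, which is what lets the approach scale to thousands of tools.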

By separating decision-making (the Judge) and selection (the Retriever), both grounded in semantic understanding, from parameter filling (calling via focused ICL), CoTools achieves efficiency with massive toolsets while preserving the LLM's core abilities and enabling flexible use of new tools. However, because CoTools requires access to the model's hidden states, it can only be applied to open-weight models such as Llama and Mistral, not to closed models such as GPT-4o and Claude.

Examples of CoTools in action. Credit: arXiv

The researchers evaluated CoTools in two distinct application scenarios: numerical reasoning using arithmetic tools, and knowledge-based question answering (KBQA), which requires retrieval from knowledge bases.

On arithmetic benchmarks such as GSM8K-XL and FuncQA (which involves more complex functions), CoTools applied to LLaMA2-7B achieved performance comparable to ChatGPT on GSM8K-XL and slightly outperformed ToolkenGPT, another tool-learning method, on the FuncQA variants. The results indicate that CoTools effectively enhances the capabilities of the underlying foundation model.

On KBQA tasks, tested on the KAMEL dataset and the researchers' newly constructed SimpleToolQuestions dataset, which features a very large tool pool (including 837 tools unseen during training that appear only in the test set), CoTools showed superior tool-selection accuracy. It particularly excelled in scenarios with massive numbers of tools and unseen tools, where leveraging descriptive tool information enabled effective retrieval. The experiments also showed that CoTools maintained robust performance even with lower-quality training data.

Implications for companies

Chain-of-Tools presents a promising direction for building more practical and powerful LLM-based agents in the enterprise. This is especially relevant as emerging standards such as the Model Context Protocol (MCP) make it easier to integrate external tools and resources into applications. Enterprises could deploy agents that adapt to new internal or external APIs with minimal retraining overhead.

The framework's reliance on semantic understanding via hidden states could also enable more nuanced and accurate tool selection, potentially leading to more reliable AI assistants for tasks that require interacting with diverse information sources and systems.

"CoTools explores a simple way of equipping LLMs with massive new tools, and it can be used to build personal AI agents with MCP and to perform complex reasoning with scientific tools," said Wu.

But Wu also noted that the work so far is only a preliminary exploration. "To apply it to real-world environments, we need to find a balance between the cost of fine-tuning and the efficiency of generalized tool calling," Wu said.

The researchers have released the code for training the Judge and Retriever modules on GitHub.

"We believe that an ideal tool-learning agent framework, based on a frozen LLM with a practical implementation, can be useful for real-world applications and can drive further development of tool learning," the researchers said.
