Welcome to [Paper Review], where TentuPlay presents the latest research papers on AI, the games industry, and IT.

Before we talk about Chameleon

Chameleon LLM Example 1

Before we talk about what Chameleon LLM is and what it can do, take a look at the question on the left. It asks which persuasive appeal an ad image primarily uses, with three options: pathos, ethos, and logos. Choosing the correct answer is not straightforward: we must analyze the image, understand what pathos, ethos, and logos mean, and then determine which option fits best.

Chameleon LLM with a variety of tools

In April 2023, a paper was released about an LLM framework that can solve complex real-world questions as well as a human, or even better. A research team from UCLA and Microsoft presented the “Chameleon LLM”, a plug-and-play compositional reasoning framework. Unlike a standalone LLM, which is limited by outdated knowledge and imprecise mathematical reasoning, Chameleon composes a variety of tools, such as GPT-4, Hugging Face vision models, models hosted on GitHub, Python, and the Bing search engine, to solve these complex real-world queries.
* LLM: A Large Language Model is an advanced artificial intelligence model trained on massive amounts of text data. Examples include OpenAI's GPT series and Google's BERT.

Returning to the paper plate advertisement used as an example in the paper, let's observe how Chameleon navigates this complex query. It extracts the text in the ad with a text detection (OCR) tool hosted on GitHub, then uses GPT-4 to reason over the question and the three options and generate a solution. In this case, Chameleon concludes: “This appeal is primarily based on the credibility and authority of the Sierra Club, which is an example of ethos (character)”. It's undoubtedly impressive.
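To make the plug-and-play idea more concrete, here is a minimal sketch in Python of how a planned sequence of tool modules could be executed for a query like the one above. The module names, stub functions, and call_llm helper are our own illustrative assumptions, not the authors' actual code or the paper's exact module interface.

```python
# A minimal sketch of plug-and-play tool composition. The module names,
# stub functions, and call_llm() helper are illustrative assumptions,
# not the authors' code.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real GPT-4 (or other LLM) API call here.
    return f"[LLM response to: {prompt[:50]}...]"

# Each tool takes the shared context dict and returns an updated copy.
def text_detector(ctx):        # e.g. an OCR model hosted on GitHub
    return {**ctx, "ocr_text": "[text extracted from the ad image]"}

def image_captioner(ctx):      # e.g. a Hugging Face captioning model
    return {**ctx, "caption": "[natural-language description of the image]"}

def bing_search(ctx):          # e.g. a Bing web-search call
    return {**ctx, "search_results": "[relevant snippets from the web]"}

def solution_generator(ctx):   # the LLM reasons over everything gathered so far
    return {**ctx, "solution": call_llm(str(ctx))}

TOOLS = {
    "Text_Detector": text_detector,
    "Image_Captioner": image_captioner,
    "Bing_Search": bing_search,
    "Solution_Generator": solution_generator,
}

def run_program(question: str, program: list[str]) -> dict:
    """Execute a planned sequence of tools, threading one shared context through."""
    ctx = {"question": question}
    for tool_name in program:
        ctx = TOOLS[tool_name](ctx)
    return ctx

# For the ad question, a planner LLM might emit a program like this one:
result = run_program(
    "Which is the main persuasive appeal used in this ad? (A) pathos (B) ethos (C) logos",
    ["Text_Detector", "Solution_Generator"],
)
print(result["solution"])
```

The key design point is that the planner only chooses and orders tool names; each tool reads from and writes to a shared context, so new tools can be plugged in without changing the executor.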

Utilizing an image verbalization tool (Hugging Face)

Chameleon LLM Example 2

This paper showcases an example of using an image verbalization tool. When presented with an image and a question, Chameleon interprets the image using a Hugging Face image captioning tool. In this case, the caption is “A polar bear is standing in the snow.” Next, GPT-4 generates the search query “Animal skin adaptations for cold environments”, and Bing is used to search for relevant information online. Finally, GPT-4 synthesizes the collected information to generate the answer: since a polar bear is not among the options, the best possible answer is (A) Eurasian lynx, which lives in a colder region than the Thorny devil.
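As a rough illustration of this image verbalization step, the snippet below uses the Hugging Face transformers library to caption an image. The specific model and the image file name are our own choices for this example; the paper may rely on a different captioning model.

```python
# Illustrative image-verbalization step using the Hugging Face transformers
# library. The chosen model and image file are assumptions for this example.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Turn the input image into a short natural-language description that the
# downstream modules (query generation, search, answer synthesis) can use.
result = captioner("polar_bear.jpg")
print(result[0]["generated_text"])  # e.g. "a polar bear standing in the snow"
```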

Mathematical reasoning with tabular context

Chameleon LLM Example 3

Chameleon has the ability to handle complex queries that involve a table. Here’s an example of mathematical reasoning: finding the median value in the “Miles hiked by Wanda” table. As you can see above, Chameleon first retrieves knowledge about what a median is and reads the table contents using GPT-4. It then writes and runs Python code over the table values, arriving at 9 as the correct answer.
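For reference, this is the kind of short Python program a program-generator module might produce and execute for the median question. The mileage values below are made up for illustration (they are not the actual table from the paper), chosen so that the median comes out to 9 as in the example.

```python
# A sketch of the kind of Python code a program-generator module might emit
# and execute for this query. The values are illustrative, not the paper's table.
import statistics

miles_hiked = [7, 8, 9, 10, 11]   # hypothetical "Miles hiked by Wanda" values

median_miles = statistics.median(miles_hiked)
print(median_miles)  # prints 9
```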

Better than a HUMAN

Chameleon LLM Results

According to the researchers, Chameleon LLM stands out from other LLMs due to its flexibility and efficiency. They evaluated it on two benchmark tasks, ScienceQA and TabMWP, to demonstrate this. Chameleon with GPT-4 outperformed the other models on both tasks and even exceeded human performance (shown by the dashed line) by 8.56% on TabMWP.

The potential of Chameleon LLM

Chameleon LLM is getting lots of attention, although it is currently limited to research or non-commercial use. Companies across various industries may adopt this technology in the future because it can speed up development and lower costs. Thanks to its plug-and-play approach, Chameleon LLM can be integrated easily, and as more tools are developed, it will be able to solve ever more complex real-world problems. It is worth keeping a close watch on the development of Chameleon LLM to make full use of its potential.

TentuPlay, a personalized analysis tool for game users, is working on integrating various LLMs into our product. Our goal is to uncover more insightful behavioral patterns among users and use those insights to create customized events for your players. As AI continues to evolve, we're committed to providing you with deeper insights into your games. Stay connected with TentuPlay to see how we can help you improve your user experience.

<Reference>
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao, "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models", arXiv preprint arXiv:2304.09842 (2023).