More Agents Boost Large Language Models' Performance, Study Finds

In a new study, Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye from Tencent Inc. have found that the performance of large language models (LLMs) can be significantly enhanced simply by increasing the number of agents used in a sampling-and-voting method. This finding, presented in their paper "More Agents Is All You Need", describes a simple, scalable way to improve LLM performance across a wide range of tasks, from language understanding to code generation.

The research team's method sidesteps the complexity of existing enhancement techniques: each agent is simply one independent sample drawn from the model, and the final answer is chosen by majority vote, so performance scales with the number of agents employed. Their comprehensive experiments across various benchmarks demonstrate that this method not only stands on its own but also complements and amplifies more intricate methods.
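
To make the idea concrete, here is a minimal sketch of such a sampling-and-voting loop. It is not the authors' implementation: `query_llm` is a hypothetical stand-in for a single LLM call, and exact-match voting is assumed for simplicity.

```python
from collections import Counter

def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for one LLM 'agent' call.

    Replace with a real API call; a sampling temperature above 0
    lets repeated calls disagree with one another.
    """
    raise NotImplementedError("wire up your LLM client here")

def sample_and_vote(prompt: str, num_agents: int) -> str:
    """Query the model num_agents times and return the majority answer.

    This mirrors the sampling-and-voting idea: each sample acts as
    one agent, and the most frequent answer wins.
    """
    answers = [query_llm(prompt) for _ in range(num_agents)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```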

One of the study's highlights is that smaller LLMs can match or even surpass the performance of their larger counterparts once the ensemble size grows. For instance, when the ensemble size was scaled up to 15, a smaller LLM achieved accuracy comparable to that of a model with significantly more parameters.
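
In practice, this scaling behavior could be measured with a sweep over ensemble sizes, as in the sketch below. It reuses the `sample_and_vote` function from above, and the one-item `eval_set` is placeholder data, not taken from the paper.

```python
# Hypothetical sweep: accuracy as a function of ensemble size on a
# small evaluation set of (question, reference_answer) pairs.
eval_set = [("What is 17 * 24?", "408")]  # placeholder, not the paper's data

for n in (1, 5, 10, 15):
    correct = sum(sample_and_vote(q, num_agents=n) == a for q, a in eval_set)
    print(f"ensemble size {n:>2}: accuracy {correct / len(eval_set):.2f}")
```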

Moreover, the study explores how the method's effectiveness correlates with task difficulty. Through a detailed analysis, the researchers identified three dimensions affecting this correlation: the inherent difficulty of the task, the number of reasoning steps required, and the prior probability of the correct answer. Their findings suggest that the performance gains from increasing the number of agents are more pronounced on more challenging tasks.

In response to these insights, Li and his colleagues propose several optimization strategies that exploit the identified properties to further enhance LLM performance. These include step-wise sampling-and-voting, which applies the voting procedure at each step of a task rather than only to the final answer, and hierarchical sampling-and-voting, which breaks complex tasks down into simpler subtasks.
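
One plausible reading of the step-wise variant, again building on the `sample_and_vote` sketch above, is to vote on each intermediate result before moving on. The step decomposition here is assumed to be supplied by the caller; the paper remains the authoritative description of the procedure.

```python
def stepwise_sample_and_vote(task: str, steps: list[str], num_agents: int) -> str:
    """Apply sampling-and-voting at every step of a multi-step task.

    `steps` is a caller-supplied decomposition of the task; each
    voted-on partial result is folded into the prompt for the next
    step, so errors can be corrected before they compound.
    """
    context = task
    result = ""
    for step in steps:
        prompt = f"{context}\n\nNext step: {step}"
        result = sample_and_vote(prompt, num_agents)  # majority answer for this step
        context = f"{prompt}\n\nAgreed result: {result}"
    return result  # the final step's majority answer
```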

The paper, now publicly accessible along with the researchers' code, lays a foundation for future work on scaling LLM performance. It opens up a cost-effective, scalable way to strengthen LLMs without resorting to complex methodologies. As LLMs continue to play a pivotal role in various applications, the implications of this study are broad, suggesting a potential shift in how performance improvements for LLMs are approached.