Want to know how ChatGPT, Bing, and Bard stack up against each other? Welcome to the Chatbot Arena.
A UC Berkeley research group, in partnership with UC San Diego and Carnegie Mellon University, has devised an experiment where users can chat with two anonymous models at the same time and vote for the better one. Chatbot Arena includes LLMs from OpenAI (GPT-4), Google (PaLM), Meta (LLaMA), and Anthropic (Claude), as well as other models built using these companies’ APIs.
When you enter a prompt in the Chatbot Arena, two anonymous models give their responses. Once you cast your vote, the experiment reveals which model you voted for. You can also run side-by-side comparisons of specific models and check the leaderboard for the top-rated models.
[Screenshot of a Chatbot Arena matchup. Caption: "Which chatbot was the better Karen? I voted for A." Credit: LMSYS Org]
The research group, called the Large Model Systems Organization (LMSYS), created the crowdsourced experiment as a way to benchmark the many LLMs that have proliferated recently. “Benchmarking LLM assistants is extremely challenging because the problems can be open-ended, and it is very difficult to write a program to automatically evaluate the response quality,” said the LMSYS blog post announcing Chatbot Arena. So far, more than 40,000 votes have been cast.
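The ranking itself comes from aggregating these head-to-head votes. As a rough illustration (not LMSYS's exact implementation), here is a minimal Python sketch of how pairwise wins and losses can be folded into Elo-style ratings; the model names, the starting rating of 1000, and the K-factor of 32 are placeholder assumptions for the example:

```python
from collections import defaultdict

K = 32        # update step size (a common Elo K-factor; assumption for this sketch)
BASE = 1000   # starting rating assigned to every model

def expected_score(r_a, r_b):
    """Probability that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_ratings(ratings, winner, loser):
    """Shift both ratings toward the observed head-to-head outcome."""
    e_win = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_win)
    ratings[loser] -= K * (1 - e_win)

# Hypothetical crowdsourced votes: (winning model, losing model)
votes = [
    ("gpt-4", "llama-13b"),
    ("claude-v1", "palm-2"),
    ("gpt-4", "claude-v1"),
]

ratings = defaultdict(lambda: BASE)
for winner, loser in votes:
    update_ratings(ratings, winner, loser)

# Print a leaderboard, highest rating first
for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {rating:.0f}")
```

The key property of this kind of scheme is that an upset win against a higher-rated model moves the ratings more than an expected win, so a ranking emerges even though no single voter ever compares all the models at once.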
So which LLM is the best? So far, that honor goes to GPT-4. In second place is Anthropic’s Claude-v1, followed by Claude Instant, which is Anthropic’s lighter, faster version of Claude. Check out the leaderboard for the full results, and try out the Chatbot Arena for yourself on the LMSYS website.