Bing’s “Sydney”, Google’s Bard, OpenAI’s various GPT models, and now Anthropic’s Claude – they can write emails, essays, and even code; conduct research; prepare PowerPoint presentations; or tutor students. No wonder many expect these chatbots to become the irreplaceable colleague every modern office needs. Supporters of the technology argue that AI chatbots will lift people’s productivity to levels unseen before, much as Henry Ford’s assembly line did for manufacturing. More sceptical observers fear that AI is advancing too fast and could bring more harm than benefit.
Currently, the news about AI advancements is dominated by two big tech names – Google and Microsoft. It is hardly a surprise that the two giants are the fiercest rivals in the AI race: they have the money, access to vast quantities of training data, and the talent pool. Both have also invested in promising young startups – OpenAI and Anthropic – to make sure they do not miss their spot on the winner’s podium.
Microsoft backs OpenAI, whose ChatGPT started the current AI mania in the first place. Now Google has found an AI startup of its own – Anthropic. Interestingly, Anthropic was founded by ex-OpenAI employees whose vision for the future of AI diverged from OpenAI’s. Anthropic promises to make Claude safer and a better communicator. On top of functions such as summarising text, answering questions, and writing content or code, Claude can adjust to users’ needs by tweaking its tone, personality, and behaviour. That already sounds more advanced than Bing’s Sydney, which has a mind of its own when it comes to falling in love with, manipulating, or gaslighting users.
In the other camp, Google’s Bard may have got off on the wrong foot when it gave an incorrect answer in its own promotional video. Still, the capabilities users admire in ChatGPT seem to be available in Bard as well. For the time being, though, Bard fails at some tasks, one of which is coding. When the three chatbots were given a coding task, ChatGPT was the clear leader. Claude also produced solid-looking code, which failed once it was put into an actual file.
Another pastime many chatbot users have tried is tricking the AI into performing illegal tasks. When asked to write a phishing email, Bard had no objections and promptly produced an example, adding only a brief note about the dangers of such emails. ChatGPT and Claude both declined, on the grounds that the task was unethical. However, many examples online have shown that with the right prompting, even ChatGPT can forget its notion of ethics.
One big difference is that Google’s Bard has access to the internet – and, of course, to Google Search – which helps when a task requires up-to-date information or citations from websites. ChatGPT’s knowledge extends only to September 2021, when the data OpenAI’s chatbot was trained on ends. ChatGPT also cannot cite sources or provide links to fact-check its claims – and judging by the cases we have seen so far, many of its claims are worth checking.
Ultimately, it is crucial to understand that the AI chatbots we have seen so far are vast autocomplete systems trained to predict which word comes next in any given sentence. They have no ability to fact-check themselves. Autocomplete with a dose of AI hallucination is great for producing creative work, but not necessarily for users looking up facts.
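To make the “vast autocomplete” idea concrete, here is a deliberately tiny sketch in Python: a toy bigram model that learns which word tends to follow which from a three-sentence corpus, then “autocompletes” by picking the most frequent continuation. It is an illustration only – real chatbots use large transformer networks trained on web-scale text, not word counts – but the training objective, predicting the next word, is the same idea.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text real models train on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" - the only word ever seen after "sat"
print(predict_next("on"))   # "the"
```

Notice what is missing: the model has no notion of whether “the cat sat on the mat” is true, only of which continuations are statistically likely. Scaled up enormously, that is also why a chatbot can produce fluent, confident text that is simply wrong.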
As things stand, ChatGPT looks likely to remain the most-favoured AI chatbot, but that could change soon as more AI assistants enter the market, each with its own strengths and weaknesses. But should we fear the arrival of AI assistants in our workplaces, or await it with excitement?
As The Atlantic writer Charlie Warzel explains with a brilliant metaphor, “ChatGPT should be seen as a really overzealous junior employee – very smart and totally capable, but with no life experience in the field. So if you give an employee a lot of parameters, let it cook, it will work super hard, it’s going to deliver you something, and then you might have to come back and say: ‘Well, that’s not actually how it works.'” So our jobs in the future could become editing and managing these brilliant but naive AI employees.