Google is visibly pessimistic about its future in artificial intelligence (AI), where it has been largely surpassed by OpenAI's GPT-4. This sentiment is evident in an internal note that recently leaked online. Written by an anonymous researcher and corroborated by several internal sources, the note sheds light on Google's apprehension about the progress of rival AIs.
Although AIs have advanced dramatically in less than a year, the landscape of general-purpose AI is still dominated by proprietary technologies. The success of GPT-4, for instance, can be attributed to a unique "secret sauce" that sets it apart from the competition. This proprietary advantage makes it difficult to bring Bard, LLaMA, and other large language models (LLMs) up to par with what OpenAI's labs have achieved.
Open-source AI poses a formidable threat that Google is deeply concerned about. About two months ago, the weights of LLaMA, an AI model developed by Meta, were leaked on 4chan. Initially, LLaMA was resource-intensive, requiring significant computing power. However, internet users, including experts in the field, swiftly improved the code, enabling the model to run on less powerful machines while increasing its efficiency.
Consequently, LLaMA's performance became comparable to that of Bard and ChatGPT. Progress was also made with "Low-Rank Adaptation" (LoRA), a technique that sharply reduces the effort and resources required to fine-tune a model. Just two weeks after the leak, Stanford released a LLaMA-based model named Alpaca-13B, which improved AI responses from 68% to 76% of GPT-4's capability.
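To see why LoRA makes fine-tuning so much cheaper, consider a minimal sketch of the idea: the pretrained weight matrix is frozen, and only two small low-rank matrices are trained in its place. The dimensions and rank below are illustrative assumptions, not values from any specific model.

```python
import numpy as np

# Minimal sketch of Low-Rank Adaptation (LoRA): instead of updating a
# full d x d weight matrix W, freeze W and learn two small matrices
# A (r x d) and B (d x r) with rank r << d. The adapted layer then
# computes W @ x + B @ (A @ x).

d, r = 1024, 8  # hypothetical layer width and LoRA rank

W = np.random.randn(d, d)          # frozen pretrained weights
A = np.random.randn(r, d) * 0.01   # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection (zero init,
                                   # so training starts from W unchanged)

def adapted_forward(x):
    # Frozen base output plus the learned low-rank correction.
    return W @ x + B @ (A @ x)

# Trainable parameters drop from d*d to 2*d*r:
full_params = d * d        # 1,048,576
lora_params = 2 * d * r    # 16,384
print(full_params // lora_params)  # → 64 (64x fewer trainable params)
```

This is why hobbyists could adapt a leaked base model on consumer hardware: only the small `A` and `B` matrices need gradients and optimizer state, while the frozen base weights are reused as-is.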
This example clearly demonstrates that the open-source community can develop AI models that rival those built by private companies. The note highlights the advantage Meta gained from the leak of LLaMA, since it can fold the community's contributions into its products more quickly.
As one of the first LLMs embraced by the developer community, LLaMA is favorably positioned in the field of open-source AI, which is still in its nascent stages. Consequently, the internal memo suggests that proprietary language models like those developed by Google and OpenAI will eventually be surpassed by completely open and freely available alternatives.
Moreover, these alternatives are increasingly resource-efficient, enabling local execution, unlike current models that rely on massive data centers to operate.