Open Source Alternatives to ChatGPT & Bard Will Overtake Big Tech’s Solutions Very Soon

SUMMARY/TL;DR

A leaked Google document suggests that the company’s efforts to build the most powerful language models are being rapidly eclipsed by the work happening in the open-source community. While Google’s models still hold a slight edge in terms of quality, the gap is closing quickly.

Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that Google struggles with at $10M and 540B params.

The document argues that LoRA, a technique that allows models to be fine-tuned in just a few hours on consumer hardware, is extremely effective and stackable. It concludes that keeping technology secret is a tenuous proposition, and that Google and OpenAI are making the same mistakes in their posture relative to open source.

The Rise of Open-Source Language Models

The field of natural language processing (NLP) has been revolutionized by the development of large language models (LLMs) over the past few years. Google and OpenAI are two of the biggest players in this space, with their models being among the most powerful and effective in the world. However, a leaked Google document suggests that the company’s efforts to build the most powerful language models are being rapidly eclipsed by the work happening in the open-source community. In this article, we will explore the reasons behind the rise of open-source language models and why they will ultimately triumph over Google & OpenAI’s efforts.

The Advantages of Open-Source Language Models

Open-source models have several advantages over proprietary models that give them the edge in the race for the most powerful language models. Here are some of the reasons why:

Customizability

Open-source models are highly customizable, allowing users to fine-tune them to their specific needs. This is in contrast to proprietary models, which are often black boxes that cannot be modified or tweaked. With open-source models, users have access to the weights and the training code and can adapt both to suit their needs, whether that means specializing for a particular task or improving performance.
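
As a deliberately minimal sketch of what that customization looks like in practice, the snippet below fine-tunes a small open model on a domain-specific text file using the Hugging Face Transformers library. The model name (EleutherAI/pythia-1.4b) and the training file (my_domain_corpus.txt) are placeholders chosen purely for illustration, not recommendations:

```python
# Minimal sketch: fine-tune an open causal LM on your own text corpus.
# Assumes the `transformers` and `datasets` packages are installed.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-1.4b"      # placeholder open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a plain-text file as the training set (path is a placeholder).
dataset = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_data,
    # mlm=False selects the causal-LM objective (labels = shifted inputs)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```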

Speed

Open-source models are often faster than proprietary models. This is because the open-source community is constantly working on improving the models and optimizing them for speed. Additionally, open-source models can be run on a wide range of hardware, including consumer-grade hardware. This is in contrast to proprietary models, which often require specialized hardware to run at peak performance.
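
One concrete reason these models run well on modest hardware is weight quantization. The sketch below loads a model with its weights compressed to 4 bits, shrinking memory requirements to roughly a quarter of half precision; it assumes the `transformers`, `accelerate`, and `bitsandbytes` packages, and the model name is again just a placeholder:

```python
# Hedged sketch: load an open model in 4-bit precision so it fits on a
# consumer GPU. Requires `transformers`, `accelerate`, and `bitsandbytes`.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit form
    bnb_4bit_quant_type="nf4",             # normalized-float quantization
    bnb_4bit_compute_dtype=torch.float16,  # do the maths in half precision
)

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b",              # placeholder: any open causal LM
    quantization_config=bnb_config,
    device_map="auto",                     # let `accelerate` place the layers
)
```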

Privacy

Privacy is a major concern for many people when it comes to language models. With proprietary models, users typically have to send their data to someone else’s servers in order to use the model. This is not the case with open-source models, which can be run locally on a user’s own computer, ensuring that their data stays private.
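
To make the privacy point concrete, here is a minimal sketch of fully local inference: the prompt and the generated text never leave the machine. The model name is a placeholder:

```python
# Hedged sketch: generate text entirely on the local machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-1.4b"  # placeholder open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

prompt = "The main advantage of running a language model locally is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```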

Capability & Cost

Open-source models are pound-for-pound more capable than their proprietary counterparts: they are doing things with $100 and 13B params that Google struggles with at $10M and 540B params. Rapid, community-driven iteration on training and fine-tuning techniques, rather than sheer scale, is what keeps pushing the boundary of what is possible.

The LoRA Technique and Stackable Models

One of the other key advantages of open-source models is the ability to use the so-called “LoRA technique” to fine-tune models in just a few hours on consumer hardware. LoRA stands for Low-Rank Adaptation: rather than updating all of a model’s weights, it trains a small pair of low-rank matrices for each targeted layer and adds their product to the frozen original weights. Because only those small matrices are trained, fine-tuning becomes fast and cheap.

The leaked Google document argues that LoRA is extremely effective and stackable: since each fine-tune is a small, self-contained set of weight updates, improvements can be layered on top of one another instead of requiring a full retraining run each time. The document concludes that keeping technology secret is a tenuous proposition, and that Google and OpenAI are making the same mistakes in their posture relative to open source.
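
To show roughly what this looks like in code, here is a hedged sketch using the Hugging Face `peft` library, one popular open-source implementation of LoRA. The base model and the target_modules value are assumptions that depend on the model architecture:

```python
# Hedged sketch: wrap an open model with LoRA adapters via the `peft` library.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")  # placeholder

config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,                       # scaling applied to the update
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Training then proceeds exactly as in ordinary fine-tuning, but gradients
# flow only through the small adapter matrices, which is why it is so cheap.
```

Because each adapter is just a small bundle of matrices, adapters can be kept separate, swapped, or merged back into the base weights (`peft` exposes a `merge_and_unload()` method for this), which is one way to read the document’s claim that improvements are “stackable”.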

Potential Challenges of Open-Source Language Models

While the rise of open-source language models is exciting, it is important to consider the potential challenges that lie ahead. One of the biggest is safety. LLMs are a Swiss-army knife that can be used for good or ill: the risk that these models could be used to spread disinformation, manipulate public opinion, or create fake news is already unfolding. There is a very real possibility that it will become almost impossible to tell whether what we are reading is grounded in the real world or simply made up. Crowd-sourced approaches such as Twitter’s Community Notes show how the collective experience and skill of a large number of people can help push back against false information, but it is admittedly still early days on this front, and time is not on our side, given the explosive growth of generative AI tools over the last few months (or even weeks!).

Another challenge is the potential for bias in the models. Open-source models are only as good as the data they are trained on: if the data is biased in some way, the model will be biased as well. It is important for developers to be aware of this and take steps to mitigate bias in their models; Open Assistant’s recent crowd-sourced approach to data collection has gone a long way towards addressing these issues.

Finally, there is the challenge of scalability. Open-source models are often run on consumer-grade hardware, which can limit how far they scale. As these models become more popular, there will be a need for more powerful hardware to run them on. To me, this looks like a superb opportunity to leverage blockchain-style distributed ledgers, and arguably a much better justification for the electricity used than Proof-of-Work cryptocurrencies alone, as there are undeniable and tangible benefits to both democratising and distributing generative AI solutions.

Conclusion

The rise of open-source language models is a significant development in the field of natural language processing. These models have several advantages over proprietary models, including customizability, speed, privacy, and capability. The LoRA technique and the stackability of its fine-tunes are particularly exciting developments that are helping to push the boundaries of what is possible with language models.

However, there are also potential challenges that need to be considered, such as safety, bias, and scalability. It is important for developers to be aware of these challenges and take steps to mitigate them.

Overall, the rise of open-source language models is a positive development that is democratizing access to powerful language models and accelerating innovation in the field of natural language processing. In the long run, I believe that open-source models will ultimately triumph over proprietary models due to their many advantages and the collaborative nature of the open-source community.

Dez Futak