
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google & OpenAI

Updated: Nov 13, 2023

Google: "We Have No Moat, And Neither Does OpenAI"

According to a leaked internal document written by Luke Sernau, a senior software engineer at Google, open source AI will outcompete both Google and OpenAI. The document was shared by an anonymous source and leaked in April 2023. It argues that the work happening in the open source community will eclipse the work done by Google and OpenAI. The leak arrives amid an AI arms race between tech giants such as Microsoft and Google, and it has added momentum to the industry's shift toward open source development. Meta, for its part, released its LLaMA language model to AI researchers; its weights subsequently spread across the wider AI community, where they became the foundation for much of the open source activity the memo describes.

A summary of the document:

Open source is outpacing Google and OpenAI:

The open source community has been making significant progress in solving major open problems related to LLMs, with innovations such as running foundation models on a Pixel 6 at 5 tokens/sec, scalable personal AI, responsible release, and multimodality.

Open source models are faster, more customisable, more private, and pound-for-pound more capable than Google's and OpenAI's, doing things with $100 and 13B parameters that Google and OpenAI struggle to do with $10M and 540B parameters.

Google should prioritise enabling third-party integrations and consider where its value add really is, as people are not likely to pay for a restricted model when free, unrestricted alternatives are comparable in quality.

Open source innovations are pivotal:

The innovations that powered open source's recent successes directly solve problems that Google is still struggling with.

Low-Rank Adaptation (LoRA) is an incredibly powerful technique that Google should probably be paying more attention to, as it allows models to be fine-tuned at a fraction of the usual cost and time.
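To see why LoRA is so cheap, it helps to look at the parameter counts. The memo doesn't include code; the following is a minimal numpy sketch of the core idea, with hypothetical dimensions chosen for illustration: the pretrained weight matrix is frozen, and only a low-rank update (two small factor matrices) is trained.

```python
import numpy as np

# Hypothetical dimensions for one weight matrix in a transformer layer.
d, k = 1024, 1024   # shape of the frozen pretrained weight
r = 8               # LoRA rank, chosen so that r << d and r << k

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable low-rank factor, zero-initialised
                                         # so the update starts as a no-op

# Effective weight used in the forward pass; gradients flow only into A and B.
W_eff = W + B @ A

full_params = d * k          # parameters updated by full fine-tuning
lora_params = r * (d + k)    # parameters updated by LoRA
print(f"LoRA trains {lora_params} of {full_params} parameters "
      f"({100 * lora_params / full_params:.2f}%)")
```

With these (illustrative) numbers, LoRA updates about 1.6% of the layer's parameters, which is where the cost and time savings the memo highlights come from.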

Retraining models from scratch is the hard path; instead, Google should be thoughtful about whether each new application or idea really needs a whole new model and invest in more aggressive forms of distillation that allow retaining as much of the previous generation's capabilities as possible.
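The distillation the memo alludes to typically means training a small "student" model to match the output distribution of a large "teacher". As an illustration only (none of this code is from the memo), here is a minimal numpy sketch of the standard temperature-scaled distillation loss; the logit values are made up:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the softened teacher distribution to the student's,
    scaled by T^2 as is conventional so gradients stay comparable across T."""
    p = softmax(teacher_logits, T)  # soft targets from the large model
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

# Toy logits for one example over a 4-token vocabulary (illustrative values).
teacher = np.array([2.0, 1.0, 0.2, -1.0])
student = np.array([1.8, 1.1, 0.1, -0.9])
print(distillation_loss(student, teacher, T=2.0))
```

Minimising this loss transfers the teacher's behaviour into the smaller student, which is how capabilities from a previous generation can be retained without retraining a large model from scratch.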

Focusing on maintaining some of the largest models on the planet actually puts Google at a disadvantage, as iterating faster on small models is more effective in the long run.

Directly competing with open source is a losing proposition:

Google's business strategy needs to take into account the fact that people would choose a free, high quality open source alternative without usage restrictions over a Google product with restrictions.

Google should not expect to catch up with open source, as open source has significant advantages that Google cannot replicate.

Keeping technology secret was always a tenuous proposition, and it becomes even harder now that cutting-edge research in LLMs is affordable.

Open Source Innovation Outstrips Corporate Capacity:

Individuals are not constrained by licenses to the same degree as corporations

Much of this innovation is happening on top of leaked model weights

Being your own customer means you understand the use case

Owning the Ecosystem: Letting Open Source Work for Us:

Meta has effectively garnered an entire planet's worth of free labor

Owning the platform where innovation happens cements thought leadership

Tightly controlling models makes open alternatives more attractive

Establishing Leadership in the Open Source Community:

Google should take the lead by cooperating with the broader conversation

Publishing model weights for small ULM variants is necessary

Compromising control over models is inevitable

OpenAI's Closed Policy is a Moot Point:

Until the flow of poached senior researchers is stemmed, secrecy is moot

Open source alternatives can eclipse OpenAI unless they change their stance

We can make the first move

Timeline of Open Source Innovation:

LLaMA is launched by Meta

LLaMA is leaked to the public

Models are fine-tuned on low budgets

LLaMA is run on a MacBook CPU

Vicuna achieves 'parity' with Bard

Models are gathered together in one place

GPT-3 architecture is trained from scratch

LLaMA-Adapter introduces instruction tuning and multimodality

Real humans can't tell the difference between a 13B open model and ChatGPT

Open Assistant launches a dataset for alignment via RLHF

The full alleged document:


Author Bio:

David W. Harvey, CEO of Design By Zen, merges 42 years of IT and high-tech design expertise with groundbreaking innovation. Inventor of the DBZ Comfort Index, the Holistic Objectives algorithm, and the pioneering Social Harmony Ecosystem/Engine (SHE) Zen AI architecture, David also created the world's first intelligent earthquake table (EQ1). Holder of multiple international patents, he pairs professional excellence with a keen interest in exotic cars and simulation engineering. Off-screen, David finds balance in cultivating a Zen garden, reflecting his philosophy of harmony between technology and life.
