Open AI Systems Lag Behind Proprietary and Closed Models

A big debate over open versus closed AI systems is shaping up to be a central part of how new advances in reasoning AI are governed.

We're at a point where these technologies are starting to be able to think through step-by-step processes, with error correction and other features that resemble human reasoning like never before. I wrote recently about chain of thought (CoT) systems, where the model shows you its work and what it's thinking as it moves through a multi-step process.
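To make that concrete, here's a minimal sketch of what a chain-of-thought prompt can look like. The wording and the sample response are illustrative assumptions, not any particular lab's method:

```python
# A minimal sketch of chain-of-thought (CoT) prompting: the prompt asks the
# model to expose its intermediate steps before giving a final answer. The
# exact phrasing is one common pattern, not a specific vendor's technique.
prompt = (
    "A train leaves at 2:15 pm and arrives at 5:45 pm. How long is the trip?\n"
    "Think through the problem step by step, then state the final answer."
)

# A CoT-style response interleaves visible reasoning with the conclusion:
#   "From 2:15 to 5:15 is 3 hours; 5:15 to 5:45 adds 30 minutes.
#    Final answer: 3 hours and 30 minutes."
```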

How will all of this new power reach the tech market? Some of the field's top people are now asking how open and closed models stack up against each other, and why the answer matters.

What Are Open and Closed AI Models?

Many of us are familiar with the term open source. It was a common discussion point around cloud systems and all kinds of other emerging tools and software products.

The common definition of open source software was software whose source code is publicly available. Software that wasn't open source was licensed and proprietary. Open source software often has a thriving community around it - people who have viewed the source code, altered it in some way, or picked up tips on how to use it by delving into the code itself.

The definition of open and closed for AI is a little different. Epoch AI released a report defining open AI models as those with downloadable model weights, "where closed systems are either unreleased, or can only be accessed with an API or hosted service."

The similarity, though, is that if you have the weights for the model, you have a window into its inner workings - its source code, if you will.
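In code, the difference looks something like the sketch below. This assumes the Hugging Face transformers and openai Python libraries, and the model names are just illustrative examples:

```python
# Open model: the weights are downloadable, so you can load, inspect, and
# fine-tune them locally. (Illustrative model name; many open models are
# gated behind a license agreement.)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
print(sum(p.numel() for p in model.parameters()))  # direct access to every weight

# Closed model: you never see the weights. You send a request to a hosted
# endpoint and get text back, under whatever access controls the vendor sets.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)  # output only; no weight access
```

The first path is why researchers can audit or fine-tune open models; the second is what the report means by "structured access."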

Open and Closed AI Model Development: The Findings

What Epoch AI found is that open models trail closed models by anywhere from 5 to 22 months on average. Meta's Llama was touted as the top open model, while OpenAI's closed models sit near the front of the capability curve: o1 is already demonstrating chain of thought reasoning, and Orion is likely to drop early next year.

The report's authors note that makers of systems like ChatGPT have an incentive to keep them closed. Here it is, verbatim:

"Businesses that sell access to models like ChatGPT have a commercial incentive to keep models private. Industry AI labs have responded to these developments in various ways. Some models remain unreleased, such as Google DeepMind's Chinchilla model. Alternatively, models like GPT-4o have structured access, controlling how users can interact with the model."

In some ways, it's the old scenario: the largest companies are best positioned to build powerful systems, and are unlikely to make them open source, even partially.

However, there's also a danger component to this in terms of public governance. Some will argue that if you leave these highly powerful systems too open, hackers and bad actors will use them to the detriment of the rest of us.

"Publishing models, code, and datasets enables innovation and external scrutiny, but is irrevocable and risks misuse if a model's safeguards are bypassed," write the report authors. "There is an ongoing debate about whether this trade-off is acceptable, or avoidable."

Risk of Opening Models

That itself is a powerful argument for closed systems administered by committees, boards, or panels, as opposed to systems released into the world at large without any gatekeeping.

This is what OpenAI co-founder Ilya Sutskever reportedly wrote on the subject:

"As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science."

In the end, the report describes an "unclear picture" of where open models are headed, again pointing to Meta as an important player on the open side.

In some ways, as Sutskever suggests, it makes sense to assume that models will be closed or gated in some way, given the rapid advances in reasoning capabilities. We're just now seeing agentic AI take on significantly complex tasks, and that is raising its own concerns in the regulatory community.

It's likely that companies will continue to release limited models to the public while keeping the most powerful aspects of the technology in-house, playing those cards close to the vest.
