# Current AI Scaling Laws Are Showing Diminishing Returns, Forcing AI Labs to Change Course

## Metadata
- Author: [[Maxwell Zeff]]
- Full Title: Current AI Scaling Laws Are Showing Diminishing Returns, Forcing AI Labs to Change Course
- Category: #articles
- Summary: AI labs are facing diminishing returns from traditional scaling laws, leading them to seek new methods for improving model capabilities. The emerging focus is on "test-time compute," which lets models use more computing resources during inference, potentially improving performance. This shift signals a change in how AI advances will be pursued in the coming years.
- URL: https://techcrunch.com/2024/11/20/ai-scaling-laws-are-showing-diminishing-returns-forcing-ai-labs-to-change-course/
## Highlights
- “When you’ve read a million reviews on Yelp, maybe the next reviews on Yelp don’t give you that much,” said Nishihara, referring to the limitations of scaling data. “But that’s pretraining. The methodology around post-training, I would say, is quite immature and has a lot of room left to improve.” ([View Highlight](https://read.readwise.io/read/01jd7wnsv08a424dfzd06d24pg))
- But trends suggest exponential growth is not possible by simply using more GPUs with existing strategies, so new methods are suddenly getting more attention. ([View Highlight](https://read.readwise.io/read/01jd7wpcaac2jsx0rqvncw9vpt))
- OpenAI improved its GPT models largely through traditional scaling laws: more data, more power during pretraining. But now that method reportedly isn’t gaining them much. The o1 framework of models relies on a new concept, test-time compute, so called because the computing resources are used after a prompt, not before. The technique hasn’t been explored much yet in the context of neural networks, but is already showing promise. ([View Highlight](https://read.readwise.io/read/01jd7wpy90e4czg1yqdv0f6vyh))
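
To make the idea of test-time compute concrete, below is a minimal best-of-N sampling sketch in Python: rather than returning the model's first answer, the system spends extra compute after the prompt, sampling several candidate answers and keeping the one a verifier scores highest. This is only one illustrative form of test-time compute; `generate_candidate` and `score_candidate` are hypothetical stand-ins, not OpenAI's o1 method, which is not publicly documented.

```python
import random

def generate_candidate(prompt: str, temperature: float = 1.0) -> str:
    """Stand-in for one sampled model completion (a real system would call an LLM)."""
    return f"answer to {prompt!r} (draft #{random.randint(0, 9999)})"

def score_candidate(prompt: str, candidate: str) -> float:
    """Stand-in for a verifier or reward model that rates a candidate answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend more inference-time compute: sample n candidates, return the highest-scoring one."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score_candidate(prompt, c))

if __name__ == "__main__":
    # Raising n trades extra compute after the prompt for (potentially) better answers,
    # in contrast to pretraining-time scaling, which spends compute before any prompt arrives.
    print(best_of_n("What is 17 * 24?", n=8))
```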