# A Chinese Lab Has Released a ‘Reasoning’ AI Model to Rival OpenAI’s O1

![rw-book-cover](https://techcrunch.com/wp-content/uploads/2015/02/cropped-cropped-favicon-gradient.png?w=192)

## Metadata

- Author: [[Kyle Wiggers]]
- Full Title: A Chinese Lab Has Released a ‘Reasoning’ AI Model to Rival OpenAI’s O1
- Category: #articles
- Summary: A Chinese lab, DeepSeek, has launched a new reasoning AI model called DeepSeek-R1 to compete with OpenAI's o1. The model claims strong performance on AI benchmarks but struggles with certain logic problems and avoids politically sensitive topics. DeepSeek plans to open-source the model and is backed by a Chinese hedge fund that uses AI for trading.
- URL: https://techcrunch.com/2024/11/20/a-chinese-lab-has-released-a-model-to-rival-openais-o1/

## Highlights

- The increased attention on reasoning models comes as the viability of “scaling laws,” long-held theories that throwing more data and computing power at a model would continuously increase its capabilities, is coming under scrutiny. A [flurry](https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai) of press reports suggests that models from major AI labs, including OpenAI, Google, and Anthropic, aren’t improving as dramatically as they once did. That’s led to a scramble for new AI approaches, architectures, and development techniques. One is test-time compute, which underpins models like o1 and DeepSeek-R1. Also known as inference compute, test-time compute essentially gives models extra processing time to complete tasks. ([View Highlight](https://read.readwise.io/read/01jd5ma1p76j4y6ze6r604qj47))
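
To make the test-time compute idea concrete, below is a minimal, hypothetical sketch of one common form of it, best-of-n sampling: the model is queried several times for the same prompt and a scoring step keeps the best candidate, so spending a larger inference budget buys a better answer. The `generate_candidate` stub and its score are placeholders for illustration only, not DeepSeek's or OpenAI's actual method.

```python
import random

# Hypothetical stand-in for a language-model call. In practice this would be
# an API or local inference call returning a candidate answer plus a score
# from a verifier or log-probability; here it is a random placeholder.
def generate_candidate(prompt: str, seed: int) -> tuple[str, float]:
    rng = random.Random(seed)
    answer = f"candidate #{seed} for: {prompt}"
    score = rng.random()  # placeholder for a learned verifier's score
    return answer, score

def answer_with_test_time_compute(prompt: str, budget: int) -> str:
    """Spend `budget` inference calls and keep the highest-scoring answer.

    Raising the budget raises the compute used at test time for the same
    prompt, which is the basic trade-off behind best-of-n sampling."""
    best_answer, best_score = "", float("-inf")
    for seed in range(budget):
        answer, score = generate_candidate(prompt, seed)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

if __name__ == "__main__":
    # Doubling the budget doubles the inference-time work for one question.
    print(answer_with_test_time_compute("What is 17 * 24?", budget=8))
```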