Chinese tech company DeepSeek is once again under scrutiny, this time over allegations that it may have used Google’s Gemini AI to train its latest model. Previously, the company faced criticism for allegedly using ChatGPT outputs to train its earlier versions.
Recently, DeepSeek introduced an updated version of its AI model, R1, praised for its impressive reasoning abilities. However, questions have arisen regarding the dataset used to train the model. According to Australian developer Sam Paech, the model’s outputs bear an uncanny resemblance to those produced by Gemini 2.5 Pro, including similar word choices and phrasing. While Paech admits this is not definitive proof, he emphasizes the striking similarities.
A Gemini-like Reasoning Style

Another developer pointed out that DeepSeek's model exhibits a "reasoning style" strikingly similar to Gemini's. The observation is based on the intermediate steps the model produces while working through problems, a pattern that closely mirrors how Gemini approaches reasoning tasks.
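Neither observer has published a formal methodology, and the claims rest on informal, side-by-side reading of outputs. Purely as a rough illustration of the kind of lexical comparison involved, the sketch below scores two text snippets by word-bigram overlap. This is not how Paech or anyone else actually analyzed the models; the sample outputs are invented placeholders.

```python
# Illustrative sketch only: a crude way to quantify phrasing overlap between two
# model outputs using word-bigram Jaccard similarity. The sample texts below are
# invented placeholders, not real DeepSeek or Gemini outputs.

def ngrams(text: str, n: int = 2) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 2) -> float:
    """Jaccard overlap of the two texts' word n-gram sets (0.0 to 1.0)."""
    grams_a, grams_b = ngrams(a, n), ngrams(b, n)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

# Hypothetical example outputs from two different models.
output_a = "Let us break the problem into smaller steps and verify each one."
output_b = "Let us break the problem into smaller parts and check each one."

print(f"Bigram Jaccard similarity: {jaccard_similarity(output_a, output_b):.2f}")
```

In practice, researchers comparing models lean on far richer signals than raw n-gram overlap, such as embedding similarity or distinctive "slop" phrases, but the basic idea of measuring how often two models reach for the same wording is the same.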
This isn’t the first time DeepSeek has faced such allegations. In an earlier controversy, developers noted that DeepSeek’s V3 model would occasionally refer to itself as “ChatGPT,” raising concerns that it may have been trained on OpenAI’s proprietary dialogue data.
Training a model on the outputs of a more capable AI, a practice commonly known as distillation, isn't a new tactic in the AI community. Most providers, OpenAI in particular, explicitly prohibit using their outputs to build competing models in their terms of service. Still, enforcing those restrictions appears to be increasingly difficult.
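For readers unfamiliar with the practice, distillation at its simplest means harvesting a stronger "teacher" model's responses and using them as training data for a smaller "student" model. The sketch below shows the data-collection step under that assumption; query_teacher_model is a hypothetical placeholder rather than a real API, and, as noted above, doing this against a commercial model's output typically violates its terms of service.

```python
# Minimal sketch of output distillation: collect (prompt, response) pairs from a
# stronger "teacher" model to later fine-tune a smaller "student" model.
# query_teacher_model is a hypothetical placeholder, not a real API; providers
# such as OpenAI prohibit using their outputs this way in their terms of service.
import json

def query_teacher_model(prompt: str) -> str:
    """Placeholder for a call to a stronger model's API."""
    raise NotImplementedError("Replace with an actual (and permitted) model call.")

def build_distillation_set(prompts: list[str], path: str) -> None:
    """Write teacher responses as JSONL fine-tuning examples for a student model."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_teacher_model(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")

# Hypothetical usage:
# build_distillation_set(["Explain why the sky is blue."], "distill_data.jsonl")
```

The difficulty for providers is that the resulting dataset looks like any other fine-tuning corpus, which is part of why accusations like the ones facing DeepSeek tend to rest on stylistic fingerprints rather than direct evidence.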