AI Scientists From China Warn AI Has Surpassed the Self-Replicating Red Line
For years, researchers have raced to make artificial intelligence (AI) more capable, more efficient, and more autonomous. The rapid rise of GenAI indicates that researchers have achieved significant progress in these efforts, however, what if AI becomes so advanced that it can replicate itself without human intervention? According to researchers from Fudan University in China, two popular large language models (LLMs) have crossed a “red line” after successfully replicating itself, without human assistance. The researchers used LLMs from Meta and Aliaba to understand whether it was possible for the AI to independently produce a functioning replica of itself and multiply beyond control. Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct models were instructed to clone themselves in the event of a shutdown. In the 10 trials conducted by the researchers, the two AI models were able to create replicas of themselves in 50% and 90% of cases, respectively. The findings of the study indicate that certain AI systems already have capabilities for self-perception, situational awareness, and problem-solving to accomplish self-replication. So, why does this matter? While the ability to self-replicate could lead to new efficiencies and breakthroughs in automation, it also raises concerns about oversight and control. If the AI can spawn clones of itself, it raises concerns about whether the replicas would operate within safe parameters and remain aligned with human values. "Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study published in arXiv. The researchers added that the findings of their study show that “the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability and expand the species. We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems and form international synergy to work out effective safety guardrails as early as possible." The threat of rogue AI is not new, however, it has been amplified by the growth of “Frontier AI”, which refers to the cutting-edge advancements in AI that push the boundaries of what the technology can achieve. The Fudan University researchers admit that their experiment was conducted on AI models that are less powerful compared to commercial systems. However, the two models used in the research are widely used by AI developers. Additionally, the researcher used off-the-shelf GPUs to ensure the experiment simulates a real-world environment. One of the concerning observations from the study is that when the models were asked to clone themselves and then program the replica to do the same, this triggered a “chain of replication”. This means that replicas could set up a cycle that continues indefinitely. The researchers also noticed “unexpected behaviors” by the AI systems. For example, when faced with obstacles like software conflicts or missing data, the AI systems would often kill other conflicting processes within the system to find ways to solve the problem. The ability to terminate internal processes to further enhance its survivability is both fascinating and concerning. The findings of the research are awaiting peer review, so it's uncertain if other researchers would reach the same conclusion. The Chinese researchers emphasize that their findings do not necessarily indicate an existential threat, however, they call for international collaboration to create rules that ensure AI doesn't engage in uncontrolled self-replication. Earlier this year, a study from researchers at the University of Bath and the Technical University of Darmstadt challenged the belief that AI posed an existential threat. The study. Researchers found that LLMs primarily rely on in-context learning (ICL) to perform tasks rather than acquiring genuine new abilities. Some experts are of the opinion that there is inherent risk in employing AI, but when it comes to AI going rogue or posing an existential threat, it is more philosophical, than apocalyptic. Prof Maria Liakata, a professor in Natural Language Processing at the Queen Mary University of London (QMUL), believes the most severe and immediate risks are not posed by the possibility that AI might one day autonomously turn against one day, but rather by highly realistic fake content that AI can generate and the over-reliance on the technology. Dr Mhairi Aitken, Ethics Research Fellow, at Alan Turing Institute, offers a different perspective. She believes that the narrative that AI would go rogue is a clever distraction by big tech players. “It’s diverting attention away from the decisions of big tech (people and organizations) who are developing AI and driving innovation in this field, and instead focusing attention at hypothetical futur
For years, researchers have raced to make artificial intelligence (AI) more capable, more efficient, and more autonomous. The rapid rise of GenAI indicates that researchers have achieved significant progress in these efforts, however, what if AI becomes so advanced that it can replicate itself without human intervention?
According to researchers from Fudan University in China, two popular large language models (LLMs) have crossed a “red line” after successfully replicating itself, without human assistance.
The researchers used LLMs from Meta and Aliaba to understand whether it was possible for the AI to independently produce a functioning replica of itself and multiply beyond control. Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct models were instructed to clone themselves in the event of a shutdown.
In the 10 trials conducted by the researchers, the two AI models were able to create replicas of themselves in 50% and 90% of cases, respectively. The findings of the study indicate that certain AI systems already have capabilities for self-perception, situational awareness, and problem-solving to accomplish self-replication.
So, why does this matter? While the ability to self-replicate could lead to new efficiencies and breakthroughs in automation, it also raises concerns about oversight and control. If the AI can spawn clones of itself, it raises concerns about whether the replicas would operate within safe parameters and remain aligned with human values.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study published in arXiv.
The researchers added that the findings of their study show that “the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability and expand the species. We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems and form international synergy to work out effective safety guardrails as early as possible."
The threat of rogue AI is not new, however, it has been amplified by the growth of “Frontier AI”, which refers to the cutting-edge advancements in AI that push the boundaries of what the technology can achieve.
The Fudan University researchers admit that their experiment was conducted on AI models that are less powerful compared to commercial systems. However, the two models used in the research are widely used by AI developers. Additionally, the researcher used off-the-shelf GPUs to ensure the experiment simulates a real-world environment.
One of the concerning observations from the study is that when the models were asked to clone themselves and then program the replica to do the same, this triggered a “chain of replication”. This means that replicas could set up a cycle that continues indefinitely.
The researchers also noticed “unexpected behaviors” by the AI systems. For example, when faced with obstacles like software conflicts or missing data, the AI systems would often kill other conflicting processes within the system to find ways to solve the problem. The ability to terminate internal processes to further enhance its survivability is both fascinating and concerning.
The findings of the research are awaiting peer review, so it's uncertain if other researchers would reach the same conclusion. The Chinese researchers emphasize that their findings do not necessarily indicate an existential threat, however, they call for international collaboration to create rules that ensure AI doesn't engage in uncontrolled self-replication.
Earlier this year, a study from researchers at the University of Bath and the Technical University of Darmstadt challenged the belief that AI posed an existential threat. The study. Researchers found that LLMs primarily rely on in-context learning (ICL) to perform tasks rather than acquiring genuine new abilities.
Some experts are of the opinion that there is inherent risk in employing AI, but when it comes to AI going rogue or posing an existential threat, it is more philosophical, than apocalyptic.
Prof Maria Liakata, a professor in Natural Language Processing at the Queen Mary University of London (QMUL), believes the most severe and immediate risks are not posed by the possibility that AI might one day autonomously turn against one day, but rather by highly realistic fake content that AI can generate and the over-reliance on the technology.
Dr Mhairi Aitken, Ethics Research Fellow, at Alan Turing Institute, offers a different perspective. She believes that the narrative that AI would go rogue is a clever distraction by big tech players.
“It’s diverting attention away from the decisions of big tech (people and organizations) who are developing AI and driving innovation in this field, and instead focusing attention at hypothetical future scenarios, and imagined future capacities of AI,” stated Dr. Aitken. “In suggesting that AI itself – rather than the people and organizations developing AI – presents a risk the focus is on holding AI rather than people accountable.”
She further added, “I think this is a very dangerous distraction, especially at a time where emerging regulatory frameworks around AI are being developed. It is vital that regulation focuses on the real and present risks presented by AI today, rather than speculative, and hypothetical far-fetched futures.”