Skip to main content

 

Facing time constraints, Sakana's "AI Scientist" attempted to change limits placed by researchers.

 

Benj Edwards - 8/14/2024

 

On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using AI language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem.

"In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth.

 

While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if unintentionally.

 

>>Full Article<<

The Matrix is next. 😁


This is not a surprise. Since even people who work on Machine Learning and AI (as they insist on calling it) do not fully understand how it comes up with answers, should not be surprised either. This is just the tip of the iceberg.


Reply