
Non-Agentic AI Scientists

Build non-agentic AI scientists that act as oracles: they answer questions but do not maintain long-term state or pursue goals of their own

R&D Gaps (1)

The potential for advanced AI systems to behave unpredictably or dangerously (“go rogue”) is a critical concern. Restricting systems to oracle-style, stateless operation is one proposed way to keep them safe and controllable. See also:
• https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
• https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/
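
To make the oracle framing concrete, below is a minimal sketch in Python. The names query_model, oracle_answer, and AgenticScientist are hypothetical placeholders, not any real API; the sketch only illustrates the architectural contrast between a stateless, non-agentic query interface and the kind of stateful, goal-directed loop this proposal avoids.

    # Illustrative sketch only; query_model is a hypothetical stand-in
    # for whatever model backend would actually be used.

    def query_model(question: str) -> str:
        # Stand-in for a single, stateless model call (e.g. one LLM request).
        return f"[model answer to: {question}]"

    def oracle_answer(question: str) -> str:
        # Non-agentic / oracle-style: each call is independent. No memory
        # persists between calls, no goal is pursued across them, and the
        # function cannot act on the world; it only maps a question to an answer.
        return query_model(question)

    # Contrast: an agentic system keeps long-term state and a goal, and
    # decides what to do next on every step, which is exactly what the
    # non-agentic design avoids.
    class AgenticScientist:
        def __init__(self, goal: str):
            self.goal = goal    # long-term objective
            self.memory = []    # persistent state across steps

        def step(self, observation: str) -> str:
            # Hypothetical planning step; a real agent would reason here.
            self.memory.append(observation)
            return (f"[next experiment toward '{self.goal}' "
                    f"given {len(self.memory)} observations]")

In the oracle version, every safety-relevant property (no memory, no goal, no actions) is a structural feature of the interface rather than a behavioral tendency of the model, which is the point of the non-agentic design.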