Microsoft AI CEO Mustafa Suleyman recently painted a striking picture of AI's near future: one where systems train at massive computational scales and develop capabilities once thought to be decades away. His comments have become a focal point in tech circles, highlighting how AI development is outpacing the rules meant to govern it. Suleyman envisions advanced systems that can improve themselves, manage their own resources, and even create their own performance tests.
Policy experts are now pushing for stronger government oversight, including possible taxes on the extreme computing power used for AI training. While Suleyman did not mention taxes directly, analysts worry that without intervention, the result could be a dangerous concentration of power, market instability, and the collapse of smaller AI labs. The biggest risk? That only a handful of well-funded institutions could afford these massive training runs, creating monopolies and stifling innovation.
We can already imagine a time, just a few years ahead, when AI systems trained in gigawatt-scale compute runs are capable of self-improvement, setting their own goals, managing their own resources, and writing their own evals. Suleyman's warning was blunt: this marks a turning point, and traditional human oversight may not be enough once systems begin upgrading themselves autonomously.
His message will likely shape policy discussions on national security, AI governance, and global competition. The big question now: can governments put real safeguards in place before self-improving AI becomes reality? With gigawatt-scale computing approaching fast, society needs to prepare for AI systems that operate beyond current institutional control.
Marina Lyubimova