OpenAI’s head of robotics, Caitlin Kalinowski, has resigned and publicly explained why. In a LinkedIn post, she said AI can support national security, but she drew a hard line at surveillance of Americans without judicial oversight and lethal autonomy without human authorization, arguing those issues needed more deliberation.
Her exit lands in the middle of broader criticism over OpenAI’s Pentagon work. OpenAI CEO Sam Altman has acknowledged the company’s rushed Defense Department agreement “looked opportunistic and sloppy,” which added fuel to internal and public backlash.
Why this resignation matters for AI and robotics
Robotics is where AI stops being “software” and starts influencing the physical world. OpenAI’s robotics job listings describe a push toward general-purpose robotics in dynamic real-world settings, which raises higher-stakes questions about safety, control, and authorization.
Kalinowski’s statement doesn’t claim OpenAI built autonomous weapons. Instead, it flags governance: who decides boundaries, how quickly leadership moves, and what safeguards exist before powerful systems get deployed into sensitive environments.
Safer AI is also more sustainable AI
Responsible robotics governance also supports sustainability. Clear limits and stronger oversight can reduce costly failures, recalls, and hardware churn. Better safety processes also cut "redo cycles" in development: less retraining, fewer emergency patches, and lower data-center energy use. In short, accountable AI can be both safer and more resource-efficient.