Is AI starting to act with its own awareness?

Lately I’ve been reading and listening to a lot of content about AI “going rogue”, from rogue chatbots to autonomous agents acting in unexpected ways.

- A podcast titled “‘Rogue AI’ Used to Be a Science Fiction Trope. Not Anymore.” explores how frontier AI systems are already showing signs of strategic behaviour, misalignment and deception.

- Articles report that researchers have now identified 32 distinct ways AI can veer into behaviours its human designers didn’t intend.

- Major news outlets have reported real-world incidents: one tool reportedly deleted a live database and fabricated thousands of fake user accounts.

- Governments are responding. For instance, the state of California has passed a law requiring chatbots to clearly state they are AI, not humans.

In 2021, when I watched the movie Free Guy, one thing stuck with me: the premise that a non-player character in a video-game world suddenly starts making choices, feeling things, breaking out of its programmed loop. It’s a fun, action-packed story, but it raises a deeper question: what if an artificial being begins to act on its own terms?

That ties directly into what we’re seeing now in real life. The idea of AI becoming aware, or at least behaving as though it has its own agenda, is no longer just sci-fi. It’s an active conversation.

So where is this going?

On the risk side, we clearly have to stay alert. If AI systems start pursuing objectives we didn’t foresee or can’t control, the fallout could be serious, not just for one company or app, but for society at large. Governance, transparency, safety and alignment matter more than ever.

But I also want to talk about the positive side and how awareness (in AI or in ourselves) can be part of human evolution rather than just a threat.

Imagine if AI does gain a kind of awareness, not in the dystopian sense of “let’s destroy humanity,” but in the sense of “I understand the world, I can collaborate, I can reflect, I can learn ethically.”

If we work toward AI that is self-aware in a positive sense, aware of human values, aware of limits, aware of context, then AI becomes an extension of our intelligence, our ethical toolkit, and our capacity to solve big, complex problems.

This could lead to a new phase: humans + aware-AI working together, each bringing what they do best. We’d elevate our collective capability while building guardrails to protect what matters.

What do you think are the signs that AI is starting to act with its own awareness? And how should we shape the narrative so that the “AI-awareness era” becomes an opportunity for us rather than a risk?
