Never before have organisations had access to so much computational power, data, modelling capability and predictive intelligence. At the same time, many seem increasingly uncertain about direction, responsibility and meaning.
This did not happen overnight. It has been building for years.
Innovation conversations have gradually moved away from how people coordinate, learn and build trust together, and toward tools, platforms and speed. AI has accelerated this shift. What used to be framed as organisational development is now framed as optimisation. What once required dialogue is now often reduced to a dashboard.
As Shoshana Zuboff reminds us, the real shift is rarely the technology itself, but the logic that comes with it.
“What is new is not the technology, but the logic of accumulation that comes with it.”
A more uncomfortable observation follows from this. Much of what we currently call innovation may look impressive, yet still leave organisations strangely unchanged. In practice, it often becomes the automation of existing assumptions rather than a challenge to them.
From a social constructivist perspective, this matters. Technologies do not arrive as neutral forces that reshape organisations on their own. They are interpreted, framed and enacted through existing norms, incentives and power relations. AI does not transform organisations by itself. It reveals them.
AI Is Not Neutral. It Carries Organisational Memory
One of the most persistent myths in AI discourse is neutrality. Algorithms are often described as objective, rational or unbiased, as if they sit outside social context. In reality, AI systems operationalise the past.
They carry traces of earlier decisions, dominant values and unspoken beliefs about productivity, performance and worth. They scale what an organisation already believes about work and people. When AI is introduced without revisiting these beliefs, it does not create innovation. It hardens them.
We see this clearly in the dominance of major enterprise systems like SAP and Workday. Many of these platforms, often originating from the US, arrive with pre-defined ontologies and logic structures that prioritise performance metrics over value creation. They are not just tools; they are codified management philosophies that standardise work in ways that often make social nuance invisible.
Lucy Suchman captured this dynamic long before today’s surge in AI interest.
“Systems are never neutral tools. They are built within, and reproduce, particular forms of social order.”
This helps explain why many AI-driven initiatives feel both advanced and oddly empty. They move fast, yet fail to build trust. They generate insights, yet do not shift behaviour. They promise efficiency, yet leave people more tired than before.
From a human systems perspective, this is not surprising. Innovation that bypasses shared meaning rarely takes root. It may deliver outputs. It does not build capability.
The Quiet Rise of Performative Innovation
Across sectors, a familiar pattern is emerging. Hackathons that never connect to core work. Pilots that remain pilots. Strategy decks that circulate without ownership. AI tools displayed as signals of modernity rather than instruments of learning.
These performances serve a purpose. They reassure boards and leadership teams. They signal relevance. They reduce anxiety about falling behind.
Yet they also mask a deeper issue. Many organisations have not done the relational work required to innovate responsibly in an AI-shaped world. Decision rights remain unclear. Ethical boundaries are assumed rather than practised. Work itself has not been redesigned. Collective sense-making is treated as optional.
Wanda Orlikowski’s work still resonates here.
“Technology does not determine organisational change. People do, through their recurrent practices.”
When this work is avoided, technology is asked to compensate for structural ambiguity. Innovation becomes a substitute for governance rather than an expression of it.
From a constructivist lens, innovation is not something organisations simply execute. It is something they continuously agree on how to do. When that agreement is weak, tools step in to fill the gap.
Human Capability Is Not a Soft Layer. It Is the Constraint
Another observation often left unspoken is this. The limiting factor in most AI-enabled innovation is not data quality or model sophistication. It is human capability.
This includes the ability to make sound judgements under uncertainty. To collaborate across boundaries. To surface tensions without blame. To learn without defensiveness. To exercise ethical restraint.
These capabilities are not abstract ideals. They are socially shaped through leadership, culture and lived experience. They cannot be downloaded or installed.
Recent OECD work reflects this reality, noting that organisational maturity and governance often matter as much as technical performance in AI adoption.
Still, many innovation narratives treat human factors as secondary. Culture is addressed after tooling. Skills appear late in the process. Ethics becomes a checklist rather than a daily practice.
The result is a familiar mismatch. Highly advanced systems placed into organisations without the social infrastructure to use them well. Innovation becomes fragile. Progress depends on a few individuals rather than shared capacity.
Innovation Is a Governance Question Wearing a Technology Mask
If innovation feels harder today despite better tools, it is because the work has shifted. The challenge is no longer generating ideas. It is creating alignment.
AI compresses time. It multiplies options. It exposes inconsistencies. This places pressure on governance models designed for slower cycles and clearer hierarchies.
The ISO 56000 series makes a quiet but important point by defining innovation as a managed capability rather than a creative moment. Innovation requires intent, structure and accountability. Not just ideas or technology.
Many organisations respond by accelerating further. More tools. More dashboards. More initiatives.
A more constructive response may be to slow down where it matters most. At the level of meaning. To ask what kind of organisation is being shaped through these choices. To clarify what is valued, what is protected and what is negotiable.
Hannah Arendt once observed that revolutions often become conservative the moment reflection disappears. Innovation is not immune to the same risk.
Toward a More Human Practice of Innovation
A more human approach to innovation does not reject AI. It repositions it.
AI becomes a mirror rather than a driver. A support for reflection rather than a substitute for leadership. A tool for learning rather than a performance signal.
This requires a different posture. One that treats innovation as an ongoing social practice. One that invests as much in shared understanding as in solution building. One that recognises that the future of innovation is not only about what organisations can do, but about who they are becoming together.
The uncomfortable truth is simple. AI will not resolve unresolved tensions. It will amplify them.
The hopeful truth is equally simple. Organisations willing to engage with innovation as a deeply human, relational and ethical endeavour may find that AI becomes not a threat, but a catalyst for maturity.
That may be the kind of innovation this moment is quietly asking for.
Selected references
- Zuboff, S. The Age of Surveillance Capitalism
- Suchman, L. Human–Machine Reconfigurations
- Orlikowski, W. Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations
- OECD. AI in the Workplace
- ISO 56000. Innovation management: Fundamentals and vocabulary
