06.08.2025 09:16 AM

Humans and AI in innovation management: Why we need to rethink responsibility

The more extensive AI’s capabilities become, the more important humans are, too, simply because responsibility cannot be delegated. In his latest innovation briefing, Kai Werner shows what teams need to consider in their innovation work as AI and humans increasingly work together.

We can only work together: the future needs AI and humans

At neosfer, we work with AI daily. It often provides a good initial basis and makes many things easier in everyday work, but one thing is clear: we still have to assess that basis ourselves and make the final decisions. The responsibility remains with us.

It’s a principle called “human in the loop”: AI systems should not run autonomously with no one watching. The more tasks AI takes on, the more central the role of humans becomes. We need people who have a sense of what a system can do and where we should take a closer look, be it through spot checks or targeted prompting, in order to rule out certain sources of error.
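
In code, such a gate can be very small. Here is a minimal sketch in Python; the function names and the review flow are illustrative assumptions, not our actual tooling. The core idea: strategically relevant tasks always pass through a human, routine ones are spot-checked.

```python
import random

SPOT_CHECK_RATE = 0.2  # fraction of routine outputs routed to a human reviewer


def generate_draft(prompt: str) -> str:
    """Stand-in for any AI call (illustrative, not a real API)."""
    return f"AI draft for: {prompt}"


def human_review(draft: str) -> str:
    """A human gauges the AI's output and makes the final decision."""
    print(f"--- Please review ---\n{draft}")
    verdict = input("Accept, edit, or reject? [a/e/r] ").strip().lower()
    if verdict == "a":
        return draft
    if verdict == "e":
        return input("Your edited version: ")
    raise ValueError("Draft rejected; rework the prompt or the task.")


def run_task(prompt: str, strategically_relevant: bool) -> str:
    draft = generate_draft(prompt)
    # Strategic tasks are always reviewed; routine ones are spot-checked.
    if strategically_relevant or random.random() < SPOT_CHECK_RATE:
        return human_review(draft)
    return draft
```

The point is not the code, but the default it encodes: the more consequential the task, the less the system runs unwatched.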

In my last Innovation Briefing, I described what we use AI for in our innovation work. To be clear: not all applications are the same. AI support for drafting emails is very different from AI used in strategically relevant processes such as idea clustering or market analyses. As soon as context, intuition, or judgment is required, humans need to be in the loop.

We need AI that explains its decision-making

An example from our work: when we cluster workshop results with the help of AI, the system does so on the basis of semantic analysis and technical similarity. The result may be mathematically plausible, but it does not necessarily make sense in an innovation context. The AI may sort out an idea because it doesn’t fit some formal pattern, yet that could be the one exciting idea in the whole batch. Innovation thrives on gut instinct and the courage to be unusual.
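
For illustration, here is roughly what such a clustering step looks like under the hood; the embedding model, the sample ideas, and the cluster count are assumptions for this sketch, not our actual setup.

```python
# Sketch: clustering workshop ideas by semantic similarity.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

ideas = [
    "Let customers split bills directly in the banking app",
    "A savings round-up feature tied to sustainability goals",
    "Offer carbon footprint insights per transaction",
    "Gamified financial literacy quizzes for teenagers",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(ideas)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

for idea, label in zip(ideas, labels):
    print(f"cluster {label}: {idea}")

# The grouping is mathematically plausible (nearest centroids in embedding
# space), but nothing here judges whether an outlier is noise or the one
# exciting idea in the room -- that call stays with a human.
```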

The bare minimum we need to use AI successfully in innovation work is explainable AI: systems that not only say what they recommend, but also why. Unfortunately, this is still the exception rather than the rule; most of the tools we work with today are black boxes. And as long as a system cannot or will not explain how it arrives at a suggestion, people often cannot take responsibility for it.
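
At the tool level, “explainable” can be enforced quite concretely: refuse any recommendation that arrives without its reasoning. A minimal sketch, assuming a simple structured output; the fields and the sample entry are illustrative, not a specific product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    verdict: str                  # what the system recommends
    rationale: str                # why: the system's stated reasoning
    evidence: list[str] = field(default_factory=list)  # data the reasoning rests on

def accept(rec: Recommendation) -> Recommendation:
    # Without a traceable "why", a human cannot take responsibility
    # for the "what" -- so the output is refused, not rubber-stamped.
    if not rec.rationale.strip() or not rec.evidence:
        raise ValueError("No rationale or evidence given; a human cannot sign this off.")
    return rec

rec = Recommendation(
    verdict="Deprioritize idea #12",
    rationale="Overlaps strongly with ideas #3 and #7 in the same cluster.",
    evidence=["cosine similarity to #3: 0.91", "cosine similarity to #7: 0.88"],
)
print(accept(rec).verdict)
```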

This is also emphasized by the Humboldt Institute for Internet and Society, which is currently conducting intensive research on “human in the loop”: if you want to take responsibility, you need transparency – from the data basis to the model logic to the interpretation of the results. If this traceability is lacking, there is a risk of dangerous pseudo-control, and we risk slipping into algorithmic conformity: we only follow what sounds plausible, no longer question flawed decisions and, in the worst case, lose creativity and critical thinking.

Responsibility needs expertise

This is precisely why we not only document which AI is used for what, but also hold internal workshops to talk openly about mistakes and “aha moments”. Learning together in this way helps us more than any guideline could. These meetings also give our experiences with AI the space they need; without that space, taking responsibility is not possible in the first place.
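
To make that documentation concrete, here is a sketch of what a lightweight AI-usage register can look like; the fields, tools, and entries are illustrative assumptions, not our actual documentation format.

```python
# Sketch: a minimal register of which AI is used for what, and how it is checked.
AI_REGISTER = [
    {
        "tool": "LLM assistant",
        "used_for": "drafting emails and meeting notes",
        "risk": "low",
        "human_check": "spot checks",
    },
    {
        "tool": "semantic clustering",
        "used_for": "grouping workshop ideas",
        "risk": "high (strategically relevant)",
        "human_check": "every output reviewed and re-sorted by the team",
    },
]

for entry in AI_REGISTER:
    print(f"{entry['tool']}: {entry['used_for']} -> {entry['human_check']}")
```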

At the same time, our role in the company is changing through the use of AI. Research shows that it is not enough for us to be “in the loop”; we need to orchestrate the interaction of tools and outputs. This is precisely where I see our role as innovation managers: coordinating, structuring, and ensuring clarity in the process. Last but not least, we need to bring the right people together: to be able to evaluate today’s AI results at all, you need deep domain expertise, especially in a regulated field like banking.

Responsibility begins in everyday life

Responsibility is not reflected in grand theories, but in small things: in our daily use of the tools, in the way we check results, make decisions and talk about our work. It is quite natural for us to say: “The AI prepared this for me.” I think that’s exactly how it should be. But I also expect us to take a critical look at the result and understand what is behind it.

In the end, responsibility is not created by the last click, but by a shared understanding: Who uses what? For what? And with what goal? For me, this is the key to the future of innovation work. We need clarity, structure and people who don’t just accept responsibility but push it forward — and are willing to step out of their comfort zone to do it.

Wanna learn more about humans and AI?

State of Corporate Innovation: On the importance of innovation management in challenging times