
The AI staff: treating AI systems as employees

How to boost innovation by taking responsibility

Regulations are often seen as a constraint on innovation, especially within the rapidly expanding field of AI. However, regulations can in fact serve as a powerful incentive to shape innovation more carefully and effectively. In this article, we argue that AI should not be treated as an anonymous tool, but as an employee: with responsibility, guidance, and trust. Only in this way can organisations innovate in a safe, transparent and human-centred manner, and be ready for the future.

Regulation as an opportunity for responsible innovation

What if regulation doesn’t hinder innovation but actually accelerates it? Recently, 44 CEOs sounded the alarm with a call to “Stop the Clock,” requesting a two-year delay of the AI Act out of fear that it would stifle innovation. That delay now seems unlikely. We believe regulation is not a brake, but rather an opportunity to innovate more carefully.

Every day, we see that AI compliance requires much more than simply meeting the requirements of the AI Act. Organisations cannot hide behind a system; they remain fully responsible for what their AI systems do. It is precisely an active approach, in which risks and impacts are considered upfront, that accelerates safe and scalable innovation.

The rules that truly matter have long been in place: from anti-discrimination laws to environmental legislation and consumer protection. These rules apply just as much to AI as to any other technology. The AI Act complements these existing frameworks and mainly serves to add the finishing touches. Whether the law comes into effect tomorrow or in two years, the responsibility is already there.

This principle of taking responsibility upfront fits perfectly with the European approach: prevention is better than cure. This is not slowness, but carefulness. In the US, speed often takes precedence, with legal claims as the result. Precisely now, our European approach is more valuable than ever. Don’t be distracted by the political noise around the AI Act. Build safely, transparently, and in a human-centred way, and you will be prepared for the future.

The AI staff: treating AI systems as employees

What if we no longer see AI as an anonymous tool, but as an employee? One that makes decisions, needs context, and acts on behalf of the organisation. Then you treat AI accordingly: you train it, guide it, and correct it when necessary. Not to slow down innovation, but to accelerate it. Because if you trust and properly set up your systems, you can fully harness the power of AI.

This also applies to responsibility. An organisation is largely responsible for what its employees do, even when those employees make ‘mistakes’, whether that means incorrect offers, wrong advice, or discrimination during recruitment. The organisation remains responsible for such undesirable outcomes, regardless of whether they are caused by an employee or an AI system.

Treat AI not as a tool, but as an employee who makes decisions and takes responsibility.

Carefulness over speed

AI is often seen as a threat to jobs, but we see mainly opportunities. AI contributes to solutions that truly advance society: from early detection of breast cancer to reducing food waste and combating loneliness. Such applications require careful development that considers risks for users and society.

Yet in practice, many organisations rush, treating AI adoption as a sprint. The pressure to keep up is high, and pilots are launched too quickly without proper consideration of their impact. The solution? AI compliance by design: building from day one with your own values as the starting point, so that protection, legislation, and innovation go hand in hand. It’s a marathon.

Whoever creates or acquires AI systems today must also take responsibility for ensuring that these systems comply with laws and regulations. In return, the organisation gains trust and the freedom to innovate better. Many organisations already work with ethics committees, bias tests, and open communication about AI. It can be done.

The marathon has begun; let’s run it together.
