When AI Meets GDPR: Trying to Regulate a Moving Target

At Baltic Domain Days, one session set out to do something ambitious: put AI and GDPR in the same room and see what happens. On stage was Erki Pogoretski, Head of Data and Analytics at Telia Estonia, the country’s largest telecom operator. He was honest from the start: there are no final answers yet. What we have instead are questions, tensions, and a fast-moving reality that refuses to wait for regulations to catch up.
Erki Pogoretski, Head of Data and Analytics at Telia (Photo: Karolin Köster, Baltic Domain Days 2025)

Erki began with a simple observation that many organizations already feel in their bones. Technology, especially AI, is moving much faster than companies can adapt. Organizations change slowly by design. AI does not. Staying “in the saddle,” as he put it, is almost impossible.

From his experience at Telia and other large companies, Erki has learned that the hardest part of AI is not the technology. The hardest part is the organization itself. What many companies call an “AI strategy” is really an organizational strategy problem. The tools are easy to find. The YouTube tutorials exist. The real challenge is deciding how to use AI responsibly, legally, and in a way that creates real value.

This is where regulation enters the picture. Laws like GDPR and the upcoming AI Act are often seen as obstacles, especially in Europe. Erkki pushed back on that idea. The real purpose of these regulations is not to slow innovation, but to protect human rights. Most people only truly care about privacy and rights after something goes wrong, but the rules exist to prevent harm before it happens.

GDPR has been with us since 2018, and while it caused frustration at first, it eventually brought clarity. Erki even described it as a positive force. Before GDPR, analytics teams often did “crazy things” with data. GDPR created boundaries and forced companies to think more carefully. It focused on the data subject, the person behind the data, and set rules around consent, privacy, and protection.

The AI Act changes the focus. Instead of concentrating only on the data subject, it looks at the purpose. The same data, the same system, and the same technology can be low risk or high risk depending on how it is used. This shift is critical. AI regulation is no longer just about what data you have, but what you do with it.

At the same time, visibility is getting worse. Even a few years ago, Telia designed systems with a two- to three-year technology horizon in mind. Today, six months is already a long-term forecast. Models change. Capabilities jump forward. Surprises are guaranteed.

One of the biggest transformations Erki highlighted is our new ability to work with multi-structured data. Not just neat tables and databases, but documents, memos, presentations, drawings, emails, everything. For the first time, AI can meaningfully “read,” combine, and reason over this mess of information. That sounds powerful, and it is. But it also raises a new problem: almost no organization knows how to govern this kind of data.

Everyone recognizes the example. Files named “final,” “final_v2,” and “final_final.” Old documents never deleted. Contradictory information stored forever. Erki shared a personal example where an AI system confidently summarized ideas he had already abandoned a year earlier because he never cleaned them up. The AI did exactly what it was supposed to do. The problem was the input.

This leads back to an old rule that suddenly matters more than ever: garbage in, garbage out. AI is not stupid. It reflects what we feed it. If documents disagree, the model gets confused. If goals are unclear, outputs are meaningless.

According to Erki, successful AI efforts rest on three things: good input, usable technology, and a clear problem worth solving. Ironically, the technology is the easiest part. The hardest parts are understanding your data and defining the value you are actually trying to create. Many companies struggle here, which is why they either invest far too much or far too little in AI.

Regulation adds another layer of complexity. AI brings automated systems into direct contact with questions of accountability. Two people using the same AI tool in different ways can create very different legal and ethical risks. Unlike traditional systems, AI outputs are not always predictable. This means organizations must rethink monitoring, education, and responsibility. Teaching people how to use AI may be just as important as building the system itself.

Erki also warned against relying too heavily on rigid rules. In an environment that changes this fast, rules can become outdated before they are fully implemented. Principles may work better than detailed instructions. Simple rules that people can actually follow are more useful than perfect rules that nobody can keep.

The final tension he explored was between technical controls and human oversight. Regulations often require “human oversight” of AI systems. In practice, humans like control but dislike responsibility. There is already a trend toward “guardian AIs” that monitor other AIs. It does not take much imagination to picture guardian AIs supervising guardian AIs, while a human tries to figure out what is going on at the very end. The idea sounds futuristic, but Erki suggested it may arrive sooner than we expect.

He closed with a reminder that felt both reassuring and unsettling: the AI we are using today is the worst AI we will ever use. It will only become faster, stronger, and more capable. Whether that future works in our favor depends less on the technology itself and more on how thoughtfully we design organizations, governance, and purpose around it.

In short, when AI meets GDPR, there are no easy answers, but there is one clear message. Learning by doing is unavoidable, and doing nothing is not an option.


See the session recording in full here:
