
GDPR and AI: Why 'We'll Be Compliant Eventually' Isn't Good Enough

By amaiko · 7 min read

In May 2023, the Irish Data Protection Commission fined Meta EUR 1.2 billion for transferring European user data to the United States. One fine. One transfer mechanism. Two years later, in May 2025, TikTok got hit with EUR 530 million for moving EEA data to China. The violation was the same: personal data left the EU without adequate protection.

These aren’t warning shots. The warning shots were years ago.

If your company is rolling out AI tools that route data through US servers, the question isn’t whether regulators will care. It’s when they’ll get around to you.

The Regulatory Walls Are Going Up

The EU AI Act entered into force on August 1, 2024. This is not a proposal or a draft — it’s binding law with a phased enforcement timeline.

The first prohibitions took effect on February 2, 2025: social scoring, manipulative AI practices, and indiscriminate facial recognition are now illegal. Obligations for general-purpose AI models — the category that covers every major LLM — became enforceable on August 2, 2025. Transparency requirements and high-risk AI rules kick in on August 2, 2026. High-risk systems embedded in regulated products follow on August 2, 2027.

The penalty structure is aggressive. Violations involving prohibited AI practices carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher. General-purpose AI non-compliance: up to EUR 15 million or 3%. Procedural violations: EUR 7.5 million or 1%.
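To make the "whichever is higher" mechanics concrete, here is a quick Python sketch. The tier figures come from the Act itself; the EUR 2 billion turnover is a made-up example:

```python
def fine_ceiling(flat_cap_eur, turnover_share, global_turnover_eur):
    """EU AI Act penalty ceiling: the flat cap or the turnover-based
    cap, whichever is higher."""
    return max(flat_cap_eur, turnover_share * global_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
turnover = 2_000_000_000
print(fine_ceiling(35_000_000, 0.07, turnover))  # prohibited practices: EUR 140M
print(fine_ceiling(15_000_000, 0.03, turnover))  # GPAI non-compliance: EUR 60M
print(fine_ceiling(7_500_000, 0.01, turnover))   # procedural violations: EUR 20M
```

At that size the turnover-based cap wins every tier, which is the point: for large companies the flat caps are floors, not ceilings.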

This sits on top of GDPR. Two regulatory frameworks, one set of data. Companies deploying AI tools in the EU now face parallel compliance obligations — and regulators on both sides are actively enforcing.

GDPR Enforcement Has Teeth

According to DLA Piper’s January 2026 survey, European supervisory authorities issued approximately EUR 1.2 billion in GDPR fines during 2025, matching the 2024 total. The cumulative total since GDPR took effect in May 2018 now stands at EUR 7.1 billion.

The trend line is flat at a very high number. Regulators handled an average of 443 data breach notifications per day in 2025 — a 22% increase over 2024. Ireland alone accounts for EUR 4.04 billion of the total, driven by enforcement against the big tech platforms headquartered there. Nine of the ten largest GDPR fines ever issued targeted tech companies.

AI companies are increasingly in the crosshairs. Italy’s Garante fined OpenAI EUR 15 million in December 2024 for GDPR violations related to ChatGPT — insufficient legal basis, lack of age verification, and inadequate user transparency. The Dutch Data Protection Authority hit Clearview AI with EUR 30.5 million in May 2024 for unlawful biometric data processing. Italy fined Luka Inc. EUR 5 million in May 2025 over the Replika chatbot’s handling of children’s data.

Then there’s TikTok. The EUR 530 million fine wasn’t about the content on the platform. It was about where data was processed. TikTok failed to verify that European user data accessed from China received protection equivalent to EU standards. During the investigation, TikTok told regulators it didn’t store EEA data on Chinese servers. In April 2025, it admitted that was wrong — limited EEA data had been stored in China after all.

The lesson is expensive: “We’ll sort out data residency later” has a price tag, and it starts at nine figures.

“Data Residency Coming Soon” Is Not a Compliance Strategy

Microsoft completed its EU Data Boundary in February 2025. Customer data at rest stays within the EU. That part works.

But data at rest is only half the equation. When a German employee types a prompt into Microsoft 365 Copilot, that prompt needs to be processed by a large language model. Where that processing happens matters.

Microsoft announced in November 2025 that in-country data processing for Copilot interactions would roll out in phases. (We covered the full Copilot comparison — including pricing, memory, and architecture — in a separate article.) The first wave — Australia, UK, India, and Japan — was scheduled for end of 2025. Germany, along with Canada, Italy, Poland, and seven other countries, was pushed to 2026. Until then, German Copilot prompts are processed wherever Microsoft has available LLM capacity, within the EU Data Boundary but not necessarily within Germany.

For a Mittelstand company with strict Datenschutz requirements, “data processing stays in the EU region” and “data processing stays in Germany” are fundamentally different statements.

And that’s the best-case scenario — a vendor actively building toward compliance. Many AI tools offer less. Plenty of them process European data in the US, relying on Standard Contractual Clauses or the EU-US Data Privacy Framework.

The DPF survived a legal challenge in September 2025, when the European General Court dismissed the Latombe case. That’s reassuring on the surface. But Max Schrems and NOYB — the people who killed Safe Harbor and Privacy Shield — have publicly stated they’re reviewing options for a broader challenge. The Court’s ruling upheld the DPF against a narrow set of claims; the fundamental tension between US surveillance law and EU privacy rights hasn’t been resolved.

Building your AI infrastructure on a transfer mechanism with a contested legal future is rolling the dice. And if you lose, there’s no grace period.

What Compliant AI Actually Requires

Compliance for AI tools in the EU comes down to a short list.

Data residency, not data boundary. Your prompts, responses, and AI memory should be processed and stored in a jurisdiction you control. For German companies, that means Germany — not “the EU region,” not “we’ll get there by 2026.”

No dependency on contested transfer mechanisms. If your AI vendor routes data through US infrastructure, your compliance posture depends on the continued validity of the DPF. That’s a political question, not a technical one. Hosting within the EU eliminates the question entirely.

Transparency under both GDPR and the AI Act. Under GDPR, your employees and customers have the right to know how their data is processed. Under the EU AI Act, deployers of high-risk AI systems face additional documentation, risk management, and human oversight requirements. Your vendor should make this straightforward, not something you have to reverse-engineer from a 200-page whitepaper.

Accountability today, not roadmap promises. A compliance roadmap is not compliance. If your vendor’s data residency is scheduled for next year, you carry the risk until then. Regulators don’t fine you based on your vendor’s roadmap. They fine you based on what’s happening with your data right now.

This is not hypothetical. Meta’s EUR 1.2 billion fine was for transfers that happened while a transfer mechanism was still technically in place. TikTok’s EUR 530 million fine was for transfers it believed were compliant. Good intentions don’t offset bad architecture.

Choosing AI Tools as a German Company

If you’re evaluating AI platforms, ask your vendor three questions: Where are prompts processed? Where is AI-generated content stored? What transfer mechanisms do you rely on for data that crosses borders?

If the answers include “US data centers,” “Standard Contractual Clauses,” or “we’re working on EU data residency” — you know the compliance gap you’re accepting.
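Those three answers map cleanly onto a checklist. As a minimal sketch of the idea in Python (field names and region codes are illustrative, not any vendor's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class VendorAnswers:
    # The three questions above, as structured answers (illustrative only)
    prompt_processing_region: str          # e.g. "DE", "EU", "US"
    content_storage_region: str
    transfer_mechanisms: list = field(default_factory=list)  # e.g. ["SCCs", "DPF"]

def compliance_gaps(v, required_region="DE"):
    """List the gaps a strict German data-residency policy would flag."""
    gaps = []
    if v.prompt_processing_region != required_region:
        gaps.append(f"prompts processed in {v.prompt_processing_region}, not {required_region}")
    if v.content_storage_region != required_region:
        gaps.append(f"AI output stored in {v.content_storage_region}, not {required_region}")
    for mechanism in v.transfer_mechanisms:
        gaps.append(f"depends on transfer mechanism: {mechanism}")
    return gaps

print(compliance_gaps(VendorAnswers("EU", "US", ["DPF"])))
# ['prompts processed in EU, not DE',
#  'AI output stored in US, not DE',
#  'depends on transfer mechanism: DPF']
```

An empty gap list is the answer you want. Anything else names the risk you carry until the vendor's roadmap catches up.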

The alternative exists. AI tools built in Germany, hosted in Germany, processing in Germany. No adequacy decisions to worry about. No data transfers to justify. No 2026 roadmap items that might slip to 2027.

amaiko was built this way from day one. German engineering, German hosting, Teams-native. Your data stays in Germany — not eventually, not on a roadmap, but today. For companies that take Datenschutz seriously, that’s not a feature. It’s the baseline.
