AI is the new intern everyone’s using—but no one’s quite sure who pays their salary, who owns their work, or what secrets they might leak.
In B2B marketing, agencies love AI for what it promises: speed, scale, and smarts. But their clients—especially the big enterprise ones—are starting to worry. Not just about brand tone or creative quality, but about intellectual property, confidential data, and regulatory risk.
At Living Stone, we work at the intersection of marketing innovation and compliance. And we’re seeing it firsthand: AI is making collaboration between agencies and clients both more powerful—and more complicated.
Let’s unpack the friction points.
The ownership headache
Who owns the blog post your agency delivers if it was generated with the help of ChatGPT? What if the prompt used internal client data?
This isn't a theoretical question anymore. Many enterprise clients now require agencies to assign full ownership of all AI-generated outputs—no matter how minimal the AI’s role was.
Real-world clause from an enterprise AI contract:
“Provider agrees that [Company] shall own… any outputs of or from the Provider AI Products… generated by or through the use of [Company] Data… including Training Data or Deliverables.”
Read through the focused lens of a legal department, that means:
- The agency can’t reuse a model it trained on your data.
- The agency can’t even reuse the structure or output of your prompts.
- And if the agency blends that data into a general-use model? That’s a contractual, and possibly legal, violation.
As a client, you want assurance that AI won’t turn your IP into everyone’s IP.
The legal black hole
IP law hasn’t caught up with AI. In most EU jurisdictions, you can’t copyright AI-generated content unless a human made meaningful creative decisions.
So where does that leave the agency that generates creative copy or image variations using Midjourney or Claude?
- Clients assume they own what they paid for.
- But if it’s not copyrightable, there’s no legal protection.
- Worse, some tools generate output that might infringe others’ rights without you knowing.
Unless the agency contract clearly defines ownership, liability, and how AI is used, it may be unknowingly delivering unprotected or risky work.
Data-use dilemmas
Training AI on your client data seems logical—better prompts, smarter models, right?
Except that your data is often proprietary, confidential, or full of personally identifiable information (PII). And if AI models “remember” what they’re trained on?
That’s a compliance nightmare.
Most major clients now require:
- Written approval before AI touches their data
- Documentation of how models are trained and what data was used
- Strong guarantees against reuse, even in anonymized form
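What does a guarantee like that look like in day-to-day work? Purely as an illustration (a minimal sketch in Python, not a compliance tool, and every name in it is our own invention), an agency could add a guardrail that screens prompts for obvious PII patterns before anything reaches an external model:

```python
import re

# Hypothetical illustration: crude patterns for two obvious kinds of PII.
# A real workflow would rely on a vetted PII-detection service,
# not a handful of regexes.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def screen_prompt(prompt: str) -> str:
    """Raise an error if the prompt appears to contain PII; otherwise pass it through."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked: possible {label} detected")
    return prompt

# Usage: run every prompt through the guardrail before it leaves the agency.
safe_prompt = screen_prompt("Draft a LinkedIn post announcing our new service line")
```

Nothing this simple satisfies GDPR on its own, but it makes the point: data governance has to live inside the workflow, not only in a policy document.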
For EU-based agencies like Living Stone, this ties directly into GDPR and the upcoming wave of AI-specific audits. If we don’t have tight governance on AI tools, it’s not just about losing client trust—it’s about regulatory exposure.
It may sound over the top, but these requirements are real, and every gap in your AI governance is a potential landmine.
The EU AI Act is here (and it's heavy)
Before we get into how the Act affects agencies, let’s cover the basics. The EU AI Act, officially adopted in 2024, is the world’s first comprehensive legislation regulating artificial intelligence. It applies to any company doing business in the EU, regardless of where the AI is developed.
Here’s what it requires:
- Risk-based classification of all AI systems (minimal, limited, high, or unacceptable risk)
- Prohibited uses of AI (e.g., social scoring, manipulative behaviour, real-time biometric surveillance)
- For high-risk AI systems (such as those involving profiling, automated decision-making, or data analysis in sensitive sectors), the law demands:
  - Detailed technical documentation
  - Human oversight
  - Traceability and audit logs
  - Risk assessments and mitigation plans
  - Conformity assessments before deployment
- AI-generated content must be clearly labelled as such
- Non-compliance can result in fines of up to €35 million or 7% of global annual turnover
For agencies in the EU—especially those working with regulated industries like healthcare, legal, or finance—this law raises the bar on everything from tool selection to internal workflows.
So what does this mean for an agency?
- It needs to know exactly which tools it is using and how each one handles data.
- It must be able to justify and trace the output of any AI system (a sketch of what that traceability could look like follows this list).
- If it touches regulated data, it may face conformity audits and documentation demands from both regulators and clients.
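To make “traceability” concrete, here is a minimal sketch of the kind of record an agency could log for every AI-assisted deliverable. It is an assumption-laden illustration: the field names and file format are our own, not terminology from the Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_output(tool: str, model_version: str, prompt: str,
                  reviewer: str, client: str) -> dict:
    """Append one audit-log entry for an AI-assisted deliverable.

    Hashing the prompt preserves traceability without storing
    potentially confidential client data in the log itself."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                     # e.g. "ChatGPT"
        "model_version": model_version,   # the exact version used
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "human_reviewer": reviewer,       # named human oversight
        "client": client,
    }
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Whether a record like this would satisfy an auditor depends on the system’s risk classification, but having no record at all is the one answer certain to fail.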
The EU AI Act is not theoretical. It’s operational. And it's going to reshape how agencies in the EU deliver services.
Coming up next: how to fix the friction
"Fixing the friction – How you can work together with your agency using AI responsibly in B2B.”
Includes practical tips, contract clauses, and risk mitigation templates:
- Legal-ready AI clauses for agency contracts
- Risk assessment checklist for EU agencies
- How to be transparent without giving away your edge
- Bonus: AI vendor questions to ask before you sign
Because this goldmine isn’t going away—but if you’re not careful, the landmines will find you first.
About this post
This blog post was developed from a brief prepared by our legal counsel, with the assistance of ChatGPT working from prompts and strategic guidance provided by the Living Stone team. The content was reviewed and adapted by our editor to reflect the specific needs and realities of European B2B marketing leaders.
Want Part 2? Get the full breakdown of AI contract clauses, risk audits, and transparency best practices, free of charge and ready to use.