
AI in B2B marketing: fixing the friction

Written by Anne-Mie Vansteelant | Sep 19, 2025

AI is everywhere and your agency is likely using it already. Whether it’s drafting content, refining audience segments, or optimizing campaign visuals, AI has become the elephant in the room. 

As a marketing or communications leader in a European B2B company, you’re accountable not only for the messages your agency produces, but increasingly for how they’re produced. And with the EU AI Act now in play, it’s time to ask tougher questions.

You don’t need to micromanage your agency’s tools—but you do need clear agreements, transparency, and responsible practices. Let’s explore how to get there.

   

Why your agency’s AI use matters more than you think

The appeal of AI is obvious: faster output, broader testing, and (possibly) lower cost. But under the hood, it’s more than just a smart assistant. 

When your agency uses AI to generate deliverables, they’re making decisions about: 

  • What data is used to train or prompt the model

  • How that data is stored or retained

  • Whether outputs are original, ethical, or legally sound

If the wrong image ends up in a campaign, or a biased algorithm targets the wrong audience, your brand takes the hit—not the tool. And under the EU AI Act, if the system used is considered “high-risk,” you may be required to explain how it works. 

This is why understanding your agency’s AI use is no longer a “nice to have”—it’s a leadership necessity.

What you need from your agency—beyond the creative brief 

In traditional agency relationships, you’d look for fresh thinking, solid strategy, and fast delivery. With AI in the mix, you now need four more things: 

Transparency 
Ask your agency to disclose which AI tools they’re using, and where in the process. Whether it’s ChatGPT, Midjourney, Adobe Firefly, Google Veo 3, or something proprietary, you have the right to know how the work is being generated.

Human oversight 
No content, copy, or visuals should go out the door without human review. Agencies should be applying AI for efficiency, not outsourcing accountability.

Data separation 
If your brand assets or audience data are used to train a model, that model should not be reused for other clients. Request guarantees that protect your IP and confidential information.

Ownership clarity 
AI blurs the line between creation and curation. Make sure your contracts or statements of work confirm that you own the final output, regardless of the tool used to create it. 

 

The compliance curve: understanding your responsibilities under the EU AI Act

The EU AI Act, adopted in 2024, is the most comprehensive AI regulation in the world, and it doesn’t just affect AI developers. It also applies to companies that deploy AI or use its outputs, including through partners like agencies.

Here’s why it matters: 

  • If your agency uses AI for profiling, audience targeting, or personalization, that use may fall under the Act’s “high-risk” category

  • As the client, you may be held accountable for any regulatory breaches if proper oversight isn’t in place

  • You’ll be expected to ensure traceability, human oversight, and disclosure for AI-generated content or decisions 


Translation: you can’t outsource the risk. You can delegate execution—but not responsibility. 

 

How to build trust and alignment with your agency

Here’s the good news: the AI discussion can actually strengthen your agency relationship—if approached the right way. Start by: 

  • Requiring AI usage disclosures in every project brief

  • Asking for model traceability or audit documentation (especially for large campaigns)

  • Working only with partners who are up to speed on GDPR, the EU AI Act, and industry ethics

  • Making transparency a non-negotiable value, not just a compliance checkbox

But let’s be fair: not every expectation is realistic. 

Many corporate professionals expect agencies to operate under the same security and compliance conditions as internal enterprise teams. In reality, that’s often not feasible. Agencies typically: 

  • use a mix of commercial SaaS tools without dedicated private infrastructure

  • don’t control how the foundation models behind tools like ChatGPT store or retain prompt data

  • can’t offer enterprise-level indemnity or IP guarantees on outputs from third-party AI tools 

This doesn’t mean agencies are being careless—it means the tools themselves aren’t built with the same controls corporations expect. 

So instead of pushing for rigid rules, push for: 

  • Clarity on what tools are used and how your data is handled

  • Boundaries: e.g., “no client data used for training without written consent”

  • Human judgment: make sure creative work is still checked by experienced professionals


Responsible AI isn’t about perfection. It’s about honest partnership and knowing what questions to ask. 

 

Living Stone supports B2B leaders who want to use AI boldly, but responsibly. If your agency relationships are evolving, we’re here to help you evolve your expectations too. Contact Anne-Mie Vansteelant at anne-mie.vansteelant@livingstone.eu to find out how we can help you meet your goals.

About this post 
This blog post was developed from a brief by our legal counsel, with the assistance of ChatGPT, guided by prompts and strategic direction from the Living Stone team. Content was reviewed and adapted by our editor to reflect the specific needs and realities of European B2B marketing leaders.

 

 
This blog post is part 2 of a two-part series on AI in B2B marketing. Haven’t read the first part?