SODP Dispatch - 16 August 2025

The AI trust blueprint: how a European newsroom consensus is defining a new promise to readers; generative AI, online platforms and compensation for content: the need for a new framework; stop guessing and start testing with real growth; YouTube expands Promote tools, no Google Ads needed; Edge of Search 2025 + more

Hello, SODP readers!

In today’s issue:

  • From SODP: Generative AI, online platforms and compensation for content: The need for a new framework

  • Tools & Resources: Elements of audience engagement + Edge of Search 2025

  • Tip of the week: Stop guessing and start testing with real growth

  • News: The AI trust blueprint: how a European newsroom consensus is defining a new promise to readers, Google will now let you pick your top sources for search results, YouTube expands Promote tools, no Google Ads needed + more

A Publisher’s Engagement Playbook!

🚀 We’ve launched the first industry research report in partnership with Glide Publishing Platform!

Join global publishing leaders, product owners, data strategists, and tech innovators to benchmark how your team personalizes, engages, and grows using first-party data.

🔍️ What’s in it for you?

  • Benchmark CDP Engagement, Adoption & Performance

  • Discover Emerging Personalization Trends

  • Access Actionable Best Practices

  • Learn From Real-World Challenges & Wins

Whether you're using behavioural signals, AI-powered tools, or topic-based tagging, your insights matter. Help shape a report that reflects what’s really driving results across the industry.

👉️ Take the survey now! We need 300 respondents, and the survey closes in a week. Be the first to receive exclusive insights.

FROM STATE OF DIGITAL PUBLISHING

Generative AI, Online Platforms And Compensation For Content: The Need For A New Framework

By Thomas Paris & Pierre-Jean Benghozi

The emergence of generative artificial intelligence has put the issue of compensation for content producers back on the table.

Generative AI offers undeniable benefits but raises familiar fears tied to disruptive technologies. In the cultural and creative sectors, concerns are mounting over the potential replacement of human creators, the erosion of artistic authenticity and risks of copyright infringement. Legal battles are already emerging worldwide, with intellectual property owners and AI developers clashing over rights. Alongside these legal and ethical concerns lies the economic question: how should revenues generated by AI be fairly distributed?

Copyright law (droits d’auteur), traditionally based on the reproduction or representation of specific works, may be ill-suited to this question. Individual contributions to AI-generated outputs are often too complex to quantify, making it difficult to apply the principle of proportional remuneration, under which payment for an individual work is tied to the revenue it generates.

An asymmetrical relationship

The disputes surrounding generative AI echo long-standing tensions between digital platforms and content creators. Platforms such as Spotify, YouTube and TikTok dominate the music industry; Netflix and Apple lead in film and television; Steam in gaming; and Google and Meta in news media.

These platforms wield enormous power in reshaping industries, influencing consumption patterns and establishing new power dynamics. On the one hand, they amplify the reach of creative works; on the other, they rely on an inherently unequal relationship. For example, if Spotify removes a song, the artist’s reach and revenue may decline sharply, while Spotify itself is unlikely to suffer significant consequences; at most, it might lose a few subscribers to competitors.

TOOLS & RESOURCES

🧭 Elements of Audience Engagement

Elements of Audience Engagement is a mindset, an introduction, and a toolkit. It is grounded in the belief that in a digital information ecosystem, prioritising audiences, their needs and habits, is journalism’s most resilient foundation for growth and impact, enabling newsrooms not just to survive but to adapt with purpose. See more ▸

🔍 Edge of Search 2025

The 2025 edition of this in-person conference brings together leading search marketing professionals to share real-world strategies, future-focused SEO skills, and proven frameworks. You will learn how SEO really works, sharpen your search skills for what’s next, and boost the performance of your digital team with insights built for action. See more ▸

BITE-SIZED ADVICE

By Vahe Arabian

📊 Stop guessing and start testing with real growth

Publisher experiments fail when they start with tactics, not hypotheses.

A/B testing has become a staple in digital publishing, but for many publishers, it’s little more than tinkering with headlines, button colours, or send times. The problem is that these tests often start with what to change rather than why to change it. Without a clear, measurable hypothesis, most experiments end up producing inconclusive results or chasing vanity wins that don’t move the business forward.

Top-performing publishers approach testing like scientists: They identify a friction point, build a hypothesis around audience behaviour, and run the experiment long enough to gather statistically valid results. They don’t test for the sake of testing; they test to solve specific problems that impact retention, conversions, or revenue.

3 experiments that worked, and why

  • Content depth vs. breadth: Instead of spreading efforts across many topics, one publisher focused on fewer topics in greater depth. This depth-driven strategy boosted engagement and conversions because it directly supported the business goal of increasing loyal readership, and the test ran long enough to smooth out seasonal or one-off anomalies.

  • Paywall trigger psychology: Rather than limiting readers to a fixed number of free articles, one publisher activated an engagement-triggered paywall after 45 seconds of reading. This targeted high-intent users, who converted at 38% versus just 8% under a monthly article meter, tripling subscription revenue (see the sketch after this list).

  • Newsletter timing by content type: A straight “send time” test (9 AM vs. 5 PM) produced negligible differences. The breakthrough came from matching content type to reader routines: morning briefings for early risers, deep-dive reads for the afternoon. Open rates increased by 22%, resulting in downstream gains in on-site engagement.
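The paywall experiment above hinges on one piece of logic: fire the offer only once a reader has actively engaged. Here is a minimal browser-side sketch of that trigger in TypeScript; the showPaywall() function is a hypothetical placeholder for whatever subscription prompt your stack uses, and the 45-second threshold simply mirrors the example above.

```ts
// Minimal engagement-triggered paywall sketch (browser environment).
// showPaywall() is a hypothetical placeholder, not a real SDK call.
const ENGAGEMENT_THRESHOLD_MS = 45_000; // 45 seconds of active reading

function showPaywall(): void {
  // Swap in your real subscription prompt here.
  console.log("Engagement threshold reached: show paywall");
}

let activeMs = 0;
let lastTick = Date.now();
let fired = false;

// Tick once per second; count time only while the tab is visible,
// so idle or backgrounded tabs never trigger the paywall.
setInterval(() => {
  const now = Date.now();
  if (document.visibilityState === "visible") {
    activeMs += now - lastTick;
  }
  lastTick = now;
  if (!fired && activeMs >= ENGAGEMENT_THRESHOLD_MS) {
    fired = true;
    showPaywall();
  }
}, 1_000);
```

A production version would also exclude logged-in subscribers before starting the timer and could fold in scroll depth as a second engagement signal.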

Why most tests fail 

  1. No behavioural hypothesis: e.g., “testing headlines” without asking why a reader would care

  2. No segmentation: treating all users as if they behave the same

  3. Vanity metrics over meaningful metrics: clicks instead of conversions or LTV

  4. Short timelines: stopping before reaching 95% statistical confidence or completing a full behaviour cycle (see the sketch after this list)
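Point 4 is the easiest to operationalize. Below is a minimal sketch, in the same TypeScript vein as the paywall snippet above, of the standard two-sided two-proportion z-test for comparing conversion rates between a control and a variant; the erf() approximation is the classic Abramowitz and Stegun formula, and the sample figures at the end are purely illustrative, not taken from the experiments above.

```ts
// Two-proportion z-test: is the variant's conversion rate significantly
// different from the control's at 95% confidence?
function erf(x: number): number {
  // Abramowitz & Stegun approximation 7.1.26, accurate to ~1.5e-7.
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal CDF.
const phi = (z: number): number => 0.5 * (1 + erf(z / Math.SQRT2));

function twoProportionPValue(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB); // pooled rate under the null hypothesis
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  return 2 * (1 - phi(Math.abs(z))); // two-sided p-value
}

// Illustrative numbers only: 80/1000 control conversions vs. 110/1000 variant.
const p = twoProportionPValue(80, 1000, 110, 1000);
console.log(`p-value: ${p.toFixed(4)}; significant at 95%? ${p < 0.05}`);
```

One caveat: re-running this check after every day’s data and stopping the moment p dips below 0.05 inflates false positives, which is exactly why the list below pairs significance with a minimum run time.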

What top performers do differently 

  • Start with a measurable hypothesis tied to business outcomes

  • Isolate one behavioural variable at a time

  • Segment audiences by actions (new vs. returning, skimmers vs. engaged)

  • Measure real results: retention, conversions, revenue

  • Run tests for at least 14 days or until reaching statistical significance

  • Document learnings to inform the next test

When experiments are designed with intention, they stop being random guesswork and start becoming a repeatable growth engine.

WHAT WE ARE READING

Imagine if Australia’s media actually worked together | Unmade

When it comes to the declining fortunes of Australia’s media, we’ve at least been able to comfort ourselves that it’s no fault of the industry; larger forces are at play. But what if that’s not actually true? What if Australia’s media players could be making a greater difference to their own fortunes if they could only work better together? The thought is triggered by a weekend LinkedIn post from the NRL’s GM of strategy Ben Shepherd. His headline posed the right question: “How come Australian TV revenue was down 4.6% in F25 when UK TV revenue was up 3.8%? It all depends how comfortable you are in investing to win.” Read more ▸

Google will now let you pick your top sources for search results | TechCrunch

Google is rolling out a new feature called “Preferred Sources” in the U.S. and India, which lets users choose the news sites and blogs they want shown in the Top Stories section of Google’s search results. Enabling the feature means you will see more content from the sites you like, the company says. When users search for a particular topic, they will see a “star” icon next to the Top Stories section. Read more ▸

YouTube expands Promote tools, no Google Ads needed | Search Engine Land

YouTube rolled out a set of creator-friendly updates, including doubling the image limit in Community Posts, improving auto-dub editing, and adding fresh call-to-action buttons for in-stream promotions. Starting this week, creators can upload up to 10 images per Community Post (up from five) across all surfaces, enabling richer context and potentially boosting engagement. Read more ▸

Global media agency market grew 6.2% in 2024 | AdNews

The global media agency market grew 6.2% in 2024, up from 5.8% in 2023, according to RECMA’s latest Overall Activity report. The report, which assesses the market activity of the ‘Big 6’, found growth has largely been driven by non-traditional activity, including digital, data and analytics and content, which rose 9.9% year-on-year. Traditional media billings grew by just 1.6%. Non-traditional services now account for 58% of total agency activity on average, up two percentage points from 2023. Read more ▸

The AI Trust Blueprint: How a European Newsroom Consensus is Defining a New Promise to Readers | NewsTechNavigator

An analysis of more than 25 AI policies from newsrooms across Europe reveals a clear and powerful signal. From national public broadcasters and leading dailies to regional media groups, organizations of differing size, audience, and market position are independently converging on a unified foundation for governing AI. These shared principles are no longer experimental. Reading through the policies shows clear patterns: shared anxieties, common solutions, and a few telling disagreements. Read more ▸

The Verifier Layer: Why SEO Automation Still Needs Human Judgment | Duane Forrester Decodes

AI tools can do a lot of SEO now. Draft content. Suggest keywords. Generate metadata. Flag potential issues. We're well past the novelty stage. But for all the speed and surface-level utility, there's a hard truth underneath: AI still gets things wrong. And when it does, it does it convincingly. It hallucinates stats. Misreads query intent. Asserts outdated best practices. Repeats myths you've spent years correcting. And if you're in a regulated space (finance, healthcare, law) those errors aren't just embarrassing. They're dangerous. Read more ▸