clay: extract claims from 2026-03-10-iab-ai-ad-gap-widens #136

Closed
clay wants to merge 1 commit from extract/2026-03-10-iab-ai-ad-gap-widens into main
Member

Automated Extraction

Source: inbox/archive/2026-03-10-iab-ai-ad-gap-widens.md
Domain: entertainment
Extracted by: headless cron on VPS

This PR was created automatically by the extraction cron job. Claims were extracted using skills/extract.md process via Claude headless.

clay added 1 commit 2026-03-10 16:14:03 +00:00
- Source: inbox/archive/2026-03-10-iab-ai-ad-gap-widens.md
- Domain: entertainment
- Extracted by: headless extraction cron

Pentagon-Agent: Clay <HEADLESS>
Member

Eval started — 2 reviewers: leo (cross-domain, opus), clay (domain-peer, sonnet)

teleo-eval-orchestrator v2

Author
Member

Clay — Domain Peer Review: PR #136

Source: IAB AI Ad Gap Widens (Jan 2026)
Files: 3 new claims + 2 enrichments + source archive


The Core Domain Issue: Advertising ≠ Entertainment

This is the most important thing to flag. The IAB source is about AI-generated advertising — a context where consumers experience content as interruption, with known commercial intent, and where anti-ad sentiment predates AI entirely. The claims, however, are filed under entertainment and make arguments about consumer acceptance of AI content broadly, including in entertainment production.

This matters because the domains have meaningfully different dynamics:

  • Consumers have tolerated synthetic content in entertainment for decades (animation, VFX, CGI, motion capture). The uncanny valley problem is well-documented but industry-specific.
  • Ad rejection may be driven as much by anti-advertising sentiment as anti-AI sentiment — the IAB data doesn't separate these.
  • The mechanisms aren't identical: a consumer avoiding an AI-generated car ad is making a different kind of judgment than a consumer evaluating whether a TV drama's performances feel authentic.

The PR is aware of this gap in the source notes ("I expected some evidence that context-appropriate AI use would score well") but doesn't adequately qualify the claims themselves. This isn't a reason to reject the claims — the advertising data is genuinely relevant evidence for the entertainment thesis — but the scope needs to be explicit.


Specific Claim Issues

consumer-rejection-of-ai-content-intensifies-with-quality-improvement

The authenticity-signaling mechanism (rejection intensifies because AI-human ambiguity raises deception risk) is a plausible interpretation, but it's the proposer's inference, not what the data measures. The IAB measures sentiment before and after AI quality improvement; the mechanism explaining why sentiment worsened is theorized. The title embeds the mechanism as if it's demonstrated. Given that the evidence is advertising-specific and the mechanism is inferred, this warrants experimental rather than likely. Alternatively, scope the title to advertising: "Consumer rejection of AI-generated advertising intensifies..."

gen-z-leads-generation-shift-toward-ai-rejection

The "leading indicator" framing assumes cohort effects persist as Gen Z ages into primary consumer spending. But this is cross-sectional data — it shows Gen Z currently has more negative AI sentiment than Millennials, not that Millennials were once like Gen Z and changed. Gen Z may simply have different formative media experiences, and their attitudes toward AI content could shift as they age, as Millennial attitudes appear to have. This should be acknowledged.

Also: the claim title says "AI content rejection" broadly, but the evidence is AI-generated ads. Same scope issue as above.

The existing claim social video is already 25 percent of all video consumption and growing because dopamine-optimized formats match generational attention patterns points to Gen Z consuming massive amounts of short-form social content — content that is increasingly AI-assisted. That's worth noting: Gen Z may simultaneously reject labeled AI ads while consuming unlabeled AI-assisted content at scale, which would suggest the rejection is about disclosure/authenticity signaling in a commercial context, not AI content inherently.

advertiser-consumer-ai-perception-gap-is-structural-misalignment

This is the strongest claim — it's purely descriptive of measured data, the structural vs. noise interpretation is well-argued, and the implications (misallocated spend, brand damage) follow logically. Scope concern applies but is less severe here since the claim is explicitly about the advertising industry. No significant issues.


Enrichments

The enrichment to GenAI adoption...gated by consumer acceptance is the best fit in the PR — the IAB data directly tests the binding-constraint hypothesis and shows consumer acceptance isn't converging with capability. The advertising-to-entertainment scope caveat applies but the enrichment adds genuine value.

The enrichment to consumer definition of quality is fluid is a stretch. The claim that brand perception of "innovative" declined while AI quality improved is being used to support the argument that quality is fluid and revealed through preference. But consumers rating AI-using brands as less innovative is a brand evaluation, not a quality judgment about entertainment content. The logical connection is plausible but weaker than the enrichment implies.


Missing Connection

traditional media buyers now seek content with pre-existing community engagement data as risk mitigation — this claim in the entertainment domain shows media buyers already adjusting behavior to manage uncertainty. The perception-gap claim (advertisers don't know what consumers want re: AI) should link to this; they're related manifestations of the same underlying problem: industry mental models about what consumers value are systematically wrong.


What Needs to Change

  1. Scope all three new claims to advertising explicitly, or add explicit language in each body acknowledging that the evidence is from the advertising context and the application to entertainment content is an inference. The current framing overstates what the data demonstrates.

  2. consumer-rejection claim: Either lower to experimental (mechanism is inferred) or scope the title to advertising.

  3. gen-z claim: Acknowledge the cross-sectional limitation — this data doesn't prove Gen Z's attitudes persist as they age, and the contrast with Gen Z's social video consumption patterns should be addressed.

  4. Add wiki link to advertiser-consumer-ai-perception-gap → traditional media buyers now seek content with pre-existing community engagement data as risk mitigation.


Verdict: request_changes
Model: sonnet
Summary: Three real insights from a credible source, but the core domain problem — advertising evidence being generalized to entertainment without explicit scope qualification — needs to be addressed before merge. The advertiser-consumer gap claim is fine as-is. The consumer-rejection and Gen Z claims overstate what the data demonstrates given the advertising-specific evidence base and inferred mechanisms.

Member

Leo Cross-Domain Review — PR #136

PR: clay: extract claims from 2026-03-10-iab-ai-ad-gap-widens.md
Files: 3 new claims, 2 enrichments to existing claims, 1 source archive

Source archive is clean — properly structured, status: processed, extraction notes are useful, all fields present. Good work on the archive workflow.

Wiki links all resolve. Enrichments to the two existing claims are well-integrated with clear source attribution.

Issues

1. Gen Z "leading indicator" claim overreaches its evidence (confidence calibration)

The claim that Gen Z "leads the generational shift toward AI content rejection" and serves as a "leading indicator for industry-wide norms" is stated at likely confidence, but the evidence only shows Gen Z is more negative than Millennials. That Gen Z attitudes predict future industry-wide behavior is an extrapolation. It could equally be a cohort-specific effect — Gen Z's digital nativeness may make them uniquely sensitive to authenticity in ways other generations never develop.

The sentence "this is not a temporary cohort effect but a durable shift" needs evidence it doesn't have. The data shows a widening gap, which is consistent with either a leading indicator or a diverging cohort.

Request: Either downgrade to experimental or scope the title to the observation ("Gen Z shows substantially higher AI content rejection than Millennials") and reserve the leading-indicator interpretation for the body as a hypothesis.

2. Consumer rejection claim bundles observation with mechanism

The title "Consumer rejection of AI content intensifies as quality improvement increases authenticity signaling value" combines a well-evidenced observation (rejection intensifies despite quality improvement) with a proposed causal mechanism (authenticity signaling value) that is speculative. The IAB data doesn't measure authenticity signaling — it measures sentiment. The mechanism is plausible but not demonstrated by this source.

At likely confidence this is acceptable if the title is read as a hypothesis, but it would be cleaner to either: (a) scope the title to the observation and present the mechanism in the body, or (b) add challenged_by or a "Limitations" note acknowledging the mechanism is inferred rather than measured.

This is a soft request — the claim is valuable either way, but precision matters for a claim that will likely attract counter-evidence.

3. Productive tension worth flagging

The existing claim "GenAI adoption in entertainment will be gated by consumer acceptance not technology capability" implies acceptance will eventually happen use-case by use-case. The new rejection-intensification claim suggests some use cases may never cross the acceptance threshold if rejection grows with quality. These aren't contradictory — the existing claim doesn't guarantee universal acceptance — but the tension is worth making explicit. Consider adding a note in the rejection claim's Relevant Notes acknowledging this: the gating claim says acceptance happens at different rates; this claim says the rate may be negative in some contexts.

4. Missing cross-domain connections

The advertiser perception gap claim has implications beyond entertainment:

  • Internet finance (Rio): Systematic misallocation of marketing spend based on wrong mental models is a market inefficiency — connects to mechanism design and information asymmetry claims
  • AI alignment (Theseus): Consumer rejection framed as rational response to "deception risk" directly parallels alignment concerns about AI transparency and trust

These don't need to be wiki-linked now, but flagging for future cross-domain synthesis.

5. Minor: Topic tags as wiki links

The new claims use topic tags like [[advertiser-perception-gap]], [[consumer-acceptance]], [[ai-content]] that don't resolve to files. Existing claims in the KB use [[entertainment]] and [[teleological-economics]], which resolve to domain maps. Not blocking, but the convention is inconsistent — suggest using only tags that resolve or dropping the wiki-link brackets for pure tags.

What's good

The IAB data is genuinely surprising and valuable. The finding that negative sentiment increased 12 points during a period of dramatic AI quality improvement is the kind of counter-narrative evidence the KB needs more of. The advertiser perception gap claim is the strongest of the three — clean data, clear structural argument, well-scoped. The enrichments to existing claims are well-done and properly attributed.


Verdict: request_changes
Model: opus
Summary: Strong source, good extraction. Three issues: Gen Z "leading indicator" claim overreaches its evidence at likely confidence (should be experimental or re-scoped); consumer rejection claim bundles observed pattern with speculative mechanism in the title; productive tension with existing gating claim should be made explicit. The advertiser perception gap claim is solid as-is.

Member

Changes requested by clay(domain-peer), leo(cross-domain). Address feedback and push to trigger re-eval.

teleo-eval-orchestrator v2

m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from c138b23ff1 to ebb7007dd6 2026-03-10 18:23:38 +00:00 Compare
Member

Eval started — 2 reviewers: leo (cross-domain, opus), clay (domain-peer, sonnet)

teleo-eval-orchestrator v2

Member

Leo Cross-Domain Review — PR #136

PR: clay: extract claims from 2026-03-10-iab-ai-ad-gap-widens.md
Files: 3 new claims, 1 enrichment to existing claim, 1 source archive

Issues

1. Factual error in gen-z claim (blocking)

The gen-z claim states:

"Gap nearly doubled and is widening: 21% gap in 2024 → 19% gap in 2026 (accelerating divergence)"

This is self-contradictory — 21→19 is narrowing, not widening. The numbers appear to confuse absolute negative sentiment rates with the generational gap. Per the source archive: Gen Z negative sentiment went from 21% to 39%, Millennial from 15% to 20%. The gap went from 6 points to 19 points — it more than tripled, which is a much stronger finding than what's stated. Fix the data and the framing.
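The corrected arithmetic, as a quick sketch (figures are the archive numbers quoted in this review):

```python
# Negative-sentiment percentages from the source archive
gen_z = {2024: 21, 2026: 39}
millennial = {2024: 15, 2026: 20}

gap_2024 = gen_z[2024] - millennial[2024]   # 6-point generational gap in 2024
gap_2026 = gen_z[2026] - millennial[2026]   # 19-point generational gap in 2026

print(gap_2024, gap_2026)  # 6 19 — the gap more than tripled
```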

2. Confidence calibration: "proven" is too strong for perception-gap claim

advertiser-consumer-ai-perception-gap-is-structurally-misaligned.md is rated confidence: proven. This is survey data from one industry association measured over two years. The numbers are striking but "proven" implies near-certainty and reproducibility. The gap could reflect methodology, sampling, or question-framing effects. likely is the right level — strong directional evidence from a credible source, but not independently replicated.

3. Wiki link targets in Topics sections don't resolve

The three new claims use Topics links like [[advertiser-perception-gap]], [[consumer-acceptance]], [[ai-content]], [[authenticity]], [[gen-z]], [[generational-differences]]. None of these resolve to real files. Existing claims link to domain maps ([[entertainment]], [[teleological-economics]]). Either link to existing domain maps or remove the non-resolving links.

4. Missing counter-evidence acknowledgment

The "consumer rejection intensifies with quality improvement" claim is rated likely but doesn't acknowledge counter-evidence. The KB already contains claims about consumer acceptance varying by context (the existing GenAI adoption claim distinguishes B-roll/VFX where acceptance is high from prestige drama where it's low). The IAB data is advertising-specific — it doesn't necessarily generalize to all entertainment contexts. The claim should scope itself to advertising or acknowledge that acceptance patterns differ by content type.

Observations (non-blocking)

Cross-domain connection worth noting: The perception gap claim has implications for internet finance. Structural information asymmetries between producers and consumers are a well-studied market failure pattern. The advertising industry's inability to self-correct on consumer sentiment mirrors dynamics Rio tracks in crypto market narratives vs. actual adoption. Not something to add now, but worth a musing.

Enrichment to existing claim is solid. The additional evidence block added to the GenAI adoption claim is well-structured and properly sourced. The finding that rejection intensifies rather than erodes is genuinely surprising and valuable.

Source archive is thorough. Clean frontmatter, good extraction notes, proper status tracking.

The three claims together tell a coherent story — perception gap exists, it's widening not narrowing, and Gen Z is the leading edge. Good extraction discipline in keeping them atomic while linking them.


Verdict: request_changes
Model: opus
Summary: Strong extraction from an important source. Three issues need fixing: a factual error in the gen-z gap data, confidence downgrade on the perception-gap claim from proven→likely, and non-resolving wiki links in Topics sections. The counter-evidence scoping on the rejection claim would strengthen it but isn't blocking.

Author
Member

Clay Domain Peer Review — PR #136 (IAB AI Ad Gap Claims)

Source: IAB "AI Ad Gap Widens" report (2026-01-01)
New claims: 3 new + 1 enrichment


Issues Worth Flagging

1. Scope conflation in gen-z-leads-consumer-rejection-of-ai-content (request changes)

The title says "AI content" but every data point in this claim — and the entire IAB source — concerns AI-generated advertising. Advertising and entertainment content operate under fundamentally different acceptance dynamics: advertising is intrusive by nature, already primed for skepticism, while entertainment is actively chosen. Gen Z rejecting AI ads doesn't straightforwardly imply Gen Z rejects AI-generated shows, films, or social video. The claim as titled overclaims.

Fix: Either scope the title to "AI ads" (matching the evidence base) or add a paragraph explicitly arguing why ad rejection generalizes to entertainment content.

2. Factual error in gen-z-leads-consumer-rejection-of-ai-content

The claim states: "Gap nearly doubled and is widening: 21% gap in 2024 → 19% gap in 2026 (accelerating divergence)"

21% → 19% is a decrease, not acceleration. The parenthetical directly contradicts the sentence. The extractor appears to have confused Gen Z's absolute negative sentiment (which did approximately double: ~21% → 39%) with the Gen Z-Millennial gap. The archive suggests the gap was ~6 points in 2024 (21% vs 15%) and is now ~19 points (39% vs 20%) — that's the actual widening story, and it's compelling. The numbers as written are internally inconsistent and would mislead anyone doing quantitative analysis downstream.

Fix: Restate the gap calculation correctly: Gen Z-Millennial gap widened from ~6 points (2024) to ~19 points (2026), while Gen Z absolute negative sentiment nearly doubled from ~21% to 39%.
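The corrected arithmetic can be sanity-checked in a few lines (figures are the approximate values cited in this review, not fresh numbers from the IAB report):

```python
# Negative sentiment toward AI-generated ads, %, as cited in this review.
genz = {"2024": 21, "2026": 39}
millennial = {"2024": 15, "2026": 20}

# Gen Z-Millennial gap per year, in percentage points.
gap = {year: genz[year] - millennial[year] for year in genz}

print(gap["2024"])          # ~6-point gap in 2024
print(gap["2026"])          # ~19-point gap in 2026
print(genz["2026"] / genz["2024"])  # Gen Z absolute sentiment nearly doubled
```

This is the widening story the claim should tell: the gap tripled while Gen Z's absolute negative sentiment nearly doubled; "21% gap → 19% gap" conflates the two series.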

3. Broken wiki links in all three new claims

The Topics sections use [[advertiser-perception-gap]], [[consumer-acceptance]], [[ai-content]], [[authenticity]], [[gen-z]], [[generational-differences]] — none of these are actual files in the KB. They read like tag keywords, not wiki links to real claim or map files. The links in the Relevant Notes sections (to actual claim files) are correct; the Topics-section links are not. This fails the wiki-link resolution check.

Fix: Replace broken topic links with real KB file references (e.g., [[entertainment]], [[GenAI adoption in entertainment will be gated by consumer acceptance not technology capability]]) or remove the broken links.
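The resolution check being failed here can be sketched roughly as follows (the function name, flat-`.md` KB layout, and `[[target|alias]]`/`[[target#section]]` handling are assumptions; the repo's actual checker may differ):

```python
import re
from pathlib import Path

# Capture the target portion of a wiki link, stopping at an alias (|),
# a section anchor (#), or the closing brackets.
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def unresolved_links(claim_file: Path, kb_root: Path) -> list[str]:
    """Return wiki-link targets in claim_file with no matching .md file in the KB."""
    text = claim_file.read_text(encoding="utf-8")
    targets = {m.group(1).strip() for m in WIKI_LINK.finditer(text)}
    existing = {p.stem for p in kb_root.rglob("*.md")}
    return sorted(t for t in targets if t not in existing)
```

Running a check like this over the three new claim files would surface the six topic links above as unresolved, while the Relevant Notes links pass.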

4. Confidence calibration note on advertiser-consumer-ai-perception-gap-is-structurally-misaligned

proven is defensible for the raw data (37-point gap, widening from 32). The "structural" interpretation — that this represents a systematic feedback failure rather than a lagging correction — is inference from the pattern, not directly measured. This is borderline; I'd accept proven if the description were tightened to focus on the documented gap magnitude and trajectory rather than the structural mechanism. As written it's close enough to leave alone, but worth noting.


What Works

The enrichment to GenAI adoption in entertainment will be gated by consumer acceptance not technology capability is genuinely valuable — it transforms a theoretical claim into one with strong quantitative backing and adds the counterintuitive finding (rejection intensifying with quality) that makes it more interesting.

consumer-rejection-of-ai-ads-intensifies-with-quality-improvement is the most interesting of the three new claims. The mechanism (authenticity signaling becomes more valuable as AI-human distinction blurs) is appropriately marked likely rather than proven. The proposed mechanism is plausible and the evidence is real — just worth noting that the evidence shows correlation (rejection increased as AI improved), not causation (could also be driven by higher exposure to AI ads in the same period). The likely confidence already covers this.

Missing cross-link worth adding: [[Hollywood talent will embrace AI because narrowing creative paths within the studio system leave few alternatives]] creates an interesting tension — talent adoption is accelerating while consumer rejection is also intensifying. Neither claim acknowledges the other.


Verdict: request_changes
Model: sonnet
Summary: The Gen Z claim has a factual error in the gap arithmetic and scope conflation (ad rejection ≠ entertainment content rejection). All three new claims have broken wiki links. Fix these before merge — the underlying data is strong enough to support good claims once the errors are corrected.

<!-- VERDICT:CLAY:REQUEST_CHANGES -->
Member

Changes requested by leo(cross-domain), clay(domain-peer). Address feedback and push to trigger re-eval.

teleo-eval-orchestrator v2

m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from ebb7007dd6 to 05f216ef10 2026-03-11 02:23:08 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 05f216ef10 to 5411e80692 2026-03-11 02:59:37 +00:00
Owner

status: null-result but enrichments_applied lists 4 enrichments and extraction_notes describes "three new claims extracted." That's contradictory — null-result means no claims were produced. Should be processed.

Everything else looks clean. Source archive update is well-structured, key facts are useful, extraction notes are detailed.
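The contradiction leo flags is mechanically checkable. A minimal sketch, assuming frontmatter is already parsed to a dict (the `claims_extracted` field name is hypothetical; `status` and `enrichments_applied` appear in the archive):

```python
def check_extraction_status(frontmatter: dict) -> list[str]:
    """Flag a null-result status that coexists with produced output."""
    problems = []
    produced = bool(frontmatter.get("enrichments_applied")) or \
               bool(frontmatter.get("claims_extracted"))  # hypothetical field
    if frontmatter.get("status") == "null-result" and produced:
        problems.append(
            "status is null-result but enrichments/claims were produced; "
            "expected status: processed"
        )
    return problems
```

A guard like this in the extraction cron would have caught the mismatch before the PR was opened.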

<!-- VERDICT:LEO:REQUEST_CHANGES -->
Owner
  1. Technical accuracy: The claims about the perception gap and consumer sentiment appear factually correct based on the data provided.
  2. Domain duplicates: No substantial duplicates found in the domain.
  3. Missing context: No important context is missing; the report provides comprehensive data and analysis.
  4. Confidence calibration: The confidence level is appropriately high due to the longitudinal data supporting the claims.
  5. Enrichment opportunities: The enrichments applied are relevant and enhance the understanding of the claims.

All aspects pass the review criteria.

<!-- VERDICT:LEO:APPROVE -->
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 5411e80692 to 3bef517d2a 2026-03-11 13:24:02 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 3bef517d2a to 97276c90fd 2026-03-11 13:29:22 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 97276c90fd to 7b69853a97 2026-03-11 13:57:14 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 7b69853a97 to 632f3a4ea2 2026-03-11 14:18:26 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from 632f3a4ea2 to d4f584a285 2026-03-11 15:33:55 +00:00
m3taversal force-pushed extract/2026-03-10-iab-ai-ad-gap-widens from d4f584a285 to 5286d33e73 2026-03-11 18:06:37 +00:00
m3taversal closed this pull request 2026-03-11 19:36:04 +00:00
