<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Data, Tech & Tools]]></title><description><![CDATA[Where data, tech & tools transform industries. Decoding AI's impact on business strategy. Essential intelligence for navigating change. #DataDriven #AI]]></description><link>https://www.datatechandtools.com</link><image><url>https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png</url><title>Data, Tech &amp; Tools</title><link>https://www.datatechandtools.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 11:26:27 GMT</lastBuildDate><atom:link href="https://www.datatechandtools.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Data, Tech & Tools]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[datatechandtools@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[datatechandtools@substack.com]]></itunes:email><itunes:name><![CDATA[Data, Tech & Tools]]></itunes:name></itunes:owner><itunes:author><![CDATA[Data, Tech & Tools]]></itunes:author><googleplay:owner><![CDATA[datatechandtools@substack.com]]></googleplay:owner><googleplay:email><![CDATA[datatechandtools@substack.com]]></googleplay:email><googleplay:author><![CDATA[Data, Tech & Tools]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Margin Squeeze Is Here – And Cutting Your Way Out Won't Work]]></title><description><![CDATA[From Nvidia to Nike, companies are discovering that revenue growth masks a profitability problem that can't be solved by layoffs alone]]></description><link>https://www.datatechandtools.com/p/the-margin-squeeze-is-here-and-cutting</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-margin-squeeze-is-here-and-cutting</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sun, 30 Nov 2025 20:39:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Among America&#8217;s 1,500 largest companies by market value, the typical non-financial firm increased both sales and operating profit by 6% in the third quarter. Investment banks are publishing 2026 outlooks predicting 14% earnings growth. Deutsche Bank and Morgan Stanley agree the pace should continue into 2027.</p><p>But beneath the headline numbers, something is off. At 394 of the 865 companies that grew revenue, the cost of goods sold rose faster&#8212;squeezing margins. Across four of ten main industry groups, sales and administrative costs outpaced revenue. Return on capital fell year-over-year in seven sectors.</p><p>Even Nvidia&#8212;the poster child of AI-driven growth&#8212;saw its gross, operating, and net profit margins tighten by three to six percentage points relative to a year ago. 
Executives at General Motors, Nike, and Starbucks fielded probing questions about profitability during their latest earnings calls.</p><p>The margin squeeze has arrived. And the instinctive response&#8212;cutting costs&#8212;may make things worse.</p><h2>What&#8217;s Driving the Squeeze</h2><p>Several forces are converging.</p><p><strong>Input costs are rising faster than prices.</strong> The producer price index, a proxy for corporate costs, has at times outpaced the consumer price index by over five percentage points&#8212;one of the largest gaps in decades. Companies can&#8217;t pass through all their cost increases without losing customers, so margins compress.</p><p><strong>Tariff uncertainty has disrupted planning.</strong> A survey of over 1,000 companies found that one-third reforecast their profitability numbers before the second half of 2025. The median expected EBITDA margin for 2025 shifted from 12% to 11.3%. Companies in the bottom quartile saw profitability projections drop 64%.</p><p><strong>Labor costs remain elevated.</strong> Revenue per employee&#8212;one gauge of productivity&#8212;has grown more slowly than consumer prices at 630 companies in a recent analysis. In only three sectors (information technology, healthcare, and real estate) has the median business improved on this measure.</p><p><strong>Competition limits pricing power.</strong> A Bain survey found 67% of companies cite competitive pressures and customer resistance as the biggest barrier to margin-enhancing pricing strategies. Without high inflation as a default justification for price increases, many are struggling to maintain pricing discipline.</p><h2>The Cost-Cutting Trap</h2><p>The instinctive response to margin pressure is cutting costs. At least seven companies with market caps above $10 billion that recorded sharp margin declines have announced layoffs this year, including Intel, Pfizer, and Mondelez.</p><p>But cost-cutting has limits&#8212;and dangers. McKinsey research found that companies that increased their profit margins for more than three consecutive years were rare, and those that kept pushing eventually cut into activities that benefited customers and brands.</p><p>One consumer-packaged-goods company increased profits at double-digit rates for seven years by emphasizing margin growth&#8212;even as revenues grew at only 2% a year. Eventually, it ran out of healthy opportunities to cut costs and began slicing into activities that damaged the business.</p><p>Another large company produced years of strong profit growth largely by increasing prices. That allowed competitors to step in with similar but less expensive products, cutting into market share. Margin improvement became market share loss.</p><p>Seven of ten non-financial sectors spent less on R&amp;D as a share of revenue in the past four quarters than the year before. Nearly half the firms in the S&amp;P 500 are reducing capital spending. The profits produced by these cuts may prove illusory if they make it harder to develop and manufacture products in the future.</p><h2>What Actually Works</h2><p>Companies that successfully navigate margin pressure share some common approaches.</p><p><strong>Productivity over headcount.</strong> AI and automation investments that actually improve output per worker&#8212;not just reduce headcount&#8212;provide sustainable margin improvement. 
Companies integrating AI to support growth rather than just replace workers are seeing better results.</p><p><strong>Pricing precision.</strong> Companies that invest in data-driven pricing guidance report winning more deals than they lose at 12% higher rates than others. Sales reps with dynamic data-driven guidance were almost twice as likely to be confident in realizing price increases. The companies confident they&#8217;ll push through price increases in 2025 show expected profit margin premiums of 3 percentage points over those that aren&#8217;t.</p><p><strong>Expense capture before list price increases.</strong> Finding ways to capture more of the price already charged&#8212;examining discounts, allowances, rebates, and other deductions&#8212;is often less risky than outright list price increases.</p><p><strong>Growth over margin optimization.</strong> Thinner margins aren&#8217;t always a problem if the top line keeps growing. Nvidia&#8217;s sales grow at an annual rate of 60% quarter after quarter; margin compression matters less when volume compensates. Companies with strong revenue growth have more flexibility than those trying to optimize a stagnant business.</p><h2>The Path Forward</h2><p>The margin pressure may ease in 2026. Import duties seem likely to moderate from current levels. Recent tax legislation brings favorable changes to R&amp;D expensing, capital spending, and other investments. Interest rates are declining, which helps highly levered companies.</p><p>But the American economy may also slow. If CEOs defer the margin question until then, it will no longer be marginal. The time to address cost structures, pricing discipline, and productivity investments is when revenue growth provides cover&#8212;not when a slowdown makes every decision more painful.</p><p>The companies that will emerge strongest are those that find efficiency without sacrificing growth investment. That&#8217;s harder than cutting headcount or raising prices. It&#8217;s also the only approach that works sustainably.</p>]]></content:encoded></item><item><title><![CDATA[The Price of Intelligence Is Falling. The Bill Keeps Rising.]]></title><description><![CDATA[AI tokens are getting cheaper by the month &#8211; but that's not saving anyone money, and it's definitely not making OpenAI profitable]]></description><link>https://www.datatechandtools.com/p/the-price-of-intelligence-is-falling</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-price-of-intelligence-is-falling</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sat, 29 Nov 2025 20:38:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The cost of running AI has dropped dramatically. The price per token to answer a PhD-level science question as proficiently as GPT-4 has fallen by about 97% per year, according to EpochAI research. What cost $20 per million tokens two years ago now costs $0.07 from low-cost providers. Andreessen Horowitz calls this &#8220;LLMflation&#8221;&#8212;a 10x cost reduction every year.</p><p>And yet OpenAI is on track to lose $44 billion before reaching profitability in 2029. 
The company burned $9 billion in 2025 on $13 billion in revenue&#8212;a cash burn rate of roughly 70% of sales.</p><p>How can a product get dramatically cheaper while its maker bleeds cash? Understanding this paradox is essential for anyone building with or investing in AI.</p><h2>The Token Trap</h2><p>Here&#8217;s what the efficiency headlines miss: as AI models become more capable, they generate more tokens to complete tasks.</p><p>Standard models have become more verbose over time. The average number of output tokens for benchmark questions has doubled annually, according to EpochAI. &#8220;Reasoning&#8221; models&#8212;the ones that explain their approach step by step&#8212;use eight times more tokens than simpler models. And usage of reasoning models is rising about fivefold every year.</p><p>So while the price per token drops, the number of tokens required to do useful work rises. The efficiency gains and the capability gains roughly cancel out.</p><p>As one analyst at SemiAnalysis put it: &#8220;Generating responses remains costly because the models keep improving&#8212;and growing. So even as the price of tokens falls, better, more verbose models mean more tokens must be generated to complete a task.&#8221; He found it &#8220;hard to imagine&#8221; a future where the marginal cost of AI services falls close to zero.</p><h2>The Competition Problem</h2><p>Even if costs were stable, pricing pressure would squeeze margins. OpenAI charges developers about $120 per million output tokens for its most advanced model. DeepSeek, a Chinese rival, offers comparable performance at a fraction of the price&#8212;in some cases 95% cheaper.</p><p>Competition has made the AI API market look less like enterprise software and more like commodities. Stanford research found that achieving GPT-3.5-level performance became 280 times cheaper between late 2022 and late 2024, driven largely by new entrants undercutting incumbents.</p><p>OpenAI&#8217;s response has been to retreat upmarket. Its newest reasoning model, o1, costs the same per output token as GPT-3 did at launch&#8212;$60 per million. The company is betting that there&#8217;s a premium tier of customers willing to pay for the most capable models, even as the low end gets commoditized.</p><p>Whether that bet pays off depends on whether capability improvements can stay ahead of competitors&#8212;and whether customers value the difference enough to pay for it.</p><h2>The Scale Paradox</h2><p>OpenAI&#8217;s financial trajectory assumes scale will eventually solve the profitability problem. The company projects $200 billion in annual revenue by 2030. At that scale, even modest margins would generate significant profits.</p><p>But scale requires spending. OpenAI has committed to buying more than 26 gigawatts of datacenter capacity through the end of the decade at a cost exceeding $1 trillion. It signed a roughly $60 billion annual computing arrangement with Oracle, an $18 billion joint data center venture, and a $10 billion allocation for custom semiconductor development.</p><p>HSBC analysts estimate OpenAI faces a $207 billion funding shortfall through 2030, even accounting for projected revenue. The company&#8217;s cumulative free cash flow will still be negative by then, leaving a gap that must be filled through debt, equity, or more aggressive revenue generation.</p><p>The circular nature of AI industry investment makes this more concerning. 
Nvidia recently announced a $100 billion investment in OpenAI&#8212;shortly after OpenAI signed a $300 billion cloud computing contract with Oracle. Oracle is a major Nvidia customer. The money flows from Oracle to Nvidia, from Nvidia to OpenAI, and back to Oracle through cloud contracts. Critics note this resembles patterns from the dot-com bubble, when telecom equipment makers extended financing to customers to encourage equipment purchases.</p><h2>What This Means for Businesses</h2><p>If you&#8217;re building AI capabilities, the economics have several implications.</p><p><strong>Don&#8217;t assume costs will keep falling.</strong> The headline trend of cheaper tokens masks offsetting factors. Budget for capability improvements driving up token consumption, not just price reductions driving down costs.</p><p><strong>Provider stability matters.</strong> OpenAI&#8217;s path to profitability depends on assumptions that may not hold. The current model of burning billions isn&#8217;t sustainable indefinitely. Prices for API access are likely to increase, or service terms will change.</p><p><strong>The &#8220;build vs. buy&#8221; calculation favors buying&#8212;with caveats.</strong> The cost of building foundational models makes it unfeasible for all but the largest tech giants. But buying means being beholden to the pricing and platform decisions of your chosen provider. Hedging across multiple providers adds complexity but reduces risk.</p><p><strong>Watch for reasoning model costs.</strong> The shift toward models that &#8220;think&#8221; step by step dramatically increases token usage. A task that took 100 tokens on GPT-4 might take 800 on a reasoning model. Budget accordingly.</p><h2>An Assessment</h2><p>The price of raw AI capability is falling rapidly. That&#8217;s genuinely good news for developers and businesses building applications.</p><p>But the business of providing AI is not getting easier. The major providers are spending more than they&#8217;re earning, betting that scale and capability leadership will eventually generate returns. Whether that bet succeeds depends on factors that are genuinely uncertain: the pace of capability improvement, the intensity of competition, and the willingness of customers to pay premium prices.</p><p>The price of intelligence is falling. The cost of being in the intelligence business is not.</p>]]></content:encoded></item><item><title><![CDATA[The Pitch Has Changed: How AI Is Rewriting Agency New Business]]></title><description><![CDATA[From simulated founder interviews to predictive creative testing, the agencies winning accounts are the ones that show up with tools their competitors don't have]]></description><link>https://www.datatechandtools.com/p/the-pitch-has-changed-how-ai-is-rewriting</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-pitch-has-changed-how-ai-is-rewriting</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Fri, 28 Nov 2025 20:37:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When Noble People walked into a pitch for a tech company last year, they opened by telling the client they&#8217;d already spoken to the founders. That wasn&#8217;t exactly true. 
The agency had trained a custom GPT on hundreds of the founders&#8217; public interviews to simulate their voices and test ideas through that lens. When they revealed the trick, the room shifted.</p><p>&#8220;The vibe in the room changed,&#8221; said Tom Morrissy, the agency&#8217;s chief growth officer. &#8220;I knew we had them, no matter what happened after that.&#8221;</p><p>This is what agency pitches look like now. The technology is helping agencies move faster through typically time-consuming and costly reviews. And for the holding companies that just completed the largest agency merger in history, AI isn&#8217;t just a service offering&#8212;it&#8217;s the justification for the deal itself.</p><h2>The Pitch Theater of 2025</h2><p>When Omnicom&#8217;s executives were asked during their Q3 earnings call how AI is reshaping creative and media, the chief technology officer went straight to new business pitches. He pointed to a recent win for a large automotive company where &#8220;integrated agents&#8221; helped guide consumer research, creative concepting, and production.</p><p>The pattern is consistent across the industry. IPG&#8217;s AI-powered Interact platform helped win Bayer&#8217;s global consumer health review by uncovering repositioning insights for the Canesten brand&#8212;analyzing campaigns from beauty and luxury categories to &#8220;break out of consumer health thinking.&#8221; Using a partnership with AI startup Aaru, IPG tested creative concepts and predicted how consumers might respond to them weeks later in retail settings.</p><p>Horizon Media built a custom analysis inside its Blu platform for its Spectrum pitch, combining the telecom company&#8217;s public market data with the agency&#8217;s own spine of 260 million U.S. consumer profiles. Teams could ask questions like &#8220;Which audiences are most likely to churn in Los Angeles?&#8221; and receive insights that previously would have taken weeks.</p><p>Fig used its proprietary Story Data platform to win Tropicana&#8217;s creative consolidation, mapping publicly available creative assets across the juice category to show how formulaic the category had become&#8212;and making the case for something different.</p><h2>What Changed</h2><p>The speed advantage is obvious. Agencies can now do in hours what used to take weeks. But three deeper shifts are reshaping the competitive dynamics.</p><p><strong>The tool itself becomes the differentiator.</strong> When agencies build proprietary AI capabilities, they create what one strategist called &#8220;a value moat.&#8221; Clients know they can&#8217;t DIY what&#8217;s being offered. Independent agencies, often more agile than holding company networks, are capitalizing by building custom tools rather than licensing generic platforms.</p><p><strong>Predictions replace recommendations.</strong> The agencies winning pitches aren&#8217;t just offering strategies&#8212;they&#8217;re offering predictions. &#8220;That&#8217;s amazing because that&#8217;s a predictor of behavior, not just a predictor of response,&#8221; Bayer said about IPG&#8217;s approach. The shift from &#8220;here&#8217;s what we recommend&#8221; to &#8220;here&#8217;s what will happen&#8221; changes the client conversation entirely.</p><p><strong>Speed becomes proof of capability.</strong> Bayer&#8217;s pitch included briefs where agencies were expected to deliver creative work by end of day. 
The ability to produce that quickly&#8212;with AI assistance&#8212;became part of the evaluation criteria.</p><h2>The Holding Company Calculation</h2><p>The Omnicom-IPG merger&#8212;valued at roughly $8.9 billion&#8212;is explicitly framed around AI and data capabilities. The combined entity expects $750 million in annual cost savings while creating a revenue base that rivals Accenture Song. Omnicom leadership has cited precision marketing and data-driven capabilities as the biggest areas of synergy.</p><p>But the merger also reflects defensive pressure. Digital platforms like Google, Meta, and Amazon increasingly enable brands to bypass agencies entirely, controlling over 50% of global digital ad spending. Consultancies like Accenture challenge agencies by offering integrated solutions that combine strategy, technology, and execution. IPG&#8217;s CEO initiated strategic talks after recognizing that creative strength alone couldn&#8217;t offset eroding client budgets.</p><p>The question for clients: does bigger actually mean better? Industry commentary suggests skepticism. &#8220;The brands are the storefront, because clients still like to hire brands,&#8221; said one agency president, &#8220;but there will be little customization, more principal buying within media, and continued resource pressure.&#8221; Integration adds bureaucracy, complexity, and delays that can undermine the responsiveness modern marketing demands.</p><h2>The Independent Opportunity</h2><p>As holding companies consolidate, independent and challenger networks are positioning differently. Stagwell has set a goal to double revenue to $5 billion by 2029 through tech-driven acquisitions&#8212;its Marketing Cloud division posted 31% year-over-year growth. Attivo has acquired established agencies that larger holding companies could no longer efficiently manage, including Hill Holliday and Deutsch NY from IPG.</p><p>The pitch to clients: we&#8217;re faster, more flexible, and more willing to build custom solutions rather than forcing you into scaled platforms.</p><p>For AI-specialized agencies, valuations are climbing. Agencies with expertise in the industry&#8217;s current favorite technology are expecting significant interest. But the pressure cuts both ways. As one industry observer noted, &#8220;If last year&#8217;s deals were any clue, agency M&amp;A will stay hot&#8212;driven by performance, data, and tech integration.&#8221;</p><h2>What Clients Should Ask</h2><p>If you&#8217;re evaluating agencies, the AI questions have changed from &#8220;do you use AI?&#8221; to more specific probes:</p><p><strong>What&#8217;s proprietary versus licensed?</strong> Agencies using the same off-the-shelf tools as everyone else can&#8217;t claim differentiation. Ask what they&#8217;ve built themselves.</p><p><strong>How does AI affect pricing?</strong> If AI dramatically reduces production time, that should show up somewhere&#8212;either in lower costs or more deliverables.</p><p><strong>What predictions can you make?</strong> The best AI applications aren&#8217;t just faster&#8212;they&#8217;re predictive. Ask for examples of predictions that proved accurate.</p><p><strong>How do you handle creative homogenization?</strong> When everyone pulls from the same data and feeds similar prompts into similar tools, output converges. What&#8217;s the approach to maintaining distinctiveness?</p><p>The pitch has changed. 
The agencies winning are the ones that show up with tools their competitors don&#8217;t have&#8212;and predictions their competitors can&#8217;t make.</p>]]></content:encoded></item><item><title><![CDATA[What Happens to Cities When Nobody Needs to Park?]]></title><description><![CDATA[Robotaxis are expanding faster than predicted&#8212;and the urban economics implications go far beyond transportation]]></description><link>https://www.datatechandtools.com/p/what-happens-to-cities-when-nobody</link><guid isPermaLink="false">https://www.datatechandtools.com/p/what-happens-to-cities-when-nobody</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Thu, 27 Nov 2025 20:34:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In early 2023, only a thin majority of San Franciscans supported robotaxis. Today, two-thirds favor them. Waymo now operates in ten U.S. cities with plans to add at least a dozen more by end of 2026&#8212;including Miami, Washington D.C., Dallas, Denver, and its first international market, London. Tesla has launched limited robotaxi service in Austin. Zoox is offering rides in San Francisco and Las Vegas. The industry that was perpetually &#8220;a few years away&#8221; is suddenly operating at scale.</p><p>The transportation story is obvious: driverless rides that cost about a third more than Uber today, but could eventually become far cheaper without driver wages to pay. What&#8217;s less obvious&#8212;and arguably more consequential&#8212;is what happens to urban real estate, city planning, and local economies when widespread car ownership becomes optional.</p><h2>The Parking Math</h2><p>The average American city dedicates about a quarter of its downtown land to parking. Los Angeles County alone had nearly 10 million off-street, non-residential parking spaces as of 2010, covering 200 square miles&#8212;an area larger than Denver.</p><p>This isn&#8217;t market demand. It&#8217;s policy. Starting in the 1950s and 1960s, cities mandated minimum parking requirements for every new building: one space per apartment, one spot for every three restaurant seats, one for every 175 square feet of retail. The regulations assumed car ownership would only grow and that parking needed to be supplied, by law, at the point of destination.</p><p>The results are visible in every American city: parking lots that dominate downtowns, parking structures that consume the first several floors of residential buildings, surface lots that sit empty 95% of the time but legally cannot be used for anything else.</p><p>Robotaxis change this math. If people don&#8217;t own cars, they don&#8217;t need parking spaces at their destination. They don&#8217;t need garages at home. A robotaxi drops them off, drives to its next pickup, and the land that would have been dedicated to storage becomes available for... something else.</p><h2>The Real Estate Opportunity</h2><p>Denver recently studied what would happen if it eliminated parking minimums for new construction. The modeling projected a 12.5% increase in multifamily housing production&#8212;roughly 460 additional units per year. In August 2025, the Denver City Council eliminated the requirements.</p><p>The logic is straightforward. 
Parking spaces cost $8,000-$50,000 each to build, depending on whether they&#8217;re surface or structured, and land values. Those costs get passed to renters and buyers. In Seattle, after parking reform, 60% of new development would not have been possible under old regulations. A study of New York City neighborhoods found that more low-income housing was built in areas where parking requirements were reduced.</p><p>This is happening now, before robotaxis are ubiquitous. In Menlo Park, California, three downtown parking lots are being converted to affordable housing. In Philadelphia, a Queen Village surface lot is becoming a 157-unit apartment building. In Portland, a condo development swapped dedicated parking for car-share memberships instead.</p><p>But robotaxis accelerate the timeline. If car ownership in cities drops meaningfully&#8212;projections range from 35-50% declines in North America and Europe over the next two decades&#8212;the land currently devoted to parking becomes dramatically overbuilt. Every parking structure becomes a stranded asset. Every surface lot becomes a redevelopment opportunity.</p><h2>The Complexity Nobody Wants to Talk About</h2><p>This sounds like a planning dream: reclaim parking land for housing, offices, parks. Make cities denser, more walkable, more efficient. The reality is messier.</p><p><strong>Traffic might get worse before it gets better.</strong> Robotaxis don&#8217;t eliminate cars&#8212;they potentially make car travel more attractive by removing the need to drive. Without congestion pricing, the result could be gridlock as vehicles circle continuously rather than parking. The personal inconvenience of driving currently constrains demand. Robotaxis remove that constraint.</p><p>An economist&#8217;s solution is straightforward: price traffic. But congestion charges are deeply unpopular in the U.S. New York&#8217;s road fee took years of political fighting to implement. Cities may need to frame it differently&#8212;&#8221;robot taxes&#8221; have a different political valence than traditional congestion pricing.</p><p><strong>Suburbs may sprawl further.</strong> Longer commutes become more tolerable if you can work, sleep, or watch TV during them. Some robotaxi vehicles are already being designed with beds. The pressure on public transit could become severe: why take a bus if a robotaxi is equally cheap and more convenient? Cities could face a &#8220;death spiral&#8221; where fewer transit riders mean less revenue, worse service, fewer riders, in an accelerating loop.</p><p><strong>The transition hits workers hard.</strong> The U.S. has about a million taxi and bus drivers and over 3 million truck drivers&#8212;roughly 3% of the working population. As robotaxi costs come down, those jobs don&#8217;t evolve; they disappear. Personal injury lawyers face reduced demand without car accidents. Auto dealers and used-car salesmen lose customers if people stop buying. The new jobs&#8212;fleet managers, depot workers, AI technicians&#8212;will hardly make up the losses.</p><h2>Who Benefits, Who Loses</h2><p>The distributional effects are uneven in ways that matter for marketers and business planners.</p><p><strong>Cities benefit more than suburbs.</strong> Dense urban areas with good robotaxi coverage become dramatically more convenient. 
Suburban and rural areas may never have the population density to support robotaxi networks at competitive prices&#8212;car ownership remains necessary.</p><p><strong>The elderly and disabled gain mobility.</strong> A frequently overlooked benefit: robotaxis provide independence for those who can&#8217;t drive. This is a large and growing population as demographics shift.</p><p><strong>Young urban professionals are the early adopters.</strong> They already rely on rideshare services and are &#8220;car-free city dwellers already tired of expensive, unreliable human-driven alternatives,&#8221; as one analysis put it. Families with children&#8212;about 25% of households&#8212;are slower to abandon car ownership; robotaxis can&#8217;t guarantee car seats or cleanliness standards.</p><p><strong>Real estate values shift.</strong> Properties near robotaxi hubs become more valuable. Parking garages become liabilities unless converted. Retail patterns change when destinations don&#8217;t need adjacent parking. The businesses that thrived because they had good parking access may find that advantage neutralized.</p><h2>The Policy Window</h2><p>Cities have a brief window&#8212;perhaps five to ten years&#8212;to shape how this transition unfolds. The choices made now about congestion pricing, parking reform, transit investment, and zoning will determine whether robotaxis make cities better or just different.</p><p>The optimistic scenario: freed parking land becomes housing and green space; traffic decreases as shared vehicles prove more efficient than individually-owned cars; public transit is supplemented, not replaced, by autonomous shuttles; cities become walkable and bikeable as traffic accidents decline.</p><p>The pessimistic scenario: gridlock worsens as induced demand overwhelms road capacity; suburbs sprawl as long commutes become bearable; transit dies a slow death of neglected funding; cities stratify further between those who can afford robotaxi-rich neighborhoods and those stuck in underserved areas.</p><p>Neither scenario is inevitable. Both are plausible.</p><h2>For Business Leaders</h2><p>If you&#8217;re making decisions about real estate, retail location, employee benefits, or customer accessibility, the robotaxi transition changes your calculations.</p><p><strong>Location strategy shifts.</strong> &#8220;Good parking&#8221; becomes less important than proximity to where robotaxis operate most efficiently. Downtown and dense urban cores may become relatively more attractive.</p><p><strong>Employee commute assumptions change.</strong> The cost of getting workers to offices changes&#8212;potentially lower if robotaxis become cheaper than car ownership, potentially more variable as service quality differs by geography.</p><p><strong>Customer access broadens.</strong> Customers who couldn&#8217;t drive to your location&#8212;the elderly, disabled, car-free young people&#8212;become accessible through robotaxi networks. That&#8217;s both an opportunity and a mandate to rethink accessibility.</p><p><strong>The transition is uneven.</strong> Not every city gets robotaxis at the same time. Service quality differs. The patchwork rollout means national strategies need local nuance.</p><h2>The Honest Timeline</h2><p>Robotaxis are real and expanding. But &#8220;expanding&#8221; doesn&#8217;t mean &#8220;universal&#8221; anytime soon. Waymo&#8217;s goal of 1 million trips per week by end of 2026 is impressive but still represents a tiny fraction of U.S. transportation. 
Tesla&#8217;s robotaxi service still has humans in the passenger seat. The technology works in some conditions, not all; scaling to every city and weather pattern takes years.</p><p>The comparison to the automobile&#8217;s impact is instructive. Cars were invented in the 1880s. The car-oriented city&#8212;with its arterial roads, parking lots, and suburban sprawl&#8212;didn&#8217;t reach its full form until decades later. The robotaxi era will similarly unfold over years, with different cities adapting at different speeds.</p><p>But the changes are coming. The question isn&#8217;t whether to prepare but how quickly to act on the preparation.</p>]]></content:encoded></item><item><title><![CDATA[When the Chatbot Becomes the Checkout Counter: AI Commerce Is Here Faster Than Anyone Expected]]></title><description><![CDATA[From ChatGPT's Shopify integration to Walmart's AI catalog, the distance between product discovery and purchase just collapsed&#8212;and brands have weeks, not years, to adapt]]></description><link>https://www.datatechandtools.com/p/when-the-chatbot-becomes-the-checkout</link><guid isPermaLink="false">https://www.datatechandtools.com/p/when-the-chatbot-becomes-the-checkout</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Wed, 26 Nov 2025 20:33:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In late 2025, OpenAI quietly did something that should have every brand marketer&#8217;s full attention: it turned ChatGPT into a checkout counter. Users in the U.S. can now ask for product recommendations, see real-time inventory from over a million Shopify merchants, and complete purchases without ever leaving the chat. No website visit. No clicking through to Amazon. Just conversation, decision, buy.</p><p>Walmart followed weeks later, integrating its entire product catalog into ChatGPT&#8217;s shopping experience&#8212;270 million weekly customers now discoverable through conversational AI.</p><p>This isn&#8217;t a product feature announcement. It&#8217;s a structural shift in how consumers find and buy products. And the window for brands to prepare is measured in months, not years.</p><h2>The Mechanics of Agentic Commerce</h2><p>OpenAI calls this &#8220;agentic commerce&#8221;&#8212;where AI doesn&#8217;t just answer questions about products but actively facilitates the transaction. The technology stack powering it is worth understanding.</p><p>At the core is the Agentic Commerce Protocol, co-developed with Stripe. It creates a standardized language for AI agents to communicate with merchant systems about product feeds, inventory, checkout, and payments. When someone asks ChatGPT &#8220;What are the best eco-friendly yoga mats under $50?&#8221;, the system queries merchant databases in real-time, surfaces relevant products, and&#8212;if the user wants&#8212;processes the order without a single redirect.</p><p>Product results are organic and unsponsored, ranked on relevance. ChatGPT acts as the user&#8217;s agent&#8212;a digital personal shopper&#8212;passing information securely between consumer and merchant. 
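</p><p>The protocol itself is open-sourced, so the real message formats are published; the field names below are hypothetical, meant only to illustrate the kind of structured exchange involved. A merchant exposes a machine-readable product entry, and the assistant sends back a checkout request that a small merchant-side handler can confirm, roughly like this:</p><pre><code># Hypothetical sketch of an agentic-commerce exchange. Field names are
# illustrative assumptions, not the actual Agentic Commerce Protocol schema.
product_feed_item = {
    "id": "yogamat-eco-01",
    "title": "Eco-Friendly Cork Yoga Mat",
    "description": "Non-slip cork surface on a natural rubber base ...",
    "price": {"amount": 4495, "currency": "USD"},      # minor units (cents)
    "inventory": {"in_stock": True, "quantity": 230},
    "attributes": {"material": "cork", "length_cm": 183},
}

checkout_request = {
    "items": [{"product_id": "yogamat-eco-01", "quantity": 1}],
    "buyer_token": "opaque-token-from-assistant",      # shopper details stay with the agent
    "payment": {"provider": "stripe", "method": "delegated_payment_token"},
}

def handle_checkout(request, catalog):
    """Toy merchant-side handler: check stock and return an order confirmation."""
    line = request["items"][0]
    product = catalog[line["product_id"]]
    if not product["inventory"]["in_stock"]:
        return {"status": "rejected", "reason": "out_of_stock"}
    total = product["price"]["amount"] * line["quantity"]
    return {"status": "confirmed", "total": {"amount": total, "currency": "USD"}}

print(handle_checkout(checkout_request, {"yogamat-eco-01": product_feed_item}))
</code></pre><p>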
Merchants pay a small commission on completed purchases, but neither consumers nor product rankings are affected.</p><p>The experience removes friction that has defined online shopping for decades: browsing, comparison shopping, adding to cart, creating accounts, entering payment info. For consumers already comfortable with AI assistants, the appeal is obvious.</p><h2>Why This Time Is Different</h2><p>Skeptics might point out that conversational commerce has been promised before. Voice shopping through Alexa never gained meaningful traction. Chatbots on retail sites have mostly been glorified FAQ bots.</p><p>Three factors make this different.</p><p><strong>Scale.</strong> ChatGPT has 700 million weekly users. That&#8217;s not a niche audience&#8212;it&#8217;s a distribution channel that rivals the largest retail platforms. When Walmart announced its integration, it noted that customers would be able to discover and purchase from its inventory within those 700 million conversations.</p><p><strong>Behavior shift.</strong> According to Adobe research, 39% of U.S. consumers who have used generative AI have already used it for online shopping, and 53% plan to do so. They&#8217;re using AI for product research and recommendations&#8212;exactly the use cases that lead naturally into transactions.</p><p><strong>Infrastructure readiness.</strong> Shopify powers millions of merchants. Stripe handles payments globally. The Agentic Commerce Protocol is open-sourced, meaning any platform can build integrations. Microsoft&#8217;s Copilot launched its Merchant Program in April 2025. Perplexity introduced one-click purchasing through its search engine. The ecosystem is aligning around in-AI commerce as a standard rather than an experiment.</p><h2>What Brands Must Do Now</h2><p>For most brands, this represents a fundamental channel addition&#8212;one that requires different optimization than traditional search or paid social. Several priorities emerge.</p><p><strong>Optimize product data for AI comprehension.</strong> Large language models don&#8217;t read product pages the way humans do. They need structured, consistent data: accurate titles, detailed descriptions (ChatGPT&#8217;s protocol allows 5,000 characters), complete attribute information, real-time inventory status, and consistent pricing across channels.</p><p>As one commerce consultant noted, AI systems need &#8220;very deep content, technical content of the product.&#8221; If your product feed was built for Google Shopping and hasn&#8217;t been updated since, it probably won&#8217;t perform well in AI discovery.</p><p><strong>Maintain price consistency.</strong> AI agents comparison shopping on behalf of consumers will surface the cheapest option. If your pricing varies wildly across channels, you risk being deprioritized or, worse, training AI systems to see your brand as overpriced. One practitioner warned: &#8220;If you don&#8217;t have a constraint on your pricing model across channels, you run the risk of a future agentic bot...being able to find that and locate that.&#8221;</p><p><strong>Invest in your own AI experience.</strong> Some brands are building AI concierges on their own sites&#8212;assistants that have richer context than general-purpose AI can provide. 
This creates a reason for customers to engage directly rather than transacting entirely within ChatGPT.</p><p>AKQA, the agency, has been helping luxury retailers develop exactly this: AI assistants refined with proprietary data that offer more personalized recommendations than what&#8217;s available through external platforms. As their CTO put it: &#8220;If you&#8217;re just building MCPs for the LLMs to access, you might just lose that connection with your end consumers.&#8221;</p><p><strong>Monitor your AI visibility.</strong> Tools are emerging to track brand mentions and product appearances in AI responses. Ahrefs recently launched Brand Radar, which monitors ChatGPT, Perplexity, and soon Gemini. Understanding where you appear&#8212;and for what queries&#8212;is the new SEO.</p><h2>The Publisher Problem Comes for Commerce</h2><p>There&#8217;s a darker angle worth acknowledging. Publishers have spent the past two years worrying that AI will eliminate the need for consumers to visit their websites&#8212;the &#8220;zero-click search&#8221; phenomenon. That fear is now arriving for commerce.</p><p>If ChatGPT can answer &#8220;What running shoes should I buy?&#8221; and complete the transaction in the same interface, does the brand website matter? Does the carefully designed product detail page serve any purpose?</p><p>The early evidence is mixed. USA Today&#8217;s experience with its DeeperDive AI chatbot suggests that in-AI ads are possible but not yet delivering strongly contextual results. Taboola-powered recommendations in that chatbot often showed irrelevant sponsored content&#8212;a flashlight ad after a lipstick query, for example.</p><p>For brands with strong direct customer relationships, in-AI commerce might actually be good: another distribution point without the need to build new infrastructure. For brands that differentiated through website experience and customer service, the disintermediation is a genuine threat.</p><h2>The Unit Economics Question</h2><p>OpenAI&#8217;s financial situation complicates this story. The company projects losses of $44 billion through 2029 before reaching profitability. Revenue is growing rapidly&#8212;toward $200 billion by 2030, the company says&#8212;but costs are growing just as fast. Only 5% of ChatGPT&#8217;s 800 million users pay for subscriptions.</p><p>Commerce commissions could become a meaningful revenue stream, but the pressure to monetize raises questions about how product rankings will evolve. OpenAI says results are currently unsponsored and organic. Will that hold as the company faces pressure to close its cash burn?</p><p>For brands, this uncertainty means hedging. Build for AI commerce as a channel, but don&#8217;t bet everything on it remaining open and neutral. The history of platforms&#8212;from Facebook&#8217;s organic reach decline to Amazon&#8217;s pay-to-play search results&#8212;suggests that early openness often gives way to monetization that favors larger advertisers.</p><h2>What Happens to the Funnel?</h2><p>Traditional marketing funnels assumed distinct phases: awareness, consideration, intent, purchase. AI commerce collapses these into a single interaction. Someone asks &#8220;What&#8217;s a good anniversary gift for my wife who likes gardening?&#8221; and, in the same conversation, selects a product and buys it.</p><p>This changes the role of brand advertising. 
If the consideration and purchase phases happen inside an AI conversation, awareness-building becomes both more important (you need to be in the AI&#8217;s knowledge base) and harder to measure (how do you attribute a sale to brand advertising when the transaction happened entirely in chat?).</p><p>Content strategy shifts too. The question isn&#8217;t just whether your product page is optimized for Google&#8212;it&#8217;s whether your brand&#8217;s presence across the web has trained AI systems to recommend you for the right queries. That&#8217;s a much harder problem to solve.</p><h2>The Honest Assessment</h2><p>AI commerce is real, it&#8217;s here, and it will affect how consumers discover and buy products. But the hype should be tempered by practical realities: AI shopping still represents a small fraction of total commerce, the technology has friction points, and the platforms are still figuring out monetization.</p><p>The brands that will navigate this best are those that treat AI commerce as a new channel requiring specific optimization&#8212;not a replacement for everything else, and not something to ignore until it&#8217;s too big to catch up on.</p><p>The checkout counter has moved into the conversation. Whether that&#8217;s opportunity or threat depends entirely on how quickly you adjust.</p>]]></content:encoded></item><item><title><![CDATA[The Miserable Spender: Why Consumer Feelings and Consumer Wallets Have Parted Ways]]></title><description><![CDATA[The economy's most confusing signal has advertisers and retailers rethinking everything they know about measuring demand]]></description><link>https://www.datatechandtools.com/p/the-miserable-spender-why-consumer</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-miserable-spender-why-consumer</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Tue, 25 Nov 2025 20:31:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The University of Michigan&#8217;s consumer sentiment index just fell to near its lowest point since tracking began in 1952. Americans are telling pollsters they feel terrible about their job prospects, anxious about inflation, and ready to cut back on spending. And yet&#8212;retail sales are up. Travel is at record levels. Live Nation reported its highest-ever concert ticket sales in early 2025. People are somehow both miserable and swiping their credit cards.</p><p>For marketers and business leaders, this disconnect isn&#8217;t just a curiosity&#8212;it&#8217;s upending decades of conventional wisdom about how to read consumer demand.</p><h2>When Sentiment Stopped Predicting Spending</h2><p>Consumer sentiment surveys have been a staple of economic forecasting since the 1950s. The logic was simple and intuitive: ask people how they feel about the economy, and you&#8217;ll get a pretty good indication of what they&#8217;ll do with their money. For most of modern economic history, that held true.</p><p>Not anymore. As the Federal Reserve&#8217;s Kansas City branch noted in recent research, the link between consumer sentiment and actual spending growth has become &#8220;modest&#8221; at best. Fed Chair Jerome Powell acknowledged this directly: &#8220;The link between sentiment data and consumer spending has been weak. 
It&#8217;s not been a strong link at all.&#8221;</p><p>The divergence started during the pandemic and has only widened. In June 2022, when the sentiment index hit its all-time low amid raging inflation, Americans were still spending at a healthy clip. In 2023, during a Congressional standoff that cratered confidence, consumers went to concerts and took vacations anyway.</p><h2>The Income Story Behind the Headline Number</h2><p>The explanation starts with a methodological quirk that matters enormously for marketers: sentiment surveys treat all respondents equally, but spending is anything but equal.</p><p>Research from the Boston Fed reveals just how lopsided consumption has become. The top fifth of earners&#8212;households making $121,000 or more&#8212;now generate spending seven times greater than the bottom fifth. And here&#8217;s the crucial detail: high-income consumers have accumulated significant room on their credit cards relative to 2019, while lower-income households are carrying debt loads well above pre-pandemic levels.</p><p>Bank of America&#8217;s internal data tells a similar story. Households in the top 5% by income grew their luxury spending by 10.5% year-over-year, particularly on international shopping and high-end hotel stays. Meanwhile, chains that serve budget-conscious consumers&#8212;Chipotle, Home Depot&#8212;have reported softening from lower-income customers.</p><p>The wealth effect amplifies this dynamic. The University of Michigan&#8217;s surveys show that consumers with the largest stock holdings posted notably higher sentiment than others, driven by equity markets that continue to hover near record levels. A dollar increase in stock wealth now leads to about 5 cents of additional consumer spending, up from less than 2 cents in 2010, according to Oxford Economics.</p><h2>What This Means for Marketers</h2><p>If you&#8217;re running marketing for any consumer-facing business, the implications are significant.</p><p><strong>Rethink your research approach.</strong> National sentiment numbers may tell you very little about your actual customers. Understanding sentiment at a much more granular level&#8212;by income cohort, geography, and category&#8212;has become essential. As McKinsey&#8217;s ConsumerWise research team put it, the decoupling &#8220;makes it only more important to understand consumer sentiment at a much more granular and detailed level.&#8221;</p><p><strong>Watch for category-specific signals.</strong> The &#8220;lipstick effect&#8221;&#8212;consumers indulging in small luxuries during uncertain times&#8212;appears alive and well, though it&#8217;s manifested in unexpected places. Mass-market fragrance sales are up 17% year-over-year, according to Circana. L&#8217;Or&#233;al&#8217;s CEO recently mused whether to call it &#8220;the smell good fragrance effect.&#8221; Discount retailers like T.J. Maxx are seeing sales bumps from stretched consumers looking to maximize value.</p><p><strong>Premium and value may both be winning&#8212;at the same time.</strong> This isn&#8217;t a traditional bifurcation. It&#8217;s more subtle. LVMH&#8217;s U.S. sales were up 3% in Q3 after declining earlier in the year. Meanwhile, Numerator data shows both those earning over $100,000 and those under $60,000 increased spending&#8212;just at different rates (4.3% versus 3.8%). 
The middle isn&#8217;t disappearing; spending is just distributing unevenly.</p><p><strong>Holiday planning requires new assumptions.</strong> According to PwC&#8217;s 2025 Holiday Outlook, consumers expect to reduce seasonal spending by 5%&#8212;the first notable drop since 2020. But Gen Z respondents project cutting budgets by 23%, far exceeding other generations. Deloitte&#8217;s holiday survey found 77% of shoppers expect higher prices on holiday goods, and 57% expect the economy to weaken&#8212;the most negative outlook since 1997. Yet spending has remained surprisingly resilient in recent holiday seasons despite similar pessimism.</p><h2>The Dangerous Middle Ground</h2><p>What worries economists isn&#8217;t the current disconnect&#8212;it&#8217;s what happens when it resolves. For now, high earners are propping up the consumer economy even as sentiment converges negatively across all income groups. As Joanne Hsu, director of the University of Michigan&#8217;s consumer surveys, warned: high-income consumers are now &#8220;very worried about the trajectory of inflation, about business conditions, unemployment.&#8221; If they start to pull back, it&#8217;s hard to see how consumer spending can keep growing.</p><p>The generational angles add another layer. Younger consumers are increasingly willing to take on debt for experiences and goods despite financial stress&#8212;a &#8220;YOLO&#8221; spending pattern that prioritizes present enjoyment over traditional milestones like homeownership. Credit card usage among Gen Z shoppers overtook debit card usage in mid-2024, according to J.P. Morgan data. Whether that represents evolved preferences or delayed consequences remains unclear.</p><h2>Stop Trusting the Headline</h2><p>The most practical advice for any marketer or business strategist: stop trusting the headline sentiment number. It&#8217;s telling you what people <em>say</em> they feel, which has become divorced from what they <em>do</em>. The real work is understanding who your actual customer is, what their financial position looks like, and what trade-offs they&#8217;re making.</p><p>Consumer sentiment will eventually align with consumer spending&#8212;either through people feeling better, or through people finally closing their wallets. Brands that have built their 2026 plans around pessimistic headlines may find themselves underinvested when the market stabilizes. Those that have ignored the warnings entirely may be caught flat-footed if the convergence goes the other way.</p><p>The honest answer is that we&#8217;re in uncharted territory. 
Feelings and spending have parted ways, and neither traditional forecasting models nor gut instinct can tell us when they&#8217;ll reunite&#8212;or which one will have to move.</p>]]></content:encoded></item><item><title><![CDATA[The Online Video Mistake Everyone's Making]]></title><description><![CDATA[Why pulling back from OLV means breaking your funnel&#8212;not fixing it]]></description><link>https://www.datatechandtools.com/p/the-online-video-mistake-everyones</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-online-video-mistake-everyones</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Mon, 24 Nov 2025 17:21:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Marketers are quietly retreating from online video (OLV). Not loudly, not all at once, but consistently. The IAB Tech Lab&#8217;s updated video classification standards created confusion about what qualifies as &#8220;instream&#8221; versus &#8220;standalone&#8221; inventory. In response, buyers are rejecting inventory they&#8217;ve purchased happily for years simply because it now carries a different classification label.</p><p>This is a mistake. Online video hasn&#8217;t lost effectiveness. It&#8217;s lost some legacy terminology. There&#8217;s a meaningful difference.</p><p>Publishers report buyers blocking inventory that performed well last quarter because updated classifications suggest it&#8217;s not &#8220;premium&#8221; enough. The inventory hasn&#8217;t changed. Performance metrics haven&#8217;t changed. Only the label changed.</p><p>Marketers responding to taxonomy shifts rather than performance signals risk breaking funnels that work.</p><h2>What Actually Changed</h2><p>The IAB Tech Lab updated video placement classifications to bring clarity to a historically messy taxonomy. Previous definitions of &#8220;instream,&#8221; &#8220;outstream,&#8221; &#8220;interstitial,&#8221; and &#8220;standalone&#8221; varied by vendor interpretation. Publishers classified inventory one way, DSPs classified it differently, measurement vendors used third definitions.</p><p>The new standards create consistency. Good for the industry long-term. But implementation created short-term problems.</p><p>Inventory previously labeled &#8220;instream&#8221;&#8212;video ads appearing in video content&#8212;might now classify as &#8220;interstitial&#8221; or &#8220;standalone&#8221; depending on specific placement characteristics. The ad itself didn&#8217;t change. The viewer experience didn&#8217;t change. The performance didn&#8217;t change. The classification changed.</p><p>Buyers with targeting strategies built around &#8220;instream only&#8221; suddenly found their campaigns couldn&#8217;t access inventory they relied on. Rather than adjusting strategies to new classifications, many simply excluded the reclassified inventory.</p><p>This overcorrects. If inventory performed well under the old classification and nothing substantive changed except the label, it should still perform well.</p><h2>The CTV False Choice</h2><p>Some marketers interpret the OLV pullback as strategic shift toward CTV. Connected TV is premium, brand-safe, scales effectively&#8212;everything marketers want. 
Why bother with online video?</p><p>This framing creates a false choice. CTV and OLV serve different functions even when both deliver video advertising.</p><p>CTV excels at top-of-funnel brand building. It reaches audiences in lean-back environments where attention is high and context is premium. Perfect for awareness and consideration.</p><p>But CTV has limitations for mid-funnel and lower-funnel objectives. Targeting capabilities are less granular than digital video. Frequency management is harder. Creative testing is slower. Optimization happens on longer cycles.</p><p>OLV fills these gaps. It offers precise targeting, real-time optimization, immediate creative testing, and granular frequency control. These capabilities matter for moving audiences from awareness to action.</p><p>The most effective video strategies use both. CTV for broad reach and brand impact. OLV for targeted follow-through and conversion support. Together they create full-funnel coverage that neither accomplishes alone.</p><p>Choosing between them means accepting incomplete funnel coverage.</p><h2>The Social Saturation Problem</h2><p>When marketers pull back from OLV, many redirect spend to social video&#8212;TikTok, Instagram, YouTube. Social platforms promise real-time optimization and proven performance.</p><p>But many brands are already overinvested in social. They&#8217;ve pushed spend to the point of diminishing returns, saturating their addressable audiences.</p><p>Social platforms also create specific challenges:</p><ul><li><p><strong>Attribution blind spots.</strong> Social platforms control measurement. Third-party verification is limited. Brands can&#8217;t independently verify reported performance with the same rigor as programmatic video.</p></li><li><p><strong>Walled garden limitations.</strong> Data stays within platforms. You can&#8217;t use social learnings to optimize other channels or integrate into broader marketing analytics.</p></li><li><p><strong>Creative constraints.</strong> Social formats demand specific creative approaches. Content optimized for TikTok doesn&#8217;t translate to display, CTV, or traditional video. This creates parallel creative workflows and limits asset reuse.</p></li><li><p><strong>Platform dependency.</strong> Overreliance on any single platform creates risk. Algorithm changes, policy shifts, or pricing increases immediately impact results with limited alternative options.</p></li></ul><p>Shifting OLV budget to social when social is already saturated doesn&#8217;t solve problems. It compounds them.</p><h2>What OLV Actually Does</h2><p>Online video&#8217;s role in modern marketing differs from CTV and social, which is precisely why it matters.</p><ul><li><p><strong>Mid-funnel bridging.</strong> CTV creates awareness. Social drives engagement. OLV connects them by reaching audiences who&#8217;ve seen CTV ads and targeting them with more specific messaging before social conversion paths.</p></li><li><p><strong>Flexible targeting.</strong> OLV allows precise audience segmentation using first-party data, behavioral signals, and contextual targeting. This enables testing and optimization that premium environments don&#8217;t support.</p></li><li><p><strong>Creative experimentation.</strong> Faster iteration cycles mean you can test messaging, offers, and creative approaches more quickly than in CTV or traditional social. 
Learnings inform broader strategy.</p></li><li><p><strong>Scalable reach.</strong> While individual OLV placements offer less reach than major CTV inventory, aggregate scale is substantial. For campaigns needing volume beyond premium inventory, OLV provides necessary capacity.</p></li><li><p><strong>Cost efficiency.</strong> CPMs are lower than CTV, making OLV effective for campaigns requiring frequency or where awareness is already established and the goal is reinforcement or conversion support.</p></li></ul><p>These aren&#8217;t capabilities CTV or social provide as effectively. Eliminating OLV means losing these functions from your marketing mix.</p><h2>The Infrastructure Evolution</h2><p>The classification confusion is real but solvable. New tools address the complexity:</p><ul><li><p><strong>Curated marketplaces.</strong> These provide pre-vetted, high-quality inventory organized by campaign objectives rather than technical classifications. Buyers access contextually relevant inventory without managing classification details.</p></li><li><p><strong>video.plcmt specifications.</strong> This OpenRTB protocol allows granular inventory description including format, screen type, context, and publisher preferences. Buyers can define needs precisely without relying solely on broad classifications.</p></li><li><p><strong>AI-powered buying.</strong> Automated systems handle classification complexity, optimizing toward performance objectives rather than manually managing placement types. This reduces operational burden and improves results.</p></li></ul><p>Together, these tools make OLV easier to execute even with more complex taxonomy.</p><h2>What Actually Matters</h2><p>Here&#8217;s what should guide OLV decisions:</p><ul><li><p><strong>Performance, not classification.</strong> Does the inventory drive your KPIs? If yes, buy it regardless of how it&#8217;s classified.</p></li><li><p><strong>Audience fit, not format purity.</strong> Does it reach the right people in the right context? That matters more than whether it technically qualifies as &#8220;instream.&#8221;</p></li><li><p><strong>Funnel coverage, not channel concentration.</strong> Do you have full-funnel video coverage? If pulling OLV creates gaps, performance suffers even if you optimize other channels.</p></li><li><p><strong>Cost per outcome, not cost per placement.</strong> Lower CPMs don&#8217;t matter if conversion rates also drop. Higher CPMs don&#8217;t matter if conversion rates improve enough. Optimize for total cost per acquisition or other business outcomes.</p></li><li><p><strong>Strategic flexibility, not operational simplicity.</strong> Managing fewer channels is easier but potentially less effective. Complexity should be managed, not eliminated if it drives results.</p></li></ul><h2>Moving Forward</h2><p>For brands currently reconsidering OLV:</p><ul><li><p><strong>Audit actual performance.</strong> Before cutting spend, confirm that the inventory being eliminated actually underperforms. Don&#8217;t make decisions based on classification changes alone.</p></li><li><p><strong>Test systematically.</strong> If you&#8217;re uncertain about OLV effectiveness, run controlled experiments comparing performance with and without it. Make evidence-based decisions.</p></li><li><p><strong>Work with partners who understand taxonomy.</strong> Many DSPs and agencies have adapted to new classifications. They can access high-performing inventory even with updated standards. 
Partner with those who&#8217;ve solved the technical challenges.</p></li><li><p><strong>Maintain full-funnel strategy.</strong> Ensure video strategy covers awareness, consideration, and conversion. OLV often plays the consideration role that CTV and social don&#8217;t fill as effectively.</p></li><li><p><strong>Communicate with vendors.</strong> If specific publishers or inventory sources have been reclassified but historically performed well, work with them to understand what changed and whether it affects quality. Often the answer is nothing substantive changed.</p></li></ul><p>The biggest risk isn&#8217;t taxonomy complexity. It&#8217;s responding to complexity by eliminating effective channels rather than adapting strategies to new classifications.</p><p>Classification standards exist to improve transparency and consistency. But they&#8217;re means to an end&#8212;better marketing performance&#8212;not ends in themselves. When standards change, adjust tactics to maintain access to effective inventory rather than abandoning that inventory because labels changed.</p><p>Online video remains a valuable part of full-funnel video strategy. The infrastructure for buying it effectively is improving, not deteriorating. The challenge is navigating a transition period where new classifications haven&#8217;t yet been fully integrated into buying workflows.</p><p>That&#8217;s a solvable problem, not a reason to fundamentally alter strategy.</p>]]></content:encoded></item><item><title><![CDATA[The Token Trap: Why AI's Favorite Metric Doesn't Mean What You Think]]></title><description><![CDATA[How rising token counts became the new "eyeballs"&#8212;and why that should worry investors]]></description><link>https://www.datatechandtools.com/p/the-token-trap-why-ais-favorite-metric</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-token-trap-why-ais-favorite-metric</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sun, 23 Nov 2025 17:05:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During the late 1990s dotcom boom, internet companies justified soaring valuations with metrics like &#8220;eyeballs,&#8221; &#8220;page views,&#8221; and &#8220;unique visitors.&#8221; The underlying assumption: engagement metrics would eventually translate to profits. That assumption proved catastrophically wrong for most. In 2025, AI companies are doing something similar with tokens&#8212;the snippets of text that large language models process. Google reports 1.3 quadrillion tokens processed monthly, an eight-fold increase since February. Alibaba says its token use doubles every few months. OpenAI lists 30 customers each processing over a trillion tokens. These numbers sound impressive. They&#8217;re supposed to signal surging AI adoption and justify the industry&#8217;s spending levels. But the relationship between token growth and actual demand is more complicated than headlines suggest. And the connection between tokens and profits is weaker still.</p><h2>Why Token Counts Are Misleading</h2><p>Three factors drive token growth, only one of which represents genuine increased usage. First, actual adoption. More people using AI tools for more tasks generates more tokens. 
This is the growth everyone wants to see&#8212;it suggests AI is becoming essential to workflows and creating real value. Second, AI integration into existing products. Social media platforms use models to improve recommendations and image quality. Google deploys them for AI Overviews that summarize web pages instead of showing link lists. According to Barclays, these summaries account for over two-thirds of Google&#8217;s total token output. These features may improve user experience, but they don&#8217;t create new revenue. They consume tokens processing tasks that previously happened without AI, adding cost without adding income. Third, model verbosity. As LLMs become more sophisticated, they produce longer answers. EpochAI research finds that average output token counts for benchmark questions have doubled annually for standard models. &#8220;Reasoning&#8221; models that explain their approach step-by-step use eight times more tokens than simpler models&#8212;and their usage is rising five-fold yearly. This trend will accelerate. Newer models are optimized for quality and comprehensiveness, not brevity. They&#8217;re designed to provide detailed, thorough responses. Each improvement in capability tends to increase token generation per query. The result: token counts surge even when actual query volume stays flat. You&#8217;re not necessarily doing more with AI; the AI is just talking more.</p><h2>The Cost Problem</h2><p>Token prices have collapsed. The cost per token to answer a PhD-level science question as proficiently as GPT-4 has fallen about 97% annually, according to industry analysis. You might assume this makes AI cheap. It doesn&#8217;t. Generating responses remains expensive because models keep improving&#8212;and growing. As Wei Zhou of SemiAnalysis notes, even as token prices fall, better and more verbose models mean more tokens must be generated to complete any given task. So the marginal cost of providing AI services doesn&#8217;t approach zero. It stays significant because capability improvements offset price reductions. This creates margin pressure. OpenAI charges developers about $5-15 per million tokens depending on the model. DeepSeek, a Chinese competitor, offers comparable capability at a fraction of that price. According to recent comparisons, DeepSeek&#8217;s pricing can be 10-20x cheaper than OpenAI&#8217;s premium models. When quality differences narrow, price becomes the deciding factor. And many users are increasingly willing to trade slight quality reductions for substantial cost savings. The competitive dynamics look worrying for model providers. They&#8217;re in a race where improving quality requires larger, more expensive models that generate more tokens per query. But pricing pressure from low-cost competitors limits how much they can charge per token. Costs rise while prices fall&#8212;classic margin compression.</p><h2>The Profitability Question</h2><p>Sam Altman has warned OpenAI investors to expect years of heavy losses. The Wall Street Journal reports that by 2028, OpenAI expects operating losses to reach $74 billion&#8212;around three-quarters of projected revenue. Those aren&#8217;t startup losses. That&#8217;s a company spending $4 for every $3 it earns, sustained at scale. The broader industry shows similar patterns. Most AI companies generating substantial token volumes aren&#8217;t profitable on those operations. They&#8217;re burning capital to capture market share, betting that scale will eventually produce sustainable economics. 
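</p><p>A small sketch makes the offset concrete. The figures below are illustrative assumptions in the same ballpark as the numbers cited above (a premium per-token price inside the $5-15 range, the roughly eight-fold token multiplier for reasoning models); they are not actual vendor pricing, and cost_per_task is a made-up helper, not anyone&#8217;s real API.</p><pre><code># Illustrative sketch: why cheaper tokens do not always mean cheaper answers.
# All figures are assumptions for illustration, not actual vendor pricing.

def cost_per_task(tokens_out, price_per_million):
    """Dollar cost to generate one answer."""
    return tokens_out / 1_000_000 * price_per_million

# An older, terser model: short answers, cheap tokens
baseline = cost_per_task(tokens_out=1_500, price_per_million=2.00)

# A frontier "reasoning" model: roughly 8x the tokens per answer
# (the eight-fold figure cited above) at a premium per-token price
reasoning = cost_per_task(tokens_out=1_500 * 8, price_per_million=12.00)

print(f"baseline model:  ${baseline:.4f} per answer")   # $0.0030
print(f"reasoning model: ${reasoning:.4f} per answer")  # $0.1440
print(f"ratio: {reasoning / baseline:.0f}x more per answer")
</code></pre><p>On those assumptions the answer customers actually want costs roughly fifty times more to serve, even though the per-token price of older capability keeps collapsing. That gap is what the bet on scale has to close. 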
This might work if token costs fell dramatically or if pricing power increased substantially. Neither seems likely. Competition from open-source models and low-cost providers like DeepSeek prevents pricing increases. And as models get more capable, they consume more compute per token, preventing dramatic cost reductions. The result: tokens may be the new currency of AI, but they&#8217;re a currency that doesn&#8217;t generate sustainable profits yet.</p><h2>What About Enterprise Sales?</h2><p>Some argue that consumer-facing AI services represent just early adoption, and real revenue will come from enterprise deployments where customers pay premium prices for reliability, security, and support. This thesis has merit. Enterprises do pay more for SaaS tools than consumers pay for equivalent functionality. They value integration, uptime guarantees, and vendor support. But enterprise sales also face token economics challenges. Large companies negotiating annual contracts want predictable costs. Token-based pricing creates unpredictability&#8212;costs can spike if usage patterns change or if models become more verbose. To manage this, enterprise contracts often include token allotments or caps. The vendor essentially pre-sells tokens at a fixed price, absorbing the risk that actual costs might exceed revenue. This shifts margin pressure back onto AI providers. Enterprise customers also have more leverage to demand custom models or self-hosting options. They can credibly threaten to build internal AI capabilities or switch providers. This limits pricing power even in premium segments.</p><h2>The Metrics That Actually Matter</h2><p>If tokens are a misleading indicator, what should investors and operators watch instead? <strong>Revenue per customer</strong>, not tokens per customer. How much money does each account generate, regardless of how many tokens they consume? This measures willingness to pay. <strong>Gross margin</strong>, not token volume. After accounting for all compute costs, how much profit remains? This measures economic viability. <strong>Retention rates</strong>, not token growth. Do customers renew subscriptions? Do they expand usage over time? This measures value creation. <strong>Competitive moat</strong>, not capability benchmarks. Can the company sustain pricing power, or will commoditization force margins toward zero? This measures long-term viability. None of these metrics look particularly favorable for most AI companies right now. Revenue growth is strong but often comes from unsustainably low pricing. Gross margins are thin or negative. Retention data is sparse since many products launched recently. Competitive moats are unclear when open-source alternatives exist.</p><h2>The Path Forward</h2><p>AI companies face a choice. They can compete on price, accepting low margins and hoping scale eventually produces profitability. Or they can compete on differentiation, building specialized capabilities that justify premium pricing. The first path leads toward utility-style businesses with low margins and slow growth. The second path requires finding defensible niches where competitors can&#8217;t easily replicate capabilities. Most companies are pursuing the first path because it&#8217;s faster. Scaling token volumes is easier than developing unique, hard-to-copy features. But it&#8217;s questionable whether that path leads to sustainable businesses. The dotcom parallel is instructive. Many internet companies in the late 1990s reported surging page views and unique visitors. 
Those metrics proved hollow when revenue models collapsed. The companies that survived built actual business value&#8212;sticky products, network effects, or genuine operational advantages. AI companies reporting surging token counts need to explain how those tokens translate to sustainable competitive advantage. Without that explanation, high token volumes are just vanity metrics. For investors, the lesson is clear: be skeptical of token growth as a success indicator. Ask instead about economics, differentiation, and long-term defensibility. Those questions mattered in the dotcom era, and they matter now. The technology may be different, but the fundamentals of sustainable business haven&#8217;t changed.</p>]]></content:encoded></item><item><title><![CDATA[Speaking the Same Language: How Incrementality Is Changing Marketing and Finance Conversations]]></title><description><![CDATA[Moving beyond attribution theater to measurement that CFOs actually trust]]></description><link>https://www.datatechandtools.com/p/speaking-the-same-language-how-incrementality-254</link><guid isPermaLink="false">https://www.datatechandtools.com/p/speaking-the-same-language-how-incrementality-254</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sat, 22 Nov 2025 17:03:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Marketing and finance teams have historically operated with different success metrics, different planning horizons, and fundamentally different views on what constitutes proof. This misalignment has real costs: campaigns get cut during budget reviews, growth investments get delayed, and teams spend more time defending past decisions than planning future ones.</p><p>The gap isn&#8217;t new. What&#8217;s changed is that more companies now have a practical way to bridge it: incrementality testing. Over half of US brand and agency marketers used incrementality testing in 2025, according to EMARKETER and TransUnion data, and 36.2% plan to invest more in it over the next year.</p><p>But adoption numbers don&#8217;t tell the full story. The more interesting shift is how these tests are changing the conversations between marketing and finance&#8212;and what that means for organizations trying to prove marketing&#8217;s contribution to business outcomes.</p><h3>The Attribution Theater Problem</h3><p>Most marketing measurement relies on attribution: matching conversions to ad exposures based on user behavior. Click on an ad, buy the product, and that ad gets credit. It&#8217;s clean, it&#8217;s trackable, and it&#8217;s fundamentally misleading.</p><p>Attribution conflates correlation with causation. If someone clicks a branded search ad and then purchases, did the ad cause the purchase? Or would that person have bought anyway, since they were already searching for the brand by name?</p><p>Facebook might report a 3.7x ROAS on a campaign. That number represents all purchases made by people who saw or clicked the ads. It doesn&#8217;t represent purchases that happened because of the ads&#8212;the purchases that wouldn&#8217;t have occurred without the advertising spend.</p><p>Finance teams understand this distinction intuitively. 
When they ask &#8220;what&#8217;s the ROI on this campaign,&#8221; they&#8217;re asking a causal question: what did this investment cause to happen? Attribution models answer a different question entirely: what did we track among people who saw our ads?</p><p>This gap creates what could be called attribution theater&#8212;the presentation of correlation metrics as if they prove causation. Marketing teams report ROAS numbers with confidence intervals, build elaborate dashboards, and forecast based on platform-reported metrics. Finance teams nod along but remain skeptical, knowing these numbers tend to overstate impact.</p><p>The result is chronic mistrust. Marketing can&#8217;t prove their claims. Finance can&#8217;t validate investments. Both sides retreat to their corners, frustrated.</p><h3>What Incrementality Actually Measures</h3><p>Incrementality testing flips the measurement approach. Instead of tracking who converted after seeing ads, it measures what wouldn&#8217;t have happened without the ads.</p><p>The methodology resembles clinical drug trials. Split a comparable population into test and control groups. Show ads to the test group. Show nothing (or different ads) to the control group. Measure the difference in outcomes.</p><p>If the test group generates 1,250 purchases and the control group generates 1,000 purchases, the campaign drove 250 incremental purchases&#8212;a 25% lift. That&#8217;s what the advertising caused. Everything else would have happened organically.</p><p>Google recently lowered the minimum budget for incrementality tests to $5,000, down from previous thresholds approaching $100,000. The platform uses Bayesian statistical methodology, which requires less data than traditional frequentist approaches. This makes causal measurement accessible to mid-market advertisers, not just enterprises with massive budgets.</p><p>The key distinction: incrementality tests answer finance&#8217;s question directly. They prove causation, not just correlation.</p><h3>Why Finance Cares About Uncertainty</h3><p>Here&#8217;s what makes incrementality different in finance conversations: it quantifies uncertainty.</p><p>Traditional marketing reports present point estimates. &#8220;Facebook delivered 3.7x ROAS.&#8221; One number, stated with confidence. Finance teams know better than to trust single-point estimates for anything&#8212;revenue projections, cost forecasts, risk assessments all come with ranges.</p><p>Incrementality tests produce confidence intervals. &#8220;We estimate Facebook&#8217;s incremental ROI is between 3.2x and 4.5x.&#8221; This acknowledges that true incrementality is unknowable&#8212;we can only estimate it within a range, with a certain level of confidence.</p><p>Counterintuitively, this uncertainty makes the measurement more credible to finance. The confidence interval signals intellectual honesty. It acknowledges the limits of measurement and quantifies the precision of the estimate.</p><p>For financial planning, ranges are more useful than false precision. A CFO can model scenarios using the low end of the range (3.2x) for conservative forecasts and the high end (4.5x) for aggressive growth plans. One number doesn&#8217;t allow for scenario planning.</p><p>This shared language&#8212;estimates with confidence intervals rather than precise-looking but unreliable point estimates&#8212;creates common ground. 
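</p><p>A minimal sketch shows how the numbers finance cares about fall out of a test like this. The inputs are assumed for illustration (they extend the 1,250-versus-1,000 example above), and the simple normal-approximation interval stands in for the Bayesian machinery that production tools such as Google&#8217;s actually use.</p><pre><code>from math import sqrt

# Illustrative inputs: assumed, not from any real campaign
test_users, control_users = 100_000, 100_000
test_purchases, control_purchases = 1_250, 1_000
spend = 5_000            # dollars spent on the test group
avg_order_value = 80     # dollars per purchase

# Conversion rates in each group
p_test = test_purchases / test_users
p_control = control_purchases / control_users

# Incremental purchases and relative lift
incremental = (p_test - p_control) * test_users    # 250 purchases
lift = (p_test - p_control) / p_control            # 0.25, i.e. 25% lift

# Rough 95% interval on the rate difference (normal approximation;
# production tools use Bayesian or more careful frequentist methods)
se = sqrt(p_test * (1 - p_test) / test_users
          + p_control * (1 - p_control) / control_users)
diff_lo = (p_test - p_control) - 1.96 * se
diff_hi = (p_test - p_control) + 1.96 * se

# Incremental ROI range: revenue caused by the ads per dollar spent
roi_lo = diff_lo * test_users * avg_order_value / spend
roi_hi = diff_hi * test_users * avg_order_value / spend

print(f"incremental purchases: {incremental:.0f} (lift {lift:.0%})")
print(f"incremental ROI range: {roi_lo:.1f}x to {roi_hi:.1f}x")
</code></pre><p>The same calculation scales to any channel the two teams disagree about. 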
Both teams can discuss decisions using the same measurement framework.</p><h3>The Forecasting Problem</h3><p>The attribution theater problem becomes especially acute during budget planning. Marketing teams extrapolate from platform-reported metrics. Finance teams model cash flows based on those extrapolations. Forecasts consistently miss.</p><p>Why? Because inflated attribution numbers get plugged into financial models. If Facebook reports 5x ROAS but true incrementality is 3x, scaling spend based on the 5x number will disappoint. Revenue won&#8217;t materialize as projected. Budgets get cut. Trust erodes further.</p><p>BrandAlley, a UK-based fashion eCommerce company launching over 1,000 campaigns annually, faced exactly this issue. They implemented incrementality testing through marketing mix modeling to understand true causal impact across channels. The results showed material differences between platform-reported performance and actual lift.</p><p>Armed with better numbers, they could forecast accurately. Finance could trust the projections. Marketing could defend budgets with causal evidence rather than correlation metrics.</p><p>The difference isn&#8217;t just about measurement accuracy. It&#8217;s about breaking the cycle of missed forecasts, budget cuts, and eroded trust. When both teams use the same causally-valid metrics, forecasts improve, and organizations can plan with confidence.</p><h3>Implementation Challenges</h3><p>Adopting incrementality testing isn&#8217;t frictionless. According to research from Skai and Path to Purchase Institute, a third of CPG brand marketers measure incrementality only at a basic level. The top barriers are concerns about accuracy (44% of respondents), difficulty applying incrementality across different ad types and retailers (43%), and limited tools or technologies (41%).</p><p>These concerns are legitimate. Not everything can be tested easily. Brand campaigns that run continuously for awareness may not have natural holdout groups. Small-budget campaigns may lack statistical power to detect lift. Some channels, like linear TV, present geographic and technical constraints.</p><p>There are also opportunity costs. Every incrementality test withholds advertising from control groups, potentially sacrificing sales during the test period. For companies operating on thin margins, this represents real financial risk.</p><p>But the alternative&#8212;continuing to make decisions based on misleading attribution data&#8212;carries risk too. The organizations seeing success are those that acknowledge these constraints upfront and build testing into their planning cycles.</p><h3>What Worked for Finance Buy-In</h3><p>Organizations that successfully bridged the marketing-finance gap using incrementality followed several patterns:</p><p><strong>Start with joint education.</strong> Get both teams aligned on what incrementality measures, why it matters, and what the limitations are. No surprises.</p><p><strong>Frame tests as measurement investments.</strong> Finance teams understand that better data improves decisions. Position incrementality testing as infrastructure that improves capital allocation, not as a marketing expense.</p><p><strong>Test where disagreement exists.</strong> Focus initial tests on the channels where marketing and finance most disagree about performance. Resolving those debates quickly demonstrates value.</p><p><strong>Establish regular testing cadence.</strong> Quarterly tests for major channels, less frequent tests for smaller channels. 
Predictable schedule reduces friction.</p><p><strong>Document what can&#8217;t be measured.</strong> Some effects&#8212;long-term brand building, word-of-mouth, customer lifetime value beyond immediate conversion&#8212;don&#8217;t show up in incrementality tests. Acknowledge this explicitly.</p><p>This last point matters. Incrementality testing measures short-term direct response. It doesn&#8217;t capture every marketing benefit. But being explicit about what you&#8217;re not measuring builds credibility for what you are measuring.</p><h3>The Bigger Shift</h3><p>The adoption of incrementality testing reflects a larger change in how organizations think about marketing.</p><p>For decades, marketing operated somewhat separately from core business operations. It was a creative function, difficult to measure precisely, judged partly on intuition and brand health metrics that didn&#8217;t translate directly to P&amp;L impact.</p><p>That model worked in an era of limited measurement capability. You couldn&#8217;t easily run controlled experiments at scale. You couldn&#8217;t quickly test creative variations. You relied on annual brand studies and hoped for correlation between brand metrics and sales.</p><p>The shift toward incrementality-based measurement represents marketing becoming more integrated with business operations. Marketing claims can be tested the same way product changes get A/B tested or pricing strategies get validated.</p><p>This doesn&#8217;t mean eliminating creativity or intuition. It means having a reliable way to prove which creative risks paid off, which channels drove real growth, and which investments should be scaled.</p><h3>Looking Forward</h3><p>The incrementality testing market has matured quickly. Platforms like Measured, TransUnion, Rockerbox, and Sellforte now offer incrementality-as-a-service. Data clean rooms like Amazon Marketing Cloud and Snowflake provide privacy-safe environments for running tests. AI tools help automate reporting, with half of US brand and agency marketers adopting AI or machine learning for this purpose.</p><p>The IAB recently released guidelines for incremental measurement in commerce media, outlining when experiments, model-based counterfactuals, econometric models, and hybrid approaches work best. Industry standardization is happening.</p><p>As tools improve and costs decrease, incrementality testing will likely become baseline capability rather than advanced technique. The question will shift from &#8220;should we test incrementality?&#8221; to &#8220;how do we integrate incrementality insights into planning workflows?&#8221;</p><p>For the marketing-finance relationship, this matters. When both teams trust the same measurement methodology, conversations become about strategy rather than measurement validity. Instead of debating whether the marketing numbers are real, they can debate which opportunities to pursue.</p><p>That&#8217;s not just better measurement. 
It&#8217;s better business decision-making.</p>]]></content:encoded></item><item><title><![CDATA[The Quiet Shift: What Amazon's Transparency Gap Tells Us About the DSP Market]]></title><description><![CDATA[Understanding the trade-offs between scale and scrutiny in programmatic advertising]]></description><link>https://www.datatechandtools.com/p/the-quiet-shift-what-amazons-transparency-eb0</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-quiet-shift-what-amazons-transparency-eb0</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Fri, 21 Nov 2025 15:43:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The programmatic advertising market is experiencing a notable realignment. While The Trade Desk commands roughly 20% of the independent DSP market and Google&#8217;s DV360 maintains its dominant position, Amazon DSP has quietly emerged as a formidable alternative&#8212;particularly for advertisers willing to accept less transparency in exchange for other benefits.</p><p>This raises a practical question for marketers: what does it mean when a major DSP doesn&#8217;t provide log-level data, and who&#8217;s actually comfortable with that trade-off?</p><h3>The Data Visibility Issue</h3><p>Log-level data&#8212;the granular, impression-by-impression reporting that shows exactly where ads ran, how much was paid, and what resulted&#8212;has long been considered table stakes for sophisticated advertisers. This level of detail allows teams to verify that budgets weren&#8217;t wasted on low-quality inventory, detect potential fraud, and understand the true path to conversion beyond platform-reported metrics.</p><p>Amazon DSP, despite handling an estimated 7.5% of retail media dollars (which translates to billions in annual spend), has notably limited log-level data access compared to competitors. According to multiple ad tech platforms tracking programmatic bidding patterns, some agency holding companies have been shifting meaningful portions of their Q3 spend from The Trade Desk to Amazon DSP&#8212;even with this transparency limitation.</p><p>The question isn&#8217;t whether Amazon lacks transparency. That&#8217;s established. The question is why sophisticated advertisers are comfortable with it.</p><h3>Following the Incentives</h3><p>The math on Amazon&#8217;s fee structure is straightforward. The platform charges no fees for programmatic guaranteed deals on Amazon-owned media and collects just 1% for ads on open web publishers&#8212;significantly below The Trade Desk&#8217;s roughly 20% take rate. For large advertisers spending millions monthly, this difference compounds quickly.</p><p>There&#8217;s also the relationship angle. When Omnicom won Amazon&#8217;s US marketing business in 2024, industry observers noted that it would likely influence programmatic spending patterns. That prediction appears to have materialized. Multiple sources familiar with programmatic bidding confirmed to AdExchanger that they observed what looked like a double-digit share of expected Q3 spend moving from The Trade Desk to Amazon DSP within Omnicom.</p><p>But cost savings and client relationships don&#8217;t fully explain the shift. 
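</p><p>To be clear about the scale of those savings before setting them aside: a hypothetical comparison, using the approximate take rates cited above and an assumed spend level, looks like this.</p><pre><code># Hypothetical annual fee comparison at an assumed spend level,
# using the approximate take rates cited above.

monthly_spend = 3_000_000   # assumed programmatic spend per month

take_rates = {
    "Amazon DSP (open web)": 0.01,   # ~1% fee
    "The Trade Desk":        0.20,   # ~20% take rate
}

for platform, rate in take_rates.items():
    annual_fees = monthly_spend * 12 * rate
    working_media = monthly_spend * 12 * (1 - rate)
    print(f"{platform:<24} fees ${annual_fees:>10,.0f}   "
          f"working media ${working_media:>12,.0f}")
</code></pre><p>Meaningful, but not the whole story. 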
The real differentiator may be Amazon&#8217;s first-party retail data&#8212;the ability to target based on actual purchase behavior rather than inferred intent. For many advertisers, particularly those in retail and CPG, this closed-loop measurement may matter more than impression-level transparency.</p><h3>The Sophistication Question</h3><p>This situation prompts a more interesting consideration: are we redefining what &#8220;sophisticated&#8221; means in media buying?</p><p>Traditionally, sophisticated advertisers demanded full transparency&#8212;detailed reporting, independent verification, and control over every aspect of the media supply chain. That model emerged from an era when fraud was rampant and programmatic was the &#8220;Wild West&#8221; of digital advertising.</p><p>But the market has matured. Walled gardens like Facebook and Google have trained advertisers to accept limited visibility in exchange for scale and performance. According to recent data, the DSP market is projected to grow from $38.92 billion in 2025 to $148.92 billion by 2032, with much of that growth coming from platforms that offer performance over transparency.</p><p>Some sophisticated advertisers may now be calculating that fraud detection and viewability verification matter less than they did five years ago&#8212;especially when advertising on first-party retail properties where the media quality is generally higher. They may be prioritizing performance measurement and cost efficiency over the ability to audit every impression.</p><h3>The Measurement Challenge</h3><p>The incrementality testing movement provides context here. Over half of US brand and agency marketers now use incrementality testing to measure campaigns, according to July 2025 data from EMARKETER and TransUnion. Google recently lowered its incrementality testing threshold to $5,000, making this type of causal measurement more accessible.</p><p>If advertisers can prove that Amazon DSP drives incremental sales through controlled experiments, does it matter whether they can see every log file? The incrementality test would capture the true lift regardless of the black box nature of the platform.</p><p>This represents a philosophical shift from forensic transparency (examining every impression) to outcome-based validation (proving the campaign caused sales that wouldn&#8217;t have happened otherwise). The former requires detailed logs; the latter requires good experimental design.</p><h3>Market Structure Implications</h3><p>The broader DSP market shows clear concentration. According to recent analysis, three major players&#8212;DV360, Amazon DSP, and The Trade Desk&#8212;control 86% of market share. While Amazon has the highest advertising revenue, Google&#8217;s DV360 maintains the largest market share, suggesting different monetization strategies.</p><p>The Trade Desk showed 26% growth in 2024, outpacing both the overall DSP market growth rate (23%) and Amazon&#8217;s advertising growth (18%). 
This suggests The Trade Desk is gaining share in certain segments, even as it potentially loses large accounts like Omnicom.</p><p>The market appears to be segmenting by advertiser needs: sophisticated direct-response advertisers who need granular optimization may stick with The Trade Desk, while brand advertisers focused on retail media and closed-loop measurement may migrate toward Amazon.</p><h3>The Infrastructure Question</h3><p>There&#8217;s another practical consideration: OpenPath, The Trade Desk&#8217;s direct publisher connection, bypasses other ad tech intermediaries. This means spending through OpenPath wouldn&#8217;t be visible to SSPs and other platforms that typically observe bidstream data.</p><p>If Omnicom or other holding companies significantly increased OpenPath usage, it could appear to outside observers that they reduced Trade Desk spending, when in reality they just changed how they bought inventory. The Trade Desk declined to comment on whether Omnicom uses OpenPath extensively, making this impossible to verify.</p><p>This highlights a challenge with analyzing programmatic market shifts: much of the data comes from intermediaries who have incomplete visibility. Real spending patterns may differ significantly from what can be observed.</p><h3>What This Means for Advertisers</h3><p>For marketers evaluating DSP options in 2025, the Amazon situation surfaces several practical questions:</p><p><strong>First, what are you optimizing for?</strong> If preventing Made-for-Advertising sites and ensuring brand safety are top priorities, platforms with robust log-level reporting may remain essential. If you&#8217;re focused on proving incremental sales lift and are comfortable with Amazon&#8217;s first-party inventory quality, transparency may matter less.</p><p><strong>Second, how do you measure success?</strong> If your organization relies on multi-touch attribution models that require impression-level data, limited transparency is a dealbreaker. If you use incrementality testing or media mix modeling, you can work with less granular data.</p><p><strong>Third, what&#8217;s the sophistication of your fraud prevention?</strong> Amazon&#8217;s owned-and-operated properties have inherently less fraud risk than open programmatic exchanges. If most of your spend is on first-party retail inventory, the fraud detection capabilities enabled by log-level data become less critical.</p><p><strong>Fourth, what&#8217;s the relationship context?</strong> The Omnicom-Amazon example suggests that client relationships can drive platform decisions. Holding companies and agencies need to balance getting the best results for current clients with maintaining flexibility for future business.</p><h3>The Longer View</h3><p>The DSP market is expected to reach $804.02 billion by 2035, according to recent forecasts. This growth will be driven primarily by retail media networks, connected TV, and other channels where first-party data enables closed-loop measurement.</p><p>In that environment, the traditional definition of transparency&#8212;seeing every impression&#8212;may become less relevant. What matters is proving causality: did the advertising cause outcomes that wouldn&#8217;t have happened otherwise?</p><p>Amazon&#8217;s transparency limitations may seem like a disadvantage in 2025. 
By 2030, they may be irrelevant if the market fully embraces outcome-based measurement over process-based auditing.</p><p>For now, the answer to &#8220;who&#8217;s comfortable advertising without log-level data&#8221; appears to be: more sophisticated advertisers than you might expect, as long as they have other ways to validate performance.</p>]]></content:encoded></item><item><title><![CDATA[When Everyone in Programmatic Is Making Less Money]]></title><description><![CDATA[Understanding the tensions reshaping buy-side and sell-side relationships]]></description><link>https://www.datatechandtools.com/p/when-everyone-in-programmatic-is</link><guid isPermaLink="false">https://www.datatechandtools.com/p/when-everyone-in-programmatic-is</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Thu, 20 Nov 2025 20:57:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Wall Street analysts covering ad tech companies noticed something during Q3 2025 earnings calls: executives were testy. Not just nervous about numbers&#8212;though several platforms missed forecasts&#8212;but actively contentious about how the programmatic ecosystem operates.</p><p>Magnite&#8217;s CEO directly called out The Trade Desk for prioritizing OpenPath, its direct publisher connection. PubMatic&#8217;s CEO pointed obliquely at changes in how The Trade Desk&#8217;s Kokai platform operates, noting it works &#8220;differently from what we have seen.&#8221; Nexxen lowered its full-year forecast because expected Q4 spending didn&#8217;t materialize. System1 threatened legal action against an unnamed programmatic vendor over invalid traffic.</p><p>The drama spilled into public positioning about what various companies even are. The Trade Desk CEO Jeff Green said Amazon doesn&#8217;t have &#8220;a DSP as we define it.&#8221; Viant&#8217;s CEO characterized his company as one of few &#8220;truly objective buy-side-only platforms,&#8221; implicitly calling The Trade Desk not objective. Multiple SSPs insisted they&#8217;re definitely not &#8220;resellers&#8221;&#8212;a designation The Trade Desk has been applying more liberally.</p><p>This isn&#8217;t normal competitive posturing. These are signs of an ecosystem under financial stress, with companies jockeying for position as the pie stops growing as quickly as it used to.</p><h3>The Growth Slowdown</h3><p>For years, the programmatic narrative was simple: digital advertising is growing, programmatic&#8217;s share of digital is growing, therefore programmatic companies grow. Rising tide lifts all boats.</p><p>That&#8217;s still true in aggregate. The DSP market is projected to grow from $38.92 billion in 2025 to $148.92 billion by 2032, a CAGR of 21.1%. Retail media spending continues accelerating. Connected TV ad dollars are shifting to programmatic buying.</p><p>But growth is slowing and becoming more uneven. Nexxen&#8217;s CEO noted that the typical October surge in advertising ahead of the holidays didn&#8217;t materialize in 2025. That missing wave of spending affects everyone relying on Q4 to hit annual targets.</p><p>More fundamentally, programmatic is maturing. Early growth came from shifting budgets from traditional direct deals to programmatic buying. That transition is largely complete. 
Future growth must come from overall advertising growth or share shifts between platforms&#8212;which means winners and losers rather than everyone winning together.</p><p>When markets mature, participants fight harder over their share. The tensions in programmatic reflect this transition.</p><h3>The Reseller Fight</h3><p>The Trade Desk has been increasingly vocal about &#8220;resellers&#8221; in the supply chain&#8212;intermediaries that add little value while taking fees. In The Trade Desk&#8217;s view, many SSPs simply resell inventory they access from other SSPs, creating duplicate bid requests and artificial complexity.</p><p>The issue isn&#8217;t purely philosophical. Every intermediary takes a cut. If inventory passes through three SSPs before reaching a DSP, each taking 15-20%, the media quality has to be exceptional to justify the total fees. Often it&#8217;s not.</p><p>For DSPs, cleaning up the supply chain means better inventory at lower cost. For SSPs labeled as &#8220;resellers,&#8221; it threatens their business model. Hence the sharp reactions from Magnite and PubMatic insisting they&#8217;re not resellers.</p><p>The definitional battle matters because it affects access. If The Trade Desk designates an SSP as a reseller and reduces buying through that path, it materially impacts that SSP&#8217;s revenue. According to multiple sources tracking bidstream patterns, this appears to have happened in Q3.</p><p>But the classification isn&#8217;t always clear. An SSP might have direct publisher relationships for some inventory and indirect relationships for others. Are they a reseller or not? It depends on the specific transaction, which The Trade Desk&#8217;s systems can evaluate dynamically.</p><p>This creates uncertainty for SSPs. They may not know exactly which inventory The Trade Desk considers &#8220;resold&#8221; versus &#8220;direct&#8221; and therefore can&#8217;t predict how changes in platform policies will affect their revenue.</p><h3>OpenPath Changes Everything</h3><p>The Trade Desk&#8217;s OpenPath offering creates direct connections between the DSP and publishers, bypassing SSPs and exchanges entirely. This reduces fees, improves transparency, and gives The Trade Desk more control over inventory access.</p><p>It&#8217;s also a fundamental challenge to the traditional supply chain. SSPs exist to aggregate inventory from many publishers and make it available to many DSPs. OpenPath disintermediates that function.</p><p>Magnite&#8217;s CEO stated directly that The Trade Desk made changes &#8220;that prioritized OpenPath as a default path for supply.&#8221; Magnite had to go directly to major agency buyers to reinstate a &#8220;preferred supply path&#8221; for non-OpenPath inventory.</p><p>This reveals the power dynamics. The Trade Desk can default to OpenPath, forcing publishers and SSPs to compete for inclusion in alternative paths. Publishers want access to The Trade Desk&#8217;s demand, so they&#8217;re incentivized to connect via OpenPath even if it means bypassing their existing SSP relationships.</p><p>For SSPs, this is existential. If most major publishers connect directly to major DSPs via paths like OpenPath, what&#8217;s the SSP&#8217;s role? They lose both visibility into transactions and the ability to take fees.</p><p>Some SSPs have responded by building their own DSP capabilities or acquiring demand-side technology. If you can&#8217;t survive as pure supply aggregation, maybe you can offer both buy-side and sell-side services. 
But this creates different conflicts&#8212;can you be truly objective when you compete with your own customers?</p><h3>The Amazon Factor</h3><p>Amazon DSP&#8217;s growth adds another dimension. As discussed earlier, Amazon charges minimal fees compared to independent DSPs. This puts pricing pressure on the entire market.</p><p>If large advertisers can run campaigns through Amazon DSP at 1% fees versus 20% at The Trade Desk, The Trade Desk needs to justify the price difference through better performance, superior technology, or other advantages. That&#8217;s possible, but it raises competitive intensity.</p><p>For SSPs, Amazon DSP creates a different challenge. Amazon has its own massive inventory pool&#8212;Amazon.com, IMDb, Twitch, Fire TV devices. Much of Amazon DSP spending happens on Amazon-owned properties where SSPs have no role.</p><p>As Amazon DSP share grows, the addressable market for independent SSPs shrinks. They&#8217;re competing for a smaller share of total programmatic spend.</p><h3>The Data Access Issue</h3><p>Underlying many of these tensions is a fundamental question: who should have access to what data?</p><p>DSPs want detailed information about inventory&#8212;which sites, which placements, what audience characteristics, historical performance data. This helps them evaluate quality and optimize bidding.</p><p>SSPs and publishers want to protect certain information&#8212;specific site URLs, individual user data, detailed pricing. This maintains leverage and protects publisher relationships.</p><p>The Trade Desk has pushed for more transparency, arguing that advertisers deserve to know exactly where ads run. SSPs have resisted, arguing that revealing too much gives DSPs unfair negotiating power and makes it easier for DSPs to disintermediate them.</p><p>This debate doesn&#8217;t have a clear right answer. Too much transparency lets DSPs bypass SSPs. Too little transparency enables fraud and low-quality inventory. The current system exists in uneasy compromise, which shifts as relative power changes.</p><h3>Platform Concentration Effects</h3><p>The earlier discussion of DSP market concentration (three players controlling 86% of share) applies equally to SSPs. A few large platforms&#8212;Magnite, PubMatic, OpenX, Index Exchange&#8212;handle most programmatic transactions on the sell side.</p><p>This concentration creates interesting dynamics. Large SSPs have leverage with publishers because they aggregate inventory at scale. But they&#8217;re also dependent on large DSPs for demand. When those large DSPs change how they buy, SSPs have limited options.</p><p>The Trade Desk&#8217;s strong Q3&#8212;18% year-over-year revenue growth, 16% profit growth&#8212;despite the drama suggests it&#8217;s winning these negotiations. SSPs can complain about OpenPath and reseller designations, but they still need The Trade Desk&#8217;s demand.</p><p>Over time, concentration on both sides could lead to increased direct relationships (like OpenPath) that bypass intermediaries. The programmatic &#8220;marketplace&#8221; might evolve toward a small number of bilateral relationships between major buyers and sellers, with independent SSPs relegated to long-tail inventory.</p><h3>The Invalid Traffic Problem</h3><p>System1&#8217;s threat of legal action over &#8220;significant invalid or nonhuman&#8221; traffic introduces another concern. 
The programmatic ecosystem has made substantial progress on fraud, but it hasn&#8217;t been eliminated.</p><p>When System1&#8217;s CEO said they&#8217;re seeking reimbursements from an unnamed vendor and may pursue legal action, it signals that companies are becoming less willing to accept fraud as a cost of doing business. This could lead to more aggressive contract terms, more frequent disputes, and potentially more litigation.</p><p>For SSPs, this creates additional pressure. They need to police inventory quality while maximizing inventory supply. Those objectives sometimes conflict. Being too strict about quality reduces available inventory and revenue. Being too loose risks fraud that damages relationships with DSPs.</p><p>The balance has shifted toward quality as DSPs demand better performance for their spend. SSPs that can&#8217;t consistently deliver clean inventory will lose access to premium demand.</p><h3>Where the Market Goes From Here</h3><p>Several possible directions:</p><p><strong>Scenario one: Continued disintermediation.</strong> More publishers connect directly to DSPs via paths like OpenPath. SSPs handle primarily long-tail inventory and specialized formats. The middle layer of the supply chain shrinks.</p><p><strong>Scenario two: Vertical integration.</strong> More companies operate both buy-side and sell-side technology. They offer end-to-end solutions, competing with the independent platforms. This is already happening with companies like Nexxen operating both DSP and SSP.</p><p><strong>Scenario three: Utility layer consolidation.</strong> The SSP layer consolidates further. Three or four large platforms survive by providing essential infrastructure&#8212;fraud detection, header bidding wrappers, yield optimization&#8212;that publishers need even with direct DSP relationships.</p><p><strong>Scenario four: Fragmentation.</strong> New specialized platforms emerge for specific inventory types or use cases. Rather than general-purpose SSPs, we see CTV-specific exchanges, retail media connectors, and format-specific platforms. The ecosystem becomes more complex rather than simpler.</p><p>The first and third scenarios seem most likely. Direct relationships will grow for premium inventory. Large SSPs will survive by providing technical infrastructure. Mid-sized platforms will struggle unless they find defensible niches.</p><h3>What This Means for Advertisers</h3><p>The infighting might seem like industry drama that doesn&#8217;t affect advertisers. But it has practical implications:</p><p><strong>First, pricing will get more complex.</strong> As platforms adjust fee structures and inventory access, the true cost of reaching specific audiences becomes harder to predict. Advertisers need better visibility into total supply chain costs.</p><p><strong>Second, inventory quality requirements will tighten.</strong> As DSPs pressure SSPs on quality, some inventory may become harder to access. Advertisers targeting broad reach may face trade-offs between scale and quality.</p><p><strong>Third, direct deals will matter more.</strong> If programmatic marketplaces become more expensive or complex, direct relationships with publishers become more attractive. Agencies and brands may need to rebuild direct sales relationships they moved away from when programmatic seemed simpler.</p><p><strong>Fourth, platform relationships become strategic.</strong> Choosing which DSPs and SSPs to work with isn&#8217;t just about features and pricing. 
It&#8217;s about which platforms will exist in five years and which relationships enable access to the inventory you need.</p><h3>The Underlying Issue</h3><p>At bottom, these tensions reflect a simple problem: there&#8217;s more intermediation capacity than the market needs. Too many platforms taking too many fees from transactions that could happen more directly.</p><p>The programmatic ecosystem developed this way because coordination was hard. Getting thousands of publishers and thousands of advertisers to transact required intermediaries to facilitate matching, pricing, delivery, and settlement.</p><p>But technology has improved. Direct integrations are easier. Large publishers can manage their own yield optimization. Large advertisers can operate DSP infrastructure in-house. The original value proposition of programmatic middlemen is less compelling.</p><p>That doesn&#8217;t mean intermediaries will disappear. But it means they need to provide value beyond basic transaction facilitation. Fraud detection, brand safety, format innovation, measurement integration&#8212;services that genuinely improve outcomes rather than just connecting buyers and sellers.</p><p>The platforms providing those services will thrive. The ones that are primarily taking fees for routing bid requests will struggle.</p><p>For anyone operating in programmatic, the question is: what problem are you solving that couldn&#8217;t be solved by direct relationships? Answer that convincingly, and you survive the industry&#8217;s maturation. Struggle to answer it, and you&#8217;re fighting for share in a shrinking pie.</p>]]></content:encoded></item><item><title><![CDATA[Why Your Community Strategy Shouldn't Look Like a Marketing Funnel]]></title><description><![CDATA[How the most effective brands are building belonging instead of optimizing conversions]]></description><link>https://www.datatechandtools.com/p/why-your-community-strategy-shouldnt</link><guid isPermaLink="false">https://www.datatechandtools.com/p/why-your-community-strategy-shouldnt</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Wed, 19 Nov 2025 14:00:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The marketing funnel&#8212;awareness, consideration, conversion, retention&#8212;has shaped campaign planning for decades. It&#8217;s linear, measurable, and maps cleanly to budget allocation. Awareness spending goes here, conversion optimization goes there, and retention programs get whatever&#8217;s left.</p><p>But watch how e.l.f. Beauty actually operates, and you&#8217;ll notice they&#8217;re doing something different. When they launched the Halo Glow Lip Kit, it wasn&#8217;t because market research identified a gap. They listened to community signals, saw what people wanted, and moved fast. When they decided to sponsor a NASCAR car, it wasn&#8217;t to &#8220;reach NASCAR demographics.&#8221; They noticed their community was already engaged with the sport on platforms like Twitch and went where the conversation was happening.</p><p>This isn&#8217;t funnel thinking. It&#8217;s community thinking. And research suggests it&#8217;s measurably more effective. 
According to Circle&#8217;s 2025 Community Trends Report, one engaged community member equals 234 social media followers in total engagement actions. Community members are 6.2x more likely to share brand content than non-community followers.</p><p>Those aren&#8217;t incremental improvements. They represent a fundamentally different relationship between brands and customers.</p><h3>The Funnel Model&#8217;s Inherent Limitations</h3><p>The marketing funnel assumes a one-way flow: brand creates message, message reaches audience, audience moves through stages, some portion converts. It&#8217;s transactional by design. Each stage has a clear objective measured by conversion rate.</p><p>This model works for products bought infrequently based on rational evaluation. Considering insurance providers? You&#8217;ll probably research options, compare prices, and make a deliberate choice. The funnel fits.</p><p>But most consumer brands don&#8217;t work that way anymore. People don&#8217;t &#8220;consider&#8221; which beauty brand to follow on TikTok. They don&#8217;t &#8220;evaluate&#8221; which fitness app community to join. They participate based on whether it feels authentic, whether the brand shares their values, and whether other community members are people they want to engage with.</p><p>McKinsey found that 71% of consumers expect companies to deliver personalized experiences, with 76% expressing frustration when this expectation isn&#8217;t met. But personalization isn&#8217;t about serving targeted ads. It&#8217;s about making people feel understood and valued&#8212;which is fundamentally a community function, not a funnel optimization.</p><p>The funnel also assumes customers move through stages linearly. In reality, someone might become a vocal brand advocate before making their first purchase. They might convert, churn, then re-engage through community connection years later. They might never personally buy but influence dozens of others who do.</p><p>Traditional funnel metrics can&#8217;t capture this complexity. Community engagement metrics&#8212;depth of participation, peer-to-peer interactions, user-generated content, organic advocacy&#8212;surface patterns that conversion funnels miss.</p><h3>What Community-Led Actually Means</h3><p>&#8220;Community marketing&#8221; has become a buzzword, often conflated with building a Facebook group or Discord server. But those are platforms, not strategies. The strategy is fundamentally different from traditional marketing.</p><p>Community-led marketing starts with listening. e.l.f.&#8217;s entry into Italy began when they noticed social buzz about their Power Grip Primer from Italian consumers. They didn&#8217;t run market research, test messaging, or launch a pilot program. They engaged with the existing conversation, built on the organic excitement, and launched. The product became Italy&#8217;s top-selling primer.</p><p>Traditional marketing would have moved differently: identify market opportunity, develop entry strategy, create localized messaging, run awareness campaigns, measure conversion. That takes 12-18 months and significant investment. e.l.f. moved in weeks because they followed community signals rather than formal processes.</p><p>This requires different organizational capabilities. Marketing teams need social listening tools, real-time response protocols, and decision-making authority. 
You can&#8217;t move at the speed of community if every action requires approval chains and legal review.</p><p>Lululemon built its community through ambassador programs that gave regular customers a platform to share their fitness journeys. These aren&#8217;t influencers with massive followings. They&#8217;re community members who embody the brand&#8217;s values and bring authenticity to the relationship. The company invests in these relationships not to drive immediate conversions but to strengthen community bonds.</p><p>That investment probably looks inefficient in traditional ROI calculations. How many sales did that local yoga teacher drive last quarter? But it&#8217;s the wrong question. The right question is: how strong are the community connections, and how sustainable is the relationship?</p><h3>The Economics Are Actually Better</h3><p>The community approach seems more expensive&#8212;investing in long-term relationships, supporting user-generated content, building platforms for peer interaction. But the unit economics often favor community over traditional marketing.</p><p>Brands that excel at personalization generate 40% more revenue from those efforts than competitors, according to research. Community enables personalization at scale that targeted advertising can&#8217;t match. When community members know each other and share experiences, they personalize for each other&#8212;recommending products, answering questions, solving problems.</p><p>Customer acquisition costs drop substantially. When friends or family recommend a brand, customers are 84% more likely to trust that recommendation than advertising. A strong community generates organic word-of-mouth that reaches people who don&#8217;t respond to ads. This costs far less than paid acquisition.</p><p>Customer lifetime value increases. Community members stay longer, buy more frequently, and are less price-sensitive. Peloton&#8217;s success stems largely from the connected fitness community that keeps users engaged. The retention rates justify the premium pricing and hardware investment.</p><p>One concrete comparison: Circle&#8217;s research found that community-driven content receives 4.5x higher comment rates than traditional marketing content. Comments represent depth of engagement&#8212;the willingness to invest time and thought in response. That engagement predicts future behavior better than clicks or impressions.</p><p>For brands with limited budgets, the choice between broad reach through advertising and deep engagement through community increasingly favors community. You can&#8217;t compete on paid media spend with incumbents who have enormous budgets. But you can build community relationships that established brands struggle to replicate.</p><h3>What This Looks Like in Practice</h3><p>Nike shifted from selling shoes to facilitating a lifestyle community. The Nike Run Club app connects runners, provides training plans, celebrates achievements, and enables social competition. Nike still sells shoes, but the relationship starts with community participation rather than product features.</p><p>This required massive investment in technology, content, and community management. The payoff shows in brand loyalty metrics and pricing power. Nike isn&#8217;t the cheapest running shoe, but community members don&#8217;t comparison shop on price&#8212;they buy Nike because they&#8217;re part of the Nike running community.</p><p>Patagonia built community around environmental activism. 
Customers aren&#8217;t buying jackets; they&#8217;re joining a movement. Patagonia&#8217;s &#8220;Don&#8217;t Buy This Jacket&#8221; campaign explicitly told people to buy less, contradicting basic marketing principles. But it strengthened community bonds around shared values, which drove long-term loyalty that more than offset near-term sales impact.</p><p>These examples are large brands with substantial resources. But the principles scale. Smaller brands can build engaged communities even more effectively because they can move faster and maintain authenticity more easily.</p><p>Glossier built its entire business on community engagement. The company started as a beauty blog with engaged readers, evolved into a product line developed based on community input, and scaled through community advocacy. At peak, over 70% of customers came through peer referrals rather than paid acquisition.</p><p>The company eventually faced challenges scaling this model, but the early success demonstrated that community-first can work at venture scale. The issues weren&#8217;t with the community approach; they were with operational execution during rapid growth.</p><h3>The Measurement Challenge</h3><p>Traditional marketing measurement is mature. We have attribution models, media mix modeling, conversion tracking, and incrementality testing. We can quantify exactly what each dollar of ad spend generates.</p><p>Community measurement is murkier. How do you value a Reddit thread where community members enthusiastically discuss your product? What&#8217;s the ROI of sponsoring a local event that strengthens community bonds but doesn&#8217;t drive immediate sales?</p><p>Some metrics are emerging as standards. Khoros outlines 11 key community KPIs including active members, engagement rates, content creation, peer-to-peer support, and net promoter scores within the community. These measure community health rather than immediate commercial outcomes.</p><p>The challenge is connecting community health to business results. Finance teams understand CAC payback periods and LTV:CAC ratios. They&#8217;re less comfortable with &#8220;community engagement increased 40% this quarter.&#8221;</p><p>This measurement gap creates budget challenges. Community investments often come from discretionary spending rather than performance marketing budgets. When companies face pressure to cut costs, community programs get eliminated because their ROI isn&#8217;t clearly demonstrated.</p><p>The solution is building better bridges between community metrics and business outcomes. Track purchase patterns among community members versus non-members. Measure referral rates. Calculate support cost savings from peer-to-peer help. Quantify reduced churn among active community participants.</p><p>These connections exist; they&#8217;re just not always measured systematically. Brands that build rigorous community measurement frameworks can justify investment even in difficult economic environments.</p><h3>Where Traditional Marketing Still Matters</h3><p>This isn&#8217;t an argument for eliminating traditional marketing. Mass reach still matters for some objectives. Brand awareness campaigns work. Performance marketing drives measurable results.</p><p>The most effective approach combines both. Use traditional marketing to build broad awareness and drive initial consideration. 
Use community to deepen relationships and enable organic growth.</p><p>According to Popular Pays analysis, community-driven content receives substantially higher engagement, but traditional marketing still excels at creating broad awareness and communicating simple value propositions. The question isn&#8217;t which approach is better&#8212;it&#8217;s how to integrate them effectively.</p><p>Spotify uses traditional marketing for major product launches and artist promotions. But the platform&#8217;s core stickiness comes from community-created playlists, collaborative filtering, and social sharing. Both matter; they serve different functions.</p><p>The balance shifts based on product category, target audience, and business model. For commodity products sold primarily on price, traditional performance marketing may remain dominant. For lifestyle brands where values and identity matter, community becomes central.</p><h3>Implementation Challenges</h3><p>Building authentic community is harder than running ad campaigns. It requires patience&#8212;communities develop over time rather than spinning up on demand. It requires authenticity&#8212;people detect and reject corporate manipulation quickly. It requires relinquishing control&#8212;communities develop their own norms and conversations that brands can&#8217;t fully direct.</p><p>These aren&#8217;t impossible challenges, but they require different organizational capabilities. Marketing teams trained in campaign management and funnel optimization need new skills: community management, real-time engagement, conflict resolution, platform moderation.</p><p>There are also cultural barriers. Senior executives trained on funnel marketing may struggle to evaluate community initiatives. The metrics look different, the timeline is longer, and the ROI is less immediately clear.</p><p>For publicly traded companies with quarterly earnings pressure, investing in multi-year community building over near-term performance marketing requires conviction. Leadership needs to believe in the community approach enough to withstand periods where traditional metrics look weaker.</p><p>The shift also affects organizational structure. Community management often sits uncomfortably between marketing, customer service, and product teams. Who owns community? Who gets budget? How do you coordinate across functions?</p><p>Brands that succeed establish community as a first-class function with clear ownership, dedicated resources, and executive sponsorship. It can&#8217;t be an afterthought managed by whoever has spare capacity.</p><h3>Looking Forward</h3><p>Several trends suggest community-led approaches will become more central:</p><p><strong>First, younger consumers expect it.</strong> Gen Z doesn&#8217;t just want products; they want brands that reflect their values and facilitate connection. According to multiple studies, they actively seek brands taking meaningful stances on social issues and contributing to real-world impact.</p><p><strong>Second, paid reach is declining in effectiveness.</strong> Ad blocking, privacy changes, and platform algorithm shifts are making traditional advertising less effective. Community-driven organic reach becomes more valuable as paid reach becomes more expensive.</p><p><strong>Third, AI enables better community management at scale.</strong> Tools can identify trending topics, flag issues requiring attention, and suggest engagement opportunities. 
This makes community management more efficient and scalable.</p><p><strong>Fourth, competition is intensifying everywhere.</strong> Most categories are crowded with similar products at similar prices. Community becomes a defensible competitive advantage that&#8217;s hard to replicate.</p><p>The brands winning in this environment won&#8217;t be those with the biggest ad budgets. They&#8217;ll be those building the strongest communities&#8212;creating spaces where customers connect with each other, share experiences, solve problems, and advocate organically.</p><p>That requires thinking beyond the funnel. Not abandoning measurement or strategic discipline, but recognizing that the most valuable customer relationships can&#8217;t be reduced to conversion rates and click-through percentages.</p><p>The funnel optimizes transactions. Community builds belonging. For an increasing number of brands, belonging matters more.</p>]]></content:encoded></item><item><title><![CDATA[The 84% Problem: Why Most Retail Media Networks Aren't Winning]]></title><description><![CDATA[Market concentration and what it means for brands navigating the retail media landscape]]></description><link>https://www.datatechandtools.com/p/the-84-problem-why-most-retail-media</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-84-problem-why-most-retail-media</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Tue, 18 Nov 2025 13:58:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Retail media is the fastest-growing advertising channel in the US, expected to reach $166 billion in digital spend by 2025. More than 200 retail media networks now exist globally. Airlines, banks, convenience stores, and grocery chains have all launched ad businesses. The opportunity seems clear: retailers have first-party purchase data, advertisers want access to that data, and the economics are attractive for everyone involved.</p><p>Except that&#8217;s not quite how it&#8217;s playing out.</p><p>Amazon and Walmart will capture 84% of all retail media ad spending in 2025. The remaining 200-plus networks will compete for the other 16%. More striking: that 16% share has barely grown&#8212;increasing by less than one percentage point between 2019 and 2024, even as the total market expanded nearly fivefold.</p><p>The market is growing rapidly, but the benefits are concentrating rather than distributing. This has significant implications for advertisers trying to navigate retail media and for retailers contemplating their own ad networks.</p><h3>Why Scale Matters More Than You&#8217;d Think</h3><p>Retail media seems like it should favor specialization. Target knows its customers. Kroger knows its shoppers. Instacart knows online grocery buyers. Each network offers access to distinct audiences with unique purchase behaviors.</p><p>In theory, advertisers should work with multiple networks to reach different customer segments. In practice, most concentrate their spending with four or fewer partners, even though many have relationships with five or more networks. According to a January 2024 Association of National Advertisers study, 58% of US marketers work with at least five retail media networks.</p><p>The disconnect suggests execution challenges. 
Managing campaigns across multiple platforms, each with different reporting standards, different attribution methodologies, and different optimization interfaces, creates operational burden. For brands with limited teams, consolidation around a few large platforms makes practical sense.</p><p>But there&#8217;s a deeper issue: measurement standardization. Advertisers can&#8217;t easily compare performance across retail media networks because networks measure differently. Does a &#8220;view&#8221; mean the same thing on Walmart Connect as on Kroger Precision Marketing? Are &#8220;conversions&#8221; counted consistently? Without standardized metrics, advertisers struggle to allocate budgets efficiently across platforms.</p><p>Industry trade associations have published guidelines for standardization. The IAB released frameworks for incremental measurement in commerce media in October 2025, outlining when experiments, model-based counterfactuals, and econometric models are most appropriate. But adoption has been slow. Networks aren&#8217;t strongly incentivized to standardize when it might reveal unfavorable performance comparisons.</p><p>Amazon and Walmart benefit from this measurement chaos. When cross-platform comparison is difficult, advertisers default to platforms with proven scale and established ROI&#8212;which reinforces the concentration of spend.</p><h3>The Data Story Is More Complicated</h3><p>First-party retail data is the core value proposition of retail media networks. Advertisers can target based on actual purchase behavior rather than inferred intent. Someone who bought diapers last week is a better target for baby products than someone who merely searched for parenting content.</p><p>But access to first-party data isn&#8217;t uniform. Amazon and Walmart have hundreds of millions of customers across diverse product categories. They can build detailed profiles showing purchase patterns over time. A smaller specialty retailer has fewer customers, narrower categories, and less comprehensive purchase history.</p><p>This creates a data quality gap. Amazon can tell you not just that someone bought batteries, but that they buy batteries every three months, always choose premium brands, also buy outdoor equipment, and typically shop on weekends. A specialty outdoor retailer knows someone bought batteries but lacks the broader context.</p><p>For advertisers, richer data enables better targeting, which drives better performance, which justifies more spend. The data gap compounds the scale advantage.</p><p>Walmart&#8217;s $2.3 billion Vizio acquisition in 2024 illustrates how large players are expanding their data assets. The deal gives Walmart connected TV data&#8212;viewing habits, streaming preferences, household composition signals&#8212;that can be linked to purchase data. This creates targeting capabilities that smaller networks can&#8217;t match without similar acquisitions.</p><p>The gap is widening, not closing.</p><h3>The In-Store Challenge</h3><p>Digital retail media&#8212;sponsored products on websites, display ads on mobile apps&#8212;is well-established. In-store retail media represents the next growth frontier, with spending expected to grow 47% in 2025 according to eMarketer forecasts.</p><p>The opportunity makes sense. Most retail sales still happen in physical stores. Digital screens at entrances, checkouts, and end caps can deliver contextually relevant ads to shoppers who are about to make purchase decisions. 
The attribution loop can close tightly: show ad, measure immediate purchase lift.</p><p>But in-store retail media requires significant infrastructure investment. Digital screens, content management systems, data platforms to power targeting, attribution technology to link ad exposure to purchase&#8212;the costs add up quickly. Hy-Vee, which operates 570 Midwest grocery stores, announced plans to partner with Grocery TV to power 10,000 screens across locations. That&#8217;s substantial capital deployment for uncertain returns.</p><p>Large retailers can justify this investment. Amazon owns Whole Foods. Walmart has 4,700 US stores. The fixed costs of technology and infrastructure spread across massive store counts and customer bases create favorable unit economics.</p><p>Smaller retailers face harder math. Installing screens across 50 locations costs roughly the same per store as installing across 500 locations, but the advertising inventory generated is much less valuable. Fewer stores mean fewer impressions, which means less advertiser demand, which means lower CPMs. The investment may not clear the ROI threshold.</p><p>This capital requirement creates another advantage for large players. They can invest in formats that smaller networks can&#8217;t afford, offering advertisers more inventory across more touchpoints.</p><h3>The CTV Play</h3><p>Connected TV represents a particularly lucrative extension of retail media. Retail media CTV ad spending will grow 45.5% in 2025, and one in five CTV ad dollars will go to retail media by 2027, according to eMarketer projections.</p><p>Walmart&#8217;s Vizio acquisition positions the company to capture this spend. Amazon already has substantial CTV inventory through Prime Video and Fire TV. Both can link TV ad exposure to purchase behavior, proving incremental sales lift with closed-loop measurement.</p><p>Most other retail media networks don&#8217;t have owned CTV inventory. They can partner with CTV platforms, but that introduces intermediaries, reduces margins, and limits measurement capabilities. Direct ownership of CTV inventory creates a structural advantage.</p><p>Target, the third-largest retail media network by share, doesn&#8217;t have a clear CTV strategy comparable to Amazon or Walmart. This suggests that even established networks struggle to compete across all formats without significant M&amp;A activity or partnerships.</p><h3>What About Non-Endemic?</h3><p>Endemic advertising&#8212;brands selling products on a retail platform advertising on that same platform&#8212;was the original retail media model. Unilever advertising Dove soap on Walmart.com makes intuitive sense.</p><p>Non-endemic advertising expands the opportunity. Brands that don&#8217;t sell on the platform can still advertise to reach the retailer&#8217;s audience. Insurance companies, auto manufacturers, financial services firms&#8212;categories that aren&#8217;t sold in retail&#8212;can buy ads targeted using retail purchase data.</p><p>Amazon, Walmart, and Best Buy are embracing non-endemic opportunities to expand reach. This requires different infrastructure&#8212;ad formats suitable for awareness and consideration rather than just purchase, attribution models that don&#8217;t rely on immediate transaction data, creative capabilities beyond sponsored product placements.</p><p>It also requires scale. An advertiser considering non-endemic spending wants reach. 
They&#8217;re not targeting the 3% of households that shop at a specific specialty retailer; they want meaningful national reach. Only the largest networks can credibly offer this.</p><p>For smaller networks, non-endemic advertising remains largely theoretical. They lack the audience scale, the technology infrastructure, and the sales relationships with non-endemic advertisers to make it viable.</p><h3>The Margin Story</h3><p>From retailers&#8217; perspective, retail media is extraordinarily profitable. Advertising now accounts for almost a third of Walmart&#8217;s $6.7 billion operating income. For a business operating on thin retail margins, this represents a significant earnings contribution.</p><p>But building and operating a retail media network requires investment: technology platforms, data infrastructure, sales teams, advertiser support, creative services. These are fixed costs that need to be covered by ad revenue.</p><p>Large networks achieve strong unit economics. Amazon&#8217;s retail media business likely operates at 60%+ margins&#8212;mostly software and sales, minimal variable costs. As revenue scales, margins improve.</p><p>Smaller networks face different math. A network generating $10 million annually in ad revenue might spend $4-5 million on platform costs, sales, and support. A network generating $100 million might spend $15-20 million on the same functions. Margins improve substantially with scale.</p><p>Coresight Research projects that retailers can expect retail media networks to generate a 70% increase in gross margin compared to core retail operations. But that projection likely applies more to large networks than small ones. A specialty retailer with limited ad inventory and high operational costs relative to revenue may see much lower margins&#8212;perhaps not enough to justify continued investment.</p><h3>The Advertiser&#8217;s Dilemma</h3><p>For brands, the concentration creates practical challenges. Working with Amazon and Walmart makes sense&#8212;they deliver scale, proven ROI, and sophisticated targeting. But relying exclusively on two platforms limits reach and creates dependency.</p><p>The promise of retail media was that brands could reach customers across multiple purchase environments, tailoring messages to different contexts. That vision requires a healthy ecosystem of diverse networks.</p><p>Instead, advertisers face a binary choice: work primarily with the two dominant platforms and accept limited reach, or spread budgets across many smaller networks and accept operational complexity, inconsistent measurement, and uncertain ROI.</p><p>Some categories have no choice. If you sell grocery products, you need to advertise on Amazon and Walmart. But you also need to reach shoppers at regional chains, specialty grocers, and meal delivery services. Concentration at the top means underinvestment in the long tail.</p><h3>Where This Leads</h3><p>Three scenarios seem possible:</p><p><strong>First, continued concentration.</strong> The measurement and infrastructure challenges persist. More ad dollars flow to Amazon and Walmart. Smaller networks struggle to prove ROI, lose advertiser support, and eventually shut down or exist as minimal operations.</p><p><strong>Second, consolidation.</strong> Mid-sized networks merge to achieve greater scale. We might see regional grocery chains pooling their retail media operations, specialty retailers forming alliances, or acquisitions by media companies looking to enter retail media. 
This could create a few networks with enough scale to compete, even if they don&#8217;t match Amazon and Walmart.</p><p><strong>Third, standardization.</strong> Industry bodies successfully push measurement standardization. Advertisers can compare performance across platforms. Technology infrastructure becomes available as white-label solutions, reducing fixed costs. Smaller networks prove ROI in specific niches and capture profitable segments. The market becomes more fragmented but healthier.</p><p>The second scenario seems most likely. We&#8217;re already seeing moves in this direction&#8212;Criteo offers retail media platforms to multiple retailers including Target, CVS, and Best Buy. These infrastructure partnerships lower barriers and enable smaller players to offer sophisticated capabilities without building from scratch.</p><p>But even with consolidation and standardization, the fundamental advantages of scale, data depth, and CTV ownership favor the largest players. Retail media may always be a concentrated market.</p><h3>What Advertisers Should Do</h3><p>Given this landscape, what&#8217;s the practical path forward for brands?</p><p><strong>First, establish baseline performance on the big platforms.</strong> Amazon and Walmart are where most advertisers need to start. Build competency there, understand what good looks like, and establish performance benchmarks.</p><p><strong>Second, test selectively on smaller networks.</strong> Don&#8217;t write off all non-Amazon/Walmart networks. Test a few that reach your specific audience or offer unique inventory. Measure carefully and expand what works.</p><p><strong>Third, demand better measurement.</strong> Push networks to provide incrementality testing, not just attributed conversions. Insist on standardized metrics where possible. Vote with your budgets for platforms that offer transparent performance data.</p><p><strong>Fourth, watch for consolidation opportunities.</strong> As networks merge or offer cross-platform buying, efficiency may improve. Stay informed about partnerships and infrastructure developments that could simplify multi-network campaigns.</p><p><strong>Fifth, plan for a concentrated future.</strong> Amazon and Walmart will likely remain dominant. Build your retail media strategy accepting this reality rather than hoping for a more distributed market.</p><p>The 84% problem isn&#8217;t going away soon. The forces driving concentration&#8212;measurement challenges, scale advantages, data depth, infrastructure costs&#8212;are structural rather than temporary. 
Brands that understand and adapt to this reality will navigate retail media more effectively than those hoping for a different market structure.</p>]]></content:encoded></item><item><title><![CDATA[When Your Chatbot Lies: The Liability Landscape Taking Shape in 2025]]></title><description><![CDATA[Understanding emerging legal standards as AI-generated content intersects with defamation law]]></description><link>https://www.datatechandtools.com/p/when-your-chatbot-lies-the-liability</link><guid isPermaLink="false">https://www.datatechandtools.com/p/when-your-chatbot-lies-the-liability</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Mon, 17 Nov 2025 13:53:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Customer service chatbots are now routine across industries. They handle returns, answer product questions, and troubleshoot technical issues at scale. But they also hallucinate, misquote policies, and occasionally make false statements about real people&#8212;creating a liability exposure that many businesses haven&#8217;t fully considered.</p><p>The legal framework for AI-generated defamation is being established right now, through court cases that will define how companies are held responsible when their chatbots spread falsehoods. For businesses deploying conversational AI, understanding this emerging landscape isn&#8217;t optional.</p><h3>The Cases Establishing Precedent</h3><p>In May 2025, a Georgia court dismissed a defamation lawsuit brought by radio host Mark Walters against OpenAI after ChatGPT allegedly generated false claims that he had defrauded and embezzled funds from a gun rights organization. Judge Tracie Cason ruled that Walters had not proven defamation or that OpenAI acted with fault.</p><p>The court recognized OpenAI&#8217;s warnings about potential inaccuracies and efforts to reduce errors. This suggests that prominent disclaimers and responsible AI design may offer some protection against liability&#8212;though the broader question of AI accountability remains unresolved.</p><p>Other cases are moving forward. In April 2025, activist Robby Starbuck sued Meta after its AI chatbot allegedly produced false statements linking him to the Capitol riot, Holocaust denial, and child endangerment. Rather than litigate, Meta settled. The terms weren&#8217;t disclosed, but the settlement itself signals that companies see real risk in AI defamation claims.</p><p>Google also faced a suit from Wolf River Electric, a Minnesota solar company, after Google&#8217;s AI Overview erroneously claimed the state attorney general was suing them. The company alleged lost revenue including a $150,000 contract. The case argues that AI-generated content doesn&#8217;t qualify for Section 230 immunity since it&#8217;s not third-party speech.</p><p>These early cases share common elements: AI systems making false factual claims about specific people or businesses, reputational harm, and defendants arguing that warnings and good-faith efforts to prevent errors should limit liability.</p><h3>Section 230 Won&#8217;t Save Everyone</h3><p>Section 230 of the Communications Decency Act has protected internet platforms from liability for user-generated content since 1996. 
If someone posts something defamatory on Facebook, Facebook typically can&#8217;t be sued&#8212;the liability falls on the person who posted it.</p><p>But Section 230 only protects platforms from third-party speech. It doesn&#8217;t cover first-party speech&#8212;content the platform itself generates.</p><p>This distinction matters enormously for AI chatbots. When a chatbot generates text, who&#8217;s the speaker? Is it &#8220;third-party&#8221; content (from the AI, which learned from various sources) or &#8220;first-party&#8221; content (from the company that operates the chatbot)?</p><p>Recent court decisions suggest Section 230 immunity may not apply to AI-generated speech. In a 2025 case involving TikTok&#8217;s algorithm, a federal appeals court found that TikTok wasn&#8217;t protected by Section 230 because the algorithm&#8217;s recommendations constituted first-party editorial decisions, not merely hosting of third-party content.</p><p>If courts extend this reasoning to generative AI, companies can&#8217;t rely on Section 230 to shield them from liability when their chatbots make false statements. The legal protection that platforms have enjoyed for nearly three decades may not apply to this new technology.</p><h3>The Air Canada Precedent</h3><p>Even before defamation concerns, businesses learned that chatbots create binding obligations.</p><p>In 2024, Air Canada&#8217;s chatbot told a passenger he could get a bereavement discount retroactively&#8212;contradicting the company&#8217;s actual policy requiring advance approval. When the passenger applied for the discount after travel and was denied, he sued. The British Columbia Civil Resolution Tribunal ruled that Air Canada was liable for its chatbot&#8217;s statement.</p><p>The holding was straightforward: the chatbot was an agent of the company. Air Canada was bound by what its agent told customers, regardless of whether that information was accurate.</p><p>This case established that companies can&#8217;t disclaim responsibility for their chatbots&#8217; statements simply because the chatbot made an error. Under agency law, when businesses deploy chatbots to interact with customers, those chatbots have apparent authority to speak for the company.</p><p>The implications extend beyond customer service. If a chatbot has apparent authority to represent the company, and it makes a defamatory statement, the company could be liable for that defamation&#8212;just as it would be if an employee made the same false statement.</p><h3>The Four Types of AI Defamation Risk</h3><p>Attorneys tracking AI defamation cases have identified four categories of risk:</p><p><strong>Hallucination:</strong> The AI invents false information entirely. This is what happened in the Walters case&#8212;ChatGPT fabricated an embezzlement claim with no factual basis.</p><p><strong>Juxtaposition:</strong> The AI combines accurate information in misleading ways. For example, correctly identifying someone&#8217;s name and correctly identifying that a lawsuit exists, but wrongly connecting the person to the lawsuit.</p><p><strong>Omission:</strong> The AI leaves out crucial context that would make a statement accurate instead of defamatory. Saying someone was &#8220;arrested&#8221; without mentioning they were released without charges, for instance.</p><p><strong>Misquote:</strong> The AI attributes false statements to real people. 
Google&#8217;s Gemma chatbot told a user that Senator Marsha Blackburn had been accused of rape by a state trooper&#8212;a completely fabricated claim.</p><p>Each type presents different challenges for prevention. Hallucinations might be reduced through better training and grounding in factual sources. Juxtaposition errors require better context handling. Omissions need completeness checks. Misquotes require source verification.</p><p>But none can be eliminated entirely with current technology. AI systems make mistakes. The question is who bears legal responsibility when they do.</p><h3>The Publisher vs. Distributor Distinction</h3><p>Traditional media law distinguishes between publishers and distributors. Publishers&#8212;newspapers, book publishers&#8212;exercise editorial control and can be held liable for defamatory content they publish. Distributors&#8212;bookstores, newsstands&#8212;don&#8217;t review everything they distribute and have more limited liability.</p><p>Under this framework, where do AI companies fall? They don&#8217;t write the specific outputs, but they do train the models, set the parameters, and control what types of content get generated. That&#8217;s more like a publisher than a distributor.</p><p>However, they also lack control over specific outputs in the same way a newspaper editor controls specific articles. They can&#8217;t review every chatbot response before it&#8217;s delivered.</p><p>Courts are still working out this classification. Some legal scholars argue AI companies should be treated as publishers because they control the technology that generates content. Others argue they&#8217;re more like distributors because they don&#8217;t dictate specific outputs.</p><p>The answer will likely depend on the degree of human oversight and control. A chatbot with extensive human review before responses go live might create publisher-level liability. A fully automated system with no review might be treated more like a distributor. Most business deployments fall somewhere in between.</p><h3>What Businesses Can Do Now</h3><p>While courts establish legal standards, businesses deploying chatbots need practical risk management strategies.</p><p><strong>First, implement stronger grounding.</strong> Chatbots that rely solely on training data are more likely to hallucinate. Systems grounded in verified knowledge bases, with retrieval-augmented generation pulling from reliable sources, reduce fabrication risk.</p><p><strong>Second, limit authority explicitly.</strong> Make it clear what the chatbot can and cannot do. Air Canada could have configured its chatbot to only provide information directly from policy documents, not to interpret or extrapolate. Clear limitations reduce apparent authority.</p><p><strong>Third, use prominent disclaimers.</strong> While not a complete defense, clear warnings that the chatbot may make errors help establish that users shouldn&#8217;t rely on unverified information. The Walters court cited OpenAI&#8217;s extensive warnings as a factor in dismissal.</p><p><strong>Fourth, implement monitoring and correction processes.</strong> When users report false information, have systems in place to quickly verify, correct, and update the model. Meta&#8217;s alleged failure to fix errors despite notification strengthened the Starbuck case.</p><p><strong>Fifth, consider human review for high-stakes interactions.</strong> Chatbots handling sensitive topics&#8212;medical advice, legal information, accusations of wrongdoing&#8212;present higher risk. 
Human review before responses are delivered adds friction but reduces liability exposure.</p><p><strong>Sixth, maintain detailed logs.</strong> If sued, you&#8217;ll need to show what the chatbot said, when, and what efforts you made to prevent errors. Comprehensive logging enables effective defense.</p><h3>State Legislation Is Coming</h3><p>While courts work out common law standards, legislators are beginning to address AI risks directly. Texas passed the Responsible AI Governance Act in June 2025, establishing liability for certain intentional AI abuses including using AI to facilitate crimes, create deepfakes, or engage in unlawful discrimination. Violations carry fines up to $200,000.</p><p>While AI defamation isn&#8217;t explicitly covered, the law signals growing legislative attention. California passed its own AI governance bill, and federal legislation has been introduced (though not yet passed).</p><p>These statutes likely won&#8217;t create broad private rights of action for AI defamation&#8212;most follow the Texas model of giving enforcement power to attorneys general. But they establish that AI operators can face financial penalties for certain harms, even if private lawsuits aren&#8217;t allowed.</p><p>Businesses should expect regulation to continue developing at state level while federal frameworks remain in debate. Multi-state operations will need to comply with varying requirements.</p><h3>The Reasonable Reliance Question</h3><p>One defense argument gaining attention: can anyone reasonably rely on AI-generated information?</p><p>ChatGPT and similar tools prominently state they may produce inaccurate information. Users are advised to verify important information independently. If these warnings are sufficiently prominent, can someone claim they reasonably believed false AI-generated statements?</p><p>This argument succeeded in the Walters case. The court found that OpenAI&#8217;s extensive warnings meant Walters couldn&#8217;t reasonably rely on ChatGPT&#8217;s output as fact.</p><p>But this defense has limits. It might work for general-purpose chatbots like ChatGPT. It&#8217;s less convincing for customer service chatbots specifically designed to provide accurate company information. A customer asking Air Canada&#8217;s chatbot about bereavement policies is acting reasonably by trusting the answer&#8212;that&#8217;s what the chatbot is for.</p><p>The reasonable reliance question will likely depend on context. The more specialized and authoritative the chatbot appears, the more reasonable it is to trust its statements, and the less effective disclaimer defenses become.</p><h3>Looking at Other Jurisdictions</h3><p>In Australia, former mayor Brian Hood threatened defamation proceedings against OpenAI after ChatGPT falsely claimed he was imprisoned for bribery. He sent a concerns notice&#8212;the formal first step in Australian defamation proceedings&#8212;but ultimately didn&#8217;t pursue the claim further.</p><p>The case highlighted how AI defamation risk varies by jurisdiction. Australian defamation law places the burden of proof on defendants to show their statements were true. This is more plaintiff-friendly than U.S. law, where plaintiffs generally must prove falsity.</p><p>U.K. law similarly favors defamation plaintiffs. For multinational companies, this creates complex liability exposure. 
A chatbot deployed globally could face defamation claims in jurisdictions with varying legal standards, where Section 230-style protections don&#8217;t exist, and where proving truth rather than falsity is required.</p><h3>The Broader Business Implication</h3><p>AI defamation risk exists within a larger context of AI liability. Companies are also facing copyright infringement claims for training data, privacy violations for data handling, negligent misrepresentation for incorrect advice, and product liability theories for harm caused by AI systems.</p><p>The common thread: existing legal frameworks weren&#8217;t designed for AI-generated content. Courts and legislators are adapting these frameworks in real time. The standards being established now will govern AI liability for years.</p><p>For businesses, this uncertainty requires careful risk assessment. The potential benefits of deploying conversational AI&#8212;reduced support costs, faster customer service, 24/7 availability&#8212;must be weighed against legal exposure that&#8217;s still being defined.</p><p>Some companies may decide the risk isn&#8217;t worth it and maintain human-only customer service for sensitive interactions. Others may accept the risk while implementing strong safeguards. There&#8217;s no universal answer.</p><p>But ignoring the risk isn&#8217;t viable. The cases establishing AI defamation standards are happening now. The companies involved are learning expensive lessons. Better to learn from their experience than to repeat it.</p>]]></content:encoded></item><item><title><![CDATA[Speaking the Same Language: How Incrementality Is Changing Marketing and Finance Conversations]]></title><description><![CDATA[Why Finance Cares About Uncertainty]]></description><link>https://www.datatechandtools.com/p/speaking-the-same-language-how-incrementality</link><guid isPermaLink="false">https://www.datatechandtools.com/p/speaking-the-same-language-how-incrementality</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sun, 16 Nov 2025 13:36:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Moving beyond attribution theater to measurement that CFOs actually trust</h2><p>Marketing and finance teams have historically operated with different success metrics, different planning horizons, and fundamentally different views on what constitutes proof. This misalignment has real costs: campaigns get cut during budget reviews, growth investments get delayed, and teams spend more time defending past decisions than planning future ones.</p><p>The gap isn&#8217;t new. What&#8217;s changed is that more companies now have a practical way to bridge it: incrementality testing. Over half of US brand and agency marketers used incrementality testing in 2025, according to EMARKETER and TransUnion data, and 36.2% plan to invest more in it over the next year.</p><p>But adoption numbers don&#8217;t tell the full story. 
The more interesting shift is how these tests are changing the conversations between marketing and finance&#8212;and what that means for organizations trying to prove marketing&#8217;s contribution to business outcomes.</p><h3>The Attribution Theater Problem</h3><p>Most marketing measurement relies on attribution: matching conversions to ad exposures based on user behavior. Click on an ad, buy the product, and that ad gets credit. It&#8217;s clean, it&#8217;s trackable, and it&#8217;s fundamentally misleading.</p><p>Attribution conflates correlation with causation. If someone clicks a branded search ad and then purchases, did the ad cause the purchase? Or would that person have bought anyway, since they were already searching for the brand by name?</p><p>Facebook might report a 3.7x ROAS on a campaign. That number represents all purchases made by people who saw or clicked the ads. It doesn&#8217;t represent purchases that happened because of the ads&#8212;the purchases that wouldn&#8217;t have occurred without the advertising spend.</p><p>Finance teams understand this distinction intuitively. When they ask &#8220;what&#8217;s the ROI on this campaign,&#8221; they&#8217;re asking a causal question: what did this investment cause to happen? Attribution models answer a different question entirely: what did we track among people who saw our ads?</p><p>This gap creates what could be called attribution theater&#8212;the presentation of correlation metrics as if they prove causation. Marketing teams report ROAS numbers with confidence intervals, build elaborate dashboards, and forecast based on platform-reported metrics. Finance teams nod along but remain skeptical, knowing these numbers tend to overstate impact.</p><p>The result is chronic mistrust. Marketing can&#8217;t prove their claims. Finance can&#8217;t validate investments. Both sides retreat to their corners, frustrated.</p><h3>What Incrementality Actually Measures</h3><p>Incrementality testing flips the measurement approach. Instead of tracking who converted after seeing ads, it measures what wouldn&#8217;t have happened without the ads.</p><p>The methodology resembles clinical drug trials. Split a comparable population into test and control groups. Show ads to the test group. Show nothing (or different ads) to the control group. Measure the difference in outcomes.</p><p>If the test group generates 1,250 purchases and the control group generates 1,000 purchases, the campaign drove 250 incremental purchases&#8212;a 25% lift. That&#8217;s what the advertising caused. Everything else would have happened organically.</p><p>Google recently lowered the minimum budget for incrementality tests to $5,000, down from previous thresholds approaching $100,000. The platform uses Bayesian statistical methodology, which requires less data than traditional frequentist approaches. This makes causal measurement accessible to mid-market advertisers, not just enterprises with massive budgets.</p><p>The key distinction: incrementality tests answer finance&#8217;s question directly. They prove causation, not just correlation.</p><h3>Why Finance Cares About Uncertainty</h3><p>Here&#8217;s what makes incrementality different in finance conversations: it quantifies uncertainty.</p><p>Traditional marketing reports present point estimates. &#8220;Facebook delivered 3.7x ROAS.&#8221; One number, stated with confidence. 
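</p><p>An incrementality readout looks different. Below is a minimal sketch of the test-versus-control arithmetic described earlier, using the illustrative 1,250-versus-1,000 purchase counts. It assumes equal-sized groups and applies a simple normal approximation to the count difference; production systems, including the Bayesian approach Google uses, are more sophisticated.</p><pre><code># Minimal incrementality readout: a lift estimate plus a confidence interval.
# Illustrative numbers from the example above; assumes equal-sized test and
# control groups and treats purchase counts as independent Poisson counts.
from math import sqrt

test_purchases = 1250     # purchases in the group shown ads
control_purchases = 1000  # purchases in the holdout group

incremental = test_purchases - control_purchases   # 250 purchases
lift = incremental / control_purchases              # 0.25, i.e. a 25% lift

# Rough 95% interval on the incremental count (normal approximation).
se = sqrt(test_purchases + control_purchases)
low = incremental - 1.96 * se
high = incremental + 1.96 * se

print(f"Incremental purchases: {incremental} (95% CI {low:.0f} to {high:.0f})")
print(f"Lift: {lift:.0%} (95% CI {low / control_purchases:.0%} "
      f"to {high / control_purchases:.0%})")
</code></pre><p>The output is a range, roughly 16% to 34% lift, rather than a single number. 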
Finance teams know better than to trust single-point estimates for anything&#8212;revenue projections, cost forecasts, risk assessments all come with ranges.</p><p>Incrementality tests produce confidence intervals. &#8220;We estimate Facebook&#8217;s incremental ROI is between 3.2x and 4.5x.&#8221; This acknowledges that true incrementality is unknowable&#8212;we can only estimate it within a range, with a certain level of confidence.</p><p>Counterintuitively, this uncertainty makes the measurement more credible to finance. The confidence interval signals intellectual honesty. It acknowledges the limits of measurement and quantifies the precision of the estimate.</p><p>For financial planning, ranges are more useful than false precision. A CFO can model scenarios using the low end of the range (3.2x) for conservative forecasts and the high end (4.5x) for aggressive growth plans. One number doesn&#8217;t allow for scenario planning.</p><p>This shared language&#8212;estimates with confidence intervals rather than precise-looking but unreliable point estimates&#8212;creates common ground. Both teams can discuss decisions using the same measurement framework.</p><h3>The Forecasting Problem</h3><p>The attribution theater problem becomes especially acute during budget planning. Marketing teams extrapolate from platform-reported metrics. Finance teams model cash flows based on those extrapolations. Forecasts consistently miss.</p><p>Why? Because inflated attribution numbers get plugged into financial models. If Facebook reports 5x ROAS but true incrementality is 3x, scaling spend based on the 5x number will disappoint. Revenue won&#8217;t materialize as projected. Budgets get cut. Trust erodes further.</p><p>BrandAlley, a UK-based fashion eCommerce company launching over 1,000 campaigns annually, faced exactly this issue. They implemented incrementality testing through marketing mix modeling to understand true causal impact across channels. The results showed material differences between platform-reported performance and actual lift.</p><p>Armed with better numbers, they could forecast accurately. Finance could trust the projections. Marketing could defend budgets with causal evidence rather than correlation metrics.</p><p>The difference isn&#8217;t just about measurement accuracy. It&#8217;s about breaking the cycle of missed forecasts, budget cuts, and eroded trust. When both teams use the same causally-valid metrics, forecasts improve, and organizations can plan with confidence.</p><h3>Implementation Challenges</h3><p>Adopting incrementality testing isn&#8217;t frictionless. According to research from Skai and Path to Purchase Institute, a third of CPG brand marketers measure incrementality only at a basic level. The top barriers are concerns about accuracy (44% of respondents), difficulty applying incrementality across different ad types and retailers (43%), and limited tools or technologies (41%).</p><p>These concerns are legitimate. Not everything can be tested easily. Brand campaigns that run continuously for awareness may not have natural holdout groups. Small-budget campaigns may lack statistical power to detect lift. Some channels, like linear TV, present geographic and technical constraints.</p><p>There are also opportunity costs. Every incrementality test withholds advertising from control groups, potentially sacrificing sales during the test period. 
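</p><p>The size of that sacrifice can be estimated before committing to a test. A back-of-envelope sketch, in which every input is an illustrative assumption rather than a figure from the research cited here:</p><pre><code># Back-of-envelope cost of holding out part of the audience during a test.
# Every number below is an illustrative assumption.
holdout_share = 0.10               # fraction of the audience withheld from ads
weekly_incremental_orders = 2500   # orders the campaign normally causes per week
test_weeks = 4
avg_order_value = 75.0             # dollars per order
contribution_margin = 0.25         # profit retained per dollar of revenue

foregone_orders = holdout_share * weekly_incremental_orders * test_weeks
foregone_profit = foregone_orders * avg_order_value * contribution_margin

print(f"Orders foregone during the test: {foregone_orders:.0f}")
print(f"Contribution profit foregone: ${foregone_profit:,.0f}")
</code></pre><p>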
For companies operating on thin margins, this represents real financial risk.</p><p>But the alternative&#8212;continuing to make decisions based on misleading attribution data&#8212;carries risk too. The organizations seeing success are those that acknowledge these constraints upfront and build testing into their planning cycles.</p><h3>What Worked for Finance Buy-In</h3><p>Organizations that successfully bridged the marketing-finance gap using incrementality followed several patterns:</p><p><strong>Start with joint education.</strong> Get both teams aligned on what incrementality measures, why it matters, and what the limitations are. No surprises.</p><p><strong>Frame tests as measurement investments.</strong> Finance teams understand that better data improves decisions. Position incrementality testing as infrastructure that improves capital allocation, not as a marketing expense.</p><p><strong>Test where disagreement exists.</strong> Focus initial tests on the channels where marketing and finance most disagree about performance. Resolving those debates quickly demonstrates value.</p><p><strong>Establish regular testing cadence.</strong> Quarterly tests for major channels, less frequent tests for smaller channels. Predictable schedule reduces friction.</p><p><strong>Document what can&#8217;t be measured.</strong> Some effects&#8212;long-term brand building, word-of-mouth, customer lifetime value beyond immediate conversion&#8212;don&#8217;t show up in incrementality tests. Acknowledge this explicitly.</p><p>This last point matters. Incrementality testing measures short-term direct response. It doesn&#8217;t capture every marketing benefit. But being explicit about what you&#8217;re not measuring builds credibility for what you are measuring.</p><h3>The Bigger Shift</h3><p>The adoption of incrementality testing reflects a larger change in how organizations think about marketing.</p><p>For decades, marketing operated somewhat separately from core business operations. It was a creative function, difficult to measure precisely, judged partly on intuition and brand health metrics that didn&#8217;t translate directly to P&amp;L impact.</p><p>That model worked in an era of limited measurement capability. You couldn&#8217;t easily run controlled experiments at scale. You couldn&#8217;t quickly test creative variations. You relied on annual brand studies and hoped for correlation between brand metrics and sales.</p><p>The shift toward incrementality-based measurement represents marketing becoming more integrated with business operations. Marketing claims can be tested the same way product changes get A/B tested or pricing strategies get validated.</p><p>This doesn&#8217;t mean eliminating creativity or intuition. It means having a reliable way to prove which creative risks paid off, which channels drove real growth, and which investments should be scaled.</p><h3>Looking Forward</h3><p>The incrementality testing market has matured quickly. Platforms like Measured, TransUnion, Rockerbox, and Sellforte now offer incrementality-as-a-service. Data clean rooms like Amazon Marketing Cloud and Snowflake provide privacy-safe environments for running tests. AI tools help automate reporting, with half of US brand and agency marketers adopting AI or machine learning for this purpose.</p><p>The IAB recently released guidelines for incremental measurement in commerce media, outlining when experiments, model-based counterfactuals, econometric models, and hybrid approaches work best. 
Industry standardization is happening.</p><p>As tools improve and costs decrease, incrementality testing will likely become baseline capability rather than advanced technique. The question will shift from &#8220;should we test incrementality?&#8221; to &#8220;how do we integrate incrementality insights into planning workflows?&#8221;</p><p>For the marketing-finance relationship, this matters. When both teams trust the same measurement methodology, conversations become about strategy rather than measurement validity. Instead of debating whether the marketing numbers are real, they can debate which opportunities to pursue.</p><p>That&#8217;s not just better measurement. It&#8217;s better business decision-making.</p>]]></content:encoded></item><item><title><![CDATA[The Quiet Shift: What Amazon's Transparency Gap Tells Us About the DSP Market]]></title><description><![CDATA[Understanding the trade-offs between scale and scrutiny in programmatic advertising]]></description><link>https://www.datatechandtools.com/p/the-quiet-shift-what-amazons-transparency</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-quiet-shift-what-amazons-transparency</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Sat, 15 Nov 2025 13:33:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Understanding the trade-offs between scale and scrutiny in programmatic advertising</h2><p>The programmatic advertising market is experiencing a notable realignment. While The Trade Desk commands roughly 20% of the independent DSP market and Google&#8217;s DV360 maintains its dominant position, Amazon DSP has quietly emerged as a formidable alternative&#8212;particularly for advertisers willing to accept less transparency in exchange for other benefits.</p><p>This raises a practical question for marketers: what does it mean when a major DSP doesn&#8217;t provide log-level data, and who&#8217;s actually comfortable with that trade-off?</p><h3>The Data Visibility Issue</h3><p>Log-level data&#8212;the granular, impression-by-impression reporting that shows exactly where ads ran, how much was paid, and what resulted&#8212;has long been considered table stakes for sophisticated advertisers. This level of detail allows teams to verify that budgets weren&#8217;t wasted on low-quality inventory, detect potential fraud, and understand the true path to conversion beyond platform-reported metrics.</p><p>Amazon DSP, despite handling an estimated 7.5% of retail media dollars (which translates to billions in annual spend), has notably limited log-level data access compared to competitors. According to multiple ad tech platforms tracking programmatic bidding patterns, some agency holding companies have been shifting meaningful portions of their Q3 spend from The Trade Desk to Amazon DSP&#8212;even with this transparency limitation.</p><p>The question isn&#8217;t whether Amazon lacks transparency. That&#8217;s established. The question is why sophisticated advertisers are comfortable with it.</p><h3>Following the Incentives</h3><p>The math on Amazon&#8217;s fee structure is straightforward. 
The platform charges no fees for programmatic guaranteed deals on Amazon-owned media and collects just 1% for ads on open web publishers&#8212;significantly below The Trade Desk&#8217;s roughly 20% take rate. For large advertisers spending millions monthly, this difference compounds quickly.</p><p>There&#8217;s also the relationship angle. When Omnicom won Amazon&#8217;s US marketing business in 2024, industry observers noted that it would likely influence programmatic spending patterns. That prediction appears to have materialized. Multiple sources familiar with programmatic bidding confirmed to AdExchanger that they observed what looked like a double-digit share of expected Q3 spend moving from The Trade Desk to Amazon DSP within Omnicom.</p><p>But cost savings and client relationships don&#8217;t fully explain the shift. The real differentiator may be Amazon&#8217;s first-party retail data&#8212;the ability to target based on actual purchase behavior rather than inferred intent. For many advertisers, particularly those in retail and CPG, this closed-loop measurement may matter more than impression-level transparency.</p><h3>The Sophistication Question</h3><p>This situation prompts a more interesting consideration: are we redefining what &#8220;sophisticated&#8221; means in media buying?</p><p>Traditionally, sophisticated advertisers demanded full transparency&#8212;detailed reporting, independent verification, and control over every aspect of the media supply chain. That model emerged from an era when fraud was rampant and programmatic was the &#8220;Wild West&#8221; of digital advertising.</p><p>But the market has matured. Walled gardens like Facebook and Google have trained advertisers to accept limited visibility in exchange for scale and performance. According to recent data, the DSP market is projected to grow from $38.92 billion in 2025 to $148.92 billion by 2032, with much of that growth coming from platforms that offer performance over transparency.</p><p>Some sophisticated advertisers may now be calculating that fraud detection and viewability verification matter less than they did five years ago&#8212;especially when advertising on first-party retail properties where the media quality is generally higher. They may be prioritizing performance measurement and cost efficiency over the ability to audit every impression.</p><h3>The Measurement Challenge</h3><p>The incrementality testing movement provides context here. Over half of US brand and agency marketers now use incrementality testing to measure campaigns, according to July 2025 data from EMARKETER and TransUnion. Google recently lowered its incrementality testing threshold to $5,000, making this type of causal measurement more accessible.</p><p>If advertisers can prove that Amazon DSP drives incremental sales through controlled experiments, does it matter whether they can see every log file? The incrementality test would capture the true lift regardless of the black box nature of the platform.</p><p>This represents a philosophical shift from forensic transparency (examining every impression) to outcome-based validation (proving the campaign caused sales that wouldn&#8217;t have happened otherwise). The former requires detailed logs; the latter requires good experimental design.</p><h3>Market Structure Implications</h3><p>The broader DSP market shows clear concentration. According to recent analysis, three major players&#8212;DV360, Amazon DSP, and The Trade Desk&#8212;control 86% of market share. 
While Amazon has the highest advertising revenue, Google&#8217;s DV360 maintains the largest market share, suggesting different monetization strategies.</p><p>The Trade Desk showed 26% growth in 2024, outpacing both the overall DSP market growth rate (23%) and Amazon&#8217;s advertising growth (18%). This suggests The Trade Desk is gaining share in certain segments, even as it potentially loses large accounts like Omnicom.</p><p>The market appears to be segmenting by advertiser needs: sophisticated direct-response advertisers who need granular optimization may stick with The Trade Desk, while brand advertisers focused on retail media and closed-loop measurement may migrate toward Amazon.</p><h3>The Infrastructure Question</h3><p>There&#8217;s another practical consideration: OpenPath, The Trade Desk&#8217;s direct publisher connection, bypasses other ad tech intermediaries. This means spending through OpenPath wouldn&#8217;t be visible to SSPs and other platforms that typically observe bidstream data.</p><p>If Omnicom or other holding companies significantly increased OpenPath usage, it could appear to outside observers that they reduced Trade Desk spending, when in reality they just changed how they bought inventory. The Trade Desk declined to comment on whether Omnicom uses OpenPath extensively, making this impossible to verify.</p><p>This highlights a challenge with analyzing programmatic market shifts: much of the data comes from intermediaries who have incomplete visibility. Real spending patterns may differ significantly from what can be observed.</p><h3>What This Means for Advertisers</h3><p>For marketers evaluating DSP options in 2025, the Amazon situation surfaces several practical questions:</p><p><strong>First, what are you optimizing for?</strong> If preventing Made-for-Advertising sites and ensuring brand safety are top priorities, platforms with robust log-level reporting may remain essential. If you&#8217;re focused on proving incremental sales lift and are comfortable with Amazon&#8217;s first-party inventory quality, transparency may matter less.</p><p><strong>Second, how do you measure success?</strong> If your organization relies on multi-touch attribution models that require impression-level data, limited transparency is a dealbreaker. If you use incrementality testing or media mix modeling, you can work with less granular data.</p><p><strong>Third, what&#8217;s the sophistication of your fraud prevention?</strong> Amazon&#8217;s owned-and-operated properties have inherently less fraud risk than open programmatic exchanges. If most of your spend is on first-party retail inventory, the fraud detection capabilities enabled by log-level data become less critical.</p><p><strong>Fourth, what&#8217;s the relationship context?</strong> The Omnicom-Amazon example suggests that client relationships can drive platform decisions. Holding companies and agencies need to balance getting the best results for current clients with maintaining flexibility for future business.</p><h3>The Longer View</h3><p>The DSP market is expected to reach $804.02 billion by 2035, according to recent forecasts. This growth will be driven primarily by retail media networks, connected TV, and other channels where first-party data enables closed-loop measurement.</p><p>In that environment, the traditional definition of transparency&#8212;seeing every impression&#8212;may become less relevant. 
What matters is proving causality: did the advertising cause outcomes that wouldn&#8217;t have happened otherwise?</p><p>Amazon&#8217;s transparency limitations may seem like a disadvantage in 2025. By 2030, they may be irrelevant if the market fully embraces outcome-based measurement over process-based auditing.</p><p>For now, the answer to &#8220;who&#8217;s comfortable advertising without log-level data&#8221; appears to be: more sophisticated advertisers than you might expect, as long as they have other ways to validate performance.</p>]]></content:encoded></item><item><title><![CDATA[Publishers Face a New Reality: When Search Traffic Becomes Optional]]></title><description><![CDATA[How a 27% traffic drop is forcing publishers to rebuild their business models around owned audiences instead of borrowed attention]]></description><link>https://www.datatechandtools.com/p/publishers-face-a-new-reality-when</link><guid isPermaLink="false">https://www.datatechandtools.com/p/publishers-face-a-new-reality-when</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Fri, 14 Nov 2025 16:23:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Numbers Don&#8217;t Lie, But They Tell Different Stories</h2><p>Between May and June 2025, major publishers watched their Google search referral traffic drop 10% year-over-year. Non-news brands saw declines of 14%. News brands dropped 7%. These aren&#8217;t outliers&#8212;traffic losses between 1% and 25% became the new normal for most sites tracked by Digital Content Next.</p><p>But here&#8217;s what makes this different from previous algorithm updates or platform changes: this decline isn&#8217;t about ranking factors you can optimize. It&#8217;s about whether people need to click at all. Zero-click searches increased from 56% to 69% between May 2024 and May 2025. Google&#8217;s AI Overviews answer questions right there on the search results page. Why visit a travel site when the AI already told you the best time to visit Iceland?</p><p>Some publishers lost more than just a few percentage points. The Planet D, a travel blog, shut down after traffic dropped 90%. Chegg, the education platform, reported a 49% decline in non-subscriber traffic between January 2024 and January 2025. These aren&#8217;t adjustments&#8212;they&#8217;re existential events.</p><p>Meanwhile, The Verge&#8217;s publisher notes that their Google traffic decline &#8220;lined up pretty clearly with the rise of AI Overviews.&#8221; The New York Times saw organic search traffic fall from 44% of total traffic three years ago to 36.5% in April 2025. Even sites maintaining top rankings saw click-through rates crater. One lifestyle publisher tracked a query that ranked on page one with stable impressions&#8212;CTR dropped from 5.1% to 0.6% over a year.</p><h2>What Publishers Actually Did About It</h2><p>At AdMonsters&#8217; Sell Side Summit in Austin, publishers weren&#8217;t commiserating about the good old days. They were sharing what&#8217;s working now. The most interesting conversations weren&#8217;t about optimizing for AI&#8212;they were about building businesses that don&#8217;t depend on search traffic as a primary driver.</p><p>People Inc. is diversifying into multiple directions at once. 
They launched D/Cipher, a contextual targeting platform that uses AI to surface insights about how audiences engage with content and ads. They&#8217;re investing in live events. They acquired Feedfeed, a cooking creator network, to strengthen social media presence. They&#8217;re treating the web page business as something to maintain, not grow, while building revenue elsewhere.</p><p>The Arena Group provides another data point. After six consecutive quarters of negative revenue growth, they turned profitable over four quarters by segmenting audiences into engagement buckets. Their approach: 80% of traffic comes from users making their first site visit in a year. For this group, maximize page views and ad impressions through content recommendations. For users returning twice a year, ramp up ad density. For users visiting four or more times monthly, funnel them to subscriptions and newsletters.</p><p>This isn&#8217;t just about volume&#8212;it&#8217;s about knowing what each visitor is worth and optimizing accordingly. Arena Group built Encore, their AI-driven data platform, to automate these decisions. The insight: AI isn&#8217;t replacing human judgment. It&#8217;s amplifying what human teams already know about their audiences and letting them act faster.</p><h2>The Direct Relationship Finally Matters</h2><p>For years, publishers talked about building direct relationships with readers. It sounded smart but felt optional when Google sent steady traffic and Facebook drove viral reach. Now it&#8217;s not optional.</p><p>Dotdash Meredith (now People Inc.) reduced its Google search dependency from 60% of traffic in 2021 to just over a third in 2025. That didn&#8217;t happen by accident. They deliberately shifted strategy to grow audiences that &#8220;know the brand and deliberately come to a website homepage or app rather than those stumbling across it through an information query in search.&#8221;</p><p>Email outperforms every other channel. In the 2025 Marigold Consumer Trends Index, 54% of consumers said email drives purchases more than social or SMS. Hearst UK boosted email conversion rates by up to 100% by automating campaigns across 16 brands and personalizing in real time.</p><p>Some publishers are leaning into branded apps and building communities. Others are launching membership programs. A few are experimenting with interactive features like quizzes and polls&#8212;not for engagement theater, but to gather first-party data that fuels personalization and gives advertisers better targeting.</p><p>The common thread: they&#8217;re investing in owned channels where they control the relationship and the economics.</p><h2>When Your Main Customer Becomes Your Competitor</h2><p>Here&#8217;s the uncomfortable reality publishers navigate: Google trained its AI on publisher content, then built tools that answer questions using that content without sending traffic. Publishers can opt out of AI Overviews, but doing so means opting out of Google Search entirely. It&#8217;s not a choice&#8212;it&#8217;s a hostage situation with better PR.</p><p>Columbia University researcher Klaudia Ja&#378;wi&#324;ska describes it as a Faustian bargain: &#8220;Publishers are kind of in a bind because if you want to opt out of AI Overviews, you opt out of Google Search entirely.&#8221;</p><p>Some publishers are fighting back through licensing deals. News Corp and Axel Springer negotiated agreements with AI companies. The New York Times licensed content to Amazon for AI training. 
The Atlantic and others work with OpenAI. These deals bring revenue, but they don&#8217;t solve the traffic problem.</p><p>Other publishers are suing. The Times filed a federal copyright suit against OpenAI. About a dozen lawsuits target various AI companies. The legal arguments: AI companies used content without compensation and built products that compete directly with the publishers who created that content.</p><p>Meanwhile, Perplexity launched a program to share advertising revenue with publishers when its chatbot surfaces their content. It&#8217;s something, but it&#8217;s not replacing lost traffic-based ad revenue.</p><h2>What The Numbers Actually Tell Us About Adaptation</h2><p>Traffic to the world&#8217;s 500 most-visited publishers dropped 27% year-over-year since February 2024&#8212;about 64 million visits per month, according to Similarweb. AI chatbots delivered only 5.5 million referrals per month in the same period. The math is simple: AI isn&#8217;t replacing what search used to provide.</p><p>Some publishers are accepting lower traffic as the new baseline and optimizing their business models accordingly. Subscription revenue, when sustainable, provides more predictability than traffic-dependent advertising. Direct-sold advertising to known audiences commands higher CPMs than programmatic placements. First-party data becomes more valuable as third-party cookies disappear.</p><p>But this transition isn&#8217;t smooth or universal. Publishers with steady subscriber bases have more options than those dependent on traffic arbitrage. Specialized publications targeting specific audiences can survive better than general-interest sites trying to scale. The winners won&#8217;t be the ones who figure out how to game AI search&#8212;they&#8217;ll be the ones who build businesses that work even if AI search sends zero traffic.</p><h2>The Playbook That&#8217;s Actually Working</h2><p>First, stop pretending search traffic will return to 2023 levels. It won&#8217;t. Plan accordingly.</p><p>Second, audit what traffic actually drives value. Not all traffic is equal. A reader who comes directly to your site, spends 10 minutes, and comes back next week is worth more than a hundred search visitors who bounce after 30 seconds. Stop optimizing for volume. Start optimizing for value.</p><p>Third, invest in capturing first-party data. Every interaction is an opportunity to learn what people care about. Progressive profiling through preference centers, interactive features, and smart registration flows builds understanding over time. This data makes your content more relevant and your ad inventory more valuable.</p><p>Fourth, diversify revenue. Advertising alone won&#8217;t sustain most publishers in this environment. Subscriptions, memberships, events, licensing, affiliate commerce, newsletters, premium content&#8212;mix enough revenue streams that losing any single one doesn&#8217;t crater your business.</p><p>Fifth, accept that AI is here and figure out how to use it. Arena Group&#8217;s Encore platform shows what&#8217;s possible: AI identifying patterns humans miss, automating decisions that used to require manual analysis, personalizing at scale. Publishers who see AI as only a threat will lose to publishers who use it as a tool.</p><h2>What Success Looks Like Now</h2><p>The publishers succeeding in this environment share common traits. They know their audiences deeply&#8212;not just demographics, but actual preferences and behaviors. They own multiple channels to reach those audiences. 
They&#8217;ve diversified revenue beyond traffic-dependent advertising. They invest in technology that helps them personalize at scale. They&#8217;ve accepted that traffic is a lagging indicator, not a primary goal.</p><p>None of this is comfortable. Publishers built businesses around search traffic for two decades. Those businesses need fundamental reconstruction. But the alternative&#8212;waiting for traffic patterns to stabilize or hoping Google changes course&#8212;isn&#8217;t a strategy. It&#8217;s denial.</p><p>The interesting question isn&#8217;t whether publishers will adapt. Some will, some won&#8217;t. The interesting question is what the publishing industry looks like in three years when this transition is further along. Smaller, probably. More concentrated, certainly. But also potentially more sustainable, built on direct relationships rather than algorithmic whims.</p><p>Search traffic isn&#8217;t coming back. The sooner publishers accept that and build accordingly, the better positioned they&#8217;ll be for whatever comes next.</p>]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of Cloud Concentration: What Oracle's All-In Bet Tells Us About Infrastructure Risk]]></title><description><![CDATA[Why borrowing $200 billion to serve one unprofitable customer reveals the fragility of modern cloud infrastructure]]></description><link>https://www.datatechandtools.com/p/the-hidden-cost-of-cloud-concentration</link><guid isPermaLink="false">https://www.datatechandtools.com/p/the-hidden-cost-of-cloud-concentration</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Thu, 13 Nov 2025 16:22:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>When Your Biggest Customer Is Also Your Biggest Risk</h2><p>Oracle&#8217;s recent market turbulence reveals something most companies don&#8217;t want to admit: we&#8217;ve built our digital economy on a surprisingly fragile foundation. The company&#8217;s stock dropped nearly 30% in a month after announcing deals that would generate $300 billion in revenue between 2027 and 2032. Wall Street didn&#8217;t celebrate. Instead, investors worried about something more fundamental than growth projections&#8212;they worried about what happens when one company bets everything on another company that doesn&#8217;t actually make money yet.</p><p>Here&#8217;s what makes this situation different from typical tech investments: Oracle is borrowing aggressively to build data centers for OpenAI, a company that&#8217;s burning through capital trying to figure out how to make artificial intelligence profitable. Oracle&#8217;s debt is expected to balloon from $96 billion to $290 billion by 2028. Their debt-to-equity ratio hit 500%&#8212;compare that to Amazon&#8217;s 50% or Microsoft&#8217;s 30%.</p><p>The credit agencies noticed. S&amp;P Global pointed out that by 2028, a third of Oracle&#8217;s revenues will come from a single customer. Not just any customer&#8212;a venture capital-funded startup navigating the uncertain economics of AI. That&#8217;s not diversification. That&#8217;s dependence.</p><h2>The Illusion of Choice in Cloud Computing</h2><p>Step back and look at the broader landscape. Three companies control more than 60% of the global cloud infrastructure market. 
AWS holds 30%, Microsoft Azure has 20%, and Google Cloud claims 13%. Everyone else is fighting over scraps in single digits.</p><p>This concentration creates a peculiar dynamic. Companies talk about multi-cloud strategies, but the reality is messier. Most organizations end up with a primary provider and maybe one backup that handles overflow or specific workloads. True portability between clouds remains more aspiration than reality, despite what the marketing materials promise.</p><p>AWS processes roughly 7% of American electricity usage for AI and cloud services, and that number is climbing. When Oracle commits to spending over $100 billion on AI infrastructure with long-term data center leases that extend far beyond their contracts with customers, they&#8217;re making a bet that the current arrangement holds. What happens if it doesn&#8217;t?</p><h2>The Math That Should Keep CFOs Up at Night</h2><p>Let&#8217;s talk about what concentration risk actually looks like in dollar terms. The cloud infrastructure market hit $99 billion in quarterly revenue in Q2 2025, growing at 25% year-over-year. That&#8217;s $400 billion annually. This money is flowing to a shrinking number of providers who control the computational backbone of modern business.</p><p>Oracle&#8217;s infrastructure business is forecast to grow revenues by more than 10 times by 2029. They&#8217;re building massive campuses with 100-gigawatt power requirements. These aren&#8217;t decisions you reverse quickly. The lease commitments alone create $100 billion in off-balance sheet obligations.</p><p>Meanwhile, OpenAI has committed to spending $1.4 trillion on AI infrastructure over eight years. They&#8217;ve struck deals with multiple big tech companies, not just Oracle. If market conditions change, if AI monetization doesn&#8217;t materialize as expected, if competitive dynamics shift&#8212;these dominoes fall in unpredictable patterns.</p><h2>What Smart Organizations Are Actually Doing</h2><p>The companies navigating this landscape successfully aren&#8217;t the ones with the most elaborate multi-cloud architectures. They&#8217;re the ones asking different questions. Instead of &#8220;How do we spread workloads across providers?&#8221; they&#8217;re asking &#8220;What happens if our primary provider has a major incident?&#8221; and &#8220;How long can we operate with degraded cloud services?&#8221;</p><p>Some are identifying truly critical workloads and maintaining genuine alternatives&#8212;not just different clouds, but different approaches entirely. Edge computing for certain applications. On-premise systems for specific sensitive operations. Actual redundancy, not theoretical portability.</p><p>Others are negotiating contracts differently, pushing for more transparent SLAs that account for the reality that even the largest providers have outages. They&#8217;re stress-testing their assumptions about uptime and recovery time objectives based on real incidents, not vendor promises.</p><h2>The Question Everyone Avoids</h2><p>Here&#8217;s the uncomfortable part: the hyperscalers have better infrastructure than most companies could ever build themselves. AWS genuinely has expertise that most IT departments lack. The economics of scale work. This isn&#8217;t an argument for going back to building your own data centers.</p><p>But it is an argument for being honest about the trade-offs. 
When Oracle borrows $200 billion to build infrastructure for a customer that might not exist in its current form five years from now, that&#8217;s a systems-level fragility. When three companies control the computational infrastructure for most of the developed economy, that concentration represents risk that extends beyond any single company&#8217;s balance sheet.</p><p>The retail industry learned this lesson with just-in-time supply chains during the pandemic. The financial sector learned it with interconnected risk in 2008. The question isn&#8217;t whether cloud concentration presents systemic risk&#8212;it does. The question is what organizations should do about it before the next major stress test reveals just how interconnected everything has become.</p><h2>Building Better Contingency</h2><p>Realistic contingency planning starts with admitting what you can and can&#8217;t control. You can&#8217;t control when major cloud providers have incidents. You can control how your systems respond when they do.</p><p>Map your critical paths. Not what your architecture diagrams say, but what actually keeps revenue flowing and operations running. Test those paths under degraded conditions. Know how long you can operate with limited cloud access. Have actual alternatives for your most critical functions, even if those alternatives are more expensive or less elegant.</p><p>Document your vendor dependencies at a granular level. Not just &#8220;we use AWS&#8221; but specifically which AWS services support which business functions, with what recovery time requirements. This clarity helps when making build-versus-buy decisions for new capabilities.</p><p>And maybe most importantly: recognize that everyone else is navigating the same constraints. Your competitors face the same concentration risk. Your partners rely on the same infrastructure. The opportunity isn&#8217;t in pretending this risk doesn&#8217;t exist&#8212;it&#8217;s in being one of the organizations that plans for what happens when concentration meets reality.</p><p>The cloud isn&#8217;t going away. Neither is the concentration at the top of the market. What changes is how prepared you are for the inevitable moment when being prepared matters more than being optimized.</p>]]></content:encoded></item><item><title><![CDATA[When The Cloud Isn't Enough]]></title><description><![CDATA[Why in-house AI chip development is becoming a competitive necessity]]></description><link>https://www.datatechandtools.com/p/when-the-cloud-isnt-enough</link><guid isPermaLink="false">https://www.datatechandtools.com/p/when-the-cloud-isnt-enough</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Wed, 12 Nov 2025 11:38:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Magnificent Seven tech stocks have outperformed the market largely because of AI infrastructure investment. But there&#8217;s a less obvious story beneath those earnings numbers: the companies building their own custom chips are pulling away from those relying on off-the-shelf solutions.</p><p>This isn&#8217;t about being early to AI. It&#8217;s about who controls the entire stack from silicon to software. 
And it&#8217;s about to become a defining competitive advantage that&#8217;s very expensive to replicate.</p><h3>The Nvidia dependency problem</h3><p>Every company doing AI at scale is dependent on Nvidia GPUs. This includes OpenAI, Meta, Microsoft, Amazon, Google, and hundreds of smaller companies. Nvidia has 90%+ market share in AI training chips.</p><p>That dependency is fine when supply is abundant and pricing is reasonable. It&#8217;s a problem when GPUs are constrained, when lead times extend to months, and when Nvidia&#8217;s pricing power increases.</p><p>The companies that saw this coming - Google with TPUs, Amazon with Trainium, Meta with their custom ASIC work - have optionality. They can use Nvidia when it makes sense and their own chips when it doesn&#8217;t. Companies that didn&#8217;t invest in custom silicon are stuck waiting in Nvidia&#8217;s order queue.</p><h3>The China-specific calculation</h3><p>Volkswagen&#8217;s development of their own AI chips for the China market through partnerships with Xpeng isn&#8217;t just about technology capabilities. It&#8217;s about supply chain resilience in a world where chip export controls are tightening.</p><p>US restrictions on advanced chip exports to China create uncertainty for any company relying on American semiconductors for Chinese operations. Developing or licensing China-based chip solutions provides insulation from export control risk.</p><p>This is a preview of what might happen more broadly if geopolitical tensions increase. Companies will diversify chip sources not just for technical reasons, but for political risk management.</p><h3>The economic model shift</h3><p>Building custom chips requires massive upfront investment. Google spent billions developing TPU infrastructure. The business case only makes sense at enormous scale.</p><p>But the inflection point where custom chips become economically viable keeps getting lower. Five years ago, maybe only Google and Amazon had sufficient scale. Today, Microsoft, Meta, Apple, and several other companies clear the threshold. In five more years, dozens more might.</p><p>As AI becomes more central to core products, companies reach a point where paying Nvidia&#8217;s markup doesn&#8217;t make sense compared to investing in custom silicon. The calculation isn&#8217;t just about cost per chip - it&#8217;s about cost per AI operation, and optimizing for your specific workload.</p><h3>Why general-purpose GPUs are expensive</h3><p>Nvidia&#8217;s chips are designed to be general-purpose. They need to handle gaming, cryptocurrency mining, AI training, AI inference, scientific computing, and other workloads. That versatility comes at a cost.</p><p>If you know exactly what workloads you&#8217;re running - for instance, you&#8217;re only doing large language model inference - you can design chips optimized specifically for that. You remove capabilities you don&#8217;t need. You add capabilities that matter for your use case. The result is often better price-performance.</p><p>This is why Google&#8217;s TPUs excel at certain AI workloads despite having less theoretical peak performance than Nvidia GPUs. They&#8217;re optimized for Google&#8217;s actual usage patterns rather than general-purpose computing.</p><h3>The talent barrier</h3><p>Building custom chips isn&#8217;t just expensive in money. It&#8217;s expensive in talent. 
You need chip designers, verification engineers, software engineers to write compilers and frameworks, and systems engineers to integrate everything.</p><p>Companies that started early have built these teams. Companies starting now face a brutal hiring market where experienced chip designers are extremely expensive and scarce. TSMC&#8217;s lead times for advanced node production are measured in years.</p><p>This creates a &#8220;rich get richer&#8221; dynamic. Companies with existing chip programs can iterate and improve. Companies without programs face years and billions of dollars just to get to v1.</p><h3>The software stack problem</h3><p>Having custom chips is only valuable if you can actually use them. That requires software frameworks, compilers, and tooling. Nvidia&#8217;s CUDA ecosystem took decades to build and is one of their biggest competitive advantages.</p><p>Companies building custom chips need to either build equivalent software stacks or ensure compatibility with existing frameworks. That&#8217;s non-trivial. Many custom AI chips fail not because the hardware is bad, but because the software ecosystem isn&#8217;t mature enough.</p><p>This is why some companies are open-sourcing their chip designs or software stacks - they need ecosystem support to be viable. You can have the best chip in the world, but if developers can&#8217;t easily use it, adoption won&#8217;t happen.</p><h3>The inference vs. training split</h3><p>An underappreciated nuance: AI training and AI inference have very different requirements. Training requires massive compute, memory bandwidth, and runs in data centers. Inference needs to be fast, energy-efficient, and often runs closer to end users.</p><p>Many companies are finding that custom chips make more sense for inference than training. Training can use Nvidia GPUs. Inference can use specialized chips optimized for low latency and efficiency.</p><p>This is the approach several cloud providers are taking: offer Nvidia for training, offer custom chips for inference. It reduces total dependency on Nvidia while not requiring custom solutions for every workload.</p><h3>The data center efficiency angle</h3><p>Beyond cost per chip, there&#8217;s cost per watt. Running massive AI infrastructure requires enormous electricity. Chips that deliver better performance per watt directly translate to lower operating costs at scale.</p><p>Google&#8217;s TPUs and Amazon&#8217;s Trainium chips tout power efficiency as a key advantage. For companies running data centers at the scale of millions of square feet, power efficiency differences compound into hundreds of millions in annual operating cost differences.</p><p>This matters more as AI workloads grow. A data center full of AI chips might consume 10-50 megawatts of power. Improving efficiency by 20% saves millions annually in electricity alone.</p><h3>The Amazon Web Services problem</h3><p>Amazon&#8217;s interesting position: they need custom chips to reduce costs for AWS infrastructure. But Nvidia is also a key partner whose GPUs are sold through AWS.</p><p>Amazon can&#8217;t fully abandon Nvidia because customers want access to latest Nvidia chips. But Amazon also can&#8217;t rely entirely on Nvidia because margins on cloud services get compressed if chip costs stay high.</p><p>The solution is a portfolio approach: offer Nvidia for customers who want it, promote Graviton and Trainium for customers optimizing for cost. 
Give customers choice while steering toward higher-margin custom solutions when possible.</p><h3>The Microsoft-OpenAI dynamic</h3><p>Microsoft has invested heavily in OpenAI but also develops custom AI chips. These strategies seem contradictory. Why build chips when your primary AI partner (OpenAI) can train models that you then deploy?</p><p>The answer is probably insurance. Microsoft doesn&#8217;t control OpenAI. If that relationship changes, or if OpenAI&#8217;s costs become unreasonable, Microsoft needs alternatives. Custom chips provide optionality.</p><p>It&#8217;s also possible Microsoft envisions running multiple LLMs, not just OpenAI&#8217;s. Supporting other models or developing their own requires infrastructure that isn&#8217;t dependent on a single partner.</p><h3>What this means for smaller companies</h3><p>The chip development arms race creates a problem for smaller AI companies. They can&#8217;t afford to build custom chips. They&#8217;re stuck paying market prices for Nvidia GPUs or cloud compute.</p><p>This becomes a competitive disadvantage if larger companies with custom chips can deliver AI capabilities at dramatically lower cost. Price-per-inference might differ by 5-10x between companies with optimized custom chips versus those using cloud-based general-purpose GPUs.</p><p>Smaller companies either need to be so much better algorithmically that they overcome the hardware disadvantage, or they need to find niches where custom chips don&#8217;t matter. Neither is easy.</p><h3>The edge computing shift</h3><p>An emerging factor: edge AI. Running AI models on devices rather than in clouds. This requires completely different chip designs - low power, small form factor, but still capable of running inference workloads.</p><p>Apple, Qualcomm, and others are developing chips optimized for edge AI. This might be where the next wave of AI infrastructure competition happens. Not bigger data centers, but smarter devices.</p><p>Companies that figure out how to deliver useful AI capabilities on device, without needing cloud connectivity, unlock new use cases and business models. That requires purpose-built chips, not repurposed data center GPUs.</p><h3>The geopolitical wildcards</h3><p>Export controls on advanced chips are already limiting what companies can deploy in certain regions. If restrictions tighten further, companies operating globally need chip sources that aren&#8217;t subject to US export restrictions.</p><p>This creates opportunities for non-US chip designers. If Chinese companies, European companies, or others can provide alternatives to Nvidia that aren&#8217;t subject to US restrictions, there&#8217;s a ready market.</p><p>The chip industry has historically been global. Geopolitics is forcing regionalization. Companies serving global markets need chip strategies that work across different regulatory regimes.</p><h3>The long-term consolidation</h3><p>We&#8217;re probably headed toward a world where there are two tiers: companies that build custom AI chips and companies that use off-the-shelf solutions. The gap between tiers will widen over time.</p><p>Tier one companies will have lower costs, better performance for their specific workloads, and more control over their technology stack. Tier two companies will have higher costs, less optimization, and dependency on chip vendors.</p><p>This doesn&#8217;t mean tier two companies can&#8217;t succeed. 
But they&#8217;ll need other advantages - better algorithms, better data, better products - to offset the infrastructure disadvantage.</p><h3>Why this matters for marketing</h3><p>This seems like pure technology discussion, but it has marketing implications. Companies with better AI infrastructure can deliver better AI-powered products. Better recommendations, better search, better personalization, better customer service.</p><p>That product advantage translates to marketing advantage. If your product experience is noticeably better because of superior AI capabilities, marketing becomes easier. If you&#8217;re trying to market a product with inferior AI because your infrastructure costs more and performs worse, you&#8217;re fighting uphill.</p><p>Infrastructure isn&#8217;t just about cost efficiency. It&#8217;s about enabling product capabilities that competitors can&#8217;t match. That&#8217;s where AI chip development connects to business outcomes.</p>]]></content:encoded></item><item><title><![CDATA[When Technical Expertise Becomes Infrastructure]]></title><description><![CDATA[Why knowing what to optimize beats knowing how to execute in the age of automated advertising]]></description><link>https://www.datatechandtools.com/p/when-technical-expertise-becomes</link><guid isPermaLink="false">https://www.datatechandtools.com/p/when-technical-expertise-becomes</guid><dc:creator><![CDATA[Data, Tech & Tools]]></dc:creator><pubDate>Wed, 12 Nov 2025 02:06:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i1yp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d230c2d-9c97-4ab3-8907-768a496c8423_1212x1212.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The 2025 NewFronts revealed something most people missed while fixated on AI demos: the industry isn&#8217;t automating advertising&#8212;it&#8217;s fundamentally restructuring where value lives in the marketing stack.</p><p>Google&#8217;s demo wasn&#8217;t impressive because AI planned a media buy in seconds. It was significant because it collapsed an entire professional discipline into a natural language prompt. That&#8217;s not efficiency. That&#8217;s obsolescence with a user-friendly wrapper.</p><h2>The Expertise Trap</h2><p>Here&#8217;s what&#8217;s actually happening: platforms are absorbing technical knowledge as infrastructure.</p><p>For two decades, programmatic advertising created a specialized skill set. Knowing DSP mechanics. Understanding bid landscapes. Managing supply path optimization. Structuring campaigns for algorithmic learning. These weren&#8217;t trivial&#8212;they required real expertise and generated real value.</p><p>That value is evaporating. Not because the work doesn&#8217;t matter, but because platforms now handle it automatically&#8212;Google&#8217;s Display &amp; Video 360 lets buyers simply describe their goals like &#8220;Find deals from premium CTV publishers reaching audiences interested in live sports&#8221; and the system executes the entire workflow. <a href="https://www.viantinc.com/insights/blog/ai-in-programmatic-advertising/">Viant</a></p><p>The people who built careers on execution mechanics are discovering their expertise has become the cost of doing business. 
It&#8217;s embedded in the platform now, available to anyone with a login.</p><p>But there&#8217;s a more uncomfortable truth: most organizations haven&#8217;t noticed what&#8217;s becoming valuable in its place.</p><h2>Optimization Across Dimensions</h2><p>The real shift isn&#8217;t about doing one thing automatically. It&#8217;s about simultaneous optimization across creative, placement, audience, and bidding&#8212;dimensions that previously required sequential human decision-making. <a href="https://alphageek.digital/content-hub/geek-speak-ep-4-the-algorithm-decoded-how-to-master-metas-advantage-and-googles-performance-max">Alphageek</a><a href="https://topprospectsolutions.com/2025/05/07/is-google-performance-max-worth-it-in-2025/">Topprospectsolutions</a></p><p>Think about what this actually means in practice:</p><p>A traditional media team runs a test. They wait for statistical significance. They analyze results. They make adjustments. They implement changes. The cycle takes days minimum, often weeks.</p><p>The machine tests 10,000 combinations in the first hour. It identifies patterns in the first afternoon. It reallocates budget continuously. It incorporates new signals from every impression. And it never stops learning.</p><p>This isn&#8217;t a better process&#8212;it&#8217;s a different category of work. Like comparing hand calculations to spreadsheets. The spreadsheet didn&#8217;t make math faster; it made entirely different types of analysis possible.</p><p>Here&#8217;s what people miss: you can&#8217;t compete with this by being better at the old process. You can&#8217;t &#8220;out-optimize&#8221; a system that tests thousands of variations per second. The game changed.</p><h2>Context as Signal</h2><p>Contextual targeting offers the clearest example of where this is headed.</p><p>Traditional contextual advertising matched keywords. &#8220;Running shoes&#8221; ads on running content. Simple, transparent, limited.</p><p>Current systems analyze semantic meaning, emotional tone, visual composition, cultural relevance, and temporal significance simultaneously. <a href="https://www.prosemedia.com/blog/contextual-intelligence-how-ai-is-revolutionizing-ad-relevance-beyond-basic-targeting">Prose</a><a href="https://blog.seedtag.com/contextual-ai-redefining-audience-targeting">Seedtag</a> They understand the difference between a car chase and a family road trip&#8212;not just that both involve cars.</p><p>Disney&#8217;s &#8220;Magic Words&#8221; technology identifies emotional contexts within content&#8212;allowing brands to target specific moods like inspiration or moments centered around food culture, rather than just topical categories. <a href="https://www.streamtvinsider.com/advertising/disney-debuts-streaming-shoppable-ad-beta-program-contextual-advertising-tools">StreamTV Insider</a><a href="https://www.tvrev.com/news/disney-brings-more-ad-magic-to-ces">TVREV</a></p><p>The implications extend beyond better ad placement. 
When advertisers using advanced contextual targeting see 335% higher engagement rates than traditional audience targeting <a href="https://www.prosemedia.com/blog/contextual-intelligence-how-ai-is-revolutionizing-ad-relevance-beyond-basic-targeting">Prose</a>, it suggests we&#8217;ve been organizing advertising around the wrong fundamental unit.</p><p>We&#8217;ve been targeting <em>people</em> when we should have been targeting <em>moments</em>.</p><h2>The Infrastructure Nobody Discusses</h2><p>All this automation depends on infrastructure that gets zero attention at industry events: identity resolution.</p><p>Systems like LiveRamp&#8217;s integration with Snowflake enable brands to translate fragmented identifiers&#8212;cookies, mobile IDs, CTV identifiers, first-party data&#8212;into unified profiles without moving data or compromising privacy. <a href="https://docs.liveramp.com/identity/en/perform-identity-resolution-in-snowflake.html">Liveramp</a><a href="https://liveramp.com/our-platform/cloud/snowflake/cross-screen-media-powers-next-generation-ctv-activation/">LiveRamp</a></p><p>Without this plumbing, nothing else works. You can&#8217;t optimize toward business outcomes without connecting ad exposure to actual transactions. You can&#8217;t measure cross-platform reach when every channel reports different numbers.</p><p>Recent CTV analysis revealed that while 62% of audiences were reached via CTV, no single network touched more than 40% of them. <a href="https://liveramp.com/our-platform/cloud/snowflake/cross-screen-media-powers-next-generation-ctv-activation/">LiveRamp</a> Without identity resolution, you&#8217;d think you reached 200% of the audience and vastly overspent.</p><p>This is the unglamorous foundation that determines whether sophisticated automation delivers results or just sophisticated reporting on wasted spend.</p><h2>What Remains Valuable</h2><p>As technical execution becomes infrastructure, a different type of judgment becomes critical:</p><p><strong>Defining what success means</strong> beyond surface metrics. Not conversions&#8212;<em>which</em> conversions matter and why. Not revenue&#8212;<em>incremental</em> revenue at <em>acceptable</em> customer acquisition costs with <em>sustainable</em> lifetime value.</p><p><strong>Providing context algorithms can&#8217;t access</strong>. Upcoming product launches. Competitive positioning. Brand guidelines that aren&#8217;t reducible to keyword blocklists. Strategic priorities that shift faster than learning phases.</p><p><strong>Interpreting results through business implications</strong>. When CPA increases 20%, is that failure or success? Depends whether you shifted to higher-value customers. The machine shows you the numbers. You need to know what they mean for the business.</p><p><strong>Setting constraints that reflect organizational reality</strong>. The algorithm will optimize. But optimize toward what? At what pace? With what risk tolerance? These aren&#8217;t technical questions.</p><p>Compare this to what&#8217;s being automated:</p><p>Understanding SSP dynamics. Optimizing bid modifiers. Testing creative variations. Choosing between channels. Adjusting frequency caps. Managing placement lists. Setting audience overlaps.</p><p>All technical. All valuable. 
All becoming table stakes that platforms handle.</p><p>As Zuckerberg described the endpoint: &#8220;You don&#8217;t need any creative, you don&#8217;t need any targeting demographic, you don&#8217;t need any measurement, except to be able to read the results that we spit out.&#8221; <a href="https://www.viantinc.com/insights/blog/ai-in-programmatic-advertising/">Viant</a></p><p>That&#8217;s deliberately provocative, but directionally accurate. The machine handles execution. Humans need to handle meaning.</p><h2>The Real Challenge for Organizations</h2><p>If you&#8217;re building for this environment, three areas demand attention:</p><p><strong>Outcome literacy across teams</strong></p><p>Marketing needs to understand business economics at the same level as finance. Finance needs to understand marketing mechanics well enough to set intelligent constraints. When the algorithm asks &#8220;what should I optimize for?&#8221; your organization needs a real answer, not a proxy metric.</p><p>With 80% of programmatic marketers already using AI to adapt spending and targeting strategies, and the sector growing at 24.5% annually <a href="https://bidscube.com/blog/2025/03/17/ai-in-programmatic-advertising-the-future-of-automated-ad-buying/">BidsCube</a>, this isn&#8217;t future preparation&#8212;it&#8217;s current competitiveness.</p><p><strong>Data infrastructure as competitive advantage</strong></p><p>Clean, connected, privacy-compliant data isn&#8217;t a technical requirement. It&#8217;s the foundation that determines whether AI works or wastes money at scale. Identity resolution, measurement frameworks, attribution models&#8212;these enable everything else or create expensive blind spots.</p><p><strong>Strategic clarity on the automation-control spectrum</strong></p><p>Yahoo&#8217;s pitch emphasized advertiser control: &#8220;we want to give you the power to buy however it makes sense for your brand because you know it best.&#8221; <a href="https://www.viantinc.com/insights/blog/ai-in-programmatic-advertising/">Viant</a> Google, Meta, and Snap are betting the opposite direction&#8212;full automation toward outcome goals.</p><p>Neither position is wrong. Regulated industries may need control for compliance. Performance-focused brands may want maximum automation. Most organizations will need both: automated execution within strategically defined boundaries.</p><h2>Looking at Second-Order Implications</h2><p>The obvious effects&#8212;some jobs changing, some workflows automating&#8212;miss the larger structural shifts:</p><p><strong>Agencies</strong> survive by moving upstream to strategy and governance, not by defending execution expertise that&#8217;s becoming platform features. The question isn&#8217;t &#8220;can we manage more campaigns?&#8221; It&#8217;s &#8220;can we define what the algorithms should optimize toward and audit whether they&#8217;re doing it?&#8221;</p><p><strong>Creative work</strong> becomes modular. Not &#8220;make an ad&#8221; but &#8220;make components the system can assemble into thousands of variations.&#8221; That&#8217;s not limiting&#8212;it&#8217;s a different creative challenge. Like going from painting to directing: higher-level decisions, automated execution.</p><p><strong>Publishers</strong> lose commodity inventory. If buyside automation makes audiences fungible, publishers need differentiation through content quality, contextual environment, first-party data, and brand safety. 
Eyeballs become less valuable than context.</p><p><strong>Brand teams</strong> can&#8217;t outsource strategy to agencies while staying removed from execution details. You need fluency in measurement frameworks, data strategies, and outcome definitions. Otherwise the machine efficiently optimizes toward the wrong thing.</p><h2>What Matters Going Forward</h2><p>Strip away the presentation theater and focus on structural changes:</p><p>Upfront buying is merging into programmatic DSPs, collapsing the distinction between reserved and auction-based inventory. Identity resolution is moving into cloud data warehouses like Snowflake and Databricks&#8212;becoming infrastructure instead of managed service. Contextual targeting has evolved from keyword matching to multimodal scene analysis. Smart TV manufacturers are becoming ad platforms, not just distribution. Multi-dimensional optimization is baseline expectation, not advanced technique.</p><p>The era of &#8220;knowing how to execute media strategy&#8221; is closing. The era of &#8220;knowing what outcomes to pursue and why&#8221; is opening.</p><p>If your organization is still optimizing for clicks, impressions, or generic conversions, you&#8217;re competing on metrics the machines already beat you on. The organizations winning are operating one level higher: incremental revenue, new customer acquisition with specific LTV profiles, contribution margin after all costs, strategic positioning relative to competitors.</p><p>The machines handle tactics now. Strategy&#8212;real strategy, not &#8220;campaign strategy&#8221; but business strategy operationalized through marketing&#8212;that&#8217;s what can&#8217;t be automated. Because it requires understanding things the algorithm doesn&#8217;t have access to: your competitive position, your organizational capabilities, your risk tolerance, your long-term vision.</p><p>&#8220;I know how to use the tools&#8221; is losing value fast. But &#8220;I know what the business needs and how to point automated systems at it&#8221;&#8212;that&#8217;s becoming the scarce skill.</p><p>The interface became the strategy. The question is whether you&#8217;re building capability at the new layer that matters, or defending expertise that&#8217;s already migrated into platform features.</p>]]></content:encoded></item></channel></rss>