Unprompted: AI Assumptions That Are Costing Teams Time, Money and Legal Protection

In this edition of Unprompted: The AI Marketing Brief, we unpack the research, legal decisions and workforce frameworks quietly dismantling the premises most marketing AI strategies are built on.

Key Highlights

  • AI agents are currently tested mainly on software engineering tasks, underrepresenting judgment-intensive domains like management and legal work, and skills like interpersonal interaction, that define complex marketing functions.
  • Deploying AI in one area often creates ripple effects, requiring additional roles such as prompt managers, quality reviewers and infrastructure support, which can offset initial efficiency gains.
  • Effective prompt strategies include providing past work examples, requesting multiple options, and avoiding role-playing prompts for factual tasks to improve output quality.
  • Legal rulings indicate that AI-generated visual assets without human creative input cannot be copyrighted, urging teams to document human involvement in content creation.
  • A structured scenario mapping framework helps marketing teams anticipate AI's impact across workflows, enabling more strategic and role-specific automation planning.

Welcome to Unprompted: The AI Marketing Brief, where I cut through the noise in AI news and research to show marketers what’s happening — and why it matters for your work, your team and your career. 

Let me ask you something. How much of what you think you know about AI is actually true? 

This month's research didn't add to the AI conversation so much as it quietly dismantled several assumptions that most marketing teams are actively building on. And not minor assumptions — foundational ones. The kind that are already baked into your training programs, your workforce plans, your content workflows and your legal agreements. 

Here's what the research is actually saying: AI agents aren't as ready for complex marketing work as the vendors are telling you.

A new arXiv study found that the benchmarks used to validate agent performance are overwhelmingly built around software engineering tasks, not the judgment-intensive, interpersonal and strategically complex work that defines most marketing roles. 

The workforce planning framework you're probably using — if you have one at all — is almost certainly too simple, because Gartner's new four-scenario model shows that deploying AI in one area almost always creates unexpected ripple effects in three others.

The prompting tricks your team learned in that training session last year? The BBC asked the researchers, and most of those tricks don't hold up. And those AI-generated visual assets you've been treating as proprietary brand IP? The Supreme Court just declined to hear the case that could have established copyright protection for them, which means they probably aren't yours in the way you think they are. 

This edition of Unprompted is a reality check — not a retreat. The teams pulling ahead right now aren't the ones moving fastest. They're the ones who stopped long enough to question what they thought they knew, rebuilt on solid ground and then accelerated. 

How Well Does Agent Development Reflect Real-World Work? 

Authors: Zora Z. Wang, Sanidhya Vijayvargiya, Aspen Chen, Hanmo Zhang, Venu Arvind Arangarajan, Jett Chen, Valerie Chen, Diyi Yang, Daniel Fried, Graham Neubig  

Website: arXiv 

Just the Facts: Researchers analyzed 43 AI agent benchmarks and 72,342 tasks, mapping them against all 1,016 real-world occupations in the U.S. labor market using the O*NET occupational database, to determine how well current AI agent development actually reflects the distribution of human work.

The study found a substantial mismatch: Agent benchmarks are heavily concentrated in computer and mathematical domains — which represent only 7.6% of U.S. employment — while economically significant and highly digitized fields like management and legal work remain substantially underrepresented, and widely prevalent workplace skills such as interpersonal interaction are largely absent from agent testing frameworks.

The paper proposes a unified task complexity measure to assess agent autonomy levels across different work scenarios, and offers three benchmark design principles — domain and skill coverage, task realism and complexity, and granular evaluation — to guide future agent development toward more socially grounded and practically relevant progress.  

Why It Matters to Marketers: 

  • The finding that AI agent benchmarks overwhelmingly test software engineering and technical tasks — while largely ignoring management, communication and interpersonal skills — has direct implications for B2B marketing teams evaluating AI agents for their own workflows. Content strategy, editorial judgment, audience development and client communication are precisely the kinds of complex, interpersonal and judgment-intensive tasks the research identifies as undertested and underserved by current agent capabilities, meaning claims of agent readiness for marketing work should be treated with healthy skepticism.  
  • The paper's finding that agent development targets skills accounting for less than 5% of total U.S. employment suggests the "AI will automate knowledge work broadly" narrative is significantly ahead of where development actually is. For B2B marketing leaders being pressured to deploy agentic AI at scale, this research provides useful cover to take a measured, role-specific approach to automation — prioritizing the technically well-developed use cases (research, data processing, code-adjacent tasks) while preserving human judgment in the areas agents demonstrably cannot yet handle.
  • The research specifically calls out management and legal work as domains that are both economically significant and technically difficult for agents — characterized by ambiguous objectives and long-horizon dependencies. B2B marketing functions like brand strategy, editorial standards governance, campaign prioritization and vendor negotiations fall squarely into this category. Teams should be wary of vendor claims about agentic AI readiness in these areas, as the underlying benchmarks used to validate those claims almost certainly do not test for them.
  • The paper proposes a practical framework for assessing agent autonomy levels across different task types, which B2B marketing operations and content leaders can adapt as a due diligence lens when evaluating AI agent tools. Before deploying any agentic system on a marketing workflow, teams should ask vendors specifically which benchmarks were used to validate performance — and whether those benchmarks include tasks representative of the actual work involved, not just software engineering proxies. 

Use This Framework to Anticipate the Impact of AI on Jobs  

Author: Helen Poitevin  

Website: Gartner 

Just the Facts: Gartner introduces a four-scenario framework for how AI reshapes human roles across industries, organized around two key drivers: how much autonomy an organization gives AI, and how much effort goes into transforming work from what it currently is. Each task, process or role falls into a different scenario depending on where it lands relative to those two dimensions.

The four scenarios range from Scenario 1 (fewer workers filling in where AI cannot) through Scenario 2 (many workers using AI to do more), Scenario 3 (innovative workers collaborating with AI to push knowledge frontiers), to Scenario 4 (few to no workers running a fully AI-first enterprise or function). The article warns that even when an organization plans for a single scenario, "ripple effects" — secondary and often unexpected consequences of AI deployment — frequently trigger the need to support all four simultaneously, illustrating this through a customer service case study where a Scenario 1 headcount reduction creates parallel demand for Scenario 2 bot managers, Scenario 3 experience designers, and Scenario 4 AI agents with sales targets. 

Why It Matters to Marketers: 

  • The four-scenario framework maps directly onto where most B2B marketing and editorial teams sit right now — somewhere between Scenario 1 (AI handles routine content tasks, humans fill gaps) and Scenario 2 (humans use AI to produce more output without adding headcount). The ripple effect warning is practically important: Teams that add AI to content workflows to save time often find they've simultaneously created new coordination roles — prompt managers, AI output reviewers, editorial quality leads — that offset the savings in ways leadership didn't budget for. Naming this pattern gives marketing ops leaders a framework to anticipate it rather than react to it.  
  • The Scenario 3 description — "many innovative workers collaborate with AI to surpass the frontiers of knowledge" through deep cross-disciplinary and combinatorial innovation — is the scenario B2B publishing companies should be actively designing toward, not just stumbling into. This scenario looks like editors with deep domain expertise using AI to synthesize across verticals, identify cross-industry patterns and surface insights no single human could assemble alone — a fundamentally more valuable and defensible form of journalism than AI-assisted volume production.
  • The article's core warning — that planning for one scenario often forces you to fund all four simultaneously — has direct budget and headcount implications for marketing and content leaders who have sold efficiency gains upward without accounting for ripple effects. If your organization reduced editorial headcount expecting Scenario 1 savings, it may now find itself needing Scenario 2 workers to manage AI tools, Scenario 3 strategists to maintain brand and audience alignment and Scenario 4 infrastructure to run autonomous distribution — all at once, and none of which was in the original business case.
  • Use the two-axis framework — AI autonomy versus work transformation — as a practical mapping exercise for your own marketing function. For each major workflow (newsletter production, SEO content, social distribution, demand gen, events promotion), place it in one of the four scenarios based on where it sits today, where leadership wants it in 12 months, and what ripple effects that movement is likely to create in adjacent roles. This kind of structured scenario mapping is a concrete deliverable that editorial and marketing ops leaders can bring to a VP-level conversation about AI strategy without requiring a formal business case. 
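The mapping exercise above can be sketched as a simple script. This is an illustrative aid, not Gartner's methodology: the workflow names, the 0-to-1 scoring, and the assignment of quadrants to scenarios are all assumptions made here for demonstration, and teams should adapt the axes and thresholds to their own scoring conventions.

```python
# Illustrative sketch of the two-axis scenario mapping exercise.
# Scores and quadrant-to-scenario assignments are assumptions, not Gartner's.

def classify_scenario(ai_autonomy: float, work_transformation: float) -> int:
    """Place a workflow in one of the four scenarios.

    Both inputs are scored 0.0-1.0 by the team doing the exercise:
      low autonomy,  low transformation  -> Scenario 1 (humans fill AI's gaps)
      low autonomy,  high transformation -> Scenario 2 (humans use AI to do more)
      high autonomy, high transformation -> Scenario 3 (humans + AI innovate)
      high autonomy, low transformation  -> Scenario 4 (fully AI-first function)
    """
    if ai_autonomy < 0.5:
        return 1 if work_transformation < 0.5 else 2
    return 4 if work_transformation < 0.5 else 3

# Hypothetical current-state scores for a marketing function
workflows = {
    "newsletter production": (0.3, 0.4),  # AI drafts, humans edit
    "SEO content": (0.4, 0.7),
    "social distribution": (0.8, 0.3),
    "demand gen": (0.6, 0.8),
}

for name, (autonomy, transformation) in workflows.items():
    print(f"{name}: Scenario {classify_scenario(autonomy, transformation)}")
```

Running the same classification twice — once with today's scores, once with the 12-month target scores — makes the "ripple effect" concrete: any workflow that changes scenario implies new adjacent roles to budget for.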

Do You Have to Be Polite to AI?   

Author: Thomas Germain  

Website: BBC  


Just the Facts: BBC journalist Thomas Germain examines the research behind popular prompting strategies — including politeness, flattery, insults and role-playing — and finds that most accepted wisdom about how to talk to AI chatbots is either unsupported by consistent evidence or has been rendered obsolete by how dramatically AI models have improved, with experts noting that newer mainstream models are better at identifying the most important parts of a prompt and are unlikely to be meaningfully swayed by minor changes in tone or word choice.

The article presents practical, expert-backed alternatives: asking for multiple options rather than one answer, providing examples of past work rather than lists of instructions, asking the AI to conduct an interview and gather information one question at a time, avoiding leading questions that bias the response, and using role-playing selectively — only for open-ended creative or exploratory tasks, not for factual questions where it can increase overconfidence and hallucination.

On the politeness question specifically, researchers note that while being courteous does not reliably improve AI accuracy, it may make users more comfortable engaging with the tool, and philosophers like Kant would argue that maintaining civil habits — even with non-sentient systems — has value for one's own character regardless of whether the AI benefits. 

Why It Matters to Marketers: 

  • The two most actionable tips in this article — providing examples of past work rather than instruction lists, and asking for multiple options rather than a single output — are directly applicable to every B2B content workflow that uses AI for drafting. Editorial teams that share past newsletters, articles, or social posts as style references before prompting will consistently get more on-brand output than teams that describe their style in abstract terms, and asking for three variations of a headline or intro forces the human editor back into the creative judgment seat rather than passively accepting the first result.  
  • The article's framing — "stop treating AI like a person and start treating it like a tool" — reflects a maturing phase of AI literacy that has significant implications for how organizations structure AI training programs. Early-stage AI adoption tends to generate a lot of folklore and mythology around prompting tricks; the research cited here suggests that phase is ending as models improve, and the skills that will matter going forward are structural (how to express a task clearly, how to provide useful examples, how to build in iteration) rather than rhetorical (which magic words to use). Training programs that are still teaching prompt "hacks" should pivot toward this more durable framework.
  • The article's warning about role-playing prompts — that telling an AI it is an expert can increase hallucination by encouraging the model to over-rely on its internal knowledge rather than staying appropriately uncertain — is particularly important for B2B editorial teams using AI for research, fact-checking or industry analysis. Prompts like "act as a healthcare industry analyst" or "respond as an expert in logistics" are common in content workflows but carry real accuracy risk on factual questions. The guidance here is clear: Role-playing is appropriate for brainstorming and creative tasks, not for questions with a right answer. 
  • The "ask for an interview" technique described by Vanderbilt professor Jules White is one of the most underused prompting strategies in content and marketing workflows and is worth testing immediately. Rather than front-loading a complex prompt for a campaign brief, content strategy or job description, instruct the AI to ask clarifying questions one at a time and adapt to your answers. This produces significantly more tailored output and forces the kind of structured thinking that improves the final result regardless of what the AI ultimately generates. 

U.S. Supreme Court Declines to Hear Dispute Over Copyrights for AI-Generated Material 

Author: Blake Brittain  

Website: Reuters  

Just the Facts: The U.S. Supreme Court declined on March 2, 2026 to take up the question of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away an appeal from Missouri computer scientist Stephen Thaler, who was denied a copyright for a piece of visual art his AI system DABUS created autonomously — a 2018 application for a work titled "A Recent Entrance to Paradise" depicting train tracks entering a portal surrounded by plant imagery.

The U.S. Copyright Office rejected the application in 2022 on the grounds that creative works must have human authors to qualify for copyright protection; a federal judge upheld that decision in 2023, writing that human authorship is a "bedrock requirement of copyright," and the U.S. Court of Appeals for the D.C. Circuit affirmed the ruling in 2025 — with the Trump administration also urging the Supreme Court not to hear the appeal.

The Court's refusal is not a ruling on the merits, but effectively cements the current legal landscape for now — a separate but related issue remains unresolved, as the Copyright Office has also rejected copyright bids from human artists seeking protection for images they created with AI assistance through Midjourney, arguing they were entitled to copyrights precisely because of their active creative involvement. 

Why It Matters to Marketers: 

  • The clearest practical implication for B2B marketing and editorial teams is that purely AI-generated visual assets — images, graphics and illustrations produced without meaningful human creative input — currently cannot be copyrighted under U.S. law. Teams that have been generating and treating such assets as proprietary brand IP should reassess: those files offer no copyright-based protection, which affects how they're licensed, syndicated, shared with vendors, or used in client-facing materials where IP ownership is a contractual consideration.  
  • The unresolved thread in this ruling — that the Copyright Office has separately rejected bids from human artists who used Midjourney with active creative involvement — means the legal line between "AI-assisted and protectable" and "AI-generated and unprotectable" is still being drawn in real time. For B2B publishers and content teams, this makes documenting human creative decisions in AI-assisted workflows a strategic priority right now, not a future concern. The organizations that can demonstrate which elements were human-selected, human-edited or human-arranged will be best positioned to assert copyright on the components that qualify, regardless of how future rulings shift the boundary.  
  • Thaler's lawyers warned in their Supreme Court filing that even if the Court later overturns the Copyright Office's position in another case, "it will be too late" — the office will have irreversibly shaped AI development in the creative industry during critically important years. B2B marketing and media leaders should not assume the current rules are the permanent rules and should avoid building content strategies, licensing structures or vendor agreements that depend entirely on the current framework holding. The patent parallel is also instructive: Thaler's AI patent applications were rejected on the same grounds, suggesting the human-authorship requirement is being applied consistently across IP domains, not just copyright.
  • The most actionable response to this ruling is establishing a lightweight creative provenance practice for your team's AI-assisted content — particularly visual assets. For each piece, capture which creative decisions involved human judgment: the prompt design, the selection from multiple AI outputs, the cropping and editing, and the sequencing in a layout. This documentation doesn't need to be elaborate, but it should exist. It protects the organization in vendor and licensing disputes, supports any future copyright claims on the human-authored components, and — given the growing scrutiny around AI content in search and editorial credibility — demonstrates that human editorial judgment is genuinely embedded in your production workflow. 
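A provenance record like the one described above can be as simple as a structured log entry per asset. The sketch below is a minimal, hypothetical schema: the field names and example values are assumptions for illustration, and teams should map them onto whatever asset-management or DAM system they already use.

```python
# Minimal sketch of a creative-provenance record for an AI-assisted asset.
# Field names and values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class ProvenanceRecord:
    asset_id: str
    created: date
    prompt_author: str                 # human who designed the prompt
    outputs_generated: int             # how many AI variants were produced
    selected_by: str                   # human who chose the final variant
    human_edits: list = field(default_factory=list)  # crop, retouch, layout


# Hypothetical example entry
record = ProvenanceRecord(
    asset_id="q3-campaign-hero-01",
    created=date(2026, 3, 15),
    prompt_author="A. Editor",
    outputs_generated=6,
    selected_by="A. Editor",
    human_edits=["cropped to 16:9", "color-corrected", "placed in layout"],
)

# Serialize to a plain dict for logging alongside the asset file
print(asdict(record))
```

The point is not the tooling but the habit: capturing prompt authorship, selection, and editing decisions at creation time is far cheaper than reconstructing them later in a licensing dispute.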

 

This piece was created with the help of generative AI tools and edited by our content team for clarity and accuracy.

About the Author

Alexis Gajewski

Contributor / AI Expert

Alexis Gajewski is the Associate Director of Newsroom Operations and Development at EndeavorB2B, where she leads editorial strategy and AI integration across a portfolio of 80+ B2B brands and 150 editors. With 18+ years in B2B media, she is best known for building the systems, training programs, and organizational infrastructure that help editorial teams operate at a higher level — faster, smarter, and with clearer standards.

Her expertise spans the full editorial stack — from SEO, GEO, and analytics to AI literacy, content strategy, and journalistic standards — with a particular focus on translating emerging technology into practical frameworks editorial teams can actually adopt. She designs and delivers training programs that meet teams where they are and build toward where the industry is going, with a specialty in AI integration that covers everything from foundational literacy to advanced workflows and agentic applications. A frequent guest on ASBPE webinars, Alexis is a recognized voice on the intersection of journalism and AI, and she writes for marketers, editors, and authors on how to thoughtfully and strategically implement AI practices.

Connect with Alexis on LinkedIn
