Unprompted: We're Getting Better at Using AI Wrong
I have some good news and some bad news. Let's be optimistic and start with the good news.
People are getting genuinely better at using AI. Conversations are more productive, outputs are more polished, and the early awkwardness of not knowing how to talk to these tools is fading fast. According to a recent AI fluency report from Anthropic, iteration is the single strongest correlate of every other fluency behavior the company measured, and 85.7% of conversations now show it. That's real progress.
Here's the bad news: That comfort is making us sloppy.
As AI outputs get more polished and professional-looking, users are becoming less likely to question them. Less likely to fact-check. Less likely to push back. The very thing that signals progress — a confident, fluent AI conversation that produces a clean, well-structured result — turns out to be the moment we're most likely to skip the scrutiny that matters most. We're getting better at using AI. We're getting worse at second-guessing it.
And that's not the only bad news. A recent eight-month study from UC Berkeley found that despite all the promise and the hype, AI isn't actually reducing workloads — it's intensifying them. Employees using AI are working longer hours, moving at a faster pace, and taking on broader task scopes than before. The efficiency gains are real, but so is the acceleration. And without deliberate guardrails, that acceleration compounds quietly until it becomes burnout.
This edition of Unprompted doesn't ignore the good news — there's plenty worth celebrating, from Google's ambitious educator AI training initiative to two genuinely impressive tool launches that expand what marketing teams can do creatively and operationally. But the research tells a more complicated story than the one most AI evangelists are telling.
Our commitment to make AI training available to all 6 million U.S. educators
Author: Chris Phillips
Website: The Keyword (Google's official blog)
Just the Facts: Google for Education is partnering with ISTE+ASCD to provide free, comprehensive Gemini AI training to all 6 million K-12 and higher education faculty in the U.S. Google describes it as the largest initiative of its kind, with the goal of helping educators and their more than 74 million students safely and thoughtfully use Google's AI tools, including Gemini and NotebookLM.
The training is structured as concise, flexible, bite-sized modules built by educators for educators, covering real-world classroom applications such as creating personalized lessons from same-day assessment results, providing individualized study coaching in large lecture settings and adapting materials for diverse student needs, including reading levels, primary languages and visual learning styles. Educators who complete sessions will receive micro-credentials and badges to demonstrate AI literacy using Google tools. The initiative will roll out in the coming months, with an interest form available now.
Why It Matters to Marketers:
- The training framework — concise, modular, role-specific and immediately applicable — is a directly transferable model for B2B marketing teams designing internal AI literacy programs. The bite-sized, use-case-first structure addresses the same resistance common in corporate training: time constraints and unclear practical relevance.
- Google embedding Gemini and NotebookLM into the foundational training of 6 million educators signals a deliberate long-term platform lock-in strategy through the education sector — meaning the next generation of workforce entrants will arrive with Google AI fluency as a baseline, not a differentiator. B2B marketers targeting education verticals or early-career audiences should factor this into product and content strategy.
- The article notes that existing AI training initiatives "often require hours of time" and "don't always clearly show how teachers can use what they've learned" — a critique that applies equally to most corporate AI training programs. Marketing and training leads should audit whether their current AI enablement efforts are outcome-oriented or just awareness-oriented, as the gap between the two is where adoption stalls.
- Google's use of micro-credentials and badges as recognition for completed training is a low-cost, high-signal mechanism to drive completion and make AI skill-building visible across an organization. B2B marketing and content teams building internal AI training programs should consider adopting a similar credentialing structure to reward participation and create accountability without requiring formal certification.
AI Doesn't Reduce Work — It Intensifies It
Author: Aruna Ranganathan and Xingqi Maggie Ye
Website: Harvard Business Review
Just the Facts: In an eight-month ethnographic study of a U.S.-based technology company with approximately 200 employees, UC Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye found that generative AI tools did not reduce workloads. Rather, they consistently intensified them, with employees working faster, taking on broader task scopes, and extending their work hours into more of the day without being explicitly asked to do so.
The researchers identified three primary mechanisms of intensification: task expansion, in which AI made stepping into others' responsibilities feel accessible and newly rewarding; blurred work-non-work boundaries, in which the low friction of prompting allowed work to spill into breaks, evenings and early mornings without deliberate intention; and increased multitasking, in which managing multiple AI-assisted threads simultaneously created cognitive load even as work felt productive. The authors recommend that organizations counter this dynamic by developing an "AI practice" — a set of intentional norms and routines governing how and when AI is used — structured around three interventions: intentional decision pauses, work sequencing to reduce fragmentation, and protected time for human connection and dialogue.
Why It Matters to Marketers:
- The three intensification patterns — task expansion, boundary erosion and multitasking — are already visible in B2B marketing teams using AI for content, research and campaign work. The finding that workers felt busier, not less busy, after AI adoption is a direct challenge to the ROI narrative on which most marketing AI implementations are built.
- The study's core finding — that AI creates a self-reinforcing cycle of acceleration, expanded scope and rising expectations — signals that, without deliberate governance, AI adoption in marketing departments is likely to produce burnout and degradation in decision quality before it yields sustainable efficiency gains.
- The authors warn that, because AI-driven workload expansion is voluntary and often framed as enjoyable experimentation, leaders systematically underestimate the load employees are actually carrying. Marketing managers who rely on anecdotal enthusiasm as a proxy for sustainable adoption risk missing warning signs until turnover or quality problems become visible.
- The "AI practice" framework the authors recommend — intentional pauses before major decisions, sequencing to batch outputs and protect focus windows and structured human connection time — is directly applicable to editorial and content operations workflows and can be piloted at the team level without cross-departmental approval.
Anthropic Education Report: The AI Fluency Index
Website: Anthropic
Just the Facts: Anthropic analyzed 9,830 anonymized Claude.ai conversations from a single week in January 2026 to establish a baseline measurement of AI fluency, defined using the 4D AI Fluency Framework's 11 directly observable behaviors. They found that 85.7% of conversations exhibited iteration and refinement, the single strongest correlate of all other fluency behaviors, with iterative conversations showing roughly double the number of fluency behaviors compared to non-iterative ones.
A key finding is that conversations involving AI-generated artifacts — including code, documents and interactive tools — showed higher rates of directive behaviors like clarifying goals and specifying formats, but meaningfully lower rates of evaluative behaviors: users were less likely to identify missing context (-5.2 percentage points), check facts (-3.7pp), or question the model's reasoning (-3.1pp) compared to non-artifact conversations. The report frames these findings as a baseline for tracking how AI fluency develops over time, with Anthropic planning future cohort analyses comparing new and experienced users, qualitative research into behaviors not observable in chat transcripts, and an investigation into whether encouraging iterative conversations causally increases critical evaluation.
Why It Matters to Marketers:
- The finding that only 30% of conversations include users telling Claude how they'd like it to interact with them is a direct, actionable gap for B2B marketing teams using AI for content and research. Establishing upfront interaction instructions — asking for pushback, uncertainty flagging or reasoning walkthroughs — is a low-effort behavior change that the data suggest meaningfully improves output quality (see the sketch after this list).
- The pattern that polished-looking AI outputs coincide with lower rates of critical evaluation — even when those are the conversations where errors matter most — has direct implications for marketing teams producing AI-assisted content at scale. As AI outputs become more polished and harder to distinguish from human work, the organizational risk of reduced scrutiny grows, not shrinks.
- The report acknowledges that its sample likely skews toward early adopters already comfortable with AI, and therefore may not represent the behaviors of broader or more resistant user populations. Marketing and training leads building AI fluency programs for mixed-adoption teams should not use this baseline as a benchmark for their own workforce without accounting for that skew.
- The report's three practical fluency recommendations — staying in the conversation through iteration, questioning polished outputs before accepting them and setting explicit terms for how AI should interact — are concrete, teachable behaviors that B2B marketing trainers can incorporate directly into AI onboarding curricula without requiring technical expertise or tool-specific training.
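To make that third behavior concrete, here's a minimal sketch of setting upfront interaction terms programmatically, using the Anthropic Python SDK. The model id and the instruction wording are illustrative assumptions, not recommendations from the report itself — the same terms can just as easily be typed at the start of a chat session.

```python
# Minimal sketch: establishing upfront interaction instructions via a system
# prompt, using the Anthropic Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative interaction terms: pushback, uncertainty flagging and
# reasoning walkthroughs, per the behaviors the report describes.
INTERACTION_TERMS = (
    "Flag any assumptions you make, and say so explicitly when you are "
    "uncertain rather than guessing. Push back on my framing when you see "
    "a weakness in it, and walk through your reasoning for substantive claims."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id; substitute your own
    max_tokens=1024,
    system=INTERACTION_TERMS,   # upfront terms apply to the whole conversation
    messages=[
        {"role": "user", "content": "Draft a positioning statement for our Q3 launch."}
    ],
)
print(response.content[0].text)
```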
Introducing Perplexity Computer
Author: Perplexity Team
Website: Perplexity AI
Just the Facts: Perplexity AI has launched Perplexity Computer, described as a general-purpose digital AI worker that operates software interfaces the same way a human would, capable of creating and executing entire workflows that can run for hours or months rather than responding to single queries or completing discrete tasks. The system functions by accepting a described outcome, breaking it into tasks and subtasks, and spawning sub-agents that execute work in parallel across isolated compute environments with access to a real filesystem, browser and tool integrations — with sub-agents coordinating automatically and asynchronously so users can focus elsewhere or run multiple instances simultaneously.
Perplexity Computer uses multi-model orchestration rather than a single AI model, currently running Claude Opus 4.6 as its core reasoning engine while deploying Gemini for deep research, Nano Banana for images, Veo 3.1 for video, Grok for lightweight speed tasks, and ChatGPT 5.2 for long-context recall — with the model selection designed to change as individual models advance.
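For readers who want to picture that architecture, the sketch below imitates the decompose-route-parallelize pattern described above in plain Python. It is purely illustrative: the routing table, function names and hard-coded plan are assumptions, and nothing here touches a real Perplexity API.

```python
# Illustrative sketch of the orchestration pattern: a goal is decomposed into
# subtasks, each routed to a (stubbed) model suited to its type, and
# sub-agents run concurrently.
import asyncio

# Hypothetical routing table, mirroring the article's description of
# multi-model orchestration (subtask type -> model).
MODEL_ROUTING = {
    "reasoning": "core-reasoning-model",
    "research": "deep-research-model",
    "image": "image-model",
}

async def run_subagent(subtask: str, task_type: str) -> str:
    """Stub for one sub-agent executing a subtask in isolation."""
    model = MODEL_ROUTING[task_type]
    await asyncio.sleep(0.1)  # stands in for real model calls and tool use
    return f"[{model}] completed: {subtask}"

async def run_workflow(goal: str) -> list[str]:
    # A real system would decompose the goal with a planner model;
    # here the plan is hard-coded for illustration.
    plan = [
        ("gather competitor pricing data", "research"),
        ("draft the comparison report", "reasoning"),
        ("generate a summary chart", "image"),
    ]
    # Sub-agents execute in parallel; results are gathered asynchronously.
    return await asyncio.gather(
        *(run_subagent(subtask, task_type) for subtask, task_type in plan)
    )

results = asyncio.run(run_workflow("weekly competitive intelligence brief"))
for line in results:
    print(line)
```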
Why It Matters to Marketers:
- The sub-agent architecture — where one agent drafts a document while another simultaneously gathers the data it needs — maps directly to multi-step B2B marketing workflows such as competitive research reports, content briefs and campaign planning that currently require sequential human handoffs between tools. The asynchronous execution model means these could run in the background without active management.
- Perplexity Computer's multi-model orchestration approach — selecting the best available model for each specific subtask rather than relying on a single provider — signals that competitive advantage in AI tooling is shifting from model quality to orchestration intelligence. Marketing technology vendors and enterprise AI platform evaluators should factor this architectural approach into long-term procurement decisions, not just current benchmark comparisons.
- The article describes workflows that can run for "hours or even months" with limited human checkpoints — an appealing efficiency claim that also poses a significant governance risk for B2B marketing teams, where brand voice, factual accuracy and compliance review are non-negotiable. Teams evaluating Perplexity Computer should clearly define which workflow stages require human review before any long-running autonomous execution is deployed to client-facing or regulated content.
- Perplexity Computer is currently available to Perplexity Max subscribers, making it accessible to individual practitioners for immediate piloting. B2B content and demand gen teams should identify one bounded, repeatable research-heavy workflow — such as a weekly competitive intelligence brief or an industry news digest — and test the system on that use case before evaluating it for broader deployment.
Nano Banana 2: Combining Pro Capabilities with Lightning-Fast Speed
Author: Naina Raisinghani
Website: The Keyword (Google's official blog)
Just the Facts: Google DeepMind released Nano Banana 2 (Gemini 3.1 Flash Image), positioning it as a model that combines the advanced world knowledge and quality of Nano Banana Pro with the speed of Gemini Flash — making capabilities previously limited to the Pro tier available to a broader user base at faster generation speeds. The model introduces subject consistency for up to five characters and 14 objects in a single workflow, precision instruction following, production-ready output from 512px to 4K across multiple aspect ratios, and text rendering with translation capabilities for localizing text within images.
Nano Banana 2 is rolling out across Google's product ecosystem including the Gemini app, Google Search's AI Mode and Lens, AI Studio, Vertex AI, Flow and Google Ads, and is paired with expanded AI content provenance tools — SynthID watermarking and C2PA Content Credentials — which have already been used over 20 million times since their November launch to help users identify AI-generated media.
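For teams that want to pilot the model through the API surfaces mentioned above (AI Studio, Vertex AI), here's a minimal sketch using the google-genai Python SDK. The model id is an assumption based on the naming in the article — verify the released identifier in the current documentation before use.

```python
# Minimal sketch: generating a campaign visual via the Gemini API, using the
# google-genai Python SDK (pip install google-genai pillow).
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3.1-flash-image",  # assumed id for Nano Banana 2; verify in docs
    contents=(
        "A clean 16:9 infographic comparing three pricing tiers for a "
        "B2B SaaS product, with short, legible labels."
    ),
)

# Image output is returned as inline data on the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("infographic.png")
```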
Why It Matters to Marketers:
- Nano Banana 2's availability in Google Ads — powering creative suggestions during campaign creation — and its production-ready specs from 512px to 4K across multiple aspect ratios mean B2B marketing teams can generate on-spec visual assets for paid campaigns, social and display directly within tools they already use, reducing the handoff cycle between content and design for time-sensitive deliverables.
- The integration of AI image generation directly into Google Search's AI Mode and Ads infrastructure signals that AI-generated visuals are transitioning from a standalone creative tool into embedded, default infrastructure across the entire Google marketing stack. B2B marketers who have been slow to develop AI image workflows should treat this as a forcing function — the ecosystem is moving toward AI-assisted creative as the default, not the exception.
- The model's use of real-time web search to ground image generation — pulling from Gemini's real-world knowledge base and live image search — raises important questions about the sourcing of visual references for rendering specific subjects. B2B marketers generating images of real locations, real products or real people's likenesses should review their organization's legal and brand guidelines before using search-grounded generation for client-facing or regulated content, as provenance of reference material may be difficult to document or audit.
- Nano Banana 2 is the new default image model in Google Flow at zero credits, making it immediately accessible to anyone already using the platform. B2B content teams should pilot the model specifically on infographic and data visualization use cases — the article highlights these as a primary strength — which are among the highest-demand, highest-effort visual formats in B2B content programs and a natural proving ground for evaluating whether AI generation can meaningfully reduce production time.
Like what you're reading? Subscribe to our free weekly newsletter!
About the Author

Alexis Gajewski
Contributor
Alexis Gajewski is the Associate Director of Newsroom Operations and Development at EndeavorB2B, bringing 18 years of experience in B2B media and publishing. A digital-first editorial leader, she sets the vision and direction for content strategies that maximize reach, engagement, and visibility across EndeavorB2B’s portfolio of brands. Alexis oversees editorial planning, workflow management, and team development, ensuring that all content aligns with both audience needs and business objectives. With deep expertise in SEO, AI, and analytics, she drives data-informed editorial decisions that strengthen storytelling, boost organic growth, and uphold the highest standards of quality and integrity.
As a strategist and mentor, Alexis works across the editorial department to foster a culture of creativity, collaboration, and continuous learning. She develops company-wide editorial standards, training programs, and performance frameworks designed to elevate content quality and operational efficiency. Her passion for innovation keeps teams at the forefront of media transformation—whether implementing AI-driven tools, refining workflows, or exploring new content formats. Through her leadership, Alexis empowers editors, reporters, and content strategists at EndeavorB2B to adapt, grow, and deliver impactful, audience-focused journalism in a fast-evolving digital landscape.