The Challenge: Weak Creative Performance Insight

Modern marketing teams run hundreds of ad variations across Meta, Google, TikTok, LinkedIn and more. Yet when performance drops, answering a simple question – which creative elements actually drive clicks, conversions, or ROAS? – becomes almost impossible. Data is scattered across platforms, naming conventions are inconsistent, and weekly performance reports rarely go deeper than “this campaign worked, this one didn’t.”

Traditional analysis methods rely on manual spreadsheet work, gut feeling in creative reviews, and one-off deep dives when something is on fire. Analysts manually tag creatives, export CSVs, build pivot tables, and try to isolate variables like headline, visual style, or call-to-action. By the time patterns emerge, the campaign is often over, budgets have shifted, and the opportunity to iterate quickly has been lost. The result is a constant lag between what happens in the market and how your creative strategy responds.

The business impact is substantial. Without clear creative performance insight, brands over-invest in underperforming angles, miss out on scaling winners early, and waste hours each week on low-value reporting. Cost per acquisition creeps up, experimentation slows down, and marketing teams struggle to justify spend in conversations with finance. Over time, this erodes competitive advantage: faster, more data-driven competitors simply learn quicker which creative stories convert and outbid you in the auction.

This challenge is real, but it is solvable. With the latest generation of AI models like Claude, it’s now possible to ingest messy ad exports and transform them into structured, nuanced insight about what truly drives ROAS. At Reruption, we’ve built AI-powered analysis and decision-support tools inside organizations facing similar complexity. The rest of this page walks through practical ways to use Claude to move from noisy dashboards to clear creative hypotheses – and how to set this up so it actually sticks in your marketing workflow.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first analytics and decision tools, we’ve seen that the real unlock is not "more data" but better questions and structure. Claude is particularly strong at long-form reasoning over messy inputs – exactly what you need to make sense of raw ad exports, creative briefs, and fragmented dashboards. Used well, Claude can become a creative insight copilot that helps your marketing team see patterns, form hypotheses, and prioritize testing, instead of drowning in spreadsheets.

Think in Creative Hypotheses, Not Just Metrics

Most marketing teams think in terms of surface metrics: CTR, CPC, conversion rate, ROAS. Those are crucial, but they don’t explain why a creative works. To get value from Claude for ad performance optimization, you need to frame the problem as a set of hypotheses: “Is urgency-based messaging outperforming aspirational messaging?” “Do product-centric visuals beat lifestyle shots on retargeting?” Claude is very good at ingesting data and narrative context, then suggesting nuanced hypotheses you can test.

Before you upload any data, write down the 3–5 questions you want Claude to answer about your creatives. Combine quantitative objectives (e.g. lower CPA) with qualitative angles (e.g. emotional tone, problem/solution framing, benefit hierarchy). This mindset shift turns Claude from a glorified reporting tool into a strategic creative insight partner your team can use in ongoing campaign planning.

Design a Minimal but Robust Data Structure

Claude handles unstructured text very well, but for systematic creative performance insight you still need a minimal structure: consistent naming for campaigns, ad sets, and asset variants; clear columns for spend, impressions, clicks, conversions, revenue. Without that, you’ll get interesting narratives but weak, repeatable insight. Reruption often starts projects by defining a pragmatic data schema that your team can actually maintain, instead of a theoretically perfect taxonomy that collapses after two weeks.
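
For illustration, a pragmatic schema can be as simple as a fixed set of required columns that every export is checked against before analysis. The sketch below shows one way to validate an export and derive ROAS and CPA consistently; the column names are assumptions, not a standard, and should be adapted to what your team can actually maintain.

Example Python sketch (illustrative):
# Minimal sketch of a pragmatic creative-performance schema. Column names are
# assumptions - adjust them to your own naming conventions and platforms.
import pandas as pd

REQUIRED_COLUMNS = [
    "campaign", "ad_set", "ad_name", "creative_text", "visual_description",
    "spend", "impressions", "clicks", "conversions", "revenue",
]

def validate_export(path: str) -> pd.DataFrame:
    """Load a platform export and fail loudly if the schema has drifted."""
    df = pd.read_csv(path)
    missing = set(REQUIRED_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"Export is missing columns: {sorted(missing)}")
    # Derive the metrics Claude will reason about, guarding against division by zero.
    df["roas"] = df["revenue"] / df["spend"].replace(0, pd.NA)
    df["cpa"] = df["spend"] / df["conversions"].replace(0, pd.NA)
    return df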

Strategically, this also means aligning marketing, analytics, and sometimes finance on what “good performance” means. If your BI team uses contribution margin while your marketers optimize for ROAS, Claude will surface conflicting signals. A shared metric layer – even if simple at first – lets you use Claude to prioritize creative directions in a way the whole organization can trust.

Prepare Your Team for an AI-Augmented Workflow

Introducing Claude into creative performance analysis is not just a tooling change; it’s a workflow and culture shift. Creative, performance marketing, and analytics teams need to understand where AI-driven insights fit into existing rituals like weekly performance calls, creative reviews, and sprint planning. If Claude’s recommendations live in a parallel universe, they’ll be ignored after the initial novelty wears off.

We recommend defining explicit touchpoints: for example, “Every Monday, Claude summarizes last week’s performance and proposes 3 new creative hypotheses,” or “Before new campaigns go live, Claude reviews the brief against past performance patterns.” This makes the AI visible and useful, rather than a side experiment only one analyst cares about.

Mitigate Risk with Guardrails and Human Oversight

Claude is powerful but not infallible. It can mistake spurious correlations for real patterns or overfit its conclusions to a limited sample of campaigns. Strategically, you need clear guardrails: Claude should suggest patterns and hypotheses, not autonomously switch off your top-performing campaigns or reallocate budgets without human review. Pair its qualitative pattern recognition with your existing quantitative checks in tools like Google Ads, Meta Ads Manager, or your BI stack.

At Reruption, we design workflows where Claude’s output feeds into a human decision step. For example, Claude might propose that “short benefit-led headlines with product imagery” outperform others. A performance marketer then validates this against native platform reports, sanity-checks the sample size, and turns the insight into a structured A/B test plan. This keeps risk low while still accelerating learning.

Start with a Focused Pilot Before Scaling Across Channels

It’s tempting to throw all your Meta, Google, TikTok, and programmatic data at Claude from day one. In practice, this leads to confusion and over-engineering. A better strategic path is to pick one channel and one core objective – e.g. Meta prospecting for new customer acquisition – and pilot Claude as your “creative insights analyst” there. Once the workflow is proven and your team trusts the output, expand step by step.

This pilot-first approach aligns with Reruption’s AI PoC philosophy: validate that AI-driven creative analysis delivers real lift (e.g. lower CPA, higher ROAS, faster creative iteration) in a contained environment. Then, invest in automation, integrations, and process changes to scale it. You de-risk the initiative while still moving faster than traditional consulting or BI projects.

Used with the right structure and mindset, Claude can transform weak creative performance insight into a repeatable advantage: clearer patterns, sharper hypotheses, and faster creative iteration that shows up in ROAS. Reruption combines this tool with deep engineering and workflow design experience to embed AI-driven creative analysis directly into your marketing routines, not just in a slide deck. If you want to explore a focused pilot or turn your existing exports into actionable insight, we’re happy to discuss how our AI PoC and Co-Preneur approach could fit your specific setup.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Banking to Aerospace: Learn how companies successfully use AI.

bunq

Banking

As bunq experienced rapid growth as the second-largest neobank in Europe, scaling customer support became a critical challenge. With millions of users demanding personalized banking information on accounts, spending patterns, and financial advice on demand, the company faced pressure to deliver instant responses without proportionally expanding its human support teams, which would increase costs and slow operations. Traditional search functions in the app were insufficient for complex, contextual queries, leading to inefficiencies and user frustration. Additionally, ensuring data privacy and accuracy in a highly regulated fintech environment posed risks. bunq needed a solution that could handle nuanced conversations while complying with EU banking regulations, avoiding hallucinations common in early GenAI models, and integrating seamlessly without disrupting app performance. The goal was to offload routine inquiries, allowing human agents to focus on high-value issues.

Solution

bunq addressed these challenges by developing Finn, a proprietary GenAI platform integrated directly into its mobile app, replacing the traditional search function with a conversational AI chatbot. After hiring over a dozen data specialists in the prior year, the team built Finn to query user-specific financial data securely, answer questions on balances, transactions, budgets, and even provide general advice while remembering conversation context across sessions. Launched as Europe's first AI-powered bank assistant in December 2023 following a beta, Finn evolved rapidly. By May 2024, it became fully conversational, enabling natural back-and-forth interactions. This retrieval-augmented generation (RAG) approach grounded responses in real-time user data, minimizing errors and enhancing personalization.

Results

  • 100,000+ questions answered within months post-beta (end-2023)
  • 40% of user queries fully resolved autonomously by mid-2024
  • 35% of queries assisted, totaling 75% immediate support coverage
  • Hired 12+ data specialists pre-launch for data infrastructure
  • Second-largest neobank in Europe by user base (1M+ users)
Read case study →

Mass General Brigham

Healthcare

Mass General Brigham, one of the largest healthcare systems in the U.S., faced a deluge of medical imaging data from radiology, pathology, and surgical procedures. With millions of scans annually across its 12 hospitals, clinicians struggled with analysis overload, leading to delays in diagnosis and increased burnout rates among radiologists and surgeons. The need for precise, rapid interpretation was critical, as manual reviews limited throughput and risked errors in complex cases like tumor detection or surgical risk assessment. Additionally, operative workflows required better predictive tools. Surgeons needed models to forecast complications, optimize scheduling, and personalize interventions, but fragmented data silos and regulatory hurdles impeded progress. Staff shortages exacerbated these issues, demanding decision support systems to alleviate cognitive load and improve patient outcomes.

Solution

To address these challenges, Mass General Brigham established a dedicated Artificial Intelligence Center, centralizing research, development, and deployment of hundreds of AI models focused on computer vision for imaging and predictive analytics for surgery. This enterprise-wide initiative integrates ML into clinical workflows, partnering with tech giants like Microsoft for foundation models in medical imaging. Key solutions include deep learning algorithms for automated anomaly detection in X-rays, MRIs, and CTs, reducing radiologist review time. For surgery, predictive models analyze patient data to forecast post-operative risks, improving surgical planning. Robust governance frameworks ensure ethical deployment, addressing bias and explainability.

Results

  • $30 million AI investment fund established
  • Hundreds of AI models managed for radiology and pathology
  • Improved diagnostic throughput via AI-assisted radiology
  • AI foundation models developed through Microsoft partnership
  • Initiatives for AI governance in medical imaging deployed
  • Reduced clinician workload and burnout through decision support
Read case study →

Rolls-Royce Holdings

Aerospace

Jet engines are highly complex, operating under extreme conditions with millions of components subject to wear. Airlines faced unexpected failures leading to costly groundings, with unplanned maintenance causing millions in daily losses per aircraft. Traditional scheduled maintenance was inefficient, often resulting in over-maintenance or missed issues, exacerbating downtime and fuel inefficiency. Rolls-Royce needed to predict failures proactively amid vast data from thousands of engines in flight. Challenges included integrating real-time IoT sensor data (hundreds per engine), handling terabytes of telemetry, and ensuring accuracy in predictions to avoid false alarms that could disrupt operations. The aerospace industry's stringent safety regulations added pressure to deliver reliable AI without compromising performance.

Solution

Rolls-Royce developed the IntelligentEngine platform, combining digital twins—virtual replicas of physical engines—with machine learning models. Sensors stream live data to cloud-based systems, where ML algorithms analyze patterns to predict wear, anomalies, and optimal maintenance windows. Digital twins enable simulation of engine behavior pre- and post-flight, optimizing designs and schedules. Partnerships with Microsoft Azure IoT and Siemens enhanced data processing and VR modeling, scaling AI across Trent series engines like Trent 7000 and 1000. Ethical AI frameworks ensure data security and bias-free predictions.

Results

  • 48% increase in time on wing before first removal
  • Doubled Trent 7000 engine time on wing
  • Reduced unplanned downtime by up to 30%
  • Improved fuel efficiency by 1-2% via optimized ops
  • Cut maintenance costs by 20-25% for operators
  • Processed terabytes of real-time data from 1000s of engines
Read case study →

Citibank Hong Kong

Wealth Management

Citibank Hong Kong faced growing demand for advanced personal finance management tools accessible via mobile devices. Customers sought predictive insights into budgeting, investing, and financial tracking, but traditional apps lacked personalization and real-time interactivity. In a competitive retail banking landscape, especially in wealth management, clients expected seamless, proactive advice amid volatile markets and rising digital expectations in Asia. Key challenges included integrating vast customer data for accurate forecasts, ensuring conversational interfaces felt natural, and overcoming data privacy hurdles in Hong Kong's regulated environment. Early mobile tools showed low engagement, with users abandoning apps due to generic recommendations, highlighting the need for AI-driven personalization to retain high-net-worth individuals.

Solution

Wealth 360 emerged as Citibank HK's AI-powered personal finance manager, embedded in the Citi Mobile app. It leverages predictive analytics to forecast spending patterns, investment returns, and portfolio risks, delivering personalized recommendations through a chatbot-style conversational interface. Drawing from Citi's global AI expertise, it processes transaction data, market trends, and user behavior for tailored advice on budgeting and wealth growth. Implementation involved machine learning models for personalization and natural language processing (NLP) for intuitive chats, building on Citi's prior successes like Asia-Pacific chatbots and APIs. This solution addressed gaps by enabling proactive alerts and virtual consultations, enhancing customer experience without human intervention.

Results

  • 30% increase in mobile app engagement metrics
  • 25% improvement in wealth management service retention
  • 40% faster response times via conversational AI
  • 85% customer satisfaction score for personalized insights
  • 18M+ API calls processed in similar Citi initiatives
  • 50% reduction in manual advisory queries
Read case study →

Nubank

Fintech

Nubank, Latin America's largest digital bank serving 114 million customers across Brazil, Mexico, and Colombia, faced immense pressure to scale customer support amid explosive growth. Traditional systems struggled with high-volume Tier-1 inquiries, leading to longer wait times and inconsistent personalization, while fraud detection required real-time analysis of massive transaction data from over 100 million users. Balancing fee-free services, personalized experiences, and robust security was critical in a competitive fintech landscape plagued by sophisticated scams like spoofing and fake call-center fraud. Internally, call centers and support teams needed tools to handle complex queries efficiently without compromising quality. Pre-AI, response times were bottlenecks, and manual fraud checks were resource-intensive, risking customer trust and regulatory compliance in dynamic LatAm markets.

Solution

Nubank integrated OpenAI GPT-4 models into its ecosystem for a generative AI chat assistant, call center copilot, and advanced fraud detection combining NLP and computer vision. The chat assistant autonomously resolves Tier-1 issues, while the copilot aids human agents with real-time insights. For fraud, foundation model-based ML analyzes transaction patterns at scale. Implementation involved a phased approach: piloting GPT-4 for support in 2024, expanding to internal tools by early 2025, and enhancing fraud systems with multimodal AI. This AI-first strategy, rooted in machine learning, enabled seamless personalization and efficiency gains across operations.

Results

  • 55% of Tier-1 support queries handled autonomously by AI
  • 70% reduction in chat response times
  • 5,000+ employees using internal AI tools by 2025
  • 114 million customers benefiting from personalized AI service
  • Real-time fraud detection for 100M+ transaction analyses
  • Significant boost in operational efficiency for call centers
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Standardize Your Ad Export and Brief Format for Claude

Claude delivers the best insights when it sees consistent, well-labeled data. Before every analysis session, export your ad performance data (from Meta, Google, etc.) into a structured CSV or Excel and make sure key columns are present: campaign, ad set, ad name, creative text, image/video description or ALT text, spend, impressions, clicks, conversions, revenue/ROAS.

In parallel, align your creative briefs in a standard template (objective, target audience, main message, emotional tone, key benefits, offer). When you share both the export and the brief with Claude, it can connect the intent of the creative with its actual performance, producing deeper insights than metrics-only analysis.
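
For illustration only, such a brief can also be captured in a lightweight, machine-readable form that is easy to paste next to the export when prompting Claude; the field names below mirror the template above and are assumptions to adapt to your own process.

Example Python sketch (illustrative):
# A lightweight, machine-readable brief. Field names mirror the template above
# and are assumptions - adjust them to your own briefing process.
creative_brief = {
    "objective": "Lower CPA for new-customer prospecting",
    "target_audience": "Urban professionals, 25-40, price-sensitive",
    "main_message": "Save time without paying a premium",
    "emotional_tone": "Reassuring, pragmatic",
    "key_benefits": ["Set up in minutes", "No hidden fees", "Cancel anytime"],
    "offer": "First month free",
}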

Example prompt to Claude:
You are an AI marketing analyst helping us understand which ad creatives drive ROAS.

Inputs:
1) Ad performance export (CSV pasted below)
2) Creative brief template and a few real examples

Tasks:
- Identify performance patterns across headlines, body copy, visual descriptions, and CTAs.
- Highlight 3–5 creative angles that consistently outperform others.
- Highlight 3–5 angles that consistently underperform.
- Propose 5 concrete hypotheses we should test next week.
- Present results in a structured table with columns: Angle, Evidence, Channels, Suggested Next Test.

Expected outcome: Claude produces an insight report that your performance marketer can quickly review and turn into a prioritized test plan, cutting manual analysis time by several hours per week.
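
If you prefer to script this step rather than paste data by hand, a minimal sketch using the Anthropic Python SDK could look like the following; the model name and file path are assumptions to adapt to your own account and setup.

Example Python sketch (illustrative):
# Minimal sketch: send the standardized export plus the analysis prompt to Claude
# via the Anthropic Python SDK. Model name and file path are assumptions.
import anthropic

ANALYSIS_PROMPT = """You are an AI marketing analyst helping us understand which ad creatives drive ROAS.
Identify performance patterns across headlines, body copy, visual descriptions, and CTAs.
Highlight 3-5 consistently outperforming and 3-5 underperforming angles, and propose
5 concrete hypotheses to test next week. Present results in a table with columns:
Angle, Evidence, Channels, Suggested Next Test."""

def run_creative_analysis(export_csv_path: str) -> str:
    with open(export_csv_path, encoding="utf-8") as f:
        export_csv = f.read()
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; use whichever you have access to
        max_tokens=2000,
        messages=[{"role": "user",
                   "content": f"{ANALYSIS_PROMPT}\n\nAd performance export (CSV):\n{export_csv}"}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(run_creative_analysis("weekly_ads_export.csv"))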

Tag and Decompose Creatives into Testable Elements

To move beyond “this ad works” toward actionable creative insight, you need to break each ad into components: value proposition, emotional tone, offer type, format, CTA style, and visual concept. You can either do this manually or let Claude propose tags based on your raw ad text and descriptions.

Start by asking Claude to generate a tagging scheme and automatically assign tags to each ad row from your export. Then, in a second step, ask it to analyze performance by tag combination.

Example prompt to Claude:
You are a creative performance analyst.
1) Define a concise tagging scheme for our ads, including:
   - Value proposition (e.g. price, quality, convenience, social proof)
   - Emotional tone (e.g. urgent, aspirational, reassuring, playful)
   - Offer type (e.g. discount, free trial, bundle, new launch)
   - Visual concept (based on descriptions in the data)
2) Apply tags to each ad row in the dataset below.
3) Then, analyze performance by tag and tag combination, focusing on ROAS and CPA.
4) Output two tables:
   - Table 1: Tags ranked by performance
   - Table 2: Best-performing tag combinations and their evidence.

Expected outcome: a clear view of which creative themes and combinations actually move your KPIs, enabling more focused ideation and scaling decisions.
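
Once Claude's tags have been merged back into your export (for example as extra CSV columns), a small script can rank tags by aggregated performance; the column names below are assumptions that match the schema sketched earlier.

Example Python sketch (illustrative):
# Minimal sketch: rank creative tags by aggregated ROAS and CPA once Claude's
# tags are merged into the export. Column names are assumptions.
import pandas as pd

def rank_tags(df: pd.DataFrame, tag_column: str = "emotional_tone") -> pd.DataFrame:
    """Aggregate performance per tag so winners and losers are directly comparable."""
    grouped = df.groupby(tag_column).agg(
        spend=("spend", "sum"),
        revenue=("revenue", "sum"),
        conversions=("conversions", "sum"),
        ads=("ad_name", "nunique"),
    )
    grouped["roas"] = grouped["revenue"] / grouped["spend"]
    grouped["cpa"] = grouped["spend"] / grouped["conversions"]
    return grouped.sort_values("roas", ascending=False)

# Usage: tagged = pd.read_csv("tagged_ads.csv"); print(rank_tags(tagged, "offer_type"))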

Use Claude to Draft Data-Backed Creative Briefs

Once you know which angles perform, close the loop by letting Claude assist with new briefs. Instead of starting from a blank page, you can have Claude produce a data-backed brief that summarizes winning themes, audience insights, and example messages tailored to each channel.

Feed Claude your past performance analysis and ask it to generate a concise brief for the next sprint, aligned with your growth targets and budgets.

Example prompt to Claude:
You are a senior performance creative strategist.
Based on the analysis below (paste Claude's previous insight output), create a creative brief for our next campaign.

Brief should include:
- Objective and primary KPI
- Target audiences and key pain points
- 3–4 winning creative angles with supporting evidence
- Do's and don'ts for copy and visuals, based on past performance
- 5 concrete ad concepts per channel (Meta, Google Display, TikTok) with sample headlines and body copy.

Expected outcome: your creative team receives a structured, insight-based brief that translates past performance into future concepts, reducing back-and-forth and time-to-first-draft.

Automate Weekly Creative Performance Summaries

Instead of manually compiling weekly decks, you can give Claude a recurring task: ingest the latest exports and generate a standardized insight summary for your team. This doesn’t require deep integration at first – even a simple workflow where an analyst exports CSVs and pastes them into Claude on Monday morning can dramatically speed up reporting.

Define a fixed summary format that matches how your leadership and creative teams like to consume insight.

Example prompt to Claude:
You are our weekly creative insights assistant.
Using the ad performance data from last week (pasted below):
- Summarize overall performance vs. the previous 4 weeks.
- Identify top 10 winning creatives and explain WHY they worked.
- Identify top 10 underperformers and likely reasons.
- Suggest 5 concrete optimization actions for this week.
- Produce an email-ready summary with bullet points for leadership
  and a more detailed section for the performance/creative team.

Expected outcome: consistent, high-quality weekly insights in 10–15 minutes instead of hours, freeing your senior marketers to focus on decisions, not deck-building.
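
As a sketch of what the recurring Monday step could look like once you automate it, the script below reads last week's export, requests the standardized summary from Claude, and saves it where the team will see it; file paths, the model name, and the scheduler (cron, Airflow, etc.) are assumptions.

Example Python sketch (illustrative):
# Minimal sketch of the recurring weekly step. Paths and model name are assumptions.
import datetime
import anthropic

WEEKLY_PROMPT = """You are our weekly creative insights assistant.
Summarize overall performance vs. the previous 4 weeks, identify the top 10 winning and
top 10 underperforming creatives with likely reasons, suggest 5 optimization actions,
and produce an email-ready summary for leadership plus a detailed section for the team."""

def weekly_summary(export_csv_path: str, output_path: str) -> None:
    with open(export_csv_path, encoding="utf-8") as f:
        export_csv = f.read()
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name
        max_tokens=3000,
        messages=[{"role": "user",
                   "content": f"{WEEKLY_PROMPT}\n\nLast week's export (CSV):\n{export_csv}"}],
    )
    with open(output_path, "w", encoding="utf-8") as f:
        f.write(f"Creative insights summary - {datetime.date.today().isoformat()}\n\n")
        f.write(message.content[0].text + "\n")

# Usage: weekly_summary("last_week_export.csv", "weekly_summary.txt")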

Turn Insights into Structured Test Plans and Naming Conventions

Insight only matters if it changes what you test next. Use Claude to convert qualitative findings into a structured A/B testing roadmap and harmonized naming conventions that make future analysis easier. This creates a virtuous cycle: better naming → better data → better insights.

Ask Claude to propose a testing backlog prioritized by expected impact and ease of implementation, plus a naming scheme that encodes key creative variables, so next month’s exports are easier to analyze.

Example prompt to Claude:
You are an experimentation lead.
Given the creative insight report below, create:
1) A prioritized test plan for the next 4 weeks, including:
   - Test name
   - Hypothesis
   - Variants to create
   - Primary KPI and guardrail metrics
2) A simple, scalable naming convention for campaigns/ad sets/ads
   that encodes: audience, offer, angle, format, and CTA.
3) A checklist for our team to follow when setting up each new test.

Expected outcome: a clear roadmap for experimentation and a consistent naming convention that makes each future Claude analysis faster and more reliable.
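
To make the naming convention pay off in later analysis, the encoded variables can be parsed straight out of ad names; the separator and field order in the sketch below are assumptions to align with whatever scheme Claude proposes for your account structure.

Example Python sketch (illustrative):
# Minimal sketch: split a naming convention such as
# "prospecting-us_freetrial_urgency_video_shopnow" back into analysis columns.
# The separator and field order are assumptions - match them to your own scheme.
from typing import Optional

FIELDS = ["audience", "offer", "angle", "format", "cta"]

def parse_ad_name(ad_name: str, sep: str = "_") -> Optional[dict]:
    parts = ad_name.lower().split(sep)
    if len(parts) != len(FIELDS):
        return None  # flag non-conforming names for cleanup instead of guessing
    return dict(zip(FIELDS, parts))

# Usage: parse_ad_name("prospecting-us_freetrial_urgency_video_shopnow")
# -> {"audience": "prospecting-us", "offer": "freetrial", "angle": "urgency",
#     "format": "video", "cta": "shopnow"}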

Expected Outcomes and Realistic Benchmarks

When implemented as part of your workflow, Claude-powered creative insight typically aims at three realistic outcomes in the first 8–12 weeks: (1) 30–50% reduction in manual analysis and reporting time for performance marketers, (2) consistently faster creative iteration cycles (e.g. from monthly to bi-weekly or weekly), and (3) measurable improvements in ROAS or CPA on key campaigns driven by better scaling of winning angles and earlier pruning of weak ones. Exact numbers will depend on your spend levels, test volume, and how tightly you integrate Claude’s recommendations into decision-making, but the pattern is clear: more learning per euro spent.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude identify which creative elements drive performance?

Claude can ingest your raw ad exports, creative text, and even high-level briefs, then decompose each ad into themes and elements such as value proposition, emotional tone, offer type, and visual concept. It then cross-references these elements with performance metrics like CTR, CPA, and ROAS to surface patterns you would struggle to see manually.

Instead of just telling you which ads worked, Claude helps explain why they worked, proposing clear hypotheses like “social proof + reassurance tone performs best on retargeting” or “short, benefit-led headlines outperform feature lists on prospecting.” This lets your team focus new creative work and budget on the angles that empirically move the needle.

What team and resources do we need to get started?

You don’t need a large data science team to benefit from Claude-based creative analysis. In most organizations, the core requirements are:

  • A performance marketer or analyst who can export data from your ad platforms and understands your core KPIs.
  • Someone who can maintain basic consistency in naming conventions and brief templates.
  • Clear ownership of the workflow (e.g. “performance lead runs the weekly Claude analysis and shares insights”).

Claude handles the heavy lifting of reading raw tables, interpreting text, and suggesting patterns. Reruption can help you define the right prompts, data structure, and routines so your existing team can run this without hiring new specialists.

How quickly will we see results?

Time-to-impact depends on your spend level and test velocity, but many teams see qualitative improvements in clarity within the first 1–2 weeks: clearer weekly summaries, better hypotheses, and more focused briefs. Quantitative impact on ROAS and CPA usually appears over a few test cycles, typically in the 4–12 week range, as you start to scale proven angles and stop funding weak ones earlier.

The key is to treat Claude as part of your experimentation loop: analyze → hypothesize → test → analyze again. If your team is already running frequent creative tests, Claude can accelerate learning quickly. If your testing culture is still maturing, the first benefit will be structure and speed in how you prioritize what to test.

What does it cost, and what ROI can we expect?

The direct cost of using Claude is relatively low compared to typical media budgets or agency retainers. The main investment is in setting up the right workflows, prompts, and data structure. ROI comes from three areas:

  • Reduced analysis time: performance teams spend fewer hours in spreadsheets and reporting.
  • Smarter budget allocation: faster identification and scaling of winning angles, and earlier pruning of losers.
  • Higher creative hit rate: briefs and concepts are guided by actual performance patterns, not just intuition.

In practice, even a small percentage improvement in ROAS on your main channels often exceeds the implementation and usage cost of Claude by a wide margin. Reruption’s AI PoC approach is designed to validate this quickly in your real environment before you commit to broader rollout.

How can Reruption help us implement this?

Reruption supports you end-to-end, from idea to working solution. With our AI PoC offering (9,900€), we first define and scope a concrete use case such as “weekly Claude-powered creative insight for Meta and Google campaigns,” then build a functioning prototype in days: data ingestion, prompt design, and example outputs tailored to your setup.

Beyond the PoC, our Co-Preneur approach means we embed with your marketing and analytics teams, operate inside your P&L, and help you ship real internal tools and workflows – not just slides. We bring the engineering depth to connect Claude into your existing tools where needed, design guardrails for security and compliance, and coach your team on running AI-augmented creative reviews and test planning. The goal is simple: a sustainable, AI-first way of learning which creatives actually drive ROAS in your organization.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media