The Challenge: Unclear Channel Attribution

Marketing teams are under constant pressure to prove which channels actually drive revenue. But modern customer journeys run across search, social, display, email, marketplaces and offline touchpoints. When a buyer has 10+ interactions before converting, it becomes almost impossible to say which touchpoints really mattered using simple web analytics views. The result is unclear channel attribution, shaky ROI numbers and endless debates about where the next euro of budget should go.

Traditional approaches like last-click, first-click or static position-based models were built for a simpler web. They ignore the sequencing and synergy of touchpoints, treat every user path as if it were identical and cannot reconcile conflicting numbers from Google Ads, Meta, CRM and GA4. Even multi-touch rule-based models quickly become unmanageable as channels, campaigns and formats multiply. In a privacy-first world with partial tracking loss and walled gardens, these methods simply do not capture reality anymore.

The business impact is significant. Effective but early-funnel channels such as YouTube, display, content syndication or awareness campaigns are chronically underfunded, while retargeting and brand search look artificially strong. Budget decisions become reactive and political instead of data-driven. Teams waste time arguing about whose numbers are "right" instead of optimising creative, audiences and offers. Over time, this leads to missed revenue, higher acquisition costs and a competitive disadvantage against organisations that truly understand their channel mix.

The good news: this problem is hard, but it is solvable. With modern cloud data stacks, GA4 exports and AI tools like Gemini, you can move beyond black-box platform reports and build attribution that reflects your unique business reality. At Reruption, we have hands-on experience stitching together fragmented data, validating AI models and turning them into usable tools for marketing teams. In the sections below, you'll find practical guidance on how to approach AI-driven attribution and how to use Gemini to bring clarity back into your marketing analytics.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption's perspective, using Gemini for marketing attribution is not about replacing your existing analytics stack, but about upgrading it. By connecting Gemini with GA4 exports, BigQuery and your first-party data, you can let the model do the heavy lifting: generating SQL, suggesting attribution and MMM model structures, reviewing Python code and checking data quality across sources. Our hands-on engineering work with AI products has shown that this human-in-the-loop setup is where AI-driven channel attribution creates real value without turning into another black box.

Start with a Clear Attribution Strategy, Not with the Model

Before you open Gemini or write a single line of SQL, align internally on what decisions your attribution model should support. Are you trying to rebalance spend between Google and Meta, defend upper-funnel investment, or understand the role of affiliates? Different questions call for different modelling approaches, lookback windows and granularity. Marketing leadership, performance marketers and data teams need a shared definition of "success" and acceptable uncertainty.

Use Gemini to help document and stress-test this strategy. You can describe your business model, sales cycle and channels, then ask Gemini to propose appropriate multi-touch attribution and marketing mix modelling (MMM) approaches, including trade-offs. This turns vague goals into a concrete blueprint that both marketers and analysts can work from.

Design a Human-in-the-Loop Workflow, Not Full Automation

Trying to fully automate attribution decisions with AI on day one is risky. Instead, design a workflow where Gemini supports analysts and marketers: generating queries, reviewing code, suggesting model variations and highlighting anomalies. Final judgement on model choice and budget shifts should remain with humans who understand the market context, seasonality and campaign goals.

This human-in-the-loop approach also builds trust. When teams see Gemini's reasoning, intermediate outputs and code suggestions, they are more likely to adopt insights. Reruption’s experience building AI tools shows that embedding explainability and review steps prevents the "black box" feeling that often kills advanced analytics projects.

Invest in Data Foundations Before Scaling AI Attribution

Gemini is only as good as the data it can see. If your GA4 implementation is inconsistent, UTM tagging is unreliable, or CRM data does not line up with online sessions, even the most sophisticated model will mislead you. Treat data quality and identity resolution as a strategic prerequisite, not a nice-to-have add-on.

Strategically, this means marketing leaders must prioritise a clean channel taxonomy, tracking standards and stable data pipelines from GA4 into BigQuery. Gemini can assist by generating data quality checks and reconciliation queries, but the organisation must commit to enforcing those standards across teams and agencies.

Balance Short-Term Attribution with Long-Term MMM

AI-driven attribution often focuses on user-level paths, but relying only on path-based models keeps you locked into what is trackable. With increasing privacy restrictions, you need to complement this with marketing mix modelling (MMM) that works on aggregated spend and outcome data. Strategically, think in terms of a dual system: path-based models to optimise in-channel tactics, MMM to calibrate your big budget moves.

Gemini is particularly useful here as a strategic assistant. It can propose MMM model specifications, comment on variable selection, and help your data team implement and iterate models on Vertex AI. This strategic combination gives you resilience against tracking loss and platform biases.

Prepare the Team for a Culture Shift in Decision-Making

Implementing AI-driven channel attribution is as much an organisational change project as it is a technical one. Performance marketers, brand teams, finance and leadership need to be ready to challenge long-held beliefs (for example, that retargeting is always the hero) and accept that uncertainty bands and confidence intervals are part of modern marketing analytics.

Use Gemini not only as a modelling tool but also as an explainer. Have it generate plain-language summaries, scenario comparisons and Q&A explanations that non-technical stakeholders can understand. This lowers resistance and helps embed data-informed, AI-augmented decision making into your marketing governance structures.

Used in the right way, Gemini turns unclear channel attribution into a manageable, testable problem instead of a constant source of conflict. By combining your GA4 and BigQuery data with Gemini’s ability to generate and review models, you can build attribution and MMM setups that marketing and finance both trust. Reruption brings the engineering depth and Co-Preneur mindset to help you move from concept to a working AI-driven attribution workflow inside your own stack; if you want to explore what this could look like for your team, we’re happy to validate it with a focused PoC and then scale what works.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Banking to Energy: Learn how companies successfully use Gemini.

Lunar

Banking

Lunar, a leading Danish neobank, faced surging customer service demand outside business hours, with many users preferring voice interactions over apps due to accessibility issues. Long wait times frustrated customers, especially elderly or less tech-savvy ones struggling with digital interfaces, leading to inefficiencies and higher operational costs. This was compounded by the expectation of 24/7 availability in a competitive fintech landscape. Traditional call centers couldn't scale without ballooning expenses, and the preference for voice was evident but underserved, resulting in lost satisfaction and potential churn.

Solution

Lunar deployed Europe's first GenAI-native voice assistant powered by GPT-4, enabling natural, telephony-based conversations for handling inquiries anytime without queues. The agent processes complex banking queries like balance checks, transfers, and support in Danish and English. Integrated with advanced speech-to-text and text-to-speech, it mimics human agents, escalating only edge cases to humans. This conversational AI approach overcame scalability limits, leveraging OpenAI's tech for accuracy in regulated fintech.

Results

  • ~75% of all customer calls expected to be handled autonomously
  • 24/7 availability eliminating wait times for voice queries
  • Positive early feedback from app-challenged users
  • First European bank with GenAI-native voice tech
  • Significant operational cost reductions projected
Read case study →

Associated Press (AP)

News Media

In the mid-2010s, the Associated Press (AP) faced significant constraints in its business newsroom due to limited manual resources. With only a handful of journalists dedicated to earnings coverage, AP could produce only around 300 earnings reports per quarter, primarily focusing on major S&P 500 companies. This manual process was labor-intensive: reporters had to extract data from financial filings, analyze key metrics like revenue, profits, and growth rates, and craft concise narratives under tight deadlines. As the number of publicly traded companies grew, AP struggled to cover smaller firms, leaving vast amounts of market-relevant information unreported. This limitation not only narrowed AP's market coverage but also tied up journalists on rote tasks, preventing them from pursuing investigative stories or deeper analysis. The pressure of quarterly earnings seasons amplified these issues, with deadlines coinciding across thousands of companies, making scalable reporting impossible without innovation.

Solution

To address this, AP partnered with Automated Insights in 2014, implementing their Wordsmith NLG platform. Wordsmith uses templated algorithms to transform structured financial data—such as earnings per share, revenue figures, and year-over-year changes—into readable, journalistic prose. Reporters input verified data from sources like Zacks Investment Research, and the AI generates draft stories in seconds, which humans then lightly edit for accuracy and style. The solution involved creating custom NLG templates tailored to AP's style, ensuring stories sounded human-written while adhering to journalistic standards. This hybrid approach—AI for volume, humans for oversight—overcame quality concerns. By 2015, AP announced it would automate the majority of U.S. corporate earnings stories, scaling coverage dramatically without proportional staff increases.

Results

  • 14x increase in quarterly earnings stories: 300 to 4,200
  • Coverage expanded to 4,000+ U.S. public companies per quarter
  • Equivalent to freeing time of 20 full-time reporters
  • Stories published in seconds vs. hours manually
  • Zero reported errors in automated stories post-implementation
  • Sustained use expanded to sports, weather, and lottery reports
Read case study →

Klarna

Fintech

Klarna, a leading fintech BNPL provider, faced enormous pressure from millions of customer service inquiries across multiple languages for its 150 million users worldwide. Queries spanned complex fintech issues like refunds, returns, order tracking, and payments, requiring high accuracy, regulatory compliance, and 24/7 availability. Traditional human agents couldn't scale efficiently, leading to long wait times averaging 11 minutes per resolution and rising costs. Additionally, providing personalized shopping advice at scale was challenging, as customers expected conversational, context-aware guidance across retail partners. Multilingual support was critical across the US, Europe and other markets, but hiring multilingual agents was costly and slow. This bottleneck hindered growth and customer satisfaction in a competitive BNPL sector.

Solution

Klarna partnered with OpenAI to deploy a generative AI chatbot powered by GPT-4, customized as a multilingual customer service assistant. The bot handles refunds, returns, order issues, and acts as a conversational shopping advisor, integrated seamlessly into Klarna's app and website. Key innovations included fine-tuning on Klarna's data, retrieval-augmented generation (RAG) for real-time policy access, and safeguards for fintech compliance. It supports dozens of languages, escalating complex cases to humans while learning from interactions. This AI-native approach enabled rapid scaling without proportional headcount growth.

Results

  • 2/3 of all customer service chats handled by AI
  • 2.3 million conversations in first month alone
  • Resolution time: 11 minutes → 2 minutes (82% reduction)
  • CSAT: 4.4/5 (AI) vs. 4.2/5 (humans)
  • $40 million annual cost savings
  • Equivalent to 700 full-time human agents
  • 80%+ queries resolved without human intervention
Read case study →

Morgan Stanley

Banking

Financial advisors at Morgan Stanley struggled with rapid access to the firm's extensive proprietary research database, comprising over 350,000 documents spanning decades of institutional knowledge. Manual searches through this vast repository were time-intensive, often taking 30 minutes or more per query, hindering advisors' ability to deliver timely, personalized advice during client interactions. This bottleneck limited scalability in wealth management, where high-net-worth clients demand immediate, data-driven insights amid volatile markets. Additionally, the sheer volume of unstructured data—40 million words of research reports—made it challenging to synthesize relevant information quickly, risking suboptimal recommendations and reduced client satisfaction. Advisors needed a solution to democratize access to this 'goldmine' of intelligence without extensive training or technical expertise.

Solution

Morgan Stanley partnered with OpenAI to develop AI @ Morgan Stanley Debrief, a GPT-4-powered generative AI chatbot tailored for wealth management advisors. The tool uses retrieval-augmented generation (RAG) to securely query the firm's proprietary research database, providing instant, context-aware responses grounded in verified sources. Implemented as a conversational assistant, Debrief allows advisors to ask natural-language questions like 'What are the risks of investing in AI stocks?' and receive synthesized answers with citations, eliminating manual digging. Rigorous AI evaluations and human oversight ensure accuracy, with custom fine-tuning to align with Morgan Stanley's institutional knowledge. This approach overcame data silos and enabled seamless integration into advisors' workflows.

Results

  • 98% adoption rate among wealth management advisors
  • Access for nearly 50% of Morgan Stanley's total employees
  • Queries answered in seconds vs. 30+ minutes manually
  • Over 350,000 proprietary research documents indexed
  • For comparison: roughly 60% employee access reported at peers such as JPMorgan
  • Significant productivity gains reported by CAO
Read case study →

NYU Langone Health

Healthcare

NYU Langone Health, a leading academic medical center, faced significant hurdles in leveraging the vast amounts of unstructured clinical notes generated daily across its network. Traditional clinical predictive models relied heavily on structured data like lab results and vitals, but these required complex ETL processes that were time-consuming and limited in scope. Unstructured notes, rich with nuanced physician insights, were underutilized due to challenges in natural language processing, hindering accurate predictions of critical outcomes such as in-hospital mortality, length of stay (LOS), readmissions, and operational events like insurance denials. Clinicians needed real-time, scalable tools to identify at-risk patients early, but existing models struggled with the volume and variability of EHR data—over 4 million notes spanning a decade. This gap led to reactive care, increased costs, and suboptimal patient outcomes, prompting the need for an innovative approach to transform raw text into actionable foresight.

Solution

To address these challenges, NYU Langone's Division of Applied AI Technologies at the Center for Healthcare Innovation and Delivery Science developed NYUTron, a proprietary large language model (LLM) specifically trained on internal clinical notes. Unlike off-the-shelf models, NYUTron was fine-tuned on unstructured EHR text from millions of encounters, enabling it to serve as an all-purpose prediction engine for diverse tasks. The solution involved pre-training a 13-billion-parameter LLM on over 10 years of de-identified notes (approximately 4.8 million inpatient notes), followed by task-specific fine-tuning. This allowed seamless integration into clinical workflows, automating risk flagging directly from physician documentation without manual data structuring. Collaborative efforts, including AI 'Prompt-a-Thons,' accelerated adoption by engaging clinicians in model refinement.

Results

  • AUROC: 0.961 for 48-hour mortality prediction (vs. 0.938 benchmark)
  • 92% accuracy in identifying high-risk patients from notes
  • LOS prediction AUROC: 0.891 (5.6% improvement over prior models)
  • Readmission prediction: AUROC 0.812, outperforming clinicians in some tasks
  • Operational predictions (e.g., insurance denial): AUROC up to 0.85
  • 24 clinical tasks with superior performance across mortality, LOS, and comorbidities
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect GA4, BigQuery and First-Party Data into a Single View

The first tactical step is to centralise all relevant touchpoint and conversion data. Enable GA4 BigQuery export (if not already active) and ensure that all key events (leads, sign-ups, purchases) are flowing into BigQuery with consistent parameters. In parallel, load your CRM or transaction data into BigQuery, including offline conversions and revenue indicators.

Use Gemini to draft and refine the SQL needed to join these tables. For example, you can ask Gemini to generate a user-level or session-level dataset that combines GA4 events with CRM conversions via hashed user IDs or other identifiers.

Prompt example for Gemini:
You are a senior analytics engineer.
We have GA4 export tables in BigQuery (dataset ga4_export) and a CRM
conversions table (dataset crm).
Write SQL to create a user-level table that:
- Links GA4 users to CRM conversions via user_pseudo_id & a hashed email
- Aggregates all channels (source/medium/campaign) seen in the 60 days
  before the first conversion
- Outputs: user_id, conversion_date, revenue, list of channels in order
Return optimised Standard SQL for BigQuery.

This consolidated view becomes the foundation for all subsequent AI-driven attribution or MMM work.
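
Once Gemini has drafted the join, you still need to run and schedule it in your own project. Below is a minimal sketch assuming the google-cloud-bigquery client library; the project, dataset and column names (your-gcp-project, marts.user_paths, a user_pseudo_id column on the CRM table) are placeholders you would replace with your own, and the embedded SQL is a deliberately simplified illustration of the kind of query Gemini generates, not a drop-in statement.

Python sketch (illustrative):
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # assumes default GCP credentials

# Simplified join: traffic_source in the GA4 export is user-scoped (first touch); a real
# model would derive session-level source/medium from event_params or
# collected_traffic_source, as requested in the prompt above.
SQL = """
CREATE OR REPLACE TABLE `your-gcp-project.marts.user_paths` AS
SELECT
  c.user_id,
  c.conversion_date,
  c.revenue,
  ARRAY_AGG(
    CONCAT(e.traffic_source.source, ' / ', e.traffic_source.medium)
    ORDER BY e.event_timestamp
  ) AS channel_path
FROM `your-gcp-project.crm.conversions` AS c
JOIN `your-gcp-project.ga4_export.events_*` AS e
  ON e.user_pseudo_id = c.user_pseudo_id
WHERE DATE(TIMESTAMP_MICROS(e.event_timestamp))
      BETWEEN DATE_SUB(c.conversion_date, INTERVAL 60 DAY) AND c.conversion_date
GROUP BY c.user_id, c.conversion_date, c.revenue
"""

job = client.query(SQL)
job.result()  # wait for the table to be (re)built
print("marts.user_paths refreshed")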

Use Gemini to Prototype and Compare Attribution Models Quickly

Instead of hard-coding one attribution model and hoping it fits, use Gemini to quickly prototype multiple approaches: time-decay, data-driven (Markov chain or Shapley-like), position-based, and blended models. Describe your constraints (data volume, lookback window, channels) and have Gemini generate Python code or SQL transformations to implement each candidate model.

Prompt example for Gemini:
We have a BigQuery table user_paths with columns:
user_id, conversion_flag, conversion_value, touchpoint_order,
channel, days_before_conversion.
Suggest 3 different multi-touch attribution methods suitable for
this data, and write Python (using pandas) to calculate channel-level
attributed revenue for each method.
Explain the pros/cons of each in comments.

Run these variants on your historical data and compare stability, interpretability and alignment with business intuition. Gemini can also help you generate evaluation metrics and visualisations, then refine the winning model for production.
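
To make the comparison concrete, here is a minimal pandas sketch of three simple rules (last-click, linear, time-decay) applied to the same path data. The column names mirror the user_paths table described in the prompt; the toy values and the 7-day half-life are assumptions for illustration only.

Python sketch (illustrative):
import numpy as np
import pandas as pd

# One row per touchpoint; conversion_value is repeated across a user's touchpoints.
df = pd.DataFrame({
    "user_id":                [1, 1, 1, 2, 2],
    "channel":                ["search", "social", "email", "display", "search"],
    "touchpoint_order":       [1, 2, 3, 1, 2],
    "days_before_conversion": [20, 5, 1, 10, 2],
    "conversion_value":       [100, 100, 100, 50, 50],
})

def last_click(g):
    w = np.zeros(len(g))
    w[g["touchpoint_order"].to_numpy().argmax()] = 1.0  # all credit to the last touch
    return w

def linear(g):
    return np.full(len(g), 1.0 / len(g))                # equal credit to every touch

def time_decay(g, half_life=7.0):
    w = 0.5 ** (g["days_before_conversion"].to_numpy() / half_life)
    return w / w.sum()                                   # more credit the closer to conversion

rules = {"last_click": last_click, "linear": linear, "time_decay": time_decay}
results = {}
for name, rule in rules.items():
    credited = df.groupby("user_id", group_keys=False).apply(
        lambda g, r=rule: g.assign(credit=r(g) * g["conversion_value"].iloc[0])
    )
    results[name] = credited.groupby("channel")["credit"].sum()

print(pd.DataFrame(results).fillna(0))  # channel-level attributed revenue per rule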

Automate Data Quality and Reconciliation Checks with Gemini-Generated SQL

Attribution fails quietly when data is inconsistent. Use Gemini to systematically create data quality checks and reconciliation queries between ad platforms, GA4 and your first-party conversions. For example, you can validate that total daily conversions from your attribution dataset are within an acceptable range of CRM numbers, or flag sudden drops in tracked touchpoints for specific channels.

Prompt example for Gemini:
Generate BigQuery SQL checks to validate our attribution base table
attribution_base:
- Compare daily total conversions to crm.conversions (tolerance +/- 5%)
- Detect days where a channel's share of impressions or clicks changes
  by more than 40% vs. 7-day average
- Output a summary table with flags and severity levels.
Optimise for low cost and readability.

Schedule these checks as part of your pipeline and alert the analytics team when anomalies appear. This makes your Gemini-powered attribution more robust and trustworthy over time.
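
To move from one-off checks to a pipeline step, a small wrapper can run the reconciliation daily and fail loudly when numbers drift. The sketch below assumes the google-cloud-bigquery client, placeholder table and column names (marts.attribution_base, crm.conversions, a conversion_timestamp column) and a plain exception standing in for real alerting.

Python sketch (illustrative):
from google.cloud import bigquery

TOLERANCE = 0.05  # assumption: +/- 5% daily tolerance, mirroring the prompt above

CHECK_SQL = """
SELECT
  a.day,
  a.conversions AS attributed,
  c.conversions AS crm,
  SAFE_DIVIDE(ABS(a.conversions - c.conversions), c.conversions) AS rel_diff
FROM (SELECT DATE(conversion_timestamp) AS day, COUNT(*) AS conversions
      FROM `your-gcp-project.marts.attribution_base` GROUP BY day) AS a
JOIN (SELECT DATE(conversion_timestamp) AS day, COUNT(*) AS conversions
      FROM `your-gcp-project.crm.conversions` GROUP BY day) AS c
USING (day)
"""

client = bigquery.Client(project="your-gcp-project")
rows = client.query(CHECK_SQL).result()
bad_days = [r for r in rows if r.rel_diff is not None and r.rel_diff > TOLERANCE]
if bad_days:
    # in production, post to Slack or email instead of only failing the pipeline step
    raise RuntimeError(f"{len(bad_days)} day(s) deviate from CRM conversions by more than {TOLERANCE:.0%}")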

Leverage Gemini and Vertex AI to Build a Lightweight MMM

To complement user-level attribution, implement a lightweight marketing mix model using aggregated spend and conversion data. Start by exporting daily (or weekly) channel spend, impressions and conversions into BigQuery. Then use Gemini to propose an MMM specification (for example, Bayesian regression with adstock and saturation) and generate the code to run it on Vertex AI or in a managed notebook.

Prompt example for Gemini:
We want to build a simple MMM on Google Cloud.
We have a BigQuery table mmm_data with daily rows and columns:
- date, conversions, revenue
- spend_search, spend_social, spend_display, spend_email
- control variables: seasonality_index, promo_flag
Propose a Bayesian MMM specification with adstock & saturation and
write Python code (using PyMC) we can run on Vertex AI Workbench.
Comment on how to interpret channel ROI and diminishing returns curves.

Use the resulting channel ROI and diminishing returns curves to validate or adjust your attribution-based budgets. This dual setup (attribution + MMM) provides a more realistic view of true channel contribution, especially for upper-funnel and brand activity.
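
If your team wants to see what such a specification looks like before asking Gemini for a full implementation, here is a deliberately simplified Bayesian sketch using PyMC: the adstock decay is fixed rather than estimated, saturation is approximated with log1p instead of a Hill curve, and the file name and column names simply follow the mmm_data table from the prompt.

Python sketch (illustrative):
import numpy as np
import pandas as pd
import pymc as pm
import arviz as az

df = pd.read_csv("mmm_data.csv")  # assumption: daily export of the mmm_data table
channels = ["spend_search", "spend_social", "spend_display", "spend_email"]

def adstock(x, decay=0.5):
    # geometric carry-over: part of yesterday's effect spills into today
    out = np.zeros(len(x))
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

# fixed adstock + log saturation; a production model would estimate decay and use Hill curves
X = np.column_stack([np.log1p(adstock(df[c].to_numpy(dtype=float))) for c in channels])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise media variables for stable priors
controls = df[["seasonality_index", "promo_flag"]].to_numpy(dtype=float)
y = df["conversions"].to_numpy(dtype=float)

with pm.Model() as mmm:
    intercept = pm.Normal("intercept", mu=y.mean(), sigma=y.std())
    beta = pm.HalfNormal("beta", sigma=y.std(), shape=X.shape[1])   # non-negative media effects
    gamma = pm.Normal("gamma", mu=0.0, sigma=y.std(), shape=controls.shape[1])
    sigma = pm.HalfNormal("sigma", sigma=y.std())
    mu = intercept + pm.math.dot(X, beta) + pm.math.dot(controls, gamma)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# channel coefficients on standardised, saturated, adstocked spend
print(az.summary(idata, var_names=["beta", "gamma"]))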

Generate Plain-Language Insights and Budget Recommendations

Numbers alone rarely change decisions. Once your models are producing attributed revenue and ROI per channel, use Gemini to transform raw outputs into clear, actionable summaries for marketers and leadership. Provide Gemini with aggregated results tables and ask it to generate concise narratives, chart suggestions and budget shift scenarios.

Prompt example for Gemini:
You are a marketing analytics advisor.
Here is a table with channel-level results from our attribution model
and MMM (pasted below).
1) Summarise the main insights in max 10 bullet points for a CMO.
2) Highlight 3-5 specific budget reallocation actions with rationale.
3) Call out any caveats or data limitations we should mention.
Use clear, non-technical language.

Embed these narratives into dashboards or regular performance reviews so that AI-driven attribution informs real budget decisions instead of staying in experimental reports.
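
This hand-off can also be scripted so the summary is regenerated whenever the model refreshes. The sketch below uses the google-generativeai Python package; the model name, the results.csv file and the environment variable holding the API key are placeholders for your own setup, and the prompt simply reuses the one above.

Python sketch (illustrative):
import os
import pandas as pd
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # pick whichever Gemini model you have access to

results = pd.read_csv("results.csv")  # channel-level output of the attribution model and MMM

prompt = f"""You are a marketing analytics advisor.
Here is a table with channel-level results from our attribution model and MMM:

{results.to_string(index=False)}

1) Summarise the main insights in max 10 bullet points for a CMO.
2) Highlight 3-5 specific budget reallocation actions with rationale.
3) Call out any caveats or data limitations we should mention.
Use clear, non-technical language."""

response = model.generate_content(prompt)
print(response.text)  # drop into the dashboard or the monthly performance review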

Institutionalise Versioning and Governance for Attribution Models

As you iterate with Gemini, you will create many variations of models and configurations. Without governance, teams lose track of what is live, what changed and why. Implement a simple but strict versioning approach: store model code in Git, tag each production deployment, and document the assumptions, input data and validation results. Gemini can help you generate and maintain this documentation.

Prompt example for Gemini:
We just updated our attribution model.
Here is a short description of the changes and the validation results.
Draft a change log entry and a 1-page internal documentation including:
- Model version and date
- Key assumptions and parameters
- Data sources and lookback window
- Validation metrics vs. previous version
- Guidance on how to interpret differences in channel ROI.

Over time, this governance layer turns your Gemini-based attribution setup into a reliable internal asset rather than a fragile experiment.
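
A lightweight way to enforce this is to write a small model card next to every tagged release. The sketch below shows one possible JSON layout; all field values (version tag, assumptions, metric placeholders) are illustrative, not recommended numbers.

Python sketch (illustrative):
import json
import subprocess
from datetime import date

model_card = {
    "model_version": "attribution-v1.3.0",  # placeholder version tag
    "released": date.today().isoformat(),
    "git_commit": subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip(),
    "approach": "time-decay multi-touch + Bayesian MMM calibration",
    "lookback_window_days": 60,
    "data_sources": ["ga4_export.events_*", "crm.conversions", "mmm_data"],
    "key_assumptions": ["7-day half-life for time decay", "fixed adstock decay of 0.5"],
    "validation": {"holdout_mape": "<fill in>", "vs_previous_version": "<fill in>"},
    "interpretation_notes": "Document how large a channel ROI shift must be before it is acted on.",
}

with open(f"model_cards/{model_card['model_version']}.json", "w") as f:
    json.dump(model_card, f, indent=2)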

When implemented this way, organisations typically see clearer channel ROI within one or two optimisation cycles, more confident budget reallocations, and a meaningful reduction in time spent arguing about attribution definitions. It is realistic to target a 10–20% improvement in marketing efficiency over several quarters as AI-driven attribution and MMM insights are steadily integrated into planning and optimisation routines.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

GA4 provides useful attribution views, but it is limited to predefined models and what is trackable inside Google's ecosystem. Gemini improves channel attribution by helping you build custom models on top of your GA4 BigQuery export and first-party data. It generates SQL and Python to combine user paths, CRM conversions and platform spend, then prototypes multi-touch attribution and MMM tailored to your business.

This means you can reconcile conflicting platform-reported conversions with your own data, test different modelling assumptions, and arrive at a channel performance view that reflects your actual customer journeys rather than generic defaults.

You will get the most value from Gemini when you combine it with a basic modern data stack and some data expertise. Practically, you need: access to GA4 BigQuery exports, a place to store CRM/transaction data (often also in BigQuery), and someone with enough analytics or engineering experience to review and run the SQL/Python that Gemini generates.

The advantage is that Gemini significantly reduces the manual coding burden. A small team of a marketing analyst and a data engineer can achieve what previously required a dedicated data science team. Reruption often supports clients by providing the missing engineering depth and setting up reusable templates so internal teams can operate the solution day to day.

Timelines depend on data readiness, but for most organisations with GA4 and basic CRM data in place, you can get to a first working attribution prototype in a few weeks. In our AI PoC format, we typically aim to connect data, define evaluation metrics, and ship a tested prototype model within the scope of a single engagement.

Meaningful business impact on budget decisions usually appears after one or two optimisation cycles, once the team has validated the model against their intuition and seen that recommendations hold up in the real world. From there, Gemini helps you iterate and refine models quickly as campaigns, channels and market conditions change.

The direct technology costs of using Gemini with BigQuery and GA4 are typically modest compared to media budgets: you pay for BigQuery storage/queries, Vertex AI or compute for model runs, and Gemini usage. The larger investment is in setting up the data pipelines, models and governance correctly.

ROI should be evaluated against marketing efficiency: are you reallocating budget from over-credited channels to under-valued ones and seeing lower blended CAC or higher incremental revenue? Even a 5–10% improvement in budget allocation on a multi-million marketing spend far outweighs the cost of building and running the AI-driven attribution setup.

Reruption accelerates this journey by combining strategic clarity with deep engineering execution. Through our AI PoC offering (9.900€), we can quickly test whether a Gemini-based attribution or MMM setup works with your actual GA4 and first-party data: we define the use case, build a functioning prototype, measure performance and outline a production plan.

Beyond the PoC, our Co-Preneur approach means we embed with your team, operate in your P&L and help build the AI-first marketing analytics capabilities directly inside your organisation. We handle the Gemini prompts, BigQuery/Vertex AI pipelines, and governance structures while enabling your marketing and data teams to own and extend the solution over time.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media