The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller blocks, broad category suggestions or manually defined cross-sell rules. On the surface, these modules look like personalization, but in reality every shopper sees nearly the same products regardless of their tastes, intent or current context. The result is a generic experience that fails to reflect what customers actually want in the moment.

This approach worked when data was sparse and channels were simple, but it breaks down in modern e-commerce and digital marketing. Users move fluidly between website, app, email, search and social. Their behavior leaves rich signals about preferences, price sensitivity and intent – yet traditional recommendation engines and rule-based setups rarely use more than a handful of attributes. Updating rules is slow, manual and quickly becomes unmanageable for hundreds of categories and thousands of SKUs.

The business impact is significant. Irrelevant product recommendations train customers to ignore your on-site and in-channel suggestions, depressing click-through and conversion rates. Average order value stays flat because true cross-sell and upsell opportunities are missed. Marketing teams pour budget into acquisition, only to lose potential revenue on the last mile of the journey. Meanwhile, competitors investing in smarter personalization quietly gain higher revenue per visitor and stronger customer loyalty.

The good news: this is a solvable problem. With modern generative AI like Gemini, marketers can finally connect behavioral data, product catalogs and campaign content into one continuous personalization loop. At Reruption, we’ve seen how AI-first thinking can replace fragile rules with adaptive, data-driven recommendations that ship in weeks, not years. In the sections below, you’ll find a practical roadmap to move from untargeted product blocks to intelligent, Gemini-powered personalization.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s perspective, using Gemini for product recommendation personalization is not about adding another widget to your site – it is about rethinking how your marketing stack decides what to show, to whom, and when. Drawing on our hands-on experience building AI products and internal tools inside complex organisations, we see Gemini as the reasoning layer that can sit between your analytics, product feed and campaign systems, orchestrating next-best-product decisions and the content that wraps around them.

Frame Recommendations as a Business System, Not a Widget

Many teams treat product recommendations as a front-end feature: a carousel on the homepage, a block in the basket, a placeholder in an email. To use Gemini for personalized recommendations effectively, you need to frame it as a core business system that touches merchandising, CRM, performance marketing and product management. That means aligning on shared objectives like incremental revenue per session, margin-aware upsell, or reduction in abandonment — not just “widget CTR”.

At a strategic level, clarify the decision logic you want Gemini to support: Should it prioritize margin or conversion probability? How should it trade off recency vs. diversity of recommendations? Which channels need to be consistent, and where is experimentation acceptable? This shared view turns Gemini into a controlled driver of commercial outcomes rather than an opaque black box owned by one team.

Design a Data Strategy Before You Design Prompts

Gemini is powerful at reasoning over complex data — but only if you feed it the right signals. Before thinking about prompt templates or campaign copy, marketing leaders should steer a clear data strategy for AI-driven recommendations. Which behavioral signals matter most for your business (e.g. high-intent views, search queries, wishlist activity, content consumption)? How will that data reliably reach Gemini via APIs or batch processes?

This is where close collaboration between marketing, data and engineering is essential. Define a minimal but robust event schema, decide what product attributes (price bands, margin buckets, compatibility tags, lifestyle themes) need to be exposed, and ensure consent and privacy considerations are addressed up front. With this foundation, you can ask Gemini better questions and trust the outputs.
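To make the "minimal but robust event schema" idea concrete, here is a small sketch in Python. All field names (`user_id`, `event_type`, `channel`, and so on) are illustrative assumptions, not a prescribed standard — adapt them to your analytics stack:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Illustrative behavioral event schema — field names are assumptions,
# not a fixed standard. The point is a small, explicit set of signals.
@dataclass
class BehavioralEvent:
    user_id: str
    event_type: str               # e.g. "product_view", "add_to_cart", "search"
    product_id: Optional[str]     # None for events without a product (e.g. search)
    search_query: Optional[str]
    channel: str                  # e.g. "web", "app", "email"
    timestamp: str                # ISO 8601, UTC

event = BehavioralEvent(
    user_id="u-123",
    event_type="product_view",
    product_id="sku-42",
    search_query=None,
    channel="web",
    timestamp="2024-06-01T10:15:00Z",
)
payload = asdict(event)  # dict, ready to serialize for an API or batch export
```

Keeping the schema this small forces the cross-functional discussion the section describes: every additional field has to earn its place.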

Start Narrow: One High-Value Journey, Not Full-Site Personalization

It is tempting to promise “AI personalization everywhere” and then stall under the complexity. A more effective strategy is to deploy Gemini in one clearly defined, high-impact journey first. For many brands, that might be cart and post-purchase cross-sell recommendations, or a key lifecycle email such as first-time buyer nurture. This creates a contained environment for experimentation, data-driven learning and organisational change.

By focusing on one journey, you can define clean success metrics (e.g. uplift in AOV, attach rate of accessories, or click-through on recommendation blocks), gather qualitative feedback from customers and internal stakeholders, and iterate on the Gemini workflow quickly. Once this path is working reliably, you can extend the same patterns to home, category, search and CRM campaigns with far less risk.

Prepare Your Team to Trust – But Verify – AI Decisions

Moving from rule-based logic to AI-generated product recommendations changes how marketers and merchandisers work. The goal is not blind trust in Gemini, but calibrated trust with strong observability. Strategically, this means defining guardrails: hard exclusions (e.g. out-of-stock, restricted products), brand and compliance rules, and constraints around discounts or sensitive categories.

It also means agreeing on processes for reviewing, approving and overriding AI behavior. For example, product and CRM leads might review recommendation patterns weekly with clear dashboards, and define human-in-the-loop workflows for strategic campaigns or seasonal catalog shifts. Treat Gemini as a smart colleague: powerful, but operating under shared standards and KPIs.

Mitigate Risk with Transparent Metrics and Controlled Experiments

Any shift from generic to AI-personalized recommendations should be managed as a portfolio of experiments, not a big-bang replacement. Strategically, set up an experimentation framework with holdout groups and A/B tests to quantify uplift from Gemini-powered recommendations versus your current baseline. Track not only conversion and revenue uplift, but also user experience metrics like bounce rate and time on site.

To mitigate risk, start with conservative traffic allocations and explicit rollback criteria. Make metrics transparent across marketing, product and leadership so everyone can see how Gemini-based personalization impacts the P&L. This transparency builds confidence internally and keeps the conversation grounded in measurable business impact instead of hype.
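A holdout comparison like the one described above boils down to a simple calculation. As a minimal sketch (the function name and the sample numbers are hypothetical):

```python
def relative_uplift(treatment_conversions: int, treatment_visitors: int,
                    holdout_conversions: int, holdout_visitors: int) -> float:
    """Relative conversion uplift of the AI-personalized variant vs. the holdout."""
    treatment_rate = treatment_conversions / treatment_visitors
    holdout_rate = holdout_conversions / holdout_visitors
    return (treatment_rate - holdout_rate) / holdout_rate

# Hypothetical example: 3.6% conversion with Gemini vs. 3.0% in the holdout
# gives a +20% relative uplift.
uplift = relative_uplift(360, 10_000, 300, 10_000)
```

In practice you would also run a significance test before acting on the number, but even this basic metric keeps the conversation grounded in measured impact rather than anecdote.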

Used deliberately, Gemini can turn your product recommendations from static noise into a dynamic system that responds to each customer in real time and in every campaign. The key is treating it as a business capability – with the right data, guardrails and experiment design – rather than a plug-and-play widget. At Reruption, we build exactly these kinds of AI-first systems inside organisations, from early proof-of-concept to production workflows. If you want to explore how Gemini could power next-best-product decisions in your stack, we’re ready to co-design and test a solution with you.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Payments to Banking: Learn how companies successfully use Gemini.

Visa

Payments

The payments industry faced a surge in online fraud, particularly enumeration attacks where threat actors use automated scripts and botnets to test stolen card details at scale. These attacks exploit vulnerabilities in card-not-present transactions, causing $1.1 billion in annual fraud losses globally and significant operational expenses for issuers. Visa needed real-time detection to combat this without generating high false positives that block legitimate customers, especially amid rising e-commerce volumes like Cyber Monday spikes. Traditional fraud systems struggled with the speed and sophistication of these attacks, amplified by AI-driven bots. Visa's challenge was to analyze vast transaction data in milliseconds, identifying anomalous patterns while maintaining seamless user experiences. This required advanced AI and machine learning to predict and score risks accurately.

Solution

Visa developed the Visa Account Attack Intelligence (VAAI) Score, a generative AI-powered tool that scores the likelihood of enumeration attacks in real-time for card-not-present transactions. By leveraging generative AI components alongside machine learning models, VAAI detects sophisticated patterns from botnets and scripts that evade legacy rules-based systems. Integrated into Visa's broader AI-driven fraud ecosystem, including Identity Behavior Analysis, the solution enhances risk scoring with behavioral insights. Rolled out first to U.S. issuers in 2024, it reduces both fraud and false declines, optimizing operations. This approach allows issuers to proactively mitigate threats at unprecedented scale.

Results

  • $40 billion in fraud prevented (Oct 2022-Sep 2023)
  • Nearly 2x increase YoY in fraud prevention
  • $1.1 billion annual global losses from enumeration attacks targeted
  • 85% more fraudulent transactions blocked on Cyber Monday 2024 YoY
  • Handled 200% spike in fraud attempts without service disruption
  • Enhanced risk scoring accuracy via ML and Identity Behavior Analysis
Read case study →

JPMorgan Chase

Banking

In the high-stakes world of asset management and wealth management at JPMorgan Chase, advisors faced significant time burdens from manual research, document summarization, and report drafting. Generating investment ideas, market insights, and personalized client reports often took hours or days, limiting time for client interactions and strategic advising. This inefficiency was exacerbated post-ChatGPT, as the bank recognized the need for secure, internal AI to handle vast proprietary data without risking compliance or security breaches. The Private Bank advisors specifically struggled with preparing for client meetings, sifting through research reports, and creating tailored recommendations amid regulatory scrutiny and data silos, hindering productivity and client responsiveness in a competitive landscape.

Solution

JPMorgan addressed these challenges by developing the LLM Suite, an internal suite of seven fine-tuned large language models (LLMs) powered by generative AI, integrated with secure data infrastructure. This platform enables advisors to draft reports, generate investment ideas, and summarize documents rapidly using proprietary data. A specialized tool, Connect Coach, was created for Private Bank advisors to assist in client preparation, idea generation, and research synthesis. The implementation emphasized governance, risk management, and employee training through AI competitions and 'learn-by-doing' approaches, ensuring safe scaling across the firm. LLM Suite rolled out progressively, starting with proofs-of-concept and expanding firm-wide.

Results

  • Users reached: 140,000 employees
  • Use cases developed: 450+ proofs-of-concept
  • Financial upside: Up to $2 billion in AI value
  • Deployment speed: From pilot to 60K users in months
  • Advisor tools: Connect Coach for Private Bank
  • Firm-wide PoCs: Rigorous ROI measurement across 450 initiatives
Read case study →

Amazon

Retail

In the vast e-commerce landscape, online shoppers face significant hurdles in product discovery and decision-making. With millions of products available, customers often struggle to find items matching their specific needs, compare options, or get quick answers to nuanced questions about features, compatibility, and usage. Traditional search bars and static listings fall short, leading to shopping cart abandonment rates as high as 70% industry-wide and prolonged decision times that frustrate users. Amazon, serving over 300 million active customers, encountered amplified challenges during peak events like Prime Day, where query volumes spiked dramatically. Shoppers demanded personalized, conversational assistance akin to in-store help, but scaling human support was impossible. Issues included handling complex, multi-turn queries, integrating real-time inventory and pricing data, and ensuring recommendations complied with safety and accuracy standards amid a $500B+ catalog.

Solution

Amazon developed Rufus, a generative AI-powered conversational shopping assistant embedded in the Amazon Shopping app and desktop. Rufus leverages a custom-built large language model (LLM) fine-tuned on Amazon's product catalog, customer reviews, and web data, enabling natural, multi-turn conversations to answer questions, compare products, and provide tailored recommendations. Powered by Amazon Bedrock for scalability and AWS Trainium/Inferentia chips for efficient inference, Rufus scales to millions of sessions without latency issues. It incorporates agentic capabilities for tasks like cart addition, price tracking, and deal hunting, overcoming prior limitations in personalization by accessing user history and preferences securely. Implementation involved iterative testing, starting with a beta in February 2024, expanding to all US users by September and then rolling out globally, while addressing hallucination risks through grounding techniques and human-in-the-loop safeguards.

Results

  • 60% higher purchase completion rate for Rufus users
  • $10B projected additional sales from Rufus
  • 250M+ customers used Rufus in 2025
  • Monthly active users up 140% YoY
  • Interactions surged 210% YoY
  • Black Friday sales sessions +100% with Rufus
  • 149% jump in Rufus users recently
Read case study →

Citibank Hong Kong

Wealth Management

Citibank Hong Kong faced growing demand for advanced personal finance management tools accessible via mobile devices. Customers sought predictive insights into budgeting, investing, and financial tracking, but traditional apps lacked personalization and real-time interactivity. In a competitive retail banking landscape, especially in wealth management, clients expected seamless, proactive advice amid volatile markets and rising digital expectations in Asia. Key challenges included integrating vast customer data for accurate forecasts, ensuring conversational interfaces felt natural, and overcoming data privacy hurdles in Hong Kong's regulated environment. Early mobile tools showed low engagement, with users abandoning apps due to generic recommendations, highlighting the need for AI-driven personalization to retain high-net-worth individuals.

Solution

Wealth 360 emerged as Citibank HK's AI-powered personal finance manager, embedded in the Citi Mobile app. It leverages predictive analytics to forecast spending patterns, investment returns, and portfolio risks, delivering personalized recommendations via a conversational interface like chatbots. Drawing from Citi's global AI expertise, it processes transaction data, market trends, and user behavior for tailored advice on budgeting and wealth growth. Implementation involved machine learning models for personalization and natural language processing (NLP) for intuitive chats, building on Citi's prior successes like Asia-Pacific chatbots and APIs. This solution addressed gaps by enabling proactive alerts and virtual consultations, enhancing customer experience without human intervention.

Results

  • 30% increase in mobile app engagement metrics
  • 25% improvement in wealth management service retention
  • 40% faster response times via conversational AI
  • 85% customer satisfaction score for personalized insights
  • 18M+ API calls processed in similar Citi initiatives
  • 50% reduction in manual advisory queries
Read case study →

Three UK

Telecommunications

Three UK, a leading mobile telecom operator in the UK, faced intense pressure from surging data traffic driven by 5G rollout, video streaming, online gaming, and remote work. With over 10 million customers, peak-hour congestion in urban areas led to dropped calls, buffering during streams, and high latency impacting gaming experiences. Traditional monitoring tools struggled with the volume of big data from network probes, making real-time optimization impossible and risking customer churn. Compounding this, legacy on-premises systems couldn't scale for 5G network slicing and dynamic resource allocation, resulting in inefficient spectrum use and OPEX spikes. Three UK needed a solution to predict and preempt network bottlenecks proactively, ensuring low-latency services for latency-sensitive apps while maintaining QoS across diverse traffic types.

Solution

Microsoft Azure Operator Insights emerged as the cloud-based AI platform tailored for telecoms, leveraging big data machine learning to ingest petabytes of network telemetry in real-time. It analyzes KPIs like throughput, packet loss, and handover success to detect anomalies and forecast congestion. Three UK integrated it with their core network for automated insights and recommendations. The solution employed ML models for root-cause analysis, traffic prediction, and optimization actions like beamforming adjustments and load balancing. Deployed on Azure's scalable cloud, it enabled seamless migration from legacy tools, reducing dependency on manual interventions and empowering engineers with actionable dashboards.

Results

  • 25% reduction in network congestion incidents
  • 20% improvement in average download speeds
  • 15% decrease in end-to-end latency
  • 30% faster anomaly detection
  • 10% OPEX savings on network ops
  • Improved NPS by 12 points
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect Gemini to a Clean Product Feed and Behavioral Events

Before you can ask Gemini to suggest the “next best product”, it needs structured access to your catalog and relevant user signals. Work with your data/engineering teams to expose a normalized product feed (via API or scheduled exports) including IDs, categories, attributes, price, margin buckets, availability and descriptive text. In parallel, stream or batch key behavioral events: product views, add-to-cart, purchases, search queries, content views and email interactions.

Use an intermediary service or lightweight backend that can assemble a user snapshot on request: last N interactions, current session context, and eligible products. Gemini should receive a concise but information-rich payload, not raw logs. This approach keeps latency low and makes prompts predictable, which is critical when you build recommendation workflows that must respond in real time across channels.
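The "user snapshot" step can be sketched as a small assembly function. Everything here is an illustrative assumption — field names, the payload shape and the sample data are not a prescribed format, only one way to keep the Gemini payload concise and predictable:

```python
import json

def build_user_snapshot(events, candidate_products, channel, max_events=10):
    """Assemble a compact context payload for the recommendation prompt.

    `events` is a list of event dicts sorted oldest-first; only the last
    `max_events` interactions are kept so the prompt stays small.
    `candidate_products` should already be pre-filtered to eligible SKUs.
    """
    recent = events[-max_events:]
    return {
        "user_context": {
            "recent_events": [
                {"type": e["event_type"], "product_id": e.get("product_id")}
                for e in recent
            ],
        },
        "candidate_products": candidate_products,
        "channel": channel,
    }

snapshot = build_user_snapshot(
    events=[{"event_type": "product_view", "product_id": "sku-1"},
            {"event_type": "add_to_cart", "product_id": "sku-2"}],
    candidate_products=[{"id": "sku-3", "name": "Phone case", "price": 19.0}],
    channel="cart_page",
)
prompt_payload = json.dumps(snapshot)  # interpolated into the prompt template
```

The intermediary service owns this function, so the prompt template never has to deal with raw event logs.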

Design a Reusable Next-Best-Product Prompt Template

Once the data is flowing, define a standard prompt template you can reuse across website, app and CRM. The aim is to make Gemini reason over the user context and the product pool, then return ranked recommendations with explanations that can later inform merchandising and experimentation.

System role:
You are a marketing AI that generates personalized product recommendations.
Optimize for:
- Highest probability of purchase in this session
- Respecting business rules (stock, exclusions, price range)
- Diversity across categories, but relevance first

Inputs:
- User profile and recent behavior:
{{user_context_json}}
- Candidate products (JSON array with id, name, price, margin_band, category, tags):
{{product_candidates_json}}
- Channel: {{channel}} (e.g. web_home, cart_page, email_postpurchase)

Task:
1. Select the top 4 products for this user in this channel.
2. Return JSON:
{
  "recommendations": [
    {"product_id": "...", "reason": "short rationale", "position": 1},
    ...
  ]
}
3. Do not invent product IDs that are not in the candidate list.

This pattern ensures your front end can directly consume Gemini’s output, while the reasoning (“reason” field) becomes a powerful signal for later analysis and creative optimization.
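Before the front end consumes that output, it is worth validating it in code. This sketch (function and field names are illustrative, matching the JSON shape in the template above) drops any product ID the model invented and falls back to an empty list on malformed responses:

```python
import json

def parse_recommendations(raw_response: str, candidate_ids: set, top_k: int = 4):
    """Validate the model's JSON output against the candidate set.

    Returns at most `top_k` recommendations sorted by position;
    returns [] on any parsing problem so the caller can fall back
    to a safe default (e.g. bestsellers).
    """
    try:
        data = json.loads(raw_response)
        recs = data["recommendations"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return []
    valid = [r for r in recs if r.get("product_id") in candidate_ids]
    valid.sort(key=lambda r: r.get("position", 99))
    return valid[:top_k]

# Hypothetical response containing one valid and one invented product ID.
raw = json.dumps({"recommendations": [
    {"product_id": "sku-1", "reason": "matches recent views", "position": 1},
    {"product_id": "sku-999", "reason": "not in candidate list", "position": 2},
]})
recs = parse_recommendations(raw, candidate_ids={"sku-1", "sku-2"})
```

Instruction 3 of the template ("do not invent product IDs") reduces hallucinated IDs, but enforcing it in code is what actually guarantees it.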

Generate Channel-Specific Creative Around Recommended Products

Gemini’s strength is not only picking products, but also generating personalized campaign content for each channel. After your next-best-product call, trigger a second prompt that asks Gemini to create headlines, snippets and CTAs that reference the selected items and the user’s context. This can power on-site copy, dynamic email content or ad creatives.

System role:
You are a performance marketing copywriter.
Goal: Create concise, personalized copy for product recommendations.

Inputs:
- User context summary: {{user_context_summary}}
- Selected products with names, key benefits and prices: {{selected_products_json}}
- Channel: {{channel}}

Task:
1. For each product, create a short headline (<40 chars) and body (<80 chars).
2. Tone: helpful, clear, no hard sell.
3. Return JSON with fields: product_id, headline, body, cta.

Connect this to your CMS, ESP or ad platform so that recommendation logic and creative personalization stay in sync. Over time, you can A/B test different prompt variants and tones to optimize engagement.

Implement Guardrails and Business Rules in a Pre-Filter Layer

To avoid surprises, build business logic outside of Gemini as a pre-filter and post-filter. Before calling Gemini, filter out out-of-stock items, restricted categories, low-margin products you never want to push, or SKUs conflicting with user attributes (e.g. already purchased, incompatible accessories). This ensures AI-driven recommendations always respect baseline commercial and legal constraints.

After Gemini returns its ranked list, validate the output: check IDs against the candidate set, ensure price ranges and categories meet your rules, and fall back to a safe default if the response is invalid. This layered approach keeps your recommendation system robust, particularly in early stages when you are still tuning prompts and data quality.
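The pre-filter side of this layered approach can be as simple as a pure function applied before any model call. A minimal sketch, assuming illustrative product and user fields:

```python
def prefilter_candidates(products, user, blocked_categories=frozenset({"restricted"})):
    """Apply hard business rules before the Gemini call.

    Drops out-of-stock items, blocked categories and SKUs the user
    has already purchased — the model never sees ineligible products.
    """
    purchased = set(user.get("purchased_ids", []))
    return [
        p for p in products
        if p["in_stock"]
        and p["category"] not in blocked_categories
        and p["id"] not in purchased
    ]

catalog = [
    {"id": "sku-1", "in_stock": True,  "category": "audio"},
    {"id": "sku-2", "in_stock": False, "category": "audio"},       # out of stock
    {"id": "sku-3", "in_stock": True,  "category": "restricted"},  # blocked
    {"id": "sku-4", "in_stock": True,  "category": "audio"},       # already bought
]
eligible = prefilter_candidates(catalog, user={"purchased_ids": ["sku-4"]})
```

Because the model only ever receives `eligible`, the post-filter validation step becomes a second line of defense rather than the only one.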

Integrate with Email and CRM Journeys for Lifecycle Personalization

Do not limit Gemini-powered recommendations to on-site blocks. Integrate the same next-best-product API into your email and CRM journeys so each triggered or batch campaign can personalize based on live context. For example, a post-purchase email can ask Gemini: “Given this order and browsing history, what are the top three relevant accessories within 30 days?” and then fetch copy for the chosen products.

On the ESP side, configure dynamic content blocks that call your recommendation service (which orchestrates the Gemini call) at send time or in pre-send batch jobs. Store product IDs and copy variants as personalization fields. Start with high-impact flows like welcome series, abandoned cart and replenishment, then extend to loyalty and win-back campaigns.

Track KPIs and Create Feedback Loops into Gemini Workflows

To improve over time, you need measurement tightly coupled to your Gemini workflows. Track KPIs at the block and session level: recommendation CTR, conversion rate after recommendation click, incremental revenue per session, and AOV uplift. Instrument separate tracking for AI-powered modules vs. legacy ones so you can directly compare performance.

Feed aggregate insights back into your system. For example, you might periodically summarise successful vs. unsuccessful recommendation patterns and use Gemini itself to analyze them: “Given these high-performing scenarios and these low-performing ones, what changes to candidate selection or ranking logic should we test?” This closes the loop and helps you iteratively refine prompts, candidate filtering and channel strategies.
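The block-level instrumentation described above reduces to a small aggregation job. This sketch (module names and the event shape are hypothetical) computes CTR per recommendation module so AI-powered blocks can be compared against legacy ones:

```python
from collections import defaultdict

def module_ctr(events):
    """Aggregate impression/click events into a CTR per recommendation module."""
    stats = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for e in events:
        stats[e["module"]][e["kind"] + "s"] += 1  # "impression" -> "impressions"
    return {
        module: (s["clicks"] / s["impressions"]) if s["impressions"] else 0.0
        for module, s in stats.items()
    }

events = [
    {"module": "gemini_cart", "kind": "impression"},
    {"module": "gemini_cart", "kind": "impression"},
    {"module": "gemini_cart", "kind": "click"},
    {"module": "legacy_home", "kind": "impression"},
]
ctr = module_ctr(events)  # e.g. {"gemini_cart": 0.5, "legacy_home": 0.0}
```

Summaries of these aggregates (not raw events) are what you would feed back into Gemini for the pattern-analysis prompt described above.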

With these tactics in place, marketing teams typically see realistic gains such as 5–15% uplift in recommendation CTR, 5–10% higher average order value on affected journeys, and increased relevance scores in customer feedback. The exact numbers depend on your baseline, but a structured Gemini implementation for recommendations almost always surfaces measurable revenue and engagement improvements within a few weeks of going live.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How is Gemini different from our existing recommendation engine or rule-based setup?

Gemini adds a reasoning layer on top of your existing data and tooling. Instead of relying only on collaborative filtering or static rules, you can feed Gemini a user’s recent behavior, profile data and a set of eligible products, and ask it to select the next best products for that specific context.

This lets you combine signals that are hard to capture in traditional engines – such as intent from search queries, content consumed, channel context and campaign history – and use them to tailor recommendations and the surrounding copy. In practice, most teams use Gemini alongside existing recommenders at first (e.g. reranking or augmenting their output) before phasing out legacy rules where it makes sense.

What team and skills do we need to get started?

You do not need an in-house research lab, but you do need a small cross-functional pod. Typically that includes one backend or data engineer to connect analytics and product feeds, one marketing or CRM lead to define use cases and KPIs, and optionally a data analyst to help with measurement.

From a technical perspective, the key tasks are exposing a clean product feed, structuring user context data, calling the Gemini API securely and integrating the outputs into your website, app or email templates. Reruption often works as the engineering and AI layer for clients, so internal teams can focus on commercial strategy and content rather than low-level implementation details.

How quickly can we get a first version live and see results?

For a focused use case like cart or post-purchase cross-sell, organisations can usually get a working prototype live within a few weeks, assuming data access is in place. With Reruption’s structured AI PoC approach, we aim to prove technical feasibility and show first performance metrics in a matter of days, then run an initial A/B test over 2–4 weeks.

Meaningful business results – such as uplift in recommendation CTR, AOV or attach rate of accessories – often become visible during that first test window. Full rollout across additional journeys and channels typically happens over subsequent sprints, depending on your internal release cycles and governance.

What does it cost, and what ROI can we expect?

The direct cost components are Gemini API usage, any additional infrastructure (often modest if you use existing cloud resources), and implementation effort. For many marketing teams, the main investment is the initial integration work, not ongoing runtime cost.

In terms of ROI, even small improvements in revenue per visitor compound quickly. For example, a 5–10% uplift in AOV or conversion rate on journeys influenced by recommendations can translate into significant incremental revenue at scale. Because we validate performance through controlled experiments, you can quantify uplift before committing to a broader rollout. Reruption’s PoC format at 9.900€ is specifically designed to help you answer the ROI question with real data, not slides.

How does Reruption support a Gemini recommendation project?

Reruption works as a Co-Preneur inside your organisation: instead of delivering slideware, we embed with your team to ship a working solution. Our AI PoC offering (9.900€) is a structured way to test Gemini for your specific recommendation use case. We define the scope with you, assess data and architecture, build a prototype that calls the Gemini API on real user and product data, and measure performance against your current baseline.

If the PoC meets your thresholds, we help you turn it into a production-ready capability: refining prompts, hardening the integration, addressing security and compliance, and enabling your marketing and CRM teams to use the system day to day. Throughout, we apply our Co-Preneur approach – taking entrepreneurial ownership of the outcome and working inside your P&L rather than on the sidelines.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
