The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller blocks, broad category suggestions or manually defined cross-sell rules. On the surface, these modules look like personalization, but in reality every shopper sees nearly the same products regardless of their tastes, intent or current context. The result is a generic experience that fails to reflect what customers actually want in the moment.

This approach worked when data was sparse and channels were simple, but it breaks down in modern e-commerce and digital marketing. Users move fluidly between website, app, email, search and social. Their behavior leaves rich signals about preferences, price sensitivity and intent – yet traditional recommendation engines and rule-based setups rarely use more than a handful of attributes. Updating rules is slow and manual, and it quickly becomes unmanageable for hundreds of categories and thousands of SKUs.

The business impact is significant. Irrelevant product recommendations train customers to ignore your on-site and in-channel suggestions, depressing click-through and conversion rates. Average order value stays flat because true cross-sell and upsell opportunities are missed. Marketing teams pour budget into acquisition, only to lose potential revenue on the last mile of the journey. Meanwhile, competitors investing in smarter personalization quietly gain higher revenue per visitor and stronger customer loyalty.

The good news: this is a solvable problem. With modern generative AI like Gemini, marketers can finally connect behavioral data, product catalogs and campaign content into one continuous personalization loop. At Reruption, we’ve seen how AI-first thinking can replace fragile rules with adaptive, data-driven recommendations that ship in weeks, not years. In the sections below, you’ll find a practical roadmap to move from untargeted product blocks to intelligent, Gemini-powered personalization.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s perspective, using Gemini for product recommendation personalization is not about adding another widget to your site – it is about rethinking how your marketing stack decides what to show, to whom, and when. Drawing on our hands-on experience building AI products and internal tools inside complex organisations, we see Gemini as the reasoning layer that can sit between your analytics, product feed and campaign systems, orchestrating next-best-product decisions and the content that wraps around them.

Frame Recommendations as a Business System, Not a Widget

Many teams treat product recommendations as a front-end feature: a carousel on the homepage, a block in the basket, a placeholder in an email. To use Gemini for personalized recommendations effectively, you need to frame it as a core business system that touches merchandising, CRM, performance marketing and product management. That means aligning on shared objectives like incremental revenue per session, margin-aware upsell, or reduction in abandonment — not just “widget CTR”.

At a strategic level, clarify the decision logic you want Gemini to support: Should it prioritize margin or conversion probability? How should it trade off recency vs. diversity of recommendations? Which channels need to be consistent, and where is experimentation acceptable? This shared view turns Gemini into a controlled driver of commercial outcomes rather than an opaque black box owned by one team.

Design a Data Strategy Before You Design Prompts

Gemini is powerful at reasoning over complex data — but only if you feed it the right signals. Before thinking about prompt templates or campaign copy, marketing leaders should steer a clear data strategy for AI-driven recommendations. Which behavioral signals matter most for your business (e.g. high-intent views, search queries, wishlist activity, content consumption)? How will that data reliably reach Gemini via APIs or batch processes?

This is where close collaboration between marketing, data and engineering is essential. Define a minimal but robust event schema, decide what product attributes (price bands, margin buckets, compatibility tags, lifestyle themes) need to be exposed, and ensure consent and privacy considerations are addressed up front. With this foundation, you can ask Gemini better questions and trust the outputs.
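As an illustration only, a minimal event and product schema might look like the Python sketch below; the field names are assumptions to adapt to your own tracking plan and product feed, not a prescribed standard.

from dataclasses import dataclass, field
from typing import Optional

# Hypothetical minimal behavioral event schema – rename fields to match your tracking plan.
@dataclass
class BehaviorEvent:
    user_id: str
    event_type: str                    # e.g. "product_view", "add_to_cart", "search", "purchase"
    channel: str                       # e.g. "web", "app", "email"
    timestamp: str                     # ISO 8601
    product_id: Optional[str] = None
    search_query: Optional[str] = None

# Hypothetical product attributes exposed to the recommendation layer.
@dataclass
class ProductAttributes:
    product_id: str
    category: str
    price_band: str                    # e.g. "low", "mid", "premium"
    margin_bucket: str                 # e.g. "A", "B", "C"
    in_stock: bool
    compatibility_tags: list[str] = field(default_factory=list)
    lifestyle_themes: list[str] = field(default_factory=list)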

Start Narrow: One High-Value Journey, Not Full-Site Personalization

It is tempting to promise “AI personalization everywhere” and then stall under the complexity. A more effective strategy is to deploy Gemini in one clearly defined, high-impact journey first. For many brands, that might be cart and post-purchase cross-sell recommendations, or a key lifecycle email such as first-time buyer nurture. This creates a contained environment for experimentation, learning from data and organisational change.

By focusing on one journey, you can define clean success metrics (e.g. uplift in AOV, attach rate of accessories, or click-through on recommendation blocks), gather qualitative feedback from customers and internal stakeholders, and iterate on the Gemini workflow quickly. Once this path is working reliably, you can extend the same patterns to home, category, search and CRM campaigns with far less risk.

Prepare Your Team to Trust – But Verify – AI Decisions

Moving from rule-based logic to AI-generated product recommendations changes how marketers and merchandisers work. The goal is not blind trust in Gemini, but calibrated trust with strong observability. Strategically, this means defining guardrails: hard exclusions (e.g. out-of-stock, restricted products), brand and compliance rules, and constraints around discounts or sensitive categories.

It also means agreeing on processes for reviewing, approving and overriding AI behavior. For example, product and CRM leads might review recommendation patterns weekly with clear dashboards, and define human-in-the-loop workflows for strategic campaigns or seasonal catalog shifts. Treat Gemini as a smart colleague: powerful, but operating under shared standards and KPIs.

Mitigate Risk with Transparent Metrics and Controlled Experiments

Any shift from generic to AI-personalized recommendations should be managed as a portfolio of experiments, not a big-bang replacement. Strategically, set up an experimentation framework with holdout groups and A/B tests to quantify uplift from Gemini-powered recommendations versus your current baseline. Track not only conversion and revenue uplift, but also user experience metrics like bounce rate and time on site.

To mitigate risk, start with conservative traffic allocations and explicit rollback criteria. Make metrics transparent across marketing, product and leadership so everyone can see how Gemini-based personalization impacts the P&L. This transparency builds confidence internally and keeps the conversation grounded in measurable business impact instead of hype.
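To make the holdout idea concrete, here is a minimal sketch of deterministic variant assignment and relative uplift calculation; the 20% treatment share and the metric names are assumptions, not recommendations for your specific setup.

import hashlib

def assign_variant(user_id: str, treatment_share: float = 0.2) -> str:
    """Deterministically bucket a user into 'gemini' or 'holdout' via hashing,
    so the assignment is stable across sessions and channels."""
    bucket = int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return "gemini" if bucket < treatment_share * 100 else "holdout"

def relative_uplift(treatment_rate: float, control_rate: float) -> float:
    """Relative uplift of a treatment metric (e.g. conversion rate) over control."""
    return (treatment_rate - control_rate) / control_rate

# Example: 3.3% vs. 3.0% conversion rate corresponds to roughly +10% relative uplift.
print(f"{relative_uplift(0.033, 0.030):.0%}")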

Used deliberately, Gemini can turn your product recommendations from static noise into a dynamic system that responds to each customer in real time and in every campaign. The key is treating it as a business capability – with the right data, guardrails and experiment design – rather than a plug-and-play widget. At Reruption, we build exactly these kinds of AI-first systems inside organisations, from early proof-of-concept to production workflows. If you want to explore how Gemini could power next-best-product decisions in your stack, we’re ready to co-design and test a solution with you.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From EdTech to wealth management: learn how companies successfully use AI.

Duolingo

EdTech

Duolingo, a leader in gamified language learning, faced key limitations in providing real-world conversational practice and in-depth feedback. While its bite-sized lessons built vocabulary and basics effectively, users craved immersive dialogues simulating everyday scenarios, which static exercises couldn't deliver. This gap hindered progression to fluency, as learners lacked opportunities for free-form speaking and nuanced grammar explanations without expensive human tutors. Additionally, content creation was a bottleneck. Human experts manually crafted lessons, slowing the rollout of new courses and languages amid rapid user growth. Scaling personalized experiences across 40+ languages demanded innovation to maintain engagement without proportional resource increases. These challenges risked user churn and limited monetization in a competitive EdTech market.

Solution

Duolingo launched Duolingo Max in March 2023, a premium subscription powered by GPT-4, introducing Roleplay for dynamic conversations and Explain My Answer for contextual feedback. Roleplay simulates real-life interactions like ordering coffee or planning vacations with AI characters, adapting in real time to user inputs. Explain My Answer provides detailed breakdowns of correct and incorrect responses, enhancing comprehension. Complementing this, Duolingo's Birdbrain LLM (fine-tuned on proprietary data) automates lesson generation, allowing experts to create content 10x faster. This hybrid human-AI approach ensured quality while scaling rapidly, integrated seamlessly into the app for all skill levels.

Results

  • DAU Growth: +59% YoY to 34.1M (Q2 2024)
  • DAU Growth: +54% YoY to 31.4M (Q1 2024)
  • Revenue Growth: +41% YoY to $178.3M (Q2 2024)
  • Adjusted EBITDA Margin: 27.0% (Q2 2024)
  • Lesson Creation Speed: 10x faster with AI
  • User Self-Efficacy: Significant increase post-AI use (2025 study)
Read case study →

Forever 21

E-commerce

Forever 21, a leading fast-fashion retailer, faced significant hurdles in online product discovery. Customers struggled with text-based searches that couldn't capture subtle visual details like fabric textures, color variations, or exact styles amid a vast catalog of millions of SKUs. This led to high bounce rates exceeding 50% on search pages and frustrated shoppers abandoning carts. The fashion industry's visual-centric nature amplified these issues. Descriptive keywords often mismatched inventory due to subjective terms (e.g., 'boho dress' vs. specific patterns), resulting in poor user experiences and lost sales opportunities. Pre-AI, Forever 21's search relied on basic keyword matching, limiting personalization and efficiency in a competitive e-commerce landscape. Implementation challenges included scaling for high-traffic mobile users and handling diverse image inputs like user photos or screenshots.

Solution

To address this, Forever 21 deployed an AI-powered visual search feature across its app and website, enabling users to upload images for similar item matching. Leveraging computer vision techniques, the system extracts features using pre-trained CNN models like VGG16, computes embeddings, and ranks products via cosine similarity or Euclidean distance metrics. The solution integrated seamlessly with existing infrastructure, processing queries in real time. Forever 21 likely partnered with providers like ViSenze or built the capability in-house, training on proprietary catalog data for fashion-specific accuracy. This overcame text limitations by focusing on visual semantics, supporting style, color and pattern matching. Overcoming implementation challenges involved fine-tuning models for diverse lighting and user images, plus A/B testing for UX optimization.
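The public descriptions suggest a standard embedding-plus-similarity pipeline; as a rough illustration (not Forever 21's actual code), the ranking step could look like the sketch below, assuming product and query embeddings have already been computed with a pre-trained CNN such as VGG16.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_visual_similarity(query_embedding: np.ndarray,
                              catalog_embeddings: dict[str, np.ndarray],
                              top_k: int = 5) -> list[tuple[str, float]]:
    """Rank catalog items by cosine similarity to the uploaded image's embedding."""
    scored = [(product_id, cosine_similarity(query_embedding, emb))
              for product_id, emb in catalog_embeddings.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]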

Results

  • 25% increase in conversion rates from visual searches
  • 35% reduction in average search time
  • 40% higher engagement (pages per session)
  • 18% growth in average order value
  • 92% matching accuracy for similar items
  • 50% decrease in bounce rate on search pages
Read case study →

NYU Langone Health

Healthcare

NYU Langone Health, a leading academic medical center, faced significant hurdles in leveraging the vast amounts of unstructured clinical notes generated daily across its network. Traditional clinical predictive models relied heavily on structured data like lab results and vitals, but these required complex ETL processes that were time-consuming and limited in scope. Unstructured notes, rich with nuanced physician insights, were underutilized due to challenges in natural language processing, hindering accurate predictions of critical outcomes such as in-hospital mortality, length of stay (LOS), readmissions, and operational events like insurance denials. Clinicians needed real-time, scalable tools to identify at-risk patients early, but existing models struggled with the volume and variability of EHR data—over 4 million notes spanning a decade. This gap led to reactive care, increased costs, and suboptimal patient outcomes, prompting the need for an innovative approach to transform raw text into actionable foresight.

Solution

To address these challenges, NYU Langone's Division of Applied AI Technologies at the Center for Healthcare Innovation and Delivery Science developed NYUTron, a proprietary large language model (LLM) specifically trained on internal clinical notes. Unlike off-the-shelf models, NYUTron was fine-tuned on unstructured EHR text from millions of encounters, enabling it to serve as an all-purpose prediction engine for diverse tasks. The solution involved pre-training a 13-billion-parameter LLM on over 10 years of de-identified notes (approximately 4.8 million inpatient notes), followed by task-specific fine-tuning. This allowed seamless integration into clinical workflows, automating risk flagging directly from physician documentation without manual data structuring. Collaborative efforts, including AI 'Prompt-a-Thons,' accelerated adoption by engaging clinicians in model refinement.

Results

  • AUROC: 0.961 for 48-hour mortality prediction (vs. 0.938 benchmark)
  • 92% accuracy in identifying high-risk patients from notes
  • LOS prediction AUROC: 0.891 (5.6% improvement over prior models)
  • Readmission prediction: AUROC 0.812, outperforming clinicians in some tasks
  • Operational predictions (e.g., insurance denial): AUROC up to 0.85
  • 24 clinical tasks with superior performance across mortality, LOS, and comorbidities
Read case study →

Revolut

Fintech

Revolut faced escalating Authorized Push Payment (APP) fraud, where scammers psychologically manipulate customers into authorizing transfers to fraudulent accounts, often under guises like investment opportunities. Traditional rule-based systems struggled against sophisticated social engineering tactics, leading to substantial financial losses despite Revolut's rapid growth to over 35 million customers worldwide. The rise in digital payments amplified vulnerabilities, with fraudsters exploiting real-time transfers that bypassed conventional checks. APP scams evaded detection by mimicking legitimate behaviors, resulting in billions in global losses annually and eroding customer trust in fintech platforms like Revolut. This created an urgent need for intelligent, adaptive anomaly detection that could intervene before funds were pushed.

Solution

Revolut deployed an AI-powered scam detection feature using machine learning anomaly detection to monitor transactions and user behaviors in real time. The system analyzes patterns indicative of scams, such as unusual payment prompts tied to investment lures, and intervenes by alerting users or blocking suspicious actions. Leveraging supervised and unsupervised ML algorithms, it detects deviations from normal behavior during high-risk moments, 'breaking the scammer's spell' before authorization. Integrated into the app, it processes vast transaction data for proactive fraud prevention without disrupting legitimate flows.

Results

  • 30% reduction in fraud losses from APP-related card scams
  • Targets investment opportunity scams specifically
  • Real-time intervention during testing phase
  • Protects 35 million global customers
  • Deployed since February 2024
Read case study →

Citibank Hong Kong

Wealth Management

Citibank Hong Kong faced growing demand for advanced personal finance management tools accessible via mobile devices. Customers sought predictive insights into budgeting, investing, and financial tracking, but traditional apps lacked personalization and real-time interactivity. In a competitive retail banking landscape, especially in wealth management, clients expected seamless, proactive advice amid volatile markets and rising digital expectations in Asia. Key challenges included integrating vast customer data for accurate forecasts, ensuring conversational interfaces felt natural, and overcoming data privacy hurdles in Hong Kong's regulated environment. Early mobile tools showed low engagement, with users abandoning apps due to generic recommendations, highlighting the need for AI-driven personalization to retain high-net-worth individuals.

Solution

Wealth 360 emerged as Citibank HK's AI-powered personal finance manager, embedded in the Citi Mobile app. It leverages predictive analytics to forecast spending patterns, investment returns, and portfolio risks, delivering personalized recommendations through a chatbot-style conversational interface. Drawing from Citi's global AI expertise, it processes transaction data, market trends, and user behavior for tailored advice on budgeting and wealth growth. Implementation involved machine learning models for personalization and natural language processing (NLP) for intuitive chats, building on Citi's prior successes like Asia-Pacific chatbots and APIs. This solution addressed gaps by enabling proactive alerts and virtual consultations, enhancing customer experience without human intervention.

Results

  • 30% increase in mobile app engagement metrics
  • 25% improvement in wealth management service retention
  • 40% faster response times via conversational AI
  • 85% customer satisfaction score for personalized insights
  • 18M+ API calls processed in similar Citi initiatives
  • 50% reduction in manual advisory queries
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect Gemini to a Clean Product Feed and Behavioral Events

Before you can ask Gemini to suggest the “next best product”, it needs structured access to your catalog and relevant user signals. Work with your data/engineering teams to expose a normalized product feed (via API or scheduled exports) including IDs, categories, attributes, price, margin buckets, availability and descriptive text. In parallel, stream or batch key behavioral events: product views, add-to-cart, purchases, search queries, content views and email interactions.

Use an intermediary service or lightweight backend that can assemble a user snapshot on request: last N interactions, current session context, and eligible products. Gemini should receive a concise but information-rich payload, not raw logs. This approach keeps latency low and makes prompts predictable, which is critical when you build recommendation workflows that must respond in real time across channels.
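As a sketch of that intermediary layer (field names and the helper are assumptions, not a fixed contract), the snapshot assembly might look like this:

import json

def build_user_snapshot(user_id: str,
                        recent_events: list[dict],
                        session_context: dict,
                        eligible_products: list[dict],
                        max_events: int = 20) -> str:
    """Assemble a concise JSON payload for the recommendation prompt.

    Events, session context and eligible products are assumed to come from your
    analytics store and product feed; only the fields Gemini actually needs are
    passed on, which keeps the prompt small and latency low."""
    snapshot = {
        "user_id": user_id,
        "recent_events": recent_events[-max_events:],   # last N interactions only
        "session": session_context,                     # e.g. device, channel, cart value
        "candidate_products": [
            {key: product[key] for key in ("id", "name", "price", "margin_band", "category", "tags")}
            for product in eligible_products
        ],
    }
    return json.dumps(snapshot, ensure_ascii=False)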

Design a Reusable Next-Best-Product Prompt Template

Once the data is flowing, define a standard prompt template you can reuse across website, app and CRM. The aim is to make Gemini reason over the user context and the product pool, then return ranked recommendations with explanations that can later inform merchandising and experimentation.

System role:
You are a marketing AI that generates personalized product recommendations.
Optimize for:
- Highest probability of purchase in this session
- Respecting business rules (stock, exclusions, price range)
- Diversity across categories, but relevance first

Inputs:
- User profile and recent behavior:
{{user_context_json}}
- Candidate products (JSON array with id, name, price, margin_band, category, tags):
{{product_candidates_json}}
- Channel: {{channel}} (e.g. web_home, cart_page, email_postpurchase)

Task:
1. Select the top 4 products for this user in this channel.
2. Return JSON:
{
  "recommendations": [
    {"product_id": "...", "reason": "short rationale", "position": 1},
    ...
  ]
}
3. Do not invent product IDs that are not in the candidate list.

This pattern ensures your front end can directly consume Gemini’s output, while the reasoning (“reason” field) becomes a powerful signal for later analysis and creative optimization.
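For illustration, a minimal call wrapper could look like the sketch below, assuming the google-generativeai Python SDK and a Flash-class model; treat the model name, the JSON response setting and the fallback behavior as assumptions to adapt.

import json
import google.generativeai as genai  # assumes the google-generativeai SDK is installed

genai.configure(api_key="YOUR_API_KEY")            # placeholder – load from a secret store in practice
model = genai.GenerativeModel("gemini-1.5-flash")  # model choice depends on your latency/cost needs

def get_recommendations(filled_prompt: str, candidate_ids: set[str]) -> list[dict]:
    """Call Gemini with the filled-in template and validate the returned JSON."""
    response = model.generate_content(
        filled_prompt,
        generation_config={"response_mime_type": "application/json"},  # ask for raw JSON output
    )
    try:
        recommendations = json.loads(response.text)["recommendations"]
    except (json.JSONDecodeError, KeyError, ValueError):
        return []  # let the caller fall back to a safe default block
    # Enforce rule 3 of the template: never surface IDs outside the candidate set.
    return [r for r in recommendations if r.get("product_id") in candidate_ids]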

Generate Channel-Specific Creative Around Recommended Products

Gemini’s strength is not only picking products, but also generating personalized campaign content for each channel. After your next-best-product call, trigger a second prompt that asks Gemini to create headlines, snippets and CTAs that reference the selected items and the user’s context. This can power on-site copy, dynamic email content or ad creatives.

System role:
You are a performance marketing copywriter.
Goal: Create concise, personalized copy for product recommendations.

Inputs:
- User context summary: {{user_context_summary}}
- Selected products with names, key benefits and prices: {{selected_products_json}}
- Channel: {{channel}}

Task:
1. For each product, create a short headline (<40 chars) and body (<80 chars).
2. Tone: helpful, clear, no hard sell.
3. Return JSON with fields: product_id, headline, body, cta.

Connect this to your CMS, ESP or ad platform so that recommendation logic and creative personalization stay in sync. Over time, you can A/B test different prompt variants and tones to optimize engagement.
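A second, chained call can reuse the same client; the sketch below assumes the copywriter prompt above is stored in copy_prompt_template and that model refers to the Gemini client from the previous sketch.

import json

def generate_recommendation_copy(user_context_summary: str,
                                 selected_products_json: str,
                                 channel: str) -> list[dict]:
    """Second Gemini call: turn the selected products into channel-specific copy."""
    prompt = (copy_prompt_template
              .replace("{{user_context_summary}}", user_context_summary)
              .replace("{{selected_products_json}}", selected_products_json)
              .replace("{{channel}}", channel))
    response = model.generate_content(
        prompt,
        generation_config={"response_mime_type": "application/json"},
    )
    # Expected shape: [{"product_id": ..., "headline": ..., "body": ..., "cta": ...}, ...]
    return json.loads(response.text)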

Implement Guardrails and Business Rules in a Pre-Filter Layer

To avoid surprises, build business logic outside of Gemini as a pre-filter and post-filter. Before calling Gemini, filter out out-of-stock items, restricted categories, low-margin products you never want to push, or SKUs conflicting with user attributes (e.g. already purchased, incompatible accessories). This ensures AI-driven recommendations always respect baseline commercial and legal constraints.

After Gemini returns its ranked list, validate the output: check IDs against the candidate set, ensure price ranges and categories meet your rules, and fall back to a safe default if the response is invalid. This layered approach keeps your recommendation system robust, particularly in early stages when you are still tuning prompts and data quality.
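A minimal sketch of such a pre- and post-filter layer, with hypothetical field names, could look like this:

def pre_filter(products: list[dict], user: dict) -> list[dict]:
    """Remove products that must never be recommended, before Gemini sees them."""
    return [
        p for p in products
        if p.get("in_stock", False)
        and not p.get("restricted", False)
        and p.get("margin_bucket") != "never_push"               # hypothetical exclusion bucket
        and p["id"] not in user.get("purchased_product_ids", [])
    ]

def post_validate(recommendations: list[dict],
                  candidate_ids: set[str],
                  fallback: list[dict]) -> list[dict]:
    """Validate Gemini's ranked list; fall back to a safe default block if it is invalid."""
    valid = [r for r in recommendations if r.get("product_id") in candidate_ids]
    return valid if valid else fallback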

Integrate with Email and CRM Journeys for Lifecycle Personalization

Do not limit Gemini-powered recommendations to on-site blocks. Integrate the same next-best-product API into your email and CRM journeys so each triggered or batch campaign can personalize based on live context. For example, a post-purchase email can ask Gemini: “Given this order and browsing history, what are the top three relevant accessories within 30 days?” and then fetch copy for the chosen products.

On the ESP side, configure dynamic content blocks that call your recommendation service (which orchestrates the Gemini call) at send time or in pre-send batch jobs. Store product IDs and copy variants as personalization fields. Start with high-impact flows like welcome series, abandoned cart and replenishment, then extend to loyalty and win-back campaigns.
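As a sketch of the pre-send step (the wrapper functions are hypothetical stand-ins for the Gemini calls above, and the field naming depends on your ESP):

def build_personalization_fields(user_id: str, channel: str = "email_postpurchase") -> dict:
    """Resolve recommendations and copy into flat personalization fields that a
    typical ESP can merge into a dynamic content block at send time."""
    recommendations = get_recommendations_for_user(user_id, channel=channel)   # hypothetical wrapper
    copy_variants = generate_copy_for(recommendations, channel=channel)        # hypothetical wrapper
    fields = {}
    for position, (rec, copy) in enumerate(zip(recommendations, copy_variants), start=1):
        fields[f"rec{position}_product_id"] = rec["product_id"]
        fields[f"rec{position}_headline"] = copy["headline"]
        fields[f"rec{position}_body"] = copy["body"]
        fields[f"rec{position}_cta"] = copy["cta"]
    return fields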

Track KPIs and Create Feedback Loops into Gemini Workflows

To improve over time, you need measurement tightly coupled to your Gemini workflows. Track KPIs at the block and session level: recommendation CTR, conversion rate after recommendation click, incremental revenue per session, and AOV uplift. Instrument separate tracking for AI-powered modules vs. legacy ones so you can directly compare performance.

Feed aggregate insights back into your system. For example, you might periodically summarise successful vs. unsuccessful recommendation patterns and use Gemini itself to analyze them: “Given these high-performing scenarios and these low-performing ones, what changes to candidate selection or ranking logic should we test?” This closes the loop and helps you iteratively refine prompts, candidate filtering and channel strategies.
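One way to prepare that feedback, sketched here with assumed field names, is to aggregate block-level KPIs per variant before handing the summary to a periodic analysis prompt:

def summarise_block_performance(rows: list[dict]) -> dict:
    """Aggregate block-level KPIs per variant (e.g. 'gemini' vs. 'legacy') so the
    summary can feed a periodic 'what should we change?' analysis prompt.

    Each row is assumed to describe one recommendation block with fields:
    variant, impressions, clicks, conversions, revenue, sessions."""
    def rate(numerator: float, denominator: float) -> float:
        return numerator / denominator if denominator else 0.0

    totals: dict[str, dict] = {}
    for row in rows:
        agg = totals.setdefault(row["variant"], {"impressions": 0, "clicks": 0,
                                                 "conversions": 0, "revenue": 0.0, "sessions": 0})
        for key in agg:
            agg[key] += row[key]
    return {
        variant: {
            "ctr": rate(agg["clicks"], agg["impressions"]),
            "conversion_after_click": rate(agg["conversions"], agg["clicks"]),
            "revenue_per_session": rate(agg["revenue"], agg["sessions"]),
        }
        for variant, agg in totals.items()
    }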

With these tactics in place, marketing teams typically see realistic gains such as 5–15% uplift in recommendation CTR, 5–10% higher average order value on affected journeys, and increased relevance scores in customer feedback. The exact numbers depend on your baseline, but a structured Gemini implementation for recommendations almost always surfaces measurable revenue and engagement improvements within a few weeks of going live.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Gemini adds a reasoning layer on top of your existing data and tooling. Instead of relying only on collaborative filtering or static rules, you can feed Gemini a user’s recent behavior, profile data and a set of eligible products, and ask it to select the next best products for that specific context.

This lets you combine signals that are hard to capture in traditional engines – such as intent from search queries, content consumed, channel context and campaign history – and use them to tailor recommendations and the surrounding copy. In practice, most teams use Gemini alongside existing recommenders at first (e.g. reranking or augmenting their output) before phasing out legacy rules where it makes sense.

You do not need an in-house research lab, but you do need a small cross-functional pod. Typically that includes one backend or data engineer to connect analytics and product feeds, one marketing or CRM lead to define use cases and KPIs, and optionally a data analyst to help with measurement.

From a technical perspective, the key tasks are exposing a clean product feed, structuring user context data, calling the Gemini API securely and integrating the outputs into your website, app or email templates. Reruption often works as the engineering and AI layer for clients, so internal teams can focus on commercial strategy and content rather than low-level implementation details.

For a focused use case like cart or post-purchase cross-sell, organisations can usually get a working prototype live within a few weeks, assuming data access is in place. With Reruption’s structured AI PoC approach, we aim to prove technical feasibility and show first performance metrics in a matter of days, then run an initial A/B test over 2–4 weeks.

Meaningful business results – such as uplift in recommendation CTR, AOV or attach rate of accessories – often become visible during that first test window. Full rollout across additional journeys and channels typically happens over subsequent sprints, depending on your internal release cycles and governance.

The direct cost components are Gemini API usage, any additional infrastructure (often modest if you use existing cloud resources), and implementation effort. For many marketing teams, the main investment is the initial integration work, not ongoing runtime cost.

In terms of ROI, even small improvements in revenue per visitor compound quickly. For example, a 5–10% uplift in AOV or conversion rate on journeys influenced by recommendations can translate into significant incremental revenue at scale. Because we validate performance through controlled experiments, you can quantify uplift before committing to a broader rollout. Reruption’s PoC format at 9.900€ is specifically designed to help you answer the ROI question with real data, not slides.

Reruption works as a Co-Preneur inside your organisation: instead of delivering slideware, we embed with your team to ship a working solution. Our AI PoC offering (9.900€) is a structured way to test Gemini for your specific recommendation use case. We define the scope with you, assess data and architecture, build a prototype that calls the Gemini API on real user and product data, and measure performance against your current baseline.

If the PoC meets your thresholds, we help you turn it into a production-ready capability: refining prompts, hardening the integration, addressing security and compliance, and enabling your marketing and CRM teams to use the system day to day. Throughout, we apply our Co-Preneur approach – taking entrepreneurial ownership of the outcome and working inside your P&L rather than on the sidelines.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media