The Challenge: Untargeted Product Recommendations

Most marketing teams rely on static best-seller blocks, generic “customers also bought” widgets, or manually defined cross-sell rules. These tactics ignore each shopper’s real preferences and live session behavior. The result: visitors see random products instead of genuinely relevant suggestions, and marketing loses the chance to turn interest into higher basket values.

Traditional recommendation setups were built for a world of limited data and limited channels. Rules-based engines are hard to maintain, brittle across categories, and blind to nuanced intent signals like search queries, content consumption, or customer service interactions. As assortments grow and customer journeys stretch across devices, maintaining manual rules becomes unmanageable, and batch personalization can’t keep up with real-time expectations.

The business impact is clear: irrelevant recommendations suppress click-through rates, depress conversion, and keep revenue per user flat. Customers feel misunderstood and abandon sessions earlier. Marketing teams over-invest in acquisition to compensate for weak on-site conversion, while competitors with smarter AI-driven personalization convert the same traffic into higher margin and tighter loyalty loops.

The good news: this problem is very solvable. With the right data foundation and AI tooling, you can replace static blocks with dynamic, intent-aware recommendations that feel almost human. At Reruption, we’ve helped organizations move from manual rules to AI-first decisioning in other domains, and the same principles apply here. In the sections below, you’ll find practical guidance on using ChatGPT to design, test, and scale truly personalized product recommendations.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building real-world AI products and internal tools, we’ve seen that the biggest unlock is not just the model, but how you architect the workflow around it. ChatGPT is not a drop-in recommendation engine, but it is exceptionally strong at designing decision logic, generating personalized narratives, and orchestrating which products to show to which segments when connected to your own customer and behavioral data. Our perspective: treat ChatGPT as the intelligence layer that turns raw signals into human-sounding, context-aware recommendations.

Design an AI-First Recommendation Strategy, Not Just a New Widget

Fixing untargeted product recommendations starts with rethinking your approach, not only swapping tools. Instead of asking “Which widget should we use on the product page?”, step back and define your personalization strategy: what are the key journeys, what signals do you have (and trust), and where in the funnel can recommendations realistically move the needle?

In practice, this means mapping touchpoints (homepage, PDP, cart, email, ads) and deciding how ChatGPT will support each one: from segment logic and messaging variants to A/B-test ideas and narrative generation for different personas. A clear strategy prevents you from scattering isolated experiments and helps you prioritize the few high-impact placements that justify integration effort.

Use ChatGPT as a Reasoning Layer on Top of Your Data

Many teams expect an AI model to “magically” pick products. In reality, your existing recommendation engine, product catalog, and event tracking remain critical. The role of ChatGPT is to interpret user behavior, demographics, and context and then decide how to present which products, not to replace the underlying ranking algorithms overnight.

Strategically, you want ChatGPT to sit between your data sources (e.g., event stream, CRM, product feed) and your user interface. It receives structured inputs (category, price range, engagement signals) and outputs personalized recommendation logic and copy for each placement. This preserves the robustness of your existing scoring models while adding a flexible intelligence layer that can adapt tone, angle, and product mix to each visitor.
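
As a rough illustration of that contract (the names and fields below are ours, not a prescribed interface), the intelligence layer can be thought of as a function that takes structured signals plus a candidate list from your existing engine and returns a small selection with generated copy:

from dataclasses import dataclass

@dataclass
class Candidate:
    product_id: str
    category: str
    price: float

@dataclass
class Recommendation:
    product_id: str
    role: str      # e.g. "alternative" or "complement"
    message: str   # personalized copy generated by the model

def recommend(user_segment: str, session_intent: str,
              candidates: list[Candidate]) -> list[Recommendation]:
    """ChatGPT slots in here: it selects from the candidates and writes the copy.
    It never ranks the full catalog itself; that stays with your existing engine."""
    ...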

Align Marketing, Data, and Engineering Around Clear Guardrails

Personalized recommendations touch revenue, brand, and UX. If marketing designs logic in isolation, data quality issues and technical constraints will surface late and slow you down. Conversely, if engineering drives the project without marketing, you risk technically elegant but commercially weak experiences.

Before you wire ChatGPT into production, align on guardrails: which product categories are allowed where, what discount levels can be suggested, which compliance or legal constraints apply, and how you measure success (CTR, conversion lift, AOV, margin impact). This shared frame lets ChatGPT operate within safe bounds and reduces the risk of awkward or off-brand recommendations.
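
One lightweight way to make these guardrails explicit is a shared configuration that marketing, data, and engineering review together; the values below are purely illustrative placeholders, not recommended settings:

# Illustrative guardrail configuration, agreed before anything goes live.
GUARDRAILS = {
    "allowed_categories_by_placement": {
        "product_detail": ["running_shoes", "socks", "running_watch"],
        "cart_cross_sell": ["socks", "accessories"],
    },
    "max_suggested_discount_pct": 10,          # generated copy may never exceed this
    "blocked_categories": ["age_restricted"],  # compliance/legal exclusions
    "success_metrics": ["recommendation_ctr", "conversion_lift", "aov", "margin_impact"],
}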

Start with a Narrow Pilot and Explicit Success Metrics

The temptation is to personalize everything, everywhere. That’s risky and hard to evaluate. A better strategy is to pick one or two high-traffic placements—like the product detail page and cart cross-sell—and run a focused pilot where ChatGPT-powered recommendations compete against your current logic.

Define explicit success metrics up front: for example, +10–15% uplift in recommendation click-through rate, +5% in AOV for sessions exposed to AI recommendations, or reduced time-to-launch for new campaigns. With narrow scope and clear KPIs, stakeholders see tangible value quickly, and you build a case for expanding personalization across channels.

Plan for Governance, Not Just Experiments

Once AI-driven personalization works, it will influence a large share of your revenue. At that point, you need more than clever prompts—you need governance. Who owns the prompts and logic? How often are they reviewed? What happens when assortment changes or new categories launch?

Strategically, treat your ChatGPT configuration (prompts, rules, templates) as a living asset with versioning, approval workflows, and monitoring. Make sure you have dashboards that surface performance by placement and segment, and clear escalation paths if something goes wrong. This disciplined approach turns a successful pilot into a sustainable personalization capability.

Used in the right role, ChatGPT can transform clumsy, untargeted product blocks into dynamic, context-aware recommendations that respect your brand and your constraints. The key is to connect it to the right data, define clear guardrails, and treat prompts and logic as strategic assets—not one-off experiments. At Reruption, we work hands-on with teams to design these AI-first workflows, validate them quickly with a PoC, and embed them in your stack. If you’re ready to move beyond generic best-seller carousels, we’re happy to explore what a practical, AI-powered recommendation engine could look like in your environment.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Healthcare to Apparel Retail: Learn how companies successfully use ChatGPT and other AI.

AstraZeneca

Healthcare

In the highly regulated pharmaceutical industry, AstraZeneca faced immense pressure to accelerate drug discovery and clinical trials, which traditionally take 10-15 years and cost billions, with low success rates of under 10%. Data silos, stringent compliance requirements (e.g., FDA regulations), and manual knowledge work hindered efficiency across R&D and business units. Researchers struggled with analyzing vast datasets from 3D imaging, literature reviews, and protocol drafting, leading to delays in bringing therapies to patients. Scaling AI was complicated by data privacy concerns, integration into legacy systems, and ensuring AI outputs were reliable in a high-stakes environment. Without rapid adoption, AstraZeneca risked falling behind competitors leveraging AI for faster innovation toward 2030 ambitions of novel medicines.

Solution

AstraZeneca launched an enterprise-wide generative AI strategy, deploying ChatGPT Enterprise customized for pharma workflows. This included AI assistants for 3D molecular imaging analysis, automated clinical trial protocol drafting, and knowledge synthesis from scientific literature. They partnered with OpenAI for secure, scalable LLMs and invested in training: ~12,000 employees across R&D and functions completed GenAI programs by mid-2025. Infrastructure upgrades, like AMD Instinct MI300X GPUs, optimized model training. Governance frameworks ensured compliance, with human-in-loop validation for critical tasks. Rollout phased from pilots in 2023-2024 to full scaling in 2025, focusing on R&D acceleration via GenAI for molecule design and real-world evidence analysis.

Results

  • ~12,000 employees trained on generative AI by mid-2025
  • 85-93% of staff reported productivity gains
  • 80% of medical writers found AI protocol drafts useful
  • Significant reduction in life sciences model training time via MI300X GPUs
  • Ranked among the global leaders on the IMD AI Maturity Index
  • GenAI enabling faster trial design and dose selection
Read case study →

AT&T

Telecommunications

As a leading telecom operator, AT&T manages one of the world's largest and most complex networks, spanning millions of cell sites, fiber optics, and 5G infrastructure. The primary challenges included inefficient network planning and optimization, such as determining optimal cell site placement and spectrum acquisition amid exploding data demands from 5G rollout and IoT growth. Traditional methods relied on manual analysis, leading to suboptimal resource allocation and higher capital expenditures. Additionally, reactive network maintenance caused frequent outages, with anomaly detection lagging behind real-time needs. Detecting and fixing issues proactively was critical to minimize downtime, but vast data volumes from network sensors overwhelmed legacy systems. This resulted in increased operational costs, customer dissatisfaction, and delayed 5G deployment. AT&T needed scalable AI to predict failures, automate healing, and forecast demand accurately.

Solution

AT&T integrated machine learning and predictive analytics through its AT&T Labs, developing models for network design including spectrum refarming and cell site optimization. AI algorithms analyze geospatial data, traffic patterns, and historical performance to recommend ideal tower locations, reducing build costs. For operations, anomaly detection and self-healing systems use predictive models on NFV (Network Function Virtualization) to forecast failures and automate fixes, like rerouting traffic. Causal AI extends beyond correlations for root-cause analysis in churn and network issues. Implementation involved edge-to-edge intelligence, deploying AI across 100,000+ engineers' workflows.

Results

  • Billions of dollars saved in network optimization costs
  • 20-30% improvement in network utilization and efficiency
  • Significant reduction in truck rolls and manual interventions
  • Proactive detection of anomalies preventing major outages
  • Optimized cell site placement reducing CapEx by millions
  • Enhanced 5G forecasting accuracy by up to 40%
Read case study →

Airbus

Aerospace

In aircraft design, computational fluid dynamics (CFD) simulations are essential for predicting airflow around wings, fuselages, and novel configurations critical to fuel efficiency and emissions reduction. However, traditional high-fidelity RANS solvers require hours to days per run on supercomputers, limiting engineers to just a few dozen iterations per design cycle and stifling innovation for next-gen hydrogen-powered aircraft like ZEROe. This computational bottleneck was particularly acute amid Airbus' push for decarbonized aviation by 2035, where complex geometries demand exhaustive exploration to optimize lift-drag ratios while minimizing weight. Collaborations with DLR and ONERA highlighted the need for faster tools, as manual tuning couldn't scale to test thousands of variants needed for laminar flow or blended-wing-body concepts.

Solution

Machine learning surrogate models, including physics-informed neural networks (PINNs), were trained on vast CFD datasets to emulate full simulations in milliseconds. Airbus integrated these into a generative design pipeline, where AI predicts pressure fields, velocities, and forces, enforcing Navier-Stokes physics via hybrid loss functions for accuracy. Development involved curating millions of simulation snapshots from legacy runs, GPU-accelerated training, and iterative fine-tuning with experimental wind-tunnel data. This enabled rapid iteration: AI screens designs, high-fidelity CFD verifies top candidates, slashing overall compute by orders of magnitude while maintaining <5% error on key metrics.

Results

  • Simulation time: 1 hour → 30 ms (120,000x speedup)
  • Design iterations: +10,000 per cycle in same timeframe
  • Prediction accuracy: 95%+ for lift/drag coefficients
  • 50% reduction in design phase timeline
  • 30-40% fewer high-fidelity CFD runs required
  • Fuel burn optimization: up to 5% improvement in predictions
Read case study →

Amazon

Retail

In the vast e-commerce landscape, online shoppers face significant hurdles in product discovery and decision-making. With millions of products available, customers often struggle to find items matching their specific needs, compare options, or get quick answers to nuanced questions about features, compatibility, and usage. Traditional search bars and static listings fall short, leading to shopping cart abandonment rates as high as 70% industry-wide and prolonged decision times that frustrate users. Amazon, serving over 300 million active customers, encountered amplified challenges during peak events like Prime Day, where query volumes spiked dramatically. Shoppers demanded personalized, conversational assistance akin to in-store help, but scaling human support was impossible. Issues included handling complex, multi-turn queries, integrating real-time inventory and pricing data, and ensuring recommendations complied with safety and accuracy standards amid a $500B+ catalog.

Solution

Amazon developed Rufus, a generative AI-powered conversational shopping assistant embedded in the Amazon Shopping app and desktop. Rufus leverages a custom-built large language model (LLM) fine-tuned on Amazon's product catalog, customer reviews, and web data, enabling natural, multi-turn conversations to answer questions, compare products, and provide tailored recommendations. Powered by Amazon Bedrock for scalability and AWS Trainium/Inferentia chips for efficient inference, Rufus scales to millions of sessions without latency issues. It incorporates agentic capabilities for tasks like cart addition, price tracking, and deal hunting, overcoming prior limitations in personalization by accessing user history and preferences securely. Implementation involved iterative testing, starting with beta in February 2024, expanding to all US users by September, and global rollouts, addressing hallucination risks through grounding techniques and human-in-loop safeguards.

Results

  • 60% higher purchase completion rate for Rufus users
  • $10B projected additional sales from Rufus
  • 250M+ customers used Rufus in 2025
  • Monthly active users up 140% YoY
  • Interactions surged 210% YoY
  • Black Friday sales sessions +100% with Rufus
  • 149% jump in Rufus users recently
Read case study →

American Eagle Outfitters

Apparel Retail

In the competitive apparel retail landscape, American Eagle Outfitters faced significant hurdles in fitting rooms, where customers crave styling advice, accurate sizing, and complementary item suggestions without waiting for overtaxed associates. Peak-hour staff shortages often resulted in frustrated shoppers abandoning carts, low try-on rates, and missed conversion opportunities, as traditional in-store experiences lagged behind personalized e-commerce. Early efforts like beacon technology in 2014 doubled fitting room entry odds but lacked depth in real-time personalization. Compounding this, data silos between online and offline hindered unified customer insights, making it tough to match items to individual style preferences, body types, or even skin tones dynamically. American Eagle needed a scalable solution to boost engagement and loyalty in flagship stores while experimenting with AI for broader impact.

Solution

American Eagle partnered with Aila Technologies to deploy interactive fitting room kiosks powered by computer vision and machine learning, rolled out in 2019 at flagship locations in Boston, Las Vegas, and San Francisco. Customers scan garments via iOS devices, triggering CV algorithms to identify items and ML models (trained on purchase history and Google Cloud data) to suggest optimal sizes, colors, and outfit complements tailored to inferred style and preferences. Integrated with Google Cloud's ML capabilities, the system enables real-time recommendations, associate alerts for assistance, and seamless inventory checks, evolving from beacon lures to a full smart assistant. This experimental approach, championed by CMO Craig Brommers, fosters an AI culture for personalization at scale.

Results

  • Double-digit conversion gains from AI personalization
  • 11% comparable sales growth for Aerie brand Q3 2025
  • 4% overall comparable sales increase Q3 2025
  • 29% EPS growth to $0.53 Q3 2025
  • Doubled fitting room try-on odds via early tech
  • Record Q3 revenue of $1.36B
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Map Your Data Inputs and Define a Recommendation Context Schema

Before writing a single prompt, define which data points you will pass into ChatGPT for each recommendation request. This "context schema" ensures that every AI call is grounded in reliable, structured information—the opposite of a black box.

For a typical ecommerce scenario, this might include: user segment, last viewed category, items in cart, price sensitivity bucket, device type, and a list of candidate products from your existing recommender or rules engine. Work with your data and engineering teams to produce a JSON payload that is consistent across placements.

{
  "user_segment": "value_hunter",
  "session_intent": "looking for running shoes",
  "current_page": "product_detail",
  "current_product": {
    "id": "SKU123",
    "category": "running_shoes",
    "price": 129.99
  },
  "cart_items": [],
  "candidate_products": [
    {"id": "SKU234", "category": "running_shoes", "price": 119.99},
    {"id": "SKU345", "category": "socks", "price": 14.99},
    {"id": "SKU456", "category": "running_watch", "price": 199.99}
  ]
}

Having this schema stabilized means your marketing team can iterate on prompts and logic without constantly reworking the integration, and it keeps ChatGPT-powered personalization explainable and testable.
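
As a minimal integration sketch, assuming the OpenAI Python SDK (v1-style client), a JSON-capable model, and a prompt that asks for a top-level "recommendations" array, the backend can pass this payload together with a placement-specific system prompt and validate the response before it reaches the frontend; the model name and response shape are assumptions you would adapt:

import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def get_recommendations(context: dict, system_prompt: str) -> list[dict]:
    """Send the structured context payload to ChatGPT and return validated picks."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model choice
        response_format={"type": "json_object"},  # request machine-readable output
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": json.dumps(context)},
        ],
    )
    payload = json.loads(response.choices[0].message.content)
    # Never trust the raw output: only accept IDs from the candidate list.
    allowed_ids = {p["id"] for p in context["candidate_products"]}
    return [
        rec for rec in payload.get("recommendations", [])
        if rec.get("product_id") in allowed_ids
    ]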

Create Role-Specific Prompts for Each Recommendation Placement

Don’t reuse a single generic prompt for every widget. Instead, create role-specific prompts tailored to the context: homepage inspiration, PDP alternatives, cart cross-sell, post-purchase upsell, and email recommendations each require different tone and logic.

For example, a product detail page prompt can focus on alternatives and complements with a strong emphasis on similarity and reassurance:

You are a product recommendation strategist for an ecommerce site.
Goal: Suggest 3 highly relevant products for this user and context.

Inputs:
- User segment: {{user_segment}}
- Session intent: {{session_intent}}
- Current product details: {{current_product_json}}
- Candidate products: {{candidate_products_json}}

Instructions:
1. Select 3 products from the candidate list that best match the user's intent and current product.
2. Ensure at least 1 close alternative (same category, similar price), and 1 complementary item.
3. For each, generate a short, benefit-led message (max 90 characters) tailored to the user segment.
4. Respect brand tone: clear, confident, no hype, no discounts unless explicitly mentioned in the input.

Return JSON with fields: product_id, role ("alternative" or "complement"), message.

Using placement-specific prompts like this keeps recommendations on-brand and aligned with the business goal of each touchpoint.
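
At request time, the double-brace placeholders in such a prompt are filled from the same context payload; a small helper like the one below (our naming, not a required API) keeps that step explicit and testable:

import json

def render_prompt(template: str, context: dict) -> str:
    """Replace {{placeholder}} tokens in a placement prompt with context values."""
    replacements = {
        "user_segment": context["user_segment"],
        "session_intent": context["session_intent"],
        "current_product_json": json.dumps(context["current_product"]),
        "candidate_products_json": json.dumps(context["candidate_products"]),
    }
    for key, value in replacements.items():
        template = template.replace("{{" + key + "}}", value)
    return template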

Build a Prompt Library and Version It Like Code

As you expand AI-driven recommendations, you’ll accumulate many prompts and templates. Treat them as a shared asset, not random snippets in documents. Create a central prompt library in your repository or documentation system, with clear naming (e.g., pdp_cross_sell_v1, cart_upsell_high_value_v2), owners, and change logs.

Each time you adjust recommendation rules, messaging tone, or guardrails, create a new version and test it with a subset of traffic. This makes it easy to A/B test ChatGPT recommendation logic, roll back if needed, and share learnings with the broader team. Marketing can propose new variants, while engineering controls deployment and monitoring.
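
In practice this can be as simple as a registry file kept under version control; the structure below is one possible sketch, using the naming convention above, with the owner and change note making reviews and rollbacks traceable:

# Sketch of a versioned prompt library; new versions get new keys, never in-place edits.
PROMPT_LIBRARY = {
    "pdp_cross_sell_v1": {
        "owner": "marketing",
        "changelog": "Initial PDP prompt: one close alternative plus one complement.",
        "template": "You are a product recommendation strategist for an ecommerce site. ...",
    },
    "cart_upsell_high_value_v2": {
        "owner": "marketing",
        "changelog": "v2: stricter price ceiling and softer tone for high-value carts.",
        "template": "You are a cart upsell strategist. ...",
    },
}

def get_prompt(name: str) -> str:
    """Look up the exact prompt version used for a placement (and log it with results)."""
    return PROMPT_LIBRARY[name]["template"]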

Use ChatGPT to Generate and Prioritize A/B Test Ideas

Beyond real-time recommendations, ChatGPT is excellent at exploring variations in messaging, bundles, and positioning across segments. Feed it anonymized performance data and ask it to propose test ideas that could improve click-through or AOV for underperforming segments.

You are an ecommerce experimentation strategist.
I will give you aggregated performance data for our recommendation widgets.

Data:
- Segment: {{segment_name}}
- Placement: {{placement}}
- Current CTR: {{ctr}}
- Current AOV: {{aov}}
- Current copy examples: {{copy_examples}}

Tasks:
1. Suggest 5 concrete A/B test ideas for this segment and placement.
2. For each idea, provide: hypothesis, recommendation logic change (if any), and 2–3 message examples.
3. Prioritize the ideas by expected impact and implementation effort (low/medium/high).

Return as a markdown table.

This workflow lets marketing teams systematically improve personalized product recommendations without guessing, and it keeps experiments grounded in observed performance.

Integrate Safety, Compliance, and Business Rules into the Prompt

To avoid awkward or risky suggestions, bake your constraints directly into the prompt and integration. Include rules such as: no recommending out-of-stock items, no conflicting products (e.g., incompatible accessories), and respect category-specific restrictions (e.g., age-limited products).

Extend your prompts with explicit guardrails:

Additional rules:
- Only select from candidate_products.
- Do NOT recommend products with "is_restricted": true.
- Exclude products with stock < 5.
- Do not mention prices or discounts unless provided in the input.
- Never reference user characteristics that are not in the input.

Combine prompt-level rules with backend checks: even after ChatGPT proposes product IDs, run them through your own filters before displaying them. This layered approach ensures your AI-driven recommendations remain safe, compliant, and aligned with your commercial priorities.
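
A minimal sketch of that backend safety net might look like the following; the field names (is_restricted, stock) mirror the rules above and should be swapped for your own catalog attributes:

def filter_recommendations(recommendations: list[dict], catalog: dict) -> list[dict]:
    """Drop anything the prompt-level rules should already have excluded."""
    safe = []
    for rec in recommendations:
        product = catalog.get(rec.get("product_id"))
        if product is None:
            continue                       # unknown ID: never display
        if product.get("is_restricted"):
            continue                       # restricted categories never ship
        if product.get("stock", 0) < 5:
            continue                       # respect the stock threshold
        safe.append(rec)
    return safe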

Measure Incremental Impact and Feed Learnings Back into Prompts

Avoid vanity metrics. For each ChatGPT-powered placement, run controlled experiments against your existing logic. Track not only CTR on recommendations, but also downstream impact: conversion rate, average order value, margin per session, and engagement by segment.

Regularly export aggregated results and feed them back into ChatGPT to help refine your prompts and hypotheses. For example, if value hunters react better to bundle suggestions than single-product upsells, adjust your prompt to bias toward bundles for that segment. Over time, this closed loop lets you turn ChatGPT personalization from a one-off project into an engine of continuous optimization.
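
One way to operationalize this loop, sketched here with illustrative metric names and thresholds, is to compute per-segment uplift of exposed versus control sessions and flag segments whose prompts need another iteration:

def segment_uplift(exposed: dict, control: dict) -> dict:
    """Relative uplift per metric for one segment (exposed vs. control averages)."""
    return {
        metric: (exposed[metric] - control[metric]) / control[metric]
        for metric in ("ctr", "conversion_rate", "aov")
        if control.get(metric)
    }

# Example with made-up numbers: any negative uplift flags the prompt for review.
results = segment_uplift(
    exposed={"ctr": 0.062, "conversion_rate": 0.031, "aov": 84.0},
    control={"ctr": 0.054, "conversion_rate": 0.029, "aov": 81.0},
)
needs_prompt_revision = any(value < 0 for value in results.values())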

Implemented step by step, these practices typically lead to realistic gains such as +10–20% lift in recommendation CTR on key placements, 3–8% increase in AOV for exposed sessions, and a significant reduction in manual effort for campaign and cross-sell configuration. The exact numbers depend on your baseline, but the pattern is consistent: better-targeted product suggestions, less wasted traffic, and a more coherent personalization strategy across channels.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Can ChatGPT replace our existing recommendation engine?

ChatGPT works best as a reasoning and messaging layer on top of your existing recommendation logic. In most setups, you keep your current engine (collaborative filtering, rules, or algorithmic scores) to produce a candidate list of products. ChatGPT then uses user context and business rules to decide which candidates to surface and how to position them.

This hybrid approach gives you the reliability and speed of your existing engine plus the flexibility of AI-driven personalization for copy and selection logic, without turning ChatGPT into a single point of failure for core ranking.

What skills and resources do we need to implement this?

You typically need three core capabilities: data/engineering to expose the right customer and product signals via APIs; marketing/CRM to define segments, guardrails, and messaging; and someone with basic prompt engineering skills to translate that logic into ChatGPT prompts.

On the tech side, implementation usually involves a backend service that assembles a structured context payload, calls the ChatGPT API, validates the response, and returns it to your frontend. On the business side, you need a clear owner for personalization who can review outputs, manage A/B tests, and evolve the prompts as you learn.

How long does a pilot take, and when will we see results?

For most organizations with existing tracking and product feeds, a focused pilot on one or two placements can be live in 4–8 weeks. The initial weeks go into scoping, data mapping, and integration; the remainder into prompt design, QA, and setting up experiments.

Meaningful results usually show up within the first 2–4 weeks of running an A/B test, assuming you have sufficient traffic. Early gains are often in click-through on recommendation modules; measurable uplift in AOV and conversion typically becomes clear once you’ve iterated on prompts and targeting a few times.

Is ChatGPT-powered personalization cost-effective?

Yes, if implemented correctly. The API cost of ChatGPT for recommendation logic and copy generation is typically a small fraction of your overall marketing or tech budget. The payback comes from higher revenue per visitor, reduced manual configuration of rules and campaigns, and faster experimentation cycles.

To keep costs under control, you can optimize prompt length, reuse responses where appropriate (e.g., cached narratives for evergreen products), and limit calls to high-impact placements. We usually encourage clients to track ROI by comparing incremental revenue uplift in exposed sessions to the AI run cost and implementation effort. In most cases, even modest uplifts in AOV or conversion make the business case compelling.

How can Reruption help us get started?

Reruption combines AI engineering, marketing understanding, and an embedded Co-Preneur approach. We don’t just write slideware; we sit with your team to define the use case, wire up the data, and ship a working solution. Our AI PoC for 9.900€ is designed exactly for questions like this: can we reliably use ChatGPT with our data to improve recommendations and move key metrics?

In the PoC, we scope the recommendation scenario, design and test prompts with your real catalog and traffic patterns, and build a minimal integration that demonstrates end-to-end value. From there, we help you plan hardening and rollout: governance, monitoring, and scaling to more channels. If you want a partner who will challenge assumptions and own outcomes alongside you, not from a distance, we’re ready to rerupt your personalization stack together.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
