The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller carousels and simple cross-sell rules like “customers who bought X also bought Y.” On paper this looks efficient, but in practice it ignores who the customer is, what they have browsed, and what they are trying to achieve right now. The result is a recommendation layer that is technically present, but strategically blind.

Traditional approaches struggle because they are rigid, manual, and slow to adapt. Category managers handcraft rules, IT teams hard-code logic into templates, and any change requires another ticket in the backlog. These systems rarely combine behavioral data, content metadata and context (campaign, device, location), so they keep serving generic offers even when your data clearly signals otherwise. At the same time, many smaller teams don’t have the data science resources to build full-blown recommender engines.

The impact is bigger than a slightly lower click-through rate. Irrelevant recommendations increase bounce rates, suppress average order value, and erode trust – customers feel like your brand “doesn’t get them.” High-intent sessions end without an upsell, repeat buyers never discover relevant add-ons, and your performance marketing spend has to work harder to compensate. Over time, competitors with smarter personalization win more share of wallet, because they use every visit to deepen relevance instead of repeating the same generic carousel.

The good news: this is a very solvable problem. With modern AI like Claude, marketers can finally interpret customer behavior, catalog metadata and campaign context in real time, then generate tailored recommendation strategies and copy without waiting for a full data platform rebuild. At Reruption, we’ve helped organisations turn vague personalization ambitions into working AI prototypes and production workflows. In the sections below, you’ll find practical guidance to move from untargeted recommendations to Claude-powered experiences that actually match user intent.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.

Innovators at these companies trust us:

Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first marketing workflows, we see a clear pattern: most organisations don’t lack data, they lack a way to interpret it quickly and turn it into relevant experiences. Claude is a powerful fit for this gap. Used correctly, it can sit between your raw customer signals and your front-end, generating personalized recommendation logic and copy that’s understandable to marketers and controllable by the business.

Think of Claude as a Personalization Brain, Not a Black-Box Recommender

Claude is not a plug-and-play "recommendation engine" in the traditional sense. Its real strength is interpreting multiple inputs – customer profile, on-site behavior, campaign context, and product attributes – and turning them into a coherent recommendation strategy. Strategically, this gives marketing teams a transparent, explainable layer instead of a mathematical black box.

When you frame Claude as a "personalization brain," you can ask it for reasoning: why it recommends certain assortments, what message to use, how aggressive the upsell should be for a given segment. This makes it easier for non-technical marketers to review, control, and iterate, without needing deep data science skills. The technical recommender components (e.g., similarity search, rules) can remain simple, while Claude handles the orchestration and narrative.

Start with Clear Personalization Guardrails

Strategically, you need to decide where personalization is allowed to flex and where it must stay within strict boundaries. For example, you may want full personalization in content and order of products, but strict business rules about pricing, margin thresholds, and compliance-sensitive categories.

Before implementation, define guardrails such as allowed categories per segment, minimum margin per recommendation slot, or exclusion rules (e.g., no cross-selling out-of-stock items). Claude can then be prompted to operate inside these constraints, choosing the best combination of products and messaging without violating brand or commercial policies. This reduces risk and makes stakeholders much more comfortable with AI-driven decisions.
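As a minimal sketch of what such guardrails can look like in code, a pre-flight filter can run over every Claude-proposed product slate before it reaches the page. The field names (`category`, `margin`, `in_stock`) and thresholds here are illustrative assumptions, not a fixed schema:

```python
# Illustrative guardrail check on a Claude-proposed recommendation slate.
# Field names and thresholds are assumptions for this sketch.

def apply_guardrails(recommendations, *, allowed_categories, min_margin):
    """Keep only recommendations that satisfy hard business rules."""
    approved = []
    for rec in recommendations:
        if rec["category"] not in allowed_categories:
            continue  # compliance-sensitive or disallowed category
        if rec["margin"] < min_margin:
            continue  # below the minimum margin for this slot
        if not rec.get("in_stock", False):
            continue  # never cross-sell out-of-stock items
        approved.append(rec)
    return approved

slate = [
    {"sku": "A1", "category": "shoes", "margin": 0.35, "in_stock": True},
    {"sku": "B2", "category": "alcohol", "margin": 0.50, "in_stock": True},
    {"sku": "C3", "category": "shoes", "margin": 0.10, "in_stock": True},
]
safe = apply_guardrails(slate, allowed_categories={"shoes", "bags"}, min_margin=0.2)
print([r["sku"] for r in safe])  # → ['A1']
```

Because the filter runs after Claude, the model can reason freely while the business rules remain deterministic and auditable.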

Prepare Your Teams for an AI-Assisted Workflow

Moving from static blocks to Claude-assisted personalization changes how marketing, product and engineering collaborate. Copywriters, CRM managers and merchandisers become designers of decision logic and prompts, not just creators of single assets.

Plan for enablement: train marketers on how to brief Claude, review AI output, and turn insights into experiments. Align with engineering on where Claude sits in the architecture (e.g., in a middleware layer or marketing ops tool) and who owns quality monitoring. Reruption often runs short enablement sprints so teams are comfortable iterating on prompts, taxonomies and KPIs instead of relying solely on external experts.

Balance Personalization Ambition with Data Reality

It’s tempting to jump straight to 1:1 personalization everywhere, but your data quality and integration maturity should shape the initial scope. If browsing data is fragmented or product metadata is messy, start with a few high-impact touchpoints (e.g., PDP recommendations, abandoned cart emails) where signals are clearer.

Claude can compensate for imperfect data by inferring intent from partial signals, but it can’t fix missing fundamentals like completely absent product descriptions. Strategically, define a staged roadmap: phase 1 uses Claude on well-structured campaigns and categories, phase 2 expands as data structures improve, and later phases move toward real-time, multi-channel orchestration. This avoids overpromising personalization you cannot reliably deliver.

Treat AI Personalization as an Ongoing Experiment, Not a One-Off Project

Untargeted recommendations are often the result of a project mindset: a recommender is implemented once, KPIs look “good enough,” and the setup is left alone. With Claude, enormous value comes from continually refining prompt strategies, segment definitions and creative angles based on live performance.

From a strategic perspective, set up a recurring experimentation cadence. Marketing should review which Claude-driven recommendation variants lift CTR, AOV or retention, then bake those learnings back into prompts and decision rules. This requires ownership: decide who is responsible for experimentation backlogs, success metrics, and sign-off. Organisations that treat AI personalization as a living capability, not a finished IT project, see compounding gains over time.

Used thoughtfully, Claude lets marketing teams escape rigid, untargeted recommendation blocks and move toward adaptive experiences that respect intent, context and business rules. The real unlock is not just smarter algorithms, but a workflow where marketers can directly shape and control how personalization behaves.

At Reruption, we build these AI-first workflows side by side with our clients – from proof-of-concept to production. If you’re serious about fixing generic product recommendations and want a partner who can combine strategy, engineering and hands-on experimentation, we’re happy to explore what Claude could do in your stack.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Food Manufacturing to EdTech: Learn how companies successfully use Claude.

PepsiCo (Frito-Lay)

Food Manufacturing

In the fast-paced food manufacturing industry, PepsiCo's Frito-Lay division grappled with unplanned machinery downtime that disrupted high-volume production lines for snacks like Lay's and Doritos. These lines operate 24/7, where even brief failures could cost thousands of dollars per hour in lost capacity—industry estimates peg average downtime at $260,000 per hour in manufacturing. Perishable ingredients and just-in-time supply chains amplified losses, leading to high maintenance costs from reactive repairs, which are 3-5x more expensive than planned ones. Frito-Lay plants faced frequent issues with critical equipment like compressors, conveyors, and fryers, where micro-stops and major breakdowns eroded overall equipment effectiveness (OEE). Worker fatigue from extended shifts compounded risks, as noted in reports of grueling 84-hour weeks, indirectly stressing machines further. Without predictive insights, maintenance teams relied on schedules or breakdowns, resulting in lost production capacity and inability to meet consumer demand spikes.

Solution

PepsiCo deployed machine learning predictive maintenance across Frito-Lay factories, leveraging sensor data from IoT devices on equipment to forecast failures days or weeks ahead. Models analyzed vibration, temperature, pressure, and usage patterns using algorithms like random forests and deep learning for time-series forecasting. Partnering with cloud platforms like Microsoft Azure Machine Learning and AWS, PepsiCo built scalable systems integrating real-time data streams for just-in-time maintenance alerts. This shifted from reactive to proactive strategies, optimizing schedules during low-production windows and minimizing disruptions. Implementation involved pilot testing in select plants before full rollout, overcoming data silos through advanced analytics.

Results

  • 4,000 extra production hours gained annually
  • 50% reduction in unplanned downtime
  • 30% decrease in maintenance costs
  • 95% accuracy in failure predictions
  • 20% increase in OEE (Overall Equipment Effectiveness)
  • $5M+ annual savings from optimized repairs
Read case study →

Morgan Stanley

Banking

Financial advisors at Morgan Stanley struggled with rapid access to the firm's extensive proprietary research database, comprising over 350,000 documents spanning decades of institutional knowledge. Manual searches through this vast repository were time-intensive, often taking 30 minutes or more per query, hindering advisors' ability to deliver timely, personalized advice during client interactions. This bottleneck limited scalability in wealth management, where high-net-worth clients demand immediate, data-driven insights amid volatile markets. Additionally, the sheer volume of unstructured data—40 million words of research reports—made it challenging to synthesize relevant information quickly, risking suboptimal recommendations and reduced client satisfaction. Advisors needed a solution to democratize access to this 'goldmine' of intelligence without extensive training or technical expertise.

Solution

Morgan Stanley partnered with OpenAI to develop AI @ Morgan Stanley Debrief, a GPT-4-powered generative AI chatbot tailored for wealth management advisors. The tool uses retrieval-augmented generation (RAG) to securely query the firm's proprietary research database, providing instant, context-aware responses grounded in verified sources. Implemented as a conversational assistant, Debrief allows advisors to ask natural-language questions like 'What are the risks of investing in AI stocks?' and receive synthesized answers with citations, eliminating manual digging. Rigorous AI evaluations and human oversight ensure accuracy, with custom fine-tuning to align with Morgan Stanley's institutional knowledge. This approach overcame data silos and enabled seamless integration into advisors' workflows.

Results

  • 98% adoption rate among wealth management advisors
  • Access for nearly 50% of Morgan Stanley's total employees
  • Queries answered in seconds vs. 30+ minutes manually
  • Over 350,000 proprietary research documents indexed
  • For comparison: roughly 60% employee access to similar tools at peers like JPMorgan
  • Significant productivity gains reported by CAO
Read case study →

DBS Bank

Banking

DBS Bank, Southeast Asia's leading financial institution, grappled with scaling AI from experiments to production amid surging fraud threats, demands for hyper-personalized customer experiences, and operational inefficiencies in service support. Traditional fraud detection systems struggled to process up to 15,000 data points per customer in real-time, leading to missed threats and suboptimal risk scoring. Personalization efforts were hampered by siloed data and lack of scalable algorithms for millions of users across diverse markets. Additionally, customer service teams faced overwhelming query volumes, with manual processes slowing response times and increasing costs. Regulatory pressures in banking demanded responsible AI governance, while talent shortages and integration challenges hindered enterprise-wide adoption. DBS needed a robust framework to overcome data quality issues, model drift, and ethical concerns in generative AI deployment, ensuring trust and compliance in a competitive Southeast Asian landscape.

Solution

DBS launched an enterprise-wide AI program with over 20 use cases, leveraging machine learning for advanced fraud risk models and personalization, complemented by generative AI for an internal support assistant. Fraud models integrated vast datasets for real-time anomaly detection, while personalization algorithms delivered hyper-targeted nudges and investment ideas via the digibank app. A human-AI synergy approach empowered service teams with a GenAI assistant handling routine queries, drawing from internal knowledge bases. DBS emphasized responsible AI through governance frameworks, upskilling 40,000+ employees, and phased rollout starting with pilots in 2021, scaling production by 2024. Partnerships with tech leaders and Harvard-backed strategy ensured ethical scaling across fraud, personalization, and operations.

Results

  • 17% increase in savings from prevented fraud attempts
  • Over 100 customized algorithms for customer analyses
  • 250,000 monthly queries processed efficiently by GenAI assistant
  • 20+ enterprise-wide AI use cases deployed
  • Analyzes up to 15,000 data points per customer for fraud
  • Boosted productivity by 20% via AI adoption (CEO statement)
Read case study →

Klarna

Fintech

Klarna, a leading fintech BNPL provider, faced enormous pressure from millions of customer service inquiries across multiple languages for its 150 million users worldwide. Queries spanned complex fintech issues like refunds, returns, order tracking, and payments, requiring high accuracy, regulatory compliance, and 24/7 availability. Traditional human agents couldn't scale efficiently, leading to long wait times averaging 11 minutes per resolution and rising costs. Additionally, providing personalized shopping advice at scale was challenging, as customers expected conversational, context-aware guidance across retail partners. Multilingual support was critical in markets like the US and Europe, but hiring multilingual agents was costly and slow. This bottleneck hindered growth and customer satisfaction in a competitive BNPL sector.

Solution

Klarna partnered with OpenAI to deploy a generative AI chatbot powered by GPT-4, customized as a multilingual customer service assistant. The bot handles refunds, returns, order issues, and acts as a conversational shopping advisor, integrated seamlessly into Klarna's app and website. Key innovations included fine-tuning on Klarna's data, retrieval-augmented generation (RAG) for real-time policy access, and safeguards for fintech compliance. It supports dozens of languages, escalating complex cases to humans while learning from interactions. This AI-native approach enabled rapid scaling without proportional headcount growth.

Results

  • 2/3 of all customer service chats handled by AI
  • 2.3 million conversations in first month alone
  • Resolution time: 11 minutes → 2 minutes (82% reduction)
  • CSAT: 4.4/5 (AI) vs. 4.2/5 (humans)
  • $40 million annual cost savings
  • Equivalent to 700 full-time human agents
  • 80%+ queries resolved without human intervention
Read case study →

Stanford Health Care

Healthcare

Stanford Health Care, a leading academic medical center, faced escalating clinician burnout from overwhelming administrative tasks, including drafting patient correspondence and managing inboxes overloaded with messages. With vast EHR data volumes, extracting insights for precision medicine and real-time patient monitoring was manual and time-intensive, delaying care and increasing error risks. Traditional workflows struggled with predictive analytics for events like sepsis or falls, and computer vision for imaging analysis, amid growing patient volumes. Clinicians spent excessive time on routine communications, such as lab result notifications, hindering focus on complex diagnostics. The need for scalable, unbiased AI algorithms was critical to leverage extensive datasets for better outcomes.

Solution

Partnering with Microsoft, Stanford became one of the first healthcare systems to pilot Azure OpenAI Service within Epic EHR, enabling generative AI for drafting patient messages and natural language queries on clinical data. This integration used GPT-4 to automate correspondence, reducing manual effort. Complementing this, the Healthcare AI Applied Research Team deployed machine learning for predictive analytics (e.g., sepsis, falls prediction) and explored computer vision in imaging projects. Tools like ChatEHR allow conversational access to patient records, accelerating chart reviews. Phased pilots addressed data privacy and bias, ensuring explainable AI for clinicians.

Results

  • 50% reduction in time for drafting patient correspondence
  • 30% decrease in clinician inbox burden from AI message routing
  • 91% accuracy in predictive models for inpatient adverse events
  • 20% faster lab result communication to patients
  • Autoimmune conditions detected up to 1 year earlier than conventional diagnosis
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Use Claude to Translate Behavior into Recommendation Intents

Most recommendation systems jump straight from clicks to products. A more powerful pattern is to let Claude first interpret behavior as shopping intent, then match that intent to product sets using your existing tools. This keeps your architecture simple while dramatically improving relevance.

For example, map key behavioral signals (visited categories, time on page, filters used, campaign source) into a compact JSON payload. Send this to Claude with clear instructions to output an intent profile and recommendation strategy (e.g., “budget-conscious first-time buyer looking for durable basics”). Your front-end or middleware can then select products that fit the suggested criteria using your product catalog or search engine.

Example prompt to Claude:
You are a personalization strategist for an ecommerce site.

Input data:
- User profile: {{user_profile_json}}
- Session behavior: {{session_events_json}}
- Campaign context: {{campaign_info}}
- Product catalog facets: {{facet_summary}}

Tasks:
1) Infer the user's primary shopping intent in 1-2 sentences.
2) Classify them into one of our segments: {{segment_definitions}}.
3) Output recommendation rules in JSON with:
   - target_price_range
   - key_benefits_to_prioritize
   - categories_to_focus
   - cross_sell_opportunities

Only output JSON.

This approach lets you personalize the "why" and "how" of recommending, while keeping the final product retrieval under tight technical and commercial control.
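To make the data flow concrete, here is a sketch of the plumbing around the prompt above: condensing raw session events into the compact payload, then defensively parsing the JSON strategy Claude returns. The event fields and helper names are illustrative assumptions; only the four output keys mirror the prompt's spec:

```python
import json

def build_session_payload(events, max_events=20):
    """Summarize recent behavior instead of shipping the full clickstream."""
    recent = events[-max_events:]
    return {
        "visited_categories": sorted({e["category"] for e in recent}),
        "filters_used": sorted({f for e in recent for f in e.get("filters", [])}),
        "total_time_on_page_s": sum(e.get("dwell_s", 0) for e in recent),
    }

def parse_strategy(reply_text):
    """Claude was told 'Only output JSON' -- validate anyway."""
    strategy = json.loads(reply_text)
    required = {"target_price_range", "key_benefits_to_prioritize",
                "categories_to_focus", "cross_sell_opportunities"}
    missing = required - strategy.keys()
    if missing:
        raise ValueError(f"strategy is missing keys: {missing}")
    return strategy
```

Summarizing before sending keeps token costs low and makes the payload readable for marketers reviewing what the model actually sees.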

Generate On-Brand, Dynamic Recommendation Copy at Scale

Even with better targeting, generic copy like “You might also like” undercuts performance. Claude can generate on-brand, segment-specific microcopy that explains why products were recommended, which increases trust and click-through.

Start by collecting your brand tone guidelines, past successful headlines, and any compliance constraints. Turn these into a reusable prompt template. For each recommendation slot, provide Claude with the chosen products, the inferred user intent, and the channel (web, email, app). Ask it to return short, tested variants you can A/B test.

Example prompt to Claude:
You are a senior copywriter for our brand. Follow these rules:
- Tone of voice: {{brand_tone}}
- Forbidden phrases: {{forbidden_phrases}}
- Max 60 characters per line.

Context:
- User intent: {{intent_summary}}
- Segment: {{segment_name}}
- Recommended products (titles + key features): {{products_json}}
- Channel: {{channel}}

Write 3 alternative headlines and 3 sublines that:
- Make the recommendation logic explicit ("Because you looked at..." etc.)
- Prioritize the benefits that matter for this intent.

Return as JSON with keys: headlines[], sublines[].

Integrate this into your CMS or email tool so marketers can trigger fresh, relevant copy per campaign without writing everything manually.
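Prompt constraints should also be enforced server-side; generated copy that slips past the prompt's rules gets filtered before it reaches the page. A minimal sketch, assuming hypothetical function and parameter names, could look like this:

```python
# Server-side enforcement of the same rules the prompt states
# (60-character limit, forbidden phrases). Names are assumptions.

def validate_copy(variants, *, forbidden_phrases, max_len=60):
    approved = []
    for text in variants:
        if len(text) > max_len:
            continue  # breaks the per-line length rule
        lowered = text.lower()
        if any(p.lower() in lowered for p in forbidden_phrases):
            continue  # contains an off-brand phrase
        approved.append(text)
    return approved

candidates = [
    "Because you looked at trail shoes",
    "Buy now!!! limited!!!",
    "A headline that is far far too long to fit in the sixty character limit",
]
print(validate_copy(candidates, forbidden_phrases=["buy now"]))
```

This "trust but verify" step means a single off-brand generation never blocks a campaign; it is simply dropped from the A/B pool.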

Let Claude Help Clean and Enrich Product Metadata for Better Matching

Untargeted recommendations are often a symptom of poor product metadata: missing attributes, inconsistent naming, or weak descriptions. Claude can help marketing and merchandising teams standardize and enrich catalog data, which directly improves matching quality.

Design a background job or one-off clean-up workflow: export products for priority categories, send batches to Claude, and ask it to normalize attributes (e.g., style, use case, skill level) based on titles and descriptions. Use a review step before writing back to your PIM or catalog database.

Example prompt to Claude:
You are helping standardize our product catalog.

For each product in {{products_json}}:
1) Infer missing attributes: use_case, target_user_level, style, primary_material.
2) Map values to the closest option in our allowed lists: {{allowed_values_json}}.
3) Output cleaned data in JSON, preserving product_id.

Do not invent impossible attributes. If unsure, set value to null.

With richer, consistent metadata, even simple rule-based or similarity-based recommenders become much more precise, and Claude’s own strategies can reference reliable attributes.
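The batch-and-review workflow around that prompt can be sketched as two small helpers: one splits the catalog export into Claude-sized batches, the other rejects any enriched value outside the allowed lists before anything is written back to the PIM. Attribute names and batch size are illustrative assumptions:

```python
def batch_products(products, batch_size=25):
    """Yield catalog slices small enough for one Claude request."""
    for i in range(0, len(products), batch_size):
        yield products[i:i + batch_size]

def review_enriched(original, enriched, allowed_values):
    """Keep only allowed attribute values; unknowns become null for review."""
    cleaned = {}
    for attr, value in enriched.items():
        if value is None or value in allowed_values.get(attr, []):
            cleaned[attr] = value
        else:
            cleaned[attr] = None  # flag for human review, don't write back
    cleaned["product_id"] = original["product_id"]
    return cleaned
```

Mapping unknown values to null mirrors the prompt's own instruction ("If unsure, set value to null"), so the human review queue catches exactly the cases the model could not resolve.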

Build a Claude-Assisted A/B Testing Workflow for Recommendations

Instead of guessing which recommendation patterns will work, use Claude to quickly generate testable variants and interpret the results. This makes experimentation faster and more structured without adding a data science headcount.

For a given page type (e.g., product detail page), define a set of hypotheses: upsell vs. cross-sell focus, price anchoring vs. value framing, bundle suggestions vs. single items. Ask Claude to design 2–3 distinct recommendation strategies and associated messaging for each segment. Implement them as variants in your experimentation tool and let traffic flow.

Example prompt to Claude:
You are designing A/B tests for product recommendations.

Context:
- Page type: PDP
- User segment: {{segment_name}}
- Business goal: increase AOV without reducing conversion
- Current recommendation options: {{candidate_products_json}}

Tasks:
1) Propose 3 distinct recommendation strategies (e.g., "premium upsell",
   "budget-friendly bundles").
2) For each strategy, specify:
   - selection_rules (JSON)
   - messaging_angle (1-2 sentences)
   - risk_notes (what might go wrong)

Return only JSON.

After tests run, feed the performance data back to Claude and ask it to summarize insights and suggest the next iteration. This closes the loop and turns raw metrics into actionable learnings for marketers.
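For the traffic split itself, a deterministic bucketing sketch (assuming a stable user identifier) keeps results clean: hashing the user id guarantees each visitor sees the same Claude-proposed strategy on every visit, which most experimentation tools require:

```python
import hashlib

def assign_variant(user_id: str, variants: list) -> str:
    """Deterministically bucket a user into one test variant."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["premium_upsell", "budget_bundles", "static_control"]
```

If your experimentation platform already handles assignment, skip this; the point is only that variant membership must be stable across sessions for the uplift numbers to be trustworthy.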

Integrate Claude into Existing Marketing Tools Instead of Rebuilding Everything

You don’t need to rip out your ESP, CDP or ecommerce platform to benefit from Claude-powered personalization. In most environments, a lightweight integration layer or marketing ops script is enough to orchestrate data in and out of Claude.

Practically, identify the few key touchpoints where recommendation quality matters most: homepage, PDP, cart page, post-purchase emails. For each, define a minimal data payload (user, context, candidates) and a standard response format from Claude (intent, strategy, copy). Implement this as a microservice or API endpoint that your current tools can call. Start with low-risk traffic slices, monitor performance, then scale up as confidence grows.

Reruption often packages this into a small AI middleware: one service that calls Claude, applies business rules, logs decisions, and returns safe responses. This keeps your core systems stable while allowing rapid iteration on prompts and logic.
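A minimal sketch of such a middleware entry point, with the Claude client, rule engine, and static fallback injected as stand-ins (all names are assumptions, not a specific library API):

```python
import json
import logging

def recommend(payload, call_claude, apply_rules, fallback):
    """Call Claude, apply business rules, log the decision, fail safe."""
    try:
        raw = call_claude(payload)      # your Claude API client
        strategy = json.loads(raw)      # Claude was asked for JSON only
        result = apply_rules(strategy)  # margin, stock, category rules
        logging.info("claude_strategy=%s", json.dumps(strategy))
        return result
    except Exception:
        # Any failure (timeout, malformed JSON, rule violation) degrades
        # gracefully to the existing static block -- never a broken page.
        logging.exception("Claude middleware failed; serving fallback")
        return fallback(payload)
```

The key design choice is the fallback path: the personalized experience is strictly additive, so an outage or a malformed model response simply reverts visitors to today's behavior.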

Monitor Quality and Set Realistic Performance Targets

Finally, treat Claude-driven personalization as a product that needs ongoing monitoring. Define clear KPIs for recommendations: click-through rate on recommended items, incremental AOV, attachment rate for key categories, and opt-out or complaint rates for overly aggressive upsells.

Set realistic targets for the first 3–6 months, such as: +10–20% recommendation CTR, +5–10% uplift in AOV for sessions that see personalized blocks, and improved conversion on targeted follow-up emails. Use dashboards to compare Claude-assisted experiences to your old static blocks, and create an alerting mechanism if metrics drop or output quality degrades.
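The dashboard comparison boils down to two numbers, sketched here as small helpers (names are illustrative): the CTR of each variant and the relative uplift of the Claude-assisted experience over the static control.

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate on recommended items."""
    return clicks / impressions if impressions else 0.0

def relative_uplift(test_ctr: float, control_ctr: float) -> float:
    """Relative uplift of the Claude-assisted variant vs. the static block."""
    if control_ctr == 0:
        raise ValueError("control CTR is zero; no baseline to compare against")
    return (test_ctr - control_ctr) / control_ctr
```

An alerting rule can then be as simple as "page the owner if uplift turns negative for two consecutive days", with statistical significance handled by your experimentation tool.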

Expected outcome: with a focused implementation on a few high-traffic touchpoints, most organisations can achieve measurable uplifts within 8–12 weeks. Over 6–12 months, as prompts, metadata and experiments mature, it’s realistic to see double-digit increases in recommendation engagement and a sustained lift in revenue per visitor, without linearly increasing marketing headcount.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve product recommendations compared to traditional systems?

Claude improves recommendations by interpreting user behavior, context and product metadata instead of relying only on simple rules or bestseller lists. It can infer a shopper’s intent (e.g., "researching premium options" vs. "looking for the cheapest replacement"), then suggest recommendation strategies and copy that match that intent.

Technically, you can keep your existing catalog and basic recommendation logic, while using Claude to decide which products to highlight, how to frame them, and how aggressively to upsell. This transforms your current blocks from generic carousels into adaptive experiences that explain why these products are shown, which typically increases click-through and average order value.

What team and skills do we need to get started?

You don’t need a full data science team to get started. A typical setup combines:

  • A marketing or CRM owner who understands segments, campaigns and business goals.
  • An engineer or marketing ops specialist who can work with APIs and your ecommerce/CRM platform.
  • Optionally, a merchandiser or product owner who can define recommendation guardrails and business rules.

Claude itself is accessed via API or through tools Reruption can help you set up. The main new skill is prompt and workflow design: specifying which inputs Claude sees, what outputs you expect (strategy, copy, metadata), and how those are used. We typically handle the initial architecture, prompts and integration, then train your team to iterate and maintain the system.

How long does implementation take before we see results?

For most organisations, a focused proof-of-concept on 1–2 key touchpoints (e.g., PDP and abandoned cart emails) can be live in 4–6 weeks. This includes scoping, data wiring, initial prompt design, and a first round of experiments.

Measurable uplifts – like higher recommendation CTR or AOV – usually appear within the first few weeks of live traffic, provided you have enough volume to run A/B tests. A more mature setup, where Claude informs strategies across multiple channels (web, email, app), often evolves over 3–6 months as you refine segments, metadata and experimentation routines.

What does Claude cost to run, and what ROI is realistic?

The running cost of Claude is primarily usage-based: you pay per token processed. For recommendation use cases, payloads can be kept compact by sending only relevant behavior summaries and product candidates, which keeps per-request costs low. In practice, infrastructure and engineering time are usually more significant than Claude’s API fees.

On the ROI side, realistic early targets are +10–20% uplift in recommendation CTR and +5–10% uplift in AOV for sessions exposed to personalized blocks, depending on your baseline. Because recommendations influence a large share of traffic, even modest percentage gains can translate into meaningful incremental revenue. Part of Reruption’s work is to design your implementation so that uplift can be measured clearly against a static-control baseline, ensuring you can attribute ROI with confidence.

How can Reruption support our implementation?

Reruption works as a Co-Preneur inside your organisation: we don’t just advise, we build. For untargeted product recommendations, we usually start with our AI PoC offering (9.900€), where we define your specific use case, check feasibility, and deliver a working prototype that plugs into a real page or campaign – not just slides.

From there, we can support you with end-to-end implementation: designing the recommendation and copy workflows, integrating Claude into your ecommerce or marketing stack, setting up quality and safety guardrails, and enabling your marketing team to iterate on prompts and experiments. The goal is to leave you with a production-ready, AI-first personalization capability that your team can own and grow, rather than a one-off pilot.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media