The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller carousels and simple cross-sell rules like “customers who bought X also bought Y.” On paper this looks efficient, but in practice it ignores who the customer is, what they have browsed, and what they are trying to achieve right now. The result is a recommendation layer that is technically present, but strategically blind.

Traditional approaches struggle because they are rigid, manual, and slow to adapt. Category managers handcraft rules, IT teams hard-code logic into templates, and any change requires another ticket in the backlog. These systems rarely combine behavioral data, content metadata and context (campaign, device, location), so they keep serving generic offers even when your data clearly signals otherwise. At the same time, many smaller teams don’t have the data science resources to build full-blown recommender engines.

The impact is bigger than a slightly lower click-through rate. Irrelevant recommendations increase bounce rates, suppress average order value, and erode trust – customers feel like your brand “doesn’t get them.” High-intent sessions end without an upsell, repeat buyers never discover relevant add-ons, and your performance marketing spend has to work harder to compensate. Over time, competitors with smarter personalization win more share of wallet, because they use every visit to deepen relevance instead of repeating the same generic carousel.

The good news: this is a very solvable problem. With modern AI like Claude, marketers can finally interpret customer behavior, catalog metadata and campaign context in real time, then generate tailored recommendation strategies and copy without waiting for a full data platform rebuild. At Reruption, we’ve helped organisations turn vague personalization ambitions into working AI prototypes and production workflows. In the sections below, you’ll find practical guidance to move from untargeted recommendations to Claude-powered experiences that actually match user intent.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first marketing workflows, we see a clear pattern: most organisations don’t lack data, they lack a way to interpret it quickly and turn it into relevant experiences. Claude is a powerful fit for this gap. Used correctly, it can sit between your raw customer signals and your front-end, generating personalized recommendation logic and copy that’s understandable to marketers and controllable by the business.

Think of Claude as a Personalization Brain, Not a Black-Box Recommender

Claude is not a plug-and-play "recommendation engine" in the traditional sense. Its real strength is interpreting multiple inputs – customer profile, on-site behavior, campaign context, and product attributes – and turning them into a coherent recommendation strategy. Strategically, this gives marketing teams a transparent, explainable layer instead of a mathematical black box.

When you frame Claude as a "personalization brain," you can ask it for reasoning: why it recommends certain assortments, what message to use, how aggressive the upsell should be for a given segment. This makes it easier for non-technical marketers to review, control, and iterate, without needing deep data science skills. The technical recommender components (e.g., similarity search, rules) can remain simple, while Claude handles the orchestration and narrative.

Start with Clear Personalization Guardrails

Strategically, you need to decide where personalization is allowed to flex and where it must stay within strict boundaries. For example, you may want full personalization in content and order of products, but strict business rules about pricing, margin thresholds, and compliance-sensitive categories.

Before implementation, define guardrails such as allowed categories per segment, minimum margin per recommendation slot, or exclusion rules (e.g., no cross-selling out-of-stock items). Claude can then be prompted to operate inside these constraints, choosing the best combination of products and messaging without violating brand or commercial policies. This reduces risk and makes stakeholders much more comfortable with AI-driven decisions.
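Guardrails like these work best as plain data plus a validation step that runs before any product reaches a recommendation slot, regardless of what Claude suggests. A minimal sketch, assuming illustrative field names (`min_margin`, `in_stock`, `category`) rather than any specific platform's schema:

```python
# Hypothetical guardrail config: one place to encode commercial and
# compliance rules that AI-driven recommendations must never violate.
GUARDRAILS = {
    "min_margin": 0.15,                               # margin floor per slot
    "blocked_categories": {"pharma", "gift_cards"},   # compliance-sensitive
    "require_in_stock": True,                         # no out-of-stock cross-sells
}

def passes_guardrails(product: dict, rules: dict = GUARDRAILS) -> bool:
    """Return True only if the product respects all business rules."""
    if rules["require_in_stock"] and not product.get("in_stock", False):
        return False
    if product.get("category") in rules["blocked_categories"]:
        return False
    if product.get("margin", 0.0) < rules["min_margin"]:
        return False
    return True

candidates = [
    {"sku": "A1", "category": "shoes", "margin": 0.30, "in_stock": True},
    {"sku": "B2", "category": "gift_cards", "margin": 0.50, "in_stock": True},
    {"sku": "C3", "category": "shoes", "margin": 0.05, "in_stock": True},
]
# Only "A1" survives: B2 is a blocked category, C3 is below the margin floor.
allowed = [p["sku"] for p in candidates if passes_guardrails(p)]
```

Because the filter runs after Claude's suggestions, the model can be creative inside the fence while stakeholders keep a hard commercial boundary.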

Prepare Your Teams for an AI-Assisted Workflow

Moving from static blocks to Claude-assisted personalization changes how marketing, product and engineering collaborate. Copywriters, CRM managers and merchandisers become designers of decision logic and prompts, not just creators of single assets.

Plan for enablement: train marketers on how to brief Claude, review AI output, and turn insights into experiments. Align with engineering on where Claude sits in the architecture (e.g., in a middleware layer or marketing ops tool) and who owns quality monitoring. Reruption often runs short enablement sprints so teams are comfortable iterating on prompts, taxonomies and KPIs instead of relying solely on external experts.

Balance Personalization Ambition with Data Reality

It’s tempting to jump straight to 1:1 personalization everywhere, but your data quality and integration maturity should shape the initial scope. If browsing data is fragmented or product metadata is messy, start with a few high-impact touchpoints (e.g., PDP recommendations, abandoned cart emails) where signals are clearer.

Claude can compensate for imperfect data by inferring intent from partial signals, but it can’t fix missing fundamentals like completely absent product descriptions. Strategically, define a staged roadmap: phase 1 uses Claude on well-structured campaigns and categories, phase 2 expands as data structures improve, and later phases move toward real-time, multi-channel orchestration. This avoids overpromising personalization you cannot reliably deliver.

Treat AI Personalization as an Ongoing Experiment, Not a One-Off Project

Untargeted recommendations are often the result of a project mindset: a recommender is implemented once, KPIs look “good enough,” and the setup is left alone. With Claude, enormous value comes from continually refining prompt strategies, segment definitions and creative angles based on live performance.

From a strategic perspective, set up a recurring experimentation cadence. Marketing should review which Claude-driven recommendation variants lift CTR, AOV or retention, then bake those learnings back into prompts and decision rules. This requires ownership: decide who is responsible for experimentation backlogs, success metrics, and sign-off. Organisations that treat AI personalization as a living capability, not a finished IT project, see compounding gains over time.

Used thoughtfully, Claude lets marketing teams escape rigid, untargeted recommendation blocks and move toward adaptive experiences that respect intent, context and business rules. The real unlock is not just smarter algorithms, but a workflow where marketers can directly shape and control how personalization behaves.

At Reruption, we build these AI-first workflows side by side with our clients – from proof-of-concept to production. If you’re serious about fixing generic product recommendations and want a partner who can combine strategy, engineering and hands-on experimentation, we’re happy to explore what Claude could do in your stack.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From healthcare to automotive: learn how companies successfully put AI to work.

Duke Health

Healthcare

Sepsis is a leading cause of hospital mortality, affecting over 1.7 million Americans annually with a 20-30% mortality rate when recognized late. At Duke Health, clinicians faced the challenge of early detection amid subtle, non-specific symptoms mimicking other conditions, leading to delayed interventions like antibiotics and fluids. Traditional scoring systems like qSOFA or NEWS suffered from low sensitivity (around 50-60%) and high false alarms, causing alert fatigue in busy wards and EDs. Additionally, integrating AI into real-time clinical workflows posed risks: ensuring model accuracy on diverse patient data, gaining clinician trust, and complying with regulations without disrupting care. Duke needed a custom, explainable model trained on its own EHR data to avoid vendor biases and enable seamless adoption across its three hospitals.

Solution

Duke's Sepsis Watch is a deep learning model leveraging real-time EHR data (vitals, labs, demographics) to continuously monitor hospitalized patients and predict sepsis onset 6 hours in advance with high precision. Developed by the Duke Institute for Health Innovation (DIHI), it triggers nurse-facing alerts (Best Practice Advisories) only when risk exceeds thresholds, minimizing fatigue. The model was trained on Duke-specific data from 250,000+ encounters, achieving AUROC of 0.935 at 3 hours prior and 88% sensitivity at low false positive rates. Integration via Epic EHR used a human-centered design, involving clinicians in iterations to refine alerts and workflows, ensuring safe deployment without overriding clinical judgment.

Results

  • AUROC: 0.935 for sepsis prediction 3 hours prior
  • Sensitivity: 88% at 3 hours early detection
  • Reduced time to antibiotics: 1.2 hours faster
  • Alert override rate: <10% (high clinician trust)
  • Sepsis bundle compliance: Improved by 20%
  • Mortality reduction: Associated with 12% drop in sepsis deaths
Read case study →

Forever 21

E-commerce

Forever 21, a leading fast-fashion retailer, faced significant hurdles in online product discovery. Customers struggled with text-based searches that couldn't capture subtle visual details like fabric textures, color variations, or exact styles amid a vast catalog of millions of SKUs. This led to high bounce rates exceeding 50% on search pages and frustrated shoppers abandoning carts. The fashion industry's visual-centric nature amplified these issues. Descriptive keywords often mismatched inventory due to subjective terms (e.g., 'boho dress' vs. specific patterns), resulting in poor user experiences and lost sales opportunities. Pre-AI, Forever 21's search relied on basic keyword matching, limiting personalization and efficiency in a competitive e-commerce landscape. Implementation challenges included scaling for high-traffic mobile users and handling diverse image inputs like user photos or screenshots.

Solution

To address this, Forever 21 deployed an AI-powered visual search feature across its app and website, enabling users to upload images for similar-item matching. Leveraging computer vision techniques, the system extracts features using pre-trained CNN models like VGG16, computes embeddings, and ranks products via cosine similarity or Euclidean distance metrics. The solution integrated seamlessly with existing infrastructure, processing queries in real time. Forever 21 likely partnered with a provider like ViSenze or built the system in-house, training on proprietary catalog data for fashion-specific accuracy. This overcame text limitations by focusing on visual semantics, supporting matching by style, color, and pattern. Key implementation work included fine-tuning models for diverse lighting and user images, and A/B testing for UX optimization.

Results

  • 25% increase in conversion rates from visual searches
  • 35% reduction in average search time
  • 40% higher engagement (pages per session)
  • 18% growth in average order value
  • 92% matching accuracy for similar items
  • 50% decrease in bounce rate on search pages
Read case study →

bunq

Banking

As bunq experienced rapid growth as the second-largest neobank in Europe, scaling customer support became a critical challenge. With millions of users demanding personalized banking information on accounts, spending patterns, and financial advice on demand, the company faced pressure to deliver instant responses without proportionally expanding its human support teams, which would increase costs and slow operations. Traditional search functions in the app were insufficient for complex, contextual queries, leading to inefficiencies and user frustration. Additionally, ensuring data privacy and accuracy in a highly regulated fintech environment posed risks. bunq needed a solution that could handle nuanced conversations while complying with EU banking regulations, avoiding hallucinations common in early GenAI models, and integrating seamlessly without disrupting app performance. The goal was to offload routine inquiries, allowing human agents to focus on high-value issues.

Solution

bunq addressed these challenges by developing Finn, a proprietary GenAI platform integrated directly into its mobile app, replacing the traditional search function with a conversational AI chatbot. After hiring over a dozen data specialists in the prior year, the team built Finn to query user-specific financial data securely, answer questions on balances, transactions, budgets, and even provide general advice while remembering conversation context across sessions. Launched as Europe's first AI-powered bank assistant in December 2023 following a beta, Finn evolved rapidly. By May 2024, it became fully conversational, enabling natural back-and-forth interactions. This retrieval-augmented generation (RAG) approach grounded responses in real-time user data, minimizing errors and enhancing personalization.

Results

  • 100,000+ questions answered within months post-beta (end-2023)
  • 40% of user queries fully resolved autonomously by mid-2024
  • 35% of queries assisted, totaling 75% immediate support coverage
  • Hired 12+ data specialists pre-launch for data infrastructure
  • Second-largest neobank in Europe by user base (1M+ users)
Read case study →

Netflix

Streaming Media

With over 17,000 titles and growing, Netflix faced the classic cold start problem and data sparsity in recommendations, where new users or obscure content lacked sufficient interaction data, leading to poor personalization and higher churn rates. Viewers often struggled to discover engaging content among thousands of options, resulting in prolonged browsing times and disengagement, estimated at up to 75% of session time wasted on searching rather than watching. This risked subscriber loss in a competitive streaming market, where retaining users costs far less than acquiring new ones. Scalability was another hurdle: handling 200M+ subscribers generating billions of daily interactions required processing petabytes of data in real time, while evolving viewer tastes demanded adaptive models beyond traditional collaborative filtering limitations like the popularity bias favoring mainstream hits. Early systems post-Netflix Prize (2006-2009) improved accuracy but struggled with contextual factors like device, time, and mood.

Solution

Netflix built a hybrid recommendation engine combining collaborative filtering (CF), starting with FunkSVD and Probabilistic Matrix Factorization from the Netflix Prize, and advanced deep learning models for embeddings and predictions. They consolidated multiple use-case models into a single multi-task neural network, improving performance and maintainability while supporting search, home page, and row recommendations. Key innovations include contextual bandits for exploration-exploitation, A/B testing on thumbnails and metadata, and content-based features from computer vision/audio analysis to mitigate cold starts. Real-time inference on Kubernetes clusters processes hundreds of millions of predictions per user session, personalized by viewing history, ratings, pauses, and even search queries. This evolved from the 2009 Prize winners to transformer-based architectures by 2023.

Results

  • 80% of viewer hours from recommendations
  • $1B+ annual savings in subscriber retention
  • 75% reduction in content browsing time
  • 10% RMSE improvement from Netflix Prize CF techniques
  • 93% of views from personalized rows
  • Handles billions of daily interactions for 270M subscribers
Read case study →

Cruise (GM)

Automotive

Developing a self-driving taxi service in dense urban environments posed immense challenges for Cruise. Complex scenarios like unpredictable pedestrians, erratic cyclists, construction zones, and adverse weather demanded near-perfect perception and decision-making in real time. Safety was paramount, as any failure could result in accidents, regulatory scrutiny, or public backlash. Early testing revealed gaps in handling edge cases, such as emergency vehicles or occluded objects, requiring robust AI to exceed human driver performance. A pivotal safety incident in October 2023 amplified these issues: a Cruise vehicle struck a pedestrian who had been pushed into its path by a hit-and-run driver, then dragged her while attempting to pull over, leading to a nationwide suspension of operations. This exposed vulnerabilities in post-collision behavior, sensor fusion under chaos, and regulatory compliance. Scaling to commercial robotaxi fleets while achieving zero at-fault incidents proved elusive amid $10B+ investments from GM.

Solution

Cruise addressed these with an integrated AI stack leveraging computer vision for perception and reinforcement learning for planning. Lidar, radar, and 30+ cameras fed into CNNs and transformers for object detection, semantic segmentation, and scene prediction, processing 360° views at high fidelity even in low light or rain. Reinforcement learning optimized trajectory planning and behavioral decisions, trained on millions of simulated miles to handle rare events. End-to-end neural networks refined motion forecasting, while simulation frameworks accelerated iteration without real-world risk. Post-incident, Cruise enhanced safety protocols, resuming supervised testing in 2024 with improved disengagement rates. GM's pivot integrated this tech into Super Cruise evolution for personal vehicles.

Results

  • 1,000,000+ miles driven fully autonomously by 2023
  • 5 million driverless miles used for AI model training
  • $10B+ cumulative investment by GM in Cruise (2016-2024)
  • 30,000+ miles per intervention in early unsupervised tests
  • Operations suspended Oct 2023; resumed supervised May 2024
  • Zero commercial robotaxi revenue; pivoted Dec 2024
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Use Claude to Translate Behavior into Recommendation Intents

Most recommendation systems jump straight from clicks to products. A more powerful pattern is to let Claude first interpret behavior as shopping intent, then match that intent to product sets using your existing tools. This keeps your architecture simple while dramatically improving relevance.

For example, map key behavioral signals (visited categories, time on page, filters used, campaign source) into a compact JSON payload. Send this to Claude with clear instructions to output an intent profile and recommendation strategy (e.g., “budget-conscious first-time buyer looking for durable basics”). Your front-end or middleware can then select products that fit the suggested criteria using your product catalog or search engine.

Example prompt to Claude:
You are a personalization strategist for an ecommerce site.

Input data:
- User profile: {{user_profile_json}}
- Session behavior: {{session_events_json}}
- Campaign context: {{campaign_info}}
- Product catalog facets: {{facet_summary}}

Tasks:
1) Infer the user's primary shopping intent in 1-2 sentences.
2) Classify them into one of our segments: {{segment_definitions}}.
3) Output recommendation rules in JSON with:
   - target_price_range
   - key_benefits_to_prioritize
   - categories_to_focus
   - cross_sell_opportunities

Only output JSON.

This approach lets you personalize the "why" and "how" of recommendations, while keeping the final product retrieval under tight technical and commercial control.
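The signal-compression step described above can be a small pure function. A sketch, assuming hypothetical event and profile field names; the resulting JSON string would fill the `{{session_events_json}}`-style slots in the prompt:

```python
import json

def build_intent_payload(user_profile, session_events, campaign, max_events=20):
    """Compress raw signals into the compact payload sent to Claude.
    Only the most recent events are kept to control token cost; field
    names (segment, category, filters) are illustrative."""
    recent = session_events[-max_events:]
    payload = {
        "user_profile": {
            key: user_profile[key]
            for key in ("segment", "lifetime_orders")   # whitelist, no PII
            if key in user_profile
        },
        "session": {
            "visited_categories": sorted({e["category"] for e in recent if "category" in e}),
            "filters_used": sorted({f for e in recent for f in e.get("filters", [])}),
            "event_count": len(recent),
        },
        "campaign": campaign,
    }
    return json.dumps(payload, sort_keys=True)
```

Whitelisting profile fields keeps personally identifiable data out of the prompt by default, which simplifies compliance conversations later.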

Generate On-Brand, Dynamic Recommendation Copy at Scale

Even with better targeting, generic copy like “You might also like” undercuts performance. Claude can generate on-brand, segment-specific microcopy that explains why products were recommended, which increases trust and click-through.

Start by collecting your brand tone guidelines, past successful headlines, and any compliance constraints. Turn these into a reusable prompt template. For each recommendation slot, provide Claude with the chosen products, the inferred user intent, and the channel (web, email, app). Ask it to return short, tested variants you can A/B test.

Example prompt to Claude:
You are a senior copywriter for our brand. Follow these rules:
- Tone of voice: {{brand_tone}}
- Forbidden phrases: {{forbidden_phrases}}
- Max 60 characters per line.

Context:
- User intent: {{intent_summary}}
- Segment: {{segment_name}}
- Recommended products (titles + key features): {{products_json}}
- Channel: {{channel}}

Write 3 alternative headlines and 3 sublines that:
- Make the recommendation logic explicit ("Because you looked at..." etc.)
- Prioritize the benefits that matter for this intent.

Return as JSON with keys: headlines[], sublines[].

Integrate this into your CMS or email tool so marketers can trigger fresh, relevant copy per campaign without writing everything manually.
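Before generated copy reaches the CMS, it pays to enforce the brand rules mechanically rather than trusting the model to follow them. A minimal sketch of such a validation step, assuming Claude returns the `headlines[]`/`sublines[]` JSON requested above:

```python
def validate_copy(response, forbidden_phrases, max_len=60):
    """Filter copy variants returned by the model: drop anything over
    the length limit or containing a forbidden phrase, so only
    rule-compliant variants can be published or A/B tested."""
    def ok(line):
        lower = line.lower()
        return len(line) <= max_len and not any(
            phrase.lower() in lower for phrase in forbidden_phrases
        )
    return {
        "headlines": [h for h in response.get("headlines", []) if ok(h)],
        "sublines": [s for s in response.get("sublines", []) if ok(s)],
    }
```

If a slot ends up with zero surviving variants, fall back to your static default copy rather than publishing nothing.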

Let Claude Help Clean and Enrich Product Metadata for Better Matching

Untargeted recommendations are often a symptom of poor product metadata: missing attributes, inconsistent naming, or weak descriptions. Claude can help marketing and merchandising teams standardize and enrich catalog data, which directly improves matching quality.

Design a background job or one-off clean-up workflow: export products for priority categories, send batches to Claude, and ask it to normalize attributes (e.g., style, use case, skill level) based on titles and descriptions. Use a review step before writing back to your PIM or catalog database.

Example prompt to Claude:
You are helping standardize our product catalog.

For each product in {{products_json}}:
1) Infer missing attributes: use_case, target_user_level, style, primary_material.
2) Map values to the closest option in our allowed lists: {{allowed_values_json}}.
3) Output cleaned data in JSON, preserving product_id.

Do not invent impossible attributes. If unsure, set value to null.

With richer, consistent metadata, even simple rule-based or similarity-based recommenders become much more precise, and Claude’s own strategies can reference reliable attributes.
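The review step can be partially automated by snapping model output onto the allowed value lists and nulling anything that doesn't fit, mirroring the prompt's "if unsure, set value to null" rule. A sketch with hypothetical attribute values:

```python
def normalize_attribute(raw_value, allowed_values):
    """Map a model-inferred value onto the closest allowed option.
    Returns None when nothing matches, mirroring the prompt's rule
    to set uncertain values to null."""
    if raw_value is None:
        return None
    cleaned = raw_value.strip().lower()
    for option in allowed_values:       # exact match first
        if cleaned == option.lower():
            return option
    for option in allowed_values:       # loose containment as a fallback
        if option.lower() in cleaned:   # e.g. "beginner friendly" -> "beginner"
            return option
    return None
```

Anything normalized to `None` goes to the human review queue instead of being written back to the PIM, so the model never silently invents attributes.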

Build a Claude-Assisted A/B Testing Workflow for Recommendations

Instead of guessing which recommendation patterns will work, use Claude to quickly generate testable variants and interpret the results. This makes experimentation faster and more structured without adding a data science headcount.

For a given page type (e.g., product detail page), define a set of hypotheses: upsell vs. cross-sell focus, price anchoring vs. value framing, bundle suggestions vs. single items. Ask Claude to design 2–3 distinct recommendation strategies and associated messaging for each segment. Implement them as variants in your experimentation tool and let traffic flow.

Example prompt to Claude:
You are designing A/B tests for product recommendations.

Context:
- Page type: PDP
- User segment: {{segment_name}}
- Business goal: increase AOV without reducing conversion
- Current recommendation options: {{candidate_products_json}}

Tasks:
1) Propose 3 distinct recommendation strategies (e.g., "premium upsell",
   "budget-friendly bundles").
2) For each strategy, specify:
   - selection_rules (JSON)
   - messaging_angle (1-2 sentences)
   - risk_notes (what might go wrong)

Return only JSON.

After tests run, feed the performance data back to Claude and ask it to summarize insights and suggest the next iteration. This closes the loop and turns raw metrics into actionable learnings for marketers.
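For serving the variants themselves, a common pattern is deterministic bucketing: hash the user and experiment name so each user sees a stable variant across sessions without storing any assignment state. A minimal sketch (experiment and variant names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one variant: the same user
    always gets the same variant for a given experiment, no state needed."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Note this gives roughly equal traffic splits; if you need weighted splits or holdout groups, your experimentation tool's native assignment is usually the better home for that logic.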

Integrate Claude into Existing Marketing Tools Instead of Rebuilding Everything

You don’t need to rip out your ESP, CDP or ecommerce platform to benefit from Claude-powered personalization. In most environments, a lightweight integration layer or marketing ops script is enough to orchestrate data in and out of Claude.

Practically, identify the few key touchpoints where recommendation quality matters most: homepage, PDP, cart page, post-purchase emails. For each, define a minimal data payload (user, context, candidates) and a standard response format from Claude (intent, strategy, copy). Implement this as a microservice or API endpoint that your current tools can call. Start with low-risk traffic slices, monitor performance, then scale up as confidence grows.

Reruption often packages this into a small AI middleware: one service that calls Claude, applies business rules, logs decisions, and returns safe responses. This keeps your core systems stable while allowing rapid iteration on prompts and logic.
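The core flow of such a middleware can be sketched in a few lines. Here the Claude call is stubbed with a canned response so the validation, logging, and fallback logic are visible offline; in production it would be an Anthropic API call with the prompts shown earlier:

```python
import json
import logging

def call_claude(payload_json: str) -> str:
    """Stub for the real model call, returning a canned strategy so the
    surrounding flow can be exercised without network access."""
    return json.dumps({"strategy": "cross_sell", "categories_to_focus": ["socks"]})

def recommend(payload: dict, fallback: dict) -> dict:
    """Middleware entry point: call the model, validate the response
    shape, log the decision, and fall back to a static strategy on
    any failure so the page never breaks."""
    try:
        decision = json.loads(call_claude(json.dumps(payload)))
        if "strategy" not in decision:      # minimal schema check
            raise ValueError("response missing 'strategy'")
        logging.info("model decision: %s", decision)
        return decision
    except Exception:
        logging.exception("falling back to static recommendations")
        return fallback
```

The always-available static fallback is what makes the "low-risk traffic slices" approach safe: a malformed model response degrades to your current behavior rather than an empty block.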

Monitor Quality and Set Realistic Performance Targets

Finally, treat Claude-driven personalization as a product that needs ongoing monitoring. Define clear KPIs for recommendations: click-through rate on recommended items, incremental AOV, attachment rate for key categories, and opt-out or complaint rates for overly aggressive upsells.

Set realistic targets for the first 3–6 months, such as: +10–20% recommendation CTR, +5–10% uplift in AOV for sessions that see personalized blocks, and improved conversion on targeted follow-up emails. Use dashboards to compare Claude-assisted experiences to your old static blocks, and create an alerting mechanism if metrics drop or output quality degrades.

Expected outcome: with a focused implementation on a few high-traffic touchpoints, most organisations can achieve measurable uplifts within 8–12 weeks. Over 6–12 months, as prompts, metadata and experiments mature, it’s realistic to see double-digit increases in recommendation engagement and a sustained lift in revenue per visitor, without linearly increasing marketing headcount.
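The alerting mechanism can start very simply: compare the Claude-assisted variant's CTR against the static control and flag when the required uplift disappears. A sketch (thresholds are examples, not benchmarks):

```python
def recommendation_ctr(impressions: int, clicks: int) -> float:
    """Click-through rate for a recommendation block; 0.0 when no traffic."""
    return clicks / impressions if impressions else 0.0

def should_alert(variant_ctr: float, control_ctr: float,
                 min_relative_uplift: float = 0.0) -> bool:
    """True when the AI-assisted variant stops beating the static
    control by at least the required relative uplift."""
    return variant_ctr < control_ctr * (1 + min_relative_uplift)
```

Wire this into your dashboard's scheduled job: when `should_alert` fires for several consecutive days, that is the trigger to review prompts and recent catalog or segment changes.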

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve product recommendations compared to static rules?

Claude improves recommendations by interpreting user behavior, context and product metadata instead of relying only on simple rules or bestseller lists. It can infer a shopper’s intent (e.g., "researching premium options" vs. "looking for the cheapest replacement"), then suggest recommendation strategies and copy that match that intent.

Technically, you can keep your existing catalog and basic recommendation logic, while using Claude to decide which products to highlight, how to frame them, and how aggressively to upsell. This transforms your current blocks from generic carousels into adaptive experiences that explain why these products are shown, which typically increases click-through and average order value.

What skills and team setup do we need to get started?

You don’t need a full data science team to get started. A typical setup combines:

  • A marketing or CRM owner who understands segments, campaigns and business goals.
  • An engineer or marketing ops specialist who can work with APIs and your ecommerce/CRM platform.
  • Optionally, a merchandiser or product owner who can define recommendation guardrails and business rules.

Claude itself is accessed via API or through tools Reruption can help you set up. The main new skill is prompt and workflow design: specifying which inputs Claude sees, what outputs you expect (strategy, copy, metadata), and how those are used. We typically handle the initial architecture, prompts and integration, then train your team to iterate and maintain the system.

How quickly can we expect to see results?

For most organisations, a focused proof-of-concept on 1–2 key touchpoints (e.g., PDP and abandoned cart emails) can be live in 4–6 weeks. This includes scoping, data wiring, initial prompt design, and a first round of experiments.

Measurable uplifts – like higher recommendation CTR or AOV – usually appear within the first few weeks of live traffic, provided you have enough volume to run A/B tests. A more mature setup, where Claude informs strategies across multiple channels (web, email, app), often evolves over 3–6 months as you refine segments, metadata and experimentation routines.

What does it cost, and what ROI is realistic?

The running cost of Claude is primarily usage-based: you pay per token processed. For recommendation use cases, payloads can be kept compact by sending only relevant behavior summaries and product candidates, which keeps per-request costs low. In practice, infrastructure and engineering time are usually more significant than Claude’s API fees.

On the ROI side, realistic early targets are +10–20% uplift in recommendation CTR and +5–10% uplift in AOV for sessions exposed to personalized blocks, depending on your baseline. Because recommendations influence a large share of traffic, even modest percentage gains can translate into meaningful incremental revenue. Part of Reruption’s work is to design your implementation so that uplift can be measured clearly against a static-control baseline, ensuring you can attribute ROI with confidence.

How does Reruption support an implementation like this?

Reruption works as a Co-Preneur inside your organisation: we don’t just advise, we build. For untargeted product recommendations, we usually start with our AI PoC offering (9.900€), where we define your specific use case, check feasibility, and deliver a working prototype that plugs into a real page or campaign – not just slides.

From there, we can support you with end-to-end implementation: designing the recommendation and copy workflows, integrating Claude into your ecommerce or marketing stack, setting up quality and safety guardrails, and enabling your marketing team to iterate on prompts and experiments. The goal is to leave you with a production-ready, AI-first personalization capability that your team can own and grow, rather than a one-off pilot.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media