The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller carousels and simple cross-sell rules like “customers who bought X also bought Y.” On paper this looks efficient, but in practice it ignores who the customer is, what they have browsed, and what they are trying to achieve right now. The result is a recommendation layer that is technically present, but strategically blind.

Traditional approaches struggle because they are rigid, manual, and slow to adapt. Category managers handcraft rules, IT teams hard-code logic into templates, and any change requires another ticket in the backlog. These systems rarely combine behavioral data, content metadata and context (campaign, device, location), so they keep serving generic offers even when your data clearly signals otherwise. At the same time, many smaller teams don’t have the data science resources to build full-blown recommender engines.

The impact is bigger than a slightly lower click-through rate. Irrelevant recommendations increase bounce rates, suppress average order value, and erode trust – customers feel like your brand “doesn’t get them.” High-intent sessions end without an upsell, repeat buyers never discover relevant add-ons, and your performance marketing spend has to work harder to compensate. Over time, competitors with smarter personalization win more share of wallet, because they use every visit to deepen relevance instead of repeating the same generic carousel.

The good news: this is a very solvable problem. With modern AI like Claude, marketers can finally interpret customer behavior, catalog metadata and campaign context in real time, then generate tailored recommendation strategies and copy without waiting for a full data platform rebuild. At Reruption, we’ve helped organisations turn vague personalization ambitions into working AI prototypes and production workflows. In the sections below, you’ll find practical guidance to move from untargeted recommendations to Claude-powered experiences that actually match user intent.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first marketing workflows, we see a clear pattern: most organisations don’t lack data, they lack a way to interpret it quickly and turn it into relevant experiences. Claude is a powerful fit for this gap. Used correctly, it can sit between your raw customer signals and your front-end, generating personalized recommendation logic and copy that’s understandable to marketers and controllable by the business.

Think of Claude as a Personalization Brain, Not a Black-Box Recommender

Claude is not a plug-and-play "recommendation engine" in the traditional sense. Its real strength is interpreting multiple inputs – customer profile, on-site behavior, campaign context, and product attributes – and turning them into a coherent recommendation strategy. Strategically, this gives marketing teams a transparent, explainable layer instead of a mathematical black box.

When you frame Claude as a "personalization brain," you can ask it for reasoning: why it recommends certain assortments, what message to use, how aggressive the upsell should be for a given segment. This makes it easier for non-technical marketers to review, control, and iterate, without needing deep data science skills. The technical recommender components (e.g., similarity search, rules) can remain simple, while Claude handles the orchestration and narrative.

Start with Clear Personalization Guardrails

Strategically, you need to decide where personalization is allowed to flex and where it must stay within strict boundaries. For example, you may want full personalization in content and order of products, but strict business rules about pricing, margin thresholds, and compliance-sensitive categories.

Before implementation, define guardrails such as allowed categories per segment, minimum margin per recommendation slot, or exclusion rules (e.g., no cross-selling out-of-stock items). Claude can then be prompted to operate inside these constraints, choosing the best combination of products and messaging without violating brand or commercial policies. This reduces risk and makes stakeholders much more comfortable with AI-driven decisions.
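
As an illustration, such guardrails can live in a small, machine-readable config that your integration layer renders into Claude's system prompt. The sketch below is Python; the segment names, categories and thresholds are invented placeholders, not a fixed schema:

# guardrails.py – illustrative guardrail config injected into every recommendation prompt
GUARDRAILS = {
    "segment": "first_time_buyers",                  # hypothetical segment name
    "allowed_categories": ["basics", "accessories"],
    "excluded_categories": ["clearance", "prescription"],
    "min_margin_per_slot": 0.25,                     # minimum 25% margin per recommendation slot
    "exclude_out_of_stock": True,
    "max_upsell_price_factor": 1.5,                  # never upsell beyond 1.5x the viewed item
}

def guardrail_block(rules: dict) -> str:
    """Render guardrails as explicit, non-negotiable constraints for the system prompt."""
    lines = [f"- {key}: {value}" for key, value in rules.items()]
    return "Hard constraints (never violate):\n" + "\n".join(lines)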

Prepare Your Teams for an AI-Assisted Workflow

Moving from static blocks to Claude-assisted personalization changes how marketing, product and engineering collaborate. Copywriters, CRM managers and merchandisers become designers of decision logic and prompts, not just creators of single assets.

Plan for enablement: train marketers on how to brief Claude, review AI output, and turn insights into experiments. Align with engineering on where Claude sits in the architecture (e.g., in a middleware layer or marketing ops tool) and who owns quality monitoring. Reruption often runs short enablement sprints so teams are comfortable iterating on prompts, taxonomies and KPIs instead of relying solely on external experts.

Balance Personalization Ambition with Data Reality

It’s tempting to jump straight to 1:1 personalization everywhere, but your data quality and integration maturity should shape the initial scope. If browsing data is fragmented or product metadata is messy, start with a few high-impact touchpoints (e.g., PDP recommendations, abandoned cart emails) where signals are clearer.

Claude can compensate for imperfect data by inferring intent from partial signals, but it can’t fix missing fundamentals like completely absent product descriptions. Strategically, define a staged roadmap: phase 1 uses Claude on well-structured campaigns and categories, phase 2 expands as data structures improve, and later phases move toward real-time, multi-channel orchestration. This avoids overpromising personalization you cannot reliably deliver.

Treat AI Personalization as an Ongoing Experiment, Not a One-Off Project

Untargeted recommendations are often the result of a project mindset: a recommender is implemented once, KPIs look “good enough,” and the setup is left alone. With Claude, enormous value comes from continually refining prompt strategies, segment definitions and creative angles based on live performance.

From a strategic perspective, set up a recurring experimentation cadence. Marketing should review which Claude-driven recommendation variants lift CTR, AOV or retention, then bake those learnings back into prompts and decision rules. This requires ownership: decide who is responsible for experimentation backlogs, success metrics, and sign-off. Organisations that treat AI personalization as a living capability, not a finished IT project, see compounding gains over time.

Used thoughtfully, Claude lets marketing teams escape rigid, untargeted recommendation blocks and move toward adaptive experiences that respect intent, context and business rules. The real unlock is not just smarter algorithms, but a workflow where marketers can directly shape and control how personalization behaves.

At Reruption, we build these AI-first workflows side by side with our clients – from proof-of-concept to production. If you’re serious about fixing generic product recommendations and want a partner who can combine strategy, engineering and hands-on experimentation, we’re happy to explore what Claude could do in your stack.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Logistics to Healthcare: Learn how companies successfully use AI.

FedEx

Logistics

FedEx faced suboptimal truck routing challenges in its vast logistics network, where static planning led to excess mileage, inflated fuel costs, and higher labor expenses. Handling millions of packages daily across complex routes, traditional methods struggled with real-time variables like traffic, weather disruptions, and fluctuating demand, resulting in inefficient vehicle utilization and delayed deliveries. These inefficiencies not only drove up operational costs but also increased carbon emissions and undermined customer satisfaction in a highly competitive shipping industry. Scaling solutions for dynamic optimization across thousands of trucks required advanced computational approaches beyond conventional heuristics.

Solution

Machine learning models integrated with heuristic optimization algorithms formed the core of FedEx's AI-driven route planning system, enabling dynamic route adjustments based on real-time data feeds including traffic, weather, and package volumes. The system employs deep learning for predictive analytics alongside heuristics like genetic algorithms to solve the vehicle routing problem (VRP) efficiently, balancing loads and minimizing empty miles. Implemented as part of FedEx's broader AI supply chain transformation, the solution dynamically reoptimizes routes throughout the day, incorporating sense-and-respond capabilities to adapt to disruptions and enhance overall network efficiency.

Results

  • 700,000 excess miles eliminated daily from truck routes
  • Multi-million dollar annual savings in fuel and labor costs
  • Improved delivery time estimate accuracy via ML models
  • Enhanced operational efficiency reducing costs industry-wide
  • Boosted on-time performance through real-time optimizations
  • Significant reduction in carbon footprint from mileage savings
Read case study →

UC San Diego Health

Healthcare

Sepsis, a life-threatening condition, poses a major threat in emergency departments, with delayed detection contributing to high mortality rates of up to 20-30% in severe cases. At UC San Diego Health, an academic medical center handling over 1 million patient visits annually, nonspecific early symptoms made timely intervention challenging, exacerbating outcomes in busy ERs. A randomized study highlighted the need for proactive tools beyond traditional scoring systems like qSOFA. Hospital capacity management and patient flow were further strained post-COVID, with bed shortages leading to prolonged admission wait times and transfer delays. Balancing elective surgeries, emergencies, and discharges required real-time visibility. Safely integrating generative AI, such as GPT-4 in Epic, risked data privacy breaches and inaccurate clinical advice. These issues demanded scalable AI solutions to predict risks, streamline operations, and responsibly adopt emerging tech without compromising care quality.

Solution

UC San Diego Health implemented COMPOSER, a deep learning model trained on electronic health records to predict sepsis risk up to 6-12 hours early, triggering Epic Best Practice Advisory (BPA) alerts for nurses. This quasi-experimental approach across two ERs integrated seamlessly with existing workflows. Mission Control, a $22M AI-powered operations command center, uses predictive analytics for real-time bed assignments, patient transfers, and capacity forecasting, reducing bottlenecks. Led by Chief Health AI Officer Karandeep Singh, it leverages data from Epic for holistic visibility. For generative AI, pilots with Epic's GPT-4 enable NLP queries and automated patient replies, governed by strict safety protocols to mitigate hallucinations and ensure HIPAA compliance. This multi-faceted strategy addressed detection, flow, and innovation challenges.

Results

  • Sepsis in-hospital mortality: 17% reduction
  • Lives saved annually: 50 across two ERs
  • Sepsis bundle compliance: Significant improvement
  • 72-hour SOFA score change: Reduced deterioration
  • ICU encounters: Decreased post-implementation
  • Patient throughput: Improved via Mission Control
Read case study →

Mass General Brigham

Healthcare

Mass General Brigham, one of the largest healthcare systems in the U.S., faced a deluge of medical imaging data from radiology, pathology, and surgical procedures. With millions of scans annually across its 12 hospitals, clinicians struggled with analysis overload, leading to delays in diagnosis and increased burnout rates among radiologists and surgeons. The need for precise, rapid interpretation was critical, as manual reviews limited throughput and risked errors in complex cases like tumor detection or surgical risk assessment. Additionally, operative workflows required better predictive tools. Surgeons needed models to forecast complications, optimize scheduling, and personalize interventions, but fragmented data silos and regulatory hurdles impeded progress. Staff shortages exacerbated these issues, demanding decision support systems to alleviate cognitive load and improve patient outcomes.

Solution

To address these challenges, Mass General Brigham established a dedicated Artificial Intelligence Center, centralizing research, development, and deployment of hundreds of AI models focused on computer vision for imaging and predictive analytics for surgery. This enterprise-wide initiative integrates ML into clinical workflows, partnering with tech giants like Microsoft for foundation models in medical imaging. Key solutions include deep learning algorithms for automated anomaly detection in X-rays, MRIs, and CTs, reducing radiologist review time. For surgery, predictive models analyze patient data to forecast post-operative risks, improving planning. Robust governance frameworks ensure ethical deployment, addressing bias and explainability.

Results

  • $30 million AI investment fund established
  • Hundreds of AI models managed for radiology and pathology
  • Improved diagnostic throughput via AI-assisted radiology
  • AI foundation models developed through Microsoft partnership
  • Initiatives for AI governance in medical imaging deployed
  • Reduced clinician workload and burnout through decision support
Read case study →

Ford Motor Company

Manufacturing

In Ford's automotive manufacturing plants, vehicle body sanding and painting represented a major bottleneck. These labor-intensive tasks required workers to manually sand car bodies, a process prone to inconsistencies, fatigue, and ergonomic injuries due to repetitive motions over hours. Traditional robotic systems struggled with the variability in body panels, curvatures, and material differences, limiting full automation in legacy 'brownfield' facilities. Additionally, achieving consistent surface quality for painting was critical, as defects could lead to rework, delays, and increased costs. With rising demand for electric vehicles (EVs) and production scaling, Ford needed to modernize without massive CapEx or disrupting ongoing operations, while prioritizing workforce safety and upskilling. The challenge was to integrate scalable automation that collaborated with humans seamlessly.

Solution

Ford addressed this by deploying AI-guided collaborative robots (cobots) equipped with machine vision and automation algorithms. In the body shop, six cobots use cameras and AI to scan car bodies in real time, detecting surfaces, defects, and contours with high precision. These systems employ computer vision models for 3D mapping and path planning, allowing cobots to adapt dynamically without reprogramming. The solution emphasized a workforce-first brownfield strategy, starting with pilot deployments in Michigan plants. Cobots handle sanding autonomously while humans oversee quality, reducing injury risks. Partnerships with robotics firms and in-house AI development enabled low-code inspection tools for easy scaling.

Results

  • Sanding time: 35 seconds per full car body (vs. hours manually)
  • Productivity boost: 4x faster assembly processes
  • Injury reduction: 70% fewer ergonomic strains in cobot zones
  • Consistency improvement: 95% defect-free surfaces post-sanding
  • Deployment scale: 6 cobots operational, expanding to 50+ units
  • ROI timeline: Payback in 12-18 months per plant
Read case study →

HSBC

Banking

As a global banking titan handling trillions in annual transactions, HSBC grappled with escalating fraud and money laundering risks. Traditional systems struggled to process over 1 billion transactions monthly, generating excessive false positives that burdened compliance teams, slowed operations, and increased costs. Ensuring real-time detection while minimizing disruptions to legitimate customers was critical, alongside strict regulatory compliance in diverse markets. Customer service faced high volumes of inquiries requiring 24/7 multilingual support, straining resources. Simultaneously, HSBC sought to pioneer generative AI research for innovation in personalization and automation, but challenges included ethical deployment, maintaining human oversight as AI capabilities advance, data privacy, and integration across legacy systems without compromising security. Scaling these solutions globally demanded robust governance to maintain trust and adhere to evolving regulations.

Solution

HSBC tackled fraud with machine learning models powered by Google Cloud's Transaction Monitoring 360, enabling AI to detect anomalies and financial crime patterns in real time across vast datasets. This shifted from rigid rules to dynamic, adaptive learning. For customer service, NLP-driven chatbots were rolled out to handle routine queries, provide instant responses, and escalate complex issues, enhancing accessibility worldwide. In parallel, HSBC advanced generative AI through internal research, sandboxes, and a landmark multi-year partnership with Mistral AI (announced December 2024), integrating tools for document analysis, translation, enhanced fraud detection, and automation, as well as client-facing innovations – all under ethical frameworks with human oversight.

Results

  • Screens over 1 billion transactions monthly for financial crime
  • Significant reduction in false positives and manual reviews (up to 60-90% in some models)
  • Hundreds of AI use cases deployed across global operations
  • Multi-year Mistral AI partnership (Dec 2024) to accelerate genAI productivity
  • Enhanced real-time fraud alerts, reducing compliance workload
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Use Claude to Translate Behavior into Recommendation Intents

Most recommendation systems jump straight from clicks to products. A more powerful pattern is to let Claude first interpret behavior as shopping intent, then match that intent to product sets using your existing tools. This keeps your architecture simple while dramatically improving relevance.

For example, map key behavioral signals (visited categories, time on page, filters used, campaign source) into a compact JSON payload. Send this to Claude with clear instructions to output an intent profile and recommendation strategy (e.g., “budget-conscious first-time buyer looking for durable basics”). Your front-end or middleware can then select products that fit the suggested criteria using your product catalog or search engine.

Example prompt to Claude:
You are a personalization strategist for an ecommerce site.

Input data:
- User profile: {{user_profile_json}}
- Session behavior: {{session_events_json}}
- Campaign context: {{campaign_info}}
- Product catalog facets: {{facet_summary}}

Tasks:
1) Infer the user's primary shopping intent in 1-2 sentences.
2) Classify them into one of our segments: {{segment_definitions}}.
3) Output recommendation rules in JSON with:
   - target_price_range
   - key_benefits_to_prioritize
   - categories_to_focus
   - cross_sell_opportunities

Only output JSON.

This approach lets you personalize the "why" and "how" of recommending, while keeping the final product retrieval under tight technical and commercial control.
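
To make this concrete, here is a minimal sketch of how the prompt above could be called from a marketing ops script using the Anthropic Python SDK. The model name, segment taxonomy and helper names are assumptions to adapt to your stack:

import json
import anthropic  # official Anthropic SDK; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

PROMPT_TEMPLATE = "..."  # the personalization-strategist prompt shown above, with {{...}} placeholders
SEGMENTS = ["bargain_hunter", "premium_researcher", "loyal_repeat_buyer"]  # your own taxonomy

def fill(template: str, values: dict) -> str:
    """Replace the {{placeholder}} markers used in the prompt template."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", json.dumps(value))
    return template

def infer_recommendation_rules(user_profile, session_events, campaign, facets) -> dict:
    """Ask Claude for an intent summary, segment and recommendation rules as JSON."""
    prompt = fill(PROMPT_TEMPLATE, {
        "user_profile_json": user_profile,
        "session_events_json": session_events,
        "campaign_info": campaign,
        "facet_summary": facets,
        "segment_definitions": SEGMENTS,
    })
    response = client.messages.create(
        model="claude-sonnet-4-5",   # substitute the Claude model you actually use
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)  # product retrieval then applies these rules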

Generate On-Brand, Dynamic Recommendation Copy at Scale

Even with better targeting, generic copy like “You might also like” undercuts performance. Claude can generate on-brand, segment-specific microcopy that explains why products were recommended, which increases trust and click-through.

Start by collecting your brand tone guidelines, past successful headlines, and any compliance constraints. Turn these into a reusable prompt template. For each recommendation slot, provide Claude with the chosen products, the inferred user intent, and the channel (web, email, app). Ask it to return short, tested variants you can A/B test.

Example prompt to Claude:
You are a senior copywriter for our brand. Follow these rules:
- Tone of voice: {{brand_tone}}
- Forbidden phrases: {{forbidden_phrases}}
- Max 60 characters per line.

Context:
- User intent: {{intent_summary}}
- Segment: {{segment_name}}
- Recommended products (titles + key features): {{products_json}}
- Channel: {{channel}}

Write 3 alternative headlines and 3 sublines that:
- Make the recommendation logic explicit ("Because you looked at..." etc.)
- Prioritize the benefits that matter for this intent.

Return as JSON with keys: headlines[], sublines[].

Integrate this into your CMS or email tool so marketers can trigger fresh, relevant copy per campaign without writing everything manually.
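
Before generated copy reaches your CMS or ESP, a small validation step can enforce the length and phrasing rules automatically. A sketch, assuming Claude returns the JSON structure requested above (the forbidden-phrase list is a placeholder):

import json

MAX_LINE_LENGTH = 60
FORBIDDEN_PHRASES = ["guaranteed results", "best ever"]  # illustrative; use your real compliance list

def validate_copy(raw_response: str) -> dict:
    """Parse Claude's copy JSON and drop variants that break length or brand rules."""
    data = json.loads(raw_response)

    def is_valid(line: str) -> bool:
        return len(line) <= MAX_LINE_LENGTH and not any(
            phrase.lower() in line.lower() for phrase in FORBIDDEN_PHRASES
        )

    return {
        "headlines": [h for h in data.get("headlines", []) if is_valid(h)],
        "sublines": [s for s in data.get("sublines", []) if is_valid(s)],
    }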

Let Claude Help Clean and Enrich Product Metadata for Better Matching

Untargeted recommendations are often a symptom of poor product metadata: missing attributes, inconsistent naming, or weak descriptions. Claude can help marketing and merchandising teams standardize and enrich catalog data, which directly improves matching quality.

Design a background job or one-off clean-up workflow: export products for priority categories, send batches to Claude, and ask it to normalize attributes (e.g., style, use case, skill level) based on titles and descriptions. Use a review step before writing back to your PIM or catalog database.

Example prompt to Claude:
You are helping standardize our product catalog.

For each product in {{products_json}}:
1) Infer missing attributes: use_case, target_user_level, style, primary_material.
2) Map values to the closest option in our allowed lists: {{allowed_values_json}}.
3) Output cleaned data in JSON, preserving product_id.

Do not invent impossible attributes. If unsure, set value to null.

With richer, consistent metadata, even simple rule-based or similarity-based recommenders become much more precise, and Claude’s own strategies can reference reliable attributes.
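
One possible shape for that clean-up workflow: batch products through Claude and write the enriched records to a review file that merchandisers approve before anything touches the PIM. The batch size, file name and model are illustrative assumptions:

import json
import anthropic

client = anthropic.Anthropic()
ENRICH_PROMPT = "..."  # the catalog-standardization prompt shown above, with {{...}} placeholders

def enrich_batch(products: list, allowed_values: dict) -> list:
    """Send one batch of products to Claude and return the cleaned records."""
    prompt = (ENRICH_PROMPT
              .replace("{{products_json}}", json.dumps(products))
              .replace("{{allowed_values_json}}", json.dumps(allowed_values)))
    response = client.messages.create(
        model="claude-sonnet-4-5",  # substitute your model
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)

def run_cleanup(products: list, allowed_values: dict, batch_size: int = 20) -> None:
    enriched = []
    for i in range(0, len(products), batch_size):
        enriched.extend(enrich_batch(products[i:i + batch_size], allowed_values))
    # Human review step: merchandisers sign off on this file before it is written back to the PIM.
    with open("enriched_products_for_review.json", "w") as f:
        json.dump(enriched, f, indent=2)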

Build a Claude-Assisted A/B Testing Workflow for Recommendations

Instead of guessing which recommendation patterns will work, use Claude to quickly generate testable variants and interpret the results. This makes experimentation faster and more structured without adding a data science headcount.

For a given page type (e.g., product detail page), define a set of hypotheses: upsell vs. cross-sell focus, price anchoring vs. value framing, bundle suggestions vs. single items. Ask Claude to design 2–3 distinct recommendation strategies and associated messaging for each segment. Implement them as variants in your experimentation tool and let traffic flow.

Example prompt to Claude:
You are designing A/B tests for product recommendations.

Context:
- Page type: PDP
- User segment: {{segment_name}}
- Business goal: increase AOV without reducing conversion
- Current recommendation options: {{candidate_products_json}}

Tasks:
1) Propose 3 distinct recommendation strategies (e.g., "premium upsell",
   "budget-friendly bundles").
2) For each strategy, specify:
   - selection_rules (JSON)
   - messaging_angle (1-2 sentences)
   - risk_notes (what might go wrong)

Return only JSON.

After tests run, feed the performance data back to Claude and ask it to summarize insights and suggest the next iteration. This closes the loop and turns raw metrics into actionable learnings for marketers.
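
Closing that loop can be as simple as handing the aggregated readout back to Claude at the end of each test cycle. A sketch, assuming results are already summarized per variant (metric names and values are invented):

import json
import anthropic

client = anthropic.Anthropic()

def summarize_experiment(results_by_variant: dict) -> str:
    """Ask Claude to interpret A/B results and propose the next test iteration."""
    prompt = (
        "You designed these recommendation strategies. Here are the live results per variant "
        "(CTR, AOV, conversion):\n"
        + json.dumps(results_by_variant, indent=2)
        + "\n\nSummarize what worked and what did not, then propose the next iteration "
        "with concrete changes to selection_rules and messaging_angle."
    )
    response = client.messages.create(
        model="claude-sonnet-4-5",  # substitute your model
        max_tokens=600,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Illustrative call:
# summarize_experiment({
#     "premium_upsell": {"ctr": 0.081, "aov": 96.40, "conversion": 0.031},
#     "budget_bundles": {"ctr": 0.064, "aov": 88.10, "conversion": 0.034},
# })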

Integrate Claude into Existing Marketing Tools Instead of Rebuilding Everything

You don’t need to rip out your ESP, CDP or ecommerce platform to benefit from Claude-powered personalization. In most environments, a lightweight integration layer or marketing ops script is enough to orchestrate data in and out of Claude.

Practically, identify the few key touchpoints where recommendation quality matters most: homepage, PDP, cart page, post-purchase emails. For each, define a minimal data payload (user, context, candidates) and a standard response format from Claude (intent, strategy, copy). Implement this as a microservice or API endpoint that your current tools can call. Start with low-risk traffic slices, monitor performance, then scale up as confidence grows.

Reruption often packages this into a small AI middleware: one service that calls Claude, applies business rules, logs decisions, and returns safe responses. This keeps your core systems stable while allowing rapid iteration on prompts and logic.
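
As a sketch of what that middleware could look like, the FastAPI endpoint below combines the intent-inference helper from the earlier example with a simple margin and stock guardrail; the route name, payload fields and thresholds are assumptions, not a reference design:

import logging
from fastapi import FastAPI
from pydantic import BaseModel
from intent import infer_recommendation_rules  # the earlier sketch, assumed to live in intent.py

app = FastAPI()
log = logging.getLogger("ai_middleware")

class RecommendationRequest(BaseModel):
    user_profile: dict
    session_events: list
    campaign: dict
    candidate_products: list  # pre-filtered candidates from your catalog or search engine

@app.post("/recommendations")
def recommend(req: RecommendationRequest) -> dict:
    # 1) Ask Claude for intent and strategy (see the earlier sketch).
    rules = infer_recommendation_rules(
        req.user_profile, req.session_events, req.campaign, facets={}
    )
    # 2) Apply hard business rules before anything reaches the front-end.
    safe_products = [
        p for p in req.candidate_products
        if p.get("in_stock") and p.get("margin", 0) >= 0.25
    ]
    # 3) Log the decision so quality monitoring and experiment reviews have a trail.
    log.info("strategy=%s candidates=%d", rules.get("categories_to_focus"), len(safe_products))
    return {"strategy": rules, "products": safe_products[:4]}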

Monitor Quality and Set Realistic Performance Targets

Finally, treat Claude-driven personalization as a product that needs ongoing monitoring. Define clear KPIs for recommendations: click-through rate on recommended items, incremental AOV, attachment rate for key categories, and opt-out or complaint rates for overly aggressive upsells.

Set realistic targets for the first 3–6 months, such as: +10–20% recommendation CTR, +5–10% uplift in AOV for sessions that see personalized blocks, and improved conversion on targeted follow-up emails. Use dashboards to compare Claude-assisted experiences to your old static blocks, and create an alerting mechanism if metrics drop or output quality degrades.
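
A minimal sketch of such an alert, assuming daily CTR figures for the Claude-assisted variant and the static control come out of your analytics export; the threshold and webhook URL are placeholders:

import requests  # posts the alert to a chat or incident webhook

CTR_FLOOR_RATIO = 1.05  # alert if the Claude variant is not at least 5% above the static control
ALERT_WEBHOOK = "https://example.com/hooks/personalization-alerts"  # placeholder URL

def check_recommendation_health(claude_ctr: float, control_ctr: float) -> None:
    """Compare Claude-assisted CTR against the static baseline and alert on regressions."""
    if control_ctr > 0 and claude_ctr / control_ctr < CTR_FLOOR_RATIO:
        requests.post(ALERT_WEBHOOK, json={
            "text": f"Recommendation CTR regression: Claude {claude_ctr:.2%} vs control {control_ctr:.2%}"
        })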

Expected outcome: with a focused implementation on a few high-traffic touchpoints, most organisations can achieve measurable uplifts within 8–12 weeks. Over 6–12 months, as prompts, metadata and experiments mature, it’s realistic to see double-digit increases in recommendation engagement and a sustained lift in revenue per visitor, without linearly increasing marketing headcount.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve product recommendations compared to static rules or bestseller lists?

Claude improves recommendations by interpreting user behavior, context and product metadata instead of relying only on simple rules or bestseller lists. It can infer a shopper’s intent (e.g., "researching premium options" vs. "looking for the cheapest replacement"), then suggest recommendation strategies and copy that match that intent.

Technically, you can keep your existing catalog and basic recommendation logic, while using Claude to decide which products to highlight, how to frame them, and how aggressively to upsell. This transforms your current blocks from generic carousels into adaptive experiences that explain why these products are shown, which typically increases click-through and average order value.

What skills or team do we need to implement Claude-powered recommendations?

You don’t need a full data science team to get started. A typical setup combines:

  • A marketing or CRM owner who understands segments, campaigns and business goals.
  • An engineer or marketing ops specialist who can work with APIs and your ecommerce/CRM platform.
  • Optionally, a merchandiser or product owner who can define recommendation guardrails and business rules.

Claude itself is accessed via API or through tools Reruption can help you set up. The main new skill is prompt and workflow design: specifying which inputs Claude sees, what outputs you expect (strategy, copy, metadata), and how those are used. We typically handle the initial architecture, prompts and integration, then train your team to iterate and maintain the system.

How long does it take until we see results?

For most organisations, a focused proof-of-concept on 1–2 key touchpoints (e.g., PDP and abandoned cart emails) can be live in 4–6 weeks. This includes scoping, data wiring, initial prompt design, and a first round of experiments.

Measurable uplifts – like higher recommendation CTR or AOV – usually appear within the first few weeks of live traffic, provided you have enough volume to run A/B tests. A more mature setup, where Claude informs strategies across multiple channels (web, email, app), often evolves over 3–6 months as you refine segments, metadata and experimentation routines.

What does it cost, and what ROI can we expect?

The running cost of Claude is primarily usage-based: you pay per token processed. For recommendation use cases, payloads can be kept compact by sending only relevant behavior summaries and product candidates, which keeps per-request costs low. In practice, infrastructure and engineering time are usually more significant than Claude’s API fees.

On the ROI side, realistic early targets are +10–20% uplift in recommendation CTR and +5–10% uplift in AOV for sessions exposed to personalized blocks, depending on your baseline. Because recommendations influence a large share of traffic, even modest percentage gains can translate into meaningful incremental revenue. Part of Reruption’s work is to design your implementation so that uplift can be measured clearly against a static-control baseline, ensuring you can attribute ROI with confidence.

How can Reruption support our implementation?

Reruption works as a Co-Preneur inside your organisation: we don’t just advise, we build. For untargeted product recommendations, we usually start with our AI PoC offering (9.900€), where we define your specific use case, check feasibility, and deliver a working prototype that plugs into a real page or campaign – not just slides.

From there, we can support you with end-to-end implementation: designing the recommendation and copy workflows, integrating Claude into your ecommerce or marketing stack, setting up quality and safety guardrails, and enabling your marketing team to iterate on prompts and experiments. The goal is to leave you with a production-ready, AI-first personalization capability that your team can own and grow, rather than a one-off pilot.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media