The Challenge: Untargeted Product Recommendations

Most marketing teams still rely on static bestseller carousels and simple cross-sell rules like “customers who bought X also bought Y.” On paper this looks efficient, but in practice it ignores who the customer is, what they have browsed, and what they are trying to achieve right now. The result is a recommendation layer that is technically present, but strategically blind.

Traditional approaches struggle because they are rigid, manual, and slow to adapt. Category managers handcraft rules, IT teams hard-code logic into templates, and any change requires another ticket in the backlog. These systems rarely combine behavioral data, content metadata and context (campaign, device, location), so they keep serving generic offers even when your data clearly signals otherwise. At the same time, many smaller teams don’t have the data science resources to build full-blown recommender engines.

The impact is bigger than a slightly lower click-through rate. Irrelevant recommendations increase bounce rates, suppress average order value, and erode trust – customers feel like your brand “doesn’t get them.” High-intent sessions end without an upsell, repeat buyers never discover relevant add-ons, and your performance marketing spend has to work harder to compensate. Over time, competitors with smarter personalization win more share of wallet, because they use every visit to deepen relevance instead of repeating the same generic carousel.

The good news: this is a very solvable problem. With modern AI like Claude, marketers can finally interpret customer behavior, catalog metadata and campaign context in real time, then generate tailored recommendation strategies and copy without waiting for a full data platform rebuild. At Reruption, we’ve helped organisations turn vague personalization ambitions into working AI prototypes and production workflows. In the sections below, you’ll find practical guidance to move from untargeted recommendations to Claude-powered experiences that actually match user intent.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first marketing workflows, we see a clear pattern: most organisations don’t lack data, they lack a way to interpret it quickly and turn it into relevant experiences. Claude is a powerful fit for this gap. Used correctly, it can sit between your raw customer signals and your front-end, generating personalized recommendation logic and copy that’s understandable to marketers and controllable by the business.

Think of Claude as a Personalization Brain, Not a Black-Box Recommender

Claude is not a plug-and-play "recommendation engine" in the traditional sense. Its real strength is interpreting multiple inputs – customer profile, on-site behavior, campaign context, and product attributes – and turning them into a coherent recommendation strategy. Strategically, this gives marketing teams a transparent, explainable layer instead of a mathematical black box.

When you frame Claude as a "personalization brain," you can ask it for reasoning: why it recommends certain assortments, what message to use, how aggressive the upsell should be for a given segment. This makes it easier for non-technical marketers to review, control, and iterate, without needing deep data science skills. The technical recommender components (e.g., similarity search, rules) can remain simple, while Claude handles the orchestration and narrative.

Start with Clear Personalization Guardrails

Strategically, you need to decide where personalization is allowed to flex and where it must stay within strict boundaries. For example, you may want full personalization in content and order of products, but strict business rules about pricing, margin thresholds, and compliance-sensitive categories.

Before implementation, define guardrails such as allowed categories per segment, minimum margin per recommendation slot, or exclusion rules (e.g., no cross-selling out-of-stock items). Claude can then be prompted to operate inside these constraints, choosing the best combination of products and messaging without violating brand or commercial policies. This reduces risk and makes stakeholders much more comfortable with AI-driven decisions.
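The guardrails above can be enforced as a hard filter in code, so Claude's suggestions never bypass commercial policy. The sketch below is a minimal illustration; the field names (`in_stock`, `category`, `margin`) and rule set are assumptions, not a fixed schema.

```python
# Hypothetical guardrail filter: candidate products proposed by Claude are
# checked against hard business rules before anything reaches the front-end.
# Field names (in_stock, category, margin) are illustrative assumptions.

def apply_guardrails(candidates, *, allowed_categories, min_margin):
    """Keep only candidates that satisfy every commercial guardrail."""
    safe = []
    for product in candidates:
        if not product.get("in_stock", False):
            continue  # never cross-sell out-of-stock items
        if product.get("category") not in allowed_categories:
            continue  # this segment may not see this category
        if product.get("margin", 0.0) < min_margin:
            continue  # protect minimum margin per recommendation slot
        safe.append(product)
    return safe

# Example: only in-stock shoes above a 20% margin survive the filter.
candidates = [
    {"id": "A", "category": "shoes", "margin": 0.30, "in_stock": True},
    {"id": "B", "category": "shoes", "margin": 0.05, "in_stock": True},
    {"id": "C", "category": "watches", "margin": 0.40, "in_stock": True},
    {"id": "D", "category": "shoes", "margin": 0.50, "in_stock": False},
]
filtered = apply_guardrails(candidates, allowed_categories={"shoes"}, min_margin=0.2)
```

Because the filter runs after Claude, the model stays free to reason creatively while the business keeps a deterministic last word.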

Prepare Your Teams for an AI-Assisted Workflow

Moving from static blocks to Claude-assisted personalization changes how marketing, product and engineering collaborate. Copywriters, CRM managers and merchandisers become designers of decision logic and prompts, not just creators of single assets.

Plan for enablement: train marketers on how to brief Claude, review AI output, and turn insights into experiments. Align with engineering on where Claude sits in the architecture (e.g., in a middleware layer or marketing ops tool) and who owns quality monitoring. Reruption often runs short enablement sprints so teams are comfortable iterating on prompts, taxonomies and KPIs instead of relying solely on external experts.

Balance Personalization Ambition with Data Reality

It’s tempting to jump straight to 1:1 personalization everywhere, but your data quality and integration maturity should shape the initial scope. If browsing data is fragmented or product metadata is messy, start with a few high-impact touchpoints (e.g., PDP recommendations, abandoned cart emails) where signals are clearer.

Claude can compensate for imperfect data by inferring intent from partial signals, but it can’t fix missing fundamentals like completely absent product descriptions. Strategically, define a staged roadmap: phase 1 uses Claude on well-structured campaigns and categories, phase 2 expands as data structures improve, and later phases move toward real-time, multi-channel orchestration. This avoids overpromising personalization you cannot reliably deliver.

Treat AI Personalization as an Ongoing Experiment, Not a One-Off Project

Untargeted recommendations are often the result of a project mindset: a recommender is implemented once, KPIs look “good enough,” and the setup is left alone. With Claude, enormous value comes from continually refining prompt strategies, segment definitions and creative angles based on live performance.

From a strategic perspective, set up a recurring experimentation cadence. Marketing should review which Claude-driven recommendation variants lift CTR, AOV or retention, then bake those learnings back into prompts and decision rules. This requires ownership: decide who is responsible for experimentation backlogs, success metrics, and sign-off. Organisations that treat AI personalization as a living capability, not a finished IT project, see compounding gains over time.

Used thoughtfully, Claude lets marketing teams escape rigid, untargeted recommendation blocks and move toward adaptive experiences that respect intent, context and business rules. The real unlock is not just smarter algorithms, but a workflow where marketers can directly shape and control how personalization behaves.

At Reruption, we build these AI-first workflows side by side with our clients – from proof-of-concept to production. If you’re serious about fixing generic product recommendations and want a partner who can combine strategy, engineering and hands-on experimentation, we’re happy to explore what Claude could do in your stack.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Automotive to Payments: Learn how companies successfully use Claude.

Tesla, Inc.

Automotive

The automotive industry faces a staggering statistic: 94% of traffic accidents are attributed to human error, including distraction, fatigue, and poor judgment, resulting in over 1.3 million global road deaths annually. In the US alone, NHTSA data shows an average of one crash per 670,000 miles driven, highlighting the urgent need for advanced driver assistance systems (ADAS) to enhance safety and reduce fatalities. Tesla encountered specific hurdles in scaling vision-only autonomy, having ditched radar and lidar for camera-based systems that rely on AI to mimic human perception. Challenges included variable AI performance in diverse conditions like fog, night, or construction zones, regulatory scrutiny over misleading Level 2 labeling despite Level 4-like demos, and ensuring robust driver monitoring to prevent over-reliance. Past incidents and studies criticized inconsistent computer vision reliability.

Solution

Tesla's Autopilot and Full Self-Driving (FSD) Supervised leverage end-to-end deep learning neural networks trained on billions of real-world miles, processing camera feeds for perception, prediction, and control without modular rules. Transitioning from HydraNet (multi-task learning for 30+ outputs) to pure end-to-end models, FSD v14 achieves door-to-door driving via video-based imitation learning. To overcome these challenges, Tesla scaled data collection from its fleet of 6M+ vehicles, using Dojo supercomputers for training on petabytes of video. The vision-only approach cuts costs versus lidar-based rivals, with recent upgrades like new cameras addressing edge cases. Regulatory pushes target unsupervised FSD by end-2025, with China approval eyed for 2026.

Results

  • Autopilot Crash Rate: 1 per 6.36M miles (Q3 2025)
  • Safety Multiple: 9x safer than US average (670K miles/crash)
  • Fleet Data: Billions of miles for training
  • FSD v14: Door-to-door autonomy achieved
  • Q2 2025: 1 crash per 6.69M miles
  • 2024 Q4 Record: 5.94M miles between accidents

Forever 21

E-commerce

Forever 21, a leading fast-fashion retailer, faced significant hurdles in online product discovery. Customers struggled with text-based searches that couldn't capture subtle visual details like fabric textures, color variations, or exact styles amid a vast catalog of millions of SKUs. This led to high bounce rates exceeding 50% on search pages and frustrated shoppers abandoning carts. The fashion industry's visual-centric nature amplified these issues. Descriptive keywords often mismatched inventory due to subjective terms (e.g., 'boho dress' vs. specific patterns), resulting in poor user experiences and lost sales opportunities. Pre-AI, Forever 21's search relied on basic keyword matching, limiting personalization and efficiency in a competitive e-commerce landscape. Implementation challenges included scaling for high-traffic mobile users and handling diverse image inputs like user photos or screenshots.

Solution

To address this, Forever 21 deployed an AI-powered visual search feature across its app and website, enabling users to upload images for similar item matching. Leveraging computer vision techniques, the system extracts features using pre-trained CNN models like VGG16, computes embeddings, and ranks products via cosine similarity or Euclidean distance metrics. The solution integrated seamlessly with existing infrastructure, processing queries in real-time. Forever 21 likely partnered with providers like ViSenze or built in-house, training on proprietary catalog data for fashion-specific accuracy. This overcame text limitations by focusing on visual semantics, supporting features like style, color, and pattern matching. Overcoming challenges involved fine-tuning models for diverse lighting/user images and A/B testing for UX optimization.

Results

  • 25% increase in conversion rates from visual searches
  • 35% reduction in average search time
  • 40% higher engagement (pages per session)
  • 18% growth in average order value
  • 92% matching accuracy for similar items
  • 50% decrease in bounce rate on search pages

Nubank (Pix Payments)

Payments

Nubank, Latin America's largest digital bank serving over 114 million customers across Brazil, Mexico, and Colombia, faced the challenge of scaling its Pix instant payment system amid explosive growth. Traditional Pix transactions required users to navigate the app manually, leading to friction, especially for quick, on-the-go payments. This app navigation bottleneck increased processing time and limited accessibility for users preferring conversational interfaces like WhatsApp, where 80% of Brazilians communicate daily. Additionally, enabling secure, accurate interpretation of diverse inputs—voice commands, natural language text, and images (e.g., handwritten notes or receipts)—posed significant hurdles. Nubank needed to overcome accuracy issues in multimodal understanding, ensure compliance with Brazil's Central Bank regulations, and maintain trust in a high-stakes financial environment while handling millions of daily transactions.

Solution

Nubank deployed a multimodal generative AI solution powered by OpenAI models, allowing customers to initiate Pix payments through voice messages, text instructions, or image uploads directly in the app or WhatsApp. The AI processes speech-to-text, natural language processing for intent extraction, and optical character recognition (OCR) for images, converting them into executable Pix transfers. Integrated seamlessly with Nubank's backend, the system verifies user identity, extracts key details like amount and recipient, and executes transactions in seconds, bypassing traditional app screens. This AI-first approach enhances convenience, speed, and safety, scaling operations without proportional human intervention.

Results

  • 60% reduction in transaction processing time
  • Tested with 2 million users by end of 2024
  • Serves 114 million customers across 3 countries
  • Testing initiated August 2024
  • Processes voice, text, and image inputs for Pix
  • Enabled instant payments via WhatsApp integration

DBS Bank

Banking

DBS Bank, Southeast Asia's leading financial institution, grappled with scaling AI from experiments to production amid surging fraud threats, demands for hyper-personalized customer experiences, and operational inefficiencies in service support. Traditional fraud detection systems struggled to process up to 15,000 data points per customer in real-time, leading to missed threats and suboptimal risk scoring. Personalization efforts were hampered by siloed data and lack of scalable algorithms for millions of users across diverse markets. Additionally, customer service teams faced overwhelming query volumes, with manual processes slowing response times and increasing costs. Regulatory pressures in banking demanded responsible AI governance, while talent shortages and integration challenges hindered enterprise-wide adoption. DBS needed a robust framework to overcome data quality issues, model drift, and ethical concerns in generative AI deployment, ensuring trust and compliance in a competitive Southeast Asian landscape.

Solution

DBS launched an enterprise-wide AI program with over 20 use cases, leveraging machine learning for advanced fraud risk models and personalization, complemented by generative AI for an internal support assistant. Fraud models integrated vast datasets for real-time anomaly detection, while personalization algorithms delivered hyper-targeted nudges and investment ideas via the digibank app. A human-AI synergy approach empowered service teams with a GenAI assistant handling routine queries, drawing from internal knowledge bases. DBS emphasized responsible AI through governance frameworks, upskilling 40,000+ employees, and phased rollout starting with pilots in 2021, scaling production by 2024. Partnerships with tech leaders and Harvard-backed strategy ensured ethical scaling across fraud, personalization, and operations.

Results

  • 17% increase in savings from prevented fraud attempts
  • Over 100 customized algorithms for customer analyses
  • 250,000 monthly queries processed efficiently by GenAI assistant
  • 20+ enterprise-wide AI use cases deployed
  • Analyzes up to 15,000 data points per customer for fraud
  • Boosted productivity by 20% via AI adoption (CEO statement)

Walmart (Marketplace)

Retail

In the cutthroat arena of Walmart Marketplace, third-party sellers fiercely compete for the Buy Box, which accounts for the majority of sales conversions. These sellers manage vast inventories but struggle with manual pricing adjustments, which are too slow to keep pace with rapidly shifting competitor prices, demand fluctuations, and market trends. This leads to frequent loss of the Buy Box, missed sales opportunities, and eroded profit margins in a platform where price is the primary battleground. Additionally, sellers face data overload from monitoring thousands of SKUs, predicting optimal price points, and balancing competitiveness against profitability. Traditional static pricing strategies fail in this dynamic e-commerce environment, resulting in suboptimal performance and requiring excessive manual effort, often hours daily per seller. Walmart recognized the need for an automated solution to empower sellers and drive platform growth.

Solution

Walmart launched the Repricer, a free AI-driven automated pricing tool integrated into Seller Center, leveraging generative AI for decision support alongside machine learning models like sequential decision intelligence to dynamically adjust prices in real-time. The tool analyzes competitor pricing, historical sales data, demand signals, and market conditions to recommend and implement optimal prices that maximize Buy Box eligibility and sales velocity. Complementing this, the Pricing Insights dashboard provides account-level metrics and AI-generated recommendations, including suggested prices for promotions, helping sellers identify opportunities without manual analysis. For advanced users, third-party tools like Biviar's AI repricer, commissioned by Walmart, enhance this with reinforcement learning for profit-maximizing daily pricing decisions. This ecosystem shifts sellers from reactive to proactive pricing strategies.

Results

  • 25% increase in conversion rates from dynamic AI pricing
  • Higher Buy Box win rates through real-time competitor analysis
  • Maximized sales velocity for 3rd-party sellers on Marketplace
  • 850 million catalog data improvements via GenAI (broader impact)
  • 40%+ conversion boost potential from AI-driven offers
  • Reduced manual pricing time by hours daily per seller

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Use Claude to Translate Behavior into Recommendation Intents

Most recommendation systems jump straight from clicks to products. A more powerful pattern is to let Claude first interpret behavior as shopping intent, then match that intent to product sets using your existing tools. This keeps your architecture simple while dramatically improving relevance.

For example, map key behavioral signals (visited categories, time on page, filters used, campaign source) into a compact JSON payload. Send this to Claude with clear instructions to output an intent profile and recommendation strategy (e.g., “budget-conscious first-time buyer looking for durable basics”). Your front-end or middleware can then select products that fit the suggested criteria using your product catalog or search engine.

Example prompt to Claude:
You are a personalization strategist for an ecommerce site.

Input data:
- User profile: {{user_profile_json}}
- Session behavior: {{session_events_json}}
- Campaign context: {{campaign_info}}
- Product catalog facets: {{facet_summary}}

Tasks:
1) Infer the user's primary shopping intent in 1-2 sentences.
2) Classify them into one of our segments: {{segment_definitions}}.
3) Output recommendation rules in JSON with:
   - target_price_range
   - key_benefits_to_prioritize
   - categories_to_focus
   - cross_sell_opportunities

Only output JSON.

This approach lets you personalize the "why" and "how" of recommending, while keeping the final product retrieval under tight technical and commercial control.
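In practice, the middleware on your side does two things around the prompt above: compress raw session events into a compact payload, and defensively parse the model's JSON-only answer. The sketch below illustrates both steps; all field names are assumptions, not a fixed schema.

```python
import json

# Illustrative sketch: compress session events into the compact payload the
# intent prompt expects, and defensively parse Claude's JSON-only answer.
# All field names here are assumptions, not a fixed schema.

def build_intent_payload(user_profile, session_events, campaign_info):
    """Summarize raw events so the prompt stays small and cheap."""
    categories = [e["category"] for e in session_events if "category" in e]
    return {
        "user_profile": user_profile,
        "session_behavior": {
            "visited_categories": sorted(set(categories)),
            "event_count": len(session_events),
            "filters_used": [e["filter"] for e in session_events if "filter" in e],
        },
        "campaign_context": campaign_info,
    }

def parse_strategy(raw_text):
    """Parse the model's JSON output; return None on malformed output."""
    try:
        return json.loads(raw_text)
    except (json.JSONDecodeError, TypeError):
        return None  # caller falls back to the static default block

# Example payload for a session with repeated category visits and one filter.
payload = build_intent_payload(
    {"id": "u1"},
    [{"category": "boots"}, {"category": "boots"}, {"filter": "price_asc"}],
    {"source": "spring_sale"},
)
```

The `None` fallback matters: if the model ever returns prose instead of JSON, your page still renders the old static block instead of breaking.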

Generate On-Brand, Dynamic Recommendation Copy at Scale

Even with better targeting, generic copy like “You might also like” undercuts performance. Claude can generate on-brand, segment-specific microcopy that explains why products were recommended, which increases trust and click-through.

Start by collecting your brand tone guidelines, past successful headlines, and any compliance constraints. Turn these into a reusable prompt template. For each recommendation slot, provide Claude with the chosen products, the inferred user intent, and the channel (web, email, app). Ask it to return short, tested variants you can A/B test.

Example prompt to Claude:
You are a senior copywriter for our brand. Follow these rules:
- Tone of voice: {{brand_tone}}
- Forbidden phrases: {{forbidden_phrases}}
- Max 60 characters per line.

Context:
- User intent: {{intent_summary}}
- Segment: {{segment_name}}
- Recommended products (titles + key features): {{products_json}}
- Channel: {{channel}}

Write 3 alternative headlines and 3 sublines that:
- Make the recommendation logic explicit ("Because you looked at..." etc.)
- Prioritize the benefits that matter for this intent.

Return as JSON with keys: headlines[], sublines[].

Integrate this into your CMS or email tool so marketers can trigger fresh, relevant copy per campaign without writing everything manually.
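Before generated variants enter an A/B test, it is worth adding a mechanical QA gate that re-checks the rules stated in the prompt (the 60-character limit, forbidden phrases). A minimal sketch, assuming the JSON shape from the prompt above:

```python
# Illustrative QA gate for generated copy: enforce the 60-character limit and
# the forbidden-phrase rule before a variant enters an A/B test. The variant
# JSON shape (headlines[], sublines[]) follows the prompt's output format.

def validate_copy(variants, forbidden_phrases, max_len=60):
    """Return only headline/subline variants that pass brand rules."""
    def ok(line):
        return len(line) <= max_len and not any(
            phrase.lower() in line.lower() for phrase in forbidden_phrases
        )
    return {
        "headlines": [h for h in variants.get("headlines", []) if ok(h)],
        "sublines": [s for s in variants.get("sublines", []) if ok(s)],
    }

# Example: an over-long headline and a forbidden phrase are filtered out.
variants = {
    "headlines": ["Because you looked at hiking boots", "BUY NOW!!! " * 10],
    "sublines": ["Waterproof picks under 100 EUR", "Cheapest deal ever"],
}
clean = validate_copy(variants, forbidden_phrases=["cheapest"])
```

Models follow such constraints most of the time, but a deterministic check means a rare slip never reaches customers.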

Let Claude Help Clean and Enrich Product Metadata for Better Matching

Untargeted recommendations are often a symptom of poor product metadata: missing attributes, inconsistent naming, or weak descriptions. Claude can help marketing and merchandising teams standardize and enrich catalog data, which directly improves matching quality.

Design a background job or one-off clean-up workflow: export products for priority categories, send batches to Claude, and ask it to normalize attributes (e.g., style, use case, skill level) based on titles and descriptions. Use a review step before writing back to your PIM or catalog database.

Example prompt to Claude:
You are helping standardize our product catalog.

For each product in {{products_json}}:
1) Infer missing attributes: use_case, target_user_level, style, primary_material.
2) Map values to the closest option in our allowed lists: {{allowed_values_json}}.
3) Output cleaned data in JSON, preserving product_id.

Do not invent impossible attributes. If unsure, set value to null.

With richer, consistent metadata, even simple rule-based or similarity-based recommenders become much more precise, and Claude’s own strategies can reference reliable attributes.
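The review step before the PIM write-back can also be automated: coerce every AI-suggested value to the allowed list, or null it out. A minimal sketch, with attribute names and lists as illustrative assumptions:

```python
# Illustrative review step: before writing Claude's enriched attributes back
# to the PIM, coerce every value to the allowed list or set it to None.
# Attribute names and allowed lists are assumptions for illustration.

def review_cleaned_products(records, allowed_values):
    """Keep product_id; drop any attribute value outside the allowed lists."""
    reviewed = []
    for rec in records:
        out = {"product_id": rec["product_id"]}
        for attr, allowed in allowed_values.items():
            value = rec.get(attr)
            out[attr] = value if value in allowed else None
        reviewed.append(out)
    return reviewed

# Example: "office" is not an allowed use_case, so it is nulled for review.
records = [{"product_id": "p1", "style": "boho", "use_case": "office"}]
allowed = {"style": ["boho", "classic"], "use_case": ["outdoor", "casual"]}
reviewed = review_cleaned_products(records, allowed)
```

Nulled fields can then be routed to a human review queue instead of silently polluting the catalog.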

Build a Claude-Assisted A/B Testing Workflow for Recommendations

Instead of guessing which recommendation patterns will work, use Claude to quickly generate testable variants and interpret the results. This makes experimentation faster and more structured without adding a data science headcount.

For a given page type (e.g., product detail page), define a set of hypotheses: upsell vs. cross-sell focus, price anchoring vs. value framing, bundle suggestions vs. single items. Ask Claude to design 2–3 distinct recommendation strategies and associated messaging for each segment. Implement them as variants in your experimentation tool and let traffic flow.

Example prompt to Claude:
You are designing A/B tests for product recommendations.

Context:
- Page type: PDP
- User segment: {{segment_name}}
- Business goal: increase AOV without reducing conversion
- Current recommendation options: {{candidate_products_json}}

Tasks:
1) Propose 3 distinct recommendation strategies (e.g., "premium upsell",
   "budget-friendly bundles").
2) For each strategy, specify:
   - selection_rules (JSON)
   - messaging_angle (1-2 sentences)
   - risk_notes (what might go wrong)

Return only JSON.

After tests run, feed the performance data back to Claude and ask it to summarize insights and suggest the next iteration. This closes the loop and turns raw metrics into actionable learnings for marketers.
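Whichever experimentation tool you use, variant assignment should be deterministic so a returning user always sees the same Claude-designed strategy. A common pattern, sketched here with hypothetical experiment and variant names, is hash-based bucketing:

```python
import hashlib

# Illustrative deterministic variant assignment: the same user always lands in
# the same bucket for a given experiment, so test results are not diluted by
# users flapping between strategies. Names below are hypothetical.

def assign_variant(user_id, experiment_id, variants):
    """Hash user + experiment into a stable bucket over the variant list."""
    key = f"{experiment_id}:{user_id}".encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# Example: three Claude-designed strategies plus an implicit control.
variants = ["premium_upsell", "budget_bundles", "control"]
choice = assign_variant("user42", "pdp_reco_q3", variants)
```

Including the experiment ID in the hash means the same user can land in different buckets across different experiments, which keeps tests independent.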

Integrate Claude into Existing Marketing Tools Instead of Rebuilding Everything

You don’t need to rip out your ESP, CDP or ecommerce platform to benefit from Claude-powered personalization. In most environments, a lightweight integration layer or marketing ops script is enough to orchestrate data in and out of Claude.

Practically, identify the few key touchpoints where recommendation quality matters most: homepage, PDP, cart page, post-purchase emails. For each, define a minimal data payload (user, context, candidates) and a standard response format from Claude (intent, strategy, copy). Implement this as a microservice or API endpoint that your current tools can call. Start with low-risk traffic slices, monitor performance, then scale up as confidence grows.

Reruption often packages this into a small AI middleware: one service that calls Claude, applies business rules, logs decisions, and returns safe responses. This keeps your core systems stable while allowing rapid iteration on prompts and logic.
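The middleware pattern can be sketched in a few lines. In this illustration the Claude call is injected as a plain callable (`claude_fn`) so it can be stubbed in tests; the fallback payload and rule hook are assumptions, not a fixed design:

```python
import json
import logging

# Minimal sketch of the middleware pattern described above: one function that
# calls Claude (injected as claude_fn so it can be stubbed), applies business
# rules, logs the decision, and always returns a safe response. The fallback
# payload and the guardrail hook are illustrative assumptions.

def recommend(payload, claude_fn, guardrail_fn, fallback):
    """Orchestrate one recommendation request with a static fallback."""
    try:
        raw = claude_fn(payload)              # e.g. an Anthropic API call
        strategy = json.loads(raw)            # expect JSON-only output
        strategy["products"] = guardrail_fn(strategy.get("products", []))
        logging.info("claude strategy used: %s", strategy)
        return strategy
    except Exception:
        logging.exception("falling back to static block")
        return fallback                       # core experience never breaks

# Example with a stubbed model and a guardrail that keeps one product.
fallback = {"products": ["bestseller_1"]}
ok = recommend({}, lambda p: '{"products": ["a", "b"]}', lambda ps: ps[:1], fallback)
bad = recommend({}, lambda p: "not json", lambda ps: ps, fallback)
```

Because every path returns either a validated strategy or the static fallback, your core systems stay stable while prompts and logic iterate freely.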

Monitor Quality and Set Realistic Performance Targets

Finally, treat Claude-driven personalization as a product that needs ongoing monitoring. Define clear KPIs for recommendations: click-through rate on recommended items, incremental AOV, attachment rate for key categories, and opt-out or complaint rates for overly aggressive upsells.

Set realistic targets for the first 3–6 months, such as: +10–20% recommendation CTR, +5–10% uplift in AOV for sessions that see personalized blocks, and improved conversion on targeted follow-up emails. Use dashboards to compare Claude-assisted experiences to your old static blocks, and create an alerting mechanism if metrics drop or output quality degrades.
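The comparison against the static control can be a small, testable computation feeding those dashboards. A minimal sketch, with the alert threshold as an illustrative assumption:

```python
# Illustrative monitoring check: compare Claude-assisted blocks against the
# static control and flag when relative uplift falls below a threshold.

def ctr(clicks, impressions):
    """Click-through rate; 0.0 when there are no impressions."""
    return clicks / impressions if impressions else 0.0

def ctr_uplift(test_clicks, test_impr, control_clicks, control_impr):
    """Relative CTR uplift of the personalized variant over the control."""
    control = ctr(control_clicks, control_impr)
    if control == 0.0:
        return 0.0
    return (ctr(test_clicks, test_impr) - control) / control

def needs_alert(uplift, threshold=0.0):
    """Alert when the personalized experience stops beating the control."""
    return uplift <= threshold

# Example: 15% CTR on the personalized block vs. 10% on the control
# gives a +50% relative uplift.
uplift = ctr_uplift(150, 1000, 100, 1000)
```

The same pattern extends to AOV and attachment rate; the key is that every metric is always computed against the concurrent static control, not a historical baseline.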

Expected outcome: with a focused implementation on a few high-traffic touchpoints, most organisations can achieve measurable uplifts within 8–12 weeks. Over 6–12 months, as prompts, metadata and experiments mature, it’s realistic to see double-digit increases in recommendation engagement and a sustained lift in revenue per visitor, without linearly increasing marketing headcount.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve product recommendations compared to traditional tools?

Claude improves recommendations by interpreting user behavior, context and product metadata instead of relying only on simple rules or bestseller lists. It can infer a shopper’s intent (e.g., "researching premium options" vs. "looking for the cheapest replacement"), then suggest recommendation strategies and copy that match that intent.

Technically, you can keep your existing catalog and basic recommendation logic, while using Claude to decide which products to highlight, how to frame them, and how aggressively to upsell. This transforms your current blocks from generic carousels into adaptive experiences that explain why these products are shown, which typically increases click-through and average order value.

What team and skills do we need to get started?

You don’t need a full data science team to get started. A typical setup combines:

  • A marketing or CRM owner who understands segments, campaigns and business goals.
  • An engineer or marketing ops specialist who can work with APIs and your ecommerce/CRM platform.
  • Optionally, a merchandiser or product owner who can define recommendation guardrails and business rules.

Claude itself is accessed via API or through tools Reruption can help you set up. The main new skill is prompt and workflow design: specifying which inputs Claude sees, what outputs you expect (strategy, copy, metadata), and how those are used. We typically handle the initial architecture, prompts and integration, then train your team to iterate and maintain the system.

How long does implementation take before we see results?

For most organisations, a focused proof-of-concept on 1–2 key touchpoints (e.g., PDP and abandoned cart emails) can be live in 4–6 weeks. This includes scoping, data wiring, initial prompt design, and a first round of experiments.

Measurable uplifts – like higher recommendation CTR or AOV – usually appear within the first few weeks of live traffic, provided you have enough volume to run A/B tests. A more mature setup, where Claude informs strategies across multiple channels (web, email, app), often evolves over 3–6 months as you refine segments, metadata and experimentation routines.

What does it cost, and what ROI can we expect?

The running cost of Claude is primarily usage-based: you pay per token processed. For recommendation use cases, payloads can be kept compact by sending only relevant behavior summaries and product candidates, which keeps per-request costs low. In practice, infrastructure and engineering time are usually more significant than Claude’s API fees.

On the ROI side, realistic early targets are +10–20% uplift in recommendation CTR and +5–10% uplift in AOV for sessions exposed to personalized blocks, depending on your baseline. Because recommendations influence a large share of traffic, even modest percentage gains can translate into meaningful incremental revenue. Part of Reruption’s work is to design your implementation so that uplift can be measured clearly against a static-control baseline, ensuring you can attribute ROI with confidence.

How can Reruption support our implementation?

Reruption works as a Co-Preneur inside your organisation: we don’t just advise, we build. For untargeted product recommendations, we usually start with our AI PoC offering (9.900€), where we define your specific use case, check feasibility, and deliver a working prototype that plugs into a real page or campaign – not just slides.

From there, we can support you with end-to-end implementation: designing the recommendation and copy workflows, integrating Claude into your ecommerce or marketing stack, setting up quality and safety guardrails, and enabling your marketing team to iterate on prompts and experiments. The goal is to leave you with a production-ready, AI-first personalization capability that your team can own and grow, rather than a one-off pilot.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media