The Challenge: Poor Knowledge Retention

HR and L&D teams spend serious budget and time on trainings, leadership programs, and compliance courses, yet a few weeks later most of that knowledge has evaporated. Employees attend a workshop, pass a quiz, and then slip back into old habits. Managers see little change in performance, and HR is stuck defending training budgets without hard proof that learning is sticking.

Traditional approaches to corporate learning were built around one-off events: classroom sessions, long e‑learning modules, PDFs, and slide decks. They rarely support spaced repetition, quick on‑the‑job lookup, or realistic scenarios tailored to a specific role. Once the session is over, employees have no easy way to refresh concepts, ask questions, or apply the material to their daily work. The result is inevitable forgetting, even for high‑quality content.

The business impact is significant. Poor knowledge retention turns training into a sunk cost: content production, external trainers, and employee time away from work, with minimal behavior change. Compliance and safety risks increase when people forget critical policies. New hires ramp slower because they can’t recall key processes. Strategically, HR loses credibility when it cannot show a clear link between learning investments and performance improvements or reduced errors on the floor.

Yet this is a solvable problem. By turning static manuals, policies, and training decks into interactive AI learning tutors, HR can bring learning into the flow of work and reinforce it over time. At Reruption, we’ve seen how AI‑driven learning experiences can transform retention in complex environments, from technical education to large‑scale enablement. The rest of this page walks through concrete ways to use Claude to combat forgetting and build a learning ecosystem that actually changes how people work.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI learning tools and document assistants inside real organisations, we’ve seen that the main lever against poor retention is not “more content” but smarter knowledge reinforcement. Claude is particularly strong here because it can absorb long HR manuals, policies, and training materials, then turn them into a conversational tutor that employees actually use in their daily work. Used strategically, this shifts HR from one-off training events to a continuous, AI-supported learning journey.

Think in Continuous Learning Journeys, Not Single Events

Before you deploy Claude, rethink how you design your learning offerings. Instead of a standalone workshop, define a learning journey that includes pre-work, live sessions, and a structured follow-up period where Claude acts as a tutor and coach. Your goal is to distribute practice and reflection across weeks, not hours.

Strategically, this means mapping where employees typically forget or get stuck: after onboarding, when moving into a new role, or after major policy changes. Design Claude’s role around those moments — for example, as the go-to assistant for “first 90 days” questions or for applying a new leadership framework to real team situations.

Treat Claude as Part of the Learning Stack, Not a Gadget

Organisations often position AI tools as side projects. For fighting poor knowledge retention in HR training, Claude needs to be integrated into your existing LMS, HRIS, and communication channels. Think about where employees already are — Microsoft Teams, Slack, your intranet, your LMS — and make Claude accessible there with single sign-on and clear entry points.

From a strategic standpoint, define ownership early: Who curates the content Claude uses? Who signs off on policy-sensitive answers? Who tracks KPIs like repeat usage and question types? Putting Claude under the umbrella of L&D or HR Operations with clear governance is key to long-term adoption.

Start with High-Value, Low-Risk Knowledge Domains

Not every content area is equally suited for an AI tutor from day one. Start with training topics where forgetting is costly but the content is relatively stable and well-documented: onboarding, tool usage guides, standard operating procedures, or internal HR policies. These domains allow you to validate impact on retention without getting stuck in complex edge cases.

This strategic scoping reduces risk and increases speed. You can prove that Claude drives better retention on, say, a new performance management process before you move it into more sensitive topics like labour law interpretations or complex leadership coaching.

Prepare Stakeholders and Managers, Not Just Learners

For Claude to meaningfully improve knowledge retention in the workforce, managers must see it as a support, not a threat or a gimmick. Explain to leaders how the AI tutor reduces repetitive questions, helps new hires ramp faster, and gives them insights into which topics confuse the team. Encourage them to point team members to Claude instead of answering every basic question themselves.

HR should also prepare works councils, legal, and data protection stakeholders with a transparent view on what data Claude uses, how it is secured, and what guardrails exist. This upfront alignment is often the difference between a blocked pilot and a scalable learning assistant.

Define Success Metrics Around Behaviour and Performance, Not Just Usage

It’s tempting to judge the success of an AI learning tutor on logins or number of questions asked. To truly tackle poor knowledge retention, define KPIs tied to behaviour and performance: fewer repeated HR helpdesk tickets on basic topics, improved quiz results weeks after training, reduced errors in processes covered by the training, or time-to-productivity for new hires.

Aligning these metrics with business stakeholders (operations, finance, line managers) turns Claude from a “cool AI tool” into a strategic lever for workforce capability. It also gives HR the evidence it needs to defend and optimise the learning budget.

Using Claude as an AI learning tutor is one of the most effective ways HR can turn one-off trainings into continuous capability building and finally break the pattern of poor knowledge retention. With the right content strategy, governance, and integration into daily work, Claude becomes the always-available coach that keeps knowledge alive long after the workshop ends. At Reruption, we specialise in turning this vision into working solutions — from rapid proof-of-concepts to production-grade AI tutors embedded in your HR ecosystem — and are happy to explore what a high-impact, AI-first learning journey could look like in your organisation.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Transportation to Healthcare: Learn how companies successfully use Claude.

Waymo (Alphabet)

Transportation

Developing fully autonomous ride-hailing demanded overcoming extreme challenges in AI reliability for real-world roads. Waymo needed to master perception—detecting objects in fog, rain, night, or occlusions using sensors alone—while predicting erratic human behaviors like jaywalking or sudden lane changes. Planning complex trajectories in dense, unpredictable urban traffic, and precise control to execute maneuvers without collisions, required near-perfect accuracy, as a single failure could be catastrophic. Scaling from tests to commercial fleets introduced hurdles like handling edge cases (e.g., school buses with stop signs, emergency vehicles), regulatory approvals across cities, and public trust amid scrutiny. Incidents like failing to stop for school buses highlighted software gaps, prompting recalls. Massive data needs for training, compute-intensive models, and geographic adaptation (e.g., right-hand vs. left-hand driving) compounded issues, with competitors struggling on scalability.

Solution

Waymo's Waymo Driver stack integrates deep learning end-to-end: perception fuses lidar, radar, and cameras via convolutional neural networks (CNNs) and transformers for 3D object detection, tracking, and semantic mapping with high fidelity. Prediction models forecast multi-agent behaviors using graph neural networks and video transformers trained on billions of simulated and real miles. For planning, Waymo applied scaling laws—larger models with more data/compute yield power-law gains in forecasting accuracy and trajectory quality—shifting from rule-based to ML-driven motion planning for human-like decisions. Control employs reinforcement learning and model-predictive control hybridized with neural policies for smooth, safe execution. Vast datasets from 96M+ autonomous miles, plus simulations, enable continuous improvement; recent AI strategy emphasizes modular, scalable stacks.

Results

  • 450,000+ weekly paid robotaxi rides (Dec 2025)
  • 96 million autonomous miles driven (through June 2025)
  • 3.5x better avoiding injury-causing crashes vs. humans
  • 2x better avoiding police-reported crashes vs. humans
  • Over 71M miles with detailed safety crash analysis
  • 250,000 weekly rides (April 2025 baseline, since doubled)
Read case study →

Nubank (Pix Payments)

Payments

Nubank, Latin America's largest digital bank serving over 114 million customers across Brazil, Mexico, and Colombia, faced the challenge of scaling its Pix instant payment system amid explosive growth. Traditional Pix transactions required users to navigate the app manually, leading to friction, especially for quick, on-the-go payments. This app navigation bottleneck increased processing time and limited accessibility for users preferring conversational interfaces like WhatsApp, where 80% of Brazilians communicate daily. Additionally, enabling secure, accurate interpretation of diverse inputs—voice commands, natural language text, and images (e.g., handwritten notes or receipts)—posed significant hurdles. Nubank needed to overcome accuracy issues in multimodal understanding, ensure compliance with Brazil's Central Bank regulations, and maintain trust in a high-stakes financial environment while handling millions of daily transactions.

Solution

Nubank deployed a multimodal generative AI solution powered by OpenAI models, allowing customers to initiate Pix payments through voice messages, text instructions, or image uploads directly in the app or WhatsApp. The AI processes speech-to-text, natural language processing for intent extraction, and optical character recognition (OCR) for images, converting them into executable Pix transfers. Integrated seamlessly with Nubank's backend, the system verifies user identity, extracts key details like amount and recipient, and executes transactions in seconds, bypassing traditional app screens. This AI-first approach enhances convenience, speed, and safety, scaling operations without proportional human intervention.

Results

  • 60% reduction in transaction processing time
  • Tested with 2 million users by end of 2024
  • Serves 114 million customers across 3 countries
  • Testing initiated August 2024
  • Processes voice, text, and image inputs for Pix
  • Enabled instant payments via WhatsApp integration
Read case study →

Amazon

Retail

In the vast e-commerce landscape, online shoppers face significant hurdles in product discovery and decision-making. With millions of products available, customers often struggle to find items matching their specific needs, compare options, or get quick answers to nuanced questions about features, compatibility, and usage. Traditional search bars and static listings fall short, leading to shopping cart abandonment rates as high as 70% industry-wide and prolonged decision times that frustrate users. Amazon, serving over 300 million active customers, encountered amplified challenges during peak events like Prime Day, where query volumes spiked dramatically. Shoppers demanded personalized, conversational assistance akin to in-store help, but scaling human support was impossible. Issues included handling complex, multi-turn queries, integrating real-time inventory and pricing data, and ensuring recommendations complied with safety and accuracy standards amid a $500B+ catalog.

Solution

Amazon developed Rufus, a generative AI-powered conversational shopping assistant embedded in the Amazon Shopping app and desktop. Rufus leverages a custom-built large language model (LLM) fine-tuned on Amazon's product catalog, customer reviews, and web data, enabling natural, multi-turn conversations to answer questions, compare products, and provide tailored recommendations. Powered by Amazon Bedrock for scalability and AWS Trainium/Inferentia chips for efficient inference, Rufus scales to millions of sessions without latency issues. It incorporates agentic capabilities for tasks like cart addition, price tracking, and deal hunting, overcoming prior limitations in personalization by accessing user history and preferences securely. Implementation involved iterative testing, starting with beta in February 2024, expanding to all US users by September, and global rollouts, addressing hallucination risks through grounding techniques and human-in-loop safeguards.

Results

  • 60% higher purchase completion rate for Rufus users
  • $10B projected additional sales from Rufus
  • 250M+ customers used Rufus in 2025
  • Monthly active users up 140% YoY
  • Interactions surged 210% YoY
  • Black Friday sales sessions +100% with Rufus
  • 149% jump in Rufus users recently
Read case study →

NatWest

Banking

NatWest Group, a leading UK bank serving over 19 million customers, grappled with escalating demands for digital customer service. Traditional systems like the original Cora chatbot handled routine queries effectively but struggled with complex, nuanced interactions, often escalating 80-90% of cases to human agents. This led to delays, higher operational costs, and risks to customer satisfaction amid rising expectations for instant, personalized support. Simultaneously, the surge in financial fraud posed a critical threat, requiring seamless fraud reporting and detection within chat interfaces without compromising security or user trust. Regulatory compliance, data privacy under UK GDPR, and ethical AI deployment added layers of complexity, as the bank aimed to scale support while minimizing errors in high-stakes banking scenarios. Balancing innovation with reliability was paramount; poor AI performance could erode trust in a sector where customer satisfaction directly impacts retention and revenue.

Solution

Cora+, launched in June 2024, marked NatWest's first major upgrade using generative AI to enable proactive, intuitive responses for complex queries, reducing escalations and enhancing self-service. This built on Cora's established platform, which already managed millions of interactions monthly. In a pioneering move, NatWest partnered with OpenAI in March 2025—becoming the first UK-headquartered bank to do so—integrating LLMs into both customer-facing Cora and internal tool Ask Archie. This allowed natural language processing for fraud reports, personalized advice, and process simplification while embedding safeguards for compliance and bias mitigation. The approach emphasized ethical AI, with rigorous testing, human oversight, and continuous monitoring to ensure safe, accurate interactions in fraud detection and service delivery.

Results

  • 150% increase in Cora customer satisfaction scores (2024)
  • Proactive resolution of complex queries without human intervention
  • First UK bank OpenAI partnership, accelerating AI adoption
  • Enhanced fraud detection via real-time chat analysis
  • Millions of monthly interactions handled autonomously
  • Significant reduction in agent escalation rates
Read case study →

Morgan Stanley

Banking

Financial advisors at Morgan Stanley struggled with rapid access to the firm's extensive proprietary research database, comprising over 350,000 documents spanning decades of institutional knowledge. Manual searches through this vast repository were time-intensive, often taking 30 minutes or more per query, hindering advisors' ability to deliver timely, personalized advice during client interactions. This bottleneck limited scalability in wealth management, where high-net-worth clients demand immediate, data-driven insights amid volatile markets. Additionally, the sheer volume of unstructured data—40 million words of research reports—made it challenging to synthesize relevant information quickly, risking suboptimal recommendations and reduced client satisfaction. Advisors needed a solution to democratize access to this 'goldmine' of intelligence without extensive training or technical expertise.

Solution

Morgan Stanley partnered with OpenAI to develop AI @ Morgan Stanley Debrief, a GPT-4-powered generative AI chatbot tailored for wealth management advisors. The tool uses retrieval-augmented generation (RAG) to securely query the firm's proprietary research database, providing instant, context-aware responses grounded in verified sources. Implemented as a conversational assistant, Debrief allows advisors to ask natural-language questions like 'What are the risks of investing in AI stocks?' and receive synthesized answers with citations, eliminating manual digging. Rigorous AI evaluations and human oversight ensure accuracy, with custom fine-tuning to align with Morgan Stanley's institutional knowledge. This approach overcame data silos and enabled seamless integration into advisors' workflows.

Results

  • 98% adoption rate among wealth management advisors
  • Access for nearly 50% of Morgan Stanley's total employees
  • Queries answered in seconds vs. 30+ minutes manually
  • Over 350,000 proprietary research documents indexed
  • Roughly 60% employee access at peer firms like JPMorgan, for comparison
  • Significant productivity gains reported by CAO
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Turn Static Training Materials into a Structured Claude Knowledge Base

Start by gathering the core assets behind your trainings: slide decks, facilitator notes, HR manuals, process documents, FAQs, and policy PDFs. Clean them up where needed (remove outdated sections, mark regional variations) and organise them by theme: onboarding, performance management, compliance, leadership, tools and systems, etc. This gives Claude a solid foundation for accurate, context-aware answers.

When connecting Claude to these documents (via API or a secure knowledge base integration), tag each document with metadata like topic, audience (e.g. managers vs. employees), and last update date. This enables more precise retrieval and allows you to instruct Claude to prioritise the newest approved sources.
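The tagging scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a specific Reruption or Anthropic API: the names `TrainingDoc` and `pick_sources` are hypothetical, and the filtering assumes one topic tag per document.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingDoc:
    """One knowledge-base document with retrieval metadata."""
    doc_id: str
    topic: str        # e.g. "onboarding", "performance-management"
    audience: str     # "managers", "employees", or "all"
    last_updated: date
    approved: bool    # signed off by HR/L&D governance

def pick_sources(docs: list[TrainingDoc], topic: str, audience: str) -> list[TrainingDoc]:
    """Return approved documents for a topic and audience, newest first,
    so the most recently approved policy version is handed to the model first."""
    relevant = [d for d in docs
                if d.approved and d.topic == topic
                and d.audience in (audience, "all")]
    return sorted(relevant, key=lambda d: d.last_updated, reverse=True)
```

The sorted, filtered list can then be passed as context to the model, with the system prompt instructing Claude to prioritise the first (newest) sources.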

Example system prompt for Claude:
You are an HR learning tutor for ACME GmbH.
Use ONLY the provided internal documents to answer.
If a question is not covered, say you don't know and refer the user to HR.
Prioritise the most recent policies and Germany-specific rules.
Explain in clear, simple language and suggest 1-2 follow-up questions
that help the employee apply the concept to their daily work.

Expected outcome: Employees can ask Claude any question related to the training topics and get consistent, policy-compliant explanations instead of hunting through old slide decks.

Design Spaced Repetition Microlearning with Claude

To fight forgetting, build simple workflows where Claude generates and delivers spaced repetition content after a training. For example, schedule weekly microlearning messages in Teams or email for 4–6 weeks post-training. Each message contains 2–3 questions or scenarios based on the original content, with instant feedback powered by Claude.

Use Claude to draft these questions at different difficulty levels and formats (multiple choice, short answer, scenarios). You can then review and approve them before they go live.

Prompt to generate spaced repetition items:
You are designing spaced repetition microlearning for employees
who completed our "Feedback for Managers" training.
Based on the attached training manual, create 10 questions that:
- Mix multiple choice and short scenario responses
- Focus on real-life situations a manager faces
- Include a short model answer and explanation per question
Label them by difficulty: basic, intermediate, advanced.

Expected outcome: Employees receive short, varied practice over time, dramatically increasing retention without requiring them to log into a separate learning platform.
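The scheduling logic behind this workflow is simple enough to sketch directly. The snippet below is an illustrative outline, assuming a pool of questions already reviewed and approved by HR; the function names are hypothetical and the actual delivery (Teams message, email) would be handled by your integration layer.

```python
from datetime import date, timedelta

def repetition_schedule(training_end: date, weeks: int = 6) -> list[date]:
    """One send date per week for `weeks` weeks after the training ends."""
    return [training_end + timedelta(weeks=w) for w in range(1, weeks + 1)]

def batch_questions(questions: list[str], per_message: int = 3) -> list[list[str]]:
    """Split the approved question pool into small weekly batches,
    so each message contains only 2-3 items."""
    return [questions[i:i + per_message]
            for i in range(0, len(questions), per_message)]
```

Pairing each date from `repetition_schedule` with one batch from `batch_questions` gives the weekly microlearning plan; Claude supplies the instant feedback when an employee answers.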

Use Claude for Scenario-Based Practice and Role Plays

Knowledge retention improves when people practise realistic situations. Configure Claude as a role-play partner that simulates employees, candidates, or colleagues so learners can rehearse difficult conversations or processes after training. This is particularly effective for leadership, feedback, performance reviews, and HR business partner trainings.

Give Claude clear instructions about its role and the type of feedback it should provide after each exchange.

Prompt for a scenario-based tutor:
You are playing the role of an employee in a performance review.
The user is the manager who just completed our "Effective Reviews" training.
1) Act like a realistic employee: sometimes defensive, sometimes unsure.
2) After 10-15 messages, pause and provide structured feedback:
   - What the manager did well (linked to our training model)
   - What could be improved
   - 2 specific sentences the manager could have used instead.
Stay within the guidelines described in the attached training guide.

Expected outcome: Learners can return to Claude for targeted practice any time, turning passive knowledge into active skill with no need for scheduling extra live sessions.

Embed Claude into Onboarding and Just-in-Time Support

Onboarding is where poor knowledge retention hurts the most. Integrate Claude into your onboarding journey as the primary channel for “how do we do X here?” questions. Link to Claude from welcome emails, the intranet, and your LMS, and show new hires specific example questions they can ask.

Combine this with simple checklists and progress prompts generated by Claude. For example, after day 3 or week 2, send a message asking what topics are still unclear and route the most common questions to HR for content improvement.
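Surfacing the most common unclear topics for HR can be done with a small aggregation over anonymised logs. A minimal sketch, assuming questions have already been mapped to topic labels and stripped of personal data; the function name is illustrative.

```python
from collections import Counter

def top_unclear_topics(logged_topics: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Most frequent topics from anonymised question logs, normalised to
    lowercase so 'Payroll' and 'payroll' count as the same topic."""
    counts = Counter(t.strip().lower() for t in logged_topics)
    return counts.most_common(n)
```

HR can review this list weekly and update the underlying onboarding documents wherever the same topic keeps resurfacing.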

Example onboarding helper prompt:
You are an onboarding assistant for new hires in the Sales team.
Your goals:
- Answer questions about processes, tools, and HR policies
- Always suggest where to find the official document or system screen
- Ask 1 clarifying question to better understand the context before answering.
If something seems like a manager decision, advise the user to check with their manager.

Expected outcome: New hires rely on Claude instead of peers for basic questions, reducing information overload in the first weeks and reinforcing core concepts when they actually need them.

Create Self-Serve “Refresh Paths” for Key Trainings

For critical topics (e.g. performance management, code of conduct, information security), build explicit “refresh paths” that employees can run through in Claude before key moments: yearly reviews, audits, or project kick-offs. These paths bundle short recaps, checks for understanding, and links to the most relevant documents.

You can implement this by creating named prompts or quick commands employees trigger inside your chat interface (e.g. typing “/refresh-performance-review”). Claude then guides them through a structured sequence.

Prompt to define a refresh path:
Design a 15-minute refresh sequence for our "Performance Review" training
based on the attached materials. Structure it as a guided conversation
with the employee:
1) 3-question diagnostic on their current understanding
2) Short recap of the core model (max 5 bullets)
3) 2 realistic scenarios to apply the model
4) A final checklist they can use in their upcoming review.
Keep language concise and practical.

Expected outcome: Employees revisit critical concepts at the moments of highest relevance, which boosts retention and quality of execution without scheduling new workshops.

Measure Retention and Content Gaps Through Claude Interactions

Finally, use Claude not only to deliver learning, but also to understand where knowledge leaks occur. Analyse anonymised interaction logs (in compliance with your data policies) to see which topics get repeated questions after a training, what people misunderstand, and which documents are rarely referenced.

Combine this analysis with periodic pulse quizzes generated by Claude and delivered through your existing channels. Compare correct answer rates directly after training versus 4–8 weeks later to quantify retention and identify where additional reinforcement is needed.
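The retention comparison itself is a small calculation once you have per-topic correct-answer rates from both quizzes. A minimal sketch under the assumption that rates are expressed as fractions between 0.0 and 1.0; the 15-point threshold and the function name are illustrative choices, not a standard.

```python
def retention_gaps(post_training: dict[str, float],
                   follow_up: dict[str, float],
                   max_drop: float = 0.15) -> dict[str, float]:
    """Per-topic drop in correct-answer rate between the quiz taken right
    after training and the follow-up 4-8 weeks later. Returns only topics
    whose drop exceeds `max_drop`, i.e. candidates for reinforcement."""
    drops = {t: round(post_training[t] - follow_up.get(t, 0.0), 2)
             for t in post_training}
    return {t: d for t, d in drops.items() if d > max_drop}
```

Topics flagged here are the ones to target with extra spaced repetition or a redesigned module, rather than rerunning the whole training.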

Prompt for retention pulse quiz:
You are helping HR measure long-term knowledge retention.
Based on the "Information Security Basics" training, create:
- 8 multiple-choice questions covering the most critical risks
- 2 scenario questions about real-life decisions employees face
Provide the right answer and a short rationale.
Keep language non-technical and focused on behaviour.

Expected outcome: HR gains data-driven insight into which trainings stick, which need redesign, and where to allocate budget, instead of relying on smile sheets or attendance metrics alone.

Across these best practices, organisations typically see higher post-training quiz scores (10–25 percentage points), fewer repetitive HR helpdesk questions on trained topics, and shorter time-to-productivity for new hires. The exact metrics will depend on your context, but with a well-implemented Claude-based AI tutor, you can realistically expect a tangible improvement in knowledge retention within one or two training cycles.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Claude improves knowledge retention by turning one-off trainings into ongoing, interactive support. Instead of employees trying to remember a slide from a workshop, they can ask Claude questions in natural language, practise with scenarios, and receive spaced repetition microlearning over several weeks.

Because Claude is available on demand in the tools employees already use (e.g. Teams, intranet), it reinforces concepts at the moment of need. This combination of retrieval practice, real-life application, and easy access is what significantly reduces forgetting compared to traditional training-only approaches.

Implementation is mostly about content preparation and integration, not building complex infrastructure from scratch. You need:

  • A curated set of up-to-date training materials, policies, and process docs
  • Clear rules on what Claude may and may not answer (governance and compliance)
  • Technical integration into your preferred channels (LMS, intranet, Teams, Slack, etc.)
  • A small cross-functional team from HR/L&D, IT, and Legal/Data Protection to sign off guardrails

With this in place, a focused pilot for a specific training (e.g. onboarding or performance management) can often go live within a few weeks, especially if you use Reruption’s AI PoC approach to validate feasibility and user experience quickly.

For a single training topic, you can typically measure improvements in retention within one learning cycle. If you introduce Claude-based follow-up (microlearning, Q&A, scenarios) immediately after a workshop, you can run a follow-up quiz or scenario assessment 4–8 weeks later and compare it to previous cohorts.

On a broader level — reduced HR helpdesk tickets, better onboarding ramp-up times, fewer process errors — meaningful trends usually become visible over 3–6 months, depending on how often employees use the relevant skills. The key is to define metrics upfront and use Claude’s interaction data to understand where knowledge is sticking and where it still leaks.

The main cost components are access to Claude (via API or platform), integration effort, and some HR/L&D time to curate and maintain content. Compared to traditional training costs (external trainers, travel, lost productive hours), this is usually moderate, especially once the initial setup is complete.

ROI comes from multiple levers: better knowledge retention (fewer repeat trainings), reduced HR and manager time spent on repetitive questions, faster onboarding, lower error or compliance risk, and more targeted use of your learning budget. By tying Claude initiatives to specific metrics — e.g. a 20% reduction in repeated questions on a policy, or a 15% faster ramp-up for a role — you can build a solid business case for the investment.

Reruption supports organisations end-to-end in using Claude for HR learning. With our AI PoC offering (9.900€), we quickly validate a concrete use case — for example, turning your onboarding or performance management training into an AI tutor — and deliver a working prototype with performance metrics and a production roadmap.

Beyond the PoC, we apply our Co-Preneur approach: embedding with your team like co-founders, not external slide-ware consultants. We help you scope the right learning journeys, set up secure integrations, design prompts and guardrails, and measure real impact on behaviour and performance. The goal is not another pilot that fades out, but a sustainable AI-first learning capability embedded in your HR function.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media