The Challenge: Limited Learning Insights

Most HR and L&D teams are swimming in learning data but starving for real insight. You have LMS exports, feedback forms, survey responses, webinar recordings and assessment scores – yet you still struggle to answer a basic question: which learning modules actually improve skills and performance? As a result, decisions about what to keep, fix or retire in your catalog are often based on gut feeling or satisfaction scores, not on hard evidence.

Traditional approaches to training analytics were built for a different era. Standard LMS dashboards focus on attendance, completion rates and quiz scores, not on behavioural change or business impact. Manual analysis of open text feedback and training logs is slow and inconsistent, so it rarely happens at scale. Even when HR exports data into spreadsheets or BI tools, you need scarce data skills to turn that into meaningful insight, and you still miss the nuance in comments, discussions and coaching notes.

The business impact is significant. Without clear insight into learning effectiveness, budgets get spread thin across generic programs that may not move the needle. High-potential employees waste time on irrelevant modules while critical skill gaps go unaddressed. It becomes harder to defend L&D investments against other priorities because you cannot clearly link programs to measurable improvements in performance, retention or internal mobility. Over time, your organisation falls behind competitors that can develop and deploy skills more precisely.

This challenge is real, but it is solvable. Modern AI for HR and L&D – especially large language models like Claude – can process long-form training data, surface patterns hidden in comments and transcripts, and connect learning activities to observable skill signals. At Reruption, we’ve helped organisations build AI-first learning tools and analytics layers that turn unstructured training information into clear, actionable narratives. In the rest of this guide, you’ll see practical ways to use Claude to close your learning insight gap and make sharper, evidence-based L&D decisions.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-powered learning and training solutions, we see the same pattern across HR teams: the data to understand learning effectiveness already exists, but it is fragmented across systems and too unstructured to analyse manually. Tools like Claude change the economics of analysis by digesting long logs, survey responses and transcripts and turning them into structured, decision-ready insights. Our perspective: the opportunity isn’t just to add another report, but to redesign how your organisation measures skills, learning impact and L&D ROI with an AI-first lens.

Think in Terms of Skills, Not Courses

The biggest strategic shift when using Claude for learning analytics is to move from a course-centric view to a skill-centric one. Instead of asking “Is this training popular?”, the core question becomes “Which specific skills does this module build, and how effectively?” Claude can help you map learning content, assignments and assessments to a defined skills taxonomy, then track where learners show improvement or persistent gaps.

At a strategic level, this means aligning HR, L&D and business leaders on a common language for skills and proficiency levels. Before you scale any AI-based analysis, invest time in defining 20–50 priority skills for your critical roles and how they show up in behaviour, artifacts (e.g., project deliverables) and feedback. Claude can then be instructed to evaluate comments, reflections and assignments through that skills lens, giving you insight that connects directly to workforce planning and talent decisions.

Start With One High-Value Learning Journey

It’s tempting to point Claude at your entire L&D catalog, but strategically it’s more effective to begin with a single, high-impact learning journey – for example, leadership development, sales enablement or onboarding for critical roles. This concentrates your effort where insight will have immediate business impact and makes it easier to validate whether Claude’s outputs are useful.

Pick a journey where you have sufficient data: sign-ups, completions, quizzes, feedback forms, and ideally manager evaluations or performance indicators. Use Claude to analyse this end-to-end experience and generate a “learning effectiveness narrative”: what works, what confuses learners, where engagement drops, and which elements correlate with better outcomes. Once stakeholders see concrete, trusted improvements for one journey, you’ll find it much easier to secure support to expand AI analytics to other programs.

Design a Human-in-the-Loop Review Process

Claude can dramatically accelerate learning insights, but HR should not outsource judgement entirely to an AI model. Strategically, you need a human-in-the-loop process that defines what is automated (aggregation, clustering, drafting insights) and what is owned by experts (interpretation, prioritisation, intervention design). This protects against misread nuance and builds trust with works councils and leadership.

Set up a recurring cadence where L&D specialists and HR business partners review Claude’s summaries and recommendations together. Encourage them to challenge the output, cross-check with raw data and enrich the findings with contextual knowledge about organisational changes, cultural factors or local specifics. Position Claude as a “learning insights analyst” whose work must be validated and refined, not as a black-box decision maker.

Prepare Your Data and Governance Upfront

Using AI in HR analytics touches sensitive data – performance feedback, open text comments, even health- or diversity-related issues in some contexts. Strategically, you must prepare your data pipelines and governance before scaling Claude. Decide which sources are in scope (LMS logs, survey results, 360 feedback, coaching notes, call transcripts), what needs to be anonymised or pseudonymised, and which regions or groups require additional safeguards.

Work with Legal, IT and Data Protection Officers to define clear policies: what data is processed by Claude, how long it’s stored, and who can access the outputs. A well-governed approach not only reduces compliance risk but also increases employee trust when you later communicate that AI is being used to improve learning, not to micro-monitor individuals.

Frame the Initiative Around Business Outcomes, Not Technology

For executives and line managers, the value of AI-powered learning analytics is not about Claude itself, but about solving concrete business problems: ramping new hires faster, closing critical skill gaps, or improving quota attainment. Strategically, you should frame your Claude initiative in these terms from day one.

Define a small set of target outcomes (e.g., reducing time-to-productivity for new sales reps by 20%, or cutting ineffective training hours by 15%). Then use Claude’s analysis to regularly report progress on these outcomes, not just on engagement metrics. This positions HR as a strategic partner who brings data-backed recommendations instead of more “training requests”, and it makes it much easier to argue for a higher L&D budget when you can show demonstrable impact.

Using Claude to fix limited learning insights is less about generating pretty dashboards and more about fundamentally changing how HR understands skills, learning journeys and business impact. With the right scope, governance and human-in-the-loop process, Claude becomes a force multiplier for your L&D team, turning messy training data into clear, actionable direction. At Reruption, we combine this AI capability with hands-on implementation and our Co-Preneur mindset to build learning insight systems that actually get used. If you’re exploring how to move beyond completion rates and into real skill analytics, we’re happy to help you test the approach on a concrete use case and scale from there.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Banking to Logistics: Learn how companies successfully use AI.

Morgan Stanley

Banking

Financial advisors at Morgan Stanley struggled to get rapid access to the firm's extensive proprietary research database, comprising over 350,000 documents spanning decades of institutional knowledge. Manual searches through this vast repository were time-intensive, often taking 30 minutes or more per query, hindering advisors' ability to deliver timely, personalized advice during client interactions. This bottleneck limited scalability in wealth management, where high-net-worth clients demand immediate, data-driven insights amid volatile markets. Additionally, the sheer volume of unstructured data (40 million words of research reports) made it challenging to synthesize relevant information quickly, risking suboptimal recommendations and reduced client satisfaction. Advisors needed a solution to democratize access to this 'goldmine' of intelligence without extensive training or technical expertise.

Solution

Morgan Stanley partnered with OpenAI to develop AI @ Morgan Stanley Debrief, a GPT-4-powered generative AI chatbot tailored for wealth management advisors. The tool uses retrieval-augmented generation (RAG) to securely query the firm's proprietary research database, providing instant, context-aware responses grounded in verified sources . Implemented as a conversational assistant, Debrief allows advisors to ask natural-language questions like 'What are the risks of investing in AI stocks?' and receive synthesized answers with citations, eliminating manual digging. Rigorous AI evaluations and human oversight ensure accuracy, with custom fine-tuning to align with Morgan Stanley's institutional knowledge . This approach overcame data silos and enabled seamless integration into advisors' workflows.

Results

  • 98% adoption rate among wealth management advisors
  • Access for nearly 50% of Morgan Stanley's total employees
  • Queries answered in seconds vs. 30+ minutes manually
  • Over 350,000 proprietary research documents indexed
  • For comparison: 60% employee access reported at peers like JPMorgan
  • Significant productivity gains reported by CAO
Read case study →

Stanford Health Care

Healthcare

Stanford Health Care, a leading academic medical center, faced escalating clinician burnout from overwhelming administrative tasks, including drafting patient correspondence and managing inboxes overloaded with messages. With vast EHR data volumes, extracting insights for precision medicine and real-time patient monitoring was manual and time-intensive, delaying care and increasing error risks. Traditional workflows struggled with predictive analytics for events like sepsis or falls, and computer vision for imaging analysis, amid growing patient volumes. Clinicians spent excessive time on routine communications, such as lab result notifications, hindering focus on complex diagnostics. The need for scalable, unbiased AI algorithms was critical to leverage extensive datasets for better outcomes.

Solution

Partnering with Microsoft, Stanford became one of the first healthcare systems to pilot Azure OpenAI Service within Epic EHR, enabling generative AI for drafting patient messages and natural language queries on clinical data. This integration used GPT-4 to automate correspondence, reducing manual effort. Complementing this, the Healthcare AI Applied Research Team deployed machine learning for predictive analytics (e.g., sepsis, falls prediction) and explored computer vision in imaging projects. Tools like ChatEHR allow conversational access to patient records, accelerating chart reviews. Phased pilots addressed data privacy and bias, ensuring explainable AI for clinicians.

Results

  • 50% reduction in time for drafting patient correspondence
  • 30% decrease in clinician inbox burden from AI message routing
  • 91% accuracy in predictive models for inpatient adverse events
  • 20% faster lab result communication to patients
  • Autoimmune conditions detected up to 1 year before conventional diagnosis
Read case study →

HSBC

Banking

As a global banking titan handling trillions in annual transactions, HSBC grappled with escalating fraud and money laundering risks. Traditional systems struggled to process over 1 billion transactions monthly, generating excessive false positives that burdened compliance teams, slowed operations, and increased costs. Ensuring real-time detection while minimizing disruptions to legitimate customers was critical, alongside strict regulatory compliance in diverse markets. Customer service faced high volumes of inquiries requiring 24/7 multilingual support, straining resources. Simultaneously, HSBC sought to pioneer generative AI research for innovation in personalization and automation, but challenges included ethical deployment, maintaining human oversight as AI capabilities advance, data privacy, and integration across legacy systems without compromising security. Scaling these solutions globally demanded robust governance to maintain trust and adhere to evolving regulations.

Solution

HSBC tackled fraud with machine learning models powered by Google Cloud's Transaction Monitoring 360, enabling AI to detect anomalies and financial crime patterns in real-time across vast datasets. This shifted from rigid rules to dynamic, adaptive learning. For customer service, NLP-driven chatbots were rolled out to handle routine queries, provide instant responses, and escalate complex issues, enhancing accessibility worldwide. In parallel, HSBC advanced generative AI through internal research, sandboxes, and a landmark multi-year partnership with Mistral AI (announced December 2024), integrating tools for document analysis, translation, fraud enhancement, automation, and client-facing innovations—all under ethical frameworks with human oversight.

Results

  • Screens over 1 billion transactions monthly for financial crime
  • Significant reduction in false positives and manual reviews (60–90% in some models)
  • Hundreds of AI use cases deployed across global operations
  • Multi-year Mistral AI partnership (Dec 2024) to accelerate genAI productivity
  • Enhanced real-time fraud alerts, reducing compliance workload
Read case study →

UPS

Logistics

UPS faced massive inefficiencies in delivery routing: a single driver's stops can be sequenced in more ways than there are nanoseconds since Earth's formation. Traditional manual planning led to longer drive times, higher fuel consumption, and elevated operational costs, exacerbated by dynamic factors like traffic, package volumes, terrain, and customer availability. These issues not only inflated expenses but also contributed to significant CO2 emissions in an industry under pressure to go green. Key challenges included driver resistance to new technology, integration with legacy systems, and ensuring real-time adaptability without disrupting daily operations. Pilot tests revealed adoption hurdles, as drivers accustomed to familiar routes questioned the AI's suggestions, highlighting the human element in tech deployment. Scaling across 55,000 vehicles demanded robust infrastructure and data handling for billions of data points daily.

Solution

UPS developed ORION (On-Road Integrated Optimization and Navigation), an AI-powered system blending operations research for mathematical optimization with machine learning for predictive analytics on traffic, weather, and delivery patterns. It dynamically recalculates routes in real-time, considering package destinations, vehicle capacity, right/left turn efficiencies, and stop sequences to minimize miles and time. The solution evolved from static planning to dynamic routing upgrades, incorporating agentic AI for autonomous decision-making. Training involved massive datasets from GPS telematics, with continuous ML improvements refining algorithms. Overcoming adoption challenges required driver training programs and gamification incentives, ensuring seamless integration via in-cab displays.

Results

  • 100 million miles saved annually
  • $300-400 million cost savings per year
  • 10 million gallons of fuel reduced yearly
  • 100,000 metric tons CO2 emissions cut
  • 2-4 miles shorter routes per driver daily
  • 97% fleet deployment by 2021
Read case study →

Waymo (Alphabet)

Transportation

Developing fully autonomous ride-hailing demanded overcoming extreme challenges in AI reliability for real-world roads. Waymo needed to master perception—detecting objects in fog, rain, night, or occlusions using sensors alone—while predicting erratic human behaviors like jaywalking or sudden lane changes. Planning complex trajectories in dense, unpredictable urban traffic, and precise control to execute maneuvers without collisions, required near-perfect accuracy, as a single failure could be catastrophic. Scaling from tests to commercial fleets introduced hurdles like handling edge cases (e.g., school buses with stop signs, emergency vehicles), regulatory approvals across cities, and public trust amid scrutiny. Incidents like failing to stop for school buses highlighted software gaps, prompting recalls. Massive data needs for training, compute-intensive models, and geographic adaptation (e.g., right-hand vs. left-hand driving) compounded issues, with competitors struggling on scalability.

Solution

Waymo's Waymo Driver stack integrates deep learning end-to-end: perception fuses lidar, radar, and cameras via convolutional neural networks (CNNs) and transformers for 3D object detection, tracking, and semantic mapping with high fidelity. Prediction models forecast multi-agent behaviors using graph neural networks and video transformers trained on billions of simulated and real miles. For planning, Waymo applied scaling laws—larger models with more data/compute yield power-law gains in forecasting accuracy and trajectory quality—shifting from rule-based to ML-driven motion planning for human-like decisions. Control employs reinforcement learning and model-predictive control hybridized with neural policies for smooth, safe execution. Vast datasets from 96M+ autonomous miles, plus simulations, enable continuous improvement; recent AI strategy emphasizes modular, scalable stacks.

Results

  • 450,000+ weekly paid robotaxi rides (Dec 2025)
  • 96 million autonomous miles driven (through June 2025)
  • 3.5x better avoiding injury-causing crashes vs. humans
  • 2x better avoiding police-reported crashes vs. humans
  • Over 71M miles with detailed safety crash analysis
  • 250,000 weekly rides (April 2025 baseline, since doubled)
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Unify Your Learning Data Into Analysis-Ready Packs for Claude

Claude is powerful with long, complex inputs – but you still need to give it well-structured “packs” of data. Start by exporting relevant information from your LMS and survey tools: course IDs and titles, module outlines, attendance and completion logs, quiz results, open text feedback, and (if available) manager evaluations linked to the same cohort.

Combine these into logical bundles. For example, create one file per program per cohort that contains: program description, learning objectives, module list, participant list (anonymised if needed), and all associated comments and survey responses. Then, feed these bundles into Claude with clear instructions about what to extract: themes, bottlenecks, skill improvements and correlations with performance indicators.

Example prompt to analyse one learning program:
You are an HR learning analytics expert.
You receive all data for a single learning program and its latest cohort.

1. Read the full input carefully.
2. Identify:
   - The 3–5 main strengths of the program
   - The 3–5 main weaknesses or confusion points
   - Recurring themes in open text feedback
3. Map issues to specific modules or activities.
4. Suggest 5 concrete improvements ranked by impact/feasibility.

Return your answer in this structure:
- Program summary (5–7 sentences)
- Strengths
- Weaknesses
- Module-level issues
- Recommended improvements

Expected outcome: HR gains a clear, narrative overview of each program without manually reading hundreds of comments, making review cycles faster and more consistent.
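
Running this repeatedly is easier via the Claude API than by pasting files into a chat. Below is a minimal Python sketch, assuming the official anthropic SDK (pip install anthropic) and an ANTHROPIC_API_KEY environment variable; the file names and model ID are illustrative placeholders, not fixed conventions:

from pathlib import Path

import anthropic

# The analysis prompt from above, stored once and reused for every program.
ANALYSIS_PROMPT = Path("prompts/program_analysis.txt").read_text(encoding="utf-8")

def build_pack(program_dir: Path) -> str:
    """Concatenate one program's exported files into a single labelled text block."""
    sections = []
    for name in ["description.txt", "objectives.txt", "modules.txt",
                 "completions.csv", "quiz_results.csv", "feedback.txt"]:
        path = program_dir / name
        if path.exists():  # skip sources you don't have for this cohort
            sections.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
pack = build_pack(Path("exports/leadership_2024_q3"))

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; check current model IDs
    max_tokens=2000,
    system=ANALYSIS_PROMPT,
    messages=[{"role": "user", "content": pack}],
)
print(response.content[0].text)

Storing the prompt in a versioned file keeps the analysis consistent across cohorts and reviewers.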

Use Claude to Build and Refine a Skills Taxonomy From Real Learning Data

If you don’t yet have a formal skills framework, Claude can help you bootstrap one from your existing learning content and assessments. Start by feeding Claude a representative set of course descriptions, learning objectives and assessment questions from your highest-value programs.

Ask Claude to infer the underlying skills and cluster them into a draft taxonomy, then iterate with HR and business stakeholders. You can then reuse this taxonomy to tag both content and learner outputs.

Example prompt to draft a skills taxonomy:
You are helping an HR team build a skills taxonomy.
Below you find course descriptions, learning objectives and exam questions.

Tasks:
1. Infer 20–40 distinct skills covered by this material.
2. Group them into logical categories.
3. For each skill, provide:
   - Name (max 4 words)
   - Short definition
   - 3 example behaviours at basic proficiency
   - 3 example behaviours at advanced proficiency

Return the result as a structured list.

Expected outcome: a practical, business-grounded skills framework that can later be refined, rather than a theoretical model that never connects to actual learning content.
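
To make the draft reusable for tagging content and learner outputs, you can ask Claude to return the taxonomy as JSON and store it. A minimal sketch, again assuming the anthropic Python SDK; the JSON shape is our own convention for illustration, not a Claude requirement, and real replies should be validated before use:

import json
from pathlib import Path

import anthropic

# Instructing the model to return only JSON makes the reply parseable,
# though a production version should still handle malformed output.
TAXONOMY_PROMPT = (
    "You are helping an HR team build a skills taxonomy. Infer 20-40 "
    "distinct skills from the material below, group them into categories, "
    "and return ONLY valid JSON shaped like: "
    '{"categories": [{"name": "...", "skills": [{"name": "...", '
    '"definition": "...", "basic_behaviours": ["..."], '
    '"advanced_behaviours": ["..."]}]}]}'
)

client = anthropic.Anthropic()
material = Path("exports/course_descriptions.txt").read_text(encoding="utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=4000,
    system=TAXONOMY_PROMPT,
    messages=[{"role": "user", "content": material}],
)

taxonomy = json.loads(response.content[0].text)  # raises if the reply isn't pure JSON
Path("taxonomy.json").write_text(json.dumps(taxonomy, indent=2), encoding="utf-8")
for category in taxonomy["categories"]:
    print(f"{category['name']}: {len(category['skills'])} skills")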

Analyse Open Text Feedback and Transcripts for Root Causes

One of Claude’s strengths is handling long-form, messy text, such as survey comments, workshop notes or webinar transcripts. Use this to go beyond “average satisfaction” and identify root causes of learning issues. For example, you can upload all comments for a leadership program plus transcripts from key sessions and ask Claude to cluster issues, quote representative examples and relate them to specific modules.

Combine qualitative insight with simple quantitative counts (e.g., how often each theme appears) to prioritise action.

Example prompt to extract root causes:
You are analysing qualitative feedback for an HR learning program.
The input contains: survey comments, chat logs, and selected session transcripts.

1. Cluster feedback into themes (e.g., pace too fast, examples not relevant).
2. For each theme, provide:
   - A short description
   - 2–3 direct quotes from participants
   - The likely root cause
   - Suggested design changes to address it
3. Indicate which themes appear to have the highest impact on learning effectiveness.

Expected outcome: instead of guessing why a module underperforms, HR sees concrete patterns and can make targeted design changes.
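
One way to get those quantitative counts is to have Claude classify each comment against a fixed theme list and tally the labels. A minimal sketch, assuming the anthropic Python SDK; the theme list and file names are illustrative:

from collections import Counter

import anthropic

THEMES = ["pace too fast", "examples not relevant", "unclear objectives",
          "technical issues", "other"]

client = anthropic.Anthropic()

def label_comment(comment: str) -> str:
    """Ask Claude to pick exactly one theme for a single comment."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=20,
        system=("Classify the feedback comment into exactly one of these "
                f"themes and reply with the theme only: {', '.join(THEMES)}"),
        messages=[{"role": "user", "content": comment}],
    )
    return response.content[0].text.strip()

comments = open("exports/feedback.txt", encoding="utf-8").read().splitlines()
counts = Counter(label_comment(c) for c in comments if c.strip())
for theme, n in counts.most_common():
    print(f"{n:4d}  {theme}")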

Link Learning Data to Performance Signals With Claude-Assisted Analysis

To move from engagement metrics to impact, you need to connect learning data with performance signals like sales numbers, productivity metrics or internal mobility. Claude is not a replacement for the statistical analysis in your BI stack, but it can help you reason about patterns and generate hypotheses once you supply aggregated summaries from your BI tools.

For example, you can export a simple table summarising performance by cohort (e.g., average sales before vs. after training) and provide Claude with narrative descriptions of the program and any changes made over time.

Example prompt to reason about impact:
You are an HR analytics advisor.
Below you find:
- A description of the learning program and its objectives.
- A table (in text form) with average performance metrics by cohort.
- Notes about major changes to the program across cohorts.

Tasks:
1. Identify plausible relationships between program changes and performance trends.
2. Highlight where the data is inconclusive or might be influenced by external factors.
3. Suggest 3–5 follow-up analyses HR should run with BI to validate impact.
4. Propose 3 concrete decisions HR could test in the next cohort.

Expected outcome: HR gains a structured, narrative interpretation of numbers that can be discussed with Finance and business leaders, supporting a stronger L&D business case.
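
Since the statistics themselves should stay in your BI stack, a simple aggregation step can produce the cohort table Claude reasons about. A minimal sketch using pandas; the column names are illustrative, and only aggregated figures (no row-level employee data) are passed on:

import pandas as pd

# Export from your BI tool: one row per rep per period,
# with columns such as cohort, period ("before"/"after"), revenue.
df = pd.read_csv("exports/sales_performance.csv")

summary = (
    df.pivot_table(index="cohort", columns="period",
                   values="revenue", aggfunc="mean")
      .round(0)
)

# Render as plain text and paste into the impact-analysis
# prompt above, or send it via the API as in the earlier sketches.
print(summary.to_string())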

Generate Personalised Learning Insights and Micro-Recommendations for Learners

Beyond program-level analytics, Claude can produce individualised learning insights for employees without creating a huge manual workload for HR. Feed Claude anonymised or employee-consented data for a learner: completed modules, quiz results, reflection assignments and selected work samples (e.g., sales emails, project reports).

Ask Claude to identify strengths, gaps and recommended next steps, phrased in a supportive, coaching style. These outputs can be embedded into your LMS or shared with managers to guide development conversations.

Example prompt for individual learning feedback:
You are a supportive learning coach.
You receive:
- A list of modules the learner has completed
- Their quiz scores
- Short reflection answers and sample work outputs

1. Identify the learner's top 3 strengths.
2. Identify the 3 most relevant skill gaps based on the input.
3. Suggest a personalised 4-week learning focus:
   - 2–3 modules to revisit or deepen
   - 2 on-the-job practice ideas per skill gap
4. Write the feedback in a constructive tone, addressing the learner directly.

Expected outcome: employees receive targeted guidance that makes learning feel relevant and actionable, while HR gets scalable personalised support without adding headcount.
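
At cohort scale this is straightforward to batch. A minimal sketch, assuming the anthropic Python SDK and one JSON record per learner; the record format and file names are illustrative:

import json
from pathlib import Path

import anthropic

# The coaching prompt from above, stored once for consistency.
COACH_PROMPT = Path("prompts/learning_coach.txt").read_text(encoding="utf-8")

client = anthropic.Anthropic()
out_dir = Path("outputs")
out_dir.mkdir(exist_ok=True)

# One JSON record per learner, e.g.:
# {"id": "L-017", "modules": [...], "quiz_scores": {...}, "reflections": [...]}
with open("exports/learner_records.jsonl", encoding="utf-8") as f:
    for line in f:
        learner = json.loads(line)
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model ID
            max_tokens=1200,
            system=COACH_PROMPT,
            messages=[{"role": "user", "content": json.dumps(learner)}],
        )
        (out_dir / f"feedback_{learner['id']}.md").write_text(
            response.content[0].text, encoding="utf-8")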

Establish a Recurring Claude-Assisted L&D Review Cadence

To make AI-powered learning insights stick, embed Claude into your regular L&D operating rhythm. For example, create a quarterly review ritual where HR exports updated data for priority programs, runs the standardised Claude prompts, and then discusses the outputs with stakeholders.

Use consistent templates for Claude’s analysis and store results centrally, so you can track how insights and actions evolve over time. Over 2–3 cycles, you’ll see which recommendations lead to measurable improvements and where your prompts or data need refinement.
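
A small script makes the quarterly ritual repeatable: same prompts, same structure, timestamped outputs. A minimal sketch building on the earlier ones; program names and paths are illustrative:

from datetime import date
from pathlib import Path

import anthropic

PROGRAMS = ["leadership_dev", "sales_enablement", "onboarding_critical"]

client = anthropic.Anthropic()
run_dir = Path(f"reviews/{date.today():%Y-%m}")  # one folder per review cycle
run_dir.mkdir(parents=True, exist_ok=True)

prompt = Path("prompts/program_analysis.txt").read_text(encoding="utf-8")
for program in PROGRAMS:
    pack = Path(f"exports/{program}.txt").read_text(encoding="utf-8")
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=2000,
        system=prompt,
        messages=[{"role": "user", "content": pack}],
    )
    # Store alongside earlier cycles so insights can be compared over time.
    (run_dir / f"{program}.md").write_text(response.content[0].text, encoding="utf-8")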

Expected outcomes (realistic ranges over 6–12 months, depending on scale and maturity): 30–50% reduction in manual time spent on feedback analysis, 10–20% reduction in clearly ineffective training hours, faster iteration cycles on key programs, and a much stronger evidence base to defend or expand your L&D budget.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve learning insights compared to our existing LMS reports?

Claude can process the unstructured learning data that traditional LMS reports ignore: open text feedback, reflection assignments, coaching notes and workshop transcripts. By analysing this content alongside attendance and quiz data, Claude can highlight which modules actually build specific skills, where learners are confused, and which parts of a program drive the most behaviour change.

Instead of just knowing that 92% completed a course, HR gets narrative insights such as “Module 3 is where most learners drop off and report confusion about applying the concept in their role.” This allows you to redesign or drop underperforming elements and invest more in what works.

What skills and resources do we need to get started?

You don’t need a full data science team to start. At minimum, you need:

  • An HR or L&D lead who understands your learning programs and business context.
  • Basic data extraction support from IT or your LMS admin to export logs, survey results and transcripts.
  • Someone comfortable experimenting with Claude prompts and iterating based on the outputs.

Reruption typically helps clients set up the first end-to-end workflow: defining the data bundle, crafting robust prompts, and designing the review process. Once this is in place, your HR team can operate and refine the process without heavy technical dependencies.

How quickly can we expect to see results?

For a focused use case, you can see meaningful insights in a matter of weeks, not months. A typical timeline looks like this:

  • Week 1–2: Select one priority learning journey, export data, and define the first analysis prompts.
  • Week 3–4: Run Claude analyses, review outputs with L&D, and implement quick wins (e.g., clarifying confusing modules, adjusting examples).
  • Month 2–3: Refine prompts, add additional data sources (e.g., manager feedback), and formalise a recurring review rhythm.

Measurable impact on skills and performance (e.g., better assessment scores, shorter ramp-up time) typically appears over one or two program cycles as improvements take effect. The key is to start narrow, learn quickly and scale only what demonstrably adds value.

What ROI can we expect from AI-powered learning analytics?

The ROI comes from three main levers:

  • Time savings: Automating analysis of feedback and transcripts can cut manual review time by 30–50%, freeing HR and L&D to focus on design and stakeholder alignment.
  • Better allocation of L&D budget: With clear evidence of what works, you can reduce or retire low-impact content, often cutting 10–20% of ineffective training hours and reinvesting in high-value areas.
  • Improved performance: By identifying and addressing specific skill gaps, organisations see faster onboarding, higher sales effectiveness, or fewer quality issues – all of which have direct financial impact.

Claude itself is relatively low-cost compared to traditional analytics projects; the main investment is in setting up the right workflows. That’s why we recommend starting with a contained pilot that can demonstrate concrete savings or gains within one budget cycle.

How does Reruption help with implementation?

Reruption supports organisations from idea to working solution with an AI-first, Co-Preneur mindset. For this specific challenge of limited learning insights, we typically start with our AI PoC offering (9,900€), where we co-define a concrete use case (e.g., analysing one key learning journey), build a Claude-based prototype, and test whether it delivers actionable insights with your real data.

Beyond the PoC, we help you embed Claude into your HR and L&D workflows: designing data pipelines, refining prompts, ensuring security & compliance, and training your team to operate the solution themselves. We don’t just hand over slides; we work inside your organisation’s reality until a functioning learning insights process is live and delivering value.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media