The Challenge: Limited Learning Insights

Most HR and L&D teams are swimming in learning data but starving for real insight. You have LMS exports, feedback forms, survey responses, webinar recordings and assessment scores – yet you still struggle to answer a basic question: which learning modules actually improve skills and performance? As a result, decisions about what to keep, fix or retire in your catalog are often based on gut feeling or satisfaction scores, not on hard evidence.

Traditional approaches to training analytics were built for a different era. Standard LMS dashboards focus on attendance, completion rates and quiz scores, not on behavioural change or business impact. Manual analysis of open text feedback and training logs is slow and inconsistent, so it rarely happens at scale. Even when HR exports data into spreadsheets or BI tools, you need scarce data skills to turn that into meaningful insight, and you still miss the nuance in comments, discussions and coaching notes.

The business impact is significant. Without clear insight into learning effectiveness, budgets get spread thin across generic programs that may not move the needle. High-potential employees waste time on irrelevant modules while critical skill gaps remain uncovered. It becomes harder to defend L&D investments against other priorities because you cannot clearly link programs to measurable improvements in performance, retention or internal mobility. Over time, your organisation falls behind competitors that can more precisely develop and deploy skills.

This challenge is real, but it is solvable. Modern AI for HR and L&D – especially large language models like Claude – can process long-form training data, surface patterns hidden in comments and transcripts, and connect learning activities to observable skill signals. At Reruption, we’ve helped organisations build AI-first learning tools and analytics layers that turn unstructured training information into clear, actionable narratives. In the rest of this guide, you’ll see practical ways to use Claude to close your learning insight gap and make sharper, evidence-based L&D decisions.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-powered learning and training solutions, we see the same pattern across HR teams: the data to understand learning effectiveness already exists, but it is fragmented across systems and too unstructured to analyse manually. Tools like Claude change the economics of analysis by digesting long logs, survey responses and transcripts and turning them into structured, decision-ready insights. Our perspective: the opportunity isn’t just to add another report, but to redesign how your organisation measures skills, learning impact and L&D ROI with an AI-first lens.

Think in Terms of Skills, Not Courses

The biggest strategic shift when using Claude for learning analytics is to move from a course-centric view to a skill-centric one. Instead of asking, “Is this training popular?” the core question becomes, “Which specific skills does this module build, and how effectively?” Claude can help you map learning content, assignments and assessments to a defined skills taxonomy, then track where learners show improvement or persistent gaps.

At a strategic level, this means aligning HR, L&D and business leaders on a common language for skills and proficiency levels. Before you scale any AI-based analysis, invest time in defining 20–50 priority skills for your critical roles and how they show up in behaviour, artifacts (e.g., project deliverables) and feedback. Claude can then be instructed to evaluate comments, reflections and assignments through that skills lens, giving you insight that connects directly to workforce planning and talent decisions.
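As an illustration of what one entry in such a taxonomy might look like in structured form, here is a minimal Python sketch; the skill name, definition and behaviours are invented examples, not a prescribed framework:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """One entry in a priority skills taxonomy (all content is illustrative)."""
    name: str
    definition: str
    basic_behaviours: list[str]      # how the skill shows up at basic proficiency
    advanced_behaviours: list[str]   # how it shows up at advanced proficiency

taxonomy = [
    Skill(
        name="Coaching conversations",
        definition="Guides others toward their own solutions via questions.",
        basic_behaviours=["Asks open questions in 1:1s"],
        advanced_behaviours=["Coaches peers through complex trade-offs"],
    ),
]
print(len(taxonomy), taxonomy[0].name)
```

Keeping each skill in a structure like this makes it easy to paste the relevant definitions into a Claude prompt as the evaluation lens.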

Start With One High-Value Learning Journey

It’s tempting to point Claude at your entire L&D catalog, but strategically it’s more effective to begin with a single, high-impact learning journey – for example, leadership development, sales enablement or onboarding for critical roles. This concentrates your effort where insight will have immediate business impact and makes it easier to validate whether Claude’s outputs are useful.

Pick a journey where you have sufficient data: sign-ups, completions, quizzes, feedback forms, and ideally manager evaluations or performance indicators. Use Claude to analyse this end-to-end experience and generate a “learning effectiveness narrative”: what works, what confuses learners, where engagement drops, and which elements correlate with better outcomes. Once stakeholders see concrete, trusted improvements for one journey, you’ll find it much easier to secure support to expand AI analytics to other programs.

Design a Human-in-the-Loop Review Process

Claude can dramatically accelerate learning insights, but HR should not outsource judgement entirely to an AI model. Strategically, you need a human-in-the-loop process that defines what is automated (aggregation, clustering, drafting insights) and what is owned by experts (interpretation, prioritisation, intervention design). This protects against misread nuance and builds trust with works councils and leadership.

Set up a recurring cadence where L&D specialists and HR business partners review Claude’s summaries and recommendations together. Encourage them to challenge the output, cross-check with raw data and enrich the findings with contextual knowledge about organisational changes, cultural factors or local specifics. Position Claude as a “learning insights analyst” whose work must be validated and refined, not as a black-box decision maker.

Prepare Your Data and Governance Upfront

Using AI in HR analytics touches sensitive data – performance feedback, open text comments, even health- or diversity-related issues in some contexts. Strategically, you must prepare your data pipelines and governance before scaling Claude. Decide which sources are in scope (LMS logs, survey results, 360 feedback, coaching notes, call transcripts), what needs to be anonymised or pseudonymised, and which regions or groups require additional safeguards.

Work with Legal, IT and Data Protection Officers to define clear policies: what data is processed by Claude, how long it’s stored, and who can access the outputs. A well-governed approach not only reduces compliance risk but also increases employee trust when you later communicate that AI is being used to improve learning, not to micro-monitor individuals.
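To make the pseudonymisation step concrete, here is a minimal Python sketch using a salted hash. The salt name and token format are assumptions for illustration; the point is that the same employee always maps to the same token (so cohort-level analysis still works) while the raw identifier never leaves your systems:

```python
import hashlib

def pseudonymise(employee_id: str, salt: str) -> str:
    """Replace a real employee ID with a stable, non-reversible token.

    The salt stays inside HR; without it, the token cannot be linked
    back to the person, but repeated runs produce the same token.
    """
    digest = hashlib.sha256((salt + employee_id).encode("utf-8")).hexdigest()
    return f"EMP-{digest[:10]}"

# Same input yields the same token; different inputs yield different tokens.
token_a = pseudonymise("jane.doe", salt="hr-internal-secret")
token_b = pseudonymise("john.smith", salt="hr-internal-secret")
assert token_a == pseudonymise("jane.doe", salt="hr-internal-secret")
assert token_a != token_b
```

Run a step like this before any text or log data is sent to Claude, and keep the salt and any mapping tables under the access policies you agreed with your Data Protection Officer.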

Frame the Initiative Around Business Outcomes, Not Technology

For executives and line managers, the value of AI-powered learning analytics is not about Claude itself, but about solving concrete business problems: ramping new hires faster, closing critical skill gaps, or improving quota attainment. Strategically, you should frame your Claude initiative in these terms from day one.

Define a small set of target outcomes (e.g., reducing time-to-productivity for new sales reps by 20%, or cutting ineffective training hours by 15%). Then use Claude’s analysis to regularly report progress on these outcomes, not just on engagement metrics. This positions HR as a strategic partner who brings data-backed recommendations instead of more “training requests”, and it makes it much easier to argue for a higher L&D budget when you can show demonstrable impact.

Using Claude to fix limited learning insights is less about generating pretty dashboards and more about fundamentally changing how HR understands skills, learning journeys and business impact. With the right scope, governance and human-in-the-loop process, Claude becomes a force multiplier for your L&D team, turning messy training data into clear, actionable direction. At Reruption, we combine this AI capability with hands-on implementation and our Co-Preneur mindset to build learning insight systems that actually get used. If you’re exploring how to move beyond completion rates and into real skill analytics, we’re happy to help you test the approach on a concrete use case and scale from there.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Banking to Telecommunications: Learn how companies successfully use AI.

Morgan Stanley

Banking

Financial advisors at Morgan Stanley struggled with rapid access to the firm's extensive proprietary research database, comprising over 350,000 documents spanning decades of institutional knowledge. Manual searches through this vast repository were time-intensive, often taking 30 minutes or more per query, hindering advisors' ability to deliver timely, personalized advice during client interactions. This bottleneck limited scalability in wealth management, where high-net-worth clients demand immediate, data-driven insights amid volatile markets. Additionally, the sheer volume of unstructured data—40 million words of research reports—made it challenging to synthesize relevant information quickly, risking suboptimal recommendations and reduced client satisfaction. Advisors needed a solution to democratize access to this 'goldmine' of intelligence without extensive training or technical expertise.

Solution

Morgan Stanley partnered with OpenAI to develop AI @ Morgan Stanley Debrief, a GPT-4-powered generative AI chatbot tailored for wealth management advisors. The tool uses retrieval-augmented generation (RAG) to securely query the firm's proprietary research database, providing instant, context-aware responses grounded in verified sources. Implemented as a conversational assistant, Debrief allows advisors to ask natural-language questions like 'What are the risks of investing in AI stocks?' and receive synthesized answers with citations, eliminating manual digging. Rigorous AI evaluations and human oversight ensure accuracy, with custom fine-tuning to align with Morgan Stanley's institutional knowledge. This approach overcame data silos and enabled seamless integration into advisors' workflows.

Results

  • 98% adoption rate among wealth management advisors
  • Access for nearly 50% of Morgan Stanley's total employees
  • Queries answered in seconds vs. 30+ minutes manually
  • Over 350,000 proprietary research documents indexed
  • 60% employee access at peers like JPMorgan for comparison
  • Significant productivity gains reported by CAO
Read case study →

Ooredoo (Qatar)

Telecommunications

Ooredoo Qatar, Qatar's leading telecom operator, grappled with the inefficiencies of manual Radio Access Network (RAN) optimization and troubleshooting. As 5G rollout accelerated, traditional methods proved time-consuming and unscalable, struggling to handle surging data demands, ensure seamless connectivity, and maintain high-quality user experiences amid complex network dynamics. Performance issues like dropped calls, variable data speeds, and suboptimal resource allocation required constant human intervention, driving up operating expenses (OpEx) and delaying resolutions. With Qatar's National Digital Transformation agenda pushing for advanced 5G capabilities, Ooredoo needed a proactive, intelligent approach to RAN management without compromising network reliability.

Solution

Ooredoo partnered with Ericsson to deploy cloud-native Ericsson Cognitive Software on Microsoft Azure, featuring a digital twin of the RAN combined with deep reinforcement learning (DRL) for AI-driven optimization. This solution creates a virtual network replica to simulate scenarios, analyze vast RAN data in real-time, and generate proactive tuning recommendations. The Ericsson Performance Optimizers suite was trialed in 2022, evolving into full deployment by 2023, enabling automated issue resolution and performance enhancements while integrating seamlessly with Ooredoo's 5G infrastructure. Recent expansions include energy-saving PoCs, further leveraging AI for sustainable operations.

Results

  • 15% reduction in radio power consumption (Energy Saver PoC)
  • Proactive RAN optimization reducing troubleshooting time
  • Maintained high user experience during power savings
  • Reduced operating expenses via automated resolutions
  • Enhanced 5G subscriber experience with seamless connectivity
  • 10% spectral efficiency gains (Ericsson AI RAN benchmarks)
Read case study →

Nubank (Pix Payments)

Payments

Nubank, Latin America's largest digital bank serving over 114 million customers across Brazil, Mexico, and Colombia, faced the challenge of scaling its Pix instant payment system amid explosive growth. Traditional Pix transactions required users to navigate the app manually, leading to friction, especially for quick, on-the-go payments. This app navigation bottleneck increased processing time and limited accessibility for users preferring conversational interfaces like WhatsApp, where 80% of Brazilians communicate daily. Additionally, enabling secure, accurate interpretation of diverse inputs—voice commands, natural language text, and images (e.g., handwritten notes or receipts)—posed significant hurdles. Nubank needed to overcome accuracy issues in multimodal understanding, ensure compliance with Brazil's Central Bank regulations, and maintain trust in a high-stakes financial environment while handling millions of daily transactions.

Solution

Nubank deployed a multimodal generative AI solution powered by OpenAI models, allowing customers to initiate Pix payments through voice messages, text instructions, or image uploads directly in the app or WhatsApp. The AI processes speech-to-text, natural language processing for intent extraction, and optical character recognition (OCR) for images, converting them into executable Pix transfers. Integrated seamlessly with Nubank's backend, the system verifies user identity, extracts key details like amount and recipient, and executes transactions in seconds, bypassing traditional app screens. This AI-first approach enhances convenience, speed, and safety, scaling operations without proportional human intervention.

Results

  • 60% reduction in transaction processing time
  • Tested with 2 million users by end of 2024
  • Serves 114 million customers across 3 countries
  • Testing initiated August 2024
  • Processes voice, text, and image inputs for Pix
  • Enabled instant payments via WhatsApp integration
Read case study →

Citibank Hong Kong

Wealth Management

Citibank Hong Kong faced growing demand for advanced personal finance management tools accessible via mobile devices. Customers sought predictive insights into budgeting, investing, and financial tracking, but traditional apps lacked personalization and real-time interactivity. In a competitive retail banking landscape, especially in wealth management, clients expected seamless, proactive advice amid volatile markets and rising digital expectations in Asia. Key challenges included integrating vast customer data for accurate forecasts, ensuring conversational interfaces felt natural, and overcoming data privacy hurdles in Hong Kong's regulated environment. Early mobile tools showed low engagement, with users abandoning apps due to generic recommendations, highlighting the need for AI-driven personalization to retain high-net-worth individuals.

Solution

Wealth 360 emerged as Citibank HK's AI-powered personal finance manager, embedded in the Citi Mobile app. It leverages predictive analytics to forecast spending patterns, investment returns, and portfolio risks, delivering personalized recommendations via a conversational interface like chatbots. Drawing from Citi's global AI expertise, it processes transaction data, market trends, and user behavior for tailored advice on budgeting and wealth growth. Implementation involved machine learning models for personalization and natural language processing (NLP) for intuitive chats, building on Citi's prior successes like Asia-Pacific chatbots and APIs. This solution addressed gaps by enabling proactive alerts and virtual consultations, enhancing customer experience without human intervention.

Results

  • 30% increase in mobile app engagement metrics
  • 25% improvement in wealth management service retention
  • 40% faster response times via conversational AI
  • 85% customer satisfaction score for personalized insights
  • 18M+ API calls processed in similar Citi initiatives
  • 50% reduction in manual advisory queries
Read case study →

Mastercard

Payments

In the high-stakes world of digital payments, card-testing attacks emerged as a critical threat to Mastercard's ecosystem. Fraudsters deploy automated bots to probe stolen card details through micro-transactions across thousands of merchants, validating credentials for larger fraud schemes. Traditional rule-based and machine learning systems often detected these only after initial tests succeeded, allowing billions in annual losses and disrupting legitimate commerce. The subtlety of these attacks—low-value, high-volume probes mimicking normal behavior—overwhelmed legacy models, exacerbated by fraudsters' use of AI to evade patterns. As transaction volumes exploded post-pandemic, Mastercard faced mounting pressure to shift from reactive to proactive fraud prevention. False positives from overzealous alerts led to declined legitimate transactions, eroding customer trust, while sophisticated attacks like card-testing evaded detection in real-time. The company needed a solution to identify compromised cards preemptively, analyzing vast networks of interconnected transactions without compromising speed or accuracy.

Solution

Mastercard's Decision Intelligence (DI) platform integrated generative AI with graph-based machine learning to revolutionize fraud detection. Generative AI simulates fraud scenarios and generates synthetic transaction data, accelerating model training and anomaly detection by mimicking rare attack patterns that real data lacks. Graph technology maps entities like cards, merchants, IPs, and devices as interconnected nodes, revealing hidden fraud rings and propagation paths in transaction graphs. This hybrid approach processes signals at unprecedented scale, using gen AI to prioritize high-risk patterns and graphs to contextualize relationships. Implemented via Mastercard's AI Garage, it enables real-time scoring of card compromise risk, alerting issuers before fraud escalates. The system combats card-testing by flagging anomalous testing clusters early. Deployment involved iterative testing with financial institutions, leveraging Mastercard's global network for robust validation while ensuring explainability to build issuer confidence.

Results

  • 2x faster detection of potentially compromised cards
  • Up to 300% boost in fraud detection effectiveness
  • Doubled rate of proactive compromised card notifications
  • Significant reduction in fraudulent transactions post-detection
  • Minimized false declines on legitimate transactions
  • Real-time processing of billions of transactions
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Unify Your Learning Data Into Analyse-Ready Packs for Claude

Claude is powerful with long, complex inputs – but you still need to give it well-structured “packs” of data. Start by exporting relevant information from your LMS and survey tools: course IDs and titles, module outlines, attendance and completion logs, quiz results, open text feedback, and (if available) manager evaluations linked to the same cohort.

Combine these into logical bundles. For example, create one file per program per cohort that contains: program description, learning objectives, module list, participant list (anonymised if needed), and all associated comments and survey responses. Then, feed these bundles into Claude with clear instructions about what to extract: themes, bottlenecks, skill improvements and correlations with performance indicators.
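As a rough illustration of that bundling step, here is a minimal Python sketch; all field names (`title`, `objectives`, `avg_score`, and so on) are assumptions, so adapt them to whatever your LMS and survey exports actually contain:

```python
def build_program_pack(program: dict, comments: list[str], quiz_rows: list[dict]) -> str:
    """Assemble one analyse-ready text pack for a single program and cohort.

    Field names here are illustrative placeholders, not a fixed schema.
    """
    lines = [
        f"PROGRAM: {program['title']} ({program['id']})",
        f"OBJECTIVES: {program['objectives']}",
        "MODULES: " + ", ".join(program["modules"]),
        "",
        "QUIZ RESULTS (module, avg score):",
    ]
    for row in quiz_rows:
        lines.append(f"- {row['module']}: {row['avg_score']:.0%}")
    lines.append("")
    lines.append("OPEN TEXT FEEDBACK:")
    lines.extend(f"- {c}" for c in comments)
    return "\n".join(lines)

pack = build_program_pack(
    {"id": "LD-101", "title": "Leadership Foundations",
     "objectives": "Coach and give feedback effectively",
     "modules": ["Feedback basics", "Coaching practice"]},
    comments=["Module 2 felt rushed.", "Great real-world examples."],
    quiz_rows=[{"module": "Feedback basics", "avg_score": 0.82}],
)
print(pack.splitlines()[0])  # → PROGRAM: Leadership Foundations (LD-101)
```

The resulting text pack then goes into Claude together with your analysis instructions.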

Example prompt to analyse one learning program:
You are an HR learning analytics expert.
You receive all data for a single learning program and its latest cohort.

1. Read the full input carefully.
2. Identify:
   - The 3–5 main strengths of the program
   - The 3–5 main weaknesses or confusion points
   - Recurring themes in open text feedback
3. Map issues to specific modules or activities.
4. Suggest 5 concrete improvements ranked by impact/feasibility.

Return your answer in this structure:
- Program summary (5–7 sentences)
- Strengths
- Weaknesses
- Module-level issues
- Recommended improvements

Expected outcome: HR gains a clear, narrative overview of each program without manually reading hundreds of comments, making review cycles faster and more consistent.

Use Claude to Build and Refine a Skills Taxonomy From Real Learning Data

If you don’t yet have a formal skills framework, Claude can help you bootstrap one from your existing learning content and assessments. Start by feeding Claude a representative set of course descriptions, learning objectives and assessment questions from your highest-value programs.

Ask Claude to infer the underlying skills and cluster them into a draft taxonomy, then iterate with HR and business stakeholders. You can then reuse this taxonomy to tag both content and learner outputs.

Example prompt to draft a skills taxonomy:
You are helping an HR team build a skills taxonomy.
Below you find course descriptions, learning objectives and exam questions.

Tasks:
1. Infer 20–40 distinct skills covered by this material.
2. Group them into logical categories.
3. For each skill, provide:
   - Name (max 4 words)
   - Short definition
   - 3 example behaviours at basic proficiency
   - 3 example behaviours at advanced proficiency

Return the result as a structured list.

Expected outcome: a practical, business-grounded skills framework that can later be refined, rather than a theoretical model that never connects to actual learning content.

Analyse Open Text Feedback and Transcripts for Root Causes

One of Claude’s strengths is handling long-form, messy text, such as survey comments, workshop notes or webinar transcripts. Use this to go beyond “average satisfaction” and identify root causes of learning issues. For example, you can upload all comments for a leadership program plus transcripts from key sessions and ask Claude to cluster issues, quote representative examples and relate them to specific modules.

Combine qualitative insight with simple quantitative counts (e.g., how often each theme appears) to prioritise action.
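The counting step is simple enough to script. This sketch assumes each feedback item has already been tagged with themes, for instance by a prior Claude pass; the `themes` key and theme labels are illustrative:

```python
from collections import Counter

def count_themes(tagged_feedback: list[dict]) -> list[tuple[str, int]]:
    """Count how often each theme appears across tagged feedback items,
    most frequent first."""
    counter = Counter()
    for item in tagged_feedback:
        counter.update(item["themes"])
    return counter.most_common()

feedback = [
    {"comment": "Too fast in module 3", "themes": ["pace too fast"]},
    {"comment": "Examples felt generic", "themes": ["examples not relevant"]},
    {"comment": "Couldn't keep up", "themes": ["pace too fast"]},
]
print(count_themes(feedback))
# → [('pace too fast', 2), ('examples not relevant', 1)]
```

Frequencies like these give stakeholders a quick sense of which qualitative themes matter most before they read the detailed analysis.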

Example prompt to extract root causes:
You are analysing qualitative feedback for an HR learning program.
The input contains: survey comments, chat logs, and selected session transcripts.

1. Cluster feedback into themes (e.g., pace too fast, examples not relevant).
2. For each theme, provide:
   - A short description
   - 2–3 direct quotes from participants
   - The likely root cause
   - Suggested design changes to address it
3. Indicate which themes appear to have the highest impact on learning effectiveness.

Expected outcome: instead of guessing why a module underperforms, HR sees concrete patterns and can make targeted design changes.

Link Learning Data to Performance Signals With Claude-Assisted Analysis

To move from engagement metrics to impact, you need to connect learning data with performance signals like sales numbers, productivity metrics or internal mobility. Claude can’t directly do statistical analysis, but it can help you reason about patterns and generate hypotheses once you supply aggregated summaries from your BI tools.

For example, you can export a simple table summarising performance by cohort (e.g., average sales before vs. after training) and provide Claude with narrative descriptions of the program and any changes made over time.
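Preparing that cohort summary is straightforward. A minimal sketch, assuming an export with one row per participant and illustrative `before`/`after` metric columns:

```python
from statistics import mean

def cohort_deltas(rows: list[dict]) -> dict[str, float]:
    """Average before-vs-after performance change per cohort.

    Expects one row per participant with 'cohort', 'before' and 'after'
    keys; the column names are placeholders for your own export.
    """
    by_cohort: dict[str, list[float]] = {}
    for r in rows:
        by_cohort.setdefault(r["cohort"], []).append(r["after"] - r["before"])
    return {cohort: round(mean(deltas), 2) for cohort, deltas in by_cohort.items()}

rows = [
    {"cohort": "2024-Q1", "before": 100.0, "after": 112.0},
    {"cohort": "2024-Q1", "before": 95.0, "after": 101.0},
    {"cohort": "2024-Q2", "before": 98.0, "after": 99.0},
]
print(cohort_deltas(rows))  # → {'2024-Q1': 9.0, '2024-Q2': 1.0}
```

Paste the resulting table (in text form) into the prompt so Claude reasons over aggregates rather than raw individual records.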

Example prompt to reason about impact:
You are an HR analytics advisor.
Below you find:
- A description of the learning program and its objectives.
- A table (in text form) with average performance metrics by cohort.
- Notes about major changes to the program across cohorts.

Tasks:
1. Identify plausible relationships between program changes and performance trends.
2. Highlight where the data is inconclusive or might be influenced by external factors.
3. Suggest 3–5 follow-up analyses HR should run with BI to validate impact.
4. Propose 3 concrete decisions HR could test in the next cohort.

Expected outcome: HR gains a structured, narrative interpretation of numbers that can be discussed with Finance and business leaders, supporting a stronger L&D business case.

Generate Personalised Learning Insights and Micro-Recommendations for Learners

Beyond program-level analytics, Claude can produce individualised learning insights for employees without creating a huge manual workload for HR. Feed Claude anonymised or employee-consented data for a learner: completed modules, quiz results, reflection assignments and selected work samples (e.g., sales emails, project reports).

Ask Claude to identify strengths, gaps and recommended next steps, phrased in a supportive, coaching style. These outputs can be embedded into your LMS or shared with managers to guide development conversations.

Example prompt for individual learning feedback:
You are a supportive learning coach.
You receive:
- A list of modules the learner has completed
- Their quiz scores
- Short reflection answers and sample work outputs

1. Identify the learner's top 3 strengths.
2. Identify the 3 most relevant skill gaps based on the input.
3. Suggest a personalised 4-week learning focus:
   - 2–3 modules to revisit or deepen
   - 2 on-the-job practice ideas per skill gap
4. Write the feedback in a constructive tone, addressing the learner directly.

Expected outcome: employees receive targeted guidance that makes learning feel relevant and actionable, while HR gets scalable personalised support without adding headcount.

Establish a Recurring Claude-Assisted L&D Review Cadence

To make AI-powered learning insights stick, embed Claude into your regular L&D operating rhythm. For example, create a quarterly review ritual where HR exports updated data for priority programs, runs the standardised Claude prompts, and then discusses the outputs with stakeholders.

Use consistent templates for Claude’s analysis and store results centrally, so you can track how insights and actions evolve over time. Over 2–3 cycles, you’ll see which recommendations lead to measurable improvements and where your prompts or data need refinement.
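Storing results centrally can start as simply as appending structured records per review cycle. In this sketch, a plain JSON-serialisable list stands in for whatever shared system (wiki, drive, database) your team actually uses:

```python
import json
from datetime import date

def log_review(store: list[dict], program_id: str, findings: list[str],
               actions: list[str]) -> list[dict]:
    """Append one review record so insights stay comparable across cycles."""
    store.append({
        "program": program_id,
        "date": date.today().isoformat(),
        "findings": findings,
        "actions": actions,
    })
    return store

history: list[dict] = []
log_review(history, "LD-101", ["Module 3 drop-off"], ["Shorten module 3"])
print(json.dumps(history[0]["actions"]))  # → ["Shorten module 3"]
```

Even a lightweight log like this lets you check, two or three quarters later, whether last cycle's recommended actions actually moved the metrics.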

Expected outcomes (realistic ranges over 6–12 months, depending on scale and maturity): 30–50% reduction in manual time spent on feedback analysis, 10–20% reduction in clearly ineffective training hours, faster iteration cycles on key programs, and a much stronger evidence base to defend or expand your L&D budget.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Claude improve learning insights compared to standard LMS reporting?

Claude can process the unstructured learning data that traditional LMS reports ignore: open text feedback, reflection assignments, coaching notes and workshop transcripts. By analysing this content alongside attendance and quiz data, Claude can highlight which modules actually build specific skills, where learners are confused, and which parts of a program drive the most behaviour change.

Instead of just knowing that 92% completed a course, HR gets narrative insights such as “Module 3 is where most learners drop off and report confusion about applying the concept in their role.” This allows you to redesign or drop underperforming elements and invest more in what works.

What skills or resources does our HR team need to get started?

You don’t need a full data science team to start. At minimum, you need:

  • An HR or L&D lead who understands your learning programs and business context.
  • Basic data extraction support from IT or your LMS admin to export logs, survey results and transcripts.
  • Someone comfortable experimenting with Claude prompts and iterating based on the outputs.

Reruption typically helps clients set up the first end-to-end workflow: defining the data bundle, crafting robust prompts, and designing the review process. Once this is in place, your HR team can operate and refine the process without heavy technical dependencies.

How long does it take to see results?

For a focused use case, you can see meaningful insights in a matter of weeks, not months. A typical timeline looks like this:

  • Week 1–2: Select one priority learning journey, export data, and define the first analysis prompts.
  • Week 3–4: Run Claude analyses, review outputs with L&D, and implement quick wins (e.g., clarifying confusing modules, adjusting examples).
  • Month 2–3: Refine prompts, add additional data sources (e.g., manager feedback), and formalise a recurring review rhythm.

Measurable impact on skills and performance (e.g., better assessment scores, shorter ramp-up time) typically appears over one or two program cycles as improvements take effect. The key is to start narrow, learn quickly and scale only what demonstrably adds value.

What ROI can we expect from AI-powered learning analytics?

The ROI comes from three main levers:

  • Time savings: Automating analysis of feedback and transcripts can cut manual review time by 30–50%, freeing HR and L&D to focus on design and stakeholder alignment.
  • Better allocation of L&D budget: With clear evidence of what works, you can reduce or retire low-impact content, often cutting 10–20% of ineffective training hours and reinvesting in high-value areas.
  • Improved performance: By identifying and addressing specific skill gaps, organisations see faster onboarding, higher sales effectiveness, or fewer quality issues – all of which have direct financial impact.

Claude itself is relatively low-cost compared to traditional analytics projects; the main investment is in setting up the right workflows. That’s why we recommend starting with a contained pilot that can demonstrate concrete savings or gains within one budget cycle.

How can Reruption help us implement this?

Reruption supports organisations from idea to working solution with an AI-first, Co-Preneur mindset. For this specific challenge of limited learning insights, we typically start with our AI PoC offering (9,900€), where we co-define a concrete use case (e.g., analysing one key learning journey), build a Claude-based prototype, and test whether it delivers actionable insights with your real data.

Beyond the PoC, we help you embed Claude into your HR and L&D workflows: designing data pipelines, refining prompts, ensuring security & compliance, and training your team to operate the solution themselves. We don’t just hand over slides; we work inside your organisation’s reality until a functioning learning insights process is live and delivering value.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media