The Challenge: No Personalized Learning Paths

HR and L&D leaders are under pressure to offer personalized learning paths, but most rely on generic curricula by role or grade. Mapping each employee’s skills, career goals and learning content manually is simply not feasible at scale. The result is one-size-fits-all learning plans that look efficient on paper but don’t match how individuals actually learn or what the business really needs.

Traditional approaches — static competency matrices, annual training catalogs, and classroom-heavy programs — were designed for a slower world. They can’t keep pace with changing skills requirements, hybrid work, or employees expecting consumer-grade experiences. Even when HR has solid content libraries and an LMS in place, connecting the dots between skills data, performance feedback and learning assets requires hours of manual curation that most L&D teams just don’t have.

When learning is not personalized, the business impact is significant: employees tune out mandatory training, critical skill gaps stay hidden, and high potentials don’t see a clear development path and are more likely to leave. Training budgets are spent on low-impact programs, managers lose trust in L&D recommendations, and HR struggles to show any causal link between learning investments and performance or retention.

This challenge is real, but it is solvable. Modern AI in HR can ingest competency frameworks, role profiles and training content to recommend tailored development journeys automatically. At Reruption, we’ve seen how AI-powered learning design can dramatically increase engagement and shorten time-to-competence when done right. In the rest of this article, we’ll show you how to use Claude specifically to move from generic training plans to adaptive learning paths — step by step, and in a way that fits your existing HR tech landscape.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.

Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s experience building AI solutions for HR and L&D, Claude is particularly strong for turning fragmented skills, HR and training data into coherent, conversational learning journeys. By combining our engineering depth with Claude’s ability to reason over long documents and frameworks, we help organisations move beyond static course catalogs and toward adaptive, AI-powered learning paths that fit both the employee and the business.

Start with Clear Skill Taxonomies and Business-Critical Roles

The quality of any AI-driven personalized learning path depends on the structure of the inputs you provide. Before deploying Claude broadly, focus on clarifying your core skill taxonomy and a small set of business-critical roles or career paths. This gives Claude a consistent backbone for mapping employees to the right development steps.

Strategically, pick roles where skill gaps are clearly linked to business outcomes: sales productivity, manufacturing quality, customer support satisfaction, or digital transformation initiatives. When HR can show that personalized learning for these roles moves the needle on concrete KPIs, executive sponsorship and budget for scaling AI in L&D follow much more easily.

Position Claude as a Copilot for L&D, Not a Replacement

Claude should be framed internally as a learning design copilot for HR and L&D professionals, not as an autonomous decision-maker. Your experts remain accountable for defining competency standards, approving learning paths and handling sensitive performance decisions. Claude accelerates the heavy lifting: synthesising input data, proposing drafts and adapting content to different audiences.

This mindset helps with stakeholder buy-in and risk mitigation. Rather than “AI decides who learns what,” the narrative becomes “AI helps our experts build better personalized paths, faster.” From our implementation experience, involving a small group of L&D specialists as co-designers of the prompts and workflows significantly increases trust in the system and the quality of the outputs.

Design Governance and Guardrails from Day One

Using AI in HR touches on sensitive areas: performance data, development decisions and fairness. Governance should not be an afterthought. Define clear guardrails for which data Claude can access, how recommendations are reviewed, and how employees can give feedback or challenge a proposed learning path.

Strategically, you want transparent criteria: for example, development suggestions should be based on observable skills evidence and agreed career goals, not proxies like age or tenure. Reruption typically works with HR, Legal and IT to define access controls, logging and approval flows so that Claude’s recommendations are auditable and compliant with internal policies and regulations.

Prepare Managers and Employees for a New L&D Experience

Even the best AI-powered personalized learning paths fail if managers and employees don’t understand how to use them. Change management should be built into your Claude rollout plan. Equip managers to have better development conversations using Claude’s insights, and show employees how to ask the right questions to get useful career and learning guidance.

At a strategic level, reposition L&D as a continuous, pull-based experience instead of one-time, push-based training. Claude’s conversational interface is ideal for this: employees can explore learning options, ask follow-up questions and adapt their path as projects change. Your communication should emphasise empowerment (“you can drive your own learning journey”) rather than monitoring.

Pilot, Measure, Then Scale with a Portfolio View

Rather than trying to personalize learning for the entire workforce at once, start with a well-scoped pilot: one or two roles, a defined content set and clear success metrics. Use this to test how Claude performs in your specific context and to refine prompts, workflows and governance. A small, well-measured win creates internal proof that AI for personalized learning is more than a buzzword.

As you scale, manage your initiatives as a portfolio: some use cases will focus on time savings for L&D teams, others on faster onboarding, and others on upskilling for strategic skills. Looking across this portfolio helps HR prioritise where to extend Claude next and supports a structured investment case for AI in L&D rather than ad-hoc experimentation.

Using Claude to solve the “no personalized learning paths” problem is less about plugging in a chatbot and more about rethinking how you design, govern and deliver development journeys. When skill taxonomies, guardrails and change management are in place, Claude can become a powerful copilot for HR and L&D, turning static catalogs into adaptive, high-impact learning paths. Reruption brings the AI engineering, HR process understanding and Co-Preneur mindset needed to make this work end-to-end — from first pilot to scaled deployment. If you want to explore what this could look like in your organisation, we’re ready to dive into a concrete use case with you.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Payments to Energy: Learn how companies successfully put AI to work.

Mastercard

Payments

In the high-stakes world of digital payments, card-testing attacks emerged as a critical threat to Mastercard's ecosystem. Fraudsters deploy automated bots to probe stolen card details through micro-transactions across thousands of merchants, validating credentials for larger fraud schemes. Traditional rule-based and machine learning systems often detected these only after initial tests succeeded, allowing billions in annual losses and disrupting legitimate commerce. The subtlety of these attacks—low-value, high-volume probes mimicking normal behavior—overwhelmed legacy models, exacerbated by fraudsters' use of AI to evade patterns. As transaction volumes exploded post-pandemic, Mastercard faced mounting pressure to shift from reactive to proactive fraud prevention. False positives from overzealous alerts led to declined legitimate transactions, eroding customer trust, while sophisticated attacks like card-testing evaded detection in real-time. The company needed a solution to identify compromised cards preemptively, analyzing vast networks of interconnected transactions without compromising speed or accuracy.

Solution

Mastercard's Decision Intelligence (DI) platform integrated generative AI with graph-based machine learning to revolutionize fraud detection. Generative AI simulates fraud scenarios and generates synthetic transaction data, accelerating model training and anomaly detection by mimicking rare attack patterns that real data lacks. Graph technology maps entities like cards, merchants, IPs, and devices as interconnected nodes, revealing hidden fraud rings and propagation paths in transaction graphs. This hybrid approach processes signals at unprecedented scale, using gen AI to prioritize high-risk patterns and graphs to contextualize relationships. Implemented via Mastercard's AI Garage, it enables real-time scoring of card compromise risk, alerting issuers before fraud escalates. The system combats card-testing by flagging anomalous testing clusters early. Deployment involved iterative testing with financial institutions, leveraging Mastercard's global network for robust validation while ensuring explainability to build issuer confidence.

Results

  • 2x faster detection of potentially compromised cards
  • Up to 300% boost in fraud detection effectiveness
  • Doubled rate of proactive compromised card notifications
  • Significant reduction in fraudulent transactions post-detection
  • Minimized false declines on legitimate transactions
  • Real-time processing of billions of transactions
Read case study →

AstraZeneca

Healthcare

In the highly regulated pharmaceutical industry, AstraZeneca faced immense pressure to accelerate drug discovery and clinical trials, which traditionally take 10-15 years and cost billions, with low success rates of under 10%. Data silos, stringent compliance requirements (e.g., FDA regulations), and manual knowledge work hindered efficiency across R&D and business units. Researchers struggled with analyzing vast datasets from 3D imaging, literature reviews, and protocol drafting, leading to delays in bringing therapies to patients. Scaling AI was complicated by data privacy concerns, integration into legacy systems, and ensuring AI outputs were reliable in a high-stakes environment. Without rapid adoption, AstraZeneca risked falling behind competitors leveraging AI for faster innovation toward 2030 ambitions of novel medicines.

Solution

AstraZeneca launched an enterprise-wide generative AI strategy, deploying ChatGPT Enterprise customized for pharma workflows. This included AI assistants for 3D molecular imaging analysis, automated clinical trial protocol drafting, and knowledge synthesis from scientific literature. They partnered with OpenAI for secure, scalable LLMs and invested in training: ~12,000 employees across R&D and functions completed GenAI programs by mid-2025. Infrastructure upgrades, like AMD Instinct MI300X GPUs, optimized model training. Governance frameworks ensured compliance, with human-in-the-loop validation for critical tasks. The rollout was phased from pilots in 2023-2024 to full scaling in 2025, focusing on R&D acceleration via GenAI for molecule design and real-world evidence analysis.

Results

  • ~12,000 employees trained on generative AI by mid-2025
  • 85-93% of staff reported productivity gains
  • 80% of medical writers found AI protocol drafts useful
  • Significant reduction in life sciences model training time via MI300X GPUs
  • High AI maturity ranking per IMD Index (top global)
  • GenAI enabling faster trial design and dose selection
Read case study →

Shell

Energy

Unplanned equipment failures in refineries and offshore oil rigs plagued Shell, causing significant downtime, safety incidents, and costly repairs that eroded profitability in a capital-intensive industry. According to a Deloitte 2024 report, 35% of refinery downtime is unplanned, with 70% preventable via advanced analytics—highlighting the gap in traditional scheduled maintenance approaches that missed subtle failure precursors in assets like pumps, valves, and compressors. Shell's vast global operations amplified these issues, generating terabytes of sensor data from thousands of assets that went underutilized due to data silos, legacy systems, and manual analysis limitations. Failures could cost millions per hour, risking environmental spills and personnel safety while pressuring margins amid volatile energy markets.

Solution

Shell partnered with C3 AI to implement an AI-powered predictive maintenance platform, leveraging machine learning models trained on real-time IoT sensor data, maintenance histories, and operational metrics to forecast failures and optimize interventions. Integrated with Microsoft Azure Machine Learning, the solution detects anomalies, predicts remaining useful life (RUL), and prioritizes high-risk assets across upstream oil rigs and downstream refineries. The scalable C3 AI platform enabled rapid deployment, starting with pilots on critical equipment and expanding globally. It automates predictive analytics, shifting from reactive to proactive maintenance, and provides actionable insights via intuitive dashboards for engineers.

Results

  • 20% reduction in unplanned downtime
  • 15% slash in maintenance costs
  • £1M+ annual savings per site
  • 10,000 pieces of equipment monitored globally
  • 35% industry unplanned downtime addressed (Deloitte benchmark)
  • 70% preventable failures mitigated
Read case study →

Airbus

Aerospace

In aircraft design, computational fluid dynamics (CFD) simulations are essential for predicting airflow around wings, fuselages, and novel configurations critical to fuel efficiency and emissions reduction. However, traditional high-fidelity RANS solvers require hours to days per run on supercomputers, limiting engineers to just a few dozen iterations per design cycle and stifling innovation for next-gen hydrogen-powered aircraft like ZEROe. This computational bottleneck was particularly acute amid Airbus' push for decarbonized aviation by 2035, where complex geometries demand exhaustive exploration to optimize lift-drag ratios while minimizing weight. Collaborations with DLR and ONERA highlighted the need for faster tools, as manual tuning couldn't scale to test thousands of variants needed for laminar flow or blended-wing-body concepts.

Solution

Machine learning surrogate models, including physics-informed neural networks (PINNs), were trained on vast CFD datasets to emulate full simulations in milliseconds. Airbus integrated these into a generative design pipeline, where AI predicts pressure fields, velocities, and forces, enforcing Navier-Stokes physics via hybrid loss functions for accuracy. Development involved curating millions of simulation snapshots from legacy runs, GPU-accelerated training, and iterative fine-tuning with experimental wind-tunnel data. This enabled rapid iteration: AI screens designs, high-fidelity CFD verifies top candidates, slashing overall compute by orders of magnitude while maintaining <5% error on key metrics.

Results

  • Simulation time: 1 hour → 30 ms (120,000x speedup)
  • Design iterations: +10,000 per cycle in same timeframe
  • Prediction accuracy: 95%+ for lift/drag coefficients
  • 50% reduction in design phase timeline
  • 30-40% fewer high-fidelity CFD runs required
  • Fuel burn optimization: up to 5% improvement in predictions
Read case study →

BP

Energy

BP, a global energy leader in oil, gas, and renewables, grappled with high energy costs during peak periods across its extensive assets. Volatile grid demands and price spikes during high-consumption times strained operations, exacerbating inefficiencies in energy production and consumption. Integrating intermittent renewable sources added forecasting challenges, while traditional management failed to dynamically respond to real-time market signals, leading to substantial financial losses and grid instability risks. Compounding this, BP's diverse portfolio—from offshore platforms to data-heavy exploration—faced data silos and legacy systems ill-equipped for predictive analytics. Peak energy expenses not only eroded margins but hindered the transition to sustainable operations amid rising regulatory pressures for emissions reduction. The company needed a solution to shift loads intelligently and monetize flexibility in energy markets.

Solution

To tackle these issues, BP acquired Open Energi in 2021, gaining access to its flagship Plato AI platform, which employs machine learning for predictive analytics and real-time optimization. Plato analyzes vast datasets from assets, weather, and grid signals to forecast peaks and automate demand response, shifting non-critical loads to off-peak times while participating in frequency response services. Integrated into BP's operations, the AI enables dynamic containment and flexibility markets, optimizing consumption without disrupting production. Combined with BP's internal AI for exploration and simulation, it provides end-to-end visibility, reducing reliance on fossil fuels during peaks and enhancing renewable integration. This acquisition marked a strategic pivot, blending Open Energi's demand-side expertise with BP's supply-side scale.

Results

  • $10 million in annual energy savings
  • 80+ MW of energy assets under flexible management
  • Strongest oil exploration performance in years via AI
  • Material boost in electricity demand optimization
  • Reduced peak grid costs through dynamic response
  • Enhanced asset efficiency across oil, gas, renewables
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Centralise Your Inputs: Skills, Roles and Content

Before asking Claude to generate personalized learning journeys, assemble the raw materials it needs: your competency framework, role profiles, training catalog and any existing development guidelines. Even if these live in different systems (HRIS, LMS, spreadsheets), export representative samples and create a structured bundle for Claude to work with.

In a secure environment, you can provide Claude with these documents and ask it to build a unified view. For example, start with a prompt like:

You are an L&D architect helping HR build a unified skill and learning map.

Inputs you will receive:
1) A competency framework with skills and proficiency levels
2) Role profiles with responsibilities
3) A training catalog with titles, descriptions and target audiences

Task:
- Merge these into a structured model:
  - For each role: key skills and target levels
  - For each skill: relevant courses, modules and formats
- Highlight missing skills or areas with too few learning resources.
- Output as a structured text table that we can copy to Excel.

This creates a practical base map that your L&D team can refine and later use as reference for Claude when generating personalized plans.
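If you prefer to hand this step to a small script instead of pasting documents into a chat, a minimal Python sketch using Anthropic's official SDK could look like the one below. The file names and the model identifier are placeholders we chose for illustration; adapt them to your own exports and environment.

# Minimal sketch: bundle exported HR documents and ask Claude for a unified skill map.
# File names and the model ID are placeholders -- replace them with your own.
from pathlib import Path

import anthropic

SOURCE_FILES = {
    "Competency framework": "competency_framework.txt",
    "Role profiles": "role_profiles.txt",
    "Training catalog": "training_catalog.txt",
}

PROMPT = """You are an L&D architect helping HR build a unified skill and learning map.
Merge the inputs below into a structured model:
- For each role: key skills and target levels
- For each skill: relevant courses, modules and formats
Highlight missing skills or areas with too few learning resources.
Output as a structured text table that we can copy to Excel."""

def build_bundle() -> str:
    """Concatenate the exported documents into one labelled text bundle."""
    sections = []
    for label, filename in SOURCE_FILES.items():
        text = Path(filename).read_text(encoding="utf-8")
        sections.append(f"### {label}\n{text}")
    return "\n\n".join(sections)

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID
    max_tokens=4000,
    messages=[{"role": "user", "content": PROMPT + "\n\n" + build_bundle()}],
)

print(response.content[0].text)  # draft skill and learning map for L&D to review

The output is only a draft: your L&D team still reviews and corrects it before it becomes the reference map.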

Generate Role-Based Learning Path Templates

Instead of starting from scratch for each employee, use Claude to create role-based learning path templates that you can then personalize further. These templates should reflect must-have skills, recommended sequencing, and a mix of learning formats (courses, on-the-job practice, coaching, microlearning).

Provide Claude with your unified skill map and then prompt it like this:

You are an HR learning designer.

Context:
- Target role: Inside Sales Representative
- Required skills and levels: see the attached skills-role map
- Available trainings: see the attached training catalog

Task:
1) Create a 6-month learning path template for this role.
2) Structure it by month with clear milestones.
3) Mix formats: e-learning, live training, practice tasks, manager check-ins.
4) For each step, specify:
   - Objective
   - Recommended content (from the catalog)
   - Expected time investment
   - How to evidence skill progress.

Once you validate a few templates, standardise them and store them in your LMS or HR knowledge base as starting points for individual personalization.
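To make these stored templates easy to reuse programmatically, it can help to keep them as structured data rather than free text. The sketch below is only an illustration of what such a structure might look like, assuming templates are kept as simple JSON files; the field names are our assumptions, not a fixed schema.

# Illustrative sketch of a role-based learning path template as structured data.
# Field names and contents are examples, not a prescribed schema.
import json

learning_path_template = {
    "role": "Inside Sales Representative",
    "duration_months": 6,
    "months": [
        {
            "month": 1,
            "milestone": "Product and sales process fundamentals",
            "steps": [
                {
                    "objective": "Understand the core product portfolio",
                    "format": "e-learning",
                    "content": "Product Basics 101",  # reference into your catalog
                    "time_hours": 4,
                    "evidence": "Quiz score of at least 80%",
                },
                {
                    "objective": "Shadow three discovery calls",
                    "format": "on-the-job practice",
                    "content": "Call shadowing checklist",
                    "time_hours": 3,
                    "evidence": "Manager check-in notes",
                },
            ],
        },
        # ... months 2-6 follow the same structure
    ],
}

# Store next to your LMS or knowledge base as the starting point for personalization.
with open("inside_sales_rep_template.json", "w", encoding="utf-8") as f:
    json.dump(learning_path_template, f, indent=2, ensure_ascii=False)

The same structured template can later be passed back to Claude as context when generating an individual plan.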

Personalize Paths Based on Skill Gaps and Career Goals

With templates in place, you can use Claude to tailor the path to each employee based on their current skills and future goals. This can be done through HR input (e.g. performance reviews, assessments) or, more powerfully, via a conversational career assistant that employees interact with directly.

Here is an example prompt for individual personalization:

You are a career development assistant for our company.

Inputs:
- Role-based learning path template for "Inside Sales Representative"
- Employee's current role: Junior Inside Sales Representative
- Skill self-assessment and manager feedback (attached)
- Employee's career goal: move into Key Account Management in 2-3 years

Task:
1) Analyse the skill gaps vs. the target levels.
2) Adapt the 6-month template to this individual:
   - Prioritise closing the most critical gaps
   - Add 1-2 elements that support the long-term career goal
3) Output a clear, motivating 6-month learning plan with monthly focus areas and specific actions.
4) Use language that is easy to understand for a non-expert.

L&D or managers can review and adjust these personalized plans before sharing them with the employee, keeping people in the loop while dramatically reducing manual work.
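One lightweight way to enforce this review step is to have Claude return the adapted plan as a draft that stays in a pending state until L&D or the manager approves it. The sketch below assumes the template and the employee profile are available as plain text; the function name, status values and model identifier are illustrative, not part of a fixed product.

# Sketch: generate a draft personalized plan that waits for human approval.
# The model ID, function name and status values are placeholders.
import anthropic

client = anthropic.Anthropic()

def draft_personalized_plan(template_text: str, employee_profile: str) -> dict:
    """Ask Claude to adapt a role template to one employee and return a draft record."""
    prompt = (
        "You are a career development assistant for our company.\n\n"
        f"Role-based learning path template:\n{template_text}\n\n"
        f"Employee profile, skill gaps and career goal:\n{employee_profile}\n\n"
        "Adapt the template to this individual: prioritise closing the most "
        "critical gaps, add 1-2 elements that support the long-term career goal, "
        "and output a clear, motivating 6-month plan with monthly focus areas."
    )
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "plan_text": response.content[0].text,
        "status": "pending_review",  # L&D or the manager flips this to "approved"
    }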

Use Claude to Create Microlearning and Practice Tasks

Generic e-learning often fails because it is too long and not tied to daily work. Claude is well-suited to generate microlearning content and practical exercises directly aligned with your roles and tools. You can give Claude internal documents (playbooks, policies, process descriptions) and ask it to produce short, contextual learning assets.

For example:

You are an L&D content creator.

Input:
- Our internal sales playbook for handling objections (attached)
- Target audience: new Inside Sales Representatives

Task:
1) Create 5 microlearning units (5-10 minutes each).
2) For each unit, provide:
   - A short explanation (max. 200 words)
   - 3-4 realistic practice scenarios based on our context
   - Reflection questions that the learner can discuss with their manager.
3) Adapt the tone to be practical and conversational.

These assets can be embedded into your LMS, shared via collaboration tools, or surfaced directly by a Claude-based learning assistant when the employee asks for help on a specific topic.

Set Up Feedback Loops and Track ROI

To ensure your AI-powered learning paths actually work, build feedback and analytics into the workflow from the beginning. Combine usage data (which recommendations employees follow, which modules they complete) with outcome data (skills assessment changes, performance metrics, retention) and qualitative feedback.

Claude can help summarise and interpret this data for HR. For example:

You are an L&D analytics assistant.

Inputs:
- Completion and engagement data for the personalized learning paths pilot
- Pre- and post-assessment scores on key skills
- Employee feedback comments
- KPIs: sales productivity, onboarding time, internal mobility rate

Task:
1) Summarise the impact of the personalized learning paths pilot.
2) Identify which types of recommendations worked best.
3) Highlight any patterns by role or manager.
4) Suggest 3-5 concrete improvements to our prompts, content, or workflows.

From an ROI perspective, track at least three dimensions: reduced L&D design time (hours saved), business performance improvements (e.g. faster ramp-up, fewer errors), and retention or internal mobility gains among target roles. This creates a tangible case for further investment.
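To make these three dimensions comparable, it can help to roll them up into a rough annual value estimate. The sketch below uses purely illustrative placeholder figures, not benchmarks; swap in the numbers from your own pilot.

# Rough ROI sketch for a personalized-learning pilot. All inputs are placeholders.

# 1) L&D design time saved
plans_per_year = 120
hours_saved_per_plan = 3.0            # manual vs. Claude-assisted plan design
ld_hourly_cost = 60.0                 # fully loaded cost in EUR per hour
design_time_value = plans_per_year * hours_saved_per_plan * ld_hourly_cost

# 2) Faster ramp-up for new hires in the pilot role
new_hires = 25
weeks_faster = 2.0                    # shorter time-to-productivity
weekly_productivity_value = 1500.0    # value of one productive week, in EUR
ramp_up_value = new_hires * weeks_faster * weekly_productivity_value

# 3) Retention gains in the target population
avoided_attrition_cases = 2
replacement_cost_per_case = 30000.0   # recruiting plus onboarding, in EUR
retention_value = avoided_attrition_cases * replacement_cost_per_case

total_annual_value = design_time_value + ramp_up_value + retention_value
print(f"Design time value: {design_time_value:,.0f} EUR")
print(f"Ramp-up value:     {ramp_up_value:,.0f} EUR")
print(f"Retention value:   {retention_value:,.0f} EUR")
print(f"Estimated total:   {total_annual_value:,.0f} EUR per year")

Compare this estimate against the setup and running costs of the pilot to decide where to extend Claude next.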

When implemented this way, companies typically see realistic outcomes such as a 30–50% reduction in manual effort to design development plans for pilot roles, noticeably faster onboarding (often 20–30% shorter time-to-productivity), and improved engagement scores with learning offerings in targeted populations. The exact numbers will depend on your starting point, but with disciplined prompts, governance and measurement, Claude can turn the “no personalized learning paths” problem into a structured, measurable AI-enabled L&D capability.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How can Claude create personalized learning paths from our existing HR data?

Claude can ingest your competency frameworks, role profiles, training catalogs and assessment data and use them as a structured knowledge base. Based on this, it can generate role-based learning path templates and then adapt them to each employee using inputs such as current role, skill gaps, performance feedback and career goals.

In practice, we configure workflows where HR or managers provide key data points (or where employees interact with a Claude-powered assistant), and Claude returns a proposed development plan with recommended modules, sequencing and practice tasks. HR and managers remain in control: they review, adjust and approve the plans before they are final.

What skills and resources do we need in-house to implement this?

You do not need a large AI team, but you do need a few core capabilities: an L&D owner who understands your skills and role architecture, an HR/IT contact who can provide access to relevant data and systems, and someone with basic technical skills to work with Reruption on integrating Claude into your environment.

We typically form a small cross-functional squad (HR/L&D, IT, sometimes Data Protection) and handle the AI engineering on our side. Your team focuses on providing source materials, validating outputs and defining governance rules. Over time, we enable your people to maintain prompts and workflows themselves so you are not dependent on external consultants.

How quickly can we expect to see results?

With a focused scope, you can see concrete results within a few weeks. A typical first phase with Claude might look like this: 1–2 weeks to consolidate skills and content data for one or two roles, 1–2 weeks to build and refine the initial prompt workflows, and another 2–4 weeks for a pilot where you generate and test personalized learning paths with a defined group of employees.

Meaningful impact on KPIs like time-to-productivity or skill assessment scores usually appears over one or two performance cycles (3–6 months), depending on the complexity of the roles. The key is to start narrow, measure carefully and then expand to additional roles once you have a working pattern.

What does it cost, and what return can we expect?

The direct usage cost of Claude is typically modest compared to traditional training spend; most of the investment is in designing workflows, integrating with your HR/LMS stack and change management. By structuring the work as a focused use case, you can contain initial costs and validate value quickly.

On the return side, organisations usually see ROI in three areas: reduced manual effort in creating development plans (L&D and HR time saved), faster skill development for target roles (shorter onboarding, better performance metrics) and improved retention in critical talent segments due to more visible career paths and tailored development. In our experience, even conservative gains in these dimensions can quickly outweigh the initial setup and operating costs.

How does Reruption support us in implementing Claude for personalized learning?

Reruption works as a Co-Preneur alongside your HR and L&D teams. We start with our AI PoC offering (9,900€) to prove that a specific use case — for example, personalized onboarding paths for one role family — works in your environment. This includes scoping, feasibility checks, a working prototype with Claude, performance evaluation and a concrete production plan.

Beyond the PoC, we support hands-on implementation: integrating Claude with your HR and learning systems, designing prompts and workflows, setting up governance and enabling your team to operate and extend the solution. Our Co-Preneur approach means we don’t just hand over slideware; we embed with your people, challenge assumptions and iterate until a real, AI-driven personalized learning capability is live and delivering results.

Contact Us!

Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media