The Challenge: Irrelevant Course Content

HR and L&D teams invest heavily in learning platforms, yet employees are still pushed into generic courses that don’t match their role, context or skill level. A sales manager receives basic Excel training, a senior engineer repeats a mandatory “intro to communication” module, and new hires click through e-learnings whose content never resurfaces in their day-to-day work. The result is predictable: boredom, low completion rates and growing skepticism about the value of internal learning.

Traditional approaches to learning design and curation can’t keep up. HR relies on static competency frameworks, manual needs analyses and vendor catalogs that are updated once a year at best. Most LMS systems don’t understand what employees actually do in their jobs or which skills truly drive performance. Content tagging is inconsistent, recommendations are rule-based rather than intelligence-driven, and every change requires time-consuming coordination between HR, subject-matter experts and IT.

The business impact is significant. Budgets are locked into content licenses that don’t move the needle on performance. High performers disengage from learning because it wastes their time, while critical skill gaps in key roles remain unaddressed. When senior management asks for ROI on L&D spend, HR often has usage statistics instead of impact metrics: logins, completion rates and smile sheets instead of faster onboarding, higher sales conversion or fewer quality incidents. Over time, this erodes trust in HR’s strategic role.

Yet this challenge is solvable. With the latest generation of AI, you can connect what’s taught in courses to what actually happens in your business, at the level of tasks, roles and outcomes. At Reruption, we’ve helped organisations build AI-powered learning products and internal platforms that do exactly this: align content with real work and adapt it to each learner. In the rest of this guide, you’ll see how to use Gemini to identify irrelevant content, personalize learning paths and turn your LMS into a system that genuinely supports performance.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s perspective, the real opportunity is not just to add another AI assistant on top of your LMS, but to use Gemini to connect training content with real-world work and performance data. Based on our hands-on experience building AI-enabled learning experiences and internal tools, we see Gemini as a powerful orchestrator: it can read your course library, understand role profiles and competency models, and compare them with how people actually work inside your organisation.

Anchor Gemini in Business Outcomes, Not Content Coverage

Before you start asking Gemini to recommend courses, define what “effective learning” means in business terms. For HR and L&D, that usually means faster ramp-up times, fewer errors, higher productivity or better engagement in specific roles. If you only look at course completion, Gemini will optimize for the wrong goal and may still recommend irrelevant content, just more efficiently.

Start by selecting 2–3 critical roles (for example, account executives, production supervisors, customer support agents) and define measurable outcomes for each. Then ensure any Gemini initiative is framed around improving those metrics. This gives you a clear lens for what “relevance” means and allows Gemini to rank and evaluate content by its contribution to real performance, not just by topic similarity.

Treat Training Relevance as a Data Problem

Irrelevant course content persists because most HR systems don’t hold enough structured data about roles, tasks and skills. From a strategic view, you should treat learning relevance as a data integration challenge rather than a UX or content problem. Gemini is strongest when it can see across your HRIS, LMS, knowledge bases and performance metrics.

Map out where your data lives: role descriptions in your HR system, course catalogs and assessments in your LMS, SOPs and playbooks in internal wikis, and KPIs in BI tools. The more of this context Gemini can access (securely and with proper governance), the more accurately it can flag irrelevant modules and surface content that truly matches day-to-day work.

Start with Targeted Pilots, Not a Full L&D Transformation

A common strategic mistake is to aim for a complete AI overhaul of all learning programs in one go. That creates resistance, complexity and risk. Instead, use Gemini in a focused pilot on a clearly defined learning journey where relevance issues are visible and painful: e.g. onboarding for a specific role or mandatory training in one business unit.

In that pilot, define a narrow set of questions Gemini should answer: Which modules are redundant or outdated? Where are there gaps versus actual tasks? How can we tailor microlearning based on role and proficiency? This approach builds internal confidence, provides concrete evidence for ROI and gives your HR and IT teams time to adjust governance and workflows before scaling.

Prepare Teams for AI-Augmented Learning Design

Gemini won’t replace your L&D team, but it will change how they work. Strategically, you need to move from content production and catalog management toward AI-augmented curation and continuous optimization. Designers and HR business partners should be ready to interpret AI recommendations, challenge them and turn them into concrete learning interventions.

Invest early in capability building: short enablement sessions showing how Gemini evaluates content, where its limits are and how to give it better context. Make clear that AI suggestions are starting points, not orders. When HR and subject-matter experts see Gemini as a partner in diagnosing relevance issues and co-creating assets, adoption goes up and risk of blind trust goes down.

Build Governance Around Data, Bias and Compliance

Using Gemini on HR and learning data raises valid concerns around privacy, bias and compliance. Strategically, you should define a governance framework for AI in HR before scaling. That includes which data Gemini can access, how outputs are validated and who is accountable for changes to mandatory training or certification paths.

Set clear rules: no direct automated changes to compliance-critical courses without human review; regular audits of Gemini’s recommendations for different demographics to detect potential bias; transparent communication to employees about how their learning data is used. This governance reduces risk and builds trust, making it easier to expand AI capabilities across more HR processes later.

Using Gemini to tackle irrelevant course content only pays off when you tie it to real roles, real tasks and real performance data, and when HR treats AI as an ongoing learning partner rather than a one-off project. With the right strategy, Gemini becomes the engine that keeps your content library aligned with how your business actually works. At Reruption, we’ve repeatedly embedded AI into learning and people workflows, and we know where the technical and organisational traps are. If you want to explore a focused pilot or validate a specific use case, we’re happy to help you turn this from theory into a working solution.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From banking to e-commerce: Learn how companies successfully use AI.

bunq

Banking

As bunq experienced rapid growth as the second-largest neobank in Europe, scaling customer support became a critical challenge. With millions of users demanding personalized banking information on accounts, spending patterns, and financial advice on demand, the company faced pressure to deliver instant responses without proportionally expanding its human support teams, which would increase costs and slow operations. Traditional search functions in the app were insufficient for complex, contextual queries, leading to inefficiencies and user frustration. Additionally, ensuring data privacy and accuracy in a highly regulated fintech environment posed risks. bunq needed a solution that could handle nuanced conversations while complying with EU banking regulations, avoiding hallucinations common in early GenAI models, and integrating seamlessly without disrupting app performance. The goal was to offload routine inquiries, allowing human agents to focus on high-value issues.

Solution

bunq addressed these challenges by developing Finn, a proprietary GenAI platform integrated directly into its mobile app, replacing the traditional search function with a conversational AI chatbot. After hiring over a dozen data specialists in the prior year, the team built Finn to query user-specific financial data securely, answer questions on balances, transactions, budgets, and even provide general advice while remembering conversation context across sessions. Launched as Europe's first AI-powered bank assistant in December 2023 following a beta, Finn evolved rapidly. By May 2024, it became fully conversational, enabling natural back-and-forth interactions. This retrieval-augmented generation (RAG) approach grounded responses in real-time user data, minimizing errors and enhancing personalization.

Results

  • 100,000+ questions answered within months post-beta (end-2023)
  • 40% of user queries fully resolved autonomously by mid-2024
  • 35% of queries assisted, totaling 75% immediate support coverage
  • Hired 12+ data specialists pre-launch for data infrastructure
  • Second-largest neobank in Europe by user base (1M+ users)
Read case study →

Zalando

E-commerce

In the online fashion retail sector, high return rates—often exceeding 30-40% for apparel—stem primarily from fit and sizing uncertainties, as customers cannot physically try on items before purchase. Zalando, Europe's largest fashion e-tailer serving 27 million active customers across 25 markets, faced substantial challenges with these returns, incurring massive logistics costs, environmental impact, and customer dissatisfaction due to inconsistent sizing across over 6,000 brands and 150,000+ products. Traditional size charts and recommendations proved insufficient, with early surveys showing up to 50% of returns attributed to poor fit perception, hindering conversion rates and repeat purchases in a competitive market. This was compounded by the lack of immersive shopping experiences online, leading to hesitation among tech-savvy millennials and Gen Z shoppers who demanded more personalized, visual tools.

Solution

Zalando addressed these pain points by deploying a generative computer vision-powered virtual try-on solution, enabling users to upload selfies or use avatars to see realistic garment overlays tailored to their body shape and measurements. Leveraging machine learning models for pose estimation, body segmentation, and AI-generated rendering, the tool predicts optimal sizes and simulates draping effects, integrating with Zalando's ML platform for scalable personalization. The system combines computer vision (e.g., for landmark detection) with generative AI techniques to create hyper-realistic visualizations, drawing from vast datasets of product images, customer data, and 3D scans, ultimately aiming to cut returns while enhancing engagement. Piloted online and expanded to outlets, it forms part of Zalando's broader AI ecosystem including size predictors and style assistants.

Results

  • 30,000+ customers used virtual fitting room shortly after launch
  • 5-10% projected reduction in return rates
  • Up to 21% fewer wrong-size returns via related AI size tools
  • Expanded to all physical outlets by 2023 for jeans category
  • Supports 27 million customers across 25 European markets
  • Part of AI strategy boosting personalization for 150,000+ products
Read case study →

Bank of America

Banking

Bank of America faced a high volume of routine customer inquiries, such as account balances, payments, and transaction histories, overwhelming traditional call centers and support channels. With millions of daily digital banking users, the bank struggled to provide 24/7 personalized financial advice at scale, leading to inefficiencies, longer wait times, and inconsistent service quality. Customers demanded proactive insights beyond basic queries, like spending patterns or financial recommendations, but human agents couldn't handle the sheer scale without escalating costs. Additionally, ensuring conversational naturalness in a regulated industry like banking posed challenges, including compliance with financial privacy laws, accurate interpretation of complex queries, and seamless integration into the mobile app without disrupting user experience. The bank needed to balance AI automation with human-like empathy to maintain trust and high satisfaction scores.

Solution

Bank of America developed Erica, an in-house NLP-powered virtual assistant integrated directly into its mobile banking app, leveraging natural language processing and predictive analytics to handle queries conversationally. Erica acts as a gateway for self-service, processing routine tasks instantly while offering personalized insights, such as cash flow predictions or tailored advice, using client data securely. The solution evolved from a basic navigation tool to a sophisticated AI, incorporating generative AI elements for more natural interactions and escalating complex issues to human agents seamlessly. Built with a focus on in-house language models, it ensures control over data privacy and customization, driving enterprise-wide AI adoption while enhancing digital engagement.

Results

  • 3+ billion total client interactions since 2018
  • Nearly 50 million unique users assisted
  • 58+ million interactions per month (2025)
  • 2 billion interactions reached by April 2024 (doubled from 1B in 18 months)
  • 42 million clients helped by 2024
  • 19% earnings spike linked to efficiency gains
Read case study →

Wells Fargo

Banking

Wells Fargo, serving 70 million customers across 35 countries, faced intense demand for 24/7 customer service in its mobile banking app, where users needed instant support for transactions like transfers and bill payments. Traditional systems struggled with high interaction volumes, long wait times, and the need for rapid responses via voice and text, especially as customer expectations shifted toward seamless digital experiences. Regulatory pressures in banking amplified challenges, requiring strict data privacy to prevent PII exposure while scaling AI without human intervention. Additionally, most large banks were stuck in proof-of-concept stages for generative AI, lacking production-ready solutions that balanced innovation with compliance. Wells Fargo needed a virtual assistant capable of handling complex queries autonomously, providing spending insights, and continuously improving without compromising security or efficiency.

Solution

Wells Fargo developed Fargo, a generative AI virtual assistant integrated into its banking app, leveraging Google Cloud AI including Dialogflow for conversational flow and PaLM 2/Flash 2.0 LLMs for natural language understanding. This model-agnostic architecture enabled privacy-forward orchestration, routing queries without sending PII to external models. Launched in March 2023 after a 2022 announcement, Fargo supports voice/text interactions for tasks like transfers, bill pay, and spending analysis. Continuous updates added AI-driven insights, agentic capabilities via Google Agentspace, ensuring zero human handoffs and scalability for regulated industries. The approach overcame challenges by focusing on secure, efficient AI deployment.

Results

  • 245 million interactions in 2024
  • 20 million interactions by Jan 2024 since March 2023 launch
  • Projected 100 million interactions annually (2024 forecast)
  • Zero human handoffs across all interactions
  • Zero PII exposed to LLMs
  • Average 2.7 interactions per user session
Read case study →

Upstart

Banking

Traditional credit scoring relies heavily on FICO scores, which evaluate only a narrow set of factors like payment history and debt utilization, often rejecting creditworthy borrowers with thin credit files, non-traditional employment, or education histories that signal repayment ability. This results in up to 50% of potential applicants being denied despite low default risk, limiting lenders' ability to expand portfolios safely. Fintech lenders and banks faced the dual challenge of regulatory compliance under fair lending laws while seeking growth. Legacy models struggled with inaccurate risk prediction amid economic shifts, leading to higher defaults or conservative lending that missed opportunities in underserved markets. Upstart recognized that incorporating alternative data could unlock lending to millions previously excluded.

Solution

Upstart developed an AI-powered lending platform using machine learning models that analyze over 1,600 variables, including education, job history, and bank transaction data, far beyond FICO's 20-30 inputs. Their gradient boosting algorithms predict default probability with higher precision, enabling safer approvals. The platform integrates via API with partner banks and credit unions, providing real-time decisions and fully automated underwriting for most loans. This shift from rule-based to data-driven scoring ensures fairness through explainable AI techniques like feature importance analysis. Implementation involved training models on billions of repayment events, continuously retraining to adapt to new data patterns.

Results

  • 44% more loans approved vs. traditional models
  • 36% lower average interest rates for borrowers
  • 80% of loans fully automated
  • 73% fewer losses at equivalent approval rates
  • Adopted by 500+ banks and credit unions by 2024
  • 157% increase in approvals at same risk level
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect Gemini to Your LMS and HR Systems for a Single Skills View

The first tactical step is integration. To identify irrelevant course content, Gemini needs access to your LMS metadata, course descriptions, assessments and HR role data. Work with IT to expose this data via secure APIs or data exports into a controlled environment where Gemini can operate.

Start with a subset of data: course titles, descriptions, tags, estimated duration, target audience, completion statistics and user feedback scores, plus job titles, role profiles and high-level competency models from HR. Configure Gemini so it can read this information and build a semantic map between skills, roles and content. This doesn’t require touching production systems at first; you can prove value in a sandbox environment.
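To make the semantic map concrete, here is a minimal sketch of relevance scoring over exported data, assuming courses and roles arrive as simple records (the field names are illustrative, not a required schema). It uses naive keyword overlap; in a real setup, Gemini embeddings or model calls would replace that scoring, but the data shape and the sandbox-friendly approach stay the same.

```python
from dataclasses import dataclass, field

@dataclass
class Course:
    title: str
    description: str
    tags: list = field(default_factory=list)

@dataclass
class RoleProfile:
    title: str
    key_skills: list = field(default_factory=list)

def relevance_score(role: RoleProfile, course: Course) -> float:
    """Share of the role's key skills that the course appears to cover.

    Naive keyword overlap stands in for semantic similarity here; with
    Gemini, embeddings of the same fields would replace this scoring.
    """
    terms = set((course.description + " " + " ".join(course.tags)).lower().split())
    hits = sum(1 for skill in role.key_skills
               if all(word in terms for word in skill.lower().split()))
    return hits / len(role.key_skills) if role.key_skills else 0.0

role = RoleProfile("Inside Sales Representative",
                   ["prospecting", "objection handling", "product knowledge"])
course = Course("Sales Basics",
                "Covers prospecting fundamentals and objection handling scripts.",
                ["sales", "prospecting"])
score = relevance_score(role, course)  # 2 of 3 skills matched
```

Scoring every (role, course) pair this way yields a first-pass relevance matrix that can be reviewed before any model is involved.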

Use Gemini to Audit Your Course Catalog for Relevance and Gaps

Once integrations are in place, use Gemini as an automated auditor for your learning catalog. Let it analyze each course and compare it to role profiles and task descriptions to identify where content is misaligned or duplicative. You can run this as a recurring job (e.g. quarterly) to keep your catalog lean.

For example, you can prompt Gemini like this during an initial audit:

System role:
You are an AI learning strategist helping HR evaluate the relevance of training content.

User:
Here is a set of data:
1) Role profile with key tasks and required skills
2) List of existing courses with title, description, target audience and duration

Please:
- Classify each course as "critical", "useful", "nice-to-have" or "irrelevant" for this role
- Flag clear redundancies (courses that cover >70% of the same topics)
- Suggest missing topics based on the role tasks
- Output results in a table with justification for each classification

Expected outcome: a prioritized list of courses to keep, combine, retire or redesign for a specific role, giving HR a concrete starting point for catalog cleanup.
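A recurring audit job could assemble that prompt from exported data roughly as follows. This is a sketch: the course field names and the model name in the comment are assumptions, not a prescribed setup, and only the prompt assembly is shown runnable, with the actual Gemini call left as a comment.

```python
def build_audit_prompt(role_profile: str, courses: list) -> str:
    """Assemble the catalog-audit prompt from exported LMS/HR data.

    Each course is a dict with title, duration, audience and description;
    these field names are illustrative, not a required schema.
    """
    course_lines = "\n".join(
        f"- {c['title']} ({c['duration']}, audience: {c['audience']}): "
        f"{c['description']}"
        for c in courses
    )
    return (
        "You are an AI learning strategist helping HR evaluate the relevance "
        "of training content.\n\n"
        f"Role profile:\n{role_profile}\n\n"
        f"Courses:\n{course_lines}\n\n"
        "Classify each course as critical, useful, nice-to-have or irrelevant "
        "for this role, flag redundancies (>70% topic overlap), suggest "
        "missing topics, and output a table with a justification per course."
    )

prompt = build_audit_prompt(
    "Account Executive: prospecting, discovery calls, negotiation.",
    [{"title": "Excel Basics", "duration": "2h",
      "audience": "all employees", "description": "Intro to spreadsheets."}],
)
# Sending it is then a single SDK call, e.g. with google-generativeai:
# genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt)
```

Keeping prompt assembly in plain code like this makes the quarterly job easy to version, review and rerun as the catalog changes.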

Generate Role- and Level-Specific Microlearning with Gemini

After cleaning up the catalog, use Gemini to create focused microlearning assets that match specific roles and proficiency levels. Feed Gemini your existing long-form courses, SOPs and playbooks, and ask it to produce short, scenario-based modules that reflect real work situations.

Here’s an example prompt to turn a generic course into targeted microlearning:

System role:
You are an instructional designer creating microlearning for busy employees.

User:
You will receive:
- A long-form training document about "Customer complaint handling basics"
- A role description (e.g. "Senior support agent")
- A proficiency level (e.g. "advanced")

Task:
1) Extract only the concepts that are critical for this specific role and level.
2) Create 3 microlearning units, each 5–7 minutes, using realistic scenarios.
3) For each unit, provide:
   - Learning objective
   - Short explanation
   - 2–3 scenario questions with model answers
4) Avoid beginner content. Assume prior experience.

Expected outcome: a set of targeted micro modules that replace generic, one-size-fits-all training with relevant, role-specific learning experiences.

Personalize Learning Paths Using Skill Signals and Feedback

To avoid pushing irrelevant content, configure Gemini to build adaptive learning paths based on skill signals instead of job title alone. Combine information such as assessment scores, manager feedback, self-assessments and usage patterns to estimate an employee’s proficiency across key skills.
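As a sketch of how those signals could be blended before handing a profile to Gemini, here is a simple weighted average on a 1-5 scale. The signal names and default weights are illustrative assumptions; in practice HR would calibrate them per skill.

```python
def estimate_proficiency(signals, weights=None):
    """Blend skill signals (each on a 1-5 scale) into one proficiency value.

    Signal names and default weights are illustrative; unknown signals are
    ignored and the remaining weights are renormalized.
    """
    weights = weights or {"assessment": 0.4, "manager": 0.3,
                          "self": 0.2, "usage": 0.1}
    known = [k for k in signals if k in weights]
    total = sum(weights[k] for k in known)
    if total == 0:
        raise ValueError("no known signals provided")
    return sum(signals[k] * weights[k] for k in known) / total

# Strong test score, weaker self-rating, no usage data yet:
level = estimate_proficiency({"assessment": 4, "manager": 3, "self": 2})
```

Renormalizing over the signals actually present keeps the estimate usable for new hires who don't yet have usage history.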

You can implement a workflow where Gemini receives a learner profile and suggests a personalized path:

System role:
You are an AI learning path designer.

User:
Input data:
- Role: Inside Sales Representative
- Skills with current levels (1–5): Prospecting 2, Product knowledge 3, Objection handling 1
- Time available per week: 1.5 hours
- Course catalog with metadata

Task:
Design a 4-week learning path that:
- Focuses on the lowest skills first
- Mixes existing courses, microlearning units, and on-the-job practice tasks
- Avoids any content clearly below the stated levels
- Limits weekly time to 1.5 hours

Output:
- Week-by-week plan
- For each activity: type, duration, rationale

Expected outcome: personalized plans that skip basic content for advanced learners, focus on actual gaps and respect time constraints, leading to higher engagement and better on-the-job transfer.
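The path logic Gemini is asked to follow can be approximated offline with a few lines of greedy planning, which is useful for sanity-checking its output. This sketch assumes catalog items carry skill, level and minutes fields (illustrative names) and implements the same rules: weakest skills first, nothing below the learner's level, weekly time box respected.

```python
def plan_learning_path(skills, catalog, weeks=4, weekly_minutes=90):
    """Greedy week-by-week plan mirroring the rules given to Gemini."""
    # Drop content clearly below the learner's current level for that skill.
    eligible = [c for c in catalog if c["level"] >= skills.get(c["skill"], 0)]
    # Weakest skills first; shorter items first within the same skill level.
    eligible.sort(key=lambda c: (skills.get(c["skill"], 0), c["minutes"]))
    plan, i = [], 0
    for week in range(1, weeks + 1):
        budget, activities = weekly_minutes, []
        # Fill the week until the next activity no longer fits the time box.
        while i < len(eligible) and eligible[i]["minutes"] <= budget:
            activities.append(eligible[i])
            budget -= eligible[i]["minutes"]
            i += 1
        plan.append({"week": week, "activities": activities})
    return plan

skills = {"Prospecting": 2, "Product knowledge": 3, "Objection handling": 1}
catalog = [
    {"title": "Handling pricing objections", "skill": "Objection handling",
     "level": 1, "minutes": 30},
    {"title": "Advanced objection handling", "skill": "Objection handling",
     "level": 2, "minutes": 45},
    {"title": "Outbound prospecting sequences", "skill": "Prospecting",
     "level": 2, "minutes": 60},
    {"title": "Product 101", "skill": "Product knowledge",
     "level": 1, "minutes": 20},  # below the stated level 3 -> filtered out
]
path = plan_learning_path(skills, catalog)
```

Comparing such a deterministic baseline against Gemini's proposed plan makes it easy to spot when the model ignores the constraints.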

Embed Gemini into HR and Manager Workflows, Not Just the LMS

To truly eliminate irrelevant course assignments, bring Gemini recommendations into the tools HR and managers already use. For example, integrate Gemini into your HRIS or performance conversation templates so that when a manager logs development goals, they immediately see a curated, AI-generated set of relevant learning options.

A practical sequence could be:

  • Manager completes a performance review and tags 2–3 development areas.
  • Those tags, plus the role and current skill levels, are passed to Gemini.
  • Gemini returns 5–10 highly relevant assets: courses, micro units, internal documents, shadowing suggestions.
  • The manager and employee select and confirm the plan, which is then written back into the LMS as assignments.

This avoids the typical manual search through a bloated catalog and reduces the risk of assigning content that doesn’t fit the employee’s context.
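The sequence above can be sketched as a small piece of glue code, with the Gemini call stubbed out as an injectable function so the filtering and ranking can be exercised offline. The asset fields and the stub are assumptions for illustration; the confirmed shortlist would then be written back to the LMS.

```python
def recommend_for_review(role, skill_levels, dev_areas, recommender, top_n=5):
    """Turn tagged development areas into a shortlist of learning assets.

    `recommender` stands in for the Gemini call: any callable returning
    scored assets works, which keeps the glue logic testable offline.
    """
    candidates = recommender(role=role, skills=skill_levels, focus=dev_areas)
    # Keep only assets that touch a tagged development area, best first.
    shortlist = [a for a in candidates if a["skill"] in dev_areas]
    shortlist.sort(key=lambda a: a["score"], reverse=True)
    # Manager and employee confirm before anything is written to the LMS.
    return shortlist[:top_n]

def fake_gemini(role, skills, focus):  # illustrative stand-in for the model
    return [
        {"title": "Negotiation drills", "skill": "negotiation", "score": 0.9},
        {"title": "Excel Basics", "skill": "spreadsheets", "score": 0.8},
        {"title": "Discovery call clinic", "skill": "discovery", "score": 0.7},
    ]

picks = recommend_for_review("Account Executive", {"negotiation": 2},
                             ["negotiation", "discovery"], fake_gemini)
```

Note how the off-topic "Excel Basics" suggestion is filtered out before a manager ever sees it.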

Track Impact Metrics and Close the Loop with Continuous Optimization

Finally, move beyond completion metrics. Instrument your AI-enabled learning flows so you can measure time-to-productivity, error rates, sales performance or support quality before and after Gemini-powered interventions. Feed these metrics back into Gemini so it learns which content truly drives outcomes.

For example, you might track: reduction in onboarding time for a role, improvement in first-call resolution after a new microlearning series, or error rate reduction in a specific process after targeted refreshers. Use scheduled Gemini jobs to analyze this data and suggest course updates, new microlearning or module retirements. Over 6–12 months, you should realistically aim for outcomes like 20–30% reduction in time spent on irrelevant training for target roles, measurable improvements in 1–2 key performance indicators and a leaner catalog with higher usage of remaining content.
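Computing those before/after deltas is straightforward. A small sketch, assuming KPI snapshots are exported as plain numbers (names illustrative), with the sign normalized so a positive value always means improvement:

```python
def kpi_impact(before, after, higher_is_better):
    """Relative change per KPI, signed so positive always means improvement."""
    impact = {}
    for kpi, prior in before.items():
        change = (after[kpi] - prior) / prior
        # Flip the sign for KPIs where lower is better (e.g. onboarding days).
        impact[kpi] = change if higher_is_better.get(kpi, True) else -change
    return impact

result = kpi_impact(
    before={"onboarding_days": 60, "first_call_resolution": 0.70},
    after={"onboarding_days": 45, "first_call_resolution": 0.77},
    higher_is_better={"onboarding_days": False, "first_call_resolution": True},
)
# onboarding_days: (45 - 60) / 60 = -0.25, flipped to a +0.25 improvement
```

Feeding a uniform "impact per KPI" signal back into the optimization loop avoids mixing up metrics where lower is better with metrics where higher is better.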

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Gemini help identify irrelevant course content?

Gemini helps by analyzing the match between your roles, skills and course catalog. It reads role profiles, task descriptions and performance data, then compares them with training descriptions, tags and usage patterns. Based on this, Gemini can flag courses that don’t fit specific roles, identify redundancies, and recommend more relevant alternatives or new content topics.

Beyond detection, Gemini can also generate role-specific microlearning from existing material, so you don’t have to start from scratch. Over time, this shifts your learning offer from generic, one-size-fits-all content to targeted learning journeys that align with real work.

What data and skills do we need to get started?

You don’t need a perfect data landscape to start, but you do need access to some core data sources: LMS course metadata and usage data, basic role and competency information from HR, and a secure environment where Gemini can operate. Ideally, you also have at least one performance KPI per role that L&D is trying to influence.

From a skills perspective, you’ll need collaboration between HR/L&D, IT and one or two subject-matter experts. Reruption often starts with a short technical scoping to define which systems to connect, what data is required and how to handle security and access rights.

How quickly will we see results?

For a focused pilot on a few key roles, you can see actionable insights within 4–6 weeks. In that timeframe, Gemini can audit selected parts of your catalog, highlight irrelevant or redundant content and propose more targeted learning assets based on existing materials.

Measurable performance impact (such as reduced onboarding time or better quality metrics) typically shows up after 3–6 months, once new learning paths are in place and employees have completed them. The exact timeline depends on your internal decision speed, the complexity of your systems and how quickly managers adopt the new learning recommendations.

What does it cost, and where does the ROI come from?

Using Gemini usually shifts spend rather than simply adding cost. On the one hand, there is investment in integration, configuration and change management. On the other, you can often reduce spending on unused or low-impact courses and avoid purchasing additional generic content libraries.

ROI comes from several directions: less time wasted on irrelevant training, faster ramp-up in critical roles, better targeting of external content spend, and stronger evidence when defending L&D budgets. Many organisations can realistically aim for a 10–20% reduction in catalog size with higher utilisation of what remains, alongside clear business impact for a subset of critical roles.

How does Reruption support implementation?

Reruption supports you end-to-end with a hands-on, Co-Preneur approach. We don’t just write a strategy; we work inside your organisation to connect Gemini to your LMS and HR systems, design the data flows, and build the workflows HR and managers will use. Our AI PoC offering (9,900€) is a practical way to start: in a few weeks, we deliver a working prototype that proves whether Gemini can detect irrelevant content and generate more targeted learning for your context.

From there, we help you evaluate performance, define governance and scale the solution across roles and countries. Because we combine AI engineering, security and enablement, we can move from idea to live pilot quickly while ensuring compliance and adoption inside your existing HR structures.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media