The Challenge: Limited Learning Insights

Most HR and L&D teams are flying blind. They see attendance rates, course completions and satisfaction scores, but not whether people actually apply new skills on the job. Without clear visibility into which modules move the needle and which are just noise, it is hard to steer the learning portfolio with confidence.

Traditional approaches rely on manual reporting in LMS dashboards, sporadic surveys and ad-hoc Excel analyses. These methods were acceptable when content libraries were small and expectations on L&D were modest. But as catalogues grow, skills become more dynamic and budgets face scrutiny, spreadsheet-based analysis and generic dashboards simply cannot keep up. They tell you what happened, not what worked.

The cost of not solving this insight gap is substantial. HR continues funding ineffective learning modules while critical skill gaps remain open. High-potential employees waste time on mismatched training, while managers lose trust in L&D recommendations. Over time, this leads to higher opportunity costs, weaker performance enablement and a competitive disadvantage compared to organisations that can precisely link learning investments to measurable capability gains.

The good news: this problem is very solvable. With modern AI such as Gemini, HR can analyse assessments, behaviour data and performance indicators at scale to understand which content truly develops skills. At Reruption, we have helped organisations build AI-powered learning and decision tools that replace manual analysis with continuous, data-driven insight. In the rest of this article, you will find practical guidance on how to turn limited learning insights into a strategic advantage using Gemini.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s perspective, Gemini for L&D analytics is not just another dashboard add-on, but a way to fundamentally change how HR understands and steers learning. Drawing on our hands-on experience building AI-powered learning platforms, skill assessment tools and analytics assistants, we see Gemini as a flexible layer that sits on top of your LMS exports, Google Workspace and HR data to deliver insight, not just reports.

Start with Clear Learning Questions, Not with Data Dumps

Before connecting every LMS export to Gemini, define the business questions you want to answer. For example: “Which modules correlate with higher sales performance 3 months later?” or “Where do mid-level managers most often drop out of leadership programs?” A clear question anchors your Gemini-driven learning analytics and prevents you from generating pretty but unused reports.

Strategically, involve HR business partners and line managers in defining these questions. They feel the skills gaps daily and can point to the decisions they struggle to make (e.g. promotion readiness, reskilling priorities). Gemini then becomes a decision-support engine for HR, not just an L&D reporting toy. This alignment creates early buy-in when you later shift budgets based on AI-generated insights.

Design a Data Model Around Skills and Journeys

To move beyond completions, you need to think in terms of skills and learning journeys, not just courses and events. Strategically, this means mapping content to skill tags, proficiency levels and roles, then structuring your data exports so Gemini can see how learners move across modules over time.

This mindset shift is essential: instead of “Who finished course X?”, you want to ask, “How does someone progress from basic to advanced proficiency in data literacy, and which modules accelerate that journey?” Plan this model up front with your L&D team and IT. It reduces rework later and ensures Gemini can generate robust skill progression insights rather than isolated course statistics.
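To make this concrete, a minimal skills-and-journeys data model could look like the following Python sketch. The table and field names are illustrative assumptions, not a prescribed schema; adapt them to your LMS and competency framework:

from dataclasses import dataclass
from datetime import datetime

# Illustrative tables for a skills-oriented data model.
# Field names are assumptions; align them with your own LMS exports.

@dataclass
class Course:
    course_id: str
    title: str
    skill_tags: list[str]        # e.g. ["data_literacy", "sql"]
    proficiency_level: str       # "beginner" | "intermediate" | "advanced"
    target_roles: list[str]      # e.g. ["analyst", "sales_manager"]

@dataclass
class Completion:
    user_id: str
    course_id: str
    completed_at: datetime

@dataclass
class Assessment:
    user_id: str
    course_id: str
    pre_score: float
    post_score: float

# A learning journey is then simply the ordered sequence of completions per learner,
# which lets you analyse how people progress through proficiency levels for each skill.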

Prepare Your Team for Data-Driven Decisions, Not AI Magic

Introducing AI in HR learning analytics is as much an organisational change as it is a technical project. Your L&D managers may feel threatened by automated insights or overwhelmed by new metrics. Strategically, position Gemini as an assistant that surfaces patterns and hypotheses, while humans still make prioritisation calls.

Build readiness by running joint review sessions where Gemini-generated findings are challenged by HR and business stakeholders. For example, ask “Does this pattern match what you see in the field? What might explain deviations?” This creates a culture where AI insights are interrogated and refined, not blindly accepted, and it increases trust that Gemini-based recommendations are a support, not a replacement.

Balance Insight Ambition with Privacy and Compliance

Using Gemini on learning data quickly touches sensitive areas: individual performance, assessment results, and potentially demographic information. Strategically, you must define clear governance and compliance boundaries before you roll out advanced analytics. Decide which insights are aggregated, which are role-based, and how you avoid unintended bias or discrimination.

Involve works councils, data protection officers and legal early. Show them sample use cases, anonymisation approaches and access controls. With the right framing, Gemini becomes a tool for fairer, more targeted development opportunities, not surveillance. This proactive risk mitigation will save you from delays and trust issues later.
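One simple building block you can show these stakeholders is pseudonymisation of learner identifiers before any data reaches an AI tool. Below is a minimal sketch; the salt handling, retained columns and file names are assumptions to agree with your data protection officer:

import hashlib
import pandas as pd

# Illustrative pseudonymisation: replace user IDs with salted hashes
# before exporting learning data for AI-assisted analysis.
SALT = "store-this-secret-outside-the-dataset"  # assumption: managed by IT/security

def pseudonymise(user_id: str) -> str:
    return hashlib.sha256((SALT + str(user_id)).encode("utf-8")).hexdigest()[:16]

df = pd.read_csv("LMS_Export_Q1_2025.csv")           # file name reused from the examples below
df["user_id"] = df["user_id"].map(pseudonymise)      # not reversible without the salt
df = df.drop(columns=[c for c in ["email", "full_name"] if c in df.columns])
df.to_csv("LMS_Export_Q1_2025_pseudonymised.csv", index=False)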

Pilot in One Critical Capability Area Before Scaling

Instead of trying to instrument your entire learning landscape, pick one critical capability area—such as digital skills, frontline enablement or leadership—and focus your first Gemini pilot there. Choose an area where you can link learning to tangible business outcomes (reduced errors, higher sales, fewer support tickets).

This focused approach allows you to validate data quality, refine your analytics prompts and demonstrate real impact within weeks, not months. Once stakeholders see that better insights lead to better skill development and performance in one area, it becomes much easier to secure support and budget to extend Gemini analytics across the rest of your learning portfolio.

Using Gemini to overcome limited learning insights is ultimately about turning scattered LMS metrics into an evidence base for skills and performance decisions. When you start with sharp questions, a skills-oriented data model and careful change management, Gemini can show HR which programs truly build capabilities and where to redirect budget. At Reruption, we specialise in turning these ideas into working AI solutions inside your organisation—from a focused PoC to embedded tools your HR team uses every day. If you want to explore what this could look like with your data and systems, we are ready to co-design and implement a tailored approach with you.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Healthcare to Logistics: Learn how companies successfully use AI.

Mass General Brigham

Healthcare

Mass General Brigham, one of the largest healthcare systems in the U.S., faced a deluge of medical imaging data from radiology, pathology, and surgical procedures. With millions of scans annually across its 12 hospitals, clinicians struggled with analysis overload, leading to delays in diagnosis and increased burnout rates among radiologists and surgeons. The need for precise, rapid interpretation was critical, as manual reviews limited throughput and risked errors in complex cases like tumor detection or surgical risk assessment. Additionally, operative workflows required better predictive tools. Surgeons needed models to forecast complications, optimize scheduling, and personalize interventions, but fragmented data silos and regulatory hurdles impeded progress. Staff shortages exacerbated these issues, demanding decision support systems to alleviate cognitive load and improve patient outcomes.

Solution

To address these, Mass General Brigham established a dedicated Artificial Intelligence Center, centralizing research, development, and deployment of hundreds of AI models focused on computer vision for imaging and predictive analytics for surgery. This enterprise-wide initiative integrates ML into clinical workflows, partnering with tech giants like Microsoft for foundation models in medical imaging. Key solutions include deep learning algorithms for automated anomaly detection in X-rays, MRIs, and CTs, reducing radiologist review time. For surgery, predictive models analyze patient data to predict post-op risks, enhancing planning. Robust governance frameworks ensure ethical deployment, addressing bias and explainability.

Results

  • $30 million AI investment fund established
  • Hundreds of AI models managed for radiology and pathology
  • Improved diagnostic throughput via AI-assisted radiology
  • AI foundation models developed through Microsoft partnership
  • Initiatives for AI governance in medical imaging deployed
  • Reduced clinician workload and burnout through decision support

Cleveland Clinic

Healthcare

At Cleveland Clinic, one of the largest academic medical centers, physicians grappled with a heavy documentation burden, spending up to 2 hours per day on electronic health record (EHR) notes, which detracted from patient care time. This issue was compounded by the challenge of timely sepsis identification, a condition responsible for nearly 350,000 U.S. deaths annually, where subtle early symptoms often evade traditional monitoring, leading to delayed antibiotics and 20-30% mortality rates in severe cases. Sepsis detection relied on manual vital sign checks and clinician judgment, frequently missing signals 6-12 hours before onset. Integrating unstructured data like clinical notes was manual and inconsistent, exacerbating risks in high-volume ICUs.

Solution

Cleveland Clinic piloted Bayesian Health’s AI platform, a predictive analytics tool that processes structured and unstructured data (vitals, labs, notes) via machine learning to forecast sepsis risk up to 12 hours early, generating real-time EHR alerts for clinicians. The system uses advanced NLP to mine clinical documentation for subtle indicators. Complementing this, the Clinic explored ambient AI solutions like speech-to-text systems (e.g., similar to Nuance DAX or Abridge), which passively listen to doctor-patient conversations, apply NLP for transcription and summarization, auto-populating EHR notes to cut documentation time by 50% or more. These were integrated into workflows to address both prediction and admin burdens.

Results

  • 12 hours earlier sepsis prediction
  • 32% increase in early detection rate
  • 87% sensitivity and specificity in AI models
  • 50% reduction in physician documentation time
  • 17% fewer false positives vs. physician alone
  • Expanded to full rollout post-pilot (Sep 2025)

Associated Press (AP)

News Media

In the mid-2010s, the Associated Press (AP) faced significant constraints in its business newsroom due to limited reporting resources. With only a handful of journalists dedicated to earnings coverage, AP could produce only around 300 earnings reports per quarter, primarily focusing on major S&P 500 companies. This manual process was labor-intensive: reporters had to extract data from financial filings, analyze key metrics like revenue, profits, and growth rates, and craft concise narratives under tight deadlines. As the number of publicly traded companies grew, AP struggled to cover smaller firms, leaving vast amounts of market-relevant information unreported. This limitation not only narrowed AP's market coverage but also tied up journalists on rote tasks, preventing them from pursuing investigative stories or deeper analysis. The pressure of quarterly earnings seasons amplified these issues, with deadlines coinciding across thousands of companies, making scalable reporting impossible without innovation.

Solution

To address this, AP partnered with Automated Insights in 2014, implementing their Wordsmith NLG platform. Wordsmith uses templated algorithms to transform structured financial data—such as earnings per share, revenue figures, and year-over-year changes—into readable, journalistic prose. Reporters input verified data from sources like Zacks Investment Research, and the AI generates draft stories in seconds, which humans then lightly edit for accuracy and style. The solution involved creating custom NLG templates tailored to AP's style, ensuring stories sounded human-written while adhering to journalistic standards. This hybrid approach—AI for volume, humans for oversight—overcame quality concerns. By 2015, AP announced it would automate the majority of U.S. corporate earnings stories, scaling coverage dramatically without proportional staff increases.

Results

  • 14x increase in quarterly earnings stories: 300 to 4,200
  • Coverage expanded to 4,000+ U.S. public companies per quarter
  • Equivalent to freeing time of 20 full-time reporters
  • Stories published in seconds vs. hours manually
  • Zero reported errors in automated stories post-implementation
  • Sustained use expanded to sports, weather, and lottery reports

DHL

Logistics

DHL, a global logistics giant, faced significant challenges from vehicle breakdowns and suboptimal maintenance schedules. Unpredictable failures in its vast fleet of delivery vehicles led to frequent delivery delays, increased operational costs, and frustrated customers. Traditional reactive maintenance—fixing issues only after they occurred—resulted in excessive downtime, with vehicles sidelined for hours or days, disrupting supply chains worldwide. Inefficiencies were compounded by varying fleet conditions across regions, making scheduled maintenance inefficient and wasteful, often over-maintaining healthy vehicles while under-maintaining others at risk. These issues not only inflated maintenance costs by up to 20% in some segments but also eroded customer trust through unreliable deliveries. With rising e-commerce demands, DHL needed a proactive approach to predict failures before they happened, minimizing disruptions in a highly competitive logistics industry.

Solution

DHL implemented a predictive maintenance system leveraging IoT sensors installed on vehicles to collect real-time data on engine performance, tire wear, brakes, and more. This data feeds into machine learning models that analyze patterns, predict potential breakdowns, and recommend optimal maintenance timing. The AI solution integrates with DHL's existing fleet management systems, using algorithms like random forests and neural networks for anomaly detection and failure forecasting. Overcoming data silos and integration challenges, DHL partnered with tech providers to deploy edge computing for faster processing. Pilot programs in key hubs expanded globally, shifting from time-based to condition-based maintenance, ensuring resources focus on high-risk assets.

Results

  • Vehicle downtime reduced by 15%
  • Maintenance costs lowered by 10%
  • Unplanned breakdowns decreased by 25%
  • On-time delivery rate improved by 12%
  • Fleet availability increased by 20%
  • Overall operational efficiency up 18%

bunq

Banking

As bunq experienced rapid growth as the second-largest neobank in Europe, scaling customer support became a critical challenge. With millions of users demanding personalized banking information on accounts, spending patterns, and financial advice on demand, the company faced pressure to deliver instant responses without proportionally expanding its human support teams, which would increase costs and slow operations. Traditional search functions in the app were insufficient for complex, contextual queries, leading to inefficiencies and user frustration. Additionally, ensuring data privacy and accuracy in a highly regulated fintech environment posed risks. bunq needed a solution that could handle nuanced conversations while complying with EU banking regulations, avoiding hallucinations common in early GenAI models, and integrating seamlessly without disrupting app performance. The goal was to offload routine inquiries, allowing human agents to focus on high-value issues.

Solution

bunq addressed these challenges by developing Finn, a proprietary GenAI platform integrated directly into its mobile app, replacing the traditional search function with a conversational AI chatbot. After hiring over a dozen data specialists in the prior year, the team built Finn to query user-specific financial data securely, answer questions on balances, transactions, budgets, and even provide general advice while remembering conversation context across sessions. Launched as Europe's first AI-powered bank assistant in December 2023 following a beta, Finn evolved rapidly. By May 2024, it became fully conversational, enabling natural back-and-forth interactions. This retrieval-augmented generation (RAG) approach grounded responses in real-time user data, minimizing errors and enhancing personalization.

Results

  • 100,000+ questions answered within months post-beta (end-2023)
  • 40% of user queries fully resolved autonomously by mid-2024
  • 35% of queries assisted, totaling 75% immediate support coverage
  • Hired 12+ data specialists pre-launch for data infrastructure
  • Second-largest neobank in Europe by user base (1M+ users)

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect LMS Exports and Google Workspace into a Single Gemini Workspace

The first tactical step is to give Gemini access to the right inputs. Export course data, module structures, assessment scores and completion logs from your LMS (CSV, Excel or via API), and store them in a structured way in Google Drive or Google Sheets. Use consistent naming and date formats so Gemini can recognise relationships across files.

Then, create a dedicated Gemini workspace for HR analytics. When prompting Gemini, explicitly reference the folders or sheets that contain your learning data so it can ingest and reason over them. This avoids the classic “Gemini doesn’t see my data” problem and lays the foundation for reliable insights.

Example prompt to initialise the context:
You are an HR learning analytics assistant.
You have access to the following files in Drive:
- "LMS_Export_Q1_2025.csv" (course completions, timestamps, user IDs)
- "Assessments_Q1_2025.csv" (pre/post scores, module IDs, user IDs)
- "Course_Catalogue_Skill_Tags.xlsx" (course IDs mapped to skill tags)

First, load and summarise the structure of these datasets.
Identify the key fields we can use to link them (e.g., user_id, course_id).
Describe any data quality issues you see.

Expected outcome: Gemini returns a quick schema overview and an initial data quality assessment, so you know whether you can move ahead with deeper analysis or need to fix basics first.
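If you prefer to run the same sanity check locally before involving Gemini, a short pandas sketch could look like this. File and column names follow the prompt above and are assumptions; adjust them to your actual exports:

import pandas as pd

# Load the exports referenced in the prompt above.
completions = pd.read_csv("LMS_Export_Q1_2025.csv")
assessments = pd.read_csv("Assessments_Q1_2025.csv")
catalogue = pd.read_excel("Course_Catalogue_Skill_Tags.xlsx")

for name, df in [("completions", completions), ("assessments", assessments), ("catalogue", catalogue)]:
    print(f"--- {name} ---")
    print(df.dtypes)                   # column names and types
    print(df.isna().mean().round(3))   # share of missing values per column

# Check that the linking key actually overlaps across files
# (column names are assumptions; adjust to your exports).
print("user_id overlap:",
      round(assessments["user_id"].isin(completions["user_id"]).mean(), 3))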

Use Gemini to Map Courses to Skills and Proficiency Levels

If your LMS catalogue is large and inconsistently tagged, manually mapping content to skills can take months. Use Gemini to accelerate this. Export course titles, descriptions and learning objectives, then ask Gemini to propose skill tags and proficiency levels (beginner, intermediate, advanced) based on your competency framework.

Example prompt for skill tagging:
You are helping HR structure our learning catalogue by skills.
Here is our competency framework with key skills and descriptions:
[Paste or link framework]

Here is a table exported from the LMS with columns:
course_id, title, description, learning_objectives

For each course, assign:
- 2-5 primary skill tags from the framework
- A proficiency level (beginner / intermediate / advanced)
Return the result as a table with the new columns added.

Review Gemini’s output with L&D experts, adjust where needed and then re-import the enriched mapping into the LMS or your central skills database. This allows later analytics to answer questions like “Which advanced data skills modules actually move post-test scores?” rather than just “Which data courses are popular?”
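If the catalogue is too large to paste into a chat, the same tagging step can be scripted against the Gemini API. Below is a minimal sketch using the google-generativeai Python SDK; the model name, file names, API key handling and output format are assumptions you will need to adapt:

import pandas as pd
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")             # assumption: key managed via your secrets store
model = genai.GenerativeModel("gemini-1.5-flash")   # assumption: pick the model available to you

framework = open("competency_framework.txt").read() # hypothetical export of your skill framework
catalogue = pd.read_csv("course_catalogue.csv")     # columns: course_id, title, description, learning_objectives

def tag_course(row) -> str:
    prompt = f"""Using this competency framework:
{framework}

Assign 2-5 primary skill tags and a proficiency level (beginner/intermediate/advanced)
to the following course. Answer as: skill_tags=<comma-separated>; level=<level>

Title: {row['title']}
Description: {row['description']}
Objectives: {row['learning_objectives']}"""
    return model.generate_content(prompt).text

catalogue["gemini_tags"] = catalogue.apply(tag_course, axis=1)
catalogue.to_csv("course_catalogue_tagged_draft.csv", index=False)  # review with L&D before re-import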

Analyse Learning Effectiveness with Pre/Post Assessments and Performance Data

To move beyond completion metrics, combine pre/post assessments with business KPIs where possible. Feed Gemini a dataset linking learner IDs, module completions, assessment scores and, if available, anonymised performance indicators (e.g. sales per rep, error rates, quality scores).

Example analysis prompt:
You are an L&D effectiveness analyst.
Use the following datasets:
- Assessments_Q1_2025.csv (user_id, module_id, pre_score, post_score)
- Completions_Q1_2025.csv (user_id, module_id, completed_at)
- Performance_Q2_2025.csv (user_id, performance_metric_name, value)
- Course_Catalogue_Skill_Tags.xlsx (module_id, skill_tags, proficiency_level)

Tasks:
1) For each module, calculate average score improvement (post - pre).
2) Identify modules with high completion but low score improvement.
3) Explore correlations between module completions and performance metrics
   1-3 months later, controlling for pre_score where possible.
4) Summarise which skills and modules show the strongest link to improved performance.

Expected outcome: a ranked list of modules by effectiveness, flags for low-impact content and evidence you can use to adjust curricula and defend or reallocate L&D budgets.
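You can also reproduce or cross-check the first two steps outside Gemini. A minimal pandas sketch, with column names taken from the prompt above as assumptions and a median-based flag as an illustrative rule:

import pandas as pd

assessments = pd.read_csv("Assessments_Q1_2025.csv")   # user_id, module_id, pre_score, post_score
completions = pd.read_csv("Completions_Q1_2025.csv")   # user_id, module_id, completed_at

# 1) Average score improvement (post - pre) per module.
assessments["improvement"] = assessments["post_score"] - assessments["pre_score"]
effectiveness = (
    assessments.groupby("module_id")["improvement"]
    .agg(avg_improvement="mean", learners="count")
    .reset_index()
)

# 2) Flag modules with high completion but low score improvement.
completion_counts = (
    completions.groupby("module_id").size().rename("completions").reset_index()
)
effectiveness = effectiveness.merge(completion_counts, on="module_id", how="left")
effectiveness["low_impact_flag"] = (
    (effectiveness["completions"] > effectiveness["completions"].median())
    & (effectiveness["avg_improvement"] < effectiveness["avg_improvement"].median())
)

print(effectiveness.sort_values("avg_improvement", ascending=False).head(10))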

Predict Dropout Risk and Trigger Targeted Interventions

Gemini can also help identify where learners are likely to drop out of programs and why. Export event-level learning data (logins, time spent per module, failed attempts, pauses between sessions) and use Gemini to build simple rules or even train a lightweight model that flags participants at high risk of non-completion.

Example prompt for dropout analysis:
You are an HR data analyst.
We have the following data from our leadership program:
- Events.csv (user_id, event_type, module_id, timestamp)
- Completions.csv (user_id, completed_program [yes/no])

1) Identify behaviour patterns that differentiate completers from non-completers
   (e.g., time gaps, number of failed quizzes, late-night usage).
2) Propose simple rules we could use as an early warning system.
3) Suggest targeted interventions HR or managers could trigger when a
   participant is flagged as high risk of dropping out.

Once you have these patterns, you can operationalise them: for example, by having HR business partners receive a weekly Gemini-generated report of at-risk participants with suggested interventions like “schedule a manager check-in” or “recommend a shorter microlearning alternative.”
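To operationalise such rules without a heavyweight model, a simple feature-based flag can be computed directly from the event export. A sketch, where the event names and thresholds are pure assumptions that should be calibrated against historical completion data before anyone acts on the flags:

import pandas as pd

events = pd.read_csv("Events.csv", parse_dates=["timestamp"])  # user_id, event_type, module_id, timestamp

# Engineer a few behaviour features per participant.
features = events.groupby("user_id").agg(
    last_activity=("timestamp", "max"),
    sessions=("timestamp", "count"),
    failed_quizzes=("event_type", lambda s: (s == "quiz_failed").sum()),  # assumption: event name
)
features["days_inactive"] = (pd.Timestamp.now() - features["last_activity"]).dt.days

# Simple early-warning rule with illustrative thresholds.
features["at_risk"] = (
    (features["days_inactive"] > 14)
    | (features["failed_quizzes"] >= 2)
    | (features["sessions"] < 3)
)

weekly_report = features[features["at_risk"]].sort_values("days_inactive", ascending=False)
print(weekly_report.head(20))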

Generate Manager-Ready Insight Reports and Learning Path Suggestions

Managers rarely have time to dive into LMS dashboards. Use Gemini to turn raw analytics into concise, role-specific insight reports and personalised learning path suggestions. Feed Gemini the learning and performance data for a team or department and ask it to produce a summary that a manager can act on in 5 minutes.

Example prompt for manager reports:
You are an HR partner preparing a quarterly learning report for the Sales West team.
Input data:
- SalesWest_Learning.csv (user_id, modules_completed, skills_covered)
- SalesWest_Performance.csv (user_id, quota_attainment, win_rate)
- Skill_Framework.pdf (role-specific target skills for Sales roles)

Produce a concise report:
1) Summarise overall learning activity and key skills strengthened.
2) Highlight 3-5 modules that show the strongest link to improved win rate.
3) Identify top 3 skill gaps vs. target profile for the team.
4) Suggest individualised learning paths for the bottom 20% performers
   (2-3 modules each, focusing on high-impact skills).

Expected outcome: consistent, data-backed manager briefings that translate learning analytics into decisions on coaching, promotions and targeted development.

Embed Gemini Workflows into a Repeatable Monthly Learning Insights Cycle

To make these practices stick, turn them into a monthly or quarterly cycle rather than one-off experiments. Document a simple workflow: export data from the LMS on a set date, store it in a predefined Drive structure, run a series of standard Gemini prompts (possibly via automation), and compile the outputs into HR and business-ready formats.

Where possible, automate the repetitive steps using Google Apps Script or simple integrations, so HR teams mainly review insights rather than wrangle data. Define practical KPIs for your AI-driven learning analytics: reduction in low-impact content, percentage of budget shifted to high-effectiveness programs, time saved on reporting, and improvements in targeted skill indicators over time.
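If your team is more comfortable in Python than in Apps Script, a minimal sketch of such a scheduled run could look like the following. The folder layout, prompt files, model name and the inline-CSV approach are all assumptions; for larger exports you would use the SDK's file upload instead of pasting data into the prompt:

from pathlib import Path
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumption: stored in a secrets manager
model = genai.GenerativeModel("gemini-1.5-pro")    # assumption: choose the model available to you

EXPORT_DIR = Path("exports/2025-Q1")               # predefined folder filled on a set date
PROMPT_DIR = Path("prompts")                       # e.g. 01_effectiveness.txt, 02_dropout.txt
OUTPUT_DIR = Path("reports/2025-Q1")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

# Inline the exports as CSV text (truncated here for illustration).
data_context = "\n\n".join(
    f"--- {f.name} ---\n{f.read_text()[:50_000]}" for f in sorted(EXPORT_DIR.glob("*.csv"))
)

for prompt_file in sorted(PROMPT_DIR.glob("*.txt")):
    prompt = prompt_file.read_text() + "\n\nData:\n" + data_context
    response = model.generate_content(prompt)
    (OUTPUT_DIR / f"{prompt_file.stem}_report.md").write_text(response.text)
    print(f"Wrote {prompt_file.stem}_report.md")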

Expected outcomes: within 3–6 months, HR can realistically expect a 20–40% reduction in time spent on manual learning reports, a measurable shift of 10–20% of L&D budget into demonstrably high-impact modules, and clearer evidence linking specific learning investments to skill improvements and performance trends.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

What data does Gemini need to deliver meaningful learning insights?

At minimum, Gemini needs structured exports from your LMS: course and module metadata, completion records, and assessment results. To move beyond basic insights, it helps to add:

  • A skills or competency framework for your key roles
  • Mappings between courses and skills (which Gemini can help you build)
  • Where possible, anonymised or pseudonymised performance indicators (e.g. quality scores, sales metrics) to analyse learning impact

You do not need a perfect data warehouse to get started. Many HR teams begin with CSV/Excel exports and Google Sheets, then refine data quality as Gemini surfaces gaps or inconsistencies.

How quickly can we expect usable insights?

For a focused pilot in one capability area, many organisations can see meaningful insights within 4–8 weeks. The rough timeline is:

  • Week 1–2: Define questions, extract LMS data, set up the initial Gemini workspace
  • Week 3–4: Run first analyses (effectiveness by module, dropout patterns, skill coverage), validate findings with HR and business stakeholders
  • Week 5–8: Refine prompts and datasets, produce manager-ready reports, start adjusting programs based on evidence

Full-scale rollout across all learning programs and roles can take several months, depending on the complexity of your landscape and governance requirements, but early wins are usually achievable quickly if the scope is well defined.

Do we need a data science team in HR to use Gemini for learning analytics?

No, you do not need a full data science team in HR to benefit from Gemini-driven learning insights. Most of the work can be done by L&D or HR analytics professionals who are comfortable with:

  • Exporting data from the LMS
  • Working with spreadsheets (basic joins, cleaning)
  • Formulating clear questions and prompts for Gemini

For more advanced use cases—like integrating performance data, automating monthly reports, or embedding insights into other systems—it helps to involve IT or analytics colleagues and, ideally, an AI engineering partner. This is where Reruption often steps in: we handle the technical plumbing and prompt engineering so your HR team can focus on interpretation and action.

What ROI can we expect from Gemini-based learning analytics?

ROI typically comes from three areas: time saved, better allocation of L&D budget, and improved performance outcomes. Concretely, organisations often see:

  • 20–40% reduction in time spent on manual reporting and ad-hoc analysis
  • 10–20% of learning spend reallocated from low-impact modules to content that demonstrably improves skills
  • Clearer link between learning and performance, which strengthens the business case for targeted programs and protects L&D budgets

The exact numbers depend on your starting point, data quality and willingness to act on insights. Gemini provides the evidence; ROI is realised when HR and business leaders use that evidence to redesign programs and direct investments.

How can Reruption help us get started?

Reruption can support you from idea to working solution. With our AI PoC offering (9,900€), we start by scoping a concrete use case—such as analysing one key learning program or building a manager-ready learning insight report—then rapidly prototype it with your real data. You get a functioning prototype, performance metrics and a roadmap for scaling.

Beyond the PoC, we work as Co-Preneurs: embedding with your team, setting up data pipelines between your LMS, Google Workspace and Gemini, designing prompts and workflows, and ensuring security and compliance requirements are met. Our focus is not on slide decks, but on shipping internal tools and automations your HR and L&D teams actually use to make better decisions about learning and skills.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
