The Challenge: Manual Narrative Commentary

Every reporting cycle, Finance teams are pulled into the same grind: drafting variance explanations, management commentary, and forecast narratives by hand. Analysts copy last month’s text, tweak a few numbers, chase drivers in endless spreadsheets, and then stitch it all together in slides and documents. The result is a time-consuming writing exercise that adds little genuine insight compared to the effort invested.

Traditional approaches rely on Excel notes, email threads, and offline conversations to piece together explanations. That worked when data volumes were smaller and reporting cycles were slower. But with multiple ERPs, BI tools, bank feeds, and planning systems, there is simply too much information for manual commentary to keep up. Governance and consistency suffer: terminology drifts, different teams explain the same variance differently, and every new report format means more copy-paste work.

The business impact is substantial. Reporting cycles stretch from days into weeks, tying up highly qualified Finance staff in low-leverage writing tasks instead of scenario modelling or decision support. Executives receive generic, backward-looking commentary that they challenge in meetings, forcing analysts to improvise explanations on the spot. Opportunities to spot early warning signals, revenue leakage, or cost anomalies are missed because the team is focused on assembling text, not interrogating the numbers.

This challenge is very real—but it is also solvable. With modern AI for financial reporting, narrative commentary can be generated directly from your ERP, spreadsheets, and planning data, with humans reviewing and enriching the output instead of writing from scratch. At Reruption, we’ve seen how AI-driven automation can replace brittle, manual workflows in complex environments. In the sections below, you’ll find practical guidance on how to use Gemini to transform manual narrative commentary into a faster, more robust, insight-driven process.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first internal tools, we see a consistent pattern: the biggest gains in financial reporting automation come when AI is treated as a narrative co-pilot, not a black box. Gemini is particularly strong for this use case because it can work directly with spreadsheets, BI exports, and planning data, and it integrates deeply into Google Workspace where Finance teams already build their reports. The key is to design the right workflows, controls, and responsibilities around it.

Redefine the Role of Finance in the Reporting Process

When you bring Gemini into financial reporting, the Finance function shifts from being a producer of narrative text to a curator and challenger of AI-generated insight. Strategically, that means your target state is not “Gemini writes everything” but “Gemini drafts, Finance validates and adds judgment.” This mindset change is critical to get buy-in from experienced analysts who may fear being replaced rather than empowered.

Design your future-state process explicitly: where does Gemini ingest data, where does it draft commentary, and where do analysts intervene? For example, aim for a model where 70–80% of routine variance explanations are auto-drafted, and Finance focuses on exceptions, messaging for the board, and scenario implications. This clarifies responsibilities and reinforces that domain expertise is still at the center, just applied at a higher level.

Start with Narrow, High-Frequency Use Cases

Instead of attempting to fully automate all management commentary on day one, pick a contained, high-frequency use case such as monthly OPEX variance explanations or working capital narratives. A narrow scope makes it easier to establish prompt patterns, data connections, and governance before you expand to more complex areas like segment profitability or cash flow bridges.

In our experience, narrowing scope accelerates learning. You will quickly see where Gemini needs clearer instructions, where your data has quality gaps, and which review steps are essential. Once that pilot use case is stable and trusted by stakeholders, you can generalize the approach to additional reports and entities with far less resistance.

Design Governance and Controls Upfront

For AI-generated financial narratives, trust is non-negotiable. Before scaling Gemini, define clear policies on what content can be auto-published, what must be reviewed, and who approves final wording. Think in terms of risk tiers: low-risk internal management packs might tolerate more automation; external financial statements require tight human control and auditability.

Strategically, include Compliance, Internal Audit, and Controlling early. Align on how Gemini’s outputs will be documented, how versions are stored in Docs or Slides, and how to evidence that a human reviewed and accepted the commentary. Upfront governance reduces the risk of last-minute pushback when you’re about to go live.

Invest in Finance-Centric Prompt and Terminology Design

Gemini will only speak your organisation’s language if you teach it. Treat prompt engineering for finance and terminology curation as strategic assets, not afterthoughts. Finance leaders should help define standard structures for explanations (e.g., “what happened, why, and what we will do about it”) and the preferred tone for different audiences like the executive committee versus plant managers.

Capture your internal glossary—cost center names, project codes, product families, and KPI definitions—and bake this into your Gemini instructions and shared templates. Over time, this creates a consistent, recognisable voice across all Finance reporting, even as AI does more of the first draft work.
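As a concrete illustration, here is a minimal Python sketch of how such a glossary could be rendered into a reusable instruction block that is prepended to every Gemini prompt. The cost-center codes, KPI definitions, and the build_terminology_instructions helper are hypothetical placeholders, not your actual glossary:

# Hypothetical internal glossary: replace with your real cost centers, KPIs, and codes.
GLOSSARY = {
    "cost_centers": {"CC-4100": "Group Logistics", "CC-2300": "Central Marketing"},
    "kpis": {
        "OPEX": "Operating expenses excluding depreciation and one-offs",
        "DSO": "Days sales outstanding, calculated on a rolling 3-month basis",
    },
    "tone": "Neutral, factual, no blame; write for an executive audience.",
}

def build_terminology_instructions(glossary: dict) -> str:
    """Render the glossary into a text block that can be prepended to every
    Gemini prompt so terminology and tone stay consistent across reports."""
    lines = ["Use the following internal terminology exactly as defined:"]
    for code, name in glossary["cost_centers"].items():
        lines.append(f"- Cost center {code} is always referred to as '{name}'.")
    for kpi, definition in glossary["kpis"].items():
        lines.append(f"- {kpi}: {definition}")
    lines.append(f"Tone: {glossary['tone']}")
    return "\n".join(lines)

print(build_terminology_instructions(GLOSSARY))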

Prepare Your Team for an AI-Augmented Workflow

Rolling out Gemini in Finance is as much a change initiative as a technology project. Analysts and controllers need to be comfortable critiquing and refining AI-generated text, not just producing commentary from scratch. That requires training, psychological safety to challenge the tool, and clarity on what “good” AI output looks like.

Set expectations that early drafts may be rough and that continuous feedback will improve quality. Create feedback loops where Finance users share examples of good and bad outputs, and adjust prompts and data mappings accordingly. When people feel they co-own the system, resistance drops and adoption accelerates.

Used thoughtfully, Gemini can turn manual narrative commentary from a repetitive writing chore into a fast, data-driven insight engine for Finance. The value comes not from replacing your experts, but from freeing them to focus on interpretation, actions, and scenarios rather than drafting text. With Reruption’s blend of AI engineering depth and hands-on delivery, we help Finance teams design and implement these Gemini-powered workflows so they’re robust, compliant, and actually used. If you’re exploring how to automate your financial narratives, we’re happy to help you test the approach on a concrete reporting use case and scale from there.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Healthcare to Payments: Learn how companies successfully use Gemini.

UC San Francisco Health

Healthcare

At UC San Francisco Health (UCSF Health), one of the nation's leading academic medical centers, clinicians grappled with immense documentation burdens. Physicians spent nearly two hours on electronic health record (EHR) tasks for every hour of direct patient care, contributing to burnout and reduced patient interaction. This was exacerbated in high-acuity settings like the ICU, where sifting through vast, complex data streams for real-time insights was manual and error-prone, delaying critical interventions for patient deterioration. The lack of integrated tools meant predictive analytics were underutilized, with traditional rule-based systems failing to capture nuanced patterns in multimodal data (vitals, labs, notes). This led to missed early warnings for sepsis or deterioration, longer lengths of stay, and suboptimal outcomes in a system handling millions of encounters annually. UCSF sought to reclaim clinician time while enhancing decision-making precision.

Solution

UCSF Health built a secure, internal AI platform leveraging generative AI (LLMs) for "digital scribes" that auto-draft notes, messages, and summaries, integrated directly into their Epic EHR using GPT-4 via Microsoft Azure. For predictive needs, they deployed ML models for real-time ICU deterioration alerts, processing EHR data to forecast risks like sepsis. Partnering with H2O.ai for Document AI, they automated unstructured data extraction from PDFs and scans, feeding into both scribe and predictive pipelines. A clinician-centric approach ensured HIPAA compliance, with models trained on de-identified data and human-in-the-loop validation to overcome regulatory hurdles. This holistic solution addressed both administrative drag and clinical foresight gaps.

Results

  • 50% reduction in after-hours documentation time
  • 76% faster note drafting with digital scribes
  • 30% improvement in ICU deterioration prediction accuracy
  • 25% decrease in unexpected ICU transfers
  • 2x increase in clinician-patient face time
  • 80% automation of referral document processing
Read case study →

Mastercard

Payments

In the high-stakes world of digital payments, card-testing attacks emerged as a critical threat to Mastercard's ecosystem. Fraudsters deploy automated bots to probe stolen card details through micro-transactions across thousands of merchants, validating credentials for larger fraud schemes. Traditional rule-based and machine learning systems often detected these only after initial tests succeeded, allowing billions in annual losses and disrupting legitimate commerce. The subtlety of these attacks—low-value, high-volume probes mimicking normal behavior—overwhelmed legacy models, exacerbated by fraudsters' use of AI to evade patterns. As transaction volumes exploded post-pandemic, Mastercard faced mounting pressure to shift from reactive to proactive fraud prevention. False positives from overzealous alerts led to declined legitimate transactions, eroding customer trust, while sophisticated attacks like card-testing evaded detection in real-time. The company needed a solution to identify compromised cards preemptively, analyzing vast networks of interconnected transactions without compromising speed or accuracy.

Solution

Mastercard's Decision Intelligence (DI) platform integrated generative AI with graph-based machine learning to revolutionize fraud detection. Generative AI simulates fraud scenarios and generates synthetic transaction data, accelerating model training and anomaly detection by mimicking rare attack patterns that real data lacks. Graph technology maps entities like cards, merchants, IPs, and devices as interconnected nodes, revealing hidden fraud rings and propagation paths in transaction graphs. This hybrid approach processes signals at unprecedented scale, using gen AI to prioritize high-risk patterns and graphs to contextualize relationships. Implemented via Mastercard's AI Garage, it enables real-time scoring of card compromise risk, alerting issuers before fraud escalates. The system combats card-testing by flagging anomalous testing clusters early. Deployment involved iterative testing with financial institutions, leveraging Mastercard's global network for robust validation while ensuring explainability to build issuer confidence.

Results

  • 2x faster detection of potentially compromised cards
  • Up to 300% boost in fraud detection effectiveness
  • Doubled rate of proactive compromised card notifications
  • Significant reduction in fraudulent transactions post-detection
  • Minimized false declines on legitimate transactions
  • Real-time processing of billions of transactions
Read case study →

Mass General Brigham

Healthcare

Mass General Brigham, one of the largest healthcare systems in the U.S., faced a deluge of medical imaging data from radiology, pathology, and surgical procedures. With millions of scans annually across its 12 hospitals, clinicians struggled with analysis overload, leading to delays in diagnosis and increased burnout rates among radiologists and surgeons. The need for precise, rapid interpretation was critical, as manual reviews limited throughput and risked errors in complex cases like tumor detection or surgical risk assessment. Additionally, operative workflows required better predictive tools. Surgeons needed models to forecast complications, optimize scheduling, and personalize interventions, but fragmented data silos and regulatory hurdles impeded progress. Staff shortages exacerbated these issues, demanding decision support systems to alleviate cognitive load and improve patient outcomes.

Solution

To address these, Mass General Brigham established a dedicated Artificial Intelligence Center, centralizing research, development, and deployment of hundreds of AI models focused on computer vision for imaging and predictive analytics for surgery. This enterprise-wide initiative integrates ML into clinical workflows, partnering with tech giants like Microsoft for foundation models in medical imaging. Key solutions include deep learning algorithms for automated anomaly detection in X-rays, MRIs, and CTs, reducing radiologist review time. For surgery, predictive models analyze patient data to predict post-op risks, enhancing planning. Robust governance frameworks ensure ethical deployment, addressing bias and explainability.

Results

  • $30 million AI investment fund established
  • Hundreds of AI models managed for radiology and pathology
  • Improved diagnostic throughput via AI-assisted radiology
  • AI foundation models developed through Microsoft partnership
  • Initiatives for AI governance in medical imaging deployed
  • Reduced clinician workload and burnout through decision support
Read case study →

Goldman Sachs

Investment Banking

In the fast-paced investment banking sector, Goldman Sachs employees grapple with overwhelming volumes of repetitive tasks. Daily routines like processing hundreds of emails, writing and debugging complex financial code, and poring over lengthy documents for insights consume up to 40% of work time, diverting focus from high-value activities like client advisory and deal-making. Regulatory constraints exacerbate these issues, as sensitive financial data demands ironclad security, limiting off-the-shelf AI use. Traditional tools fail to scale with the need for rapid, accurate analysis amid market volatility, risking delays in response times and competitive edge.

Solution

Goldman Sachs countered with a proprietary generative AI assistant, fine-tuned on internal datasets in a secure, private environment. This tool summarizes emails by extracting action items and priorities, generates production-ready code for models like risk assessments, and analyzes documents to highlight key trends and anomalies. Built from early 2023 proofs-of-concept, it leverages custom LLMs to ensure compliance and accuracy, enabling natural language interactions without external data risks. The firm prioritized employee augmentation over replacement, training staff for optimal use.

Results

  • Rollout Scale: 10,000 employees in 2024
  • Timeline: PoCs 2023; initial rollout 2024; firmwide 2025
  • Productivity Boost: Routine tasks streamlined, est. 25-40% time savings on emails/coding/docs
  • Adoption: Rapid uptake across tech and front-office teams
  • Strategic Impact: Core to 10-year AI playbook for structural gains
Read case study →

FedEx

Logistics

FedEx faced suboptimal truck routing challenges in its vast logistics network, where static planning led to excess mileage, inflated fuel costs, and higher labor expenses. With millions of packages handled daily across complex routes, traditional methods struggled with real-time variables like traffic, weather disruptions, and fluctuating demand, resulting in inefficient vehicle utilization and delayed deliveries. These inefficiencies not only drove up operational costs but also increased carbon emissions and undermined customer satisfaction in a highly competitive shipping industry. Scaling solutions for dynamic optimization across thousands of trucks required advanced computational approaches beyond conventional heuristics.

Solution

Machine learning models integrated with heuristic optimization algorithms formed the core of FedEx's AI-driven route planning system, enabling dynamic route adjustments based on real-time data feeds including traffic, weather, and package volumes. The system employs deep learning for predictive analytics alongside heuristics like genetic algorithms to solve the vehicle routing problem (VRP) efficiently, balancing loads and minimizing empty miles. Implemented as part of FedEx's broader AI supply chain transformation, the solution dynamically reoptimizes routes throughout the day, incorporating sense-and-respond capabilities to adapt to disruptions and enhance overall network efficiency.

Results

  • 700,000 excess miles eliminated daily from truck routes
  • Multi-million dollar annual savings in fuel and labor costs
  • Improved delivery time estimate accuracy via ML models
  • Enhanced operational efficiency reducing costs industry-wide
  • Boosted on-time performance through real-time optimizations
  • Significant reduction in carbon footprint from mileage savings
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect Gemini to Clean, Structured Finance Data

The quality of AI-generated financial commentary depends heavily on the inputs. Start by defining the exact data tables Gemini will use: ERP exports (e.g., GL by account and cost center), BI cubes (e.g., revenue by product and region), planning files, and bank feeds. Standardise column names and ensure that key identifiers are consistent across sources to avoid confusion in explanations.

In practice, this often means setting up a recurring export or Google Sheets data connector that produces a single, tidy table per reporting view (e.g., “P&L vs budget by cost center, current month and YTD”). Gemini can then be instructed to read only from that sheet or range when drafting commentary. The more deterministic and repeatable your data pipeline, the more stable your narratives will be.
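As an illustration, here is a minimal Python sketch (using pandas) of how such a tidy reporting table could be produced from raw exports. The file names and column labels are assumptions and will differ in your ERP and planning landscape:

import pandas as pd

# Hypothetical file names and column labels - adapt to your ERP/BI export format.
actuals = pd.read_csv("erp_actuals_2024_06.csv")   # columns: cost_center, account, amount
budget = pd.read_csv("budget_2024.csv")            # columns: cost_center, account, month, amount

# Standardise column names so every reporting view uses the same identifiers.
actuals = actuals.rename(columns={"amount": "actual"})
budget = budget.loc[budget["month"] == "2024-06"].rename(columns={"amount": "budget"})

# One tidy row per cost center and account, with variance precomputed.
tidy = (
    actuals.merge(
        budget[["cost_center", "account", "budget"]],
        on=["cost_center", "account"],
        how="outer",
    )
    .fillna(0)
)
tidy["variance"] = tidy["actual"] - tidy["budget"]
tidy["variance_pct"] = tidy["variance"] / tidy["budget"].where(tidy["budget"] != 0)

# Export a single, sheet-ready table that Gemini is instructed to read from.
tidy.to_csv("pnl_vs_budget_by_cost_center_2024_06.csv", index=False)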

Create Reusable Prompt Templates for Variance Explanations

Instead of prompting Gemini ad hoc in every cycle, define standard templates for the core commentary types you need: P&L variances, balance sheet movements, cash flow changes, and forecast updates. Store these prompts centrally (e.g. in a shared Google Doc) and use them consistently across entities and months.

Example prompt for OPEX variance commentary in Google Docs:

You are a senior finance analyst for our company.

Context:
- You receive a table with actuals, budget, and variance by cost center and account for the current month and YTD.
- You also receive prior-month commentary as reference.

Task:
1. Identify the top 5 positive and top 5 negative variances by absolute value and by % versus budget.
2. For each, draft a concise explanation that covers:
   - What happened (1-2 sentences)
   - Why it happened (drivers, e.g., volume, price, one-offs, timing)
   - Whether it is expected to continue or is a one-off
3. Group the explanations into themes (e.g. "Personnel", "Logistics", "Marketing") to avoid repetition.
4. Use our internal terminology and tone:
   - Neutral, factual, no blame
   - Refer to cost centers by their official names from the table
   - Avoid generic phrases like "various factors" - be specific.

Input data will follow this structure:
[Paste or reference range from Google Sheets here]

By standardising prompts like this, you make commentary production predictable, reviewable, and easier to refine over time.
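If you want to drive the same template programmatically rather than through the Workspace side panel, a minimal sketch using the google-generativeai Python SDK could look like the following. The model name, input file name, and the shortened template text are assumptions; in practice you would paste in the full template from your shared Doc:

import os
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Abbreviated version of the stored template; the data table is pasted in as CSV text.
PROMPT_TEMPLATE = """You are a senior finance analyst for our company.
Identify the top 5 positive and top 5 negative variances versus budget,
and explain each one: what happened, why, and whether it will continue.

Input data (actual vs budget by cost center and account):
{data_table}
"""

def draft_commentary(data_table_csv: str) -> str:
    # Model name is an assumption; use whichever Gemini model your plan provides.
    model = genai.GenerativeModel("gemini-1.5-pro")
    response = model.generate_content(PROMPT_TEMPLATE.format(data_table=data_table_csv))
    return response.text

with open("pnl_vs_budget_by_cost_center_2024_06.csv") as f:
    print(draft_commentary(f.read()))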

Use Prior Commentary as Context, Not as a Template

Finance teams often rely on last month’s commentary as a starting point. With Gemini, you can do this more intelligently by providing prior narratives as context rather than copying them manually. This helps the model preserve continuity of messaging while still reacting to new data.

Example prompt pattern:

You will receive:
1) Current month variance table (actual vs budget vs last year)
2) Last month's commentary for the same cost centers

Task:
- Draft new commentary that:
  - Reuses framing and terminology from last month when trends are unchanged
  - Updates explanations where variances have changed meaningfully
  - Highlights when a previously flagged risk has materialised or faded

Important:
- Do not repeat last month's text verbatim.
- Focus on what materially changed and why.

This approach keeps narratives consistent without falling into the trap of copy-paste reporting that ignores new signals.
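A minimal sketch of how this pattern could be assembled in Python before sending it to Gemini; the file names for the tidy export and last month's saved narrative are hypothetical:

# Combine last month's narrative with the current variance table so Gemini
# can preserve continuity without copying text verbatim.
def build_context_prompt(current_table_csv: str, prior_commentary: str) -> str:
    return (
        "Current month variance table (actual vs budget vs last year):\n"
        f"{current_table_csv}\n\n"
        "Last month's commentary for the same cost centers:\n"
        f"{prior_commentary}\n\n"
        "Draft new commentary. Reuse last month's framing where trends are unchanged, "
        "update explanations where variances changed meaningfully, and flag any "
        "previously mentioned risk that has materialised or faded. "
        "Do not repeat last month's text verbatim."
    )

# Hypothetical sources: the tidy export from above plus last month's saved narrative.
prompt = build_context_prompt(
    open("pnl_vs_budget_by_cost_center_2024_06.csv").read(),
    open("commentary_2024_05.txt").read(),
)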

Embed Gemini Directly into Docs and Slides Workflows

To make AI in Finance reporting stick, integrate Gemini where your team already works: Google Docs for management narratives and Google Slides for board packs. Use Gemini for Workspace (e.g., Gemini side panel) to generate, refine, and translate commentary in place instead of switching between tools.

Practical workflow in Slides for a monthly performance deck:

  • Populate a summary slide with the key metrics and a small data table or chart.
  • Select the slide notes area and open Gemini in the side panel.
  • Prompt Gemini with context about the audience (e.g. Executive Committee) and ask it to draft 3 bullet points that explain the variance and recommended actions.
  • Review, adjust the wording, and elevate 1–2 bullets to the slide body if needed.

This keeps commentary generation tightly linked to the final deliverables and reduces manual reformatting.

Introduce a Structured Review and Approval Checklist

AI output must be validated systematically, not just skimmed. Define a simple checklist that reviewers follow for each set of Gemini-generated narratives: data accuracy, causal logic, consistency with known business events, and alignment with messaging guidelines.

Example checklist embedded at the top of a Google Doc:

Reviewer checklist for AI-generated commentary:
[ ] All key figures match the source reports (sample-checked)
[ ] Explanations reference real, known business drivers
[ ] No speculative attributions without supporting evidence
[ ] Tone is neutral, factual, and aligned with Finance guidelines
[ ] Material risks or opportunities are clearly highlighted
[ ] No sensitive information beyond the intended audience scope

Having reviewers explicitly tick these off lowers the risk of subtle errors slipping into senior management reports.

Track Impact with Clear KPIs and Time-Logging

To prove the value of automated narrative commentary, track a few simple but concrete KPIs from the start. Common metrics include average time from data availability to first draft of commentary, number of analyst hours spent per cycle on narrative writing, number of review iterations, and frequency of board or management challenges on data accuracy.

Have analysts log their time spent on commentary tasks before and after Gemini rollout for a couple of cycles. Aim for realistic improvements, such as reducing manual drafting time by 40–60% and cutting report production cycles from several days to less than a day for internal packs. These numbers build confidence internally and help justify further investment in AI-enabled Finance processes.
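As a simple illustration, here is a short pandas sketch of how such a time log could be summarised into a before/after comparison. The log file, column names, and rollout cycle are assumptions:

import pandas as pd

# Hypothetical time log kept by analysts: one row per analyst, cycle, and task.
# Columns: cycle (e.g. "2024-05"), analyst, task ("drafting" or "review"), hours
log = pd.read_csv("commentary_time_log.csv")

drafting_hours = (
    log[log["task"] == "drafting"]
    .groupby("cycle")["hours"]
    .sum()
    .sort_index()
)

# Assumed rollout cycle; cycles before it form the manual baseline.
ROLLOUT_CYCLE = "2024-06"
baseline = drafting_hours[drafting_hours.index < ROLLOUT_CYCLE].mean()
current = drafting_hours[drafting_hours.index >= ROLLOUT_CYCLE].mean()

print(f"Avg drafting hours per cycle before rollout: {baseline:.1f}")
print(f"Avg drafting hours per cycle after rollout:  {current:.1f}")
print(f"Reduction: {1 - current / baseline:.0%}")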

With these practices in place, most Finance teams can expect to reduce manual narrative drafting effort by roughly half within a few reporting cycles, while improving the consistency and clarity of insights delivered to leadership. Over time, this unlocks more capacity for value-adding work like scenario analysis, strategic planning, and partnering with the business on decisions—not just explaining last month’s numbers.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Gemini takes structured inputs such as ERP exports, BI reports, or Google Sheets tables and uses large language models to turn them into human-readable explanations. In practice, you feed Gemini data (for example, P&L vs budget by cost center) along with clear instructions on what to explain and how to structure the commentary. Gemini then identifies key variances, summarises drivers, and drafts text in the desired tone.

The Finance team remains in control: they define the prompts, provide context (e.g. known one-offs, projects, or pricing changes), and review the output before it goes into management reports or board decks. Gemini is the drafting engine, not the final decision-maker.

You typically need three ingredients: Finance domain experts who know how commentary should read, a data owner who can provide clean exports from ERP/BI tools, and someone with light technical skills to configure prompts and workflows in Google Workspace. You do not need a full data science team to get started.

Reruption normally works with a small core team—often a head of Controlling, 1–2 analysts, and an IT contact—to design templates, set up data flows (often via Google Sheets or a simple data pipeline), and embed Gemini into Docs/Slides. From there, Finance users can maintain and improve prompts themselves with minimal support.

For a scoped use case like monthly OPEX or revenue variance commentary, you can usually see tangible results within one to three reporting cycles. The first cycle is about setting up data inputs, prompt templates, and review checklists. By the second or third cycle, the workflow stabilises and the team starts to trust and rely on Gemini drafts.

Our experience with similar AI automation projects shows that a focused proof of concept can be built in days, not months, and then hardened over a few iterations. That means you don’t need to wait for a big transformation programme to benefit; you can target specific reports and expand progressively.

The most direct ROI comes from time savings for analysts and controllers. Many Finance teams spend several person-days per cycle on drafting and polishing commentary. With Gemini automating the first draft, that time can often be cut by 40–60%, freeing capacity for analysis, stakeholder discussions, and planning.

There are also qualitative benefits: more consistent messaging across reports, fewer last-minute rewrites for leadership, and improved ability to surface meaningful trends instead of boilerplate explanations. When you factor in reduced reporting delays and better decision support, the business case tends to be compelling even before you reach full-scale automation.

Reruption supports Finance teams from idea to working solution. With our AI PoC offering (9.900€), we can quickly validate whether Gemini can generate useful, accurate commentary from your actual ERP and spreadsheet data. You get a functioning prototype, performance metrics, and a concrete implementation roadmap instead of slideware.

Beyond the PoC, we work as Co-Preneurs: embedded alongside your Finance and IT teams to design prompts, set up data flows, implement governance, and integrate Gemini into Google Docs and Slides. Our engineers build the automations, while Finance leaders shape the narrative structures and controls. The goal is not just a demo, but a deployed workflow that reliably shortens your reporting cycles and raises the quality of your management insight.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
