The Challenge: Time-Consuming Sales Data Entry

Modern sales teams live in email, calls, video meetings, LinkedIn, and chat. Every interaction produces valuable information – contacts, decision-makers, deal size, objections, next steps – but getting that data into the CRM is slow and painful. Reps jump between inbox, call notes, and forms, manually copying names, companies, dates, and outcomes. The result: hours lost each week to low-value admin work instead of conversations with customers.

Traditional fixes don’t solve the problem. Asking reps to "be more disciplined" with CRM hygiene just adds pressure without removing friction. More training, more mandatory fields, or strict CRM policing usually backfire: adoption drops, shortcuts increase, and the best-performing reps resist the system the most. Generic automation rules and basic email plugins help a little, but they can’t reliably interpret unstructured sales conversations or update complex deal records without human intervention.

The business impact is significant. Incomplete or outdated CRM data leads to poor pipeline visibility, unreliable forecasting, and weak prioritization. Sales managers spend time chasing updates instead of coaching. Operations teams can’t trust the numbers. Marketing can’t segment properly. And most critically, high-performing reps feel like data clerks, which hurts morale and increases churn. Every minute they spend typing into the CRM is a minute not spent moving deals forward, which compounds into lost revenue and slower growth.

The good news: this is exactly the type of repetitive, pattern-heavy work that modern AI for sales productivity can automate. With tools like Gemini, it’s now possible to extract entities from emails, calls, and forms and auto-populate CRM fields at scale, without turning your stack upside down. At Reruption, we’ve seen how targeted AI automations can turn messy, manual processes into clean, reliable data flows. In the rest of this page, you’ll find practical guidance to tackle time-consuming data entry with Gemini and give your reps their time back.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.

Innovators at these companies trust us:

Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building real-world AI solutions for sales teams, we see a clear pattern: the biggest gains rarely come from "smarter" reps, but from removing the invisible admin that slows them down. Gemini, embedded into existing sales workflows across the Google ecosystem, is well-suited to turn unstructured inputs – emails, meeting summaries, web forms – into structured CRM updates. Our perspective: treat Gemini not as a shiny chatbot, but as a quiet background engine for automated sales data entry that frees capacity and improves data quality in one move.

Start with a Clear Definition of “Good Data” in Sales

Before you introduce Gemini into your sales workflows, align on what "good" CRM data actually means for your organisation. Sales, RevOps, and management should define the minimum viable dataset for a qualified opportunity: which entities (contact, company, role, deal size, stage, next step, close date) are mandatory, which are optional, and what "done" looks like after a call or email thread.

This clarity is critical because AI-powered data capture will faithfully automate whatever you design. If your current fields are bloated or inconsistent, Gemini will only accelerate the chaos. Use this as an opportunity to simplify: fewer, cleaner fields and explicit rules (e.g., how to record multi-contact deals) will dramatically improve the impact of automation.

Think in Workflows, Not Features

Many teams evaluate Gemini as a standalone tool instead of mapping it to end-to-end sales workflows. The real value comes when Gemini sits inside concrete journeys: "inbound lead → email reply → discovery call → proposal" or "outbound sequence → LinkedIn touch → meeting booked". For each journey, identify where unstructured information appears (emails, meeting notes, forms) and where structured data is needed (CRM, Google Sheets dashboards, BI tools).

Once you see the workflow, you can design Gemini’s role: extract entities from emails, summarise calls, classify intent, or propose next steps – and then push those outputs into your CRM and reporting tools. This mindset prevents random pilots and leads to targeted AI for sales productivity that actually shows up in your pipeline metrics.

Prepare Your Team for a Co-Pilot, Not a Replacement

Sales teams are rightfully skeptical of new tools that promise to "automate everything". Position Gemini as a sales co-pilot that removes tedious data entry but keeps reps in control. For example, design workflows where Gemini drafts CRM updates and call notes, but reps quickly review and confirm before saving. This keeps trust high while still cutting admin time by 50% or more.

Invest a bit in enablement: short Loom videos, live demos, and clear "before/after" examples help reps understand how Gemini will help them sell more and report less. When they see that accurate notes appear automatically and fields are pre-filled, adoption becomes a pull, not a push.

Design for Risk Mitigation and Data Governance from Day One

Automating sales data entry with AI touches customer information, internal notes, and sometimes sensitive deal details. You need a clear stance on data privacy, logging, and access controls. Decide which data Gemini can process, where prompts and outputs are stored, and who can configure or change automations. Align with your security and legal teams early to avoid late-stage blockers.

From a risk perspective, start with low-risk, high-volume use cases: extracting company names, roles, and meeting dates from emails, or generating neutral call summaries. Once accuracy and governance are proven, you can extend to more sensitive fields such as budget indicators or risk flags. Building trust through gradual rollout is part of responsible AI adoption in sales.

Measure Productivity and Data Quality, Not Just “AI Usage”

It’s easy to celebrate that "Gemini answered 5,000 prompts" and still have no idea if your sales team is more productive. Define success metrics up front: reduction in average time to log an activity, increase in percentage of opportunities with complete core fields, fewer "unknown" values in key reports, or improved forecast accuracy.

Track both quantitative and qualitative signals. Quantitatively, watch changes in time-to-update, activity logging rates, and data completeness. Qualitatively, run short surveys with reps and managers about time saved, trust in data, and perceived friction. These insights will help you iterate your AI-driven sales workflows and decide where to extend Gemini next.

Used thoughtfully, Gemini can remove a large chunk of the manual data entry that’s draining your sales team, while simultaneously lifting CRM accuracy and pipeline visibility. The key is to start with well-defined workflows, governance, and success metrics, then let AI quietly handle the repetitive work in the background. At Reruption, we specialise in turning these ideas into working automations – from quick PoCs to robust, secure integrations – and are happy to explore what an AI co-pilot for your sales data could look like in your environment.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Logistics to Healthcare: Learn how companies successfully use AI.

DHL

Logistics

DHL, a global logistics giant, faced significant challenges from vehicle breakdowns and suboptimal maintenance schedules. Unpredictable failures in its vast fleet of delivery vehicles led to frequent delivery delays, increased operational costs, and frustrated customers. Traditional reactive maintenance—fixing issues only after they occurred—resulted in excessive downtime, with vehicles sidelined for hours or days, disrupting supply chains worldwide. Inefficiencies were compounded by varying fleet conditions across regions, making scheduled maintenance inefficient and wasteful, often over-maintaining healthy vehicles while under-maintaining others at risk. These issues not only inflated maintenance costs by up to 20% in some segments but also eroded customer trust through unreliable deliveries. With rising e-commerce demands, DHL needed a proactive approach to predict failures before they happened, minimizing disruptions in a highly competitive logistics industry.

Solution

DHL implemented a predictive maintenance system leveraging IoT sensors installed on vehicles to collect real-time data on engine performance, tire wear, brakes, and more. This data feeds into machine learning models that analyze patterns, predict potential breakdowns, and recommend optimal maintenance timing. The AI solution integrates with DHL's existing fleet management systems, using algorithms like random forests and neural networks for anomaly detection and failure forecasting. Overcoming data silos and integration challenges, DHL partnered with tech providers to deploy edge computing for faster processing. Pilot programs in key hubs expanded globally, shifting from time-based to condition-based maintenance, ensuring resources focus on high-risk assets.

Results

  • Vehicle downtime reduced by 15%
  • Maintenance costs lowered by 10%
  • Unplanned breakdowns decreased by 25%
  • On-time delivery rate improved by 12%
  • Fleet availability increased by 20%
  • Overall operational efficiency up 18%
Read case study →

Waymo (Alphabet)

Transportation

Developing fully autonomous ride-hailing demanded overcoming extreme challenges in AI reliability for real-world roads. Waymo needed to master perception – detecting objects in fog, rain, night, or occlusions using sensors alone – while predicting erratic human behaviors like jaywalking or sudden lane changes. Planning complex trajectories in dense, unpredictable urban traffic, and precise control to execute maneuvers without collisions, required near-perfect accuracy, as a single failure could be catastrophic. Scaling from tests to commercial fleets introduced hurdles like handling edge cases (e.g., school buses with stop signs, emergency vehicles), regulatory approvals across cities, and public trust amid scrutiny. Incidents like failing to stop for school buses highlighted software gaps, prompting recalls. Massive data needs for training, compute-intensive models, and geographic adaptation (e.g., right-hand vs. left-hand driving) compounded issues, with competitors struggling with scalability.

Solution

The Waymo Driver stack integrates deep learning end-to-end: perception fuses lidar, radar, and cameras via convolutional neural networks (CNNs) and transformers for 3D object detection, tracking, and semantic mapping with high fidelity. Prediction models forecast multi-agent behaviors using graph neural networks and video transformers trained on billions of simulated and real miles. For planning, Waymo applied scaling laws – larger models with more data and compute yield power-law gains in forecasting accuracy and trajectory quality – shifting from rule-based to ML-driven motion planning for human-like decisions. Control employs reinforcement learning and model-predictive control hybridized with neural policies for smooth, safe execution. Vast datasets from 96M+ autonomous miles, plus simulations, enable continuous improvement; recent AI strategy emphasizes modular, scalable stacks.

Results

  • 450,000+ weekly paid robotaxi rides (Dec 2025)
  • 96 million autonomous miles driven (through June 2025)
  • 3.5x better avoiding injury-causing crashes vs. humans
  • 2x better avoiding police-reported crashes vs. humans
  • Over 71M miles with detailed safety crash analysis
  • 250,000 weekly rides (April 2025 baseline, since doubled)
Read case study →

Bank of America

Banking

Bank of America faced a high volume of routine customer inquiries, such as account balances, payments, and transaction histories, overwhelming traditional call centers and support channels. With millions of daily digital banking users, the bank struggled to provide 24/7 personalized financial advice at scale, leading to inefficiencies, longer wait times, and inconsistent service quality. Customers demanded proactive insights beyond basic queries, like spending patterns or financial recommendations, but human agents couldn't handle the sheer scale without escalating costs. Additionally, ensuring conversational naturalness in a regulated industry like banking posed challenges, including compliance with financial privacy laws, accurate interpretation of complex queries, and seamless integration into the mobile app without disrupting user experience. The bank needed to balance AI automation with human-like empathy to maintain trust and high satisfaction scores.

Solution

Bank of America developed Erica, an in-house NLP-powered virtual assistant integrated directly into its mobile banking app, leveraging natural language processing and predictive analytics to handle queries conversationally. Erica acts as a gateway for self-service, processing routine tasks instantly while offering personalized insights, such as cash flow predictions or tailored advice, using client data securely. The solution evolved from a basic navigation tool to a sophisticated AI, incorporating generative AI elements for more natural interactions and escalating complex issues to human agents seamlessly. Built with a focus on in-house language models, it ensures control over data privacy and customization, driving enterprise-wide AI adoption while enhancing digital engagement.

Results

  • 3+ billion total client interactions since 2018
  • Nearly 50 million unique users assisted
  • 58+ million interactions per month (2025)
  • 2 billion interactions reached by April 2024 (doubled from 1B in 18 months)
  • 42 million clients helped by 2024
  • 19% earnings spike linked to efficiency gains
Read case study →

NYU Langone Health

Healthcare

NYU Langone Health, a leading academic medical center, faced significant hurdles in leveraging the vast amounts of unstructured clinical notes generated daily across its network. Traditional clinical predictive models relied heavily on structured data like lab results and vitals, but these required complex ETL processes that were time-consuming and limited in scope. Unstructured notes, rich with nuanced physician insights, were underutilized due to challenges in natural language processing, hindering accurate predictions of critical outcomes such as in-hospital mortality, length of stay (LOS), readmissions, and operational events like insurance denials. Clinicians needed real-time, scalable tools to identify at-risk patients early, but existing models struggled with the volume and variability of EHR data—over 4 million notes spanning a decade. This gap led to reactive care, increased costs, and suboptimal patient outcomes, prompting the need for an innovative approach to transform raw text into actionable foresight.

Solution

To address these challenges, NYU Langone's Division of Applied AI Technologies at the Center for Healthcare Innovation and Delivery Science developed NYUTron, a proprietary large language model (LLM) specifically trained on internal clinical notes. Unlike off-the-shelf models, NYUTron was fine-tuned on unstructured EHR text from millions of encounters, enabling it to serve as an all-purpose prediction engine for diverse tasks. The solution involved pre-training a 13-billion-parameter LLM on over 10 years of de-identified notes (approximately 4.8 million inpatient notes), followed by task-specific fine-tuning. This allowed seamless integration into clinical workflows, automating risk flagging directly from physician documentation without manual data structuring. Collaborative efforts, including AI 'Prompt-a-Thons,' accelerated adoption by engaging clinicians in model refinement.

Results

  • AUROC: 0.961 for 48-hour mortality prediction (vs. 0.938 benchmark)
  • 92% accuracy in identifying high-risk patients from notes
  • LOS prediction AUROC: 0.891 (5.6% improvement over prior models)
  • Readmission prediction: AUROC 0.812, outperforming clinicians in some tasks
  • Operational predictions (e.g., insurance denial): AUROC up to 0.85
  • 24 clinical tasks with superior performance across mortality, LOS, and comorbidities
Read case study →

Kaiser Permanente

Healthcare

In hospital settings, adult patients on general wards often experience clinical deterioration without adequate warning, leading to emergency transfers to intensive care, increased mortality, and preventable readmissions. Kaiser Permanente Northern California faced this issue across its network, where subtle changes in vital signs and lab results went unnoticed amid high patient volumes and busy clinician workflows. This resulted in elevated adverse outcomes, including higher-than-necessary death rates and 30-day readmissions. Traditional early warning scores like MEWS (Modified Early Warning Score) were limited by manual scoring and poor predictive accuracy for deterioration within 12 hours, failing to leverage the full potential of electronic health record (EHR) data. The challenge was compounded by alert fatigue from less precise systems and the need for a scalable solution across 21 hospitals serving millions.

Solution

Kaiser Permanente developed the Advance Alert Monitor (AAM), an AI-powered early warning system using predictive analytics to analyze real-time EHR data – including vital signs, labs, and demographics – to identify patients at high risk of deterioration within the next 12 hours. The model generates a risk score and automated alerts integrated into clinicians' workflows, prompting timely interventions like physician reviews or rapid response teams. Implemented since 2013 in Northern California, AAM employs machine learning algorithms trained on historical data to outperform traditional scores, with explainable predictions to build clinician trust. It was rolled out hospital-wide, addressing integration challenges through Epic EHR compatibility and clinician training to minimize fatigue.

Results

  • 16% lower mortality rate in AAM intervention cohort
  • 500+ deaths prevented annually across network
  • 10% reduction in 30-day readmissions
  • Identifies deterioration risk within 12 hours with high reliability
  • Deployed in 21 Northern California hospitals
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Use Gemini to Extract CRM Fields Directly from Sales Emails

One of the fastest wins is to let Gemini read incoming and outgoing sales emails and extract key entities: contact name, company, role, topic, deal value indications, and agreed next steps. With Google Workspace, you can trigger Gemini from Gmail (via Apps Script or a Chrome extension) to parse selected email threads and produce a structured JSON or table of fields.

Give Gemini explicit instructions for your CRM schema so it knows how to map email content into fields like Opportunity Name, Close Date, Stage, and Next Action. Then, use connectors or middleware (e.g., Make, Zapier, custom APIs) to push this structured output into your CRM record automatically or as a draft for rep review.

Example prompt to Gemini (used from Apps Script or an integration tool):
You are a sales CRM assistant. Read the email thread below and output JSON
with the following fields:
- contact_name
- contact_email
- company_name
- job_title (if mentioned)
- opportunity_name
- estimated_deal_value (number, or null if unknown)
- main_need_or_pain
- next_step (short text)
- next_step_due_date (ISO format if a date is mentioned)

Email thread:
{{email_thread_text}}

Expected outcome: for typical B2B emails, 70–90% of core opportunity fields are pre-filled, and reps only adjust edge cases. This can easily save 2–5 minutes per email thread and dramatically improve data completeness.
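
If you wire this up in Apps Script, the glue code can stay small. The sketch below is illustrative only: it assumes the public Gemini REST endpoint (generativelanguage.googleapis.com; the model name and API version may differ in your setup), an API key stored as a script property, and a hypothetical buildExtractionPrompt() helper that wraps the prompt above around the email text.

const GEMINI_URL =
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent';

function extractCrmFieldsFromThread(threadId) {
  // Collect the plain-text bodies of the whole thread.
  const thread = GmailApp.getThreadById(threadId);
  const emailText = thread.getMessages()
    .map(function (m) { return m.getPlainBody(); })
    .join('\n---\n');

  // buildExtractionPrompt() is a hypothetical helper that inserts the email
  // text into the prompt template shown above.
  const prompt = buildExtractionPrompt(emailText);
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');

  const response = UrlFetchApp.fetch(GEMINI_URL + '?key=' + apiKey, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] })
  });

  // The model's reply text; depending on your configuration you may need to
  // strip markdown fences or request JSON output explicitly before parsing.
  const text = JSON.parse(response.getContentText())
    .candidates[0].content.parts[0].text;
  return JSON.parse(text); // structured fields, ready for rep review or a CRM push
}

From here, the returned object can be shown to the rep for confirmation or handed to middleware that writes the mapped fields into the CRM.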

Automate Call and Meeting Summaries into Structured Notes

After every discovery call or demo, reps usually write notes, summarise key points, and update the deal. With Gemini and call transcripts (from Google Meet recordings, dialer tools, or note-taking apps), you can create a workflow where Gemini summarises the conversation and proposes structured CRM updates in one pass.

Configure Gemini to produce both a human-readable summary and structured fields such as decision-makers, budget signals, timeline, objections, and agreed next steps. Deliver the output via Google Docs or directly into the CRM notes field, with mapped fields ready for one-click confirmation.

Example prompt to Gemini for call summaries:
You are a sales note-taking assistant. Based on the call transcript below:
1) Write a concise summary (max 6 bullet points) focused on:
   - customer's situation
   - key problems
   - proposed solution
   - objections
   - agreed next steps
2) Extract these structured fields in JSON:
   - decision_makers (array of names and roles)
   - budget_mentioned (true/false)
   - budget_amount (if mentioned)
   - timeline (e.g., "this quarter")
   - main_objections
   - next_step
   - next_step_owner
   - next_step_due_date (if given)

Transcript:
{{call_transcript}}

Expected outcome: reps move from 10–15 minutes of manual typing to a 1–2 minute review and tweak of AI-generated notes, increasing call capacity and standardising note quality across the team.
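
The delivery step can be equally lightweight. A rough Apps Script sketch, assuming a callGemini() helper like the one in the email example, a hypothetical buildCallSummaryPrompt() wrapper around the prompt above, and a transcript that arrives as plain text from your dialer or Meet export:

function summariseCallToDoc(transcript, dealName) {
  // callGemini() and buildCallSummaryPrompt() are hypothetical helpers;
  // the raw output contains both the bullet summary and the JSON block.
  const output = callGemini(buildCallSummaryPrompt(transcript));

  // Store the human-readable notes in a Doc the rep can review and share.
  const doc = DocumentApp.create('Call notes – ' + dealName);
  doc.getBody().appendParagraph(output);

  // The JSON block can be parsed separately and mapped to CRM fields
  // behind a one-click confirmation step.
  return doc.getUrl();
}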

Standardise Web Form and Inbound Lead Data with Gemini

Inbound leads often arrive through forms, chats, or marketing tools with messy or incomplete data. Use Gemini as a normalisation layer between raw submissions and your CRM. For example, when a lead submits a free-text "Project Description", let Gemini classify industry, company size, product interest, and urgency, then enrich and standardise the data before it hits the CRM.

Configure your Google Forms or landing pages to write new submissions to a Google Sheet. Trigger Gemini (via Apps Script) on new rows to generate cleaned-up, enriched fields. Then sync those rows into your CRM with an integration tool, using Gemini’s structured output as the single source of truth.

Example prompt for inbound lead normalisation:
You are a data normalisation assistant for inbound sales leads.
Based on the raw form data, output:
- company_name (cleaned)
- standardized_industry (from our list: SaaS, Manufacturing, Retail, Other)
- company_size_bucket (1-50, 51-200, 201-1000, 1000+)
- main_product_interest
- urgency_score (1-5, where 5 means "needs solution < 1 month")
- free_text_summary (1-2 sentences)

Raw form data:
{{form_fields}}

Expected outcome: cleaner segmentation, more accurate lead routing, and higher-quality data without extra manual review by SDRs or operations.
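
A minimal trigger sketch for this flow, assuming the form writes into a bound Google Sheet, a callGemini() helper as in the earlier examples, a hypothetical buildLeadNormalisationPrompt() wrapper around the prompt above, and a staging tab named "Normalised":

function onFormSubmit(e) {
  // e.namedValues holds all submitted answers, keyed by question title.
  const rawFields = JSON.stringify(e.namedValues);
  const lead = JSON.parse(callGemini(buildLeadNormalisationPrompt(rawFields)));

  // Stage the enriched row for the CRM sync instead of writing to the CRM directly.
  SpreadsheetApp.getActive().getSheetByName('Normalised').appendRow([
    new Date(),
    lead.company_name,
    lead.standardized_industry,
    lead.company_size_bucket,
    lead.main_product_interest,
    lead.urgency_score,
    lead.free_text_summary
  ]);
}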

Create One-Click “Update CRM” Actions from Google Workspace

To minimise friction for reps, bring Gemini-powered updates directly into their daily tools. For example, add a "Send to CRM" button in Gmail or Google Docs via Apps Script. When clicked, it sends the current email thread or meeting notes to Gemini, receives structured CRM updates, and posts them to your CRM through an API.

Design the UX so that reps see a preview of the proposed changes: which fields will be updated, what values are suggested, and the option to edit before committing. This maintains a human-in-the-loop safeguard while keeping the interaction as fast as possible.

High-level configuration steps:
1) Create an Apps Script bound to Gmail/Docs that sends selected content
   and a fixed prompt to a Gemini API endpoint.
2) Parse Gemini's JSON response and map it to your CRM field names.
3) Call your CRM API to either:
   - create a new contact/opportunity, or
   - update an existing record by ID/email.
4) Display a confirmation dialog with "Accept / Edit / Cancel" options.
5) Log errors and edge cases for continuous improvement.

Expected outcome: updating CRM from everyday sales tools becomes a 10–20 second workflow, not a multi-minute context switch.
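
As an illustration of steps 1–4, here is a compressed sketch bound to a Google Doc of meeting notes. CRM_API_URL, getCrmToken(), and the payload shape are placeholders for whatever your CRM's REST API actually expects; callGemini() and buildExtractionPrompt() are the hypothetical helpers from the email example.

const CRM_API_URL = 'https://your-crm.example.com/api'; // placeholder

function sendNotesToCrm() {
  const ui = DocumentApp.getUi();
  const notes = DocumentApp.getActiveDocument().getBody().getText();
  const fields = JSON.parse(callGemini(buildExtractionPrompt(notes)));

  // Human-in-the-loop: show the proposed update before committing anything.
  const answer = ui.alert(
    'Proposed CRM update',
    JSON.stringify(fields, null, 2),
    ui.ButtonSet.OK_CANCEL
  );
  if (answer !== ui.Button.OK) return;

  UrlFetchApp.fetch(CRM_API_URL + '/opportunities', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + getCrmToken() }, // hypothetical auth helper
    payload: JSON.stringify(fields)
  });
}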

Use Gemini to Clean Up Legacy CRM Data in Batches

AI shouldn’t only help with new data; it can also help you repair and enrich existing CRM records. Export segments of your current data (e.g., opportunities missing industry, decision-maker, or next step), feed them to Gemini in batches, and ask it to infer and standardise values from existing notes, email history, or free-text fields.

Run this as a controlled data quality project: keep the original data, apply Gemini’s suggestions into a staging environment (e.g., Google Sheets), review samples for accuracy, then push approved updates back into your CRM. Prioritise fields that are low-risk but high-value, like industry, territory, or simple intent labels.

Example prompt for data clean-up:
You are helping clean CRM data. For each row of data, infer and standardize
missing fields based on the free-text notes.
Output CSV rows with:
- original_id
- standardized_industry (SaaS, Manufacturing, Retail, Other)
- likely_buying_stage (Lead, MQL, SQL, Opportunity, Closed Won/Lost)
- has_clear_next_step (true/false)
- short_next_step_guess (if any)

Row data:
{{exported_row_with_notes}}

Expected outcome: a one-off or recurring improvement in data quality that makes reporting and forecasting more reliable, without asking reps to manually fix historical records.
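
A batch pass along these lines could look like the sketch below, assuming the export sits in a sheet named "Export" with the record ID in column A and the free-text notes in column B; callGemini() and buildCleanupPrompt() are again hypothetical helpers, and suggestions are staged in column C rather than written back to the CRM.

function cleanUpExportedRows() {
  const sheet = SpreadsheetApp.getActive().getSheetByName('Export');
  const rows = sheet.getDataRange().getValues();

  for (let i = 1; i < rows.length; i++) {            // skip the header row
    const rowText = 'id: ' + rows[i][0] + '\nnotes: ' + rows[i][1];
    const suggestion = callGemini(buildCleanupPrompt(rowText));
    sheet.getRange(i + 1, 3).setValue(suggestion);   // stage in column C for review
    Utilities.sleep(500);                            // crude rate limiting
  }
}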

Continuously Tune Prompts and Monitor Accuracy

Once your Gemini sales automations are live, treat prompt templates and mappings as evolving assets, not set-and-forget configurations. Collect examples where Gemini misinterprets entities or misses important details, then refine your prompts with clearer instructions, more examples, or tighter output schemas.

Implement simple QA dashboards: track the percentage of AI-generated updates that reps accept without changes, fields that frequently need correction, and error rates per workflow. Use this feedback loop to adjust prompts, thresholds (e.g., only auto-fill when confidence is high), and where human review is mandatory.
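
The headline metric – how often reps accept AI-generated updates unchanged – is easy to compute if every update is logged. A sketch, assuming a "Review log" sheet with an "accepted without edits" flag in column D:

function acceptanceRate() {
  const rows = SpreadsheetApp.getActive()
    .getSheetByName('Review log')
    .getDataRange()
    .getValues()
    .slice(1); // drop the header row

  const accepted = rows.filter(function (r) { return r[3] === true; }).length;
  return rows.length ? accepted / rows.length : 0; // e.g. 0.82 means 82% accepted as-is
}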

Prompt tuning pattern:
1) Collect 20-30 examples of "good" CRM updates vs. "bad" ones.
2) Update your base prompt to include 3-5 short examples of desired output.
3) Add explicit instructions like:
   - "If you are not sure, set the field to null."
   - "Never fabricate a budget amount."
4) Re-test on historical data and compare accuracy metrics.
5) Roll out the improved prompt and monitor again for 2-4 weeks.

Expected outcomes: Over 8–12 weeks, teams typically see 30–60% reductions in time spent on CRM updates, 20–40% gains in core field completeness, and noticeably better forecast visibility. The key is to start with a few high-impact workflows, measure, and iterate rather than trying to automate everything at once.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How accurate is Gemini at extracting CRM data from emails and calls?

In well-designed workflows, Gemini can correctly extract core entities (name, company, role, dates, next steps) from sales emails and call transcripts in the majority of cases. Accuracy depends heavily on two factors: the quality of the input (clear emails or transcripts) and the quality of the prompt and schema you provide.

We typically recommend a human-in-the-loop approach at first: Gemini generates structured CRM updates, and reps quickly review and confirm. Over time, as you measure accuracy and refine prompts, you can safely automate more fields and reduce the amount of manual review required.

What skills or roles do we need to implement this?

You don’t need a large data science team to start using Gemini for sales productivity. Most implementations require three capabilities:

  • A sales or RevOps lead who understands your current workflows, CRM structure, and pain points.
  • A technical owner (internal or external) comfortable with APIs, Google Workspace (Apps Script), and your CRM’s integration options.
  • A security/compliance stakeholder to sign off on data usage and access controls.

Reruption often fills the technical and product roles for clients, designing prompts, building integrations, and setting up monitoring so your internal team can focus on adoption and change management.

How long does it take to see results?

For targeted use cases like email-to-CRM or call summary automation, you can see first results within a few weeks. A focused pilot usually looks like this:

  • Week 1–2: Workflow mapping, prompt design, and initial integration into Google Workspace and your CRM.
  • Week 3–4: Pilot with a small sales group, collect feedback, and refine prompts and mappings.
  • Week 5–8: Broader rollout, training, and iteration based on real usage data.

Many teams already report measurable time savings and better CRM completeness during the pilot phase, especially when they start with high-volume workflows such as inbound emails or discovery calls.

What does it cost, and what ROI can we expect?

The direct cost of Gemini (API or Workspace-based) is typically modest compared to sales headcount. The main investment is in design and implementation: mapping workflows, building integrations, and refining prompts. ROI comes from three sources:

  • Time saved: Reps reclaim hours per week previously lost to manual data entry.
  • Better decisions: More complete, accurate data improves forecasting, prioritisation, and management coaching.
  • Morale and retention: High performers spend more time selling and less time on admin, which reduces burnout.

Even conservative scenarios – e.g., 30 minutes saved per rep per day – typically justify the investment quickly when multiplied across a full sales team and annualised.

How can Reruption help?

Reruption combines strategic clarity with hands-on engineering to make AI automation in sales real. With our AI PoC offering (9,900€), we can quickly validate whether Gemini can handle your specific data entry and CRM workflows, and deliver a working prototype rather than a slide deck.

Beyond the PoC, our Co-Preneur approach means we embed with your team: mapping processes, simplifying your data model, building and hardening Gemini integrations with Google Workspace and your CRM, and setting up monitoring and governance. We operate in your P&L, not just in presentations, and stay involved until the automation actually reduces admin time and improves data quality for your sales reps.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media