The Challenge: Hidden Compliance Breaches

Customer service teams handle thousands of conversations every day – calls, chats, and emails where agents must follow strict scripts, disclosures, and data-handling rules. The challenge is that hidden compliance breaches often slip through: a missing disclosure here, an overpromised refund there, or a piece of sensitive data left unredacted in a chat transcript. On an individual level these issues look small; at scale, they become a serious and often invisible risk.

Traditional approaches rely on manual QA spot checks, annual training, and static monitoring rules. A supervisor may listen to 2–5% of calls, skim a handful of chats, or respond when a customer complains. But with complex regulations, evolving internal policies, and hybrid channels, this model simply cannot keep pace. Rule-based systems miss context – they can detect a keyword like "refund" but not whether an agent improperly promised a refund that violates policy. And once volumes grow, leaders lose any realistic way to see patterns.

The cost of not solving this is high. Hidden compliance breaches can trigger regulatory penalties, legal disputes, and reputational damage that far outweigh the cost of prevention. They also erode pricing discipline, create inconsistent customer experiences, and waste time on avoidable escalations and rework. Without systematic visibility, you can’t identify training gaps, risky scripts, or specific agents who need coaching. Competitors that put AI on top of their service channels gain an advantage: they reduce risk, standardize quality, and adapt policies based on real data instead of anecdotes.

The good news: while the risk is real, it is also highly solvable. Modern AI models like Gemini can understand natural language, nuance, and context across all your customer interactions, in real time and post hoc. At Reruption, we’ve helped organisations stand up AI products, compliance-sensitive automations, and internal tools that move from spot checks to continuous monitoring in weeks, not years. In the rest of this guide, you’ll see concrete ways to use Gemini to surface hidden compliance breaches early, support your agents, and turn Customer Service into a controlled, auditable environment instead of a blind spot.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s work building AI-first internal tools and compliance-aware automation, we’ve seen that Gemini is especially strong at understanding conversation context across channels and languages. Used correctly, it can move Customer Service teams from random QA sampling to 100% coverage compliance monitoring – not as a replacement for humans, but as a decision engine that flags risks, supports supervisors, and continuously improves policy adherence.

Define Compliance in Clear, Machine-Readable Terms First

Before integrating Gemini into your customer service stack, you need to translate your legal and compliance requirements into operational, machine-readable rules. That means going beyond “agents must not promise X” to defining concrete patterns: which phrases are risky, where disclaimers are mandatory, which data types are sensitive, and which jurisdictional differences apply.

Practically, this is a joint exercise for Legal, Compliance, and Operations: they should co-create a small set of high-impact compliance scenarios (e.g. missing mandatory disclosure, mishandled personal data, unauthorized discounts) and articulate what “compliant vs. non-compliant” looks like in real conversations. This becomes the ground truth you’ll teach Gemini through examples and evaluation datasets. Without this clarity, even the best AI will feel fuzzy and untrustworthy.

Treat Gemini as a Second-Line Control, Not a Single Point of Failure

Strategically, Gemini should augment – not replace – your existing compliance controls in customer service. Scripts, access controls, and training remain first-line defenses. Gemini becomes a second-line monitoring layer that watches 100% of interactions and flags potential breaches, patterns, and training needs.

This mindset helps manage risk and expectations. You design workflows where Gemini’s alerts route to QA or team leads for review, rather than directly triggering customer-facing actions. Over time, as you gain confidence in the model’s precision on specific scenarios (for example automatic redaction of credit card numbers), you can safely automate certain responses while keeping human review on the grey areas.

Prepare Your Teams for Transparency and Coaching, Not Surveillance

Rolling out real-time AI compliance monitoring changes how agents experience their work. If framed as “the AI is watching you”, adoption will fail. If framed as “we’re giving you a safety net and better coaching”, the same system becomes a support tool. Strategically communicate that Gemini is there to catch unintentional mistakes, protect both the company and the agent, and provide objective data for fair coaching.

Involve team leads and experienced agents early. Let them review flagged conversations together with QA and help refine the rules and labels. When agents see that Gemini’s feedback is used to improve scripts, clarify policies, and reduce personal blame, you build buy-in. A transparent governance model – who sees what data, how alerts are used in performance management – is as important as the technical configuration.

Start with Narrow, High-Risk Use Cases and Expand Gradually

Trying to monitor every conceivable policy in one go is a recipe for noise and stakeholder fatigue. A more effective strategy is to start with 2–3 clearly defined, high-risk compliance areas where the value of catching breaches is obvious: for example, financial promises, data protection obligations, or regulated product advice.

You then design and tune Gemini classifiers around those scenarios, measure precision/recall, and co-create action workflows with Compliance and Operations. Once this foundation is working and trusted, you can expand to additional policies and channels. This phased adoption keeps the project manageable, shows quick wins, and creates internal champions who push for broader use.

Build a Feedback Loop Between Compliance, Training, and Product

Gemini’s real power is not only flagging single incidents but surfacing systemic compliance patterns. Strategically, you should plan from day one how insights will flow back into your organisation: which recurring issues feed into training content, which script elements are revised, which product or pricing policies are clarified.

Set up regular review sessions where Compliance, QA, and Operations review Gemini’s dashboards: top violation types, by team, by product, by time of day. This transforms compliance monitoring from a policing function into a source of continuous improvement. Over time, you’ll shift from reacting to problems to proactively redesigning processes and offerings that are easier to keep compliant.

Used with a clear ruleset, thoughtful governance, and strong feedback loops, Gemini can turn hidden compliance breaches in customer service into visible, manageable risks. Instead of guessing what happens in thousands of interactions, you gain continuous, context-aware oversight and actionable insights for coaching and process design. At Reruption, we specialise in building exactly these kinds of AI-first capabilities inside organisations – from rapid proofs of concept to embedded, production-grade monitoring. If you want to explore what a Gemini-powered compliance layer could look like in your service environment, we’re happy to discuss concrete next steps, not just theory.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Logistics to EdTech: Learn how companies successfully use AI.

UPS

Logistics

UPS faced massive inefficiencies in delivery routing, with drivers navigating an astronomical number of possible route combinations – more than the number of nanoseconds the Earth has existed. Traditional manual planning led to longer drive times, higher fuel consumption, and elevated operational costs, exacerbated by dynamic factors like traffic, package volumes, terrain, and customer availability. These issues not only inflated expenses but also contributed to significant CO2 emissions in an industry under pressure to go green. Key challenges included driver resistance to new technology, integration with legacy systems, and ensuring real-time adaptability without disrupting daily operations. Pilot tests revealed adoption hurdles, as drivers accustomed to familiar routes questioned the AI's suggestions, highlighting the human element in tech deployment. Scaling across 55,000 vehicles demanded robust infrastructure and data handling for billions of data points daily.

Solution

UPS developed ORION (On-Road Integrated Optimization and Navigation), an AI-powered system blending operations research for mathematical optimization with machine learning for predictive analytics on traffic, weather, and delivery patterns. It dynamically recalculates routes in real-time, considering package destinations, vehicle capacity, right/left turn efficiencies, and stop sequences to minimize miles and time. The solution evolved from static planning to dynamic routing upgrades, incorporating agentic AI for autonomous decision-making. Training involved massive datasets from GPS telematics, with continuous ML improvements refining algorithms. Overcoming adoption challenges required driver training programs and gamification incentives, ensuring seamless integration via in-cab displays.

Results

  • 100 million miles saved annually
  • $300-400 million cost savings per year
  • 10 million gallons of fuel reduced yearly
  • 100,000 metric tons CO2 emissions cut
  • 2-4 miles shorter routes per driver daily
  • 97% fleet deployment by 2021
Read case study →

Mass General Brigham

Healthcare

Mass General Brigham, one of the largest healthcare systems in the U.S., faced a deluge of medical imaging data from radiology, pathology, and surgical procedures. With millions of scans annually across its 12 hospitals, clinicians struggled with analysis overload, leading to delays in diagnosis and increased burnout rates among radiologists and surgeons. The need for precise, rapid interpretation was critical, as manual reviews limited throughput and risked errors in complex cases like tumor detection or surgical risk assessment. Additionally, operative workflows required better predictive tools. Surgeons needed models to forecast complications, optimize scheduling, and personalize interventions, but fragmented data silos and regulatory hurdles impeded progress. Staff shortages exacerbated these issues, demanding decision support systems to alleviate cognitive load and improve patient outcomes.

Solution

To address these, Mass General Brigham established a dedicated Artificial Intelligence Center, centralizing research, development, and deployment of hundreds of AI models focused on computer vision for imaging and predictive analytics for surgery. This enterprise-wide initiative integrates ML into clinical workflows, partnering with tech giants like Microsoft for foundation models in medical imaging. Key solutions include deep learning algorithms for automated anomaly detection in X-rays, MRIs, and CTs, reducing radiologist review time. For surgery, predictive models analyze patient data to predict post-op risks, enhancing planning. Robust governance frameworks ensure ethical deployment, addressing bias and explainability.

Results

  • $30 million AI investment fund established
  • Hundreds of AI models managed for radiology and pathology
  • Improved diagnostic throughput via AI-assisted radiology
  • AI foundation models developed through Microsoft partnership
  • Initiatives for AI governance in medical imaging deployed
  • Reduced clinician workload and burnout through decision support
Read case study →

DHL

Logistics

DHL, a global logistics giant, faced significant challenges from vehicle breakdowns and suboptimal maintenance schedules. Unpredictable failures in its vast fleet of delivery vehicles led to frequent delivery delays, increased operational costs, and frustrated customers. Traditional reactive maintenance—fixing issues only after they occurred—resulted in excessive downtime, with vehicles sidelined for hours or days, disrupting supply chains worldwide. Inefficiencies were compounded by varying fleet conditions across regions, making scheduled maintenance inefficient and wasteful, often over-maintaining healthy vehicles while under-maintaining others at risk. These issues not only inflated maintenance costs by up to 20% in some segments but also eroded customer trust through unreliable deliveries. With rising e-commerce demands, DHL needed a proactive approach to predict failures before they happened, minimizing disruptions in a highly competitive logistics industry.

Solution

DHL implemented a predictive maintenance system leveraging IoT sensors installed on vehicles to collect real-time data on engine performance, tire wear, brakes, and more. This data feeds into machine learning models that analyze patterns, predict potential breakdowns, and recommend optimal maintenance timing. The AI solution integrates with DHL's existing fleet management systems, using algorithms like random forests and neural networks for anomaly detection and failure forecasting. Overcoming data silos and integration challenges, DHL partnered with tech providers to deploy edge computing for faster processing. Pilot programs in key hubs expanded globally, shifting from time-based to condition-based maintenance, ensuring resources focus on high-risk assets.

Results

  • Vehicle downtime reduced by 15%
  • Maintenance costs lowered by 10%
  • Unplanned breakdowns decreased by 25%
  • On-time delivery rate improved by 12%
  • Fleet availability increased by 20%
  • Overall operational efficiency up 18%
Read case study →

Amazon

Retail

In the vast e-commerce landscape, online shoppers face significant hurdles in product discovery and decision-making. With millions of products available, customers often struggle to find items matching their specific needs, compare options, or get quick answers to nuanced questions about features, compatibility, and usage. Traditional search bars and static listings fall short, leading to shopping cart abandonment rates as high as 70% industry-wide and prolonged decision times that frustrate users. Amazon, serving over 300 million active customers, encountered amplified challenges during peak events like Prime Day, where query volumes spiked dramatically. Shoppers demanded personalized, conversational assistance akin to in-store help, but scaling human support was impossible. Issues included handling complex, multi-turn queries, integrating real-time inventory and pricing data, and ensuring recommendations complied with safety and accuracy standards amid a $500B+ catalog.

Solution

Amazon developed Rufus, a generative AI-powered conversational shopping assistant embedded in the Amazon Shopping app and desktop. Rufus leverages a custom-built large language model (LLM) fine-tuned on Amazon's product catalog, customer reviews, and web data, enabling natural, multi-turn conversations to answer questions, compare products, and provide tailored recommendations. Powered by Amazon Bedrock for scalability and AWS Trainium/Inferentia chips for efficient inference, Rufus scales to millions of sessions without latency issues. It incorporates agentic capabilities for tasks like cart addition, price tracking, and deal hunting, overcoming prior limitations in personalization by accessing user history and preferences securely. Implementation involved iterative testing, starting with a beta in February 2024, expanding to all US users by September, and global rollouts, addressing hallucination risks through grounding techniques and human-in-the-loop safeguards.

Results

  • 60% higher purchase completion rate for Rufus users
  • $10B projected additional sales from Rufus
  • 250M+ customers used Rufus in 2025
  • Monthly active users up 140% YoY
  • Interactions surged 210% YoY
  • Black Friday sales sessions +100% with Rufus
  • 149% jump in Rufus users recently
Read case study →

Duolingo

EdTech

Duolingo, a leader in gamified language learning, faced key limitations in providing real-world conversational practice and in-depth feedback. While its bite-sized lessons built vocabulary and basics effectively, users craved immersive dialogues simulating everyday scenarios, which static exercises couldn't deliver. This gap hindered progression to fluency, as learners lacked opportunities for free-form speaking and nuanced grammar explanations without expensive human tutors. Additionally, content creation was a bottleneck: human experts manually crafted lessons, slowing the rollout of new courses and languages amid rapid user growth. Scaling personalized experiences across 40+ languages demanded innovation to maintain engagement without proportional resource increases. These challenges risked user churn and limited monetization in a competitive EdTech market.

Solution

Duolingo launched Duolingo Max in March 2023, a premium subscription powered by GPT-4, introducing Roleplay for dynamic conversations and Explain My Answer for contextual feedback. Roleplay simulates real-life interactions like ordering coffee or planning vacations with AI characters, adapting in real time to user inputs. Explain My Answer provides detailed breakdowns of correct and incorrect responses, enhancing comprehension. Complementing this, Duolingo's Birdbrain LLM (fine-tuned on proprietary data) automates lesson generation, allowing experts to create content 10x faster. This hybrid human-AI approach ensured quality while scaling rapidly, integrated seamlessly into the app for all skill levels.

Results

  • DAU Growth: +59% YoY to 34.1M (Q2 2024)
  • DAU Growth: +54% YoY to 31.4M (Q1 2024)
  • Revenue Growth: +41% YoY to $178.3M (Q2 2024)
  • Adjusted EBITDA Margin: 27.0% (Q2 2024)
  • Lesson Creation Speed: 10x faster with AI
  • User Self-Efficacy: Significant increase post-AI use (2025 study)
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Configure a Compliance Taxonomy and Labelled Examples

Implementation starts with a clear compliance taxonomy. Define 5–10 violation categories that matter most for your customer service operations: for example “Missing mandatory disclosure”, “Unauthorized financial promise”, “Sharing sensitive personal data”, “Non-compliant cancellation handling”, and “Inaccurate product claims”. Each category should include a short description and 5–20 sample conversations labelled as compliant or non-compliant.

These labelled examples can be stored in a simple internal dataset and fed into Gemini via prompts or fine-tuning approaches. Even if you don’t formally fine-tune, you can condition Gemini with few-shot prompting, showing positive and negative examples so it learns how your organisation defines breaches. This is where collaboration between Compliance, QA, and experienced agents is critical – they help produce realistic transcripts that reflect your real edge cases.

System prompt example for Gemini classification:
You are a compliance monitoring assistant for our customer service team.
Your task: Review the conversation and answer in JSON with:
- violation: true/false
- categories: list of violation categories
- explanation: short text

Violation categories:
1. Missing mandatory disclosure
2. Unauthorized financial promise
3. Sharing sensitive personal data
4. Non-compliant cancellation handling
5. Inaccurate product claims

Use the following examples as guidance:
[include 3-5 short anonymized example snippets with labels]

Once this taxonomy is in place, you can reuse it consistently across channels (voice transcripts, chats, emails) and build dashboards that show patterns by category rather than a long list of unstructured alerts.
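As an illustration, the taxonomy and few-shot prompt above can be assembled programmatically before each Gemini call. This is a minimal sketch in Python; the helper name and example format are illustrative, not part of any Gemini SDK, and the category list mirrors the taxonomy in this guide:

```python
# Build a few-shot compliance-classification prompt from the taxonomy above.
# Replace the example snippets with real, anonymized transcripts from your QA team.

CATEGORIES = [
    "Missing mandatory disclosure",
    "Unauthorized financial promise",
    "Sharing sensitive personal data",
    "Non-compliant cancellation handling",
    "Inaccurate product claims",
]

def build_classification_prompt(transcript: str, examples: list[tuple[str, str]]) -> str:
    """Assemble system instructions, few-shot examples, and the transcript to review."""
    lines = [
        "You are a compliance monitoring assistant for our customer service team.",
        "Review the conversation and answer in JSON with keys:",
        "violation (true/false), categories (list), explanation (short text).",
        "",
        "Violation categories:",
    ]
    lines += [f"{i}. {name}" for i, name in enumerate(CATEGORIES, start=1)]
    lines += ["", "Examples:"]
    for snippet, label in examples:
        lines.append(f"Conversation: {snippet}")
        lines.append(f"Label: {label}")
    lines += ["", "Conversation to review:", transcript]
    return "\n".join(lines)
```

Keeping the prompt builder in code (rather than a hand-edited text file) means a taxonomy change propagates consistently to every channel that calls Gemini.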

Integrate Gemini into Your Contact Center and CRM Workflows

To monitor 100% of interactions, Gemini must be embedded into your existing customer service platforms, not sit as a separate tool. For calls, use your telephony system’s recordings and transcripts; for chat and email, connect via APIs or event hooks. Each finished conversation – or even each turn in real time – can be sent to a Gemini endpoint for compliance analysis.

On the output side, write the Gemini results back into your CRM or ticketing system: store a structured compliance object with fields like violation_flag, categories, severity, and summary. This makes it easy for supervisors to filter for risky conversations, trigger QA review workflows, or add compliance notes to an agent’s coaching plan.

Example workflow (simplified):
1. Conversation ends in contact center system.
2. System sends transcript + metadata (agent ID, channel, product) to Gemini API.
3. Gemini responds with JSON compliance assessment.
4. Middleware writes results into CRM record and pushes alerts to QA queue
   if severity ≥ threshold.
5. Supervisors review flagged cases in their existing QA interface.

By integrating at this level, you avoid context switching for your teams and ensure that compliance information lives where decisions are made: inside tickets, dashboards, and coaching tools.
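The middleware step in the workflow above can be sketched as a small routing function. This assumes Gemini returns the JSON shape described earlier (violation, categories, severity, explanation); the 1-5 severity scale and the threshold are hypothetical and should be tuned to your QA capacity:

```python
import json

SEVERITY_THRESHOLD = 3  # hypothetical 1-5 scale; tune to your QA queue capacity

def route_assessment(gemini_json: str) -> dict:
    """Parse Gemini's JSON assessment into a CRM-ready compliance object
    and decide whether the conversation should be pushed to the QA queue."""
    assessment = json.loads(gemini_json)
    record = {
        "violation_flag": assessment.get("violation", False),
        "categories": assessment.get("categories", []),
        "severity": assessment.get("severity", 0),
        "summary": assessment.get("explanation", ""),
    }
    # Alert QA only on flagged conversations at or above the severity threshold;
    # lower-severity flags remain in the CRM for sampling and coaching.
    record["send_to_qa"] = record["violation_flag"] and record["severity"] >= SEVERITY_THRESHOLD
    return record
```

The same record can be written back to the ticket and reused by dashboards, so one parse step feeds both alerting and reporting.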

Use Real-Time Guardrails and Agent Prompts During Conversations

Beyond post-conversation analysis, Gemini can act as a real-time guardian during chats and calls. For example, as an agent types a response in your chat UI, you can send the draft plus context to Gemini to check for missing disclosures, risky promises, or sensitive data before it’s sent.

The same pattern works for voice: with low-latency transcription and Gemini in the loop, you can show on-screen hints to the agent such as “Mandatory cancellation disclosure not yet mentioned” or “Avoid promising refund beyond 30 days.” This turns compliance from a retrospective audit into an in-the-moment assist.

Example prompt for real-time draft checking:
You are a compliance assistant reviewing an agent's draft answer.
Check the draft for:
- Missing mandatory disclosures about cancellations and fees
- Unauthorized financial promises (refunds, discounts above 10%)
- Any mention of customer payment data in free text

Return JSON:
{
  "allowed": true/false,
  "issues": [list of issue descriptions],
  "suggested_rewrite": "<compliant version of the reply>"
}

You can wire this into your UI so that if allowed=false, the agent sees a highlighted message with the suggested rewrite and must confirm or edit before sending. This significantly reduces unintentional breaches without slowing agents down.
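The UI-side gating logic can be a thin wrapper around that JSON response. A minimal sketch, assuming the response shape shown above; note that it fails closed (a missing or false `allowed` blocks the draft), which is usually the safer default for compliance checks:

```python
import json

def gate_draft(check_json: str) -> tuple[bool, str, list[str]]:
    """Decide whether a draft reply may be sent.
    Returns (allowed, suggested_rewrite, issues) based on the JSON shape above."""
    result = json.loads(check_json)
    if result.get("allowed", False):
        return True, "", []
    # Fail closed: anything not explicitly allowed is held for agent review.
    return False, result.get("suggested_rewrite", ""), result.get("issues", [])
```

If latency or an API error prevents a check, the same fail-closed stance would hold the draft briefly rather than let an unchecked reply through; whether that trade-off is acceptable depends on your channel's response-time expectations.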

Automatically Redact and Mask Sensitive Data

One of the highest-impact, lowest-friction use cases is to let Gemini automatically detect and redact sensitive data in transcripts and chat logs. Instead of relying only on regex rules for card numbers or email addresses, Gemini can understand context and catch more subtle cases (e.g. “my social” in different formats or local identifiers that classic pattern matching misses).

Implement a post-processing step where every conversation is passed through Gemini with a “redaction” prompt. Based on its output, you replace sensitive spans with placeholders like <CARD_NUMBER>, <IBAN>, or <PERSONAL_ID> before storing logs or feeding them to analytics tools. Combine this with strict logging and access policies so that only a minimal, masked version of conversations is used for training and reporting.

Example redaction configuration prompt:
Identify and replace any of the following in the conversation text:
- Credit card numbers
- Bank account/IBAN
- National ID numbers
- Full addresses
- Email addresses
Return the fully redacted text and a list of what was redacted.

This directly reduces data leakage risk and simplifies compliance with data protection regulations when using conversation logs for analytics or training future models.
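Applying the model's redaction output can be kept deliberately simple. A sketch under the assumption that Gemini returns a list of flagged spans with a type label (the span format here is illustrative, not a fixed API):

```python
def apply_redactions(text: str, spans: list[dict]) -> str:
    """Replace each flagged span with its placeholder, e.g. <CARD_NUMBER>.
    `spans` is assumed model output: [{"text": "...", "type": "CARD_NUMBER"}, ...].
    Longest spans are replaced first so overlapping matches don't leave fragments."""
    for span in sorted(spans, key=lambda s: len(s["text"]), reverse=True):
        text = text.replace(span["text"], f"<{span['type']}>")
    return text
```

Because the replacement happens in your own code, you can log exactly which placeholders were inserted per conversation, which supports the audit trail without storing the sensitive values themselves.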

Set Thresholds, QA Queues, and KPIs for Compliance Monitoring

To avoid overwhelming your teams, you need clear alert thresholds and QA workflows. Use Gemini’s output probabilities or confidence scores to decide which interactions need human review. For example, only push conversations with high-severity categories and confidence above 0.7 into the QA queue, while using lower-confidence flags for periodic sampling or coaching materials.

Define KPIs that reflect both coverage and quality: percentage of interactions automatically assessed, number of high-severity violations per 1,000 interactions, average time to review flagged cases, and reduction of repeat violation patterns over time. Regularly check precision and recall by manually reviewing random samples of “no-violation” and “violation” cases. When false positives are too high, refine prompts and examples; when false negatives appear in audits, add new examples and tighten thresholds.

Suggested KPIs:
- 100% of conversations processed by Gemini within <5 minutes
- ≥80% precision on top 3 violation categories after 8 weeks
- 30–50% reduction in repeated violation patterns in 6 months
- >70% of supervisors using compliance dashboards weekly

Clear metrics keep the project grounded in outcomes rather than technology for its own sake and make it easier to communicate progress to Compliance and leadership.
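The precision and recall spot checks described above reduce to a few lines once QA reviewers have labelled a random sample. A minimal sketch; `reviews` pairs the model's flag with the human verdict for each sampled conversation:

```python
def precision_recall(reviews: list[tuple[bool, bool]]) -> tuple[float, float]:
    """Compute precision and recall from (model_flagged, human_confirmed) pairs
    collected during manual QA review of sampled conversations."""
    tp = sum(1 for model, human in reviews if model and human)        # true positives
    fp = sum(1 for model, human in reviews if model and not human)    # false positives
    fn = sum(1 for model, human in reviews if not model and human)    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Tracking these two numbers per violation category over time shows whether prompt refinements are actually helping, and flags when a policy change has quietly degraded a classifier.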

Continuously Retrain and Align on Policy Changes

Compliance rules, products, and scripts evolve – your Gemini compliance layer must evolve with them. Set up a lightweight process where any policy or script change automatically triggers a review of prompts, examples, and evaluation datasets. When new regulations or internal rules appear, add fresh labelled examples and update your taxonomy.

On a monthly or quarterly basis, run a structured evaluation: feed a held-out test set of conversations to Gemini, compare its outputs to human labels, and track quality over time. Use misclassifications as training material, and maintain documentation that links each monitored violation type to specific legal or policy sources. This keeps the system auditable and gives Compliance the confidence that AI monitoring is not a black box but a controlled, continuously improving safeguard.

Expected outcomes from these best practices are realistic and measurable: many organisations can move from sampling 1–3% of interactions to continuous monitoring of 100% of calls, chats, and emails, cut manual QA time per interaction by 30–60%, and reduce recurring compliance issues in critical areas by 30–50% within 6–12 months, depending on baseline quality and training investment.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Gemini processes the full context of each interaction – not just keywords – to identify compliance risks in calls, chats, and emails. You provide it with a defined set of violation categories (for example, missing mandatory disclosures, unauthorized financial promises, or mishandled personal data) plus labelled examples from your own environment.

Using this configuration and prompts, Gemini evaluates each conversation and outputs a structured assessment: whether a violation is likely, which category it falls under, and a short explanation. This works across languages and channels, and can be applied both in real time (e.g. while an agent is typing) and post hoc on recorded interactions, so you can move from manual sampling to 100% coverage monitoring.

You typically need three capabilities: domain knowledge, engineering, and change management. On the business side, you need Compliance, Legal, and Customer Service Operations to agree on a clear compliance taxonomy and provide realistic examples of compliant and non-compliant interactions. This is crucial for teaching Gemini what “good” and “bad” look like in your specific context.

On the technical side, you need developers or an integration partner who can connect your telephony, chat, or email platforms to Gemini via APIs, handle authentication and logging, and write results back into your CRM or QA tools. Finally, team leads and HR/Training should be involved to design coaching workflows and communication, so that AI monitoring is seen as a support, not surveillance. Reruption often fills the engineering and orchestration gap, while your internal experts provide rules and context.

With a focused scope and existing data, many organisations can get a first Gemini-powered compliance pilot running in 4–8 weeks: monitoring a subset of channels and 2–3 high-risk violation types. In this phase you typically validate that classification quality is good enough (e.g. ≥80% precision on priority categories) and refine prompts and thresholds.

Within 3–6 months, once integrated into QA and coaching workflows, it’s realistic to move from sampling 1–3% of interactions to automatically assessing nearly 100%, reduce manual QA effort per interaction by 30–60%, and see a 30–50% drop in repeated issues for the monitored violation types. Larger, structural gains (fewer regulatory incidents, improved customer trust) usually materialise over 6–12 months as you expand scope and use insights to improve scripts and processes.

Costs break down into two components: implementation and ongoing usage. Implementation costs depend on your current stack and scope – connecting your contact center and CRM, defining your compliance taxonomy, building dashboards, and integrating alerts into QA workflows. This can range from a lightweight integration for one channel to a broader platform project covering all interactions.

Ongoing usage costs are largely driven by how many interactions you process and whether you use real-time checks or batch processing. The ROI typically comes from three areas: avoided regulatory penalties and legal costs, reduced manual QA time (by automating large parts of monitoring), and better pricing and policy discipline (fewer over-promises, unauthorized discounts, or misaligned commitments). Many organisations find that preventing even a single major incident or reclaiming a fraction of QA capacity already pays for the system.

Reruption combines AI engineering depth with a Co-Preneur mindset – we embed into your organisation and build working solutions, not just slideware. For this use case, we typically start with our AI PoC offering (9,900€), where we define the concrete compliance scenarios you care about, connect Gemini to a sample of your call/chat/email data, and deliver a functioning prototype that classifies real interactions for violations.

From there, we can help you harden the prototype into a production-ready monitoring layer: integrating with your contact center and CRM, designing QA and coaching workflows, setting up dashboards, and aligning with Compliance and Legal. Because we operate like co-founders inside your P&L, we focus on measurable outcomes – fewer hidden breaches, better coverage, and clear governance – rather than generic AI experiments.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media