The Challenge: Low-Quality Lead Scoring

Marketing teams depend on lead scoring to prioritize who gets attention first. Yet in many organizations, scores are still based on simplistic rules (job title + form fill = MQL) or the subjective judgment of individual marketers. The result: a bloated pipeline full of names that look good on a report but rarely turn into revenue, while genuinely high-intent prospects slip through the cracks or wait days for follow-up.

Traditional approaches like static points-based models or generic marketing automation scoring can no longer keep up with today’s buying behavior. Prospects research anonymously across channels, use multiple devices, and interact with your brand in fragmented micro-moments. A rule like “+10 points for whitepaper download” ignores context: did the prospect bounce after five seconds? Are they a student, a competitor, or a perfect-fit account comparing vendors for an active project? Without AI-driven lead scoring that understands patterns in your actual funnel data, your model quickly becomes outdated and misleading.

The business impact is substantial. Sales teams waste hours calling low-intent contacts who were scored as “hot” just because they opened a few emails. High-value accounts don’t get timely follow-up because they never hit an arbitrary score threshold. Marketing performance looks worse than it is, with inflated MQL volumes but weak opportunity conversion. In practice this means higher customer acquisition costs, slower pipeline velocity, and misalignment between marketing and sales that is hard to fix with meetings alone.

The good news: this is a solvable, high-leverage problem. With modern tools like Gemini for predictive lead scoring, you can move from rule-of-thumb scoring to models that learn from real conversion data across channels. At Reruption, we’ve seen how AI-powered systems—similar in complexity to recruiting and customer-service chatbots we’ve built—can be embedded directly into existing stacks to drive measurable uplift. In the rest of this page, you’ll find practical guidance on how to redesign your lead scoring with Gemini, from strategy to concrete implementation steps.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.

Innovators at these companies trust us:

Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s perspective, Gemini is a powerful engine for fixing low-quality lead scoring because it can combine marketing data analysis, code generation, and workflow automation in one place. Based on our hands-on experience building AI products and automations embedded in real organisations, we see Gemini not just as another scoring add-on, but as a way to redesign how your marketing and sales teams decide which leads deserve attention first.

Reframe Lead Scoring as a Predictive System, Not a Points Game

Most marketing teams still treat lead scoring as a debate over which activities deserve how many points. That mindset locks you into opinion-based models. With Gemini, you can reframe scoring as a predictive system: given everything we know about past leads, which new leads are most likely to become opportunities or customers?

This requires alignment at the leadership level. Marketing and sales need to agree on a clear target outcome (e.g. “Sales-qualified opportunity created within 60 days”) and accept that the model might surface surprising patterns that contradict intuition. In our work, we see the best results when teams stop defending legacy rules and start asking, “What does the data say?” Gemini can then be used to explore those patterns across channels and cohorts, instead of manually tweaking points for individual actions.

Start with a Narrow, High-Impact Segment Before Scaling

A common mistake is trying to roll out AI-powered lead scoring across all products, regions, and segments at once. Data quality and buyer behaviour differ widely, which makes early results noisy and undermines trust in the model. Instead, use Gemini to focus first on a narrow but material slice of your funnel—for example, inbound demo requests for one key product in one region.

By constraining the initial scope, you can move faster, iterate on the feature set, and demonstrate a clear uplift in conversion and response time. Once marketing and sales see that Gemini-based scores reliably identify high-intent leads in that slice, it becomes much easier to expand to additional segments with a proven pattern and governance model.

Design for Sales Trust and Adoption from Day One

The best AI lead scoring model fails if sales doesn’t trust or use it. Strategically, that means designing your Gemini initiative with sales input, not presenting it as a finished black box. Involve sales leaders in defining what a “good lead” looks like, and in reviewing early Gemini analyses of past deals—where did gut feeling differ from the data?

From there, focus on transparency. Use Gemini to generate explanations in human language for each high-scoring lead (e.g. “Similar to 24 past leads that became customers within 90 days; strong activity on pricing pages; job title matches key decision-maker persona”). This kind of explainable AI scoring builds confidence and encourages reps to prioritize Gemini’s recommendations instead of reverting to old habits.

Align Data Foundations and Governance Before Automating

Strategically deploying Gemini for lead scoring is not just about the model; it’s about the data it can see. Many marketing organisations have fragmented, inconsistent data: missing UTM parameters, duplicate contacts, offline touchpoints living in spreadsheets. If you point Gemini at this chaos, you will simply get sophisticated noise.

Before full automation, define which data sources are authoritative (CRM, marketing automation, product analytics, customer support tools) and where they will be joined. Set minimum data quality thresholds for a lead to be scored. Agree on governance: who can change which scoring variables, how often models are retrained, and how performance is monitored. This upfront work lets Gemini operate as a reliable layer on top of a stable foundation, not as a patch on broken plumbing.

Plan for Iteration: Treat Lead Scoring as a Living Product

Buyer behaviour, channels, and your own go-to-market evolve constantly. A one-off scoring project will decay quickly. Strategically, treat Gemini lead scoring as a living internal product, with an owner, backlog, and regular review cadence.

Set expectations that the first version is there to learn, not to be perfect. Define clear evaluation windows (e.g. quarterly) where your Gemini-powered workflows are judged on business metrics: lead-to-opportunity conversion, time-to-first-touch, sales productivity. With this product mindset, your team stays comfortable updating features, retraining models, and experimenting with new signals rather than clinging to a static scoring sheet.

Using Gemini for lead scoring is less about magic algorithms and more about structuring your marketing and sales machine around better decisions. When you combine clean data, clear outcomes, and iterative experimentation, Gemini becomes a practical way to move from gut feeling to predictive prioritisation. At Reruption, we specialise in building exactly these kinds of embedded AI capabilities—rapidly prototyping, validating, and hardening them inside your existing stack. If you want to see whether AI-driven scoring can work in your context, a focused collaboration around a first segment or an AI PoC is often the fastest way to get real numbers on the board.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Energy to Banking: Learn how companies successfully use Gemini.

BP

Energy

BP, a global energy leader in oil, gas, and renewables, grappled with high energy costs during peak periods across its extensive assets. Volatile grid demands and price spikes during high-consumption times strained operations, exacerbating inefficiencies in energy production and consumption. Integrating intermittent renewable sources added forecasting challenges, while traditional management failed to dynamically respond to real-time market signals, leading to substantial financial losses and grid instability risks. Compounding this, BP's diverse portfolio—from offshore platforms to data-heavy exploration—faced data silos and legacy systems ill-equipped for predictive analytics. Peak energy expenses not only eroded margins but hindered the transition to sustainable operations amid rising regulatory pressures for emissions reduction. The company needed a solution to shift loads intelligently and monetize flexibility in energy markets.

Solution

To tackle these issues, BP acquired Open Energi in 2021, gaining access to its flagship Plato AI platform, which employs machine learning for predictive analytics and real-time optimization. Plato analyzes vast datasets from assets, weather, and grid signals to forecast peaks and automate demand response, shifting non-critical loads to off-peak times while participating in frequency response services. Integrated into BP's operations, the AI enables dynamic containment and flexibility markets, optimizing consumption without disrupting production. Combined with BP's internal AI for exploration and simulation, it provides end-to-end visibility, reducing reliance on fossil fuels during peaks and enhancing renewable integration. This acquisition marked a strategic pivot, blending Open Energi's demand-side expertise with BP's supply-side scale.

Results

  • $10 million in annual energy savings
  • 80+ MW of energy assets under flexible management
  • Strongest oil exploration performance in years via AI
  • Material boost in electricity demand optimization
  • Reduced peak grid costs through dynamic response
  • Enhanced asset efficiency across oil, gas, renewables
Read case study →

AstraZeneca

Healthcare

In the highly regulated pharmaceutical industry, AstraZeneca faced immense pressure to accelerate drug discovery and clinical trials, which traditionally take 10-15 years and cost billions, with low success rates of under 10%. Data silos, stringent compliance requirements (e.g., FDA regulations), and manual knowledge work hindered efficiency across R&D and business units. Researchers struggled with analyzing vast datasets from 3D imaging, literature reviews, and protocol drafting, leading to delays in bringing therapies to patients. Scaling AI was complicated by data privacy concerns, integration into legacy systems, and ensuring AI outputs were reliable in a high-stakes environment. Without rapid adoption, AstraZeneca risked falling behind competitors leveraging AI for faster innovation toward 2030 ambitions of novel medicines.

Solution

AstraZeneca launched an enterprise-wide generative AI strategy, deploying ChatGPT Enterprise customized for pharma workflows. This included AI assistants for 3D molecular imaging analysis, automated clinical trial protocol drafting, and knowledge synthesis from scientific literature. They partnered with OpenAI for secure, scalable LLMs and invested in training: ~12,000 employees across R&D and functions completed GenAI programs by mid-2025. Infrastructure upgrades, like AMD Instinct MI300X GPUs, optimized model training. Governance frameworks ensured compliance, with human-in-the-loop validation for critical tasks. Rollout phased from pilots in 2023-2024 to full scaling in 2025, focusing on R&D acceleration via GenAI for molecule design and real-world evidence analysis.

Results

  • ~12,000 employees trained on generative AI by mid-2025
  • 85-93% of staff reported productivity gains
  • 80% of medical writers found AI protocol drafts useful
  • Significant reduction in life sciences model training time via MI300X GPUs
  • High AI maturity ranking per IMD Index (top global)
  • GenAI enabling faster trial design and dose selection
Read case study →

Citibank Hong Kong

Wealth Management

Citibank Hong Kong faced growing demand for advanced personal finance management tools accessible via mobile devices. Customers sought predictive insights into budgeting, investing, and financial tracking, but traditional apps lacked personalization and real-time interactivity. In a competitive retail banking landscape, especially in wealth management, clients expected seamless, proactive advice amid volatile markets and rising digital expectations in Asia. Key challenges included integrating vast customer data for accurate forecasts, ensuring conversational interfaces felt natural, and overcoming data privacy hurdles in Hong Kong's regulated environment. Early mobile tools showed low engagement, with users abandoning apps due to generic recommendations, highlighting the need for AI-driven personalization to retain high-net-worth individuals.

Solution

Wealth 360 emerged as Citibank HK's AI-powered personal finance manager, embedded in the Citi Mobile app. It leverages predictive analytics to forecast spending patterns, investment returns, and portfolio risks, delivering personalized recommendations via a conversational interface like chatbots. Drawing from Citi's global AI expertise, it processes transaction data, market trends, and user behavior for tailored advice on budgeting and wealth growth. Implementation involved machine learning models for personalization and natural language processing (NLP) for intuitive chats, building on Citi's prior successes like Asia-Pacific chatbots and APIs. This solution addressed gaps by enabling proactive alerts and virtual consultations, enhancing customer experience without human intervention.

Results

  • 30% increase in mobile app engagement metrics
  • 25% improvement in wealth management service retention
  • 40% faster response times via conversational AI
  • 85% customer satisfaction score for personalized insights
  • 18M+ API calls processed in similar Citi initiatives
  • 50% reduction in manual advisory queries
Read case study →

JPMorgan Chase

Banking

In the high-stakes world of asset management and wealth management at JPMorgan Chase, advisors faced significant time burdens from manual research, document summarization, and report drafting. Generating investment ideas, market insights, and personalized client reports often took hours or days, limiting time for client interactions and strategic advising. This inefficiency was exacerbated post-ChatGPT, as the bank recognized the need for secure, internal AI to handle vast proprietary data without risking compliance or security breaches. The Private Bank advisors specifically struggled with preparing for client meetings, sifting through research reports, and creating tailored recommendations amid regulatory scrutiny and data silos, hindering productivity and client responsiveness in a competitive landscape.

Solution

JPMorgan addressed these challenges by developing the LLM Suite, an internal suite of seven fine-tuned large language models (LLMs) powered by generative AI, integrated with secure data infrastructure. This platform enables advisors to draft reports, generate investment ideas, and summarize documents rapidly using proprietary data. A specialized tool, Connect Coach, was created for Private Bank advisors to assist in client preparation, idea generation, and research synthesis. The implementation emphasized governance, risk management, and employee training through AI competitions and 'learn-by-doing' approaches, ensuring safe scaling across the firm. LLM Suite rolled out progressively, starting with proofs-of-concept and expanding firm-wide.

Results

  • Users reached: 140,000 employees
  • Use cases developed: 450+ proofs-of-concept
  • Financial upside: Up to $2 billion in AI value
  • Deployment speed: From pilot to 60K users in months
  • Advisor tools: Connect Coach for Private Bank
  • Firm-wide PoCs: Rigorous ROI measurement across 450 initiatives
Read case study →

Zalando

E-commerce

In the online fashion retail sector, high return rates—often exceeding 30-40% for apparel—stem primarily from fit and sizing uncertainties, as customers cannot physically try on items before purchase. Zalando, Europe's largest fashion e-tailer serving 27 million active customers across 25 markets, faced substantial challenges with these returns, incurring massive logistics costs, environmental impact, and customer dissatisfaction due to inconsistent sizing across over 6,000 brands and 150,000+ products. Traditional size charts and recommendations proved insufficient, with early surveys showing up to 50% of returns attributed to poor fit perception, hindering conversion rates and repeat purchases in a competitive market. This was compounded by the lack of immersive shopping experiences online, leading to hesitation among tech-savvy millennials and Gen Z shoppers who demanded more personalized, visual tools.

Solution

Zalando addressed these pain points by deploying a generative computer vision-powered virtual try-on solution, enabling users to upload selfies or use avatars to see realistic garment overlays tailored to their body shape and measurements. Leveraging machine learning models for pose estimation, body segmentation, and AI-generated rendering, the tool predicts optimal sizes and simulates draping effects, integrating with Zalando's ML platform for scalable personalization. The system combines computer vision (e.g., for landmark detection) with generative AI techniques to create hyper-realistic visualizations, drawing from vast datasets of product images, customer data, and 3D scans, ultimately aiming to cut returns while enhancing engagement. Piloted online and expanded to outlets, it forms part of Zalando's broader AI ecosystem including size predictors and style assistants.

Results

  • 30,000+ customers used virtual fitting room shortly after launch
  • 5-10% projected reduction in return rates
  • Up to 21% fewer wrong-size returns via related AI size tools
  • Expanded to all physical outlets by 2023 for jeans category
  • Supports 27 million customers across 25 European markets
  • Part of AI strategy boosting personalization for 150,000+ products
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Connect Gemini to Your Marketing and CRM Data for a 360° View

To fix low-quality lead scoring, Gemini needs access to the right signals. Start by mapping where critical data lives: CRM (opportunities, deals, industries), marketing automation (email activity, form fills), web analytics (page views, sessions, intent pages), and product or trial usage (if applicable). Work with your marketing ops or data team to expose this data to Gemini via secure APIs or a warehouse layer.

Use Gemini to help you write and QA the data extraction and transformation code. For example, you can prompt Gemini with your schema and ask it to generate Python or SQL to join leads with their historical touchpoints and outcomes.

Prompt example for Gemini (code-focused):
You are a data engineer helping a marketing team build a training dataset
for predictive lead scoring. We have:
- CRM table: deals (id, contact_id, amount, stage, closed_date)
- CRM table: contacts (id, email, company_size, industry, title)
- Marketing table: events (contact_id, event_type, url, timestamp)

Write SQL to produce a lead-level table with one row per contact_id that includes:
- Binary target: converted_to_opportunity (1 if any deal with stage >= 'SQL')
- Aggregated counts of events by type
- Last activity date and number of visits to /pricing and /demo pages.
Return only the SQL.

Expected outcome: Gemini accelerates the creation of a clean, joined dataset that becomes the backbone for your first predictive model, reducing weeks of manual data work to days.
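
To make the expected output more concrete, here is a minimal sketch of the kind of lead-level aggregation such a prompt might produce, written in Python with pandas instead of SQL and using the illustrative file and column names from the prompt above (deals, contacts, events). Treat it as a starting point to adapt, not a finished pipeline:

import pandas as pd

# Sketch only: assumes CSV exports of the tables described in the prompt above.
deals = pd.read_csv("deals.csv", parse_dates=["closed_date"])
contacts = pd.read_csv("contacts.csv")
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Binary target: the contact reached a qualified stage at least once
# (stage labels here are assumptions; use your own CRM's stage names).
qualified = {"SQL", "Opportunity", "Closed Won"}
target = (
    deals[deals["stage"].isin(qualified)]
    .groupby("contact_id").size().gt(0).astype(int)
    .rename("converted_to_opportunity")
)

# Behavioural signals: event counts by type, last activity, key page visits
event_counts = pd.crosstab(events["contact_id"], events["event_type"]).add_prefix("count_")
last_activity = events.groupby("contact_id")["timestamp"].max().rename("last_activity")
intent_visits = (
    events[events["url"].str.contains("/pricing|/demo", na=False, regex=True)]
    .groupby("contact_id").size().rename("pricing_demo_visits")
)

# One row per contact, joined on the contact id
lead_table = (
    contacts.set_index("id")
    .join([event_counts, last_activity, intent_visits, target])
    .fillna({"pricing_demo_visits": 0, "converted_to_opportunity": 0})
)
lead_table.to_csv("leads.csv")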

Use Gemini to Prototype a Simple Predictive Scoring Model

Once you have a dataset, you can use Gemini’s code capabilities to quickly prototype a predictive model in Python, even if your team is not full of data scientists. Start with a standard classifier (e.g. logistic regression, gradient boosting) and let Gemini generate a baseline training script, including feature engineering and evaluation.

Provide Gemini with a description of your columns and desired output, and ask it to produce runnable code that outputs a probability score per lead.

Prompt example for Gemini (modelling-focused):
You are a senior machine learning engineer.
We have a CSV with columns:
- converted_to_opportunity (0/1 target)
- company_size, industry, title
- email_open_count, email_click_count, webinar_attended
- pricing_page_visits, demo_page_visits, last_activity_days_ago

Write Python code using scikit-learn to:
1) Split into train/test
2) Train a gradient boosting classifier
3) Output ROC AUC and a histogram of predicted probabilities
4) Save a CSV with contact_id and predicted probability.
Assume the file is leads.csv.

Expected outcome: Within a short sprint, you have a working predictive lead scoring model to test, instead of debating rules. You can iterate on features and thresholds based on the model’s measured performance.
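
For orientation, the baseline script such a prompt might yield could look roughly like the sketch below. This is our illustration, not Gemini’s actual output; it assumes the leads.csv columns listed in the prompt and adds naive missing-value handling:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("leads.csv")

categorical = ["company_size", "industry", "title"]
numeric = ["email_open_count", "email_click_count", "webinar_attended",
           "pricing_page_visits", "demo_page_visits", "last_activity_days_ago"]

# Naive missing-value handling, good enough for a first baseline
df[categorical] = df[categorical].fillna("unknown")
df[numeric] = df[numeric].fillna(0)

X = df[categorical + numeric]
y = df["converted_to_opportunity"]

model = Pipeline([
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("clf", GradientBoostingClassifier(random_state=42)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
model.fit(X_train, y_train)

print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score every lead and export probabilities for routing and tiering
df["predicted_probability"] = model.predict_proba(X)[:, 1]
df[["contact_id", "predicted_probability"]].to_csv("lead_scores.csv", index=False)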

Translate Model Output into Operational Scores and Playbooks

A probability score alone doesn’t change behaviour. Convert Gemini’s model output into actionable lead score bands with clear next steps for marketing and sales. For example: 0.75+ probability = “Tier A – immediate sales follow-up within 2 hours”; 0.5–0.75 = “Tier B – SDR outreach within 24 hours plus nurturing”; below 0.5 = “marketing nurture only”.

Use Gemini to help you draft playbooks and email sequences tailored to each band. Provide example lead profiles and ask Gemini to propose outreach sequences that match the predicted intent level.

Prompt example for Gemini (playbook-focused):
You are a senior SDR coach.
We have three lead tiers based on an AI score:
- Tier A (0.75+): very high intent
- Tier B (0.5-0.75): medium intent
- Tier C (<0.5): low intent

Create:
1) A 3-touch email + call sequence for Tier A (focus on speed and direct ask for a meeting)
2) A 4-touch education-focused sequence for Tier B
Return in structured bullet points with subject lines and talk tracks.

Expected outcome: Sales and marketing get a concrete, shared operating model linked to the AI score, improving adoption and shortening time-to-first-touch for high-intent prospects.
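
A minimal way to encode these bands in the scoring pipeline, using the example thresholds above (tune them to your own model’s calibration and funnel economics):

def lead_tier(probability: float) -> dict:
    """Map a predicted conversion probability to an operational tier and next step.

    Thresholds mirror the example bands above and should be tuned per model.
    """
    if probability >= 0.75:
        return {"tier": "A", "next_step": "Immediate sales follow-up within 2 hours"}
    if probability >= 0.5:
        return {"tier": "B", "next_step": "SDR outreach within 24 hours plus nurturing"}
    return {"tier": "C", "next_step": "Marketing nurture only"}


print(lead_tier(0.82))  # {'tier': 'A', 'next_step': 'Immediate sales follow-up within 2 hours'}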

Automate Scoring and Routing via APIs and Webhooks

To eliminate manual work, embed Gemini-based scoring into your existing tools. A common pattern is: new lead enters your marketing automation or CRM → a webhook triggers a small service that calls your scoring model (which Gemini helped you build) → the resulting score and tier are written back to the lead record → workflows for routing, notifications, and nurture sequences fire automatically.

Gemini can generate boilerplate code for these integrations in your preferred language (e.g. Node.js, Python) and help you handle authentication, error logging, and edge cases. Use it to scaffold a small microservice that exposes a simple endpoint like /score-lead and can be called from your tools.

Prompt example for Gemini (integration-focused):
You are a senior backend engineer.
Write a minimal Python FastAPI service with one POST /score-lead endpoint.
Input JSON:
{
  "contact_id": "123",
  "features": { ... }
}
The service should:
- Load a pickled scikit-learn model from disk
- Return JSON with {"contact_id", "score", "tier"}
- Include basic logging and error handling.

Expected outcome: New leads are scored and routed in near real-time, removing manual prioritisation and ensuring high-intent prospects are contacted quickly.
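
For reference, a stripped-down version of such a service might look like the sketch below. It assumes a pickled scikit-learn pipeline saved as model.pkl and reuses the tier thresholds from the previous practice, so treat file names and thresholds as placeholders:

import logging
import pickle

import pandas as pd
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("score-lead")

app = FastAPI()

# Load the trained model once at startup (assumed: a pickled sklearn pipeline)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class LeadPayload(BaseModel):
    contact_id: str
    features: dict


@app.post("/score-lead")
def score_lead(payload: LeadPayload):
    try:
        X = pd.DataFrame([payload.features])
        score = float(model.predict_proba(X)[:, 1][0])
    except Exception as exc:
        # Feature mismatches or bad payloads surface as a 422 instead of a crash
        logger.exception("Scoring failed for contact %s", payload.contact_id)
        raise HTTPException(status_code=422, detail=str(exc)) from exc

    tier = "A" if score >= 0.75 else "B" if score >= 0.5 else "C"
    logger.info("Scored contact %s: %.3f (tier %s)", payload.contact_id, score, tier)
    return {"contact_id": payload.contact_id, "score": score, "tier": tier}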

Use Gemini to Explain and Monitor Model Performance

To maintain trust and avoid model drift, create a simple lead scoring performance dashboard and use Gemini to help interpret the results. On a regular cadence (e.g. monthly), export performance data: distribution of scores, conversion rates by tier, and how these compare to pre-AI baselines.

Feed these summaries into Gemini and ask it to highlight anomalies, recommend threshold adjustments, or identify features whose predictive power is changing over time. You can also use Gemini to generate natural-language explanations for individual leads: why they were scored high or low, and which factors contributed most.

Prompt example for Gemini (explainability-focused):
You are an analytics expert.
Here is a table with lead_score_tier, number_of_leads, and conversion_rate.
Here is another table with feature_importances from the model.
1) Summarise how Tier A/B/C are performing vs last quarter.
2) Suggest whether we should adjust the thresholds.
3) Identify any features that seem to be losing or gaining predictive power.
Return actionable recommendations in plain language for marketing and sales leadership.

Expected outcome: Continuous monitoring and clear explanations keep stakeholders confident in the Gemini-powered scoring model and enable regular, data-driven adjustments.
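
A lightweight way to produce the tier-level summary you would paste into such a prompt each month (a sketch; it assumes scored leads and their eventual outcomes are exported to a CSV with the columns shown in the comment):

import pandas as pd

# Assumed export columns: contact_id, tier, converted_to_opportunity, scored_month
scored = pd.read_csv("scored_leads.csv")

summary = (
    scored.groupby(["scored_month", "tier"])
    .agg(number_of_leads=("contact_id", "count"),
         conversion_rate=("converted_to_opportunity", "mean"))
    .reset_index()
)
summary["conversion_rate"] = (summary["conversion_rate"] * 100).round(1)

# Paste this table (plus the model's feature importances) into the Gemini prompt above
print(summary.to_string(index=False))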

Embed Gemini in Day-to-Day Marketing Ops Workflows

Finally, make Gemini a standard tool for your marketing operations team. Beyond the core scoring model, use Gemini in day-to-day work: testing new features to add to the model, simulating the impact of changing thresholds, and helping to clean and normalise incoming lead data (e.g. mapping job titles to standard personas).

Give ops specialists prompt patterns they can reuse when exploring ideas or troubleshooting issues with the scoring system, instead of waiting on scarce data science resources.

Prompt example for Gemini (ops-focused):
You are a marketing ops analyst.
We have 5,000 new leads with free-text job titles.
Create a mapping from titles to 4 personas: Decision Maker, Influencer,
User, Not Relevant.
1) Suggest rules and keyword patterns for the mapping.
2) Output a pseudo-SQL CASE expression implementing it.
3) Flag ambiguous titles we should review manually.

Expected outcome: Your team can evolve and maintain the AI lead scoring system as part of normal operations, without turning every change into a big project.
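
As a starting point for the kind of mapping the prompt asks for, a simple keyword-based classifier might look like the sketch below. The keyword patterns are illustrative assumptions to refine with Gemini’s suggestions, and ambiguous titles fall back to manual review:

import re

# Illustrative keyword patterns per persona; extend with Gemini's suggestions
PERSONA_PATTERNS = {
    "Decision Maker": r"\b(ceo|cfo|cmo|vp|vice president|head of|director|founder)\b",
    "Influencer": r"\b(manager|team lead|principal|architect)\b",
    "User": r"\b(specialist|analyst|coordinator|associate|engineer)\b",
    "Not Relevant": r"\b(student|intern|freelancer|retired)\b",
}


def map_title(title: str) -> str:
    """Return the single matching persona, or flag the title for manual review."""
    t = (title or "").lower()
    matches = [persona for persona, pattern in PERSONA_PATTERNS.items()
               if re.search(pattern, t)]
    if len(matches) == 1:
        return matches[0]
    return "Review manually"  # no match, or ambiguous (multiple matches)


print(map_title("VP of Marketing"))    # Decision Maker
print(map_title("Marketing Manager"))  # Influencer
print(map_title("Growth Ninja"))       # Review manually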

When executed well, these practices typically lead to realistic, measurable outcomes: 10–30% improvement in lead-to-opportunity conversion in the targeted segment, faster response times to high-intent leads, and a noticeable reduction in sales time spent on poor-fit contacts. The exact numbers depend on your baseline and data quality, but a well-implemented Gemini-based lead scoring system almost always sharpens focus on the right prospects and makes your marketing spend work harder.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

Gemini improves lead scoring by learning from your real historical data instead of relying on static, opinion-based rules. It can analyse which combinations of attributes and behaviours (industry, company size, engagement patterns, pages visited, email interactions) are most predictive of opportunities and deals in your funnel.

Instead of “+10 points for a webinar”, Gemini helps you build a predictive model that outputs a probability of conversion for each lead. This allows you to prioritise leads based on their true likelihood to move forward, not just activity volume, and to continually refine the model as new data comes in.

For a focused scope (e.g. one product and region), a first working version of Gemini-powered lead scoring is realistic within 4–8 weeks, assuming you have access to the necessary data. A typical timeline looks like this:

  • Week 1–2: Data access, scope definition, and extraction of historical leads and outcomes.
  • Week 3–4: Prototype model with Gemini (feature engineering, training, evaluation), review with marketing and sales.
  • Week 5–6: Integrate scoring into CRM/marketing tools, define tiers and playbooks, run a controlled live test.
  • Week 7–8: Measure impact, adjust thresholds, and plan scaling to additional segments.

Reruption’s AI PoC approach is designed to validate feasibility and value within this kind of tight timeframe before you commit to broader rollout.

You don’t need a full data science department, but you do need a few key roles. At minimum: a marketing or revenue operations person who understands your funnel and tools, a technical stakeholder who can help with data access and simple integrations, and a sales leader who can define what a qualified lead looks like and drive adoption.

Gemini covers much of the heavy lifting around code generation, analysis, and even prompt-based exploration, so your team’s focus shifts to domain expertise and decision-making rather than low-level coding. Reruption typically brings in the missing pieces—AI engineering, architecture, and experimentation know-how—so your internal team can learn and eventually own the system.

Realistic outcomes from AI-driven lead scoring depend on your starting point, but in many organisations we see improvements such as:

  • Higher lead-to-opportunity conversion in targeted segments (often 10–30% uplift where baseline scoring was weak).
  • Reduced time-to-first-touch for high-intent leads, as routing and prioritisation become automated.
  • Sales productivity gains, because reps spend more time on leads that are actually likely to convert.

On the cost side, most investments are concentrated in initial data work, model setup, and integrating scoring into your existing stack. Using Gemini to accelerate analysis and development typically reduces these implementation costs compared to building everything manually. ROI is best tracked through revenue metrics: incremental opportunities, pipeline value, and deals attributed to leads surfaced by the new scoring model.

Reruption supports organisations end-to-end in building Gemini-based lead scoring into their marketing and sales stack. With our AI PoC offering (€9,900), we focus on a specific use case—such as predictive scoring for one product line—and deliver a working prototype, performance metrics, and a concrete production plan.

Beyond the PoC, our Co-Preneur approach means we embed with your team like a co-founder: we work in your tools, challenge assumptions about what makes a “good lead”, and ship real scoring workflows and integrations rather than slideware. We bring the AI engineering and product skills; your team brings domain knowledge and ownership. Together, we can move from gut-feel lead scoring to a robust, AI-first system that your marketing and sales teams actually use.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media