The Challenge: Poor Send Time Optimization

Most marketing teams still send campaigns in broad waves: one global send, maybe a few time zones, and hope for the best. The reality is that every customer checks email, apps, and social feeds at different times. When you ignore this, even your best campaigns arrive when people are asleep, in meetings, or simply not in a discovery mindset.

Traditional approaches like fixed send windows, basic time-zone grouping, or manual A/B testing can no longer keep up. They treat audiences as blocks instead of individuals and rely on historical averages rather than real-time behavior. With fragmented channels and always-on journeys, static rules fail to capture patterns like “weekend-only openers”, “commuters checking mobile at 7:30”, or “night owls who only scroll after 22:00”.

The business impact is clear: lower open and click-through rates, higher unsubscribe risk, and wasted media and creative budgets. Messages get buried under more timely competitors, retargeting windows are missed, and carefully crafted personalization never gets a chance to perform because it arrives at the wrong moment. Over time, this erodes channel revenue, customer satisfaction, and trust in your marketing analytics.

The good news: poor send time optimization is very solvable. With modern AI, you can learn the unique engagement rhythm of each user and orchestrate sends accordingly across email, push, and in-app. At Reruption, we’ve helped teams move from static rules to AI-first workflows, turning raw engagement logs into practical decision engines. Below, you’ll find a concrete, marketing-friendly path to using Gemini to fix send time optimization and unlock the performance your campaigns actually deserve.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.


Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s experience building AI-first marketing workflows, poor send time optimization is usually not a creativity problem – it is a data and orchestration problem. Gemini gives marketing and data teams a practical way to turn raw engagement logs into per-user send time predictions, prototype models quickly, and then embed those predictions into your ESP or CDP without waiting for a multi-year martech overhaul.

Start with a Clear Send-Time Strategy, Not Just a Model

Before touching any Gemini API, define what “good send time optimization” means for your business. Are you optimizing for opens, downstream revenue, or a balance between performance and operational constraints (e.g. not sending SMS at night)? Agree on target metrics, key channels (email, push, in-app), and guardrails like quiet hours or regulatory restrictions.

This strategy acts as the decision layer above the model. It prevents teams from overfitting to open rates while ignoring brand impact or customer experience. Having a documented send-time strategy also makes it easier to align marketing, CRM, and data teams on what the Gemini models are supposed to deliver.

Treat Send Time Optimization as an Ongoing Product, Not a One-Off Project

Effective AI-powered send time optimization is never “done”. Customer habits shift with seasons, promotions, and even macro trends. If you treat the first Gemini model as a final deliverable, it will quickly become stale and underperform.

Instead, treat it as a product with a backlog: model improvements, new signals (e.g. app usage, web visits), and experiment ideas. Define a small responsible squad (marketing operations, data science/engineering, and a product owner) and give them ownership for the send-time optimization roadmap and KPIs. This mindset unlocks continuous performance gains instead of a one-time lift.

Design for Collaboration Between Marketers and Data Teams

Many send-time initiatives fail because marketers can’t access or interpret the models, and data teams don’t fully understand campaign constraints. With Gemini, you can bridge this gap by using it to generate SQL, explain model logic in plain language, and prototype experiments together in shared workspaces.

Strategically, set up recurring working sessions where marketing defines hypotheses (e.g. “weekday morning is best only for B2B buyers”) and data teams use Gemini to validate or refute them on historical data. This creates a shared understanding of what the model is actually doing and builds trust in the predictions when they hit the ESP/CDP.

Mitigate Risk with Guardrails and Incremental Rollouts

Jumping directly from a global send to a fully personalized schedule for all users introduces delivery and brand risks. Strategically, you want risk-mitigated AI adoption: start small, define guardrails, then scale with evidence. With Gemini, you can simulate predictions offline and compare against your current baseline before touching production traffic.

Roll out in phases: first for a single campaign type (e.g. newsletters), then for specific segments (e.g. high-intent users), and only later for transactional or critical messages. Set explicit performance thresholds – for example, “only scale if open rate improves by at least 8% with no increase in unsubscribe rate”. This makes the change defensible towards leadership and compliance.

Plan Early for Integration into ESPs and CDPs

AI for send time optimization only creates value when predictions actually drive sends. Strategically, you need a roadmap for how Gemini-generated send-time scores will flow into your ESP or CDP. That means clarifying which system is the source of truth for customer profiles and which tool orchestrates delivery.

Involve marketing ops and engineering early to map out data flows: from raw engagement logs, through Gemini-based modeling, into a prediction store, and finally into the orchestration layer. Having this architecture on paper avoids the common trap of building an impressive model that never leaves a notebook.

Using Gemini for send time optimization is less about fancy algorithms and more about building a focused, integrated decision engine that actually controls when messages are sent. When strategy, collaboration, guardrails, and integration are aligned, you can systematically lift engagement while improving customer experience. Reruption’s Co-Preneur approach and AI PoC work are designed exactly for this kind of challenge: we enter your stack, co-own the KPIs, and build the Gemini-powered workflows that make poor send times a thing of the past. If you’re serious about fixing this, a short conversation is often enough to outline a concrete, low-risk path forward.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From Food Manufacturing to Banking: Learn how companies successfully use Gemini.

PepsiCo (Frito-Lay)

Food Manufacturing

In the fast-paced food manufacturing industry, PepsiCo's Frito-Lay division grappled with unplanned machinery downtime that disrupted high-volume production lines for snacks like Lay's and Doritos. These lines operate 24/7, where even brief failures could cost thousands of dollars per hour in lost capacity—industry estimates peg average downtime at $260,000 per hour in manufacturing. Perishable ingredients and just-in-time supply chains amplified losses, leading to high maintenance costs from reactive repairs, which are 3-5x more expensive than planned ones. Frito-Lay plants faced frequent issues with critical equipment like compressors, conveyors, and fryers, where micro-stops and major breakdowns eroded overall equipment effectiveness (OEE). Worker fatigue from extended shifts compounded risks, as noted in reports of grueling 84-hour weeks, indirectly stressing machines further. Without predictive insights, maintenance teams relied on fixed schedules or waited for breakdowns, resulting in lost production capacity and an inability to meet spikes in consumer demand.

Solution

PepsiCo deployed machine learning predictive maintenance across Frito-Lay factories, leveraging sensor data from IoT devices on equipment to forecast failures days or weeks ahead. Models analyzed vibration, temperature, pressure, and usage patterns using algorithms like random forests and deep learning for time-series forecasting. Partnering with cloud platforms like Microsoft Azure Machine Learning and AWS, PepsiCo built scalable systems integrating real-time data streams for just-in-time maintenance alerts. This shifted from reactive to proactive strategies, optimizing schedules during low-production windows and minimizing disruptions. Implementation involved pilot testing in select plants before full rollout, overcoming data silos through advanced analytics.

Results

  • 4,000 extra production hours gained annually
  • 50% reduction in unplanned downtime
  • 30% decrease in maintenance costs
  • 95% accuracy in failure predictions
  • 20% increase in OEE (Overall Equipment Effectiveness)
  • $5M+ annual savings from optimized repairs
Read case study →

HSBC

Banking

As a global banking titan handling trillions in annual transactions, HSBC grappled with escalating fraud and money laundering risks. Traditional systems struggled to process over 1 billion transactions monthly, generating excessive false positives that burdened compliance teams, slowed operations, and increased costs. Ensuring real-time detection while minimizing disruptions to legitimate customers was critical, alongside strict regulatory compliance in diverse markets. Customer service faced high volumes of inquiries requiring 24/7 multilingual support, straining resources. Simultaneously, HSBC sought to pioneer generative AI research for innovation in personalization and automation, but challenges included ethical deployment, maintaining human oversight as AI capabilities advanced, data privacy, and integration across legacy systems without compromising security. Scaling these solutions globally demanded robust governance to maintain trust and adhere to evolving regulations.

Solution

HSBC tackled fraud with machine learning models powered by Google Cloud's Transaction Monitoring 360, enabling AI to detect anomalies and financial crime patterns in real-time across vast datasets. This shifted from rigid rules to dynamic, adaptive learning. For customer service, NLP-driven chatbots were rolled out to handle routine queries, provide instant responses, and escalate complex issues, enhancing accessibility worldwide. In parallel, HSBC advanced generative AI through internal research, sandboxes, and a landmark multi-year partnership with Mistral AI (announced December 2024), integrating tools for document analysis, translation, fraud enhancement, automation, and client-facing innovations—all under ethical frameworks with human oversight.

Results

  • Screens over 1 billion transactions monthly for financial crime
  • Significant reduction in false positives and manual reviews (up to 60–90% in some models)
  • Hundreds of AI use cases deployed across global operations
  • Multi-year Mistral AI partnership (Dec 2024) to accelerate genAI productivity
  • Enhanced real-time fraud alerts, reducing compliance workload
Read case study →

BP

Energy

BP, a global energy leader in oil, gas, and renewables, grappled with high energy costs during peak periods across its extensive assets. Volatile grid demands and price spikes during high-consumption times strained operations, exacerbating inefficiencies in energy production and consumption. Integrating intermittent renewable sources added forecasting challenges, while traditional management failed to dynamically respond to real-time market signals, leading to substantial financial losses and grid instability risks. Compounding this, BP's diverse portfolio—from offshore platforms to data-heavy exploration—faced data silos and legacy systems ill-equipped for predictive analytics. Peak energy expenses not only eroded margins but hindered the transition to sustainable operations amid rising regulatory pressures for emissions reduction. The company needed a solution to shift loads intelligently and monetize flexibility in energy markets.

Solution

To tackle these issues, BP acquired Open Energi in 2021, gaining access to its flagship Plato AI platform, which employs machine learning for predictive analytics and real-time optimization. Plato analyzes vast datasets from assets, weather, and grid signals to forecast peaks and automate demand response, shifting non-critical loads to off-peak times while participating in frequency response services. Integrated into BP's operations, the AI enables dynamic containment and flexibility markets, optimizing consumption without disrupting production. Combined with BP's internal AI for exploration and simulation, it provides end-to-end visibility, reducing reliance on fossil fuels during peaks and enhancing renewable integration. This acquisition marked a strategic pivot, blending Open Energi's demand-side expertise with BP's supply-side scale.

Results

  • $10 million in annual energy savings
  • 80+ MW of energy assets under flexible management
  • Strongest oil exploration performance in years via AI
  • Material boost in electricity demand optimization
  • Reduced peak grid costs through dynamic response
  • Enhanced asset efficiency across oil, gas, renewables
Read case study →

Mayo Clinic

Healthcare

As a leading academic medical center, Mayo Clinic manages millions of patient records annually, but early detection of heart failure remains elusive. Traditional echocardiography detects low left ventricular ejection fraction (LVEF <50%) only once patients become symptomatic, missing the asymptomatic cases that account for up to 50% of heart failure risk. Clinicians struggle with vast unstructured data, slowing retrieval of patient-specific insights and delaying decisions in high-stakes cardiology. Additionally, workforce shortages and rising costs exacerbate challenges, with cardiovascular diseases causing 17.9M deaths yearly globally. Manual ECG interpretation misses subtle patterns predictive of low EF, and sifting through electronic health records (EHRs) takes hours, hindering personalized medicine. Mayo needed scalable AI to transform reactive care into proactive prediction.

Solution

Mayo Clinic deployed a deep learning ECG algorithm trained on over 1 million ECGs, identifying low LVEF from routine 10-second traces with high accuracy. This ML model extracts features invisible to humans, validated internally and externally. In parallel, a generative AI search tool via Google Cloud partnership accelerates EHR queries. Launched in 2023, it uses large language models (LLMs) for natural language searches, surfacing clinical insights instantly. Integrated into Mayo Clinic Platform, it supports 200+ AI initiatives. These solutions overcome data silos through federated learning and secure cloud infrastructure.

Results

  • ECG AI AUC: 0.93 (internal), 0.92 (external validation)
  • Low EF detection sensitivity: 82% at 90% specificity
  • Asymptomatic low EF identified: 1.5% prevalence in screened population
  • GenAI search speed: 40% reduction in query time for clinicians
  • Model trained on: 1.1M ECGs from 44K patients
  • Deployment reach: Integrated in Mayo cardiology workflows since 2021
Read case study →

H&M

Apparel Retail

In the fast-paced world of apparel retail, H&M faced intense pressure from rapidly shifting consumer trends and volatile demand. Traditional forecasting methods struggled to keep up, leading to frequent stockouts during peak seasons and massive overstock of unsold items, which contributed to high waste levels and tied up capital. Reports indicate H&M's inventory inefficiencies cost millions annually, with overproduction exacerbating environmental concerns in an industry notorious for excess. Compounding this, global supply chain disruptions and competition from agile rivals like Zara amplified the need for precise trend forecasting. H&M's legacy systems relied on historical sales data alone, missing real-time signals from social media and search trends, resulting in misallocated inventory across 5,000+ stores worldwide and suboptimal sell-through rates.

Solution

H&M deployed AI-driven predictive analytics to transform its approach, integrating machine learning models that analyze vast datasets from social media, fashion blogs, search engines, and internal sales. These models predict emerging trends weeks in advance and optimize inventory allocation dynamically. The solution involved partnering with data platforms to scrape and process unstructured data, feeding it into custom ML algorithms for demand forecasting. This enabled automated restocking decisions, reducing human bias and accelerating response times from months to days.

Results

  • 30% increase in profits from optimized inventory
  • 25% reduction in waste and overstock
  • 20% improvement in forecasting accuracy
  • 15-20% higher sell-through rates
  • 14% reduction in stockouts
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Map and Prepare Your Engagement Data for Gemini

Effective send time optimization with Gemini starts with clean, well-structured engagement data. Begin by mapping where events live today: email sends, opens, clicks, push notifications, app sessions, and web visits. Standardize timestamps to a single time zone (e.g. UTC) and make sure you capture the user identifier, channel, and event type for each log entry.

Create a simplified engagement table that Gemini can work with, for example:

user_id | channel   | event_type | event_timestamp       | campaign_id
123     | email     | open       | 2025-10-11 07:31:02   | spring_newsletter
123     | web       | pageview   | 2025-10-11 07:35:10   | /product/123
...

Use Gemini to help you generate SQL that aggregates this data into per-user, per-hour engagement features (e.g. open counts by hour-of-day, day-of-week). This becomes the input for your send-time modeling.
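If you prefer to sanity-check the aggregation in a notebook before committing warehouse SQL, a minimal pandas sketch could look like the following. The file name and column names follow the example table above and are assumptions about your own export.

import pandas as pd

# Assumed export of the engagement table shown above (illustrative file/column names).
events = pd.read_csv("engagement_events.csv", parse_dates=["event_timestamp"])

# Keep only events that signal engagement (opens and clicks).
engaged = events[events["event_type"].isin(["open", "click"])].copy()
engaged["hour_of_day"] = engaged["event_timestamp"].dt.hour
engaged["day_of_week"] = engaged["event_timestamp"].dt.dayofweek  # 0 = Monday

# Count engagement events per user, channel, and hour of day.
hourly_features = (
    engaged.groupby(["user_id", "channel", "hour_of_day"])
    .size()
    .reset_index(name="event_count")
)

hourly_features.to_csv("user_hourly_engagement.csv", index=False)

The same aggregation can be expressed directly as SQL in your warehouse; the notebook version is simply faster to iterate on while you agree on the feature definitions.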

Use Gemini to Prototype a Simple Per-User Send-Time Score

Instead of jumping directly into a complex model, start with a simple heuristic-based score that Gemini can help you design and validate. For each user, calculate their “preferred hour” based on historical engagement patterns.

You can use Gemini in a notebook or Workspace environment to draft and refine the logic:

Prompt to Gemini (for data teams):
"""
You are a data assistant. I have a table `user_email_events` with:
- user_id
- event_type (send, open, click)
- event_timestamp (UTC)

Write SQL that, for each user_id, calculates:
- total opens by hour of day (0-23)
- the hour with the highest open count (preferred_hour)
- a confidence score based on how dominant that hour is vs. others

Return a view `user_send_time_preferences` with:
user_id, preferred_hour, confidence_score
"""

Review the generated SQL with your data team, run it on a subset, and inspect the output. This gives you a baseline model that can already be pushed into your ESP/CDP as a custom field.
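As a reference point when reviewing Gemini's output, here is a minimal pandas sketch of the same heuristic, assuming a flat export of user_email_events. Confidence is defined here as the share of a user's opens that fall into their top hour, which is one reasonable definition, not the only one.

import pandas as pd

events = pd.read_csv("user_email_events.csv", parse_dates=["event_timestamp"])
opens = events[events["event_type"] == "open"].copy()
opens["hour"] = opens["event_timestamp"].dt.hour

# Opens per user and hour of day.
opens_by_hour = (
    opens.groupby(["user_id", "hour"]).size().reset_index(name="open_count")
)
totals = (
    opens_by_hour.groupby("user_id")["open_count"].sum().rename("total_opens").reset_index()
)

# Keep each user's most-opened hour and measure how dominant it is.
top = (
    opens_by_hour.sort_values("open_count", ascending=False)
    .drop_duplicates("user_id")
    .merge(totals, on="user_id")
)
top["confidence_score"] = top["open_count"] / top["total_opens"]

prefs = top.rename(columns={"hour": "preferred_hour"})[
    ["user_id", "preferred_hour", "confidence_score"]
]
prefs.to_csv("user_send_time_preferences.csv", index=False)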

Generate and Operationalize Feature Engineering with Gemini

To move beyond naive heuristics, you need richer features: recency, frequency, weekday/weekend patterns, mobile vs desktop behavior, and cross-channel engagement. Gemini can speed up feature ideation and coding by translating natural language ideas into SQL or Python.

Prompt to Gemini:
"""
I want to engineer features for a send-time optimization model.
Given a table of email events (send, open, click) with timestamps, propose
10 useful features at user_id & channel level and write Python (pandas)
code to calculate them.

Consider:
- day of week patterns
- hour of day patterns
- recency of last open
- engagement intensity segments

Return only code and short comments.
"""

Use the generated code as a starting point in your pipeline. Store the resulting features in a feature table that both Gemini and your production systems can access, so you don’t duplicate work later.
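For illustration, a small pandas sketch of a few such features might look like this. Column names match the earlier prompt, the feature set is deliberately reduced, and the recency reference point is taken from the data itself rather than the current run time.

import pandas as pd

events = pd.read_csv("user_email_events.csv", parse_dates=["event_timestamp"])
opens = events[events["event_type"] == "open"].copy()
opens["hour"] = opens["event_timestamp"].dt.hour
opens["is_weekend"] = opens["event_timestamp"].dt.dayofweek >= 5

# Reference point for recency; in a scheduled pipeline you would use the run time instead.
reference_time = opens["event_timestamp"].max()

features = opens.groupby("user_id").agg(
    total_opens=("event_type", "size"),
    weekend_open_share=("is_weekend", "mean"),
    median_open_hour=("hour", "median"),
    last_open=("event_timestamp", "max"),
)
features["days_since_last_open"] = (reference_time - features["last_open"]).dt.days
features = features.drop(columns="last_open").reset_index()
features.to_csv("send_time_features.csv", index=False)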

Connect Gemini Predictions to Your ESP/CDP for Orchestrated Sends

Once you have per-user send-time scores or model predictions, the next step is connecting them to your ESP/CDP. Create or reuse custom fields such as best_send_hour, best_send_dow, and send_time_confidence in your customer profiles.

Use Gemini to help design the orchestration logic, then translate it into ESP/CDP workflows. For example:

Prompt to Gemini (for marketing ops):
"""
I have the following fields in my CDP:
- best_send_hour (0-23, in user's local time)
- best_send_dow (1-7)
- send_time_confidence (0-1)

We use ESP X, which supports scheduled sends and segments.
Draft a step-by-step configuration plan to:
1) Create segments based on confidence score
2) Schedule batch sends respecting best_send_hour and best_send_dow
3) Fallback to a global send time when confidence < 0.3

Explain each step clearly so a marketing ops manager can implement it.
"""

Implement the suggested steps in your tools, test with a small campaign, and validate that the ESP/CDP actually sends at the predicted times.
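As a sanity check on the orchestration logic before you configure anything in the ESP, a minimal Python sketch of the fallback rule could look like this. The field names follow the CDP fields above; the 10:00 global default and the 0.3 threshold are illustrative values, not recommendations.

from datetime import datetime

GLOBAL_SEND_HOUR = 10        # illustrative global fallback, in the user's local time
CONFIDENCE_THRESHOLD = 0.3   # illustrative cut-off, matching the prompt above

def next_send_time(profile: dict, campaign_date: datetime) -> datetime:
    """Return the local send time for one user on a given campaign date."""
    confidence = profile.get("send_time_confidence", 0.0)
    if confidence < CONFIDENCE_THRESHOLD:
        hour = GLOBAL_SEND_HOUR          # low confidence: fall back to the global send time
    else:
        hour = int(profile["best_send_hour"])
    return campaign_date.replace(hour=hour, minute=0, second=0, microsecond=0)

# A high-confidence user is scheduled at 07:00, a low-confidence user at 10:00.
print(next_send_time({"best_send_hour": 7, "send_time_confidence": 0.8}, datetime(2025, 10, 11)))
print(next_send_time({"best_send_hour": 7, "send_time_confidence": 0.1}, datetime(2025, 10, 11)))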

Set Up Continuous A/B Tests and Monitoring with Gemini Assistance

To prove value and keep improving, run controlled experiments. Randomly split your audience: one group uses AI-optimized send times, the other keeps the current schedule. Track open rate, click rate, conversion rate, unsubscribe rate, and delivery metrics.

Gemini can help you design the experiment and analyze results:

Prompt to Gemini:
"""
We ran an A/B test on email send times:
- Group A: global send at 10:00 local time
- Group B: AI-optimized send times using `best_send_hour`

Here are the metrics for each group (in CSV):
[PASTE METRICS]

1) Check if improvements are statistically significant
2) Summarize the results in non-technical language for executives
3) Recommend next steps for scaling or iterating the model
"""

Use the analysis to refine your targeting rules, adjust model thresholds, and build a performance report that justifies scaling the approach across more journeys and channels.
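If your team wants to double-check the significance result outside of Gemini, a standard two-proportion z-test is one common approach. The sketch below uses statsmodels; the counts are placeholders to be replaced with your own campaign numbers.

from statsmodels.stats.proportion import proportions_ztest

# Placeholder numbers - replace with your actual campaign results.
opens = [4100, 4650]           # unique opens: group A (global send), group B (AI-optimized)
recipients = [50000, 50000]    # delivered emails per group

stat, p_value = proportions_ztest(count=opens, nobs=recipients)
lift = opens[1] / recipients[1] - opens[0] / recipients[0]

print(f"Open-rate lift: {lift:.2%}, p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The difference is not statistically significant - keep collecting data.")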

Build Marketing-Friendly Documentation and Playbooks with Gemini

Adoption often fails because marketers don’t understand how send-time decisions are made. Use Gemini to turn technical documentation into clear, role-specific playbooks: how send-time fields work, when they are updated, and how to use them in campaigns.

Prompt to Gemini:
"""
Here is a technical description of our send-time optimization pipeline:
[PASTE TECH DOC]

Rewrite this into a 2-page internal guide for campaign managers:
- Plain language, no math
- Explain what best_send_hour and best_send_dow mean
- How and when to use them in email and push campaigns
- Common pitfalls and FAQ
"""

Store these guides in your internal wiki and link them directly from your ESP/CDP so campaign owners can self-serve instead of opening tickets.

Implemented step by step, these best practices typically deliver realistic gains such as +5–15% email open rates, +5–10% click rates, and modest but meaningful improvements in downstream conversions. The exact uplift depends on your baseline and data quality, but with a structured Gemini-powered approach, you can expect visible improvements within a few campaign cycles rather than quarters.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Gemini improve send time optimization compared to traditional approaches?

Gemini lets you move from coarse, segment-level rules to per-user send time predictions. Instead of assuming “everyone in CET should get emails at 10:00”, Gemini can analyze historical engagement logs (opens, clicks, app sessions) to infer each user’s preferred hours and days for interaction.

Practically, this means your ESP/CDP receives fields like best_send_hour and best_send_dow for each user, which then drive scheduling logic. Over time, the model can learn patterns that simple rules miss, such as users who only engage on weekends or during evening hours, leading to higher open and click rates.

What skills and team setup do we need to get started?

You’ll get the most value from Gemini if you can combine marketing operations, data engineering/analytics, and basic cloud skills. Someone needs to access engagement logs, prepare them for modeling, and set up a small pipeline that feeds predictions into your ESP/CDP.

The good news is that Gemini reduces the heavy lifting: it can generate SQL and Python for feature engineering, help design experiments, and translate technical logic into plain language for marketers. Many teams start with 1–2 data people (analyst/engineer) and a marketing ops specialist, then grow from there as the impact becomes clear.

How quickly can we see results?

For most organizations with existing engagement data, you can see first results within a few weeks. A typical phased approach looks like this:

  • Week 1–2: Data extraction, cleaning, and basic heuristic model (preferred hour/day).
  • Week 3–4: Integration of predictions into ESP/CDP and first A/B test on a single campaign type.
  • Week 5–8: Model refinement, broader rollout across more segments and channels, and performance reporting.

Because Gemini accelerates data exploration and code generation, the early stages (data prep and baseline modeling), where most delays typically occur, are usually much faster than in traditional projects.

What ROI can we expect from better send time optimization?

ROI depends on your baseline performance, list size, and campaign volume, but send time optimization is usually a high-leverage improvement. Many teams see 5–15% lifts in open rates and 5–10% in click rates when moving from global sends to personalized timing, especially if their current setup is very basic.

Because the underlying content and audience stay the same, any uplift is essentially “free leverage” on your existing budget. The main costs are initial setup and some ongoing maintenance. Gemini helps reduce both by automating analysis, code generation, and documentation – which shortens time-to-value and lowers the internal effort required to keep the models useful.

How can Reruption help us implement this?

Reruption works with a Co-Preneur approach: we embed with your team like co-founders, not distant consultants. For send time optimization, that usually starts with our AI PoC offering (9,900€), where we validate – in a working prototype – that Gemini can use your real engagement data to generate actionable send-time predictions.

From there, we help you design the data pipeline, integrate predictions into your ESP/CDP, and set up the experiments and dashboards that prove business impact. Because we focus on AI engineering and enablement, we don’t just leave you with slides – we build the actual workflows, document them, and upskill your team so you own the solution. If you want to move from “we should personalize send times” to a live Gemini-powered system in weeks, not months, this is exactly the kind of project we like to co-own.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
