The Challenge: Poor Send Time Optimization

Most marketing teams still send campaigns in broad waves: one global send, maybe a few time zones, and hope for the best. The reality is that every customer checks email, apps, and social feeds at different times. When you ignore this, even your best campaigns arrive when people are asleep, in meetings, or simply not in a discovery mindset.

Traditional approaches like fixed send windows, basic time-zone grouping, or manual A/B testing can no longer keep up. They treat audiences as blocks instead of individuals and rely on historical averages rather than real-time behavior. With fragmented channels and always-on journeys, static rules fail to capture patterns like “weekend-only openers”, “commuters checking mobile at 7:30”, or “night owls who only scroll after 22:00”.

The business impact is clear: lower open and click-through rates, higher unsubscribe risk, and wasted media and creative budgets. Messages get buried under more timely competitors, retargeting windows are missed, and carefully crafted personalization never gets a chance to perform because it arrives at the wrong moment. Over time, this erodes channel revenue, customer satisfaction, and trust in your marketing analytics.

The good news: poor send time optimization is very solvable. With modern AI, you can learn the unique engagement rhythm of each user and orchestrate sends accordingly across email, push, and in-app. At Reruption, we’ve helped teams move from static rules to AI-first workflows, turning raw engagement logs into practical decision engines. Below, you’ll find a concrete, marketing-friendly path to using Gemini to fix send time optimization and unlock the performance your campaigns actually deserve.

Need a sparring partner for this challenge?

Let's have a no-obligation chat and brainstorm together.

Innovators at these companies trust us:

Our Assessment

A strategic assessment of the challenge and high-level tips on how to tackle it.

From Reruption’s experience building AI-first marketing workflows, poor send time optimization is usually not a creativity problem – it is a data and orchestration problem. Gemini gives marketing and data teams a practical way to turn raw engagement logs into per-user send time predictions, prototype models quickly, and then embed those predictions into your ESP or CDP without waiting for a multi-year martech overhaul.

Start with a Clear Send-Time Strategy, Not Just a Model

Before touching any Gemini API, define what “good send time optimization” means for your business. Are you optimizing for opens, downstream revenue, or a balance between performance and operational constraints (e.g. not sending SMS at night)? Agree on target metrics, key channels (email, push, in-app), and guardrails like quiet hours or regulatory restrictions.

This strategy acts as the decision layer above the model. It prevents teams from overfitting to open rates while ignoring brand impact or customer experience. Having a documented send-time strategy also makes it easier to align marketing, CRM, and data teams on what the Gemini models are supposed to deliver.

Treat Send Time Optimization as an Ongoing Product, Not a One-Off Project

Effective AI-powered send time optimization is never “done”. Customer habits shift with seasons, promotions, and even macro trends. If you treat the first Gemini model as a final deliverable, it will quickly become stale and underperform.

Instead, treat it as a product with a backlog: model improvements, new signals (e.g. app usage, web visits), and experiment ideas. Define a small responsible squad (marketing operations, data science/engineering, and a product owner) and give them ownership for the send-time optimization roadmap and KPIs. This mindset unlocks continuous performance gains instead of a one-time lift.

Design for Collaboration Between Marketers and Data Teams

Many send-time initiatives fail because marketers can’t access or interpret the models, and data teams don’t fully understand campaign constraints. With Gemini, you can bridge this gap by using it to generate SQL, explain model logic in plain language, and prototype experiments together in shared workspaces.

Strategically, set up recurring working sessions where marketing defines hypotheses (e.g. “weekday morning is best only for B2B buyers”) and data teams use Gemini to validate or refute them on historical data. This creates a shared understanding of what the model is actually doing and builds trust in the predictions when they hit the ESP/CDP.

Mitigate Risk with Guardrails and Incremental Rollouts

Jumping directly from a global send to a fully personalized schedule for all users introduces delivery and brand risks. Strategically, you want risk-mitigated AI adoption: start small, define guardrails, then scale with evidence. With Gemini, you can simulate predictions offline and compare against your current baseline before touching production traffic.

Roll out in phases: first for a single campaign type (e.g. newsletters), then for specific segments (e.g. high-intent users), and only later for transactional or critical messages. Set explicit performance thresholds – for example, “only scale if open rate improves by at least 8% with no increase in unsubscribe rate”. This makes the change defensible to leadership and compliance.

Plan Early for Integration into ESPs and CDPs

AI for send time optimization only creates value when predictions actually drive sends. Strategically, you need a roadmap for how Gemini-generated send-time scores will flow into your ESP or CDP. That means clarifying which system is the source of truth for customer profiles and which tool orchestrates delivery.

Involve marketing ops and engineering early to map out data flows: from raw engagement logs, through Gemini-based modeling, into a prediction store, and finally into the orchestration layer. Having this architecture on paper avoids the common trap of building an impressive model that never leaves a notebook.

Using Gemini for send time optimization is less about fancy algorithms and more about building a focused, integrated decision engine that actually controls when messages are sent. When strategy, collaboration, guardrails, and integration are aligned, you can systematically lift engagement while improving customer experience. Reruption’s Co-Preneur approach and AI PoC work are designed exactly for this kind of challenge: we enter your stack, co-own the KPIs, and build the Gemini-powered workflows that make poor send times a thing of the past. If you’re serious about fixing this, a short conversation is often enough to outline a concrete, low-risk path forward.

Need help implementing these ideas?

Feel free to reach out to us with no obligation.

Real-World Case Studies

From streaming media to healthcare: learn how companies successfully use AI.

Netflix

Streaming Media

With over 17,000 titles and growing, Netflix faced the classic cold start problem and data sparsity in recommendations, where new users or obscure content lacked sufficient interaction data, leading to poor personalization and higher churn rates. Viewers often struggled to discover engaging content among thousands of options, resulting in prolonged browsing times and disengagement—estimated at up to 75% of session time wasted on searching rather than watching. This risked subscriber loss in a competitive streaming market, where retaining users costs far less than acquiring new ones. Scalability was another hurdle: handling 200M+ subscribers generating billions of daily interactions required processing petabytes of data in real time, while evolving viewer tastes demanded adaptive models beyond traditional collaborative filtering limitations like the popularity bias favoring mainstream hits. Early systems post-Netflix Prize (2006-2009) improved accuracy but struggled with contextual factors like device, time, and mood.

Solution

Netflix built a hybrid recommendation engine combining collaborative filtering (CF)—starting with FunkSVD and Probabilistic Matrix Factorization from the Netflix Prize—and advanced deep learning models for embeddings and predictions. They consolidated multiple use-case models into a single multi-task neural network, improving performance and maintainability while supporting search, home page, and row recommendations. Key innovations include contextual bandits for exploration-exploitation, A/B testing on thumbnails and metadata, and content-based features from computer vision/audio analysis to mitigate cold starts. Real-time inference on Kubernetes clusters processes hundreds of millions of predictions per user session, personalized by viewing history, ratings, pauses, and even search queries. This evolved from the 2009 Prize winners to transformer-based architectures by 2023.

Results

  • 80% of viewer hours from recommendations
  • $1B+ annual savings in subscriber retention
  • 75% reduction in content browsing time
  • 10% RMSE improvement from Netflix Prize CF techniques
  • 93% of views from personalized rows
  • Handles billions of daily interactions for 270M subscribers
Read case study →

Rapid Flow Technologies (Surtrac)

Transportation

Pittsburgh's East Liberty neighborhood faced severe urban traffic congestion, with fixed-time traffic signals causing long waits and inefficient flow. Traditional systems operated on preset schedules, ignoring real-time variations like peak hours or accidents, leading to 25-40% excess travel time and higher emissions. The city's irregular grid and unpredictable traffic patterns amplified issues, frustrating drivers and hindering economic activity. City officials sought a scalable solution beyond costly infrastructure overhauls. Sensors existed but lacked intelligent processing; data silos prevented coordination across intersections, resulting in wave-like backups. Emissions rose with idling vehicles, conflicting with sustainability goals.

Solution

Rapid Flow Technologies developed Surtrac, a decentralized AI system using machine learning for real-time traffic prediction and signal optimization. Connected sensors detect vehicles, feeding data into ML models that forecast flows seconds ahead, adjusting greens dynamically. Unlike centralized systems, Surtrac's peer-to-peer coordination lets intersections 'talk,' prioritizing platoons for smoother progression. This optimization engine balances equity and efficiency, adapting every cycle. Spun from Carnegie Mellon, it integrated seamlessly with existing hardware.

Results

  • 25% reduction in travel times
  • 40% decrease in wait/idle times
  • 21% cut in emissions
  • 16% improvement in progression
  • 50% more vehicles per hour in some corridors
Read case study →

BMW (Spartanburg Plant)

Automotive Manufacturing

The BMW Spartanburg Plant, the company's largest globally producing X-series SUVs, faced intense pressure to optimize assembly processes amid rising demand for SUVs and supply chain disruptions. Traditional manufacturing relied heavily on human workers for repetitive tasks like part transport and insertion, leading to worker fatigue, error rates up to 5-10% in precision tasks, and inefficient resource allocation. With over 11,500 employees handling high-volume production, scheduling shifts and matching workers to tasks manually caused delays and cycle time variability of 15-20%, hindering output scalability. Compounding issues included adapting to Industry 4.0 standards, where rigid robotic arms struggled with flexible tasks in dynamic environments. Labor shortages post-pandemic exacerbated this, with turnover rates climbing, and the need to redeploy skilled workers to value-added roles while minimizing downtime. Machine vision limitations in older systems failed to detect subtle defects, resulting in quality escapes and rework costs estimated at millions annually.

Solution

BMW partnered with Figure AI to deploy Figure 02 humanoid robots integrated with machine vision for real-time object detection and ML scheduling algorithms for dynamic task allocation. These robots use advanced AI to perceive environments via cameras and sensors, enabling autonomous navigation and manipulation in human-robot collaborative settings. ML models predict production bottlenecks, optimize robot-worker scheduling, and self-monitor performance, reducing human oversight. Implementation involved pilot testing in 2024, where robots handled repetitive tasks like part picking and insertion, coordinated via a central AI orchestration platform. This allowed seamless integration into existing lines, with digital twins simulating scenarios for safe rollout. Challenges like initial collision risks were overcome through reinforcement learning fine-tuning, achieving human-like dexterity.

Results

  • 400% increase in robot speed post-trials
  • 7x higher task success rate
  • Reduced cycle times by 20-30%
  • Redeployed 10-15% of workers to skilled tasks
  • $1M+ annual cost savings from efficiency gains
  • Error rates dropped below 1%
Read case study →

Stanford Health Care

Healthcare

Stanford Health Care, a leading academic medical center, faced escalating clinician burnout from overwhelming administrative tasks, including drafting patient correspondence and managing inboxes overloaded with messages. With vast EHR data volumes, extracting insights for precision medicine and real-time patient monitoring was manual and time-intensive, delaying care and increasing error risks. Traditional workflows struggled with predictive analytics for events like sepsis or falls, and computer vision for imaging analysis, amid growing patient volumes. Clinicians spent excessive time on routine communications, such as lab result notifications, hindering focus on complex diagnostics. The need for scalable, unbiased AI algorithms was critical to leverage extensive datasets for better outcomes.

Solution

Partnering with Microsoft, Stanford became one of the first healthcare systems to pilot Azure OpenAI Service within Epic EHR, enabling generative AI for drafting patient messages and natural language queries on clinical data. This integration used GPT-4 to automate correspondence, reducing manual effort. Complementing this, the Healthcare AI Applied Research Team deployed machine learning for predictive analytics (e.g., sepsis, falls prediction) and explored computer vision in imaging projects. Tools like ChatEHR allow conversational access to patient records, accelerating chart reviews. Phased pilots addressed data privacy and bias, ensuring explainable AI for clinicians.

Results

  • 50% reduction in time for drafting patient correspondence
  • 30% decrease in clinician inbox burden from AI message routing
  • 91% accuracy in predictive models for inpatient adverse events
  • 20% faster lab result communication to patients
  • Autoimmune conditions detected up to 1 year before clinical diagnosis
Read case study →

Pfizer

Healthcare

The COVID-19 pandemic created an unprecedented urgent need for new antiviral treatments, as traditional drug discovery timelines span 10-15 years with success rates below 10%. Pfizer faced immense pressure to identify potent, oral inhibitors targeting the SARS-CoV-2 3CL protease (Mpro), a key viral enzyme, while ensuring safety and efficacy in humans. Structure-based drug design (SBDD) required analyzing complex protein structures and generating millions of potential molecules, but conventional computational methods were too slow, consuming vast resources and time. Challenges included limited structural data early in the pandemic, high failure risks in hit identification, and the need to run processes in parallel amid global uncertainty. Pfizer's teams had to overcome data scarcity, integrate disparate datasets, and scale simulations without compromising accuracy, all while traditional wet-lab validation lagged behind.

Solution

Pfizer deployed AI-driven pipelines leveraging machine learning (ML) for SBDD, using models to predict protein-ligand interactions and generate novel molecules via generative AI. Tools analyzed cryo-EM and X-ray structures of the SARS-CoV-2 protease, enabling virtual screening of billions of compounds and de novo design optimized for binding affinity, pharmacokinetics, and synthesizability. By integrating supercomputing with ML algorithms, Pfizer streamlined hit-to-lead optimization, running parallel simulations that identified PF-07321332 (nirmatrelvir) as the lead candidate. This lightspeed approach combined ML with human expertise, reducing iterative cycles and accelerating from target validation to preclinical nomination.

Results

  • Drug candidate nomination: 4 months vs. typical 2-5 years
  • Computational chemistry processes reduced: 80-90%
  • Drug discovery timeline cut: From years to 30 days for key phases
  • Clinical trial success rate boost: Up to 12% (vs. industry ~5-10%)
  • Virtual screening scale: Billions of compounds screened rapidly
  • Paxlovid efficacy: 89% reduction in hospitalization/death
Read case study →

Best Practices

Successful implementations follow proven patterns. Have a look at our tactical advice to get started.

Map and Prepare Your Engagement Data for Gemini

Effective send time optimization with Gemini starts with clean, well-structured engagement data. Begin by mapping where events live today: email sends, opens, clicks, push notifications, app sessions, and web visits. Standardize timestamps to a single time zone (e.g. UTC) and make sure you capture the user identifier, channel, and event type for each log entry.

Create a simplified engagement table that Gemini can work with, for example:

user_id | channel   | event_type | event_timestamp       | campaign_id
123     | email     | open       | 2025-10-11 07:31:02   | spring_newsletter
123     | web       | pageview   | 2025-10-11 07:35:10   | /product/123
...

Use Gemini to help you generate SQL that aggregates this data into per-user, per-hour engagement features (e.g. open counts by hour-of-day, day-of-week). This becomes the input for your send-time modeling.
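As a reference point for reviewing Gemini's output, the aggregation it should produce can be sketched in a few lines of pandas. This is a minimal sketch with made-up sample data; the column names mirror the example table above.

```python
import pandas as pd

# Hypothetical sample of the engagement table described above.
events = pd.DataFrame({
    "user_id": [123, 123, 123, 456, 456],
    "channel": ["email", "email", "web", "email", "email"],
    "event_type": ["open", "open", "pageview", "open", "click"],
    "event_timestamp": pd.to_datetime([
        "2025-10-11 07:31:02", "2025-10-12 07:45:10",
        "2025-10-11 07:35:10", "2025-10-11 21:02:00",
        "2025-10-11 21:05:30",
    ]),
})

# Derive hour-of-day, then count email opens per user and hour.
events["hour"] = events["event_timestamp"].dt.hour
hourly = (
    events[events["event_type"] == "open"]
    .groupby(["user_id", "hour"])
    .size()
    .rename("open_count")
    .reset_index()
)
print(hourly)
```

The same logic translates directly into the SQL you ask Gemini to generate: a `GROUP BY user_id, EXTRACT(HOUR FROM event_timestamp)` over open events.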

Use Gemini to Prototype a Simple Per-User Send-Time Score

Instead of jumping directly into a complex model, start with a simple heuristic-based score that Gemini can help you design and validate. For each user, calculate their “preferred hour” based on historical engagement patterns.

You can use Gemini in a notebook or Workspace environment to draft and refine the logic:

Prompt to Gemini (for data teams):
"""
You are a data assistant. I have a table `user_email_events` with:
- user_id
- event_type (send, open, click)
- event_timestamp (UTC)

Write SQL that, for each user_id, calculates:
- total opens by hour of day (0-23)
- the hour with the highest open count (preferred_hour)
- a confidence score based on how dominant that hour is vs. others

Return a view `user_send_time_preferences` with:
user_id, preferred_hour, confidence_score
"""

Review the generated SQL with your data team, run it on a subset, and inspect the output. This gives you a baseline model that can already be pushed into your ESP/CDP as a custom field.
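If your team prefers to validate the heuristic in a notebook before running it as SQL, the same logic is a few lines of pandas. This is a sketch with invented counts; "confidence" is assumed here to mean the dominant hour's share of all opens, matching the prompt above.

```python
import pandas as pd

# Hypothetical per-user open counts by hour (the SQL aggregation's output).
opens = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "hour":    [7, 8, 20, 9, 10],
    "open_count": [12, 3, 1, 5, 5],
})

def user_preferences(opens: pd.DataFrame) -> pd.DataFrame:
    """Preferred hour per user, plus a dominance-based confidence score."""
    totals = opens.groupby("user_id")["open_count"].sum()
    # Row with the highest open count per user (first row wins ties).
    top = opens.loc[opens.groupby("user_id")["open_count"].idxmax()]
    prefs = top[["user_id", "hour"]].rename(columns={"hour": "preferred_hour"})
    # Confidence = share of the user's opens falling into the top hour.
    prefs["confidence_score"] = (
        top.set_index("user_id")["open_count"] / totals
    ).values
    return prefs.reset_index(drop=True)

prefs = user_preferences(opens)
print(prefs)
```

A user with 12 of 16 opens at 07:00 gets `preferred_hour=7` with confidence 0.75; a user whose opens are spread evenly gets a low score, which later drives the fallback logic.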

Generate and Operationalize Feature Engineering with Gemini

To move beyond naive heuristics, you need richer features: recency, frequency, weekday/weekend patterns, mobile vs desktop behavior, and cross-channel engagement. Gemini can speed up feature ideation and coding by translating natural language ideas into SQL or Python.

Prompt to Gemini:
"""
I want to engineer features for a send-time optimization model.
Given a table of email events (send, open, click) with timestamps, propose
10 useful features at user_id & channel level and write Python (pandas)
code to calculate them.

Consider:
- day of week patterns
- hour of day patterns
- recency of last open
- engagement intensity segments

Return only code and short comments.
"""

Use the generated code as a starting point in your pipeline. Store the resulting features in a feature table that both Gemini and your production systems can access, so you don’t duplicate work later.
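To give a flavor of what the generated feature code typically looks like, here is a minimal sketch computing three of the features mentioned above (open recency, weekend share, total opens) on invented sample data; the reference date and column names are assumptions.

```python
import pandas as pd

now = pd.Timestamp("2025-10-13 12:00:00")  # assumed "as of" date

events = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_type": ["open", "open", "open"],
    "event_timestamp": pd.to_datetime([
        "2025-10-11 07:31:02",   # Saturday
        "2025-10-06 08:10:00",   # Monday
        "2025-10-12 21:00:00",   # Sunday
    ]),
})

opens = events[events["event_type"] == "open"].copy()
opens["is_weekend"] = opens["event_timestamp"].dt.dayofweek >= 5

features = opens.groupby("user_id").agg(
    total_opens=("event_timestamp", "size"),
    last_open=("event_timestamp", "max"),
    weekend_share=("is_weekend", "mean"),
).reset_index()
features["days_since_last_open"] = (now - features["last_open"]).dt.days
print(features)
```

Stored in a shared feature table, columns like `weekend_share` directly flag the "weekend-only openers" pattern mentioned earlier.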

Connect Gemini Predictions to Your ESP/CDP for Orchestrated Sends

Once you have per-user send-time scores or model predictions, the next step is connecting them to your ESP/CDP. Create or reuse custom fields such as best_send_hour, best_send_dow, and send_time_confidence in your customer profiles.

Use Gemini to help design the orchestration logic, then translate it into ESP/CDP workflows. For example:

Prompt to Gemini (for marketing ops):
"""
I have the following fields in my CDP:
- best_send_hour (0-23, in user's local time)
- best_send_dow (1-7)
- send_time_confidence (0-1)

We use ESP X, which supports scheduled sends and segments.
Draft a step-by-step configuration plan to:
1) Create segments based on confidence score
2) Schedule batch sends respecting best_send_hour and best_send_dow
3) Fallback to a global send time when confidence < 0.3

Explain each step clearly so a marketing ops manager can implement it.
"""

Implement the suggested steps in your tools, test with a small campaign, and validate that the ESP/CDP actually sends at the predicted times.
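The fallback rule from the prompt can be expressed as a tiny decision function, useful as a reference when configuring or reviewing the ESP workflow. The threshold and global hour below are the example values from the prompt, not fixed recommendations.

```python
GLOBAL_SEND_HOUR = 10    # assumed global fallback send hour
CONFIDENCE_FLOOR = 0.3   # threshold from the prompt above

def resolve_send_hour(profile: dict) -> int:
    """Return the hour to schedule for a CDP profile.

    Uses the model's best_send_hour when confidence is high enough,
    otherwise falls back to the global send hour.
    """
    if profile.get("send_time_confidence", 0.0) >= CONFIDENCE_FLOOR:
        return profile["best_send_hour"]
    return GLOBAL_SEND_HOUR

print(resolve_send_hour({"best_send_hour": 7, "send_time_confidence": 0.8}))
print(resolve_send_hour({"best_send_hour": 22, "send_time_confidence": 0.1}))
```

Keeping this rule explicit (rather than buried in segment definitions) makes it easy to audit and to tighten the threshold as confidence in the model grows.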

Set Up Continuous A/B Tests and Monitoring with Gemini Assistance

To prove value and keep improving, run controlled experiments. Randomly split your audience: one group uses AI-optimized send times, the other keeps the current schedule. Track open rate, click rate, conversion rate, unsubscribe rate, and delivery metrics.

Gemini can help you design the experiment and analyze results:

Prompt to Gemini:
"""
We ran an A/B test on email send times:
- Group A: global send at 10:00 local time
- Group B: AI-optimized send times using `best_send_hour`

Here are the metrics for each group (in CSV):
[PASTE METRICS]

1) Check if improvements are statistically significant
2) Summarize the results in non-technical language for executives
3) Recommend next steps for scaling or iterating the model
"""

Use the analysis to refine your targeting rules, adjust model thresholds, and build a performance report that justifies scaling the approach across more journeys and channels.
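For the significance check in step 1, a standard two-proportion z-test is one common choice your data team can use to sanity-check Gemini's analysis. The sample numbers below are invented; only aggregate sends and opens per group are needed.

```python
from math import erf, sqrt

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Two-sided z-test for a difference in open rates between two groups."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical result: 21.0% vs 23.5% open rate on 10k sends each.
z, p = two_proportion_z(opens_a=2100, sends_a=10000, opens_b=2350, sends_b=10000)
print(f"z={z:.2f}, p={p:.4f}")
```

With |z| above 1.96 (p < 0.05), the uplift is unlikely to be noise; remember to also check unsubscribe rate before declaring the variant a winner.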

Build Marketing-Friendly Documentation and Playbooks with Gemini

Adoption often fails because marketers don’t understand how send-time decisions are made. Use Gemini to turn technical documentation into clear, role-specific playbooks: how send-time fields work, when they are updated, and how to use them in campaigns.

Prompt to Gemini:
"""
Here is a technical description of our send-time optimization pipeline:
[PASTE TECH DOC]

Rewrite this into a 2-page internal guide for campaign managers:
- Plain language, no math
- Explain what best_send_hour and best_send_dow mean
- How and when to use them in email and push campaigns
- Common pitfalls and FAQ
"""

Store these guides in your internal wiki and link them directly from your ESP/CDP so campaign owners can self-serve instead of opening tickets.

Implemented step by step, these best practices typically deliver realistic gains such as +5–15% email open rates, +5–10% click rates, and modest but meaningful improvements in downstream conversions. The exact uplift depends on your baseline and data quality, but with a structured Gemini-powered approach, you can expect visible improvements within a few campaign cycles rather than quarters.

Need implementation expertise now?

Let's talk about your ideas!

Frequently Asked Questions

How does Gemini improve send time optimization compared to traditional rules?

Gemini lets you move from coarse, segment-level rules to per-user send time predictions. Instead of assuming “everyone in CET should get emails at 10:00”, Gemini can analyze historical engagement logs (opens, clicks, app sessions) to infer each user’s preferred hours and days for interaction.

Practically, this means your ESP/CDP receives fields like best_send_hour and best_send_dow for each user, which then drive scheduling logic. Over time, the model can learn patterns that simple rules miss, such as users who only engage on weekends or during evening hours, leading to higher open and click rates.
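To make the scheduling logic concrete, here is a minimal sketch of how an orchestration layer might turn `best_send_hour` and `best_send_dow` (assumed 1=Monday…7=Sunday, per the field definitions used earlier) into the next concrete send slot. The function and its semantics are illustrative, not a specific ESP's API.

```python
from datetime import datetime, timedelta

def next_send_slot(now: datetime, best_send_dow: int, best_send_hour: int) -> datetime:
    """Next datetime matching the user's preferred weekday and hour.

    Assumes best_send_dow uses 1=Monday..7=Sunday and that `now` is
    already in the user's local time.
    """
    target_dow = best_send_dow - 1            # Python: Monday == 0
    days_ahead = (target_dow - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=best_send_hour, minute=0, second=0, microsecond=0)
    if candidate <= now:                      # slot already passed today
        candidate += timedelta(days=7)
    return candidate

# Monday noon, user prefers Saturday 07:00 -> the coming Saturday morning.
print(next_send_slot(datetime(2025, 10, 13, 12, 0), best_send_dow=6, best_send_hour=7))
```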

What skills and team setup do we need to use Gemini for send time optimization?

You’ll get the most value from Gemini if you can combine marketing operations, data engineering/analytics, and basic cloud skills. Someone needs to access engagement logs, prepare them for modeling, and set up a small pipeline that feeds predictions into your ESP/CDP.

The good news is that Gemini reduces the heavy lifting: it can generate SQL and Python for feature engineering, help design experiments, and translate technical logic into plain language for marketers. Many teams start with 1–2 data people (analyst/engineer) and a marketing ops specialist, then grow from there as the impact becomes clear.

How quickly can we see results?

For most organizations with existing engagement data, you can see first results within a few weeks. A typical phased approach looks like this:

  • Week 1–2: Data extraction, cleaning, and basic heuristic model (preferred hour/day).
  • Week 3–4: Integration of predictions into ESP/CDP and first A/B test on a single campaign type.
  • Week 5–8: Model refinement, broader rollout across more segments and channels, and performance reporting.

Because Gemini accelerates data exploration and code generation, the early stages (data prep and baseline modeling), which cause most of the delays in traditional projects, usually move much faster.

What ROI can we expect from send time optimization?

ROI depends on your baseline performance, list size, and campaign volume, but send time optimization is usually a high-leverage improvement. Many teams see 5–15% lifts in open rates and 5–10% in click rates when moving from global sends to personalized timing, especially if their current setup is very basic.

Because the underlying content and audience stay the same, any uplift is essentially “free leverage” on your existing budget. The main costs are initial setup and some ongoing maintenance. Gemini helps reduce both by automating analysis, code generation, and documentation – which shortens time-to-value and lowers the internal effort required to keep the models useful.

How does Reruption support send time optimization projects?

Reruption works with a Co-Preneur approach: we embed with your team like co-founders, not distant consultants. For send time optimization, that usually starts with our AI PoC offering (9,900€), where we validate – in a working prototype – that Gemini can use your real engagement data to generate actionable send-time predictions.

From there, we help you design the data pipeline, integrate predictions into your ESP/CDP, and set up the experiments and dashboards that prove business impact. Because we focus on AI engineering and enablement, we don’t just leave you with slides – we build the actual workflows, document them, and upskill your team so you own the solution. If you want to move from “we should personalize send times” to a live Gemini-powered system in weeks, not months, this is exactly the kind of project we like to co-own.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media