
What it really comes down to

Medical device manufacturers struggle with fragmented data sources, onerous approval processes and the constant demand to guarantee patient safety. Documentation, traceability and quality management create significant manual effort, while clinics expect reliable, integrated assistance systems.

Why we have the industry expertise

Our work combines deep engineering competence with a pragmatic understanding of regulatory pathways and quality requirements. We build production‑ready systems — from LLM applications and internal copilots to self‑hosted infrastructure — and align technical decisions with standards such as ISO 13485, IEC 62304, IEC 62443 and the MDR.

Our co-preneur way of working means we do not act as external consultants who hand over recommendations; we take entrepreneurial responsibility and deliver runnable software, tests and operational concepts that can be embedded into existing QM processes. Speed, technical depth and ownership ensure that prototypes quickly become validatable products.

Our references in this industry

We do not list direct MedTech projects here; nonetheless, our projects for customers in technology and manufacturing demonstrate that our approach transfers: at STIHL and Eberspächer we implemented complex production and quality solutions that required strict process reliability and traceability, capabilities that carry over directly to medical devices.

In technology projects with BOSCH, TDK and AMERIA we developed secure embedded solutions, go‑to‑market engineering and touchless control technologies. These experiences with embedded systems, real‑time data and regulatory sensitivity form the basis for robust implementation of clinically relevant AI functions.

In addition, projects with Festo Didactic and FMG demonstrate our competence in training platforms and documentation-driven analysis, both essential building blocks for documentation copilots and clinical training assistants in MedTech environments.

About Reruption

Reruption was founded on the idea of not only advising companies but restructuring them with an entrepreneurial mindset: we work as co-founders on projects, deliver fast prototypes and take responsibility for the outcomes. Our team combines machine learning engineers, DevOps specialists, security and compliance experts, and senior product managers with experience in regulated environments.

Especially in regions like Baden-Württemberg, a hub home to Aesculap, Karl Storz, Ziehm and Richard Wolf, we understand the local industry dynamics: close proximity to clinical partners, strong manufacturing and supplier networks, and the need to make solutions medically and regulatorily robust quickly.

Ready to embed AI into your clinical workflows?

Contact us for an initial analysis of your use cases and a quick feasibility assessment.

What our Clients say

Hans Dohrmann

CEO at internetstores GmbH, 2018-2021

This is the most systematic and transparent go-to-market strategy I have ever seen regarding corporate startups.

Kai Blisch

Director Venture Development at STIHL, 2018-2022

Reruption's strong focus on users, their needs, and the critical questioning of requirements is extremely valuable. ... and last but not least, the collaboration is a great pleasure.

Marco Pfeiffer

Head of Business Center Digital & Smart Products at Festool, 2022-

Reruption systematically evaluated a new business model with us: we were particularly impressed by the ability to present even complex issues in a comprehensible way.

AI Transformation in Medical & Healthcare Devices

Integrating AI into medical devices is not purely a technology problem; it is a multidimensional challenge that brings together clinical validation, regulatory traceability, data security and robust software development. Only when all these areas are addressed simultaneously do solutions emerge that can actually be used in clinics.

Industry Context

MedTech products exist in an ecosystem of electronics, software, clinical practice and regulatory bodies. The MDR, ISO 13485 and IEC 62304 require manufacturers to strictly document software development, risk management and post-market surveillance across the product lifecycle. At the same time, clinics demand interoperable interfaces such as HL7/FHIR, DICOM and PACS to exchange information reliably.

In Baden-Württemberg, strong manufacturing expertise meets close clinical partnerships. This creates opportunities for rapid validation cycles, but also raises expectations for transferability, scalability and data security, from on-premise solutions to strictly isolated private clouds.

Data types range from structured documentation, measurement data and sensor logs to image data and free text in clinical reports. Each data class imposes its own requirements on ETL pipelines, anonymization, labeling and quality assurance, and directly influences model choice, performance metrics and validation strategy.

Key Use Cases

Documentation copilots are an immediately effective use case: they reduce manual effort for product dossiers, change logs and clinical documentation by populating templates, inserting compliance hints and generating audit trails. Such systems increase efficiency in QM and reduce formal errors without sacrificing legally required traceability.
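As a minimal sketch of such a copilot step, assuming an illustrative template format, field names and storage backend (none of these are a fixed API), the system might populate a change-log template and write a hash-stamped audit entry in one pass:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative template: real systems would use the QMS's own document formats.
TEMPLATE = "Change log {change_id}: {summary}\nAffected SOPs: {sops}\n"

def fill_template(change_id: str, summary: str, sops: list[str], author: str) -> dict:
    # Populate the template, then record a hash-stamped audit entry so the
    # generated artifact stays traceable to its inputs and author.
    document = TEMPLATE.format(change_id=change_id, summary=summary, sops=", ".join(sops))
    audit_entry = {
        "artifact_hash": hashlib.sha256(document.encode()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "source_fields": {"change_id": change_id, "sops": sops},
    }
    return {"document": document, "audit": audit_entry}

result = fill_template("CR-0042", "Updated alarm threshold", ["SOP-7.3"], "j.doe")
print(json.dumps(result["audit"], indent=2))  # entry for the QM audit trail
```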

Clinical workflow assistants support nursing staff and physicians in multi‑step processes — for example perioperative checklists, device operation steps or alarm evaluations. Here latency, deterministic behavior and clear boundaries are essential: the assistant must never autonomously make clinical decisions, but must act as a supportive, explainable system.

Regulatory knowledge systems and quality management tools consolidate normative requirements, change tracking and audit preparation into a searchable, versioned knowledge base. Combined with Enterprise Knowledge Systems (e.g. Postgres + pgvector), relevant regulations, internal SOPs and test documents can be made contextually available.
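A minimal retrieval sketch against such a knowledge base, assuming psycopg 3, a precomputed query embedding and an illustrative table schema:

```python
import psycopg  # psycopg 3; assumes Postgres with the pgvector extension installed

# Assumed schema (created once during setup):
#   CREATE EXTENSION IF NOT EXISTS vector;
#   CREATE TABLE knowledge_base (
#       id bigserial PRIMARY KEY,
#       source text NOT NULL,      -- e.g. 'MDR Annex I' or 'SOP-7.3 rev C'
#       version text NOT NULL,
#       content text NOT NULL,
#       embedding vector(384)      -- dimension depends on the embedding model
#   );

def top_k(conn: psycopg.Connection, query_embedding: list[float], k: int = 5):
    # '<=>' is pgvector's cosine-distance operator; smaller means more similar.
    literal = "[" + ",".join(map(str, query_embedding)) + "]"
    with conn.cursor() as cur:
        cur.execute(
            "SELECT source, version, content FROM knowledge_base "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (literal, k),
        )
        return cur.fetchall()
```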

On the infrastructure side, self‑hosted AI solutions are often necessary to ensure data protection and isolation. We implement solutions with Hetzner, Coolify, MinIO and Traefik as well as model‑agnostic private chatbots that operate without external RAG services, enabling maximum data control.

Implementation Approach

Our approach follows the co‑preneur principle: we start with a clear use‑case scope, validate feasibility in a technical PoC and iteratively build toward a production‑ready solution. Early prototypes demonstrate functionality; later iterations specifically address documentation and validation requirements, including test plans and traceability artifacts.

Technically we combine modularity and reproducibility: isolated ETL pipelines for data quality, specialized ML cycles for model training, MLOps pipelines for deployment and monitoring, and versioning of all artifacts. For knowledge workloads we use Enterprise Knowledge Systems (Postgres + pgvector) and for conversational solutions model‑agnostic architectures that allow switching between providers.
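A sketch of the model-agnostic idea, assuming a minimal backend interface; the concrete backend classes are placeholders, not a fixed API:

```python
from typing import Protocol

class ChatBackend(Protocol):
    def complete(self, system: str, user: str) -> str: ...

class LocalModelBackend:
    """Placeholder for a self-hosted model behind an internal endpoint."""
    def complete(self, system: str, user: str) -> str:
        raise NotImplementedError("call the internal inference endpoint here")

class HostedApiBackend:
    """Placeholder for an external provider, used only for non-sensitive data."""
    def complete(self, system: str, user: str) -> str:
        raise NotImplementedError("call the provider's API here")

def answer(backend: ChatBackend, question: str) -> str:
    # Application code depends only on the interface, so the backend can be
    # swapped per deployment (on-premise vs. cloud) without code changes.
    return backend.complete(system="You are a documentation assistant.", user=question)
```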

Regulatory integration begins early: risk‑class analysis, requirement traceability, clinical validation designs and test cases are created in parallel with development. We deliver technical specifications, verification and validation evidence as well as operations documentation that can feed into an ISO 13485‑compliant QMS.

Security, Hosting and Data Governance

Data protection and security are not afterthoughts; they are core requirements. We implement end-to-end encryption, strict role and permission concepts, audit logs and network segmentation. For sensitive patient data, many customers prefer self-hosted setups or private colocation, which is why we master deployment patterns with Hetzner, MinIO and Traefik.

Model governance includes versioning, explainability protocols and performance dashboards. Continuous monitoring (drift detection, concept drift alerts) is mandatory to meet post‑market surveillance obligations and to trigger re‑training and recall processes if necessary.
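As one possible drift check, a two-sample Kolmogorov-Smirnov test can compare a production feature distribution against the validation baseline; the threshold and synthetic data below are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(baseline: np.ndarray, production: np.ndarray, alpha: float = 0.01) -> bool:
    # Two-sample KS test: a small p-value means the production distribution
    # no longer matches the validation baseline -> raise a drift alert.
    statistic, p_value = ks_2samp(baseline, production)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)    # feature distribution seen in validation
production = rng.normal(0.3, 1.0, 5000)  # shifted distribution in the field
print(drifted(baseline, production))     # True -> trigger review / re-training
```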

ROI, Timeline & Operationalization

A typical path starts with a focused PoC (for us a precise technical feasibility check) within a few weeks, followed by an extended pilot phase (2–6 months) and a production release including QMS integration within 6–12 months. Exact timelines depend on data quality, clinical validation and approval requirements.

Return on investment arises from shortened documentation times, reduced audit effort, fewer user errors and faster processes in clinics. Concrete KPIs include time saved per documentation case, reduction of non-conformities and increased throughput in test benches or OR workflows.

Team & Change Management

Successful projects require a cross‑functional team: ML engineers, software architects, DevOps, regulatory affairs specialists, clinical SMEs as well as QM and security owners. Clinical experts should be involved from the PoC phase to validate datasets and define acceptance criteria.

Change management includes training, SOP adjustments and clear escalation paths. A new assistance system must be embedded into processes, training and service‑level agreements so that users trust and adopt it long term. We support training, rollouts and the creation of necessary operations documentation.

In summary: AI engineering for medical devices is a balance of technical excellence, regulatory rigor and pragmatic product development. Only those who integrate all these layers deliver solutions that clinics accept and regulators approve.

Ready for a proof of concept?

Book a technical PoC to receive a working demonstration of your idea in a few weeks.

Frequently Asked Questions

What does regulatory compliance require when integrating AI into a medical device?

Regulatory compliance must be planned from the start, not bolted on later. This begins with the product's risk classification: is the AI functionality a medical device, an accessory, or part of a clinical workflow? The classification determines the scope of verification and validation obligations.

In practice we establish requirement-traceability matrices that link every software and model requirement to tests, verification evidence and SOPs. These artifacts are exactly what auditors and notified bodies want to see: traceable links between specification, implementation and test results.
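A minimal sketch of such a matrix in code, with hypothetical requirement and test IDs, including the completeness check an auditor would expect:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    tests: list[str] = field(default_factory=list)     # linked test case IDs
    evidence: list[str] = field(default_factory=list)  # verification reports

matrix = [
    Requirement("SW-REQ-12", "Copilot output must carry an audit-trail entry",
                tests=["TC-034"], evidence=["VR-2024-07.pdf"]),
    Requirement("ML-REQ-03", "Sensitivity >= 0.95 on the validation set"),
]

# The completeness check an auditor would expect: no requirement without a test.
uncovered = [r.req_id for r in matrix if not r.tests]
print("Requirements without linked tests:", uncovered)  # ['ML-REQ-03']
```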

Another focus is validation studies. Depending on the risk class, you may need to run retrospective or prospective studies, report sensitivity and specificity and, if necessary, provide comparative data against standard procedures. Clinical endpoints, study design and data collection should be agreed early with clinical partners.

Finally, quality management is central: documentation, change control, release management and post-market surveillance must be set up in compliance with ISO 13485. We deliver not only software but also the QM artifacts required for an MDR conformity assessment.

Which hosting model is right for sensitive clinical data?

The choice of hosting depends on data sensitivity, regulatory requirements and customer IT policies. For extremely sensitive data, or when the customer mandates the infrastructure, on-premise installations in clinical environments are usually the safest option: the operator retains full data control.

Private cloud or colocation solutions can be appropriate when they run in a certified data center and provide adequate network segmentation and encryption. Providers like Hetzner can be suitable in certain setups if isolation, backup and compliance requirements are met.

For applications with lower sensitivity or where pseudonymization is possible, hybrid models are practical: sensitive data stays on‑premise, while model training and evaluation occur in segregated, controlled environments. We implement secure ETL pipelines and MinIO‑based object stores for encrypted storage.
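A sketch of an encrypted upload with the minio Python client, assuming SSE-C with a customer-held key; endpoint, credentials, bucket and the key itself are placeholders:

```python
import io
from minio import Minio
from minio.sse import SseCustomerKey

# Endpoint and credentials are placeholders; the 32-byte customer key would
# come from a key-management system, never from source code.
client = Minio("minio.internal.example:9000",
               access_key="PLACEHOLDER", secret_key="PLACEHOLDER", secure=True)

key = SseCustomerKey(b"0" * 32)  # SSE-C: server-side encryption, customer-held key
payload = b'{"case_id": "anon-123", "values": [1, 2, 3]}'  # pseudonymized record

client.put_object("etl-staging", "cases/anon-123.json",
                  io.BytesIO(payload), length=len(payload), sse=key)
```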

It is important that hosting is considered from the outset in architectural decisions: network, IAM, key management, audit logging and disaster recovery are not add‑ons but core components that influence approval documentation.

How does clinical validation of an AI function proceed?

Clinical validation includes several stages: technical validation, retrospective clinical evaluation and staged prospective tests. Initially we check model performance on well-annotated, representative datasets and produce metrics such as sensitivity, specificity, AUC and error rates across subgroups.
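A minimal sketch of such a subgroup evaluation with scikit-learn, using synthetic scores and labels in place of real study data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic stand-ins for model scores, ground-truth labels and the study site.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
site = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for s in np.unique(site):
    mask = site == s
    y_pred = (y_score[mask] >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"site {s}: sens={sensitivity:.2f} spec={specificity:.2f} auc={auc:.2f}")
```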

For retrospective studies we collect historical cases, create ground truth labels with clinical experts and analyze spatial and temporal biases. The goal is to uncover systematic errors and establish robust performance intervals.

Prospective validations take place in controlled pilot scenarios with defined endpoints and SOPs. Here the system is integrated into the real clinical environment and we document how the assistant influences decisions — without unduly taking over the clinical decision process.

Finally, monitoring and post‑market surveillance are essential: drift detection, logging of error rates and a re‑training plan are part of approval practice to ensure long‑term safety and performance.

What data does a documentation copilot need?

For a documentation copilot, structured product data, change documents, test reports, SOPs and previous audit reports are particularly relevant. Additionally, unstructured data such as emails, free-text notes and logs are valuable because they provide context and decision rationales that the model can use in templates and suggestions.

Image data or measurement data from devices can be complementary if the copilot, for example, should link test protocols with sensor readings. For each data element a classification by sensitivity and an anonymization strategy must be defined.

Data quality is crucial: consistent schemas, clean timestamps, handling of missing values and uniform units are prerequisites for reliable automation. We build ETL pipelines that clean, normalize and version data so that every documentation artifact is fully traceable.
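As a sketch of such a pipeline step, assuming illustrative column names, a simple mbar-to-hPa unit normalization and a content-hash versioning scheme:

```python
import hashlib
import pandas as pd

EXPECTED = {"device_id": "string",
            "measured_at": "datetime64[ns, UTC]",
            "pressure_hpa": "float64"}

def clean(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    df["measured_at"] = pd.to_datetime(df["measured_at"], utc=True, errors="coerce")
    df = df.dropna(subset=["device_id", "measured_at"])          # no silent gaps
    df["pressure_hpa"] = df["pressure_mbar"].astype("float64")   # 1 mbar == 1 hPa
    return df[list(EXPECTED)].astype(EXPECTED)

raw = pd.DataFrame({"device_id": ["D1", "D2"],
                    "measured_at": ["2024-05-01T08:00:00Z", "not-a-date"],
                    "pressure_mbar": ["1013.2", "990.0"]})
df = clean(raw)
version = hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()[:12]
print(df, "\ndataset version:", version)  # hash ties every artifact to exact data
```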

Finally, legal and ethical aspects must be considered: who may view the data, how long it is stored and how it is presented in audits — all of this feeds into the copilot's design.

When is self-hosting preferable to the cloud?

Self-hosting is preferred when maximum data control is required or when regulatory constraints or corporate policies rule out cloud services. Many MedTech companies cannot risk processing patient data or sensitive IP externally and require full isolation and auditability.

Technical reasons also support it: specific latency requirements, deterministic behavior in clinical real‑time processes or the need to operate proprietary models on‑premise make self‑hosting attractive. We rely on orchestration with Coolify, secure object stores like MinIO and ingress management with Traefik for this.

Cloud providers, however, offer scalability and managed services that can accelerate development. In many cases we implement hybrid architectures: training jobs or non-sensitive analyses run in the cloud, while inference and data storage stay on-premise.

The decision should be based on a risk analysis, cost considerations and operational requirements. We support customers with architecture workshops, cost models and implementation of both approaches.

How do you integrate AI securely with clinical systems?

Secure integration starts with standardized interfaces: HL7/FHIR for structured data exchange, DICOM/PACS for image data and secured REST/HTTPS APIs for services. We design integration layers that encapsulate translation rules, mapping logic and error handling so clinical systems do not have to speak directly to models or data pipelines.
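A minimal read through such an integration layer might look like the sketch below; the gateway URL is a placeholder, while the resource path and Accept header follow standard FHIR REST conventions:

```python
import requests

FHIR_BASE = "https://fhir.internal.example/r4"  # hypothetical internal gateway

def get_patient(patient_id: str, token: str) -> dict:
    # GET [base]/Patient/[id] with the standard FHIR JSON media type.
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json",
                 "Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    resource = response.json()
    # The integration layer maps FHIR fields into the shape the assistant
    # expects, so models never consume raw clinical resources directly.
    return {"id": resource.get("id"), "birth_date": resource.get("birthDate")}
```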

Transaction safety and idempotence are important: every request and response must be uniquely logged with audit trails, timestamps and user identification. This makes all interactions traceable and auditable — a prerequisite for regulatory assessments.
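A sketch of idempotent, audit-logged request handling, with an in-memory store standing in for a durable, access-controlled append-only log:

```python
import json
import uuid
from datetime import datetime, timezone

_results: dict[str, dict] = {}  # idempotency key -> cached response
_audit_log: list[dict] = []     # append-only trail for auditors

def handle(idempotency_key: str, user_id: str, payload: dict) -> dict:
    if idempotency_key in _results:       # replayed request: same answer,
        return _results[idempotency_key]  # no duplicate side effects
    response = {"request_id": str(uuid.uuid4()), "status": "accepted"}
    _audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "idempotency_key": idempotency_key,
        "payload": payload,
        "response": response,
    })
    _results[idempotency_key] = response
    return response

first = handle("req-001", "nurse.meyer", {"action": "suggest_checklist"})
again = handle("req-001", "nurse.meyer", {"action": "suggest_checklist"})
print(first == again)                       # True: replay is safe
print(json.dumps(_audit_log[0], indent=2))  # one traceable entry, not two
```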

At the application level we ensure clear responsibilities: what is an assistance hint, what is a documented result? Such boundaries prevent assistants from overwriting clinical decisions unchecked. We implement review workflows and explicit approval mechanisms.

Finally, testing in a staging environment is necessary: integration tests with synthetic or pseudonymized data, load tests and failure simulations reveal architectural gaps and performance issues early, before a live integration.

What do costs and timelines look like?

Costs and timelines vary greatly by use case. A focused technical PoC to validate the feasibility of a use case (e.g. a prototype documentation copilot or an API integration) can be completed with us in a few weeks on a standardized plan, making it ideal for technical validation before larger investments.

For a production solution including integration, validation, QMS artifacts and security hardening you should typically expect 3–12 months of development time and a budget that depends on complexity, data effort and regulatory requirements. Small pipelines are less costly; complex clinical assistance systems require more resources.

We recommend a staged approach: PoC → Pilot → Production. The PoC reduces technical risk and provides reliable estimates for subsequent effort. We assist with business‑case calculations to make ROI drivers like time savings, reduction of non‑conformities and process throughput transparent.

On request we run a focused workshop to define scope, KPIs and budget framework in your specific context and produce a concrete roadmap.

Contact Us!


Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart
