Innovators at these companies trust us

Regulatory pressure meets sensitive patient data

Medical device companies operate within a dense network of regulatory requirements, quality management and strict data protection obligations. AI projects that are insufficiently secured endanger not only patient safety but also approvals, liability issues and the trust of clinics.

Why we have the industry expertise

Our work combines deep technical understanding with business responsibility: we build AI systems as if we were going to sell them ourselves in a regulated product portfolio. That means we think in terms of audit trails, traceability and verifiable tests from the start, not as an afterthought. This mindset is essential when documentation copilots and clinical workflow assistants enter production.

Technically, we combine secure-by-design architectures with pragmatic governance: secure self-hosting options, strict data classification, role-based model access controls and comprehensive audit logs. In practice this means a prototype does not remain in experimental mode but follows a clear path to MDR compliance, ISO 13485 and, where necessary, HIPAA readiness.

Our teams bring experience from regulated industries, product development and operational roles — we work with internal quality managers, regulatory affairs and IT security and take responsibility until a system can be operated permanently in the customer's P&L.

Our references in this industry

We do not list direct medical device names because many projects are confidential. Instead, we demonstrate how our experience from related, heavily regulated industries applies: at STIHL and Eberspächer we addressed security and quality requirements in production environments and created systems that support traceability, training and audit trails — essential capabilities for MDR-compliant AI in medical technology.

For technology and B2B clients such as BOSCH and AMERIA we combined complex go-to-market and product development questions with security and compliance engineering. FMG projects demonstrate our competence in document research and audit-ready analyses — exactly what documentation copilots and regulatory dossiers require.

These projects prove our ability to combine technical depth with regulatory rigor: from secure integration into manufacturing processes to scaling audit-capable AI functions in critical workflows.

About Reruption

Reruption doesn't just build prototypes — we build operational capabilities. With the co-preneur approach, our teams act like co-founders within the client organization: we take on operational responsibility, deliver rapid technical results and create the organizational prerequisites for long-term compliance. This combination of speed, ownership and technical depth is particularly critical for MedTech.

We are well connected in the region and know the BW ecosystem: from Aesculap and Karl Storz to Ziehm and Richard Wolf. This proximity helps us understand industry-specific processes, quality cultures and regulatory expectations first-hand — and align AI deliverables precisely to them.

Ready for MDR-compliant AI deployments?

Contact us now — we assess the use case, risks and feasibility in a fast technical PoC and deliver a clear roadmap to compliance.

What our Clients say

Hans Dohrmann

Hans Dohrmann

CEO at internetstores GmbH 2018-2021

This is the most systematic and transparent go-to-market strategy I have ever seen regarding corporate startups.
Kai Blisch

Kai Blisch

Director Venture Development at STIHL, 2018-2022

Extremely valuable is Reruption's strong focus on users, their needs, and the critical questioning of requirements. ... and last but not least, the collaboration is a great pleasure.
Marco Pfeiffer

Marco Pfeiffer

Head of Business Center Digital & Smart Products at Festool, 2022-

Reruption systematically evaluated a new business model with us: we were particularly impressed by the ability to present even complex issues in a comprehensible way.

AI transformation in medical devices & healthcare

Integrating AI into medical devices requires more than good models: it demands an orchestrated combination of security, documentation, regulatory evidence and operational robustness. In an environment where any system change can trigger a new risk assessment, AI deployments must be designed from the outset for audit readiness and patient safety.

Industry Context

Medical devices are subject to a variety of regulations: MDR in the EU, ISO 13485 for quality management, local data protection requirements and, in international scenarios, HIPAA. These standards define not only how a device must be designed, but also how software changes, data flows and ML models must be documented, validated and monitored. An AI component that supports clinical decisions changes the risk profile of the entire system — and thereby the requirements for testing procedures, traceability and post-market surveillance.

Additionally, the data landscape in healthcare environments is heterogeneous: structured measurements, imaging data, unstructured physician letters and device data must be classified, pseudonymized and processed with traceability. Without clear data lineage and retention strategies, compliance risks and lack of reproducibility arise.

Regionally, Baden-Württemberg is a strong MedTech hub with companies like Aesculap and Karl Storz — this creates interfaces to sophisticated manufacturing, supply chains and clinic networks. This interconnection requires that AI solutions are consistent, secure and auditable both in product development and in delivered devices.

Key Use Cases

Documentation copilots: AI can massively accelerate the creation of regulatory dossiers, technical documentation and clinical study reports. Crucial is that copilots provide traceable sources, versioning and verifiable decision paths so auditors can trace every statement back to its origin.

Clinical workflow assistants: Assistive systems in clinical workflows increase efficiency but must not obscure responsibilities. We design assistants with strict output controls, confidence indicators and human-in-the-loop processes so that clinical decisions clearly remain with medical staff and AI only provides support.

Regulatory alignment & audit-ready systems: Whether MDR, ISO 13485 or HIPAA — we establish processes that document changes to models, retrainings and datasets without gaps. This includes automated compliance checks, change logs and standardized test suites that facilitate certification and subsequent audits.

Implementation Approach

Our implementations start with a precise risk analysis and a Privacy Impact Assessment (PIA). We map data flows, classify data by protection needs and define clear separations for patient data using Secure Self-Hosting & Data Separation. In highly sensitive scenarios we prefer on-premise or VPC-based deployments with hardware isolation to minimize regulatory concerns related to cloud providers.

In parallel we define access controls at the model and API level as well as comprehensive audit logging: who made which request, with which input data, which model was active and what response was generated. These logs are designed to be usable for both internal reviews and external auditors.

For model management we implement versioning, data lineage and reproducibility. Each training run receives a reproducible recipe with seed, dataset IDs, preprocessing steps and evaluation metrics. Only in this way can validation cycles and CAPA measures (Corrective and Preventive Actions) be documented cleanly.

Evaluation & red-teaming are integral: systems undergo targeted adversarial tests, bias analyses and safety checks before entering clinical pilot phases. These tests are conducted closely with clinical experts and regulatory affairs so that technical findings can be translated into regulatory actions.

Success Factors

Success requires organizational alignment: regulatory affairs, quality management, IT security, clinical stakeholders and product management must be involved early. Without this cross-functional synchronization, delays in approvals and problems in post-market surveillance are likely.

Automation is a lever: compliance automation reduces manual effort for ISO and MDR evidence. Standardized templates for ISO/NIST/ISO 13485 as well as automated reports for audits create traceability and accelerate release processes.

Moreover, the choice of operating model is decisive: Secure Self-Hosting and data separation minimize regulatory risks in many MedTech contexts, while cloud-based approaches with strong contractual and technical protections make sense for less sensitive use cases.

Finally, a clear roadmap from prototype to validated product is needed: small, well-tested PoCs followed by iterative validation cycles and ultimately design controls that meet MDR and ISO 13485 requirements. This roadmap is part of our co-preneur approach: rapid technical results plus a sustainable compliance architecture.

Ready to start your AI security & compliance?

Schedule an initial meeting: we'll clarify architecture, data strategy and audit readiness and show concrete next steps.

Frequently Asked Questions

MDR compliance for an AI system starts with a systematic risk analysis: we identify the clinical risks, assess the risk potential of the AI outputs and define risk controls. These controls range from design-level constraints to field monitoring mechanisms. It is important that the AI is not operated as a black box, but that decisions, data and models are traceable.

Design controls must also be established: requirements, verification and validation plans, software architectures and traceability from requirements to tests. Every change to the model or training data must be documented, including version history and regression tests, so auditors can understand how system state and performance change over time.

Technically, we recommend measures such as secure self-hosting options, data separation, detailed audit logs and model access controls. These measures minimize attack surfaces and provide the basis for technical evidence in the MDR context.

Organizationally, involving regulatory affairs and quality management throughout the lifecycle is essential. Without these stakeholders, it will be difficult to create and maintain the required regulatory documents, post-market surveillance plans and CAPA processes in a timely manner.

ISO 13485 requires a quality management system that ensures the conformity of medical products — this also applies to AI-enabled components. For AI this means: clear processes for software development, risk analysis, validation, release and change tracking. Any ML modification falls under change control and requires an assessment of its impact on safety and effectiveness.

Furthermore, you must ensure complete documentation and traceability: requirement documents, architecture descriptions, test plans, validation reports and issue records must be stored in a revision-safe manner. Automated pipelines that record metadata about training runs, datasets and evaluation metrics are a major advantage here.

Auditors also expect companies to have CAPA processes in place and to feed field observations into product improvement. For AI systems this means robust monitoring, performance dashboards and clear escalation rules for degrading models.

Practically, templates and compliance automation help: standardized ISO 13485 checklists, test suites and documented validation protocols reduce effort and increase confidence in both internal and external audits.

Patient data protection starts with data minimization and clear classification: which data is truly required for the use case, which can be pseudonymized or aggregated? We implement data governance policies that define data classification, retention cycles and access rights to minimize the risk of unintended disclosure.

Technically, we rely on strict data separation, encrypted storage at rest and in transit, and role-based access controls. For sensitive workflows we recommend self-hosting or dedicated VPC solutions to avoid regulatory uncertainties around international cloud providers.

Additionally, Privacy Impact Assessments (PIA) and Data Protection Impact Assessments (DPIA) are central. These assessments document risks and technical and organizational countermeasures and are often prerequisites for regulatory approvals.

Finally, a culture of data security is required: training, clear responsibilities (data steward, data protection officer) and regular audits ensure that data protection measures are implemented continuously and not only exist on paper.

Self-hosting often offers advantages when regulatory control, data locality and lower third-party risk are critical — central themes in medical technology. When patient data, proprietary algorithms or strict audit requirements are involved, self-hosting provides maximum transparency and control over infrastructure and logs.

Cloud services can be appropriate when scalability, rapid iteration and cost efficiency are the priority. For less sensitive use cases or anonymized datasets, cloud providers offer managed services with high security levels and convenient compliance certifications.

The decision should be based on a formal risk analysis: data classification, regulatory requirements, SLA needs, integration effort and operating costs. Often a hybrid architecture is ideal: training in protected environments, deployment variants depending on sensitivity — all orchestrated by clear governance rules.

In practice we implement template decision matrices and proof-of-concept tests to determine the best option for the specific product — taking into account ISO 27001, TISAX-like requirements and MDR-specific rules.

Audit-ready means that all relevant artifacts are available at any time: data provenance, model versions, test protocols, releases and field observations. We achieve this through automated pipelines that capture metadata throughout the lifecycle — from data collection through training to production.

Key components are: versioning of datasets and models, structured test suites with reproducible test data, automated reports and an audit log that documents accesses and model changes. These data must be stored in revision-safe systems to satisfy regulatory inspections.

Organizationally, roles and responsibilities must be anchored: who approves releases, who performs validations and who is responsible for monitoring and incident management. We establish gatekeeping processes between development, quality management and regulatory so that every release follows a defined compliance checklist.

Technically, we use compliance automation to generate recurring evidence — for example ISO/NIST templates, automated risk reports and auditor-facing reports. This automation reduces audit effort and increases responsiveness to regulatory inquiries.

Red-teaming is not an optional luxury but a core part of the security strategy for MedTech AI. Through targeted adversarial tests we identify model weaknesses, unintended biases and cases where outputs could be clinically dangerous or misleading. These findings feed directly into risk mitigation measures and test plans.

Evaluation includes both technical metrics (robustness, calibration, false-positive/negative rates) and clinically oriented assessments (impact on patient safety, decision paths of staff). We combine automated benchmarks with clinical studies and expert reviews to verify performance in real-world scenarios.

A structured red-teaming process includes threat modeling, scenario generation, attack simulations and deriving technical and organizational countermeasures. We also document all results in audit-ready reports so the measures are demonstrable to regulators and internal quality departments.

Successful red-teaming initiatives reduce regulatory risk in the long term, strengthen the trust of clinics and patients, and improve the resilience of AI solutions in production.

Contact Us!

0/10 min.

Contact Directly

Your Contact

Philipp M. W. Hoffmann

Founder & Partner

Address

Reruption GmbH

Falkertstraße 2

70176 Stuttgart

Social Media