TGA published review of safety and performance issues in Medical Device software

Medical Device Software
Author: Homi Dalal

Software as a Medical Device (SaMD)

In recent years software usage has become ubiquitous and software resources, particularly mobile apps accessible through smartphones, have increased in complexity and usage. The rapid rate of change in information technology and the developing software specialisation of artificial intelligence has impacted many industries, including healthcare.

Ignoring subtle jurisdictional differences, software used in healthcare can be placed into one of four primary sub-classes:

  1. Embedded software – software used within a medical device.
    Safety and performance of the software component is assessed as part of the finished medical device.
  2. Software used as an accessory to a medical device. This category comprises software intended to be used with a specific medical device. Safety and performance of the software is assessed in the same risk classification as the device.
  3. Software as a medical device (SaMD), aka ‘standalone software’ – software that has a medical purpose (i.e. used in the diagnosis, treatment or monitoring of diseases or injuries) is considered a medical device in and of itself.
  4. General purpose software – software that has no medical or clinical purpose (i.e. no diagnostic or therapeutic value) is not a medical device and is not regulated as such. The plethora of software used in the monitoring of general health, fitness or wellbeing falls into this category.

A mobile app that, for example, acts as a remote control for a medical device is considered an accessory to that device; in contrast, mobile apps intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions independent of another device are mobile medical apps and are classed as SaMD.

Alarmed by the spate of medical device recalls attributed to software defects globally, and in an attempt to ensure software resources are reliable and safe, the Australian Therapeutic Goods Administration (TGA) has published a literature review of safety and performance issues:

Actual and potential harm caused by medical software.
https://www.tga.gov.au/resource/actual-and-potential-harm-caused-medical-software

The review highlights that data from medical device recall databases may significantly under-represent software errors for a number of reasons, including patients being unaware of how to report problems, inadequate information being reported, the effects of software errors being too subtle or difficult to detect, or root cause analysis failing to identify software as the source of error when it causes other components to fail. While a large proportion of the recalls reported pertain to flaws in embedded software, the TGA states they illustrate the “significant and negative health impacts from [software] faults that have required recalls”.

The studies reviewed focus on mobile medical apps and catalogue a litany of software defects that could compromise patient safety and are potentially dangerous. Many app developers lack formal medical training, and clinicians are often not involved in the development process, resulting in apps available on popular app stores containing incorrect or incomplete information, or exhibiting inappropriate contextual or functional performance for the intended use. Digital products used in the prevention, detection or management of health conditions are rarely based on clinical data; most use surrogate, one-dimensional outcome measures for app content quality, and many are woefully lacking in clinical validation to demonstrate efficacy.

In the review, the TGA identified issues with six types of products:

  1. Symptom checkers and diagnostic apps
  2. Diabetes management software
  3. Melanoma/skin analysis software
  4. Asthma self-management
  5. Cardiovascular measurements
  6. Medicine dosing

Software errors in the reviewed literature revealed a diverse range of problems including:

  • Functional errors – such as incorrect values or data represented, incorrect outputs inferred or calculated, or the software did not work as described.
  • Resilience and reliability errors – such as the system crashing, freezing or functioning intermittently, or the software not responding or operating within the expected time, or alarms failing to function or operating at an incorrect time.
  • Usability or accessibility issues – including navigation, flow, or inconsistent formats, leading to incorrect usage, especially by population groups who may be disadvantaged through education, age or other factors, who have special needs, or who are digitally disadvantaged.
  • Data errors – including missing or incorrect validation of user-entered data, and problems with data integrity, quality and consistency. Failure of the software to validate data entered by users was a significant contributor to calculation problems.
  • Specificity and sensitivity concerns – low diagnostic sensitivity, poor specificity in detecting suspicious features, or failure to differentiate between conditions that would be clear to a trained specialist devalue the utility and usability of many apps.
  • Privacy and security problems – such as harvesting of identifiable patient data, lack of encryption, trackers, user surveillance and hacking.
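To illustrate the data-error category above – where failure to validate user-entered values was flagged as a significant contributor to calculation problems – here is a minimal sketch of defensive input validation for a hypothetical weight-based dosing calculator. All function names, ranges and parameters below are illustrative assumptions, not taken from the TGA review:

```python
def dose_mg(weight_kg: float, mg_per_kg: float = 10.0, max_mg: float = 500.0) -> float:
    """Hypothetical weight-based dose calculation with input validation.

    Rejects implausible input instead of silently computing a nonsensical
    dose -- the failure mode flagged in the review. The plausibility
    bounds and the dose cap here are illustrative only.
    """
    if not isinstance(weight_kg, (int, float)) or isinstance(weight_kg, bool):
        raise TypeError("weight_kg must be numeric")
    if not 0.5 <= weight_kg <= 300:
        # Reject values outside a plausible human weight range
        raise ValueError(f"weight {weight_kg} kg outside plausible range")
    # Cap the computed dose at a defined maximum
    return min(weight_kg * mg_per_kg, max_mg)
```

The point of the sketch is that validation and capping are explicit, auditable steps in the calculation path, rather than assumptions about what users will enter.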

The rapid evolution of software development, low barriers to entry for software developers, and the short life cycle of individual apps have all contributed to difficulties in measuring and tracking software errors or addressing safety concerns. Where the output of a SaMD impacts clinical outcomes and patient care, regulators expect its performance metrics to have a level of scientific rigour commensurate with the risk and impact of the SaMD. Established best practices for medical devices hold true for SaMD as well – including incorporating quality processes within design control early in SaMD development, complying with appropriate recognised standards, applying robust risk management, and ensuring the Essential Principles are fully met. Finally, to ensure the SaMD is clinically valid and can be used reliably and predictably, the clinical evaluation must:

  • establish a clinical association between the SaMD output (based on the inputs and algorithms selected) and the targeted clinical condition;
  • demonstrate analytical validation; and
  • generate evidence to validate clinical significance (during verification and validation activities as part of the QMS and good software engineering practices).

For additional information about SaMD and software-in-medical-device regulations and requirements, including information on other jurisdictions, Brandwood CKC has created a webinar.

We tailor our services to the requirements of your business and can help you implement plans that are best suited to your projects. Contact us

Disclaimer:

This blog is intended to communicate PharmaLex’s capabilities which are backed by the author’s expertise. However, PharmaLex US Corporation and its parent, Cencora, Inc., strongly encourage readers to review the references provided with this article and all available information related to the topics mentioned herein and to rely on their own experience and expertise in making decisions related thereto as the article may contain certain marketing statements and does not constitute legal advice. 
