In their influential NEJM paper [1], David Bates and Atul Gawande argued that information technology and computerised decision aids have an important role to play in supporting the quality of care and safety of patients. This OpenClinical paper from 2002 supported their position while extending the discussion to consider some techniques for ensuring the safe development and deployment of clinical decision support systems (CDSSs). We note, for example, that CDSSs have the potential to bring significant benefit to patient care but stress the need for designers and suppliers of such technologies to recognise a duty of care entailing the adoption of appropriate techniques to address quality, safety, ethical and legal issues throughout the design and deployment cycle.
Bates and Gawande argued that “safe care now requires a degree of individualization that is becoming unimaginable without computerized decision support”. This OpenClinical paper supported their position and proposed a number of techniques designed to help ensure that such systems address quality, safety, ethical and legal as well as technical and efficacy issues.
The diverse causes of avoidable medical errors and their management have also been under discussion for a considerable time (e.g. [6]). These include organisational as well as individual causes, with remedial strategies centring on psychological factors (e.g. [7]) and socio-technical interventions (e.g. [8]). A variety of human systems strategies for preventing errors, and for ensuring that they are properly reported and documented when they do occur, have been investigated [9].
Bates and Gawande made the important point that there are technical strategies for reducing errors as well as “human systems” approaches. We strongly endorse their view that “… strategies for preventing errors and adverse events include tools that can improve communication, make knowledge readily accessible, require key pieces of information (such as the dose of a drug), assist with calculations, perform checks in real time, assist with monitoring and provide decision support” [1]. In 2002, OpenClinical published a discussion paper, ‘Quality & Safety in Decision Support Systems’ [19]. In the following we outline some of the issues that have been identified and propose methods for dealing with them.
A duty of care
The range of decision support and knowledge management systems designed for clinical practice originating from the commercial, academic and clinical sectors is wide and growing quickly (see www.openclinical.org/clinical.html and www.openclinical.org/commercial.html). Despite the apparent promise of CDSSs, all new technologies have potentially negative as well as positive effects: prescribing software that can give beneficial advice also has the potential to recommend inappropriate and harmful prescriptions. Whatever the potential value of prompts and reminders, interactive guidelines, workflow management systems and so on, they will only become acceptable if they can demonstrate high levels of safety and reliability and gain the trust of clinicians and patients. As with new drugs or any novel clinical procedure, it is not sufficient to show that innovative software is efficacious; we must also consider its safety. This can be partly addressed through the normal methods of clinical trials, but software has many “failure modes” that might not be detected in trials. This is widely recognised in other “safety critical” fields where software is extensively used, such as transport and the power industries. In these and other fields, software quality and safety are viewed as vital engineering problems in their own right.
Quality
The quality of a decision support system needs to be considered at two levels: the level of the technology (the software platform which is used to build a clinical application) and the level of the medical content (the knowledge base) that the technology uses. The following quality methods are generally recommended in software safety engineering, and are applicable at both levels.
- Software should be designed, implemented, tested and documented using generally recognised methods (e.g. the ISO 9000 standard [20]; see also Leveson’s classic book on the subject, Safeware [21]).
- In cases of significant levels of clinical risk, formal design methods and rigorous verification of critical components of the system may be required [22].
- An explicit quality plan should be developed, covering all phases of implementation, testing and maintenance of the system [21]. Testing should be carried out against this plan, with all tests and their results recorded for subsequent review (a minimal sketch of such recorded testing follows this list).
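To make the last point concrete, the sketch below runs a set of documented test cases against a hypothetical dose-recommendation routine and records every test and its result for later review. All names and values here (recommend_dose, the QP-* case identifiers, the 15 mg/kg rule) are invented purely for illustration and are not drawn from the original paper.

```python
# Minimal sketch: executing documented test cases against a quality plan
# and recording every test and its result for subsequent review.
# All names, cases and dose rules are hypothetical.
import csv
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class TestCase:
    case_id: str        # identifier traceable to the quality plan
    weight_kg: float    # input: patient weight
    expected_mg: float  # expected dose from the plan's acceptance criteria


def recommend_dose(weight_kg: float) -> float:
    """Stand-in for the routine under test: 15 mg per kg, capped at 1000 mg."""
    return min(weight_kg * 15.0, 1000.0)


def run_test_plan(cases: list[TestCase], log_path: str = "test_log.csv") -> bool:
    """Run every case, record the outcome, and return True only if all pass."""
    all_passed = True
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "case_id", "expected_mg", "actual_mg", "result"])
        for case in cases:
            actual = recommend_dose(case.weight_kg)
            passed = abs(actual - case.expected_mg) < 1e-6
            all_passed = all_passed and passed
            writer.writerow([datetime.now(timezone.utc).isoformat(), case.case_id,
                             case.expected_mg, actual, "PASS" if passed else "FAIL"])
    return all_passed


if __name__ == "__main__":
    plan = [TestCase("QP-001", 20.0, 300.0), TestCase("QP-002", 80.0, 1000.0)]
    print("All tests passed" if run_test_plan(plan) else "Failures recorded in test_log.csv")
```

In practice the cases and acceptance criteria would be traceable to the quality plan itself, and the resulting log would form part of the documentation reviewed during testing and maintenance.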
Ensuring that the medical content of a decision support or guideline system is of high quality raises particular problems. Medical knowledge is subject to frequent change and new research often demonstrates that past clinical practices are ineffective or even hazardous. Furthermore, knowledge quality often remains a professional judgement, and cannot always be based on objective scientific evidence. Even when there is evidence it may be limited, open to different interpretations, and subject to change as scientific knowledge advances.
The developers of decision support systems should therefore seek to achieve at least the level of quality assurance that is applied with more traditional knowledge sources, such as medical journals and reference texts, augmented with methods that are appropriate for the new types of knowledge technology. Methods for quality control of the medical content, as distinct from the technical properties of an application, may include:
1. Peer review. This may include static review of content (careful reading of the knowledge base by appropriate specialists) and, perhaps more realistically, operational review (practical testing against standard clinical cases – “paper patients” – and in clinical trials).
2. User review. All content should also be visible to the clinical professionals who use the system in static form (e.g. as intelligible text) and in context (e.g. as patient-specific explanations of any recommendation made by the software).
3. In certain circumstances where risks are high, automated checking of the software design may have a role. Mechanical checking can reveal omissions, logical inconsistencies, redundancies, ambiguities and certain violations of resource or clinical constraints (see the sketch following this list).
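As an illustration of the kind of mechanical check intended here, the sketch below scans a tiny rule base for identical (redundant) rules and for pairs of rules that can give conflicting advice about the same drug. The rules and the conflict criterion are invented for illustration; production verification tools for knowledge bases are considerably more sophisticated.

```python
# Minimal sketch of mechanical consistency checking over a small rule base.
# Each rule maps a set of conditions to a recommendation; the rules shown
# are invented examples, not clinical guidance.
from itertools import combinations

RULES = [
    {"id": "R1", "conditions": {"penicillin_allergy"}, "advice": ("amoxicillin", "avoid")},
    {"id": "R2", "conditions": {"otitis_media"}, "advice": ("amoxicillin", "recommend")},
    {"id": "R3", "conditions": {"otitis_media"}, "advice": ("amoxicillin", "recommend")},  # duplicates R2
]


def check_rule_base(rules):
    """Flag identical (redundant) rules and pairs whose advice about the same drug conflicts."""
    findings = []
    for a, b in combinations(rules, 2):
        if a["conditions"] == b["conditions"] and a["advice"] == b["advice"]:
            findings.append(f"Redundancy: {a['id']} duplicates {b['id']}")
        elif a["advice"][0] == b["advice"][0] and a["advice"][1] != b["advice"][1]:
            findings.append(
                f"Potential conflict: {a['id']} ({a['advice'][1]} {a['advice'][0]}) vs "
                f"{b['id']} ({b['advice'][1]} {b['advice'][0]}) for a patient who satisfies "
                f"{sorted(a['conditions'] | b['conditions'])}")
    return findings


for finding in check_rule_base(RULES):
    print(finding)
```

Running the sketch flags the R2/R3 duplication and the potential amoxicillin conflicts between R1 and the other two rules.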
Safety
Safety is more than quality. A decision support system that is designed and implemented to high quality standards, and is working exactly as the engineers intended, can still give bad clinical advice. In the pharmaceutical industry, it is standard practice to carry out randomised controlled clinical trials to assess quality and safety and it is widely held that clinical IT systems should be similarly assessed (e.g. [23]).
Experience in the aerospace and nuclear industries has taught us, however, that software safety requires much more than empirical testing, and requires explicit safety protocols to minimise avoidable risks. There are various types of protocol [21], some of which are directly relevant to clinical software. For example, HAZard and OPerability (HAZOP) analysis [24], carried out prior to software development, can be used to identify the anticipated level of risk in realistic clinical scenarios and how it might be ameliorated. HAZOP is a methodical investigation of the hazards and operational problems to which a technological system can give rise and “is particularly effective for new systems or novel technologies” [24].
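By way of illustration, the sketch below records two HAZOP-style worksheet entries for a hypothetical prescribing scenario. The guidewords follow common HAZOP usage, but the parameters, deviations, consequences and safeguards are invented examples rather than content from an actual analysis.

```python
# Minimal sketch of HAZOP-style worksheet entries for a hypothetical
# prescribing scenario. Guidewords follow common HAZOP usage; the content
# of each entry is invented for illustration.
from dataclasses import dataclass


@dataclass
class HazopEntry:
    guideword: str    # e.g. "no", "more", "less", "other than"
    parameter: str    # aspect of the system being examined
    deviation: str    # what could go wrong
    consequence: str  # potential effect on the patient
    safeguard: str    # design obligation to pre-empt the hazard


WORKSHEET = [
    HazopEntry("more", "recommended dose",
               "dose higher than intended for body weight",
               "overdose and toxicity",
               "hard upper limit checked before advice is displayed"),
    HazopEntry("no", "drug interaction alert",
               "interaction check silently skipped when data are missing",
               "harmful co-prescription",
               "advice withheld and clinician warned when data are incomplete"),
]

for e in WORKSHEET:
    print(f"[{e.guideword.upper():>5}] {e.parameter}: {e.deviation} -> "
          f"{e.consequence} | safeguard: {e.safeguard}")
```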
If an initial risk assessment suggests a high level of risk, development should include a separate “safety lifecycle” which addresses safety management issues throughout the design, implementation and deployment of an application. Components of the lifecycle may include:
- A detailed analysis of the clinical situations or events that could be associated with increased patient mortality or morbidity. Each situation that is identified represents an obligation on system developers to make appropriate design changes to pre-empt such hazards.
- Testing that explicitly includes procedures to demonstrate that all safety obligations have been discharged.
- Preparation of a “safety case” at the completion of system development which documents the principal hazards, management options and design decisions that have been considered.
- Finally, a decision support application may include active software for managing potential hazards during operation, such as automatic monitoring for overdue or inappropriate clinical actions and the initiation of appropriate human or system interventions [17, 15] (a minimal sketch of such a monitor follows this list).
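The sketch below illustrates one form such an operational monitor might take: it checks pending clinical actions against a dose limit and an overdue threshold and returns alerts for escalation. The drug, limits and thresholds are assumptions chosen purely for illustration; in a deployed system they would be derived from the safety case rather than hard-coded.

```python
# Minimal sketch of an operational hazard monitor: it checks pending clinical
# actions for out-of-range doses and overdue status and escalates to a human.
# The drug, limits, thresholds and action names are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_DAILY_DOSE_MG = {"methotrexate": 25.0}  # illustrative limit, not clinical guidance
OVERDUE_AFTER = timedelta(hours=4)


@dataclass
class PendingAction:
    drug: str
    dose_mg: float
    ordered_at: datetime
    completed: bool = False


def monitor(actions: list[PendingAction], now: datetime) -> list[str]:
    """Return hazard alerts that should trigger human or system intervention."""
    alerts = []
    for a in actions:
        limit = MAX_DAILY_DOSE_MG.get(a.drug)
        if limit is not None and a.dose_mg > limit:
            alerts.append(f"HAZARD: {a.drug} dose {a.dose_mg} mg exceeds limit {limit} mg")
        if not a.completed and now - a.ordered_at > OVERDUE_AFTER:
            alerts.append(f"HAZARD: {a.drug} ordered at {a.ordered_at:%H:%M} is overdue")
    return alerts


if __name__ == "__main__":
    now = datetime(2003, 6, 24, 12, 0, tzinfo=timezone.utc)
    pending = [
        PendingAction("methotrexate", 30.0, now - timedelta(hours=1)),
        PendingAction("methotrexate", 10.0, now - timedelta(hours=6)),
    ]
    for alert in monitor(pending, now):
        print(alert)  # in a real system these would be routed to a clinician
```

Run against the two example actions, the monitor reports one over-limit dose and one overdue order; a real system would route such alerts to a clinician or place an automatic hold on the order.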
Just as the software industry has generally adopted the ISO 9000 quality standard [20], many companies are adopting the International Electrotechnical Commission (IEC) 61508 safety standard [25] as a basis for establishing best practice in the design and development of safety-critical software. However, neither the ISO nor the IEC has the authority or resources to enforce its standards (e.g. by any audit or certification process), so the current position appears to be that in this respect the clinical software industry must police itself.
Ethics
Some medical ethical issues are well known. For example, the problems associated with genetic risk assessment and genetic manipulation in science and clinical practice are subjects of wide public discussion. Yet the use of computer systems in clinical decisions that will affect the health and safety of patients raises questions of comparable difficulty. Indeed, it seems certain that computer systems will increasingly take decisions without human involvement. The ethical implications of these developments are not yet being widely discussed.
In a panel discussion sponsored by OpenClinical at Medinfo 2001, Patti Abbott summarised the ethical principles underlying medicine: beneficence – do good; non-maleficence – do no harm; distributive justice – be fair; autonomy – respect patient self-direction and clinical freedom [26]. She considered some of the issues arising from these principles for the use of novel diagnosis or treatment planning software. For example, the use of any experimental clinical procedure normally entails informed consent from those who are the experimental subjects: the patients. Are we not ethically bound, she asked, to seek such consent for the clinical use of decision-support systems? Abbott suggested an answer to this question from standard medical ethics: if a decision support system represents a new or non-standard intervention it requires consent, but if it only formalises or standardises common practice, then consent does not seem to be required.
But is this answer sufficient in the case of software? If a doctor doubts the veracity of a computer’s advice, where years of experience appear to clash with years of software development and testing, what is the doctor’s proper professional action? Is the patient who prefers human clinical judgement to a computerised decision running a risk that his or her insurer will refuse to pay the costs of care or, worse, reject liability in a subsequent suit for malpractice?
Legal liability
Questions about legal liability in the use of decision support software are also long-standing: if a decision support system gives bad advice, then who is responsible? The software designers? The providers of the medical knowledge that it uses? Or the healthcare professionals who are responsible for the final clinical decision? No one knows, since so far as we can ascertain there is no case law that sets the relevant precedents.
All software developers will presumably wish to minimise the chances of patient harm, both for the patient’s sake and to anticipate the legal liabilities that might result from the use of these technologies. As developers ourselves we have reviewed current practices in software and safety engineering with a view to establishing a development process that is appropriate for them [17].
Discussions with risk managers at Cancer Research UK, aimed at establishing the circumstances in which legal liability issues might arise from our own developments, and consultations with independent legal counsel on the likely exposure of any provider should patients come to harm in situations where its technology has been used, were also helpful, but they are far from conclusive under English law, let alone internationally. So far as we can ascertain, neither the FDA nor its counterparts in Europe have published policies or standards regarding liability. They appear to be waiting for legal cases to arise so that the courts can clarify the position for them.
Given this lack of clarity it is common practice for suppliers of decision support products and services simply to place disclaimers on applications that attempt to limit their liability by restricting the “proper” use of their products. Typical examples are “The Software is provided ‘AS IS’, without any warranty as to quality, fitness for any purpose, completeness, accuracy or freedom from errors” or “In providing this expert system, [the company] does not make any warranty, or assume any legal liability or responsibility for its accuracy, completeness, or usefulness, nor does it represent that its use would not infringe upon private rights”. Given existing consumer protection legislation in many countries, legal opinion seems to be that such disclaimers offer limited protection if there are design faults in the product. The practical exposure of technology suppliers can be reduced by taking out insurance, though the degree of protection this affords is likely to vary from country to country.
Despite the absence of case law in this area, a supplier of decision support systems would almost certainly be expected in most countries to follow what is considered to be current best engineering practice in the design, development, verification, validation, testing and maintenance of its products.
Many legal systems make this explicit as a duty of care – to the patients who might be adversely affected by the technology, and to professionals who may use it in good faith in their clinical practice. Precisely what these duties should be is a subject that the medical community might consider with some urgency.
Conclusions
Bates and Gawande were right to draw attention to the important role that decision support software and other information technologies could play in maintaining and improving quality and safety standards in patient care. A further body of evidence supporting their position can be found at www.openclinical.org. Despite the promise of these technologies, however, and the increasing level of evidence for their clinical value, IT has the potential to lead to patient harm as well as benefit. Suppliers of these technologies, and the healthcare providers who adopt them, owe a duty of care to those affected by them to pursue the highest standards of quality and safety in their design and deployment.
References
1. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003;348(25):2526-34.
2. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Committee on Quality of Health Care in America, Institute of Medicine, 1999.
3. An organisation with a memory. Report of an expert group on learning from adverse events in the NHS, chaired by the Chief Medical Officer. London: Department of Health, 2000.
4. Baker GR, Norton P. Patient Safety and Healthcare Error in the Canadian Healthcare System: A Systematic Review and Analysis of Leading Practices in Canada with Reference to Key Initiatives Elsewhere. A Report to Health Canada. Ottawa: Health Canada, 2002.
5. Australian Council for Safety and Quality in Healthcare. Safety in Practice: Making Health Care Safer. Second Report to the Australian Health Ministers’ Conference, 2001.
6. Vincent CA, ed. Clinical Risk Management. BMJ Publications, 1995.
7. Reason JT. Understanding adverse events: the human factor. In: Vincent CA, ed. Clinical Risk Management: Enhancing Patient Safety (2nd edition). London: BMJ Publications, 2001:9-30.
8. Secker-Walker J, Taylor-Adams S. Clinical incident reporting. In: Vincent CA, ed. Clinical Risk Management: Enhancing Patient Safety (2nd edition). London: BMJ Publications, 2001:419-38.
9. Vincent CA, ed. Clinical Risk Management: Enhancing Patient Safety (2nd edition). London: BMJ Publications, 2001.
10. Fox J, Glowinski A, Gordon C, Hajnal S, O’Neil M. Logic engineering for knowledge engineering: design and implementation of the Oxford System of Medicine. Artificial Intelligence in Medicine 1990;2(6):323-39.
11. Walton RT, Gierl C, Yudkin P, et al. Evaluation of computer support for prescribing (CAPSULE) using simulated cases. BMJ 1997;315(7111):791-5.
12. Fox J, Thomson R. Decision support and disease management: a logic engineering approach. IEEE Trans Inf Technol Biomed 1998;2(4):217-28.
13. Taylor P, Fox J, Pokropek AT. The development and evaluation of CADMIUM: a prototype system to assist in the interpretation of mammograms. Med Image Anal 1999;3(4):321-37.
14. Emery J, Walton R, Murphy M, et al. Computer support for interpreting family histories of breast and ovarian cancer in primary care: comparative study with simulated cases. BMJ 2000;321(7252):28-32.
15. Hurt C, Fox J, Bury J, Saha V. Computerised advice on drug dosage decisions in childhood leukaemia: a method and a safety strategy. In: Proceedings of the 9th Conference on Artificial Intelligence in Medicine in Europe (AIME-03), Cyprus, 18-22 October 2003 (to appear).
16. Fox J. Decision-support systems as safety-critical components: towards a safety culture for medical informatics (editorial). Methods Inf Med 1993;32:345-8.
17. Fox J, Das S. Safe and Sound: Artificial Intelligence in Hazardous Applications. Cambridge, MA: MIT Press, 2000.
18. Fox J, Bury J. A quality and safety framework for point-of-care clinical guidelines. Proc AMIA Symp 2000:245-9.
19. OpenClinical Green Paper: Quality & Safety in Decision Support Systems, 2002. (Accessed 24 June 2003 at http://www.openclinical.org/docs/int/docs/qands011.pdf.)
20. ISO 9000. (Accessed 24 June 2003 at http://www.iso.ch/iso/en/iso9000-14000/index.html.)
21. Leveson N. Safeware: System Safety and Computers. Reading, MA: Addison-Wesley, 1995.
22. Fox J. Quality and safety of clinical decision support technologies: a discussion of the role of formal methods. In: Ehrig H, Kramer BJ, Erta A, eds. Proceedings of IDPT 2002 (World Conference on Integrated Design and Process Technology). Society for Design and Process Science, USA, 2002.
23. Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision-support systems. Methods Inf Med 1993;32(1):1-8.
24. Redmill F, Chudleigh M, Catmur J. System Safety: HAZOP and Software HAZOP. Chichester: John Wiley, 1999.
25. IEC 61508 standard. (Accessed 24 June 2003 at http://www.iec.ch/61508/.)
26. Abbott P. Ethical considerations for decision support systems. Presentation at the MEDINFO 2001 panel discussion on quality, safety and ethical issues in the use of computers to advise on patient care. (Accessed 24 June 2003 at http://www.openclinical.org/qualitysafetyethicspanel.html.)
John Fox and Richard Thomson, OpenClinical.org 2002
OpenClinical is a web-based resource focused on computer-based knowledge management technologies and applications. It aims to provide information on such systems, to demonstrate their potential benefit to clinical care and to promote methods and tools for building knowledge applications that comply with the highest quality, safety and ethical standards.