This commentary's objectives are twofold: first, to create awareness among health and safety professionals of potential safety issues in computer software systems, and second, to provide recommendations for action by this community that may help mitigate some of these information technology-related risks. The narrative will review the concepts and principles of software product safety and related safety-management systems, potential risks in computer software technology, and recommended actions for safety professionals. The discussion will focus on the subset of information technology systems created to provide decision support to end-users, such as artificial intelligence (AI) tools, that is, software systems that simulate human cognitive functions (eg, information processing, memory, reasoning, and learning) to produce advice.
Occupational and environmental health professionals routinely deploy and assess the performance of safety-management systems designed to assure worker comfort, ease of equipment or device use, freedom from hazardous exposures, and prevention of harm to health. Traditionally, these systems have addressed subsystems or components of the physical, chemical, biologic, ergonomic, and organizational environment. However, rapid advances in and large-scale integration of information technologies (IT) into workplaces, for example, sensors, mobile systems, massive data processing, and cloud technologies, are now continuously reshaping both the nature of work and the workplace itself. Workplace IT deployment has been pervasive, with applications in domains ranging from training, personal protective equipment, and construction and fire life safety to environmental surveillance, wellness and health promotion, navigation, transportation, and security.1,2 This development has created the need to include an information technology safety subsystem in occupational and environmental medicine (OEM) safety-management systems.
Safety-management systems in organizations are designed to identify and mitigate avoidable hazards, to manage known but unavoidable hazards, and to continuously monitor and correct instances of potential harm (near misses) or actual harm. These systems contain five components: policy, organizing, planning and implementation, evaluation, and continual improvement.3 They are based on the principle that hazards and harm can arise out of complex interactions involving people, agents (eg, physical, chemical, biologic, digital technologies), vectors (eg, manufacturers, distributors, vendors, information systems), and internal and external environments (eg, social, organizational, economic).4
SOCIOTECHNICAL MODEL OF IT SAFETY AND SOURCES OF IT-RELATED SAFETY RISK
Concepts of worker safety and safety-management systems have parallel roots in the IT software-safety community, including work on the safety of decision-support systems.5,6 As in workplace safety, the potential for software-related adverse events, near misses, and unsafe conditions arises from a complex set of interactions between the users, the technology (eg, hardware, software, knowledge base, usability, and workflow integration), and the socioenvironmental context in which use occurs, that is, the sociotechnical model of IT safety.7,8 For example, Fig. 1 illustrates the application of the sociotechnical model of IT safety to the healthcare delivery workplace.8
Technology-Based Safety Risk
Technology-based risk in the model relates to the quality (precision, accuracy, reproducibility, validity), reliability, availability, and usability of the application or system. Inadequate performance in any of these areas creates the potential for adverse outcomes. In decision-support systems, high levels of performance in each of these areas are essential for the system's component parts, that is, the knowledge base, the subject- or environment-specific data inputs, and the reasoning engine. Examples of data-quality issues include incomplete, untimely, or inaccurate data; loss of data integrity during transmission; and errors in translation between systems. These can result from a lack of interoperability between data repositories and the IT systems that consume them.9
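To make these data-quality dimensions concrete, the following minimal sketch (in Python) shows how records entering a hypothetical decision-support pipeline might be screened for completeness, timeliness, and transmission integrity before they reach the reasoning engine. The field names, the 24-hour staleness threshold, and the checksum convention are illustrative assumptions, not features of any cited system.

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical data-quality gate for records entering a decision-support
# pipeline. The field names, 24-hour staleness threshold, and SHA-256
# checksum convention are illustrative assumptions.
REQUIRED_FIELDS = ("subject_id", "measurement", "unit", "timestamp", "checksum")
MAX_AGE = timedelta(hours=24)  # older inputs are treated as untimely


def check_record(record: dict) -> list:
    """Return a list of data-quality problems; an empty list means the
    record may proceed to the reasoning engine."""
    problems = []

    # Completeness: every required field must be present and non-empty.
    missing = [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    if missing:
        problems.append("incomplete: missing " + ", ".join(missing))
        return problems  # remaining checks need the missing fields

    # Timeliness: timestamps are assumed to be ISO-8601 with a timezone.
    age = datetime.now(timezone.utc) - datetime.fromisoformat(record["timestamp"])
    if age > MAX_AGE:
        problems.append("untimely: record is %s old" % age)

    # Integrity: recompute the checksum shipped with the record to detect
    # corruption or mistranslation during transmission between systems.
    payload = "|".join([record["subject_id"], str(record["measurement"]), record["unit"]])
    if hashlib.sha256(payload.encode()).hexdigest() != record["checksum"]:
        problems.append("integrity: checksum mismatch after transmission")

    return problems
```

A gate of this kind would typically quarantine failing records for human review rather than silently dropping them, so that data-quality failures themselves become reportable near misses.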
The knowledge base of a decision-support system can contribute to safety risk if it lacks relevance, completeness, currency, or consistency. Similarly, deficiencies in the decision-support system's technologies that identify, retrieve, process, and represent subject- or environment-specific data can also create safety risks. In addition, reasoning engines, which comprise logic, data models, and algorithms, must have quality and reliability attributes that minimize error and risk of harm. Finally, although less often considered, the usability of a decision-support system and the degree of its effective integration into end-user workflow can affect the technology's safety risk.10 Human interactions with decision-support software that is difficult to use or learn, or that is too slow for time-sensitive tasks, may cause errors affecting the quality or appropriateness of the tool's outputs. Workflow disruptions caused by poorly designed or poorly executed software implementations can have similar consequences by adversely affecting a user's interactions with the system.
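As a small illustration of monitoring knowledge-base currency, the sketch below flags entries whose last evidence review has lapsed. The entry schema and the 12-month review interval are hypothetical assumptions rather than a published standard.

```python
from datetime import date, timedelta

# Hypothetical knowledge-base schema: each entry records when it was last
# reviewed against current evidence. The 12-month review interval is an
# illustrative policy choice, not a published standard.
REVIEW_INTERVAL = timedelta(days=365)


def stale_entries(knowledge_base, today=None):
    """Return identifiers of entries overdue for review, a currency
    deficiency that could propagate into unsafe advice."""
    today = today or date.today()
    return [e["id"] for e in knowledge_base
            if today - e["last_reviewed"] > REVIEW_INTERVAL]


# Example: one current entry and one overdue entry.
kb = [
    {"id": "rule-001", "last_reviewed": date(2024, 1, 15)},
    {"id": "rule-002", "last_reviewed": date(2020, 6, 1)},
]
print(stale_entries(kb, today=date(2024, 6, 1)))  # -> ['rule-002']
```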
User-Based Safety Risks
User-based sources of risk for harm arise from variability in cognitive capacities such as level of knowledge, training, experience, memory, and reasoning ability. Users are also subject to cognitive biases, such as thinking driven by excessive emphasis on recent events (availability bias) or prior expectations (ascertainment bias), locking onto salient features too early in the decision process (anchoring bias), or resisting new evidence or knowledge because it runs counter to existing norms or beliefs (the Semmelweis reflex).11 Users may also commit decision errors as a result of interacting with a computer-based advisory system. These include acting as directed by the cognitive or AI system irrespective of the correctness of the suggested action (automation bias), ignoring or dismissing the system's advice because it comes from an automated system (anti-automation bias), or failing to reconsider decisions because the system did not prompt them to do so (error of system omission).12 Finally, the manual abstraction and entry of data into a decision-support system are potential sources of user error that can adversely affect the quality of the system's output.
Socioenvironmental Context and Safety Risk
The socioenvironmental context of IT use is the third major focus of an IT safety-management system because user-IT interfaces are affected by social, physical, and organizational factors operating in place and time.13 Social aspects may include autonomy in decision-making, needs for collaboration with or feedback from other individuals, and teaming for task completion. Accessibility, noise, light, air quality, and workstation ergonomics are aspects of the physical environment that can affect the quality of a worker's computer interaction. In addition, numerous organizational factors, such as enterprise values and beliefs (culture), workplace climate, leadership, and performance evaluation, can affect decision-making and the safety risk arising from use of IT decision-support tools.
CONCLUSION AND RECOMMENDATIONS
There are no regulatory standards requiring proof of product safety for decision-support systems that are not deemed medical devices by the US Food and Drug Administration.14 Nonetheless, there are potential safety risks in these non-regulated software systems that producers and OEM professionals can address. Producers of computer-based decision support can establish the safety of their software tools by providing evidence that safety has been integrated into the product's life cycle, extending from design to decommissioning. This would create a “safety case” for the decision-support tool, composed of documented processes and outcomes that show how the company addressed: (1) predictable errors in component technologies, such as natural language processing for data abstraction or knowledge-base maintenance; (2) procedures for dealing with unpredictable errors arising during use; and (3) post-implementation surveillance mechanisms to identify problems and near misses for continuous product improvement.6,15 OEM professionals can help catalyze these actions by requesting that these “safety cases” be provided pre-deployment for assessment of: (1) the product's testing for safety and its ability to “fail safe”; (2) the product's monitoring system for near misses and errors; and (3) recommended processes for error handling, reporting, and remediation. This pre-deployment safety review is a standard component of safety-management systems for traditional sources of workplace safety risk and can form the basis for incorporating IT safety into these systems.
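To suggest what “fail safe” behavior and near-miss surveillance could look like in practice, the following minimal sketch wraps a decision-support call so that failed or low-confidence runs defer to human judgment and are logged for post-implementation review. The advise() interface, the confidence threshold, and the logging scheme are hypothetical assumptions, not requirements drawn from the cited sources.

```python
import logging

# Post-implementation surveillance log: every deferral or failure is
# recorded so that near misses can feed continuous product improvement.
logging.basicConfig(level=logging.INFO)
surveillance_log = logging.getLogger("dss.safety")

CONFIDENCE_FLOOR = 0.80  # illustrative threshold, not a published standard

DEFER = "DEFER: refer the case to a qualified professional"


def safe_advise(engine, case):
    """Return the engine's advice only when it is confident; otherwise
    fail safe by deferring to human judgment and logging the event."""
    try:
        advice, confidence = engine.advise(case)  # hypothetical interface
    except Exception as exc:
        # Unpredictable runtime error: fail safe rather than guess.
        surveillance_log.error("engine failure on case %s: %s", case.get("id"), exc)
        return DEFER

    if confidence < CONFIDENCE_FLOOR:
        # Near miss: the system almost issued weakly supported advice.
        surveillance_log.warning("low confidence %.2f on case %s; advice withheld",
                                 confidence, case.get("id"))
        return DEFER

    return advice
```

The design choice here is that the wrapper never guesses on the producer's behalf: any condition outside the tested envelope produces an explicit deferral plus a log entry, which is what makes the surveillance record auditable during a pre-deployment safety review.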
Safety-management systems have been vital contributors to the health and well-being of working populations.16–18 The pervasive integration of IT into work environments represents an opportunity to strengthen the protection that safety-management systems provide. The constancy of change in the workplace requires that safety-management systems be dynamic and agile in order to maintain their positive impact on worker health and performance.
3. International Labour Office. Guidelines on Occupational Safety and Health Management Systems (ILO-OSH 2001). 2nd ed. Geneva, Switzerland: International Labour Office; 2009.
6. Fox J, Das S. Safe and Sound: Artificial Intelligence in Hazardous Applications. Menlo Park, CA: American Association for Artificial Intelligence; 2000.
7. Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19(suppl 3):i68–i74.
8. Wallace C, Zimmer KP, Possanza L, Giannini R, Solomon R. How to Identify and Address Unsafe Conditions Associated with Health IT. Washington, DC: Emergency Care Research Institute; prepared for the Office of the National Coordinator for Health Information Technology; 2013.
9. Graber ML, Johnston D, Bailey R. Report of the Evidence on Health IT Safety and Interventions. Washington, DC: RTI International; prepared for the US Department of Health and Human Services; 2016.
10. Borycki E, Kushniruk A, Nohr C, et al. Usability methods for ensuring health information technology safety: evidence-based approaches. Contribution of the IMIA Working Group Health Informatics for Patient Safety. Yearb Med Inform 2013.
13. Mumford E. The story of socio-technical design: reflections on its successes, failures and potential. Info Syst J 2006;16:317–342.
15. Shortliffe EH, Sepulveda MJ. Clinical decision support in the era of artificial intelligence. JAMA 2018;320:2199–2200.
17. Robson LS, Clarke JA, Cullen K, et al. The effectiveness of occupational health and safety management system interventions: a systematic review. Saf Sci 2007;45:329–353.