Integrating Information Technologies Safety Into Occupational and Environmental Safety Management Systems

Sepúlveda, Martín-José, MD, ScD

Journal of Occupational and Environmental Medicine: June 2019 - Volume 61 - Issue 6 - p e297–e299
doi: 10.1097/JOM.0000000000001572
EDITORIAL

Objective: To discuss the need for the inclusion of decision-support software safety in workplace safety management systems.

Methods: Review of software safety systems, the sociotechnical model of information technologies (IT) safety, and sources of decision-support software safety risk.

Results: Not applicable to a commentary viewpoint article.

Conclusions: There are no regulatory safety standards for decision-support software as there are for software that is deemed a medical device. Establishing the safety of this software therefore becomes the responsibility of the industry and the users. Occupational and environmental professionals can help close this gap by applying safety management system processes to these digital tools.

CLARALUZ LLC, St. Augustine, Florida; IBM Corporation, Armonk, New York (Dr Sepúlveda).

Address correspondence to: Martín-José Sepúlveda, MD, ScD, 412 Plantation Grove LN, St. Augustine, FL 32086 (msepulv119@earthlink.net).

Clinical significance: OEM professionals can help ensure the safety of computer-based decision-support technology that is unregulated. This can be accomplished by adding this technology to existing safety management systems, requesting that producers provide evidence of product safety, and applying safety management system processes to these digital tools.

This work was not supported by or associated with any organization.

Funding: None.

Conflict of Interest: None declared.

This commentary's objectives are twofold. The first is to create awareness among health and safety professionals of potential safety issues in computer software systems, and the second is to provide recommendations for action by this community that may help mitigate some of these information technology-related risks. The narrative will review the concepts and principles of software product safety and related safety management systems, potential risks in computer software technology, and recommended actions for safety professionals. The discussion will focus on the subset of information technology systems created to provide decision support to end users, such as artificial intelligence (AI) tools, that is, software systems that simulate human cognitive functions (eg, information processing, memory, reasoning, and learning) to produce advice.

BACKGROUND

Occupational and environmental health professionals routinely deploy and assess the performance of safety-management systems designed to assure worker comfort, ease of equipment or device use, freedom from hazardous exposures, and prevention of harm to health. Traditionally, these systems have addressed subsystems or components of the physical, chemical, biologic, ergonomic, and organizational environment. However, rapid advances and large-scale integration of information technologies (IT) into workplaces, for example, sensors, mobile systems, massive data processing, and cloud technologies, are now continuously reshaping both the nature of work and the workplace itself. Workplace IT deployment has been pervasive, with applications in domains ranging from training, personal protective equipment, construction and fire life safety, to environmental surveillance, wellness and health promotion, navigation, transportation, and security.1,2 This development has created the need to include an information technology safety subsystem in occupational and environmental medicine (OEM) safety-management systems.

SAFETY-MANAGEMENT SYSTEMS

Safety-management systems in organizations are designed to identify and mitigate avoidable hazards, to manage unavoidable hazards that are known, and to monitor and correct instances of potential (near misses) or actual harm continuously. These systems contain five components: policy, organizing, planning and implementation, evaluation, and continual improvement.3 They are based on the principle that hazards and harm can arise out of complex interactions involving people, agents (eg, physical, chemical, biologic, digital technologies), vectors (eg, manufacturers, distributors, vendors, information systems), and internal and external environments (eg, social, organizational, economic).4
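
For readers who build or audit such systems, the sketch below renders the interaction principle as a minimal hazard-registry entry in Python. It is an illustration only, under the assumption of a hypothetical registry: the class, field, and function names are the author's own for this example and are not drawn from any cited standard.

    from dataclasses import dataclass, field
    from typing import List

    # The five safety-management system components cited above (ILO-OSH).
    SMS_COMPONENTS = (
        "policy",
        "organizing",
        "planning and implementation",
        "evaluation",
        "continual improvement",
    )

    @dataclass
    class HazardRecord:
        """One entry in a hypothetical hazard registry, structured along the
        people/agent/vector/environment interaction model."""
        description: str
        agent: str         # eg, "digital: decision-support software"
        vector: str        # eg, "software vendor"
        environment: str   # eg, "organizational: time-pressured workflow"
        avoidable: bool = False
        mitigations: List[str] = field(default_factory=list)

    def unmitigated(registry: List[HazardRecord]) -> List[HazardRecord]:
        """Flag known hazards with no documented mitigation, a simple input
        to the evaluation and continual-improvement components."""
        return [h for h in registry if not h.mitigations]

A registry query of this kind feeds the monitoring-and-correction loop described above: hazards that remain unmitigated become inputs to planning and continual improvement.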

SOCIOTECHNICAL MODEL OF IT SAFETY AND SOURCES OF IT-RELATED SAFETY RISK

Concepts of worker safety and safety-management systems have parallel roots in the IT software-safety community, including safety of decision-support systems.5,6 As in workplace safety, the potential for software systems-related adverse events, near misses, and unsafe conditions arises from a complex set of interactions among the users, the technology (eg, hardware, software, knowledge base, usability, and workflow integration), and the socioenvironmental context in which use occurs, that is, the sociotechnical model of IT safety.7,8 Figure 1, for example, illustrates the application of the sociotechnical model of IT safety to the healthcare delivery workplace.8

FIGURE 1. The sociotechnical model of IT safety applied to the healthcare delivery workplace.

Technology-Based Safety Risk

Technology-based risk in the model is related to the quality (precision, accuracy, reproducibility, validity), reliability, availability, and usability of the application or system. Inadequate performance in any of these areas creates the potential for adverse outcomes. In decision-support systems, high levels of performance in each of these areas are essential for the support system's component parts, that is, the knowledge base, the subject- or environment-specific data inputs, and the reasoning engine. Examples of data quality issues include incomplete, untimely, or inaccurate data, loss of data integrity during transmission, and errors in translation between systems. These can result from a lack of interoperability between data repositories and IT systems.9
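
As an illustration of the data-quality dimension, the sketch below screens a single inbound record for the three problem classes just noted: completeness, timeliness, and integrity in transmission. The field names, staleness threshold, and checksum scheme are hypothetical assumptions for the example, not a prescribed design.

    import hashlib
    from datetime import datetime, timedelta, timezone

    # Hypothetical field names and staleness threshold, for illustration only.
    REQUIRED_FIELDS = ("subject_id", "measurement", "recorded_at")
    MAX_STALENESS = timedelta(hours=24)

    def data_quality_issues(record: dict, payload: bytes,
                            expected_sha256: str) -> list:
        """Screen one inbound record for completeness (missing fields),
        timeliness (stale data), and integrity (checksum mismatch after
        transmission). 'recorded_at' is assumed to be a timezone-aware
        datetime."""
        issues = []
        for name in REQUIRED_FIELDS:
            if record.get(name) in (None, ""):
                issues.append(f"incomplete: missing field '{name}'")
        recorded_at = record.get("recorded_at")
        if recorded_at and datetime.now(timezone.utc) - recorded_at > MAX_STALENESS:
            issues.append("untimely: older than the staleness threshold")
        if hashlib.sha256(payload).hexdigest() != expected_sha256:
            issues.append("integrity: checksum mismatch during transmission")
        return issues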

The knowledge base of a decision-support system can contribute to safety risk if it lacks relevance, completeness, currency, or consistency. Similarly, deficiencies in the decision-support system's technologies that identify, retrieve, process, and represent subject- or environment-specific data can also create safety risks. In addition, reasoning engines that comprise logic, data models, and algorithms must also have quality and reliability attributes that minimize error and risk of harm. Finally, although less often considered, the usability of a decision-support system and the degree to which it is effectively integrated into end-user workflow can affect the technology's safety risk.10 Human interactions with decision-support software that is difficult to use or learn, or that is too slow for time-sensitive work, may cause errors affecting the quality or appropriateness of the technology tool's outputs. Workflow disruptions caused by poorly designed or poorly executed software implementations can have similar consequences by adversely affecting a user's interactions with the system.
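
Analogous screening logic can be applied to the knowledge base itself. The sketch below flags entries that fail simple currency and consistency checks; the rule structure and the one-year review interval are illustrative assumptions, not a recommendation from the cited literature.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import List, Tuple

    @dataclass
    class KnowledgeRule:
        """A hypothetical knowledge-base entry: a condition mapped to advice."""
        condition: str
        advice: str
        last_reviewed: date

    def stale_rules(rules: List[KnowledgeRule],
                    max_age_days: int = 365) -> List[KnowledgeRule]:
        """Currency check: entries not reviewed within the review interval."""
        cutoff = date.today() - timedelta(days=max_age_days)
        return [r for r in rules if r.last_reviewed < cutoff]

    def conflicting_rules(rules: List[KnowledgeRule]
                          ) -> List[Tuple[KnowledgeRule, KnowledgeRule]]:
        """Consistency check: the same condition mapped to different advice."""
        first_seen = {}
        conflicts = []
        for r in rules:
            prior = first_seen.setdefault(r.condition, r)
            if prior is not r and prior.advice != r.advice:
                conflicts.append((prior, r))
        return conflicts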

User-Based Safety Risks

User-based sources of risk for harm arise from variability in cognitive capacities such as level of knowledge, training, experience, memory, and reasoning ability. Users are also subject to cognitive biases, such as thinking driven by excessive emphasis on recent events (availability bias) or prior expectations (ascertainment bias), locking on salient features too early in the decision process (anchoring bias), or resisting new evidence or knowledge because it is counter to existing norms or beliefs (Semmelweis reflex).11 Users may also commit decision errors as a result of interacting with a computer-based advisory system. These may include acting as directed by the cognitive or AI system irrespective of the correctness of the suggested action (automation bias), ignoring or dismissing the system's advice because it comes from an automated system (anti-automation bias), or failing to reconsider decisions because the system did not prompt them to do so (error of system omission).12 Finally, the manual abstraction and entry of data into a decision-support system are potential sources of user error that can adversely impact the quality of the system's output.

Socioenvironmental Context and Safety Risk

The socioenvironmental context of IT use is the third major focus of an IT safety management system because user-IT interfaces are affected by social, physical, and organizational factors operating in place and time.13 Social aspects may include autonomy in decision-making, needs for collaboration or feedback from other individuals, and teaming for task completion. Accessibility, noise, light, air quality, and workstation ergonomics are aspects of the physical environment that can affect the quality of a worker's computer interaction. In addition, numerous organizational factors, such as enterprise values and beliefs (culture), workplace climate, leadership, and performance evaluation, can affect decision-making and safety risk from the use of IT decision-support tools.

CONCLUSION AND RECOMMENDATIONS

There are no regulatory standards requiring proof of product safety for decision-support systems that are not deemed medical devices by the US Food and Drug Administration.14 Nonetheless, there are potential safety risks in these non-regulated software systems that producers and OEM professionals can address. Computer-based decision-support producers can establish the safety of their software tools by providing evidence that safety has been integrated into the software product's life cycle, extending from product design to product decommissioning. This would create a “safety case” for the decision-support tool composed of documented processes and outcomes that show how the company addressed: (1) predictable errors in component technologies such as natural language processing for data abstraction or knowledge base maintenance, (2) procedures for dealing with unpredictable errors arising during use, and (3) post-implementation surveillance mechanisms to identify problems and near misses for continuous product improvement.6,15 OEM professionals can help catalyze these actions by producers by requesting that these “safety cases” be provided pre-deployment for assessing: (1) the product's testing for safety and its ability to “fail safe,” (2) the product's monitoring system for near misses and errors, and (3) recommended processes for handling errors, reporting, and remediation. This pre-deployment safety review process is a standard component of safety management systems for traditional sources of workplace safety risk and can form the basis for incorporating IT safety into this system.
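
One way to operationalize such a pre-deployment review is to track the producer's “safety case” evidence and the reviewer's assessments as an explicit checklist. The sketch below is a minimal, hypothetical illustration of that record-keeping; the item wording paraphrases the elements listed above, and the structure is the author's example rather than an established instrument.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # Item wording paraphrases the producer evidence and reviewer
    # assessments discussed above; the checklist structure is illustrative.
    PRODUCER_EVIDENCE = (
        "predictable component-technology errors addressed",
        "procedures for unpredictable errors arising during use",
        "post-implementation surveillance for problems and near misses",
    )
    REVIEWER_ASSESSMENTS = (
        "safety testing and ability to fail safe",
        "monitoring system for near misses and errors",
        "processes for error handling, reporting, and remediation",
    )

    @dataclass
    class SafetyCaseReview:
        product: str
        evidence_provided: Dict[str, bool] = field(default_factory=dict)
        assessments_passed: Dict[str, bool] = field(default_factory=dict)

        def open_items(self) -> List[str]:
            """Items still blocking pre-deployment approval."""
            missing = [i for i in PRODUCER_EVIDENCE
                       if not self.evidence_provided.get(i)]
            failed = [i for i in REVIEWER_ASSESSMENTS
                      if not self.assessments_passed.get(i)]
            return missing + failed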

Safety management systems have been vital contributors to the health and well-being of working populations.16–18 The pervasive integration of information technologies in work environments represents an opportunity to strengthen the protection that safety-management systems provide. The constancy of change in the workplace requires that safety-management systems be dynamic and agile in order to maintain their positive impact on worker health and performance.

REFERENCES

1. Pribanic E. 5 Modern ways of using technology to improve safety in the workplace (Tech Funnel web site). March 12, 2018. Available at: https://www.techfunnel.com/hr-tech/modern-ways-of-using-technology-to-improve-safety-in-the-workplace/. Accessed November 30, 2018.
2. Bharadwaj R. AI applications in construction and building-current use-cases [Techemergence web site]; 2018. Available at: https://www.techemergence.com/ai-applications-construction-building/. Accessed November 30, 2018.
3. International Labour Office. Guidelines on Occupational Safety and Health Management Systems (ILO-OSH 2001). 2nd ed. Geneva, Switzerland: International Labour Office; 2009.
4. Penn State University. STAT 507. Epidemiologic research methods [Penn State University web site]; 2018. Available at: https://onlinecourses.science.psu.edu/stat507/node/25. Accessed November 30, 2018.
5. Alberico D, Bozarth J, Brown M, et al. Software systems safety handbook. A technical and managerial team approach. [Joint Software Systems Safety Committee web site]; 1999. Available at: https://www.system-safety.org/Documents/Software_System_Safety_Handbook.pdf. Accessed November 30, 2018.
6. Fox J, Das S. Safe and Sound. Artificial Intelligence in Hazardous Applications. Menlo Park, CA: American Association for Artificial Intelligence; 2000.
7. Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010; 19 (suppl 3):i68–i74.
8. Wallace C, Zimmer KP, Possanza L, Giannini R, Solomon R. How to Identify and Address Unsafe Conditions Associated with Health IT. Washington, DC: Emergency Care Research Institute. Prepared for: ONCHIT; 2013.
9. Graber ML, Johnston D, Bailey R. Report of the Evidence on Health IT Safety and Interventions. Washington, DC: RTI International. Prepared for the US Department of Health and Human Services; 2016.
10. Borycki E, Kushniruk A, Nohr C, et al. Usability methods for ensuring health information technology safety: evidence-based approaches. Contribution of the IMIA Working Group Health Informatics for Patient Safety. IMIA Yearbook of Medical Informatics 2013:20–27.
11. Croskerry P. Cognitive and affective biases in medicine. Critical thinking program. [Dalhousie University web site]; 2013. Available at: http://sjrhem.ca/wp-content/uploads/2015/11/CriticaThinking-Listof50-biases.pdf. Accessed November 30, 2018.
12. Cummings ML. Automation bias in intelligent time critical decision support systems [American Institute of Aeronautics and Astronautics web site]. Available at: https://web.archive.org/web/20141101113133/http://web.mit.edu/aeroastro/labs/halab/papers/CummingsAIAAbias.pdf. Accessed November 30, 2018.
13. Mumford E. The story of socio-technical design: reflections on its successes, failures and potential. Info Syst J 2006; 16:317–342.
14. Food and Drug Administration. Clinical and Patient Decision Support Software Draft Guidance for Industry and Food and Drug Administration Staff. Available at: https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM587819.pdf. Accessed November 30, 2018.
15. Shortliffe EH, Sepulveda MJ. Clinical decision support in the era of artificial intelligence. JAMA 2018; 320:2199–2200.
16. International Labour Organization. Occupational safety and health management system. [ILO web site]; 2016. Available at: https://www.ilo.org/wcmsp5/grous/public/---africa/---ro-addis_ababa/---sro-cairo/documents/publication/wcms_622420.pdf. Accessed November 30, 2018.
17. Robson LS, Clarke JA, Cullen K, et al. The effectiveness of occupational health and safety management system interventions: a systematic review. Saf Sci 2006; 45:329–353.
18. National Academies of Sciences, Engineering, and Medicine. A Smarter National Surveillance System for Occupational Safety and Health in the 21st Century. Washington, DC: The National Academies Press; 2018. Available at: https://www.nap.edu/catalog/24835/a-smarter-national-surveillance-system-for-occupational-safety-and-health-in-the-21st-century. Accessed November 30, 2018.
Keywords:

artificial intelligence; decision support; information technologies safety; safety management system; software safety; system

Copyright © 2019 by the American College of Occupational and Environmental Medicine