Running head: GRAPHICAL USER INTERFACE DESIGN FOR A PATIENT MONITORING

Graphical User Interface Design for a Patient Monitoring Device in an Intensive Care Setting: Implications of Learning

David Marc
University of Minnesota - Twin Cities
Health Informatics

Project Summary

The ICU demands that clinicians make fast and accurate decisions. Current patient monitoring devices are not conducive to this environment because the technology is difficult to learn how to use.
Current devices have been attributed to an increase in medical errors due to their obtrusive and cognitively demanding functionalities. Evaluation of schema theory and cognitive load theory provides a foundation for designing and evaluating the usability and learnability of patient monitoring graphical user interfaces (GUIs). By identifying the schemas of clinicians with varying degrees of clinical knowledge and experience, one may be able to design a GUI that is optimized for the user. The GUI must minimize cognitive load for all users, including those that have minimal computer skills. If such a GUI is designed for patient monitoring devices, the knowledge barrier to learning how to use the technology may be lowered, thereby supporting the diffusion of the device across an organization. In addition, if the GUI is easy to learn, users may find that the technology is invisible to their routine and are therefore able to spend more time caring for patients. The purpose of this proposal is to evaluate newly designed GUIs for patient monitoring devices as they relate to user performance. The proposed study will examine the efficiency, accuracy, and cognitive load of users with varying levels of computer competency as they use GUIs that were designed from different mental schemas. The ultimate goal of a patient monitoring device is to supplement medical professionals' tasks while they care for the patient so that critical events can be detected early and resolved before an injury occurs. Although we are far from incorporating an error-proof and efficient patient monitoring GUI that meets the needs of all users, careful design and experimentation may prove invaluable for meeting such a goal.

Project Description

Rationale

The intensive care unit (ICU) is a fast-paced, high-risk, high-stress environment where large amounts of various types of information are needed by medical staff for making clinical decisions.
In this setting, the interactions between people and medical devices, specifically patient monitors, are paramount for the efficiency and efficacy of tasks. Patient monitors were first introduced as a way to supervise patients in an automated, efficient, and accurate fashion (Malhotra, 2005). These devices were designed to supplement medical professionals' tasks while they care for patients (Malhotra, 2005). Specifically, a goal of monitoring devices is to detect critical events early so they can be resolved before an injury occurs (Eichhorn, 1989). However, research has demonstrated that approximately 67-90% of alarms generated by monitoring devices are false positives, leaving it up to the clinician to determine the appropriate clinical action (Cropp, 1994; Meredith & Edworthy, 1995). The efficacy of patient monitors is largely dependent on the actions of the clinicians. If a clinician fails to react appropriately to an alarm, the possibility of introducing a medical error increases. Research has demonstrated that inappropriate decisions made by clinicians through interactions with monitoring devices are a contributing factor to medical errors (Malhotra, 2005). Current intensive care unit (ICU) monitoring devices provide discrete data points and discrete alarms that alert clinicians when a parameter is outside a determined range. The cognitive demand placed on clinicians in quickly processing such information so they can act accordingly is not conducive to an error-free setting. Investigations into the failure of monitoring displays have demonstrated that the usability of the devices contributes to the increased cognitive demands (Drews, 2008). Interestingly, research has begun to explore the integration of graphical displays that would enhance the ability to process the information accurately and quickly (Görges et al., 2011; Effken, 2006; Effken et al., 2008).
Yet, much of this research has failed to examine the implications of learning on the design of the graphical user interface (GUI) to maximize usability while minimizing the demands for training. The lack of effective GUI design has partly been a result of failing to incorporate adequate knowledge of the cognitive processes and working practices of the eventual users. Specifically, users have expressed major concerns regarding the difficulty of learning how to use medical devices, especially within their current workflow (Terry et al., 2008). When considering the ICU, the usability of an information display is crucial. The typical GUI for patient monitoring in an ICU has the purpose of displaying a patient's physiological parameters (Figure 1). It is typically the responsibility of the nurse or physician to check the monitors on a regular basis to ensure the patient is stable. As shown in Figure 1, the physiologic parameters most commonly used in an ICU setting include blood pressure (BP), oxygen saturation of the blood, heart rate, temperature, electrocardiogram (ECG), and respiratory rate. Critically ill patients may also require hemodynamic monitoring using a pulmonary artery (PA) catheter, which measures central venous pressure, right atrial pressure, PA pressure, and cardiac output (Drews, 2008). Clinicians must integrate all of these rapidly changing physiologic parameters to develop a clear and qualitative mental representation of a patient's current state. In cases of unexpected, potentially life-threatening events, the cognitive demands increase as clinicians are required to interpret new data for problem detection and rapid intervention. Because of the high cognitive demand of data integration, there are reduced available cognitive resources for other important tasks such as taking corrective actions, documentation, and communicating with physicians and/or other nurses.
In situations with considerable interruptions to the task at hand, errors and deviations from the necessary treatment plans can arise (Rivera-Rodriguez & Karsh, 2010). A display for monitoring a patient's physiology where staff can cognitively process changes in information rapidly and easily may avoid such problems (Agutter et al., 2003). The monitoring displays must be optimized for the task at hand and the user so the displays can act as cognitive aids rather than a hindrance. The design process of a GUI is typically engineering-centric rather than user-centric. Displays that are developed in high-risk fields such as aviation and power plant control are often designed for monitoring purposes. These monitoring systems often utilize a single-sensor-single-indicator (SSSI) approach, where a single indicator is controlled by an individual sensor (Brock, 1996). For instance, if a sensor determines the fuel level is low on an airplane, an indicator might alert the pilot. In healthcare, monitoring tasks target natural systems, such as a patient, where the specific task can be highly dynamic. An engineering-centered display that utilizes the SSSI approach tends to yield data in a sequential, fragmented form that makes it difficult and time-consuming for clinicians to develop a coherent understanding of the relationships and underlying mechanisms of the displayed parameters (Drews, 2008). Despite these limitations, the patient monitors in the typical ICU adopt an SSSI approach for displaying information (Figure 1). It is likely that the SSSI design does not support the cognitive processes clinicians need to efficiently manage the information while also caring for patients.

Figure 1. G3L multi-parameter ICU patient monitor.

Research has also explored GUI design methods based on the needs of the intended users.
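The SSSI pattern described above can be made concrete with a minimal sketch: each parameter is checked against its own limits in isolation, with no integration across parameters. The parameter names and limits below are illustrative assumptions, not clinical values.

```python
# Minimal sketch of the single-sensor-single-indicator (SSSI) pattern:
# every sensor independently drives its own discrete alarm, with no
# attempt to relate parameters to one another. Limits are hypothetical.

ALARM_LIMITS = {
    "heart_rate": (50, 120),   # beats per minute (illustrative limits)
    "spo2": (90, 100),         # % oxygen saturation
    "resp_rate": (8, 25),      # breaths per minute
}

def sssi_alarms(readings):
    """Return one discrete alarm per out-of-range parameter."""
    alarms = []
    for param, value in readings.items():
        low, high = ALARM_LIMITS[param]
        if not (low <= value <= high):
            alarms.append(f"{param} out of range: {value}")
    return alarms
```

Because each alarm reflects only a single sensor, the clinician is left to reconstruct the relationships among parameters themselves, which is precisely the fragmentation the SSSI critique points to.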
Because clinicians are forced to examine past and present individual physiological parameters to identify any inconsistencies between the patient's history and current status, clinicians have stated that a graphic representation of data over time would be best for displaying trend information (Drews, 2008). In one study, nurses identified a mismatch between the information provided on a monitor to guide them and the knowledge they needed to have previously acquired in order to navigate successfully through a menu (Drews, 2008). This study suggests that the cognitive demand of not only processing fragmented information about the patient but also bearing the burden of learning how to use the monitor has great implications for the usability of patient monitors. Current engineering-centric design processes fail to account for the user's cognitive capacity to process information, which is needed to ensure efficiency and accuracy of clinical tasks. The goal when learning a new technology is that at some point the user is confident enough in their skills that the technology is virtually invisible to the user. This way, the technology becomes mere background to the relevant task at hand. Unfortunately, in the healthcare domain, the implications of not adequately learning how to use technology can quickly result in dire consequences, such as medical errors. Also, if the technology is designed in such a way that a user has difficulty reaching a learning stage where the technology is virtually invisible to their routine, the technology will persistently be intrusive and user acceptance will wane. In an intensive care setting where attention to the patient is essential, any distraction can potentially be life threatening for a patient (Rivera-Rodriguez & Karsh, 2010).
If the usability and learnability of patient monitoring medical devices improve, there may be a positive impact on efficiency and accuracy of use as well as user acceptance (Drews, 2008). Researchers have developed the diffusion, adoption, and acceptance theories to explain how people adopt, accept, and use complex organizational technologies (Rogers, 2003). The knowledge-barrier institutional-network approach to explaining the diffusion of technology in an organization may help shed light on the current usability issues in the United States. Attewell (1992) introduced the concept of learning how to use a technology and the diffusion of that technology across an institution. That is, the assimilation of complex technology is characterized as a process of organizational learning, wherein individuals and the entire organization acquire the knowledge and skills necessary to effectively apply the technology. The burden of learning a complex technology can create a knowledge barrier that inhibits diffusion. Therefore, institutions must work to lower the knowledge barriers to encourage diffusion of the technology. In many cases, institutions may defer adoption until such knowledge barriers have been sufficiently lowered. Studies have examined the implications of such theories on the adoption of electronic health records (EHRs) and suggest that current external (i.e., standards, pay for performance) and internal (i.e., education, costs) factors may support adoption. One internal factor that was suggested as a target is educating physicians (Ford et al., 2006). For example, Ford and colleagues (2006) suggest that implementing training programs in medical schools that rely on EHRs can serve to accelerate universal EHR adoption. Arguably, this would be supported by designing EHRs that are easier to learn how to use. Interestingly, in the healthcare setting the incorporation of traditional learning theories has largely been ignored in the design of systems.
Although most learning theories were developed for the purposes of explaining textbook instruction, classroom instruction, and one-on-one tutoring, research has generalized these concepts for GUI design. Two such theories can be applied to GUI design: schema theory and cognitive load theory. The earliest developments of schema theory emerged with the Gestalt psychologists and Piaget, but the theory was formally recognized and defined by Bartlett in 1932. However, during the behaviorist era, Bartlett's work was largely ignored. It wasn't until 1967, when Ulric Neisser's influential book "Cognitive Psychology" revitalized the theory, that the use of schema theory in psychology grew and proliferated into other disciplines, notably the cognitive and computational sciences. Schemas can be defined as ways of viewing the world, that is to say, mental representations of general categories of objects, events, or people (Berstein, Roy, Skrull, & Wickens, 1991). An example of a schema for "drinking with a cup" is composed of the cognitive organization of: learning to see a shape, recognizing it as a cup, grasping the cup, opening your mouth, bringing the cup to the mouth, tipping the cup up, and swallowing the contents of the cup. Piaget proposed that learning is the result of forming new schemas and building upon previous schemas. Piaget (1964) proposed that two processes guide learning: (1) the organization of schemas, and (2) the adaptation of schemas. The adaptation of schemas can be further explained as the assimilation of new information into existing schemas and the flexibility of current schemas in accommodating new information. Similarly, in a series of experiments, Bartlett (1932) demonstrated that the information individuals retain is neither fixed nor immutable but rather changes as our schemas evolve with our experience of the world.
Therefore, when considering the implications of schema theory for learning how to use technology, novice versus experienced users of technology may learn differently depending on their previously defined schemas. Shapiro (1999) examined the relationship between prior knowledge and interactive overviews (a method of organization) during hypermedia-aided learning in users with varying experience levels. They found that novices benefited more from organization than did users with prior knowledge of the subject matter. Importantly, the novices required information about the semantic relations between ideas and relied heavily on tools and the GUI to help them find meaning in the information. When considering how these results relate to schema theory, it is evident that novice users learn best when pre-organized schemas are presented. The question remains, however, as to who should originally organize these schemas. In an effort to answer this question, McNamara (1995) compared two groups of math learners. One group simply read math problems and read the solutions, while another group read math problems and worked out, or generated, the solutions. McNamara (1995) found that low-prior-knowledge and average-prior-knowledge students benefited most when they generated the solutions, rather than simply reading them. They explained their observations by suggesting that learners are usually better at retaining information they generated themselves compared to retaining information that was generated for them. In contrast, Larkin and Simon (1987) proposed that instructor-generated schema building would be better for learning in that it would make learning more efficient and less time consuming, which would expedite cognitive processing. Therefore, there isn't a consensus on which schemas should be used for developing educational information.
Although this is conjecture, if these concepts are generalized to GUI design, the previous knowledge and experiences of users may be related to user performance. A user might perform best when the GUI was designed using the knowledge of users that have similar schemas. This way, the user could easily adapt to the GUI because it is aligned with already established mental representations of the information. Interestingly, this has never been examined experimentally. The design process of GUIs for healthcare typically utilizes expert knowledge, which might put novice users at a disadvantage in learning the technology (Effken, 2006; Effken et al., 2008). Therefore, it would be interesting to examine user performance with GUIs designed from expert schemas compared to novice schemas. The second learning theory, cognitive load theory, has also been considered in the literature related to GUI design. Cognitive load refers to the amount of information processing expected of the user. It is predicted that the less cognitive load a user carries, the more easily they should learn. Research conducted by Sweller (1988) explored the relationship between cognitive load and learning for developing educational materials. The number of elements intended to be learned and the interactions between these elements can contribute to increased cognitive load and act as a hindrance to learning (Sweller, 1988). In later research, Sweller and colleagues (1994) suggested that the interactivity of elements can increase cognitive load more than the number of elements. In healthcare, physicians must store large amounts of information about the patient and understand interactions between various clinical events, such as diagnoses, medications, and laboratory data. It is evident that the environment contributes to an increase in cognitive load. In fact, increases in cognitive load in a healthcare setting have been attributed to providing poorer care (Burgess, 2010).
However, since the development and implementation of certain technologies, such as EHRs, physicians have reported a decrease in cognitive load (Shachak et al., 2009). In particular, EHRs prevent clinicians from having to recall excessive amounts of information because patient data is readily available and readable. In addition, EHR information supports clinical reasoning better than paper records due to improvements in readability and the implementation of decision support aids (Shachak et al., 2009). In research related to computer use, the degree of cognitive load and the perception of usability have been shown to depend on the experience of the users (Rozell & Gardner, 2000). Users with little experience using computers have displayed high levels of anxiety, which has been attributed to decreases in performance (Johnson & White, 1980). In a study that examined student performance on a computerized aptitude test, users with more computer experience performed better than users with less computer experience (Lee, 1986). When considering the healthcare setting, the past experience of the users should be considered in designing a GUI in order to minimize the cognitive load for users. Suggestively, prior EHR experience, computer aptitude, and user attitudes may be factors related to the learnability and usability of the GUI. Interestingly, the relationship between the GUI, computer aptitude, and performance in an ICU setting has not been researched in the past. Past studies have demonstrated that older computer users have lower performance (i.e., time to complete a task) in basic computing tasks (Riman et al., 2011). Surprisingly, the decrease in performance wasn't a function of experience but was attributed to a decline in mental operations related to visual and auditory acuity (Riman et al., 2011).
Regardless of the cause of low computer aptitude, the relationship between computer skills and user performance as it relates to GUI design is not typically considered. Optimally, a GUI would be designed in such a way that users with little computer experience would still be able to learn how to use the technology quickly and perform accurately.

Patient Monitoring Graphical User Interface Design

Several studies have examined patient monitoring GUI design as it relates to some aspects of usability and learning. Effken (2006, 2008) has conducted several studies that explored clinical display design in an intensive care setting and its relationship with medical errors. Effken (2006) argues that medical errors may arise due to the large numbers of data elements that clinicians must integrate and synthesize to evaluate a patient's status. Furthermore, currently available physiological monitors do not offer the necessary organization or context for reducing cognitive load. In fact, research has shown that clinicians misinterpret data from physiological monitors quite frequently (Andrews & Nolan, 2005). Effken (2006) developed a patient monitor that compiled and synthesized data from several sources using an ecological psychology framework. Ecological psychology is based on the work of Gibson (1986), who stressed the importance of the environment and its interactions with an organism. Gibson (1986) claimed that animals evolved to perceive meaning from complex systems that are essential for survival. He laid the foundation for describing perception as a direct process, which contradicted the contemporary cognitive psychologists' understanding of perception as an indirect process. Cognitive psychologists claim that humans perceive objects as mental representations and interpret the meaning of an object based on previous knowledge that was acquired or learned.
In contrast, ecological psychologists claim that learning and memory are not involved in perception; rather, an animal's senses allow it to directly understand and interact with its environment. Vicente and Rasmussen (1992) adapted Gibson's theory for designing an ecological graphical user interface. The focus of the ecological display is on the work domain or environment, rather than on the end user or a specific task. Therefore, the GUI is designed to work within the constraints of the environment and allow the user to directly perceive the intended actions. Effken (2006) began the GUI design process for the monitoring device by employing a Cognitive Work Analysis (CWA), which focuses on identifying work domain constraints. The constraints can be classified as five different types: structure of the work domain, organizational coordination, worker competencies, potential strategies, and activity within work organization and decisions (Vicente, 1999). The purpose of Cognitive Work Analysis is to identify and map out those constraints so that design efforts may take explicit account of them. Next, the decision-making tasks of expert clinicians were determined using Rasmussen's decision ladder, a process for capturing formative decision-making processes (Rasmussen & Jensen, 1974). From the information gathered, they developed an ecological prototype design, which was validated using a cognitive walkthrough analysis with clinical experts. A cognitive walkthrough is a task analysis where users and developers specify the sequence of steps required to accomplish a task (Nielsen & Mack, 1994). Along the way, any issues are recorded and then compiled, and the system is typically redesigned to address the issues identified. The prototype display is presented in Figure 2. The order of the data elements was determined from the results of the CWA. The display also presented clinically important relationships among data elements.
Figure 2. Screen shot of the prototype ecological display as developed by Effken (2006).

In Effken's (2006) experiment, the ecological prototype display described above was compared to two alternative displays. The first alternative display used bar graphs that were aligned by body system and organized based on current clinical flow sheets (Figure 3a). The second alternative display used bar graphs that were organized based on the results of the CWA. The primary difference between the ecological prototype display and the alternative displays was that the alternatives did not show relationships among the data elements. Twenty novice ICU nurses and 13 medical residents were randomly presented the ecological display and one of the alternative displays, with 5 different patient scenarios randomly selected for each display. Previous computer experience, critical care knowledge, and knowledge of hemodynamic monitoring were assessed prior to viewing the displays. Upon viewing the display, the participants were asked to choose the appropriate treatment based on the physiological parameters and the patient history. An interface was developed where the participants could click on particular treatment buttons to begin or stop a treatment. Based on the treatment the participants provided, the physiological parameters on the display changed accordingly. The investigators measured treatment initiation time and the percentage of time patient variables were kept within a target range. Also, the participants were told to think aloud in order for the investigators to gain insights into the cognitive processes that underlay the participants' decisions. Based on the results of the experiment, the medical residents rated their computer scores slightly higher than the nurses, yet both groups scored similarly in terms of general ICU knowledge. When considering a mixed model effect of performance (i.e.,
time to initiate treatment and percent of time parameters were kept normal), medical residents performed best when using alternative display 2 (A2 > Ecological display > A1), while nurses performed best when using alternative display 1 (A1 > Ecological display = A2). In terms of overall performance, the ecological display did not aid performance. In Effken's (2006) research, the concept of learning was largely undermined. For instance, alternative display 1 offered the best performance for the nurses, and coincidentally this display was organized based on current clinical flow sheets. Based on the concepts of schema theory, one could surmise that the nurses had well-established schemas already organized (i.e., clinical flow sheets) and were therefore able to learn how to use alternative display 1 fastest, because the layout of the display was in line with their cognitive processes. The medical residents' performance was best with alternative display 2. This may have been because the CWA was developed from discussions with expert ICU clinicians. Since the nurses were novices while the medical residents were more experienced, the mental schemas of these two groups may have been drastically different. It is likely that the expert ICU clinicians would have mental schemas similar to the medical residents rather than the novice nurses. Therefore, given that alternative display 2 was organized based on the CWA, the medical residents may have acted fastest and most accurately with this display because it easily adapted to their prior mental schemas. In addition, due to the complexity of the ecological prototype, the nurses and medical residents may have found the display difficult to use because their prior schemas did not coincide with the display. Furthermore, the ecological prototype was designed to show interactions between data elements, which may have increased cognitive load.
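The two performance measures used in Effken's (2006) experiment, treatment initiation time and percent of time parameters were kept within a target range, can be sketched as simple computations over logged data. This is a hypothetical illustration of the measures, not the study's actual analysis code; the inputs are assumed to be timestamps and evenly spaced samples.

```python
# Sketch of the two performance measures described above, computed from
# hypothetical experiment logs (not data from the actual study).

def initiation_time(scenario_start_s, first_treatment_s):
    """Seconds from scenario onset until the first treatment action."""
    return first_treatment_s - scenario_start_s

def percent_time_in_range(samples, low, high):
    """Percent of evenly spaced samples kept within the target range."""
    in_range = sum(1 for v in samples if low <= v <= high)
    return 100.0 * in_range / len(samples)
```

Keeping the measures this explicit makes it easier to see what each display is being credited with: faster first action, and more time with the simulated patient held stable.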
Research suggests that high element interactivity results in high cognitive load, even if the total number of elements is small (Sweller & Chandler, 1994). Lastly, the investigators found that the medical residents had slightly higher scores for computer aptitude yet failed to determine whether this was a confounding factor in their analysis. The performance of the subjects appears to be related to their experiences and their expectations based on what they learned in the past (i.e., schemas). As stated by Effken (2006), the novice participants ignored several of the displayed variables because they were unfamiliar with them. In addition, the more experienced physicians preferred a display (i.e., alternative 2) that was organized based on the CWA results. Interestingly, Effken (2006) demonstrated that the organization of the display was more important for performance than displaying relationships between the data elements. However, the investigators did not explain why performance was different between the two groups of subjects or how the organization of the alternative displays could have led to differences in performance. In addition, the applicability of these findings outside of a laboratory setting is questionable. A major limitation in the experimental design was the lack of comparison to current ICU patient monitors. The investigators evaluated the performance of clinicians using two novel displays. Therefore, it is possible that currently available patient monitoring devices offer superior performance when compared to any of the displays that Effken (2008) developed. These limitations were overcome in a study by Görges and colleagues (2011), in which two novel displays were compared to a current patient monitoring device in an ICU setting. Görges and colleagues (2011) examined the impact of the patient monitor's GUI on the accuracy of clinical decisions and mental workload of nurses in a triage setting.
Görges and colleagues (2011) developed two displays by employing a user-centered design process and compared user performance to a traditional patient monitoring device. One experimental display included a strip-chart (Figure 4a), whereas the other experimental display used a clock-like chart (Figure 4b). The control was a traditional patient monitoring display (Figure 5). The displays include heart rate (HR), mean arterial pressure (MAP), continuous cardiac output (CO), blood oxygen saturation (SpO2), and ventilation minute volume (MV) over a 12-hr period sampled in 2-minute intervals. The darker background color on the graphs indicates non-alarming levels. If a parameter went above the alarm level, the measured value was filled in red, whereas levels below the alarm level were filled in blue. A yellow highlight of the numerical value also indicated an alarming level. In addition, the displays included the status of syringes with the name of the medication and the time left until empty.

Figure 3. Two alternative displays that were used in the Effken (2006) experiment. (a) Bar graphs are aligned by body system and organized based on current clinical flow sheets; (b) bar graphs are ordered based on the results of the CWA.

The study was conducted in the break room of an ICU, where nurses sat at a computer monitor and were shown 20 pairs of patients on all three displays, with the presentation order randomized. They were asked to determine which of the 2 patients required their attention first. Prior to viewing the displays, the participants were offered a training session to familiarize them with the data elements of each display. The time to reach a decision and whether the decision was correct were recorded. The participants also filled out questionnaires to determine task load and user preference. Overall, nurses made decisions fastest with the strip-chart display (strip-chart < clock graph < control).
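The alarm color rule described for these displays reduces to a simple per-value mapping. The following is a hedged sketch of that rule only; the function and color names are assumed for illustration and are not taken from the study's implementation.

```python
# Sketch of the color-coding rule described above for the Görges et al.
# (2011) displays: values above the upper alarm limit fill red, values
# below the lower limit fill blue, and in-range values keep the darker
# non-alarming background. Names are illustrative.

def fill_color(value, low_alarm, high_alarm):
    if value > high_alarm:
        return "red"        # above the upper alarm level
    if value < low_alarm:
        return "blue"       # below the lower alarm level
    return "background"     # non-alarming range
```

Encoding alarm state directly into the graph fill, rather than relying on a separate discrete alarm, is one way such displays let clinicians perceive a parameter's status at a glance.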
When compared to the control display, nurses made decisions 28% faster. The overall accuracy was best with the clock display (clock graph > strip-chart > control). When comparing specific tasks, the strip-chart display was best for identifying stable patients, while the clock display was best for identifying near-empty syringes, even though both displays showed identical syringe icons. Workload scores related to frustration were lower for the two experimental displays when compared to the control display. The majority of nurses preferred the control display (56.2%), followed by the clock display (25.0%) and then the strip-chart (18.8%). All the nurses mentioned that they liked the syringe icon.

Figure 4. The graphical displays that were developed by Görges and colleagues (2011). (a) strip-chart; (b) clock-like graphs.

Figure 5. The traditional patient monitor display that was used in the study by Görges and colleagues (2011).

Interestingly, even though the novice nurses had some previous experience using patient monitors similar to the control display, their performance was lowest with the control display (Görges et al., 2011). The difficulty in designing such an experiment is controlling for the past experiences of users with traditional patient monitoring displays. It would have been interesting to evaluate performance in novice versus experienced nurses. Research suggests that experienced clinicians make decisions subconsciously or through an implicit process, where many tasks are non-reflective when clinicians are in familiar situations (Eva, 2005). The degree of implicit, non-reflective cognitive processing may be directly related to the experience of the physician (Eva, 2005). In a study by Hatala and colleagues (1999), diagnostic accuracy was compared between medical students, residents, and cardiologists after interpreting electrocardiogram (ECG) results. The study demonstrated that the experts (i.e.,
cardiologists > residents > students) identified more correct diagnostic features than the less experienced physicians (Hatala et al., 1999). Interestingly, the effect of introducing a novel GUI, as opposed to a familiar one, on the performance of experienced users has not previously been evaluated. It is unknown whether new technology would impede or facilitate their decision-making process.

Some aspects of the Görges and colleagues (2011) experiment could have been improved. For one, the methods used to design the displays were not made clear. The authors indicated that a user-centered design process was used, but this leaves considerable room for interpretation. Another limitation was that the investigators did not examine the computer aptitude of their users or age-related differences in performance. Furthermore, they lacked evidence for determining why the experimental displays enhanced performance more than the control display. Had they implemented a protocol such as the think-aloud method (see below for more details), they might have been able to determine the perceptual processes that supported the triaging tasks. Lastly, the impact of learning on display performance was underrepresented in the study. The investigators found that even though the nurses had prior experience with the control display, performance was better with the experimental displays. They mentioned that longer training or prolonged use of the experimental displays might have improved performance even more, yet provided no supporting data. It is possible that the observed decreases in user frustration with the experimental displays were related to decreases in cognitive load, thereby contributing to improvements in performance. Whether the improvements observed in inexperienced users would also be observed in more experienced users is unknown.
Therefore, their findings can only be generalized to novice nurses and may not apply to nurses with more experience or to other medical professionals.

In general, previous research on effective GUI design in an ICU setting has largely ignored the implications of learning. First, when a GUI is designed from the mental schemas of experts, it may not fit the mental schemas of less experienced clinicians. However, the efficacy of a display based on the mental schemas of less experienced clinicians is unknown. Piaget (1964) suggested that schemas adapt over time depending on knowledge and experiences. Thus, it is possible that less experienced clinicians would not have enough knowledge to inform an effective GUI design based on their mental schemas. It is also possible that less experienced clinicians' lack of clinical knowledge would prevent them from effectively using a GUI based on expert schemas. Presumably, since schemas are often adapted from contextually similar situations, a GUI designed from less experienced clinicians' mental schemas may offer both experienced users and novices the opportunity to learn a display faster, because they can adapt common knowledge to the functionality of the display. Additionally, limited research has explored the roles of computer aptitude and clinical experience as they relate to cognitive load and performance with a GUI in an ICU setting. Consequently, there is a need for research exploring GUI design processes and user performance as they relate to the mental schemas and cognitive load of clinicians with varying clinical experiences and computer aptitudes.

Research Questions and Hypotheses

Examination of diffusion, adoption, and acceptance theories, along with learning theories, can assist in designing a patient monitoring system for an ICU setting whose GUI is conducive to the cognitive processes and demands of the users.
If the knowledge barrier for using the technology is lowered, this may affect the diffusion of the technology across an organization (Attewell, 1992). If patient monitoring devices are conducive to learning, adoption rates may increase while medical errors are minimized. However, much of this is speculative, and there is a need for research. By reviewing the literature on schema theory, one can surmise that user performance and professional experience might be related to the GUI design. However, the question of whose schemas should guide the GUI design is unanswered. For the purposes of this study, there are two primary aims.

The research question for Aim 1 is related to schema construction and clinical knowledge: (1) Are there differences in user efficiency, accuracy, and cognitive load between physicians with novice and expert knowledge when they use a patient monitoring GUI that was developed from either an expert or a novice mental schema? My null hypothesis is that there is no difference in efficiency, accuracy, and cognitive load between physicians with novice and expert knowledge when they use the different patient monitoring GUIs. My alternative hypothesis is that there is a difference in efficiency, accuracy, and cognitive load between physicians with novice and expert knowledge when they use the different patient monitoring GUIs. Specifically, I predict that efficiency, accuracy, and cognitive load will be best for novices when using a patient monitoring device designed from novice physicians' schemas, while efficiency, accuracy, and cognitive load will be best for more experienced physicians when using a patient monitoring device designed from experienced physicians' schemas.

The research question for Aim 2 concerns the role of cognitive load theory and the relationship between computer aptitude and performance.
(2) Does computer expertise relate to cognitive load, time to complete tasks, and accuracy of tasks when using different patient monitoring GUIs? My null hypothesis is that cognitive load, time to complete tasks, and task accuracy will not differ between users with different levels of computer expertise or between the different patient monitoring GUIs. My alternative hypothesis is that cognitive load, time to complete tasks, and task accuracy will differ between users with different levels of computer expertise or between the different patient monitoring GUIs. I predict that all users, regardless of their computer aptitude, will perform better and have lower cognitive load using the experimental displays than using a traditional control display. Additionally, I predict that users with lower computer aptitude will have significantly worse performance and higher cognitive load when using the control display.

Methodology

Research Design

A combination of research methods will be employed. A prospective observational study will evaluate user schemas for the development of the user interfaces. A prospective experimental study will evaluate the performance of users as they use the different user interfaces.

Subjects

Physicians with varying levels of clinical knowledge and computer aptitude will participate in the study. To be eligible, physicians must have worked in the ICU within the last year. Toth's Basic Knowledge Assessment Tool Version 5 (BKAT-5) will be used to determine whether a physician's clinical knowledge is novice or expert. Computer aptitude will be classified as low or high based on the computer user self-efficacy (CUSE) scale. There will be an equal number of subjects with novice clinical knowledge, expert clinical knowledge, low computer aptitude, and high computer aptitude. Different physicians will be used for the user-centered design process and the experimental procedures.
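As a sketch of how this two-way classification might be operationalized, the snippet below assigns each subject to a clinical-knowledge group and a computer-aptitude group from their BKAT-5 and CUSE scores. The cutoff values are hypothetical placeholders, not validated thresholds; in practice the cut points would come from the instruments' norms or a median split of the sample.

```python
# Hypothetical grouping of study participants by clinical knowledge
# (BKAT-5, scored out of 96 items) and computer aptitude (CUSE, 30 items
# on a 6-point Likert scale). The cutoffs below are illustrative only.

def classify_subject(bkat5_score, cuse_score,
                     bkat5_cutoff=70, cuse_cutoff=90):
    """Return (clinical_knowledge, computer_aptitude) group labels."""
    knowledge = "expert" if bkat5_score >= bkat5_cutoff else "novice"
    aptitude = "high" if cuse_score >= cuse_cutoff else "low"
    return knowledge, aptitude

# Example: a subject scoring 82 on the BKAT-5 and 120 on the CUSE
print(classify_subject(82, 120))  # -> ('expert', 'high')
```

A median split on the recruited sample would guarantee the equal group sizes the design calls for, at the cost of cutoffs that vary from sample to sample.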
Interventions and Observations

An interprofessional team of nurses, physicians, designers, engineers, and human factors experts will follow the ISO 9241-210 (2010) standard user-centered design process to develop two patient monitoring user interfaces based on the needs of novice and expert physicians. ISO 9241-210 outlines a user-centered design process for interactive system development with a focus on enhancing usability (Figure 6). The first step is specifying the context of use: the users and their tasks have to be identified, and the environment in which they perform the tasks has to be described. This information will be acquired by conducting focus groups and interviews with the eventual users, asking them semi-structured questions about barriers to providing care, information needs, workflow processes, and the limitations or benefits of existing patient monitoring systems. Additionally, the "think-aloud method" previously described by Ericsson and Simon (1980) can be used for identifying user needs. The think-aloud method asks users to verbalize their thought process while working through a task. Users are typically isolated in a quiet setting and are audio- and/or video-recorded. The users are asked to perform a specific task and told to "tell what you were thinking as you solved the problem." If a user pauses for longer than a few seconds, they are prompted to "keep thinking aloud." This method provides qualitative insight into the cognitive processes that underlie users' decision-making. Users will perform the think-aloud method while carrying out a clinical task using an existing patient monitoring device (Figure 1). The second phase of the design is to specify the user and organizational requirements, which involves structuring the information collected in the previous step into mental schemas for specific users.
The mental schemas are identified through analysis of the transcripts acquired from the focus groups, interviews, and think-aloud protocol. The transcripts are typically analyzed using three iterative methods: referring phrase analysis, assertional analysis, and script analysis. Fonteyn and colleagues (1993) provide a summary of each of these steps. First, in the referring phrase analysis, investigators label all nouns and noun phrases in each subject's verbal report transcript with the name of a concept of reference. This allows the general concepts that a user was thinking about to be identified. Often, the transcripts are given to more than one reader to determine agreement in coding the concepts. An expert is typically used to determine whether the coding system provides an appropriate and accurate representation of the domain. In the assertional analysis, the investigators attempt to identify relationships between concepts that the users drew on to solve a problem. For instance, if a user says, "If we give the patient potassium, her K should be alright," this could be coded as a causal relationship. Lastly, the goal of script analysis is to identify the reasoning processes the users employed when solving a problem. The script analysis can provide insight into the information that the user primarily focused on, the rationale for their decisions, and their plan for problem resolution. A set of operators is identified from the referring phrase analysis and assertional analysis that describes the common reasoning processes users employed during the problem-solving tasks. For instance, a common operator might be defined as "choose," where users verbalized their choice of a treatment. The organization of concepts across users provides a set of cognitive operators that represent the users' reasoning processes. Ultimately, these operators can be used to explain the mental schemas that a user developed for solving a problem.
For this study, two user interfaces based on two different schemas will be developed: one user interface will be designed using the schemas identified from novice physicians, while the other will be developed using the schemas identified from expert physicians. The GUI design process will be adapted from Jaspers and colleagues (2004), who developed a GUI for pediatric oncologists' computerized patient records. Those investigators used the think-aloud method to gain an understanding of how the pediatric oncologists used paper-based records. From the information gathered, they developed a cognitive model of the oncologists' task behavior. The model was later used to develop a prototype GUI, which was subsequently evaluated using different oncologists. Their methods appeared to be successful, as the oncologists reported that the system was well organized, consistent, easy to learn, fast, and preferred over the paper method.

Figure 6. The User-Centered Design Process

The purpose of Aim 1 is to compare user performance with two different GUIs, designed using either an expert schema or a novice schema. The independent variables include display type (experimental display based on novice schemas, experimental display based on expert schemas, control display), clinical knowledge (novice and expert), and patient case (several different clinical cases). The dependent variables include cognitive load and user performance measurements (time and accuracy). A displays x patient cases x clinical knowledge mixed experimental design will be employed. Prior to experimentation, the GUIs will have been developed using the user-centered design process described above. Prior to viewing the displays, each subject will complete the BKAT-5 questionnaire to assess clinical knowledge and determine whether they should be in the novice or expert group. Additionally, the subjects will complete the CUSE questionnaire to measure computer aptitude.
The experimental procedure includes a comparison of user efficiency and accuracy using the experimental displays (novice schemas vs. expert schemas) and a control display (the traditional patient monitoring device shown in Figure 1). The research subjects will view at least five patient cases of similar difficulty with each display. The order of display presentation and patient cases will be randomly assigned using the Latin square method. After viewing a patient case, the subject will provide a verbal response detailing the treatment they decide to provide that specific patient. Time to act and the accuracy of their actions will be evaluated during the experiment. Following each patient case, users will fill out the NASA-TLX to evaluate the cognitive load of the task for that patient case.

The purpose of Aim 2 is to compare cognitive load and user performance between the previously designed GUIs in subjects with lower versus higher computer aptitude. The independent variables include display type (experimental display based on novice schemas, experimental display based on expert schemas, control display), computer aptitude (low or high), and patient case (several different clinical cases). The dependent variables include cognitive load and user performance measurements (time and accuracy). A displays x patient cases x aptitude mixed experimental design will be utilized. This procedure will use the same GUIs as the previous study. Prior to viewing the displays, the subjects' computer aptitude will be measured using the CUSE questionnaire and their clinical knowledge using the BKAT-5. At least five patient cases of similar difficulty will be viewed on each display. The order of display presentation and patient cases will be randomly assigned using the Latin square method. After viewing a patient case, the subject will provide a verbal response detailing the treatment they decide to provide that specific patient.
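The Latin square counterbalancing used in both procedures can be illustrated with a minimal sketch: each row gives one subject's presentation order, and every display occupies each serial position exactly once across rows. The display labels are placeholders.

```python
# Minimal sketch of Latin-square counterbalancing for presentation
# order. Each row is one subject's ordering of the displays; each
# display appears exactly once in every serial position across rows.
# This is the simple cyclic square; a balanced (Williams) square would
# additionally control first-order carryover effects.
def latin_square(conditions):
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

displays = ["novice-schema GUI", "expert-schema GUI", "control"]
for order in latin_square(displays):
    print(" -> ".join(order))
```

Subjects would be assigned to rows of the square (in randomized batches of three), and the same scheme can be applied to the ordering of patient cases.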
Time to act and the accuracy of the task will be measured. After viewing the displays, the cognitive load of the subjects will be evaluated.

The NASA-TLX (Task Load Index) was developed to assess subcategories of cognitive load, with the original intention of evaluating mental effort in aviation (Hart & Staveland, 1988). Although originally developed for aviation, the NASA-TLX is the most common questionnaire used in research dedicated to user interface design and appears frequently in assessments of the cognitive load of medical professionals (Hart, 2006). The NASA-TLX consists of six subscales that measure different factors associated with completing a task: (1) mental demands, (2) physical demands, (3) temporal demands, (4) performance, (5) effort, and (6) frustration level (Table 2). The NASA-TLX is a two-part evaluation procedure consisting of both weights and ratings. The first part requires subjects to evaluate the contribution of each factor to the workload of the task. There are 15 pair-wise comparisons of the six scales (see Appendix A). Each pair is presented to a subject one by one. Subjects are required to select the scale that they think contributed more to the task workload. For each subject, the number of times each subscale is chosen is tallied and used as a weight. The second part requires subjects to rate the relative importance of each subscale for completion of the tasks using an analog scale broken into 20 equal intervals (see Appendix B). Numeric ratings for each scale element reflect the magnitude of that factor in a given task. The weighted task load rating for each subscale is the weight (as determined from the pair-wise comparisons), a number between zero and five, multiplied by the magnitude of load (as determined by the rated responses for each subscale), a number between zero and 100.
Overall task load for a particular task is determined by summing the subject's weighted task load ratings for that task and dividing by 15. This gives a total NASA-TLX value between zero and 100.

Table 2. Definitions of the NASA-TLX subscales

Subscale (end points): Description
Mental demands (Low/High): How much mental and perceptual activity was required?
Physical demands (Low/High): How much physical activity was required?
Temporal demands (Low/High): How much time pressure occurred?
Performance (Good/Poor): How successful do you think you were in accomplishing the goals of the task?
Effort (Low/High): How hard did you have to work, mentally and physically, to accomplish your level of performance?
Frustration (Low/High): How insecure, discouraged, irritated, and stressed versus secure, content, and relaxed did you feel during the task?

The assessment of computer self-efficacy has shown promise as a method for examining computer aptitude. Self-efficacy has been shown to influence the frequency and success with which individuals use computers (Cassidy & Eachus, 2002). Cassidy and Eachus (2002) developed the computer user self-efficacy (CUSE) scale, which includes 30 items rated on a 6-point Likert scale of agreement/disagreement. The purpose of the scale is to assess general computer user self-efficacy. Test-retest reliability was very good (r = 0.86), and internal consistency was high (α = 0.97). Construct validity was demonstrated by strong positive associations between self-efficacy scores and both computer experience (r = 0.79) and familiarity with software packages (r = 0.75). Therefore, the CUSE scale offers valuable information regarding users' computer self-efficacy and computer aptitude.

Finally, clinical knowledge will be used to determine whether a physician is a novice or an expert.
Rather than determining clinical knowledge from the amount of time the subjects have been in the profession, a more objective method, Toth's Basic Knowledge Assessment Tool Version 5 (BKAT-5), can be used. The BKAT-5 is a 96-item questionnaire that tests subjects' critical care knowledge in the following topics: cardiovascular, monitoring lines, pulmonary, neurology, endocrine, renal, GI/parenteral, and a miscellaneous category (Toth, 1986).

Accuracy is measured as the total number of errors across all of the patient cases. Our proposed studies measure the accuracy of treatment decisions for specific patient cases. Because each physician will evaluate the same patient cases, a standard definition of the correct treatment will be determined for each case by a team of physicians. Any deviation from the predetermined treatment protocol will be marked as an error; this may include errors of omission or commission. Two investigators will independently score the accuracy of the physicians' responses for each patient case, and interrater reliability measurements (see Banerjee et al., 1999) will be carried out at the end of the study so that the accuracy of the observations can be assessed. The time to complete tasks will be determined using an electronic timer. Timing of a task will begin after a subject receives the task instructions and the assigned user interface is fully loaded. The task will be deemed complete after the subject has finished providing a response for their chosen treatment.

Sample Size

A total of 10 clinicians with varying clinical roles (e.g., resident, fellow, attending) will be used in the GUI design phase to evaluate mental schemas. A total of 20 clinicians will be used to evaluate user performance with the different GUIs. Aim 1 will include 5 clinicians with novice clinical knowledge and 10 clinicians with expert clinical knowledge.
Aim 2 will include 5 clinicians with low computer aptitude and 5 with high computer aptitude.

Data Management and Analysis

For Aim 1, an ANCOVA will be used to determine whether the means of user performance (i.e., speed and accuracy) and cognitive load are equal across displays, patient cases, and expertise while controlling for computer aptitude. For Aim 2, an ANCOVA will be used to determine whether the means of user performance and cognitive load are equal across displays, patient cases, and computer aptitude while controlling for clinical knowledge.

Ethical Considerations

This study has been approved by the institutional review board. The risk of any psychological or social harm is minimal. To preserve the anonymity of the research subjects and to ensure confidentiality, all data collected will be kept anonymous. The data will also be stored on a secure, password-protected computer that is accessible only to the principal investigator. Informed consent will be collected from each study participant. Study participants can choose to drop out of the study at any point, and their data will not be used.

Is the research design adequate to provide answers to the research question? It is unethical to expose subjects to research that will have no value. Is the method of selection of research subjects justified? The use of vulnerable subjects as research participants needs special justification. Vulnerable subjects include those in prison, minors, and persons with mental disability. Particularly in international research, it is important to ensure that the population in which the study is conducted will benefit from any potential outcome of the research; the research should not be done for the benefit of another population. Justification is needed for any inducement, financial or otherwise, for participants to be enrolled in the study. Are interventions justified in terms of the risk/benefit ratio?
Risks are not limited to physical harm; psychological and social risks must also be considered. For observations made, have measures been taken to ensure confidentiality?

References

Agutter, J. et al. (2003). Evaluation of graphic cardiovascular display in a high-fidelity simulator. Anesthesia & Analgesia, 97, 1403-1413.

Andrews, F.J., & Nolan, J.P. (2006). Critical care in the emergency department: Monitoring the critically ill patient. Emergency Medicine, 23, 561-564.

Attewell, P. (1992). Technology diffusion and organizational learning: The case of business computing. Organization Science, 3, 1-19.

Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics, 27(1), 3-23.

Bartlett, F.C. (1932). Remembering: A study in experimental and social psychology. Cambridge, England: Cambridge University Press.

Bernstein, D.A., Roy, E., Srull, T.K., & Wickens, C.D. (1991). Psychology. Boston: Houghton Mifflin Company.

Brock (1996). Evaluation of a rankine cycle display for nuclear power plant monitoring and diagnosis. Human Factors, 38, 506-521.

Burgess, D.J. (2010). Are providers more likely to contribute to healthcare disparities under high levels of cognitive load? How features of the healthcare setting may lead to biases in medical decision making. Medical Decision Making, 30, 246-257.

Cassidy, S., & Eachus, P. (2002). Developing the computer user self-efficacy (CUSE) scale: Investigating the relationship between computer self-efficacy, gender, and experience with computers. Journal of Educational Computing Research, 26(2), 133-153.

Cropp, A.J., Woods, L.A., Raney, D., & Bredle, D.L. (1994). Name that tone. The proliferation of alarms in the intensive care unit. Chest, 105, 1217-1220.

Drews, F.A. (2008). In: Henriksen, K., Battles, J.B., & Keyes, M.A. (Eds.),
Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 3: Performance and Tools). Rockville, MD: Agency for Healthcare Research and Quality.

Effken, J.A. (2006). Improving clinical decision making through ecological interfaces. Ecological Psychology, 18(4), 283-318.

Effken, J.A., Loeb, R.G., Kang, Y., & Lin, Z. (2008). Clinical information displays to improve ICU outcomes. International Journal of Medical Informatics, 77, 765-777.

Eichhorn, J.H. (1989). Prevention of intraoperative anesthesia accidents and related severe injury through safety monitoring. Anesthesiology, 70, 572-577.

Ericsson, K., & Simon, H. (1980). Verbal reports as data. Psychological Review, 87(3), 215-251.

Eva, K.W. (2005). What every teacher needs to know about clinical reasoning. Medical Education, 39, 98-106.

Fonteyn, M.E., Kuipers, B., & Grobe, S.J. (1993). A description of think aloud method and protocol analysis. Qualitative Health Research, 3(4), 430-441.

Ford, E.W., Menachemi, N., & Phillips, T. (2006). Predicting the adoption of electronic health records by physicians: When will health care be paperless? JAMIA, 13, 106-112.

Gibson, J.J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum Associates. (Original work published 1979).

Görges, M., Kuck, K., Koch, S.H., Agutter, J., & Westenskow, D.R. (2011). A far-view intensive care unit monitoring display enables faster triage. Dimensions of Critical Care Nursing, 30(4), 206-217.

Hart, S.G. (2006). NASA-Task Load Index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(9), 904-908.

Hart, S.G., & Staveland, L.E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P.A. Hancock & N. Meshkati (Eds.), Human mental workload. Advances in Psychology, 52. Oxford, England: North-Holland.

Hatala, R., Norman, G.R., & Brooks, L.R. (1999).
Impact of a clinical scenario on accuracy of electrocardiogram interpretation. Journal of General Internal Medicine, 14, 126-129.

ISO 9241-210 (2010). Human-centred design process for interactive systems. Geneva: International Organization for Standardization.

Jaspers, M.W.M., Steen, T., van den Bos, C., & Geenen, M. (2004). The think aloud method: A guide to user interface design. International Journal of Medical Informatics, 73, 781-795.

Johnson, D.F., & White, C.B. (1980). Effects of training on computerized test performance in the elderly. Journal of Applied Psychology, 65(3), 357-358.

Larkin, J., & Simon, H.A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11, 65-99.

Lee, J.A. (1986). The effects of past computer experience on computerized aptitude test performance. Educational and Psychological Measurements, 46, 727-733.

Malhotra, S., Laxmisan, A., Keselman, A., Zhang, J., & Patel, V.L. (2005). Designing the design phase of critical care devices: A cognitive approach. Journal of Biomedical Informatics, 38, 34-50.

McNamara, D.A. (1995). Effects of prior knowledge on the generation advantage: Calculators versus calculation to learn simple multiplication. Journal of Educational Psychology, 72(2), 307-318.

Meredith, C., & Edworthy, J. (1995). Are there too many alarms in the intensive care unit? An overview of the problems. Journal of Advanced Nursing, 21, 15-20.

Nielsen, J., & Mack, R.L. (1994). Usability inspection methods. New York, NY: John Wiley & Sons.

Piaget, J. (1962). Play, dreams and imitation in childhood. New York: Norton.

Rasmussen, J., & Jensen, A. (1974). Mental procedures in real-life tasks: A case study of electronic trouble shooting. Ergonomics, 17(3).

Riman, C., Ghusn, H., & Monacelli, E. (2011). Differences in computer performance across age groups. Physical & Occupational Therapy in Geriatrics, 29(3), 169-180.

Rivera-Rodriguez, A.J., & Karsh, B.T. (2010).
Interruptions and distractions in healthcare: Review and reappraisal. Quality and Safety in Health Care, 19, 304-312.

Rogers, E.M. (2003). Diffusion of innovations (5th ed.). New York: The Free Press.

Rozell, E.J., & Gardner III, W.L. (2000). Cognitive, motivation, and affective processes associated with computer-related performance: A path analysis. Computers in Human Behavior, 16, 199-222.

Shachak, A., Hadas-Dayagi, M., Ziv, A., & Reis, S. (2009). Primary care physicians' use of an electronic medical record system: A cognitive task analysis. Journal of General Internal Medicine, 24(3), 341-348.

Shapiro, A.M. (1999). The relevance of hierarchies to learning biology from hypertext. Journal of Learning Sciences, 8(2), 215-243.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257-285.

Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185-233.

Terry, A.L. et al. (2008). Implementing electronic health records: Key factors in primary care. Canadian Family Physician, 54, 730-736.

Toth, J.C. (1986). The basic knowledge assessment tool (BKAT) – Validity and reliability: A national study of critical care nursing knowledge. Western Journal of Nursing Research, 8, 181-196.

Vicente, K.J. (1999). Cognitive work analysis: Towards safe, productive, and healthy computer-based work. Mahwah, NJ: Lawrence Erlbaum Associates.

Vicente, K.J., & Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, 22(4), 589-606.

Appendix A

For each NASA-TLX pair-wise comparison, the subjects will choose the scale that best describes what contributed to the workload of completing the tasks for all patients on each interface.
Performance or Mental demand Effort or Physical demand Performance or Frustration Frustration or Effort Frustration or Mental Demand Physical Demand or Temporal demand Performance or Temporal demand Effort or Performance Physical Demand or Performance Mental demand or Effort Temporal Demand or Frustration Temporal Demand or Mental Demand Mental Demand or Physical demand Temporal Demand or Effort Physical Demand or Frustration Graphical user 39 Appendix B. The NASA-TLX ratings scale that each participant will complete after completing the tasks for all patient cases on each interface. e Proposal Guidelines These guidelines are based on the WHO recommendations. The proposal should be approximately 7-20 pages in length and include all of the sections below. The paper should be in Times New Roman, 12-point font, 1-inch margins, and double-spaced. For citations and references, please use APA style formatting. The research proposal will consist of the following sections: Title Page The project title should be descriptive and concise. It may need to be revised after completion of the writing of the protocol to reflect more closely the sense of the study. Project Summary (1-3 pages) The summary should be concise, and should summarize all the elements of the protocol. It should stand on its own, and not refer the reader to points in the project description. It is similar to an abstract in a research manuscript or an executive summary in a business report. Project Description Rationale (10-20 pages) This is related to the introduction and literature review in a research paper. It puts the proposal in context. It should answer the question of why and what: why the research needs to be done and what will be its relevance. A description of the most relevant studies published on the subject should be provided to support the rationale for the study. Research Questions & Hypotheses (1 page) Specify the primary research question. 
The research question should be simple (not complex), specific (not vague), and stated in advance (not after the research is done). After the primary research question, secondary questions may be stated. Young investigators are advised to resist the temptation to include too many objectives, or over-ambitious objectives that cannot be adequately achieved by implementing the protocol. After each research question, a hypothesis should be provided, written as a statement that predicts the relationship between the variables. For guidance on developing research questions and hypotheses, read the following: http://www.elsevierhealth.com/media/us/samplechapters/9780323057431/Chapter%2002.pdf

Methodology
The methodology section has to be thought out carefully and written in full detail. It is the most important part of the protocol. It should include information on the research design, the research subjects, interventions introduced, observations to be made, and sample size.

Research design (1-2 paragraphs)
The choice of design should be explained in relation to the study objectives.

Research subjects (1 page)
Depending on the type of study, the following questions should be answered:
● What are the criteria for inclusion or selection?
● What are the criteria for exclusion?
● In intervention studies, how will subjects be allocated to index and comparison groups?
● What are the criteria for discontinuation?

Interventions and/or Observations (1-3 pages)
Intervention: if an intervention is introduced (e.g., a mobile device, training program, or decision support tool), a description must be given, including whether it is already commercially available or still in phases of experimentation. For technology that is commercially available, the protocol must state the proprietary name and manufacturer.
For interventions that are still in the experimental stage (or that are commercially available but are being used for a different indication), additional information should be provided on available pre-clinical investigations.
Observations: information should be provided on the observations to be made, how they will be made, and how frequently they will be made. If an observation is made by questionnaire, the questionnaire should be appended to the protocol.

Sample size (1 page)
The protocol should provide information on, and justification of, the sample size. A larger sample than needed to test the research hypothesis increases the cost and duration of the study and is unethical if it exposes human subjects to unnecessary risk without additional benefit. A smaller sample than needed can also be unethical if it exposes human subjects to risk with no benefit to scientific knowledge. The basis on which the sample size is calculated should be explained in the methodology section of the protocol. Calculation of sample size has been made easy by computer software, but the principles underlying the estimation should be well understood.

Data management and analysis (1-2 pages)
The protocol should provide information on how the data will be managed, including data coding for computer analysis, monitoring, and verification. Information should also be provided on the available computing facilities. The statistical methods used for the analysis of the data should be clearly outlined.

Ethical considerations (1 page)
All research protocols in the biomedical field (including health information and health informatics research), particularly those involving human subjects, must include a section addressing ethical considerations. This includes two components. The first is written approval from the appropriate ethics review committee, together with a written informed consent form, where appropriate.
The second is a special section, preferably in the format of a checklist (see the bulleted points below), addressing all possible ethical concerns; simply obtaining ethical approval is not enough.
● Is the research design adequate to provide answers to the research question? It is unethical to expose subjects to research that will have no value.
● Is the method of selection of research subjects justified? The use of vulnerable subjects as research participants needs special justification. Vulnerable subjects include those in prison, minors, and persons with mental disabilities. Particularly in international research, it is important to ensure that the population in which the study is conducted will benefit from any potential outcome of the research; one population should not bear the burden of research for the benefit of another. Justification is needed for any inducement, financial or otherwise, for participants to enroll in the study.
● Are interventions justified in terms of the risk/benefit ratio? Risks are not limited to physical harm; psychological and social risks must also be considered.
● For observations made, have measures been taken to ensure confidentiality?

References
The protocol should end with relevant references on the subject in APA style.
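As the sample-size guidance above notes, software makes the arithmetic easy but the underlying principle should be understood. A minimal sketch of that principle, assuming a two-sample comparison of group means under the common normal-approximation formula (the effect size, alpha, and power values are illustrative, not prescribed by these guidelines):

```python
from statistics import NormalDist
import math

def two_sample_n(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for comparing two group means.

    Uses the normal approximation: n = 2 * ((z_(1-alpha/2) + z_(1-beta)) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)           # 1 - beta
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)                 # round up: sample sizes are whole subjects

# A medium effect (d = 0.5) at alpha = .05 and 80% power:
print(two_sample_n(0.5))  # → 63 per group (exact t-based methods give slightly more)
```

The sketch also makes the ethical trade-off concrete: halving the detectable effect size roughly quadruples the required sample, so an over-ambitious precision target directly inflates the number of subjects exposed to the study.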