Safer Surgery
detail while being practical, inexpensive and easy to use, and that describe more closely the relationship between teamwork, performance, safety and quality.
Acknowledgements
This work was generously funded by the BUPA Foundation, with our earlier reported work supported by the Patient Safety Research Programme. Thanks also to the rest of the Great Ormond Street project team, and especially Professor Marc de Leval and Mr Tony Giddings.
Chapter 8
RATE: A Customizable, Portable Hardware/Software System for Analysing and Teaching Human Performance in the Operating Room
Stephanie Guerlain and J Forrest Calland
Introduction
Researchers at the University of Virginia are studying human performance in the operating room in an effort to understand the factors that lead to or inhibit patient safety in this environment. In general, observation ‘in the wild’ is extremely difficult, as there is no transcription or video recording upon which to base an analysis or to validate the result. Even with such recordings, the work to code and analyse them is extremely labour intensive and thus may not be practical for ‘real-world’ projects that do not have enormous budgets (and time) to accommodate such analyses. We have developed and tested several methodologies that enable collecting team data ‘on the fly’, some of which are ‘summative’ evaluations and others of which are more detailed process tracings of events and communications, yielding directly usable data that can be analysed to characterize team behaviours immediately following the team process that was observed.

We specifically report here on the design of a portable and customizable hardware/software system called Remote Analysis of Team Environments (RATE) that we have used to run two studies designed to improve team communication and coordination in the operating room (Guerlain et al. 2004, 2005, 2008), one handoff-of-care study for paediatric residents (Sledd et al. 2006) and one usability study (in progress). This chapter is based upon work supported by the National Science Foundation and describes how the system works and how it has been used in our operating room studies. The hardware enables digital recording of up to four video feeds and eight audio feeds. The event-recording software enables observers to create a customized set of events of interest that can be logged into a time-stamped database while watching a case live. The playback software synchronizes the event-recording database with the audiovisual (AV) data for efficient review of the case.
Why Study Team Communication in the Operating Room?
In the United States, operating suites at academic medical centres do not have consistent standard procedures or protocols. Team members often rotate across different teams, and techniques and procedures often vary depending on the staff and the technology available.
Research, though limited, has demonstrated that poor teamwork and communication exist during surgical procedures (Guerlain et al. 2008, Helmreich and Schaefer 1994, Sexton et al. 2002). Our pre-intervention observation studies showed that rarely is sufficient information about the plan shared between the surgeon and the other team members, from anaesthesia to nursing.
Researchers have begun to study team performance in the operating room, most often focusing on anaesthesiology. Gaba has studied the use of crew resource management (CRM) training for anaesthesiologists using an anaesthesia simulator (Gaba 1989). Xiao, McKenzie and the LOTAS group at the University of Maryland Shock Trauma Center have evaluated anaesthesiology performance on trauma teams (Xiao et al. 2000, Xiao and The LOTAS Group 2001), and have evaluated focused tasks, such as patient intubation. Little research, however, has been conducted evaluating team performance from the surgeon’s perspective. This may be due to the difficulty of judging performance during long, complicated, primarily manual procedures (e.g., no computerized data are collected).
Evaluation of Teamwork
Because team members with distinct tasks must interact in order to achieve a shared goal, multiple factors play a role in determining the success of a team, such as organizational resource availability, team authority, team effort, leadership, task complexity and communication. These constructs cannot be measured in the same sense as temperature or pressure, and they may interact with each other in complex ways. Collecting behavioural data means that an observer needs to capture ‘the moment-to-moment aspects of team behaviors’. ‘While reliance on expert observer ratings may be a necessity, there is considerable discretion about what is measured’, creating noise in the data as well as missing data points (Rouse et al. 1992, p. 1298). In crew resource management training and evaluation, trained raters judge the adequacy of communication overall on a rating scale, either periodically or at the end of a test (or live) situation; thus, the team is given one or a few overall scores. It is difficult, however, to develop reliable, sensitive scoring metrics, although significant work has been done in this area (Flin and Maran 2004, Law and Sherman 1995). Others look at just performance metrics or knowledge of team members at particular points during a team activity or at the end (e.g., Endsley 1995), or count ‘utterances’ or communication ‘vectors’ (who is talking to whom) (e.g., Moorthy et al. 2005). If one has transcribed verbal data, then verbal protocol analysis (Simon and Ericsson 1993) or even automated linguistic analysis can be
conducted. For example, in a study conducted by Sexton and Helmreich (2000), a computer-based linguistic tool was used to analyse how various language usages affect error rates among flight deck crew members. The study indicated that ‘specific language variables are moderately to highly correlated with individual performance, individual error rates, and individual communication ratings’ (p. 63). Thus, frequent usage of the first person plural (we, our, us) tends to be correlated with a reduced occurrence of error. The study also pointed out that the number of words used by the crew members increases in an abnormal or hostile situation, or during times when workload is increased. Such techniques, however, first require that all communications be transcribed, an extremely labour-intensive, time-consuming and tedious task.
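As an illustration of the kind of word-category counting such linguistic tools perform, the sketch below computes the rate of first-person-plural words in a transcript. The word list and function name are our own illustrative assumptions, not the actual tool used in that study:

```python
import re
from collections import Counter

# Hypothetical word category of the kind a linguistic analysis tool
# might track; the real tool's dictionaries are not reproduced here.
FIRST_PERSON_PLURAL = {"we", "our", "us", "ours"}

def category_rate(transcript: str, category: set) -> float:
    """Return the fraction of words in `transcript` that fall in `category`."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in category) / len(words)

# A crew that says 'we' and 'our' more often scores higher on this metric.
print(category_rate("Let's check our fuel before we descend", FIRST_PERSON_PLURAL))
```

On a real data set one would compute this rate per crew and correlate it with observed error counts, which is the sort of relationship the study above reports.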
Our approach was to find a ‘happy medium’ that provides a fairly detailed process tracing of events and communications, going beyond just utterances but not requiring transcription of verbal data. RATE enables trained observers to mark events of interest ‘on the fly’ while watching the team process, knowing that the resultant data set may not be 100 per cent accurate, but benefiting from the fact that the data are immediately available for summarization of events (e.g., the number of observed contaminations, or the amount of communication that was focused on teaching vs coordination), with the ability to immediately jump the AV record to a few seconds before any marked event, either for teaching or review purposes or to further validate the data. Thus, we end up with a human-indexed summarization of events, with the ability to play back those events of interest without having to search through a long AV record.
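The core of this approach can be sketched as a small timestamped event log. The class and method names below are hypothetical illustrations, not the actual RATE implementation (which, as described later, is written in Visual Basic):

```python
import time

class EventLog:
    """Minimal sketch of on-the-fly event marking: each mark stores an
    offset in seconds since observation start, so playback can later
    jump to a few seconds before the event."""

    def __init__(self, start_time=None):
        self.start = start_time if start_time is not None else time.time()
        self.events = []  # list of (offset_seconds, label)

    def mark(self, label, now=None):
        """Record an event of interest at the current moment."""
        now = now if now is not None else time.time()
        self.events.append((now - self.start, label))

    def seek_position(self, event_index, lead_in=5.0):
        """Playback position a few seconds before the marked event."""
        offset, _ = self.events[event_index]
        return max(0.0, offset - lead_in)
```

Because the events carry offsets rather than raw clock times, the same log can index any AV recording whose start is aligned with the observation start.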
The methods reported here were developed to support observation and scoring of teams performing laparoscopic cholecystectomy, a surgical procedure to remove the gallbladder. The procedure created an ideal situation due to its frequency of performance and the relatively short length of the procedure (1–1.5 hours from the patient entering to the patient leaving the room). The challenges include the fact that a standard team is made up of at least five people: anaesthesiologist, attending surgeon, resident surgeon, scrub tech and circulating nurse; in our institution, a medical student who acts as the laparoscopic camera operator and an anaesthetist are most often included too. Others, such as technicians and nurses-in-training, may also be present. Reliability of the data becomes an issue when the observer is confronted with tracking multiple events by multiple test subjects when evaluating a team (Simon and Ericsson 1993).
One of the biggest problems in collecting such data is determining how much individual interpretation by an observer affects the make-up of the data. If the purpose of the data collection tool is to produce consistent and valid data, then the data collection tool/methodology should also produce data that have high inter-rater agreement between multiple observers. A data collection tool that can minimize the effects of individual interpretation may result in a data set that has a high degree of inter-rater agreement. This can be aided with a computerized system that helps standardize the data collection options. In our system, we agreed ahead of time on the types of events we wanted to track, and the event-marking software aids
in this process. Events can be ‘one-time’ checkboxes, such as ‘Patient enters room’, ‘Antibiotics given’, ‘First skin incision’, etc.; ‘countable’ list items, such as ‘Dropped the gallbladder’, ‘Contamination’, ‘Distraction’, etc.; or a series of pick lists that enable the observer to quickly summarize a communication event, such as ‘Surgery Attending → Scrub Tech → Requesting Tools’, ‘Surgery Resident → Medical Student → Teaching Camera’, etc. The observer also has the ability to type in free-hand notes at any time. All of these events are time-stamped and synchronized with the AV recordings (if any) using a method described further below. We have also experimented with creating a ‘union’ of the two scorers’ data files, such that if one observer marked communication events that the other did not, or vice versa, a more complete data set would result from joining the two data files and eliminating any that are the ‘same’, with the ability to also measure inter-rater agreement on those that were an interpretation of the ‘same’ communication event (Shin 2003). In other words, a moving-window alignment algorithm automatically detects the ‘same’ conversations that were encoded, so that a union of the two data sets can be made and inter-rater agreement can be measured on just the intersection.
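A minimal version of such an alignment might look like the following sketch. This is a greedy match under assumed rules (same label, timestamps within a fixed window); the published algorithm's exact windowing may differ, and all names are illustrative:

```python
def align_events(a, b, window=10.0):
    """Align two observers' event lists. Events are (seconds, label)
    tuples; two events count as 'the same' when their labels match and
    their timestamps differ by less than `window` seconds."""
    matched = []            # pairs of (index in a, index in b)
    used_a, used_b = set(), set()
    for i, (ta, la) in enumerate(a):
        for j, (tb, lb) in enumerate(b):
            if j in used_b or la != lb:
                continue
            if abs(ta - tb) < window:
                matched.append((i, j))
                used_a.add(i)
                used_b.add(j)
                break
    # Union: one copy of each matched event plus everything unmatched.
    union = [a[i] for i, _ in matched]
    union += [e for i, e in enumerate(a) if i not in used_a]
    union += [e for j, e in enumerate(b) if j not in used_b]
    # Agreement measured as matched events over all distinct events.
    total = len(matched) + (len(a) - len(used_a)) + (len(b) - len(used_b))
    agreement = len(matched) / total if total else 1.0
    return sorted(union), matched, agreement
```

The union gives the more complete data set, while the matched pairs are the intersection on which inter-rater agreement can be computed.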
Interestingly, in our second operating room study, RATE was used as part of the independent variable, which was the training of crew resource management skills; thus, RATE was used to help train surgeons on their team communication and coordination skills. In this longitudinal study, the dependent variables were composed of answers to a questionnaire distributed to all team members immediately following each case, and improvement over time in questionnaire scores was the method for measuring impact (Guerlain et al. 2008). This method of measuring team performance has several advantages, including ease of collection, increased power (because all team members rate the team performance, one case yields seven or so ratings) and no need to train observers on a team scoring metric.
Data Collection System
We list here the set of equipment used for the operating room studies, along with some details about how and when it is used. Researchers can choose to use less recording equipment, depending on their methodological needs.
•	Rolling cart. This cart stores all of the equipment listed below (except the two scoring laptops). It is rolled into the operating room and placed in the corner of the room, to the left of the anaesthesia monitor, to be most out of the way; the AV equipment is set up and recording is started prior to the patient’s arrival. After the patient leaves, recording is stopped, the AV equipment is taken down and the cart is rolled out.
•	Four Pentium III computers, each with a video capture card, corresponding video compression software and a large hard drive. These computers are placed on the bottom half of the cart. Each video card and its corresponding software automatically compresses one video feed (laparoscopic image view, table view, anaesthesia monitor view or room view) and two audio feeds. Upon playback, we can thus view all four videos and hear all conversations, or selectively mute any of the four pairs of audio feeds.
•	One LCD monitor, mouse, keyboard and switcher. These are placed on the top half of the cart. The switcher enables switching control of the monitor, keyboard and mouse among the four computers.
•	Eight Shure wireless microphones (each on a different frequency). The receivers are placed on the top of the cart, to the right and left of the LCD monitor, with each audio feed going into one of the stereo (left/right) audio input lines of the video capture cards on the four computers. The lapel microphones are placed on staff as they arrive. For the surgeons, the lapel receiver is clipped onto the front of the scrub top, with the wire running over the shoulder and taped to the back of the scrub top, and the microphone itself is turned on and placed into the pocket of the scrub bottom. This is done before the surgeon scrubs and has a gown put on.
•	A video cable connected directly from the output jack on the back of the laparoscopic monitor to the laparoscopic view recording computer, thus enabling video capture of the laparoscopic image.
•	One high-definition digital camera (requires running cabling to it), which has remote pan/tilt/zoom capabilities and can handle the bright-light vs dark-room changes that occur during surgery. It is installed on one of the booms over the operating table to get a ‘table’ (operating area) view. Each boom in our operating rooms now has Velcro placed in the correct spot, as does the back of the camera; due to the camera’s weight, we also use surgical tape to secure the camera in place prior to each surgery. The camera is adjusted using the remote pan/tilt/zoom capability such that just the abdominal area is in view, and the cable is secured out of the way with surgical tape along the boom, runs under the anaesthesia monitor to get to our cart, and is plugged into the video capture card of the table view recording computer.
•	A digital-to-analogue scan converter. This is an off-the-shelf product that can be used to ‘split’ the video feed of any computer monitor. We use it to get a video capture of the anaesthesiologist’s computer screen, the analogue output of which is connected to the anaesthesia view recording computer (which then digitizes the analogue input).
•	One wireless video camera (low resolution but extremely small and portable), which we use to get a ‘room view’ of the operating room. All of our operating suites now have Velcro on the upper corner of the wall, and we have Velcro on the back of the wireless video camera, such that we can quickly place this camera prior to a surgery. The receiver is on the top of the cart. One person climbs onto a stool to place the video camera while a second person looks at the output on the room view recording computer, so that the person placing the camera knows at what angle to point it (as it has no remote pan/tilt features).
•	Two laptop computers, each running the RATE event-marking software. We have two people observe each case live and manually ‘mark’ events of interest using predefined categories of information described further in the next section. The observers stand on one or two steps to get a better view, and are behind the attending, on the surgeon’s left side.
•	Two headsets. These are worn by the observers to enable hearing all conversations.
•	One eight-jack local area network (LAN) hub, which enables all four computers to have internet access through one ethernet cable. We also use this hub to connect external computers (e.g., the laptops we use to manually mark events of interest when observing the case) to the four recording computers.

Thus, to summarize, we designate the audio-video input to the four computers as follows:
•	For the laparoscopic image computer, we capture the image coming off the laparoscopic camera (using a video cable), along with the surgery attending’s and surgery resident’s voices (using one pair of Shure wireless microphones).
•	For the table view computer, we capture video taken from just above the operating table (using a high-definition video camera). On this view, we capture the voices of the scrub tech and camera operator using a second pair of Shure wireless microphones.
•	For the anaesthesia monitor computer, we capture the video from the anaesthesiologist’s monitor (using a scan converter), along with the voices of the anaesthetist(s) using a third pair of Shure wireless microphones.
•	For the room view computer, we capture an overall room video, taken from high up in the corner of the OR (using a wireless video camera) and, on this view, we capture the circulating nurse’s voice, along with that of any ‘extra’ person in the room, using the final pair of Shure wireless microphones.
The observers set up and take down all equipment, and then make live observations using the RATE event-marking software.
General Setup Procedure
Our method of synchronizing the four audio-video files and the observers’ event database files requires that four activities take place. First, we manually synchronize the four Pentium III computers’ clocks to an external time server prior to each recording session (e.g., the morning before a case), which we do using a software tool available for free download from <www.arachnoid.com/abouttime/>. Second, once all equipment is set up, we start compressing/recording the laparoscopic view first, as this is the reference recording time from which the other three computers’ recording times are offset upon playback.
Third, once the four audio-video feeds are up and running (e.g., all of the above equipment is set up and each computer is recording/compressing its respective audio-video inputs), we start the RATE event-marking software, set the ‘start time’ to approximately 30 seconds ahead of the currently counting-up recording time on the laparoscopic view, and then hit ‘Start’ on the RATE event-marking software just as the recording time on the laparoscopic view reaches the just-entered start time. The RATE event-marking software will then ‘count up’ from the start time, and this counting-up time remains approximately in sync with the counting-up time of the laparoscopic view. This enables the time stamps captured with the RATE event-marking software to be synchronized with the four video feeds upon playback. (All computers’ clocks are slightly different, so the two recording laptops and the laparoscopic video capture time will eventually go out of sync, but not by more than a few seconds by the end of the case.)
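The arithmetic behind this synchronization is simple: an event marked t seconds after ‘Start’ sits at (entered start time + t) seconds into the laparoscopic recording. A sketch, with hypothetical names:

```python
def video_position(rate_start: float, event_offset: float) -> float:
    """Position (seconds) in the laparoscopic video of an event marked
    `event_offset` seconds after the RATE 'Start' click, where the
    observer pressed Start when the laparoscopic recording counter
    read `rate_start` seconds."""
    return rate_start + event_offset

# e.g., Start pressed when the laparoscopic counter read 0:30; an event
# marked 12 minutes later sits at 12:30 in that video.
position = video_position(30.0, 12 * 60.0)
```

Because the other three videos are themselves offset from the laparoscopic reference, this one mapping suffices to locate an event in all four feeds.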
Finally, all software files are named with a unique ID for that case, followed by a description of the file (e.g., 6134_LapView.mpg, 6134_TableView.mpg, 6134_AnesthView.mpg and 6134_RoomView.mpg for the four AV files, and 6134_Observer1.mdb and 6134_Observer2.mdb for the two observers’ database files). Thus, when running the playback software, one can select the appropriate observer data file (e.g., 6134_Observer1.mdb) and know which four video files to load along with that file (e.g., 6134_LapView.mpg, 6134_TableView.mpg, 6134_AnesthView.mpg and 6134_RoomView.mpg). In our studies, we had two observers recording, but designated one as the ‘primary’ observer and used this person’s log file for debriefing purposes immediately following the case. Some work was done later to measure inter-rater agreement between the two observers, as described above.
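This naming convention makes the file set for a case derivable from its ID alone; a short sketch (the function name is our own):

```python
def case_files(case_id):
    """File names implied by the convention <caseID>_<description>.<ext>:
    four AV files and two observer database files per case."""
    videos = [f"{case_id}_{view}.mpg"
              for view in ("LapView", "TableView", "AnesthView", "RoomView")]
    observers = [f"{case_id}_Observer{n}.mdb" for n in (1, 2)]
    return videos, observers
```

Given an observer file such as 6134_Observer1.mdb, the playback software can strip the case ID and regenerate the four matching video names the same way.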
Of note, upon playback, we run the four videos using the LAN hub: the playback software simply opens and plays the videos directly from their location on the recording computers. In other words, we avoid the extensive time it would take to copy the videos to a single computer. Thus, we can review the case immediately, which is how we are able to debrief the surgeons on their crew resource management and/or non-technical skills in a breakout room during the time between cases (see Figure 8.1). We tear down the equipment as soon as the patient is wheeled out of the room, store it on the cart, wheel the cart to the breakout room and meet the surgeons there once they are finished dictating the case. (In our studies, the surgery attending always attended the debriefs, and sometimes the surgery resident joined as well; the other team members are busy during this change-over time.)
The RATE Playback Software
Both the ‘event marking’ and ‘playback’ components of RATE are programmed using Visual Basic 6.0 (VB). The time-stamped events are stored in an Access database.
RATE playback is used to synchronize the four video feeds upon playback by offsetting the start of each video based on the time the videos were encoded.
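Under the assumption that each computer logs the wall-clock time at which it began encoding, the per-video playback delays relative to the laparoscopic reference (which is started first) could be computed as follows. This is a sketch of the offset arithmetic, not the VB implementation, and the names are illustrative:

```python
def playback_offsets(encode_starts, reference="LapView"):
    """Seconds to delay each video at playback so that all feeds line
    up with the reference view, given the wall-clock time (seconds) at
    which each computer started encoding."""
    ref = encode_starts[reference]
    return {name: t - ref for name, t in encode_starts.items()}

# Example: the table view started encoding 7.5 s after the lap view,
# so at playback it is delayed by 7.5 s relative to the reference.
starts = {"LapView": 100.0, "TableView": 107.5,
          "AnesthView": 103.0, "RoomView": 110.0}
offsets = playback_offsets(starts)
```

The reference view always gets an offset of zero, which matches the setup rule above of starting the laparoscopic recording first.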