Exploratory - Initial Design and Development
An overview of the design and development of LAMRS, conducted through design-based research (DBR) and presented in the previous chapter, frames this chapter. It outlines the research methodologies used during Phase I, the exploratory phase of the study. The goals at this stage were to become familiar with virtual reality (VR) through Internet research and a literature review, to investigate what kinds of learning VR can support, and to create a foundation for subsequent phases of the research.
The VR system aligned with my instructional intervention (Figure 8) was examined through an informal process of investigation and analysis of the application I created. In Spring 2005, I worked with three students to investigate VR using the VR tools we had developed, collecting observations and reflections to gauge how the immersive environment supported the instructional aims.
Phase I: Exploratory - Initial Design and Development
Three undergraduate research students joined me in the spring and continued into the following year, working as a small team without outside professional help. With a development budget of $2,000, we purchased our most expensive item, the iGlasses PC/SVGA, to provide a mobile visual display for viewing our 3D image from an immersive, first-person perspective. The iGlasses were the cheapest head-mounted display we could reasonably afford. We also acquired a webcam, while a Dell desktop computer we already owned served as the platform for the virtual engine. We researched software online and discovered DAZ Studio, which we used to create our 3D content.
Design and development began with free access to kits for different model characteristics, making it inexpensive to obtain a DAZ Man kit and create a 3D image of the human cranium. We used ARToolKit, freely available software that employs pattern recognition to render and display the 3D image in the head-mounted display (HMD). At this stage, our tracking relied on ARToolKit's pattern recognition, and we did not yet have funding to integrate haptics, navigation, or broader integration software.
During Phase I, my research students and I were new to VR technologies, and we had to investigate absolutely everything, from the definition of VRML to understanding a message about a missing msvcirtd.dll file. To document our progress and offer practical guidance for others who might replicate our approach, we kept a blog throughout Phase I. I've included the first two posts to illustrate the concepts we were grappling with and the learning milestones we began to reach at this early stage of development.
We kicked off an augmented reality (AR) project for Weber State Dental Hygiene students to enhance local anesthetic training through immersive AR learning tools Our plan centers on using AR to teach and reinforce the administration of local anesthesia within dental hygiene education This summer, we started by reviewing the calendar, then conducted focused research and downloaded the software programs and essential documents needed to build the project, laying a solid foundation for AR-based instruction.
We downloaded ARToolKit on our home computers
We practiced rendering images using ARToolKit and a webcam
We researched the best head-mounted display available in our price range
We purchased personal webcams for home instant message conferences
We subscribed to the ARToolKit mailing list to explore others' questions on augmented reality
We read and followed the tutorial on ARToolKit
Today we began familiarizing our group with computer software programming and Microsoft Visual Studio, laying the groundwork for our future development projects. We purchased a head-mounted display (i-glasses) and started using image cards printed from the Washington HITLab website to render images with our webcam.
We reviewed the archive of the ARToolKit mailing list to see if any other groups or persons had asked the same questions we had. Our questions were:
How do we render VRML (virtual reality modeling language) images?
What is the best HMD to use?
What does msvcirtd.dll mean, and if it is a file, how do we find it?
At first we suspected a firewall, but an antispyware scan confirmed the problem was a missing msvcirtd.dll file. The msvcirtd.dll and msvcrtd.dll files are part of the Microsoft Visual C++ debug runtime library and ship with ARToolKit 2.65 (not the VRML build). By copying these DLLs from ARToolKit 2.65 into ARToolKit DirectShow 2.52 VRML, the missing-file alert disappeared and we were able to render an image on our home computers.
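The fix amounts to copying the two DLLs between the two ARToolKit folders. A minimal sketch of that step follows; the folder names are hypothetical stand-ins for the actual install locations, and the demo builds throwaway directories with empty placeholder files so the copy can be shown end to end.

```python
import shutil
import tempfile
from pathlib import Path

# Demo setup: stand-in folders for the two ARToolKit installs
# (hypothetical layout; substitute your real install paths).
root = Path(tempfile.mkdtemp())
src_dir = root / "ARToolKit-2.65" / "bin"       # ships the debug runtime DLLs
dst_dir = root / "ARToolKit-2.52-VRML" / "bin"  # reports them as missing
src_dir.mkdir(parents=True)
dst_dir.mkdir(parents=True)

# In the demo, the source DLLs are empty placeholder files.
dlls = ("msvcirtd.dll", "msvcrtd.dll")
for dll in dlls:
    (src_dir / dll).touch()

# The actual fix: copy both Visual C++ debug runtime DLLs across.
for dll in dlls:
    shutil.copy2(src_dir / dll, dst_dir / dll)

copied = sorted(p.name for p in dst_dir.iterdir())
print(copied)
```

Once both DLLs are present alongside the 2.52 VRML executables, Windows can resolve them at load time and the missing-file alert no longer appears.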
We evaluated several head-mounted displays: the i-glasses offer a 640×480 resolution, which is low but usable; the Olympus Eyetrek FMD 700 lacks a see-through mode; the cy-visor from www.personaldisplay.com also does not provide a see-through option; and Sony's Glasstron line was outside our price range.
We’re not professional programmers, but we need to learn a few coding basics to generate the image we want. We’ll keep building our coding skills by working through the ARToolKit manual’s examples, using hands-on practice to become more proficient.
Phase I: Exploratory - Process of Investigation
My investigation combined online study of VR technologies with books on VR concepts and components. Once my students and I understood how to get started, we designed a VR application that fit our budget (Figure 9). The analysis linked the virtual system with my instructional intervention, and our success depended on both building a functional VR setup and writing a persuasive funding proposal. Evaluation of the VR system was based on its ability to provide a manipulable 3D image, offer experiential learning and just-in-time feedback, and support self-controlled practice and iterative learning.
Over six months, three senior dental hygiene students and I conducted an in-depth exploration of virtual reality (VR) technologies and their applications in dental education. We reviewed articles describing VR uses with specific learning goals and highlighting the technological aspects of the platforms. To deepen our understanding, we purchased supporting textbooks, with 3D User Interfaces among the most useful resources for interpreting VR interfaces and three-dimensional visualization.
Figure 9. Phase I: Exploratory - process of investigation
(Bowman et al., 2005). This book provided a VR taxonomy as well as simple descriptions of hardware and software options based on individual development considerations.
To build a foundation in VR technology, we scoured reputable sites and joined beginner-friendly listservs, focusing on sources like the Human Interface Technologies Lab (HITLab) at the University of Washington, the New Media Consortium’s Virtual Worlds site, and Georgia Tech’s Graphics, Visualization, and Usability portal. Membership in quality listservs such as those of the HITLab and Georgia Tech proved helpful because observing the discussions exposed us to relevant jargon, complex technical concerns, and collaborative problem solving that informed our own research. We also sought open source VR content online, since many toolkits and resources are freely available; for Phase I we used the open source ARToolKit, which we downloaded from the HITLab website.
Creation of VR System and Research Procedures
In exploring VR systems, my students and I gathered information on the decision-making process for selecting VR hardware and software, and we used these insights to develop a VR teaching application. We began with the components shown in Figure 36 and built a basic VR viewer using ARToolKit to render a 3D model of the human cranium, created in DAZ Studio, a freely available but relatively simple tool. The DAZ Studio output was accessible but produced low image quality and offered limited ability to modify the model. ARToolKit relies on a marker-based pattern recognition system in which printed patterns are detected by a webcam, triggering the computer to display the programmed 3D image. The rendering depends on an uninterrupted camera view; any interruption causes the 3D image to disappear.
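The marker-driven behavior just described can be illustrated with a small sketch. ARToolKit itself is a C library, so the Python below is not its actual API; it is a hypothetical stand-in for the per-frame detect-and-render loop, using the classic "hiro" sample pattern as the registered marker.

```python
def render_frame(frame_markers, registered_pattern):
    """One pass of a marker-based AR loop: the 3D model is drawn only
    while the webcam can see the registered printed pattern."""
    for marker in frame_markers:
        if marker == registered_pattern:
            return "draw cranium model at marker pose"
    # Pattern occluded or out of view: the overlay disappears at once.
    return "no overlay"

# Four webcam frames; a hand occludes the card in frame 2,
# so the cranium model vanishes for that frame and returns after.
frames = [["hiro"], ["hiro"], [], ["hiro"]]
results = [render_frame(markers, "hiro") for markers in frames]
for i, r in enumerate(results):
    print(i, r)
```

This fragility is exactly the limitation noted above: because rendering is re-derived from the camera view on every frame, any break in the marker's visibility makes the model disappear rather than persist.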
Although my VR knowledge expanded during this exploratory phase, I could not create or operate a fully functional VR learning system. I acquired hardware and software, including ARToolKit, the iGlasses, a DAZ Studio 3D image, and a webcam, but the resulting 3D image was low-fidelity and low-quality, failing to support in-depth anatomy analysis. Moreover, the model could not be manipulated; any student interaction caused it to disappear, so self-guided practice and iterative learning were not supported.