10. SUPPORT FOR AUTHORING AND MANAGING WEB-BASED COURSEWORK: THE TACO PROJECT
10.6 IMPROVEMENTS IDENTIFIED DURING THE TRIALS
10.6.1 User Interface
As reported in Sections 10.4.1 and 10.5, the authoring user interface to TACO requires considerable improvement. During the requirements capture phase, lecturers' main concern was that the system should have a "really simple" user interface. That was exactly what TACO provided. Once lecturers started to use the simple, form-based user interface, however, it became clear that they wanted something that was simple to use, i.e., that shielded them from HTML or other programming-style authoring tools. However, like many users today, they expected the look-and-feel and functionality offered by contemporary Windows interfaces: WYSIWYG editing, menus, icons, moving files by "drag-and-drop", search functionality, and bubble help. The absence of these features not only makes the interface look less attractive, but also fails to support many aspects of the authoring process (see also Section 10.6.2).
From the developers' point of view, these features are not technically difficult to implement, but they require considerable time and effort. One major problem with the user interface from the students' point of view was that pressing the return key submitted the coursework for marking. Several students accidentally submitted their coursework before they had completed all the questions; since assessed coursework can only be submitted once, lecturers had to remove the "accidental"
mark from the database to allow the student to complete the assignment properly.
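The chapter does not describe how TACO handled submissions internally, but the problem and its remedy can be sketched. A minimal server-side guard, with all names hypothetical, would refuse to finalise coursework unless the student explicitly confirms, record at most one assessed mark per student and assignment, and give lecturers an override to remove an accidental mark:

```python
# Hypothetical sketch only - not TACO's actual implementation.
class AlreadySubmittedError(Exception):
    pass

class SubmissionStore:
    """Records at most one assessed submission per (student, assignment)."""

    def __init__(self):
        self._marks = {}  # (student_id, assignment_id) -> answers

    def submit(self, student_id, assignment_id, answers, confirmed=False):
        # Require an explicit confirmation flag, so a stray return-key
        # press (an unconfirmed form submission) only saves a draft.
        if not confirmed:
            return "draft saved"
        key = (student_id, assignment_id)
        if key in self._marks:
            raise AlreadySubmittedError("coursework already submitted")
        self._marks[key] = answers
        return "submitted for marking"

    def withdraw(self, student_id, assignment_id):
        # Lecturer override: remove an "accidental" mark so the student
        # can complete the assignment properly and resubmit.
        self._marks.pop((student_id, assignment_id), None)
```

In practice such a guard would sit behind the web form: an unconfirmed request becomes a saved draft rather than a final submission, and the lecturer's override replaces manual editing of the mark database.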
10.6.2 The Process of Authoring Coursework
Both lecturers and developers discovered early during the pilot that some aspects of authoring coursework had not been considered in the requirements capture phase. The process of authoring coursework was described as a top-down approach. Each lecturer is responsible for a number of courses and plans a number of assignments for a particular course. For each assignment, the lecturer writes a number of questions, and maybe a number of variants for each question. When all questions are written, the assignment is released to the students. While lecturers started to author assignments in this manner, it quickly emerged that questions - rather than assignments - are the level of granularity at which authoring is done.
Lecturers were more inclined to write a number of questions and variants, and then put together an assignment by selecting a number of questions. Lecturers also discovered that they wanted to reuse questions in assignments on different courses.
The model of authoring which emerged from these observations was that lecturers could create a library or pool of questions, from which they then put together assignments.
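The pool-based model that emerged can be sketched as follows. This is an illustration of the idea, not TACO's design: a shared pool holds questions (each with one or more variants), and an assignment is assembled by selecting question identifiers from the pool, so the same question can be reused across courses:

```python
# Illustrative sketch of the question-pool authoring model; all class
# names are hypothetical, not taken from TACO.
import random

class Question:
    def __init__(self, qid, text, variants=None):
        self.qid = qid
        self.text = text
        self.variants = variants or [text]  # at least one wording

class QuestionPool:
    """Shared library of questions, reusable across assignments and courses."""

    def __init__(self):
        self._questions = {}

    def add(self, question):
        self._questions[question.qid] = question

    def get(self, qid):
        return self._questions[qid]

class Assignment:
    """Assembled by selecting questions from the pool, not written from scratch."""

    def __init__(self, course, title, pool, question_ids, seed=None):
        rng = random.Random(seed)
        self.course = course
        self.title = title
        # Pick one variant of each selected question, e.g. so that
        # neighbouring students need not see identical wordings.
        self.questions = [rng.choice(pool.get(qid).variants)
                          for qid in question_ids]
```

The key design point is the inversion of the original hierarchy: questions exist independently of any assignment, and an assignment is merely a selection from the pool.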
Furthermore, there is the issue of reuse or adaptation of other lecturers' questions.
Some courses cover similar material, but from a different perspective. Some assignments have more than one author. This requirement became evident with one of the pilot courses, which taught mathematics to geology students. A perennial problem with such service courses is that, while it is desirable to have the subject taught by an expert mathematician, students often fail to relate the material taught to their own subject. Ideally, mathematical knowledge should be presented in the context of problems arising from the students' own subject. Such questions could be created as a joint effort between two lecturers - the expert mathematician and an expert in the other subject. Since this requirement exists for many other subjects, the expert mathematician could create sets of questions on a range of topics, which could then be adapted for use in different subjects. Implementation of both suggestions - question libraries and joint authoring - will require modifications to the design of TACO: the current design gives each lecturer a virtual web server of their own, and imposes a hierarchical model of course, coursework and questions (see Figure 10.3).
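The adaptation scenario described above can be sketched in a few lines. This is purely illustrative (the names and mechanism are hypothetical, not TACO's): a question written by one expert is cloned into another subject's context, with both the original author and the adapting author credited:

```python
# Hypothetical sketch of adapting another lecturer's question with
# joint authorship recorded; not taken from TACO's design.
from copy import deepcopy

class SharedQuestion:
    def __init__(self, qid, text, authors):
        self.qid = qid
        self.text = text
        self.authors = list(authors)

    def adapt(self, new_qid, new_text, adapting_author):
        """Clone the question into another subject's context, keeping
        the original author credited alongside the adapter."""
        clone = deepcopy(self)
        clone.qid = new_qid
        clone.text = new_text
        if adapting_author not in clone.authors:
            clone.authors.append(adapting_author)
        return clone
```

For the mathematics-for-geology example, the expert mathematician's question would be adapted by the geology lecturer, leaving the original untouched and both names attached to the adapted copy.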
10.6.3 Coping with Distributed System Environments in HEIs
As outlined in Section 10.1, most HEIs have a variety of hardware and operating systems. One of the main reasons for choosing a web-based system was that it could be accessed from a wide range of hardware platforms and operating systems.
During the pilot, all students could access their coursework, but there were still a number of problems arising from the distributed and heterogeneous nature of the terminals:
• some of the machines were not very powerful and, as a result, scrolling through long sets of coursework was slow;
• network access from some clusters was slow;
• some machines had several versions of web browsers installed - students using very old versions found that not all features of the coursework (e.g., color) were supported.
It is easy for lecturers to forget that the coursework they author may have a different look-and-feel on some students' screens. Guidelines stating "best practice" for authoring have yet to be developed; one recommendation would be that lecturers test assignments on the machines which are used by their students. This will help to detect some, but not all, potential problems, since some students (and many lecturers) prefer to complete (or author) coursework from home. While this reduces the demand for machines in the HEI, we found that it can create other problems.
Since telephone costs are lower outside normal working hours, most outside access to the system occurred in the evenings and at weekends. Unfortunately, there is no support available if a technical problem or a query about an assignment arises during those hours. Even though students had been made aware that this was the case, they were still upset when they encountered a problem and found they had to wait until normal working hours to have it dealt with. Similarly, lecturers' planning was badly upset when a night or weekend authoring session could not be completed as planned.
10.6.4 Participatory Design in the Development of Educational Technology
Participatory design advocates the use of early prototypes: (a) to help users to envision how they will work with the system; and (b) to detect omissions in the requirements, or problems in the use of the system, at an early stage. In many projects, user interaction with prototypes takes place in a series of short lab sessions, where the designers observe users "walk through" a small number of selected tasks. This type of evaluation is, however, not sufficient to validate requirements for, and identify potential problems with, a large distributed system.
It is also unlikely that a short session with simulated tasks gives users sufficient opportunity to discover all the major problems which may arise in everyday use of the system.
The evaluation of the first implementation of TACO - the prototype in this project - was therefore conducted as a field trial. In retrospect, it did yield additional requirements and helped to identify some problems that had not been anticipated by the lecturers and the development team. It is difficult to imagine that the development of the type of technology which will support the digital university can be undertaken without this type of field study, where a large number of potential users evaluate a new system in the context of real courses. Developers should, however, be aware that some users find it difficult to distinguish between prototypes and fully developed systems. This poses a dilemma for any development project as to when to submit the system to a field trial: running it too early may create a negative response which kills the project; running it too late may mean that fundamental omissions or problems are only discovered very late in the design process.
The pilot study with real users completing real tasks was extremely valuable, but it was also resource-intensive and, at times, a strain on the development team. Many students (especially those taking Computer Science courses) did not appreciate the difference between a prototype and a fully tested and stable system. Reports of problems with, or failure of, the system early in the pilot were sometimes accompanied by derogatory comments about the technical abilities of those who had developed it. The developers were able to recognize that these comments were born of temporary frustration, and of the unique combination of arrogance and ignorance with which some first-year computer science students tend to view other people's software.
However, there is a danger that users may form a tainted view of a system through interaction with a prototype. It is therefore best to ensure that the system used in this context is stable before large numbers of users get involved in the trials. There have been few problems with the continuing use of TACO over the past 3 years;
consequently, the feedback from students using TACO is overwhelmingly positive, to the extent that they ask why it is not used in more courses. The two positive features cited most frequently by students are: (1) that they can do their coursework from home (charges for dial-up access are less of an issue than at the start of the project); and (2) that they receive immediate feedback, which is viewed very positively compared to paper-based coursework.