Interface Design Using Java: Part One (Introduction)

Copyright Chris Johnson, 1998. This course must NOT be used for any commercial purposes without the express permission of the author.

Human Computer Interface Design Using Java (Part 1) by Chris Johnson

Part One: Introduction
  An Introduction to Human Computer Interaction
  An Introduction to Interface Development in Java
  An Introduction to Evaluation
Part Two: AWT Components
Part Three: Text...
Part Four: Laying Things Out...
Part Five: Selection Components
Part Six: Graphics...
Part Seven: Additional information

Back to Main Index

This module introduces the design, implementation and evaluation of effective user interfaces. The practical component will emphasise, but not exclusively address, facilities available in the World Wide Web, and much of the programming work will involve the use of Java and the Java libraries for windowing systems. The lecture material will be divided approximately 60:40 between design and implementation topics, with thorough coverage of both the underlying principles and the practical application of the techniques presented. It is assumed that you are familiar with the basics of Java programming (i.e., you know how to compile and run a Java program).

Prerequisites: It is strongly recommended that you have taken a good introductory course in Java. It is also recommended that you have some knowledge of object oriented design. For our MScIT, this corresponds to the X stream in programming and the object oriented design module.
Aims

This module will:
- provide a basic practical and theoretical introduction to HCI
- introduce you to HCI as a design discipline
- extend your technical knowledge of dialogue styles
- equip you with a basic set of analysis and evaluation techniques
- familiarise you with current software tools for interactive system development
- allow you to experience the difficulty of designing an effective user interface on the first attempt, and give you experience of the iterative nature of user interface development
- introduce the windowing facilities of Java

Objectives

By the end of the module the student should be able to:
- construct a simple user interface using appropriate Java libraries
- evaluate user interfaces using appropriate techniques
- describe the prototyping cycle
- describe the use of empirical testing in the evaluation of design alternatives
- describe and apply cognitive walkthrough for the evaluation of designs
- produce usability requirements for an application context
- describe the facilities offered by current window systems
- implement a simple user interface in Java
- describe the architecture and component functionality of a user interface class library
- produce design rationales for designs in the HCI literature and for your own designs

Module Structure

The module will consist of 20 lectures together with associated tutorials and laboratory sessions. The laboratory work will include exposure to the basic facilities for building web-based interactive systems including (if not covered in the Core) HTML, forms and simple use of CGI for interaction with a server. The main programming emphasis will be on the use of Java libraries for windowing systems.

Assessment

By examination (70%) and coursework (30%). The assessed coursework will be a single exercise involving: the design of an interactive system; implementation of a prototype of that design; and evaluation of the prototype.
Textbooks

These notes supplement the briefer bullet points that structure the lecture material (see the Course Index). The following book is recommended as a very general introduction to the problems of designing artefacts that support people's tasks:

- D. Norman, The Psychology of Everyday Things, Basic Books/Harper-Collins, 1988. (ISBN 0-465-06709-3)

A more complete introduction to user interface design is provided by:

- J. Preece (ed.), Human Computer Interaction, Addison Wesley, 1994. (ISBN 0-201-62769-8)

More details about the Abstract Window Toolkit are provided by:

- D.M. Geary, Graphic Java: Mastering the AWT, Prentice Hall, 1997. (ISBN 0-13-863077-1)
- C.S. Horstmann and G. Cornell, Core Java 1.1 (Volume 1 - The Fundamentals), Prentice Hall, 1997. (ISBN 0-13-766957-7)
- M. Campione and K. Walrath, The Java Tutorial and, in particular, Creating a User Interface in Java, available as http://www.star.le.ac.uk/java/ui/ or in its latest form from http://java.sun.com/docs/books/tutorial/ (unfortunately this can be a very busy site)
- Fintan Culwin, A Java GUI Programmers' Primer, Prentice Hall, 1998. (ISBN 0-13-908849-0)
- Peter Sawyer's on-line notes for his course on AWT

The Geary book is a thorough introduction to AWT; however, much of it is devoted to a graphics package that extends AWT. This package is not discussed in this course. Geary's material provides a valuable next step in applying AWT after completing this course. The Culwin book is a more gentle introduction to AWT.

Details about the JFC/Swing classes are provided by:

- M. Campione and K. Walrath, The JFC/Swing Tutorial: A Guide to Constructing GUIs, Prentice Hall, June 1999. (ISBN 0-201-43321-4)

Introduction and Motivation

Human computer interaction is arguably the most important topic to be studied as part of any computing science course. Here are some of the reasons why it is important to study this topic:

- user satisfaction.
Unless we understand the needs of our users there is little prospect that we will be able to support their tasks. This is a non-trivial problem. Users may not be able to tell you what they would like their system to do. If they have never used a computer they may have unrealistic expectations. Even if they are familiar with computer systems, it may be difficult for them to look beyond the applications that they already use. For example, try to imagine what the successor to the Macintosh's operating system or Windows 98 might look like. It is also difficult to understand what the user is doing even with their present systems. For instance, it is crazy to ask someone what they do in their working day. Most people have thousands of tasks that vary over time - it's hard to know where to begin. One way round this is to watch people and record the activities that any new system must support. However, people will alter their behaviour when they know that somebody is watching them use a system. This phenomenon has become known as the Hawthorne effect, after a study of factory workers at Western Electric's Hawthorne plant, reported in 1939, in which output increased just because people were studying their production techniques. Human computer interaction addresses these problems by providing analytical techniques that can be used to identify users' real world activities so that designers are better prepared to support those tasks when they build computer systems.

- safety. People make mistakes when they use computer systems. They inadvertently delete files. They ignore warnings and fail to read help files. They type the wrong input when asked to provide information to their systems. Such "errors" are to be expected. Whilst they may have only a minimal impact upon most office systems, they have more serious implications as computer systems are integrated into process control applications. Many recent aviation accidents that were blamed upon pilot error were originally caused by a well-known HCI "mode confusion" problem.
This occurred when pilots thought that they were being asked to provide one set of figures but the system was, in fact, expecting another set. Human computer interaction provides a range of evaluation techniques that can be used to detect situations in which such "errors" are likely to occur.

- innovation and a competitive edge. When I first started to teach HCI, everyone was demanding a GUI (Graphical User Interface). This reflected a general dissatisfaction with command line interfaces. Software companies that marketed products with a GUI, therefore, had a competitive edge over those that did not. More recently, the same trend has been repeated by the rise of the Internet and the World Wide Web. Even if a product is only to be used within a company, over its Intranet, it seems to be a prerequisite that it should be designed to fit within the interfaces provided by current web browsers. It is, therefore, important that designers follow recent trends in interface development if they are to keep up with their competitors. A related point is that a number of innovative devices are beginning to open up entirely new markets. These markets will only be exploited if companies can devise means for users to interact effectively with those devices. For example, mobile computers and personal digital assistants (PDAs) provide their users with access to a large amount of data as they travel from location to location. It remains to be seen whether appropriate interaction techniques can be devised so that users can view all of this information on very small displays, or whether they can interact with the data using tiny keyboards. Human computer interaction addresses these central issues that can determine whether or not new technologies will be successfully exploited by their intended users.

- equality and accessibility. Computers provide their users with access to vast amounts of information.
However, the techniques that we have devised for people to interact with these devices also prevent many users from accessing this information. For example, how would a blind user find a menu or a button on a graphical user interface? Screen readers can convert textual displays into spoken output, but these applications will not work for the user interfaces that dominate today's mass market applications. The design of a user interface, therefore, determines who can and who cannot access your system. These, often implicit, decisions can have profound practical, ethical and legal implications for your workforce. On the other hand, human computer interaction provides innovative solutions that increase the accessibility of information to many groups of users. For instance, the Phantom force feedback device can be used to "trace" the shape of a graph or diagram that might otherwise be inaccessible to blind users.

- marketing. A number of companies have attempted to trademark phrases such as "easy to use". Such qualities are important to consumers, who have a growing distrust of "feature laden" products. For instance, most users only understand a small subset of the facilities that are offered by video recorders. It is important to emphasise, however, that terms such as "easy to use" have little meaning on their own. In particular, an interface that I find "easy to use" may be impossibly difficult for someone who has not studied computing. Conversely, I would struggle to operate a crane, a submarine or an aircraft. Ease of use depends upon the users' expertise, training, age, attention, physiological characteristics and so on. Human computer interaction provides a means of examining the more complex factors that contribute to such marketing hype.

This course will address many of the issues introduced in the previous paragraphs. However, the focus will be upon designing and constructing user interfaces rather than on the social issues surrounding those systems.
More information about these issues can be found in the social aspects of computing course.

Jargon

This area is full of jargon and acronyms. Many of these terms differ between Europe and the United States.

CHI. Computer-Human Interaction: this phrase most commonly describes the theory and practice of user interface design in the United States. The Association for Computing Machinery runs a Special Interest Group in Computer-Human Interaction (SIGCHI) and its web pages are a useful resource for general information on human computer interaction.

HCI. Human Computer Interaction: this phrase most commonly describes the theory and practice of user interface design in the United Kingdom and Europe. The British Computer Society runs a Human Computer Interaction Special Interest Group. Its web pages are a useful resource for information about meetings, conferences and jobs in this area.

Ergonomics. This means "the study of work". The UK Ergonomics Society provides general information about this subject. In previous years, HCI research focussed very much on the user, the keyboard and their display. Ergonomists looked more widely at working practices in complex environments. They looked at posture, physiology, noise levels and so on. However, with the rise in problems such as Repetitive Strain Injury (RSI) and Carpal Tunnel Syndrome, people in the field of HCI have become more interested in areas that were previously the focus of ergonomists. At the same time, there has been a move towards understanding the use of interactive systems by groups of people (within companies, over the Internet and so on). This broadening of the scope of HCI has borrowed many more general techniques from sociology.

Human Factors. This is to Ergonomics what CHI is to HCI. In other words, the term human factors is often used in the United States to represent many of the issues and techniques addressed by ergonomists in the UK.
There is, therefore, a Human Factors (and Ergonomics) Society that is based in the United States. The previous paragraphs provide a number of pointers to further information about the varying "traditions" that have combined to support user interface design. Although our focus in this course is upon the design and implementation of interactive systems, it is important not to lose sight of the wider issues about working practices and working environments that are being addressed by human factors experts and ergonomists. More details are provided about these issues in the Interactive Systems Design course.

The HCI Lifecycle

So what is HCI? One way of thinking about the subject is that it provides a series of techniques that are intended to focus upon the users' needs at each stage of development. This contrasts with traditional approaches to software engineering, where the user may only be considered at the beginning of a project, to establish initial requirements, and at the end, to perform final product testing. The "user centred" approach advocated by HCI would, instead, encourage user involvement at all stages of development. For instance, prototypes or partial implementations might be shown to potential users to gather their feedback as a system is built. This helps to ensure that designers find mistakes early in development. Otherwise, they might not be discovered before the system is built and delivered, when all of the development resources might have been used up. The following paragraphs briefly describe the main stages of the HCI development lifecycle. They are intended to indicate the sorts of activities that interface designers might conduct to ensure a user centred approach to systems development:

Requirements elicitation. This identifies the objectives that the system must satisfy. Designers must identify the intended users. This involves an analysis of their previous computer experience, their educational level, cultural factors and so on.
For instance, if users are already used to one style of interface then designers may choose to emulate features of the existing systems in a new application. This ensures consistency between applications and helps users to transfer expertise gained in one system to support interaction with another. Designers must also work out which of their users' tasks must be supported by the new system and its interface. This is complicated by the Hawthorne effect mentioned earlier; observations of users performing existing tasks may be influenced by the very fact that users know they are being watched. At the end of this stage of analysis, designers should have a concrete list of objectives for their system. For instance, "it should take no more than two hours for a novice user to learn how to add a new booking to the system". These objectives should be specific enough that they can be tested once the final system has been built.

Design and Specification. This involves the identification of a number of alternative options that a designer might use to satisfy the requirements mentioned above. For example, one option might be to use a form-based interface. Another might be to use icons and graphics rather than textual fields. The designer must then identify the criteria that either support or weaken these options. For instance, if users must enter data at high speeds then touch typists might prefer a form-based style of interaction to a graphical interface. If the system is to be part of a "walk-up-and-use" booth then icons might be less intimidating. Such criteria are often difficult to verify or check. Pencil and paper prototypes might, therefore, be used to determine whether novice users did indeed prefer icons. These prototypes are simply mocked up using pieces of cardboard. They are low cost and help to elicit user feedback without the overheads of a full implementation. By the end of this stage, designers should have a clear idea of what their interface must do.
However, they may have no clear notion of how it will be constructed. Ideally, this stage would not consider the data structures, objects and classes, or control flow that must be considered by a final implementation.

Implementation. This involves the development of the interface. In a user centred approach, it is important that this takes place ALONGSIDE the development of the application. It is a difficult, almost impossible, task to retrofit a good interface onto an existing application. One reason for this is that the interface designer is forced to map the abstract data types of the programmer into a language that the user will understand. The further apart these two languages are, the more difficult this translation task becomes. Close user participation helps to ensure that the users' concepts and concerns guide the implementation, and hence simplifies the task of user interface development. By the end of this stage, designers should have a product that is ready for summative evaluation.

Summative Evaluation. This takes place at the end (summit) of the development life cycle. It includes acceptance testing. This usually involves a demonstration that the system actually achieves all of the requirements identified at the beginning of the project. Tests may be devised to show that "it should take no more than two hours for a novice user to learn how to add a new booking to the system". Of course, such tests must be carefully constructed and controlled. Success or failure may depend on who the novice is and whether or not the test is conducted amongst the noise and distractions of an office. It is important to distinguish such exercises from formative evaluation. This is used during development to help form the designer's ideas. Testing different design options using pencil and paper prototypes would be an example of formative evaluation.

Maintenance.
This stage is often neglected; however, it is seldom the case that interactive systems are perfectly developed first time round. The large number of programs that are identified as Version 5.3 or Version 6.8 is indicative of the number of modifications that many pieces of software must go through. In many ways this is a natural process, as users think of new things to do with their systems. It may also indicate that there have been many previous failures on the road to the current version.

It is important to understand that the stages described in previous paragraphs do NOT provide a straightforward route from requirements through to installation and maintenance. Each of the stages may force revisions to previous activities. For example, we have already argued that users find it difficult to explain what a system ought to do. As a result, many requirements only emerge after an initial prototype has been shown to the user. This implies that designers should make the time between requirements analysis and design as short as possible so that they can quickly obtain user feedback about their initial ideas. This is a central concept behind what has become known as RAD (Rapid Application Development).

Guidelines and Standards

It is important to identify the mechanisms or techniques that can be used to introduce HCI into the software development lifecycle. The pragmatics of the software industry mean that many companies cannot afford to employ full-time usability consultants. As a result, most commercial organisations have introduced HCI through the use of guidelines. These are lists of rules about when and where to do things, or not to do things, in an interface. For instance, a guideline might be not to have more than ten items in a menu. Another guideline might be to avoid clutter on a graphical user interface. This approach is declining as more and more organisations employ teams of human factors specialists.
It is, however, important to have some understanding of what these guidelines are like. The most famous set of guidelines was developed by Smith and Mosier on behalf of the Mitre Corporation. Unsurprisingly, these are known as the Smith and Mosier guidelines. They now include several thousand rules and you really need a hypertext tool to use them. They have been adapted for use by the US military and by NASA. An example of one of Smith and Mosier's guidelines is:

1.6.2 DATA ENTRY: Graphics - Drawing
When users must create symmetric graphic elements, provide a means for specifying a reflection (mirror image) of existing elements.

Several companies have also developed their own style guides. These are similar to the Smith and Mosier guidelines because they simply list dos and don'ts for interface design. They are slightly different from Smith and Mosier because there are commercial motivations behind them; they are not simply intended to enhance the usability of the interface. Apple's guidelines help you to produce a system that looks and feels like other Apple products. Microsoft's Windows guidelines help you to produce a system that looks and feels like a Windows product. The point here is that once your workforce have become accustomed to one style of interface then you will be encouraged to buy other systems that are consistent with the first one. In other words, you will buy more Microsoft products, more Apple products and so on. This proprietorial approach is less evident in the guidelines that have been produced specifically for the web. The philosophy of the web that stresses the importance of platform independence implies that designers must produce pages that support their users irrespective of whether they are being downloaded onto a Mac, PC or Unix system:

IBM Web Guidelines
Sun Web Guidelines

There are, however, further problems. Guidelines and style guides help you to identify good and bad options for your interface.
They also restrict the range of techniques that you can use and still 'conform' to a particular style. Further problems arise because guidelines can be very difficult to apply. In many ways, they are only really as good as the person who is using them. This is a critical point because many companies view guidelines as a panacea. The way to improve an interface is not just to draft a set of rules about how many menu items to use, what colours make good backgrounds and so on. Users' tasks and basic psychological characteristics MUST be taken into account. Unless you understand these factors, guidelines have no meaning. For example, the Apple guidelines state that:

``People rely on the standard Macintosh user interface for consistency. Don't copy other platforms' user interface elements or behaviours in the Macintosh because they may confuse users who aren't familiar with them.''

This simple guideline glosses over all of the important points about the differences between novices and experts. Using inconsistent features removes an expert's skills in using the previous system. Unless the programmer/designer understands such additional justifications, the true importance of the guideline may be lost. Apple recognise some of the problems in using guidelines when they state that:

``There are times when the standard user interface doesn't cover the needs of your application. This is true in the following situations: you are creating a new feature for which no element or behaviour exists. In this case you can extend the Macintosh user interface in a prescribed way; an existing element does almost everything you need it to, but a little modification that improves its function makes the difference to your application...''

The Apple Guidelines go on to present a number of more generic guidelines, or principles, that can then be used to guide these novel interfaces.
The problem with guidelines is that you need a large number of rules in order to cover all of the possible interface problems that might crop up. Also, it is difficult to know what to do when you have to break a guideline. For instance, what do you do if you have a menu of eleven items?

More recently, companies have been concerned to document the steps that they take to elicit users' requirements and to test the system. This has been largely brought about by the movement to conform with the International Standards Organisation's ISO 9000 standard. This sets out approved procedures for software development. Many software purchasers now expect their suppliers to be 'ISO 9000 conformant'. For the last decade or so, there has been a move to introduce standards into interface design. Initially, these focussed upon when and where to use particular pieces of hardware. For example, Systems Concepts reviewed the British Standards Institution's standards in this area as follows:

BS EN 29241-1:1993 (ISO 9241) Part 1 General Introduction
The purpose of this standard is to introduce the multi-part standard for the ergonomic requirements for the use of visual display terminals for office tasks and explain some of the basic underlying principles. It describes the basis of the user performance approach and gives an overview of all parts currently published and of the anticipated content of those in preparation. It then provides some guidance on how to use the standard and describes how conformance to parts of BS EN 29241 should be reported.

Not exactly gripping stuff, but if you are interested in recent work in this area then take a look at Systems Concepts' review of usability standards.

Norman's Models of Interaction

Donald Norman is one of the leading researchers within the field of human computer interaction. One of his most important ideas is that human-computer interaction is based around two gulfs that separate the user from their system.
Norman's model is illustrated in this diagram.

The Gulf Of Execution

Users approach a system with a set of goals: `print the letter', `send mail to my boss' and so on. At a more detailed level they develop intentions: `I'll send the mail now'. These intentions have to be broken down into a series of action specifications. By this we mean the steps that the user has to go through to satisfy their intentions: first I'll have to open the mail program, then I'll have to edit a new message... These steps must be performed using the interface facilities provided by the system. The model would be of little benefit if it didn't provide designers with a framework for understanding why things occasionally `go wrong' in user interfaces. For example, problems might arise if users have inappropriate goals and intentions: `I'll print out an executable file' or `I'll remove my operating system'. Other problems can arise through inappropriate action specifications: `First, I'll delete this old file, then I'll see if I can find my really important collection of e-mail addresses'. Finally, there may be problems with the interface mechanisms themselves. The bottom line from this analysis is that in order to understand good and effective interface design we must also understand the goals and intentions of our users. Mistakes, errors and frustration can occur even if we have high-quality interaction mechanisms.

The Gulf Of Evaluation

The second component of the model is the gulf of evaluation. Once the user has issued a command, they must determine whether they have achieved the desired result. They must do this by observing some change in the state of the display. For instance, an icon may appear, a dialogue box may be presented or the prompt may return. Interface designers must not only implement such changes, they must also carefully consider whether users will be able to interpret them correctly. It's no good presenting an icon if nobody knows what it means.
Even if the user can interpret the display correctly, they must then be able to evaluate whether their command has been successful. For example, when I print a document from my PC I occasionally get a message stating `Memory violation during printing'. I can interpret this as a message about a problem with my print job. I do not have sufficient information, however, to evaluate whether this is a serious problem or not without referring to manuals and on-line documentation. As with the gulf of execution, the gulf of evaluation illustrates the point that usability problems can occur even in systems with well designed displays. If users cannot interpret and evaluate the information on their screen then issues of presentation and layout are irrelevant. The importance of Norman's model is that it focusses the designer's attention upon the user's perspective during interaction. Users have to map their goals and intentions into the language supported by the system. In this view, the Java implementation techniques that we are about to discuss are of secondary importance to the design skills that designers must exploit when constructing an interface. It doesn't matter what sophisticated programming techniques are used: if people cannot work out what input to provide, or if they cannot understand the displays provided by a system, then the interface has failed.

Interface Development in Java

Douglas Kramer's Java White Paper describes Java in the following terms:

The computer world currently has many platforms, among them Microsoft Windows, Macintosh, OS/2, UNIX® and NetWare®; software must be compiled separately to run on each platform. The binary file for an application that runs on one platform cannot run on another platform, because the binary file is machine-specific. The Java Platform is a new software platform for delivering and running highly interactive, dynamic, and secure applets and applications on networked computer systems.
But what sets the Java Platform apart is that it sits on top of these other platforms, and compiles to bytecodes, which are not specific to any physical machine, but are machine instructions for a virtual machine. A program written in the Java Language compiles to a bytecode file that can run wherever the Java Platform is present, on any underlying operating system. In other words, the same exact file can run on any operating system that is running the Java Platform. This portability is possible because at the core of the Java Platform is the Java Virtual Machine. While each underlying platform has its own implementation of the Java Virtual Machine, there is only one virtual machine specification. Because of this, the Java Platform can provide a standard, uniform programming interface to applets and applications on any hardware. The Java Platform is therefore ideal for the Internet, where one program should be capable of running on any computer in the world. The Java Platform is designed to provide this "Write Once, Run Anywhere" (SM) capability.

This idea that you should be able to write a Java program and run it on any number of architectures poses particular problems for interactive systems, because the look and feel of these systems can be very different. For example, here is the interface to Word running under Windows NT and here is the interface to the same application running on a Macintosh. If you look at them side by side in two different browser windows you can spot a large number of differences. For example, the NT version uses the Control key (CTRL) to access keyboard accelerators; these are shown on the right of menu options and allow users to select items by pressing keys rather than forcing them to move their hand from the keyboard to the mouse. In contrast, the Macintosh interface uses the Apple key - shown by a clover shape of interlocking circles around a square.
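This difference in accelerator keys is one that the Java libraries can hide for you. As a sketch of the idea (the class name is our own, for illustration), the AWT 1.1 class java.awt.MenuShortcut names only the key for a shortcut; the toolkit supplies the platform's accelerator modifier at run time, so the same code is labelled with CTRL under Windows or Motif and with the Apple key on a Macintosh:

```java
import java.awt.MenuShortcut;
import java.awt.event.KeyEvent;

public class ShortcutSketch {
    public static void main(String[] args) {
        // Only the key is named here; the modifier (CTRL, Apple key)
        // is chosen by the toolkit on the platform where the program runs.
        MenuShortcut open = new MenuShortcut(KeyEvent.VK_O);
        System.out.println("key code: " + open.getKey());
        System.out.println("uses Shift: " + open.usesShiftModifier());
        // Attaching the shortcut to a menu item (this line needs a display):
        //   MenuItem item = new MenuItem("Open...", open);
    }
}
```

The shortcut object itself is a plain value and can be created without a display; only attaching it to a visible menu involves the native window manager.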
There are further differences in the presentation of the windows that enclose the applications. These differences do not occur by chance; the Apple and Microsoft operating systems were designed by different people and they were not intended to look and feel the same. Therefore, if Java is to provide a write-once, run-anywhere approach to user interface implementation then its run-time system must translate generic interface components into the particular look and feel of the platform that the program is running on. This is illustrated by three buttons that were generated by the same Java code on a Unix machine, a PC running NT and a Macintosh. Notice that the PC and Unix/Motif versions look almost identical but that they are both very different from that of the Macintosh. Java's promise of adapting a user interface to the platform that it is running on is a very good thing, in principle. When a Java program is run on a Macintosh it will provide its users with an interface that looks and feels like a Macintosh interface; this is important if users are to transfer the skills they have built up using other Mac applications to the new interface that you have written. In a similar way, one would not expect to find an interface designed for a Macintosh if you were working on a PC.

The Abstract Window Toolkit (AWT)

The Abstract Window Toolkit (AWT) forms part of the Java Development Kit (JDK). There is an AWT home page. It is probably the most widely used means of constructing graphical user interfaces in Java, although significant numbers of people are using Swing (see later). The AWT environment will provide the focus for the rest of this course. An important benefit of AWT is that it is part of the standard Java distribution and so programmers can assume that browsers and Java Virtual Machines will support its components. However, there are now several versions of the AWT environment (1.0, 1.1 and more recently enhancements in the Java 2 SDK v1.3).
Browsers that support AWT programs up to version 1.0 will not support all of the features of 1.1. This course will focus on version 1.1 but will provide examples of 1.0. This diagram gives you some idea of the way in which AWT relates to particular architectures. One of the key points about this diagram is that AWT uses the existing window managers that have been written for particular platforms. Window managers are programs that are responsible for updating the screen. They translate calls from application programs into the low-level instructions necessary to draw icons, buttons etc onto the screen. Window managers also pass on user input to application programs as it is received from the operating system. Window managers are platform-specific because they must deal with relatively low-level operating system features; the facilities provided by MacOS will be different from those provided by UNIX and so on. AWT can, therefore, be seen as a buffer between your code and the particular facilities provided by the window managers on a number of different platforms. This is important because you do not need to learn how to translate your user interface code into the calls provided by many different window managers. However, it is possible in Java to directly access the functions of a particular windowing system without going through AWT or similar interfaces. If you do this then there is no guarantee that your program will run on other platforms that do not share the same features of your original window manager. Just to give you some idea of what we are talking about, here is a very simple Java applet that makes use of AWT.

/*
 * Copyright (c) 1995-1997 Sun Microsystems, Inc. All Rights Reserved.
 *
 * Permission to use, copy, modify, and distribute this software
 * and its documentation for NON-COMMERCIAL purposes and without
 * fee is hereby granted provided that this copyright notice
 * appears in all copies. Please refer to the file "copyright.html"
 * for further important copyright and licensing information.
 *
 * SUN MAKES NO REPRESENTATIONS OR WARRANTIES ABOUT THE SUITABILITY OF
 * THE SOFTWARE, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
 * TO THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
 * PARTICULAR PURPOSE, OR NON-INFRINGEMENT. SUN SHALL NOT BE LIABLE FOR
 * ANY DAMAGES SUFFERED BY LICENSEE AS A RESULT OF USING, MODIFYING OR
 * DISTRIBUTING THIS SOFTWARE OR ITS DERIVATIVES.
 */

/*
 * 1.1 version.
 */

import java.awt.*;                       /* Notice - this links to AWT classes */
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
import java.applet.Applet;

public class ButtonDemo extends Applet implements ActionListener {
    Button b1, b2, b3;                   /* AWT provides a Button class */
    static final String DISABLE = "disable";
    static final String ENABLE = "enable";

    public void init() {
        b1 = new Button();
        b1.setLabel("Disable middle button");
        b1.setActionCommand(DISABLE);

        b2 = new Button("Middle button");

        b3 = new Button("Enable middle button");
        b3.setEnabled(false);
        b3.setActionCommand(ENABLE);

        //Listen for actions on buttons 1 and 3.
        b1.addActionListener(this);
        b3.addActionListener(this);

        //Add Components to the Applet, using the default FlowLayout.
        add(b1);
        add(b2);
        add(b3);
    }

    public void actionPerformed(ActionEvent e) {
        String command = e.getActionCommand();
        if (command.equals(DISABLE)) {   //They clicked "Disable middle button"
            b2.setEnabled(false);
            b1.setEnabled(false);
            b3.setEnabled(true);
        } else {                         //They clicked "Enable middle button"
            b2.setEnabled(true);
            b1.setEnabled(true);
            b3.setEnabled(false);
        }
    }
}

Here is the AWT 1.0 code. Although we will be focussing on AWT, this is not the end of the story. This is an extremely dynamic area. Many commercial and academic groups are developing systems that reduce the complexity of constructing graphical user interfaces.
Most of these systems are built on top of environments that look very similar to AWT and so it is relatively easy to transfer skills gained in AWT to these new systems. The following section provides a brief overview of one extension to AWT.

Swing

Swing provides a set of classes that extend those provided by AWT. It is NOT intended to replace AWT. Both provide object-oriented classes to help programmers write graphical user interfaces for their Java programs. AWT applications will still run if you later decide to introduce elements from the Swing component set. More details about the relationship between AWT and Swing are provided in this article. One of the most important differences between Swing and AWT is that Swing components don't borrow any native code from the platforms on which they run. In order to understand this point, it is important to explain something that was missing in the previous diagram. In this idealised architecture, AWT calls are mapped directly to the native window managers. Swing does not use this intermediate stage. Instead, Swing provides its own components that are written from scratch. One intention behind this is to develop a cross-platform style of user interface. This new `look and feel' is described in the following document. If this catches on then it will be more important for programmers to be consistent with the Java look and feel than it will be for them to be consistent with the Macintosh or Windows style guides, mentioned above. Having said this, Swing also provides platform-specific facilities if programmers want to retain the look and feel of an existing interface. This course focusses on AWT partly because AWT provides the foundations for Swing. A further justification is that the platform-independent look and feel is less widely used than the platform-specific approach initially adopted by AWT. Finally, if you have mastered the AWT classes then it is relatively easy to pick up the Swing libraries.
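To give a flavour of the Swing component set, the sketch below rebuilds the three-button dialogue from the earlier AWT applet using Swing's lightweight JButton class. It is not taken from the course materials; the class and field names are invented for illustration, and a complete program would normally place the panel inside a JFrame or JApplet.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JPanel;

public class SwingButtonDemo {
    // Swing's JButton paints itself rather than delegating to a native
    // peer, so by default these buttons look the same on every platform.
    final JButton disable = new JButton("Disable middle button");
    final JButton middle  = new JButton("Middle button");
    final JButton enable  = new JButton("Enable middle button");
    final JPanel panel = new JPanel();   // default FlowLayout, as in the AWT version

    public SwingButtonDemo() {
        enable.setEnabled(false);
        disable.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) { setMiddleEnabled(false); }
        });
        enable.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) { setMiddleEnabled(true); }
        });
        panel.add(disable);
        panel.add(middle);
        panel.add(enable);
    }

    // Toggle the middle button and swap which control button is active.
    void setMiddleEnabled(boolean on) {
        middle.setEnabled(on);
        disable.setEnabled(on);
        enable.setEnabled(!on);
    }
}
```

Notice how closely the structure mirrors the AWT version: components are created, listeners registered and components added to a container in just the same way, which is one reason why AWT skills transfer readily to Swing.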
ActiveX

In the beginning there was OLE (Object Linking and Embedding). OLE allows one application to make use of services provided by another. For instance, a desk-top publishing system might send some text to a word processor or a picture to a bitmap editor using OLE. This was generalised to an architecture for distributed programming, Microsoft's Component Object Model (COM). COM can be seen as Microsoft's answer to Java. There are some important differences between COM and Java. For instance, it relies upon a radically different client-server architecture, described here. The Microsoft virtual machine automatically maps any Java object into a COM object and vice versa, so these differences may not be the disadvantage that they might at first appear. More information about the general approach can be found at the Microsoft web site. From an HCI perspective there are important differences between Java and ActiveX. Because Java depends on the Java virtual machine to run on particular platforms, performance is not spectacular. In contrast, ActiveX is really based on the Win32 (ie Windows) architecture. This means that it doesn't have the extra level of processing implied by the Java virtual machine and so will, typically, run faster. A further difference is that because ActiveX relies upon native features of the Windows environment, it is also possible to access native features of that platform - including file input and output. This is much harder to achieve in existing implementations of the Java security model. Finally, many Windows tools and applications can make use of ActiveX controls, so they aren't confined to your browser. ActiveX forms part of this more general COM architecture. In the Internet implementation, ActiveX includes controls for incremental rendering (ie slowly piecing together an image) and for code signing (ie, including security features). It remains to be seen how the battle between COM and Java will be resolved.
This course focusses on Java and so our emphasis is on AWT rather than ActiveX controls. There is an excellent introduction to ActiveX and COM on the Byte website.

VRML, Java3D and beyond...

This course focusses on "conventional" user interface development techniques. These are employed in the vast majority of interactive systems. However, there is a growing number of systems that exploit desktop virtual reality techniques. These interfaces provide their users with the impression of interacting in three dimensions without the use of additional hardware; gloves, helmets etc. The Virtual Reality Modeling Language (VRML) is the file format standard for 3D multimedia on the Internet. Its developers see it as a natural progression from the two-dimensional formats of HTML. VRML is a platform-independent language for composing 3D models from cones, spheres and cubes. These primitives are combined to create more complex scenes such as those shown in this image of Glasgow University's Hunterian Museum. With the advent of VRML 2.0 it is possible to generate and animate scenes that contain links to a wide variety of other information sources including videos, databases and other web pages. In order to view VRML models you need to have access to a browser such as Silicon Graphics' COSMO player. There is a wide variety of tools to help you generate VRML worlds. It is also possible to construct VRML worlds by hand. VRML files consist of collections of objects, called nodes:

Shape - e.g., cube, cone, ASCIIText.
Property - e.g., texture2, rotation, scale, which can be used to modify other nodes.
Groups - contain other nodes, e.g., separator, which ensures that the modifying effects of any property nodes it contains are only applied to other nodes within it. WWWAnchor is also a grouping node allowing hyper-linking.

For example, the following VRML code describes a tree. The first line is the header, required at the start of every VRML file:

#VRML V1.0 ascii
Separator {                 # start of grouping node
  Texture2 {                # property node within the group
    filename "bark.jpg"     # the image file to be used as a texture
  }
  Cylinder {                # shape node, modified by the Texture2 node
    radius 0.5
    height 4
  }
}

One of the problems with VRML is that it provides limited facilities for animating the three-dimensional worlds that you can create (using its own scripting language). As a result, programmers are often forced to use a link between Java and a particular browser (usually Cosmo) in order to update the information presented to the user. Java3D takes an alternative approach. Instead of starting with a modelling tool and linking to a programming language, this approach starts with Java and then extends it with facilities for rendering three-dimensional objects on a screen. Java3D is implemented on top of JDK1.2 and the lower-level, platform-independent graphics calls supported by OpenGL. It is designed for fast/parallel execution. This latter point is important because the appearance of hundreds of individual objects may have to be updated as users move through complex scenes. As far as implementation is concerned, development progresses by constructing a scene graph that describes all of the objects that are to be represented in the user interface. Java 3D provides an assortment of classes for making different types of 3D content:

Shapes and geometry
Lights and sounds
Fog and backgrounds
Groups and animations

Components of a scene graph are derived from base classes:

SceneGraphObject
Node
NodeComponent

Here is an excellent introduction to using Java3D. As with AWT, however, the technical details of interface development in VRML and Java3D are less important than DESIGNING a system that satisfies user requirements. Recent user interfaces that exploit desktop VR have had very mixed success. Many are simply gratuitous applications of flashy technology and are quickly discarded for more conventional approaches.
Here is a paper describing some of the design and evaluation problems. Until some of these problems can be resolved, the more conventional interfaces described in this course will continue to dominate the home and business markets.

An Introduction to Evaluation

Why bother to evaluate? There are a number of reasons that justify the use of evaluation techniques during the development of interactive systems. They provide benefits for many of the parties involved in development:

designers. Evaluation techniques enable designers to judge the adequacy of their designs. They provide evidence that can be used in the marketing of a product and can convince clients that a product meets their needs. They can also be used to inform the development process. For example, one means of deciding whether a graphical or a textual interface is best would be to run a trial evaluation on two prototypes.

clients. Evaluation techniques enable clients to make informed decisions about the software that they pay for. Evaluation tests can also be set as mile-stones in the development process. Practical implementations can be signed off as they pass the required stages.

users. Evaluation techniques provide users with the opportunity to voice their opinions and preferences. The aim of the exercise is largely to elicit their views. A secondary objective is to make them feel part of the development process.

The evaluation of user interfaces is closely linked to requirements elicitation. Like the techniques introduced earlier in the course, it is vital that designers have a clear set of objectives in mind before they start to evaluate an interactive system. For example, evaluation techniques might be used to find out about:

the user and their tasks. For example, evaluation techniques can be used to determine whether systems actually support user tasks in the manner predicted. Does the interface provide the relevant information when the user needs it?
Is the dialogue style appropriate for the level of expertise and confidence of the user population? Evaluation techniques can also be used to find out how long it will take users to learn how to operate the system and its interface. Evaluation techniques can also provide evidence about likely errors and their associated frequencies. Finally, they can be used in conjunction with questionnaires etc to establish levels of user satisfaction with potential interfaces.

the system and interaction devices. Evaluation techniques can be used to find out if users can successfully operate system hardware. This includes physical interaction devices but also incorporates processors and disks. Delays in the refresh rate for graphical systems can cause severe usability problems when moving between machines. It is for this reason that many interface development companies advocate the use of low end machines for user testing. It's a common phenomenon to find that all is well on the designer's high spec. system but that the interface is completely unusable on the typical customer configuration.

the working environment and supporting documentation. Finally, evaluations can be carried out in the eventual work-place to determine whether the users' environment will cause any problems. Often these final stages of analysis are high-cost. Any changes discovered at this point will be hard to fix. In many cases, it is easier to document the problem and provide support through training.

The critical point about evaluation is that, like software testing, the longer you leave it, the worse it gets. If you avoid user contact during the design phase, then a large number of usability problems are liable to emerge when you do eventually deliver the system to users. The design cycle shown in the previous slide uses interface evaluation to drive the development cycle. This may be a little extreme but it does emphasise the need for sustained contact with target users.
It also illustrates the point that there is little good in evaluating an interface if we are unwilling to change the system as a result of the findings. By the `system', we do not necessarily mean the interface itself. The problems that are uncovered during evaluation can be corrected through training and documentation. Neither of these options is `cost free'.

When To Evaluate

Formative Evaluation

It is possible to identify two stages in the evaluation of user interfaces. The first is formative because it helps to guide or form the decisions that must be made during the development of an interactive system. In a sense, the requirements elicitation techniques of previous sections were providing early formative evaluation. If formative evaluation is to guide development then it must be conducted at regular intervals during the design cycle. This implies that low-cost techniques should be used whenever possible. Pencil and paper prototypes provide a useful means of achieving this. Alternatively, there is a range of prototyping tools that can be used to provide feedback on potential screen layouts and dialogue structures. Formative evaluation can be used to identify the difficulties that arise when users start to operate new systems. As mentioned, the introduction of new tools can change user tasks. This means that interface design is essentially an iterative task as designers get closer and closer to the final delivery of the full system.

Summative Evaluation

In contrast to formative evaluation, summative evaluation takes place at the end of the design cycle. It helps developers and clients to make the final judgements about the finished system. Whereas formative evaluation tends to be rather exploratory, summative evaluation is often focussed upon one or two major issues. In this sense, it is like the comparison between general software testing and more specific conformance testing.
In the case of user interfaces, designers will be anxious to demonstrate that their systems meet company and international standards as well as the full contractual requirements. The bottom line for summative evaluation should be to demonstrate that people can actually use the system in their working setting. This necessarily involves acceptance testing. If sufficient formative evaluation has been performed then this may be a trivial task. If not then this becomes a critical stage in development. A friend of mine had to re-design an automated production system where the night-staff kept reverting to manual control. As a stop-gap, the production manager had to move a camp-bed into the supervisor's area to check that the system had not been switched off. Clearly, such problems indicate wider failings in the development process if they only emerge at the acceptance testing stage.

How To Evaluate

The following pages introduce the main approaches to evaluation.

Scenario-Based Evaluation

One of the biggest issues to be decided upon before using any evaluation technique is `what do we evaluate?'. Recent interest has focused upon the use of scenarios, or sample traces of interaction, to drive both the design and evaluation of interactive systems. This approach forces designers to identify key tasks in the requirements elicitation stage. As design progresses, these tasks are used to form a case book against which any potential interface is assessed. Evaluation continues by showing the user what it would be like to complete these standard tests using each of the interfaces. Typically, they are asked to comment on the proposed design in an informal way. This can be done by presenting them with sketches or simple mock-ups of the final system. The benefit of scenarios is that different design options can be evaluated against a common test suite. Users are then in a good position to provide focussed feedback about the use of the system to perform critical tasks.
Direct comparisons can be made between the alternative designs. Scenarios also have the advantage that they help to identify and test hypotheses early in the development cycle. This technique can be used effectively with pencil and paper prototypes. The problems with this approach are that it can focus designers' attention upon a small selection of tasks. Some application functionality may remain untested while users become all too familiar with a small set of examples. A further limitation is that it is difficult to derive hard empirical data from the use of scenario-based techniques. In order to do this they must be used in conjunction with other approaches such as the more rigorous and formal experimental techniques.

Experimental Techniques

The main difference between the various approaches to interface evaluation is the degree to which designers must constrain the subject's working environment. In experimental techniques, there is an attempt to introduce the empirical techniques of scientific disciplines. It is, therefore, important to identify a hypothesis or argument to be tested. The next step in this approach is to devise an appropriate experimental method. Typically, this will involve focusing upon some small portion of the final interface. Subjects will be asked to perform simple tasks that can be observed over time. In order to avoid any outside influences, tests will typically be conducted under laboratory conditions; away from telephones, faxes, other operators etc. The experimenter must not directly interact with the user in case they bias the results. The intention is to derive some measurable observations that can be analysed using statistical techniques. In order for this approach to be successful, it usually requires specialist skills in HCI development or experimental psychology. There are some notable examples that have demonstrated the success of this approach.
For instance, the cockpit instrumentation on Boeing 727s was blamed for numerous crashes. One of Boeing's employees, Conrad Kraft, conducted a series of laboratory simulations to determine the causes of these problems. He couldn't do tests on real aircraft and so he used a mixture of low-fidelity cardboard rigs and higher quality prototypes. In the laboratory he was able to demonstrate that pilots over-estimated their altitude in particular attitudes when flying over dark terrain. This led to widespread changes in the way that all commercial aircraft support ground proximity warning systems. Similar approaches have been used to demonstrate that thumb-wheel devices reduce error rates in patient monitoring systems when compared to standard input devices such as mice and keyboards. There are a number of limitations with the experimental approach to evaluation. For instance, by excluding distractions it is extremely likely that designers will create a false environment. This means that the results obtained in a lab setting may not be useful during `real-world interaction'. A related point is that by testing limited hypotheses, it may not be cost effective to perform this `classic' form of interface evaluation. Designers may miss many more important problems that are not covered by the more constrained issues which they do examine. Finally, these techniques are not useful if designers only require formative evaluation for half-formed hypotheses. It is little use attempting to gain measurable results if you are uncertain what it is that you are looking for.

Cooperative evaluation techniques.

Laboratory-based evaluation techniques are useful in the final stages of summative evaluation. They can be used to demonstrate, for instance, that measurably fewer errors are made with the new system than with the old. In contrast, cooperative evaluation techniques (sometimes referred to as `think-aloud' evaluation) are particularly useful during the formative stages of design.
They are less clearly hypothesis-driven and are an extremely good means of eliciting user feedback on partial implementations. The approach is extremely simple. The experimenter sits with the user while they work their way through a series of tasks. This can occur in the working context or in a quiet room away from the `shop-floor'. Designers can either use pencil and paper prototyping techniques or may use partial implementations of the final interface. The experimenter is free to talk to the user as they work on the tasks but it is obviously important that they should not be too much of a distraction. If the user requires help then the designer should offer it and note down the context in which the problem arose, for further reference. The main point about this exercise is that the subject should vocalise their thoughts as they work with the system. This can seem strange at first but users quickly adapt. It is important that records are kept of these observations, either by keeping notes or by recording the sessions for later analysis. This low-cost technique is exceptionally good for providing rough and ready feedback. Users feel directly involved in the development process. This often contrasts with the more experimental approaches where users feel constrained by the rules of testing. Most designers will already be using elements of this approach in their working practices. It is important to note, however, that vocalisations are encouraged, recorded and analysed in a rigorous manner. Cooperative evaluation should not simply be an ad hoc walk-through. The limitations of cooperative evaluation are that it provides qualitative feedback and not the measurable results of empirical science. In other words, the process produces opinions and not numbers. Cooperative evaluation is extremely bad if designers are unaware of the political and other pressures that might bias a user's responses. This is why so much time is spent discussing different attitudes towards development.
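Since the value of a think-aloud session depends on keeping systematic records, one simple software aid is to timestamp each observation as it is noted. The class below is invented purely for illustration; it is only a sketch of the kind of logging support an experimenter might run alongside a prototype.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionLog {
    // One timestamped observation from a cooperative-evaluation session.
    public static class Entry {
        public final long elapsedMillis;   // time since the session began
        public final String note;          // the experimenter's observation
        Entry(long elapsedMillis, String note) {
            this.elapsedMillis = elapsedMillis;
            this.note = note;
        }
    }

    private final long start = System.currentTimeMillis();
    private final List<Entry> entries = new ArrayList<Entry>();

    // Record a vocalisation, or the context in which the user needed help.
    public void record(String note) {
        entries.add(new Entry(System.currentTimeMillis() - start, note));
    }

    public List<Entry> entries() {
        return entries;
    }
}
```

A call such as log.record("asked for help finding the print dialogue") preserves both the observation and when it occurred, so that the session can be analysed rigorously afterwards rather than treated as an ad hoc walk-through.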
Observational techniques.

There has been a sudden increase in interest in this area over the past three or four years. This has largely been in response to the growing realisation that the laboratory techniques of experimental psychology cannot easily be used to investigate unconstrained use of real-world systems. In its purest form, the observational technique of ethnomethodology suffers from exactly the opposite problem. It is so obsessed with the tasks of daily life that it is difficult to establish any hypothesis at all. Briefly, ethnomethodology requires that a neutral observer should enter the users' working lives in an unobtrusive manner. They should `go in' without any hypotheses and simply record what they see, although the recording process may itself bias results. The situation is similar to that of sociologists and ethnologists visiting remote tribes in order to observe their customs before they make contact with modern technology. The benefit of this approach is that it provides lots of useful feedback during an initial requirements analysis. In complex situations, it may be difficult to form hypotheses about users' tasks until designers have a clear understanding of the working problems that face their users. This technique avoids the problems of alienation and irritation that can be created by the unthinking use of interviews and questionnaires. The problems with this approach are that it requires a considerable amount of skill. To enter a working context, observe working practices and yet not affect users' tasks seems to be an impossible aim. At present, no more pragmatic variant of this approach has emerged in the way that cooperative evaluation developed from experimental evaluation. There have, however, been some well-documented successes for this approach. Lucy Suchman was able to produce important evidence about the design of photocopiers by simply recording the many problems that users had with them.

The Kitchen Sink Approach.
The final evaluation technique can be termed the `kitchen sink approach'. Here you explicitly recognise that interface design is a major priority for product development. Resources must be allocated in proportion to this commitment. Scenarios may be obtained from questionnaires and interviews. Informal cooperative evaluation techniques might be used for formative analysis, while more structured laboratory experiments might be used to perform summative evaluation.
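The laboratory experiments mentioned above ultimately reduce interaction to numbers, such as task-completion times, that can be analysed statistically. The figures below are invented purely for illustration; the sketch simply computes the mean and sample standard deviation of completion times for two hypothetical designs.

```java
public class TaskTimes {
    // Mean of a set of task-completion times.
    static double mean(double[] xs) {
        double sum = 0.0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (divide by n - 1).
    static double stdDev(double[] xs) {
        double m = mean(xs);
        double ss = 0.0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical completion times, in seconds, for the same scenario
        // performed with two alternative interface designs.
        double[] designA = {12.1, 9.8, 11.4, 10.7, 13.0};
        double[] designB = {8.2, 7.9, 9.1, 8.6, 7.5};
        System.out.printf("Design A: mean %.2fs, sd %.2f%n",
                          mean(designA), stdDev(designA));
        System.out.printf("Design B: mean %.2fs, sd %.2f%n",
                          mean(designB), stdDev(designB));
    }
}
```

Whether any difference between the two means is genuine, rather than the result of chance, would then be assessed with a significance test before drawing conclusions about the competing designs.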