Using games and simulation to teach AI

Philip Hingston and Barbara Combes
School of Computer and Information Science
Edith Cowan University

In this paper, we report on the preliminary observations of an action research project that uses an animated competitive game with simulated physics to teach artificial intelligence techniques in an undergraduate computer science course. Students develop intelligent controllers for simulated vehicles, which compete with each other in a tournament. The simulation includes a graphical animation of the contests, and the students' solutions utilise an AI toolkit that provides animated displays showing the internal workings of their controllers in parallel with the simulation. The result is a learning experience that is motivational, engages students with the learning materials and helps them to develop mental models of the AI algorithms.

Introduction

There is an important and enduring relationship between artificial intelligence (AI) and games. Historically, games such as chess, backgammon, checkers, poker and, more recently, Go, have provided pivotal challenge problems for AI researchers, exposing and elucidating the nature of intelligence. It seems especially fitting, then, to use games to teach students about aspects of intelligence and how it may be artificially simulated.

Contemporary students seem less interested in traditional board and card games, and more interested in real time interactive strategy games such as Starcraft, massively multiplayer online role playing games such as The Saga of Ryzom, or first person shooters such as Unreal Tournament or Counterstrike. These games also use AI techniques to provide intelligent computer players, known as NPCs (non-player characters) or "bots" (short for robots), as opponents for human players. This presents an opportunity for AI educators to motivate students to learn about AI technologies by designing learning experiences around the use of AI in these kinds of games, while also introducing them to an important application area.

Interactive learning can exploit the facilities provided by technology to cater for diverse learning styles and individual differences. These learning experiences support constructivist learning pedagogy, where students build on prior and acquired knowledge to develop deeper meaning and understandings (About Learning, 2004). Learning materials that use constructivist principles have the capacity to engage students in open ended, inquiry based learning that encourages interaction with the learning materials as a major part of the learning process. Individual learning and motivational styles, such as those identified in the 4MAT System (Imaginative Learning, Analytic Learning, Common Sense Learning and Dynamic Learning; McCarthy, 2004), can also be built into online learning materials through the use of alternative pathways and activities (Combes & Ring, 2004). These considerations underpin the development of an AI programming toolkit that provides illustrative animated displays, and the creation of programming assignments where students use the toolkit to develop intelligent bots for a simple real time animated battle game with simulated physics.

The rest of this paper is structured as follows. The first section briefly reviews the historical relationship between AI and games.
The next section focuses on previously reported educational use of games for teaching AI, and relevant educational theory on learning styles and pedagogy. This is followed by a description of the simulated battle scenario and the task that was set for the students, as well as the main features of the AI toolkit. The final section presents a report of the students' response to the task and the learning experience.

AI and games

There are many good reviews of the use of AI in games (eg. Sweetser, 2002; Schaeffer & van den Herik, 2002b; see also many chapters in Schaeffer & van den Herik, 2002a). The description contained in this paper is a very brief overview and relates directly to a specific unit and group of students at Edith Cowan University (ECU).

There has always been a close connection between AI and games. Why is this so? Playing any game well requires a player to choose a course of action while taking into account the environment (the game situation) and the likely actions of other agents (the opponent or opponents), so as to maximise opportunities for achieving goals (winning the game). For non-trivial games, this may require the ability to plan and reason about the environment, other agents, and the effects of one's own actions on these things, while also adapting to changes (such as opponents' changing strategies). Planning, reasoning, and adapting: these are central problems in AI research.

Very early in the piece, Turing proposed the Turing test as an objective test of success for AI (Turing, 1950). In the version that has become folklore, the challenge is to create an artificial intelligence that can carry on a conversation with a human being, and deceive that person into believing they are conversing with another human being. This can be construed as a game of deceit between the AI and the human. Curiously, the theme of deception is common in games. Another well-known game based on deception is poker, which to some degree inspired Borel, von Neumann and others to develop formal game theory (von Neumann & Morgenstern, 1944). Deceiving one's opponent is important in many other games - consider spin bowling in cricket, direction of serve in tennis, and pitching speed in baseball as sporting examples, or parlour games like Diplomacy and Balderdash. Some evolutionary biology theorists champion the idea that the evolution of intelligence itself may have been driven by an 'arms race' involving the ability to deceive and the ability to detect deception (Byrne & Whiten, 1988; Whiten & Byrne, 1997).

The game most popularly associated with AI would have to be chess. Chess has been a challenge for AI researchers ever since Shannon first proposed an algorithm to play it (Shannon, 1950). This inspired fundamental developments such as heuristic search, and culminated in Deep Blue's famous victory over world champion Kasparov in 1997 (Campbell, Hoane & Hsu, 2002). Other board games that have successfully used AI methods include backgammon (using neural nets and reinforcement learning (Tesauro, 1995)) and checkers (using neural nets and evolutionary algorithms (Fogel, 2001)). The game of Go is currently receiving plenty of attention, with limited success so far (Muller, 2001). Another game that has become a challenge problem for AI is soccer, in the form of RoboCup (Noda, Matsubara, Hiraki & Frank, 1998). Another modern development is the rise of AI methods to provide intelligent computer opponents for human players in video games.
A more serious application that has a lot in common with these games is the use of bots in military training simulation games (Atkin, Westbrook & Cohen, 1999). This pervasive entanglement of AI and games makes game programming assignments especially effective in teaching AI. Educators sometimes report that games used as a teaching tool distract students' attention from the subject matter at hand (Lepper & Malone, 1987), but in an AI course, the game embodies the subject matter. Far from being a distraction, students are, in fact, following in the footsteps of the pioneers by studying AI through studying games.

Pedagogical considerations

Aside from the peculiar aptness of games for teaching AI, there are many pedagogical arguments to support the use of games in teaching in general, and programming in particular. In recreational computer games, players engage in processes such as proactive and anticipatory thinking, recursive thinking, organisation of information, general search heuristics, means-ends analysis, and generating alternative solution paths (Pillay, 2002). Players must have high level literacy skills, including text literacy, visual literacy and interpretive skills. They parallel process information, make choices and problem solve, while dealing with time constraints. Players require highly developed hand-eye coordination and use a variety of physical mechanisms to actually play the game. Computer games as educational tools also have an intrinsic motivational factor that encourages curiosity (Kumar, 2000) and creates the impression that the students are in control of their own learning.

Educators have long recognised that people learn best when they internalise new information along a continuum of perceiving and processing. Successful learning occurs when students experience first and then conceptualise understandings from their experiences (About Learning, 2004). In this case, students develop programs for their controllers, implement the AI program and simultaneously view a graphical animation of the result alongside 'viewers' provided by the AI toolkit. These viewers are animated displays of the internal workings of their controllers. Thus students can visualise and experience the results of their program on two levels: the programming in action, and operationally as a finished product in the game. Students can then make changes to their original program based on observations of the animation and the AI viewers. They are engaging with the learning materials and in charge of the learning process.

Such experiential learning opportunities enhance and enrich the learning experience, as students can immediately see the results of any changes made to the original programming. Running the animation simultaneously with the AI viewers actively engages students in the learning experience, promotes the development of higher order thinking skills and mental models, and encourages metacognition and reflection as they endeavour to improve their programming and their results in the game. The toolkit assists students to deepen their understandings of AI concepts and provides an example of real world applications during the learning experience. Visualisations such as the toolkit's AI viewers provide students with enhanced opportunities to engage and interact with the learning materials. The learning experience here is much more than the traditional educational experiences offered by computer aided learning programs.
The animation and the AI viewers offer a more immersive learning experience that provides students with an instantaneous feedback loop (Friedman, 1994). This learning by doing reflects real world situations where products are created, tested, evaluated and improved (Jayakanthan, 2002). Instant feedback via the AI viewers provides the mechanism whereby students can participate in a simulated action research cycle during the development of their program.

The phrase "competitive programming" has recently been coined by Lawrence (2004) to describe the technique of setting a student programming assignment in which student solutions compete with each other and with instructor authored solutions. The introduction of a competition between students at the end of this unit adds another real world context to the learning experience. Fraser (1999, p. 16) describes authentic assessment as "... assessment tasks that resemble skills, activities and functions in the real world". The simulation and assessment task is designed to encourage cooperative learning and provide opportunities for students to develop skills that will make them adaptive and flexible learners in the workplace. Malan notes that through this approach "assessment then becomes a learning experience in which learners are prepared to apply their knowledge, skills and values in an integrated manner" (2000, p. 26). "Inserting the students (or other human recruits) into the tournament brings about the 'human against machine' angle and serves to contextualise the tournament in an AI course" (Kumar, 2000). The competition also mimics the real world of games programming, where only the best game is developed for the marketplace. Authentic assessment tasks which have real world applications are more relevant to students and provide a more comprehensive approach to assessment.

The AI toolkit is an example of how computer games programming can be used to enhance and enrich the learning experience. The toolkit caters for a range of learning styles, and students have opportunities to engage and interact with the learning materials. They receive instant feedback on their learning and are involved in a simulated action research cycle that reflects real world practice. The inclusion of a competition at the end of the unit also mimics the real world of computer programming and acts as an added incentive for students to participate in the task and engage with the learning materials.

The toolkit

The first author has developed an artificial intelligence toolkit in Java, for use in student programming assignments. It includes functionality for fuzzy reasoning, evolutionary computation and artificial neural networks. The toolkit classes have been designed so that it is very simple to create displays showing the internal workings of the implemented algorithms. These displays are designed to be visually similar to the diagrams used in the unit text book (Negnevitsky, 2002). Students make use of these displays when debugging and fine tuning their programs. As a side effect, they continually build and reinforce their mental models of how the algorithms work.

For example, the toolkit contains classes for performing fuzzy reasoning. Fuzzy reasoning has been called 'computing with words' by its founder, Zadeh (1975a, 1975b, 1975c). Its purpose is to allow computers to perform 'approximate reasoning'.
For example, the statement 'If the road is slippery, you should drive slower' is a very imprecise one, but is nevertheless a useful piece of advice to a human being. People are adept at manipulating and using such imprecise or 'fuzzy' data. Computers, on the other hand, deal with extremely precise data. How can a computer make use of fuzzy data? Zadeh proposed a computational system based on 'fuzzy set theory', a mathematical generalisation of standard 'crisp' set theory. In this system, measurements made in the real world, ie. crisp values, are translated into membership values in appropriate fuzzy sets. This process is called 'fuzzification'. These fuzzy set membership values are manipulated using the rules of fuzzy set theory. Finally, the resulting fuzzy values are translated into crisp actions, decisions or predictions in the real world, a process called 'defuzzification'.

In our example, the English language statement given above would be written as a fuzzy rule:

    If road_condition is slippery Then speed is low.

The symbols road_condition and speed denote fuzzy or 'linguistic' variables, which would have associated fuzzy sets or 'linguistic values' like slippery and dry, or low, medium and high. A measurement of the coefficient of friction of the road surface would determine the degree of membership of road_condition in the fuzzy set slippery. The rule would then 'fire', adding weight to the proposition that speed should be low. This would then be combined with the outcomes from other fuzzy rules to determine a final crisp value for the speed of the car. One could imagine rules like this being used in a driver assistance system in a car. In fact, one of the most common uses for fuzzy systems is for the control of vehicles and other machines.

The toolkit has classes for fuzzy variables, fuzzy sets and fuzzy rules, allowing students to create programs that carry out fuzzy reasoning. When writing a program to solve a problem using fuzzy reasoning, a useful development strategy is to first write code to create each fuzzy variable and its associated fuzzy sets. Check that this is all correct, then add code that uses these variables and sets to create fuzzy rules. Check that these appear correct, and finally proceed with linking the fuzzy rules with the logic of the problem to be solved. The built in display methods provide a means to check the work at each stage.

In an example developed with the students during a lecture, a fuzzy rule was proposed for predicting a person's build from their height and sportiness. A fuzzy variable 'height' was constructed, with linguistic values 'very tall', 'tall', 'medium' and 'short'. Once the code to construct this fuzzy variable object was written, the variable could be examined by calling its display method, bringing up a 'viewer' window as shown in Figure 1. The viewer shows the crisp value of the variable, 1.75m, and the membership functions for each of the linguistic values (the viewer shows that the membership value for the linguistic value 'very tall' is 0.37). Clicking on the buttons along the bottom causes the viewer to switch between the corresponding linguistic values. The crisp value can be changed by editing the value in the text box, which causes the membership values to be recalculated and the display to change. The dynamic, interactive nature of the display is not only useful for debugging, but it also provides students with a visually based mental model of fuzzy sets and fuzzy variables.

Figure 1: Viewer for a fuzzy variable.
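To make the fuzzification and rule firing steps concrete, the following is a minimal, self-contained Java sketch of the lecture example. It is not the toolkit's actual API, which is not documented in this paper: the class hand-rolls triangular membership functions, the breakpoints are invented, and the sportiness membership is simply assumed, chosen here to reproduce the firing strength of 0.33 seen in Figure 2.

    public class FuzzyHeightSketch {

        // Triangular membership function with feet at a and c and peak at b.
        static double tri(double x, double a, double b, double c) {
            if (x <= a || x >= c) return 0.0;
            return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
        }

        public static void main(String[] args) {
            double height = 1.75;  // crisp value, as in Figure 1

            // Fuzzification: memberships of two of height's linguistic values.
            // (Breakpoints are invented; the lecture's real membership
            // functions gave 'very tall' a membership of 0.37 at 1.75m.)
            double tall     = tri(height, 1.60, 1.80, 2.00);  // 0.75
            double veryTall = tri(height, 1.70, 1.95, 2.20);  // 0.20

            // Rule firing, e.g. "If height is tall And sportiness is high
            // Then build is athletic" (an assumed rule for this sketch).
            // With Mamdani-style reasoning, the firing strength is the
            // minimum of the condition memberships.
            double sportinessHigh = 0.33;  // assumed membership value
            double firing = Math.min(tall, sportinessHigh);

            System.out.printf("tall=%.2f veryTall=%.2f firing=%.2f%n",
                    tall, veryTall, firing);
        }
    }

In the toolkit, calls to the objects' display methods at each of these steps would take the place of the printouts, bringing up the viewers shown in Figures 1 and 2; defuzzification of the fired consequents would then yield the final crisp value.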
The fuzzy variable 'sportiness' was added and checked in the same way. These fuzzy variables were then used in code that created fuzzy rules. Each fuzzy rule is also an object with its own display method, which brings up a viewer like the one in Figure 2. The viewer shows the membership values for the two linguistic values in the rule condition on the left, shows their combination to produce a rule firing strength of 0.33 in the centre, and shows the scaling of the rule consequent on the right (the red triangle). Once again the display is dynamic: changing the crisp values of the fuzzy variables results in the update of the rule viewer display.

Figure 2: Viewer for a fuzzy rule.

The assignment

The assignment is one component of assessment in an undergraduate unit on Intelligent Systems, covering fuzzy reasoning, evolutionary computation and artificial neural networks, and touching briefly on artificial life, swarm intelligence and evolutionary game theory. Programming examples in Java, featuring the AI toolkit, are woven through lectures and workshop tasks.

For the assignment task, students were asked to write a control program for a 'saucer' in a simulated battle, using fuzzy rules. The simulation was based very loosely on Mathew Nelson's tank battle simulation, Robocode (Li, 2004; Robocode, 2002), although much simplified and modified. Saucers move around a rectangular battlefield. Each saucer has a limited supply of energy. Saucers can change their direction of motion and speed, and can fire 'photon blasts' at enemy saucers. Moving, changing direction, and firing all use up energy. Being hit by an opponent's blast also uses up energy. Saucers that exhaust their energy supply 'die', and the winning saucer is the one that outlasts its enemies.

The students' task was to write a single Java class, implementing methods to accept sensor data giving details of the enemy saucers and other flying objects on the battlefield, and methods to control the speed of the saucer, to steer it and to fire photon blasts. In the first stage, students were allowed to work in groups, and their aim was to defeat three saucers with controllers written by the first author. These were: Simple, a controller that makes random decisions; Dodger, a controller that tries to avoid the enemy by running away and unpredictably changing direction; and Berzerk, a controller that rushes headlong at the enemy with all guns blazing. The authors have no idea what an optimal strategy might be, if one exists.

After this initial stage, the simulation was modified to incorporate changes suggested by the students, and students had to work individually to write a controller for the modified simulation. Two new kinds of objects were added: 'space rocks', which fly about the battlefield, bouncing off the sides and anything else they hit; and 'power ups', which provide extra energy to any saucer that flies into them. In addition, the competition was changed from a one on one contest against instructor authored opponents to a tournament in which up to 10 student authored controllers competed against each other at any one time. At several points before final submission, tournaments were run to allow students to see how their solutions performed against other students' solutions.
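Before turning to Figure 3, a hypothetical sketch may help convey the shape of such a controller class. The simulation's real class and method names are not given in this paper, so every name and threshold below is an assumption made for illustration; a real solution would derive these decisions from fuzzy rules rather than the hard-coded comparisons used here.

    public class MySaucerController {

        // All names and thresholds are illustrative assumptions, not the
        // simulation's actual interface.
        private double enemyBearing  = 0.0;               // direction to nearest enemy (radians)
        private double enemyDistance = Double.MAX_VALUE;  // distance to nearest enemy
        private double myEnergy      = 100.0;             // this saucer's remaining energy

        // Sensor methods: the simulation would report battlefield data here.
        public void onEnemySighted(double bearing, double distance) {
            enemyBearing = bearing;
            enemyDistance = distance;
        }

        public void onEnergyChanged(double energy) {
            myEnergy = energy;
        }

        // Control methods: speed, steering and firing decisions. A fuzzy
        // solution would replace these thresholds with rules such as
        //   If my_energy is low Then speed is high   (flee when weak).
        public double desiredSpeed() {
            return myEnergy < 20.0 ? 1.0 : 0.5;
        }

        public double desiredHeading() {
            // Run away from the enemy when weak, head towards it otherwise.
            return myEnergy < 20.0 ? enemyBearing + Math.PI : enemyBearing;
        }

        public boolean shouldFire() {
            return enemyDistance < 100.0 && myEnergy > 20.0;
        }
    }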
Figure 3 shows a simulated battle in progress. There are four saucers on screen: Dodger (black triangle on grey circle), Simple (red triangle on blue circle), Berzerk (yellow triangle on orange circle), and a new one, Moody (currently magenta triangle on pink circle), which uses fuzzy rules to determine its mood. Depending on its mood, it plays more or less aggressively, and changes colour to match. The triangles indicate the direction in which each saucer is currently heading. The grey clumps are space rocks, the bright yellow dots are photon blasts, and the blue dot is a pulsing power up.

Overlaid on the simulation is another of the toolkit's viewers. This one shows nine fuzzy rules laid out as a 3x3 table, or matrix, in the middle panel of the display. The nine rules determine the value of the fuzzy variable 'be macho', depending on the combination of the fuzzy variables 'my energy' and 'energy difference'. In this example, one of the rules is currently firing (corresponding to the cell coloured red). The top panel shows one of the rules, currently the one from the top left corner of the matrix. Clicking on the cells of the matrix changes which rule is displayed. The panel at the bottom shows the output of the rule that is firing being defuzzified to determine the final crisp value of 'be macho'. As before, this display is dynamic, constantly reflecting the current state of the simulation and the controller. Students can pause the simulation at any time and examine the states of their fuzzy variables and rules using these viewers.

Figure 3: A Saucers battle simulation in progress, also showing a fuzzy rule matrix viewer.

The assessment value for the assignment was divided into three parts: 60% was allocated to a report describing the controller's strategy and its implementation, and reflecting on what had been learned; 20% was allocated to producing a controller that compiled and ran successfully; and 20% was calculated from the controller's final placing in the tournament. While the assessment value of the competitive component was small, it was sufficient to rouse students' competitive spirits and get their creative juices flowing. Postings to the students' online forum reflected their enjoyment and satisfaction in beating the instructor authored controllers, and finding new ways to improve their solutions.

It was interesting to observe the increasing sophistication of the strategies that students developed. Initially, most students were concerned with mastering the syntax of the Java language and the rationale of the toolkit, and were satisfied with controllers that simply responded sensibly to the game situation - trying to get power ups when they were available, shooting at opponents when in range, running away when outmatched. As their facility with Java and the toolkit improved, students began to consider anticipating possible strategies of opponents, and how these might be exploited. This in turn led them to attempt more difficult programming, and to invent more creative uses for fuzzy reasoning.
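To suggest how a rule matrix like Moody's might be laid out in code, the following is a small self-contained sketch that enumerates nine rules from the two input variables. The toolkit's actual rule classes are not shown in this paper, and the linguistic values chosen for 'energy difference' and 'be macho' below are invented for illustration, not Moody's actual rules.

    public class RuleMatrixSketch {
        public static void main(String[] args) {
            String[] myEnergy   = {"low", "medium", "high"};
            String[] energyDiff = {"behind", "even", "ahead"};
            // Consequent of 'be macho' for each combination of the two
            // inputs (entries invented for illustration).
            String[][] beMacho = {
                {"timid",   "timid",   "neutral"},  // my energy low
                {"timid",   "neutral", "macho"},    // my energy medium
                {"neutral", "macho",   "macho"}     // my energy high
            };
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 3; j++) {
                    System.out.printf(
                        "If my_energy is %s And energy_difference is %s Then be_macho is %s%n",
                        myEnergy[i], energyDiff[j], beMacho[i][j]);
                }
            }
        }
    }

At run time, each rule's firing strength would come from the current memberships of its two condition variables, and the fired consequents would be defuzzified to give the crisp value of 'be macho', as shown in the bottom panel of the viewer in Figure 3.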
Student experiences

When developing an interactive tool to assist students in their learning, it is vital to seek their feedback on the learning experience. So what do the students have to say about the AI toolkit? Overall, our observation is that the student response is very positive. Some examples of student reflections collected during the first part of the assignment illustrate this claim.

Several students reported on the motivational and immersive aspects of the experience, while also acknowledging the deeper learning that occurred.

    Other than the obvious benefits of this assignment aiding in improving my personal level and awareness of programming, it proved to be an entertaining experience with a level of interaction I have not previously experienced in the world of programming. After working on the saucer for a few hours it starts to grow on you and a desire to improve/optimise it becomes innate as you grow attached to the little circle of floating pixels on the screen, including a slight feeling of joy as your little fighter rains death upon its enemies. Likewise, I expect the second part of this assignment to also be an interesting experience as I have found this one. (Student feedback 1)

    Way to go Dr Phill!!! This unit has been one of my favourites cause you've made it fun and interesting. (Student feedback 2)

    The assignment was probably one of the more interesting assignments I've done in my 3 years at uni. And even though I had no prior knowledge of java I was still able to complete the assignment to the point where I was satisfied with the work I'd done. (Student feedback 3)

Another student commented on the advantages of being able to 'see' their program in action and how this improved their understandings of fuzzy logic.

    The final code enabled us the see how the saucer created by the team was compiling and when it was executed, the saucers were able to perform the instructions that were given to us by the tutor. It was clearly visible that the saucers were running after each other and as they were the whole idea was to shoot at each other and the one that lost all the energy first lost the competition. From the creation of this program we were able to expand our knowledge on fuzzy expert systems, as well as to find out what fuzzy expert system [sic] can perform. (Student feedback 4)

A third student felt that the opportunities to test, evaluate and improve the original model were a realistic application of the theory being studied in the unit.

    After a several testing [sic] of the saucer, it can defect [sic] all the other opponents and get a good value of the energy remaining. After completing this assignment, all team members are able to apply intelligent systems techniques to design and implement a solution to a realistic problem. (Student feedback 5)

A preliminary online survey was also conducted at the end of the unit. Fifty (41%) of the one hundred and twenty students doing the unit completed the survey. The timing of the survey (the last week of semester) may account for the response rate, which might have been higher had the survey been administered earlier. The survey asked students to reflect on their learning using the AI Toolkit.

Eighty eight percent of the survey group (44 students) felt that using the AI Toolkit provided them with extra support when learning about AI and Java programming. Ninety four percent of the survey group (47 students) reported that being able to see their code in the AI toolkit as it was working helped them to better understand the underlying code. These students also felt that being able to see the animated version of the game together with the AI Toolkit helped them to better understand AI principles and concepts (r = 0.414, p < 0.005). Forty two students (84%) felt that using games programming to understand AI principles was an appropriate way to learn about AI.
Forty one students (82%) felt the competition and the AI toolkit was a realistic way of learning about intelligent systems. These students also reported that the competition was a positive motivational factor (r = 0.575, p < 0.005); that being able to see the code as it was working alongside an animation of the game helped them with their understandings of AI principles and concepts (r = 0.433, p < 0.005) and the underlying Java code (r = 0.471, p < 0.005); and that the AI toolkit and the competition provided an interesting and useful way of learning about AI principles (r = 0.754, p < 0.005). Forty two students (84%) felt that participating in the competition encouraged them to improve their programming and helped them to better understand the theory of fuzzy logic and systems. There was almost total agreement from students in the survey group that the AI toolkit and the competition provided an interesting and useful way of learning about AI principles (45 students or 90%).

Table 1: Correlation matrix, preliminary student survey (using the AI Toolkit)

Survey statements:

Q4: Being able to see my code in the AI toolkit as it was working helped me to better understand the underlying code.
Q5: Being able to see the animated version of the game together with the AI toolkit helped me to better understand AI principles and concepts.
Q6: Using games programming to understand AI principles was not an appropriate way to learn about AI.
Q7: Participating in the competition encouraged me to improve my programming.
Q8: Using the AI toolkit helped me to better understand the theory of fuzzy logic and systems.
Q9: The competition and the AI toolkit was a realistic way of learning about intelligent systems.
Q10: The AI toolkit and the competition was an interesting and useful way of learning about AI principles.

         Q5       Q6       Q7       Q8       Q9      Q10
Q4    0.414   -0.304    0.225    0.474    0.471    0.418
Q5            -0.234    0.268    0.498    0.433    0.439
Q6                     -0.320   -0.318   -0.308   -0.457
Q7                                0.184    0.575    0.486
Q8                                         0.327    0.376
Q9                                                  0.754

Conclusive evidence of improved learning and the extent of student understandings will not be available until students have completed all sections of the assessment task using the AI toolkit and other assessment tasks in the unit. However, students' feelings of self efficacy and confidence are important indicators of how they feel about their learning. The fact that many students also took the time to add positive comments about the AI Toolkit in the UTEI (Unit and Teaching Evaluation Instrument) survey at the completion of the unit is another indication that the students not only enjoyed the assessment task, but felt that the AI Toolkit and the task were a relevant and worthwhile learning experience. As the project progresses, a more in depth survey will be conducted and anecdotal evidence from forum discussions will be triangulated with students' academic performance. However, early observational and anecdotal evidence and results from the preliminary student survey indicate that the use of animated competitive simulation games in teaching AI techniques adds another dimension to the learning experience that motivates students while actively engaging them in the learning and with the learning materials.

Conclusion

Integrating a competitive game and simulation exercise to teach AI techniques has provided students in an undergraduate computer science course with opportunities to actively engage with the learning materials and the learning experience.
Early feedback from students participating in the unit indicates that the simultaneous display of the graphical animation and an AI toolkit that shows the internal workings of their controllers has helped them to develop mental models of the AI algorithms. The authentic assessment task also reflects real world practice and includes team work, individual development using the action research cycle and a competitive element. While the authors are still in the early stages of documenting the results of this action research project, feedback indicates that the toolkit, the competition and the game are providing students with a richer, more relevant and enjoyable learning experience.

Materials

The fuzzy reasoning component of the toolkit and the Java class files for the simulation are available from the first author on request.

References

About Learning (2004). [retrieved 13 Oct 2004] http://www.aboutlearning.com/

Atkin, M. S., Westbrook, D. L. and Cohen, P. R. (1999). Capture the flag: Military simulation meets computer games. In Papers from the AAAI 1999 Spring Symposium on Artificial Intelligence and Computer Games. AAAI Press: Menlo Park, Calif.

Byrne, R. W. and Whiten, A. (Eds) (1988). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes and Humans. Clarendon Press: Oxford.

Campbell, M., Hoane, A. J. and Hsu, F. (2002). Deep Blue. In J. Schaeffer and J. van den Herik (Eds), Chips challenging champions: Games, computers and artificial intelligence. Elsevier: Amsterdam.

Combes, B. and Ring, J. (2004). If you help us build it, we will come! - The role of the teacher librarian as an online curriculum facilitator and innovator in constructing communities of learning and literacy. ASLA online conference 2004 Proceedings. ASLA Inc: Qld. [abstract only, verified 18 Jan 2005] http://www.asla.org.au/online/il_abstracts.htm

Fogel, D. B. (2001). Blondie24: Playing at the Edge of AI. Morgan Kaufmann Publishers.

Fraser, W. J. (1999). The foundations of continuous assessment: Its link to performance-based, authentic, competence-based and outcomes-based assessment. University of Pretoria: Pretoria. Unpublished article.

Friedman, T. (1994). Making sense of software: Computer games and interactive textuality. In S. Jones (Ed), Community in Cyberspace. Sage: Thousand Oaks, CA.

Hauser, M. (1997). Minding the behaviour of deception. In A. Whiten and R. Byrne (Eds), Machiavellian Intelligence II. Cambridge University Press: Cambridge.

Jayakanthan, R. (2002). Application of computer games in the field of education. The Electronic Library, 20(2), 98-102.

Kumar, D. (2000). Pedagogical dimensions of game playing. ACM Intelligence Magazine, 10(1).

Lawrence, R. (2004). Teaching data structures using competitive games. IEEE Transactions on Education, 9(3), 205-260.

Lepper, M. R. and Malone, T. W. (1987). Intrinsic motivation and instructional effectiveness in computer-based education. In R. E. Snow and M. J. Farr (Eds), Aptitude, learning and instruction. Volume 3: Cognitive and affective process analysis. Erlbaum: Hillsdale, NJ.

Li, S. (2004). Rock-em, sock-em Robocode. [retrieved 18 Sep 2004, verified 17 Jan 2005] http://www-106.ibm.com/developerworks/java/library/j-robocode/index.html

Malan, S. P. T. (2000). The 'new paradigm' of outcomes-based education in perspectives. Journal of Family Ecology and Consumer Sciences, 28, 22-28.

McCarthy, B. (2004). Welcome to 4MAT.
[retrieved 13 Oct 2004] http://www.aboutlearning.com/

Muller, M. (2001). Computer Go survey. [retrieved 18 Sep 2004, verified 17 Jan 2005] http://www.cs.ualberta.ca/~mmueller/cgo/survey/

Negnevitsky, M. (2002). Artificial Intelligence: A Guide to Intelligent Systems. Pearson: Harlow.

von Neumann, J. and Morgenstern, O. (1944). The Theory of Games and Economic Behavior. Princeton University Press.

Noda, I., Matsubara, H., Hiraki, K. and Frank, I. (1998). Soccer server: A tool for research on multiagent systems. Applied Artificial Intelligence, 12, 233-250.

Pillay, H. (2002). An investigation of cognitive processes engaged in by recreational computer game players: Implications for skills of the future. Journal of Research on Technology in Education, 34(3), 336-351.

Robocode (2002). [retrieved 13 Oct 2004] http://robocode.alphaworks.ibm.com/home/home.html

Schaeffer, J. and van den Herik, J. (Eds) (2002a). Chips challenging champions: Games, computers and artificial intelligence. Elsevier: Amsterdam.

Schaeffer, J. and van den Herik, J. (2002b). Games, computers and artificial intelligence. In J. Schaeffer and J. van den Herik (Eds), Chips challenging champions: Games, computers and artificial intelligence. Elsevier: Amsterdam.

Shannon, C. (1950). Programming a computer for playing chess. Philosophical Magazine, 41, 256-275.

Sweetser, P. (2002). Current AI in games: A review. [retrieved 18 Sep 2004, verified 17 Jan 2005] http://www.itee.uq.edu.au/~penny/Game%20AI%20Review.pdf

Tesauro, G. (1995). Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3), 58-68.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Whiten, A. and Byrne, R. W. (Eds) (1997). Machiavellian Intelligence II: Extensions and Evaluations. Cambridge University Press: Cambridge.

Zadeh, L. A. (1975a). The concept of a linguistic variable and its applications to approximate reasoning: Part I. Information Sciences, 8, 199-249.

Zadeh, L. A. (1975b). The concept of a linguistic variable and its applications to approximate reasoning: Part II. Information Sciences, 8, 301-357.

Zadeh, L. A. (1975c). The concept of a linguistic variable and its applications to approximate reasoning: Part III. Information Sciences, 9, 43-80.

Authors

Dr Philip Hingston is a Senior Lecturer in Computer Science at Edith Cowan University. His career includes 13 years as an academic and 9 years in industry, most recently leading an R&D group in a mining company. His research interests are in the theory and applications of intelligent systems, especially evolutionary algorithms.

Barbara Combes has recently taken up a contract at Edith Cowan University as a lecturer in Information Science. During 2001-2002 she was the Teacher Librarian at Sevenoaks Senior College, an innovative experiment in the use of high-end technology to facilitate the delivery of curriculum and support materials using ICT.

Contact details: Dr Philip Hingston, School of Computer and Information Science, Edith Cowan University, 2 Bradford Street, Mount Lawley WA 6050. Phone: 9370 6427. Fax: 9370 6100. Email: p.hingston@ecu.edu.au

Please cite as: Hingston, P. and Combes, B. (2005). Using games and simulation to teach AI. In The Reflective Practitioner. Proceedings of the 14th Annual Teaching Learning Forum, 3-4 February 2005. Perth: Murdoch University. http://lsn.curtin.edu.au/tlf/tlf2005/refereed/hingston.html

Copyright 2005 Philip Hingston and Barbara Combes.
The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format (including website mirrors), provided that the article is used and cited in accordance with the usual academic conventions.