Programming 2: An assessment review

David Chen [1], Roger W. Moni [2] and Lynda Davies [2]
[1] School of Information Technology, Griffith University; [2] The Griffith Institute for Higher Education, Griffith University

Introduction to the case

Learning, teaching and assessment issues within Information and Communication Technology (ICT) university curricula are tightly geared to both the rapidity of technological innovations and the dynamics and vicissitudes of market forces. The federally-funded Foresighting Working Group (2006) made seven recommendations around closer education-industry practices, leading to the establishment of the Australian Council of Deans of Information and Communication Technology (ACDICT) in July 2008. This body regularly sponsors networking and leadership forums. Concerns about ICT programs remain, however, with similar issues identified across Australian and UK universities (Koppi, Naghdy, Chicharo, Sheard, Edwards & Wilson, 2008).

A recent report on managing change at the tertiary level (Koppi & Naghdy, 2009) captured the opinions and experiences of academics, graduates and industry employers. While strengths were identified, notably that ICT programs generally prepare students well for the types of teamwork they need in industry, critical concerns dominated the findings. These included educational issues:

a) the ICT discipline is fragmented because it is delivered across many other disciplines (e.g. Engineering, Economics), and is complicated by struggles to provide an effective developmental balance among technical, business and 'soft' skills;
b) student enrolments have declined due to unfavourable perceptions of the profession, yet industry demand for graduates continues to rise;
c) more students entering ICT programs have lower academic achievement records, which may contribute to the emerging higher attrition rates;
d) there is a need for curricula with greater relevance to industry; and
e) good teaching practices need to be documented and disseminated throughout the sector.

This case study takes its lead from the final point about documenting and disseminating good practices. What follows is a description of assessment review in the undergraduate ICT course Programming 2, convened by one of the authors, David Chen. This early-stage investigation should be considered part of the cycle of course improvement, centred largely on reflection-upon-action (Schön, 1983). It documents disciplinary and course aspirations, changes to learning-teaching activities and associated trends in student achievement in laboratory classes. Course improvements and challenges are summarised and then contextualised by reference to the Statements and Quality Indicators of Good Practice in Assessment developed by Griffith University.

Context

The ICT program

At Griffith University, the Bachelor of Information Technology (BIT) is a three-year degree program emphasising the integration of theory and technology in real-life situations. Students can specialise in one of ten diverse areas: Artificial Intelligence and Robotics, Computer Systems and Networks, Computer Science, Databases, Internet Computing and eCommerce, Information Systems, Multimedia, Pervasive Computing, Computer Security and Software Engineering.
The structure of Griffith's ICT program reflects the University's strategic priority of work-integrated learning, which is evidenced in a number of ways: "Information technology is a rapidly changing industry with fast-paced careers and research issues that are evolving swiftly. Griffith University has strong links with industry via a number of scholarships, guest lecturers, programs, research centres and the Industrial Affiliates Program." Entry into the BIT and Bachelor of Multimedia (BMM) programs requires an OP of 17 and 16, respectively.

The course, Programming 2

Programming 2 is convened by a team across the Nathan, Gold Coast and Logan campuses; co-author David Chen is based at Nathan. Programming 2 is a core course in the BIT and BMM programs and is offered in semester two of Year One at both the Nathan and Gold Coast campuses. To be eligible to enrol, students must first pass Programming 1. Enrolments typically exceed 150 at Nathan and 100 at the Gold Coast.

The course provides a broad coverage of programming concepts; the Java programming language, including an introduction to the Java class library; and a range of basic problem-solving and software engineering techniques for the development, testing and documentation of moderate-sized programs that address a variety of problem types. Intended learning outcomes are that students should have:

1) a detailed knowledge of the syntax and semantics of the Java programming language and a general knowledge of the Java class library;
2) learned and practised the application of primitive data types, arrays, classes, and the application of classes in the Java class library;
3) learned basic software engineering skills for designing, implementing, testing and documenting moderate-sized programs using procedural and object-oriented programming;
4) learned how to design and apply reusable software components; and
5) developed skill and confidence in Java programming through the solution of selected problem-solving and programming exercises.

Teaching and learning strategy

Course delivery

The course has two one-hour lectures, one two-hour laboratory class, and an optional one-hour drop-in clinic each week. New material is introduced, explained and illustrated during lectures. Material is reinforced and extended in laboratory classes, where students are able to practise and then submit laboratory exercises for marking, receive advice and feedback on other programming projects, and request clarification on all aspects of the course. Drop-in clinics provide an opportunity for students to receive additional assistance when it is of most benefit.

Students are required to attend both lectures each week and to attend and participate in one laboratory class each week. Drop-in clinics are optional. Failure to attend and participate in required classes may be taken into consideration by the teaching team if students request out-of-hours assistance, special consideration and/or deferred examinations. Extensive teaching materials, including weekly learning plans, lecture notes, example programs, online quizzes, and selected reference material, are available on the course website.

Learning and teaching issues

The chief educational concern is whether students understand the fundamental concepts underlying their working programs, so that they can recognise similar issues in new situations and apply what they have learned.
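To make this concern concrete, consider a hypothetical Java fragment (illustrative only; it is not drawn from the course materials). The program below appears correct under a casual test, yet its author may hold a faulty model of a fundamental concept, in this case reference equality versus value equality:

```java
// Hypothetical illustration only (not from the course materials).
// This program "works" for small values because Java caches boxed
// Integers in the range -128..127 by default, so == can masquerade
// as a value comparison.
public class EqualityPitfall {
    public static void main(String[] args) {
        Integer a = 127, b = 127;
        System.out.println(a == b);      // true: both refer to a cached object

        Integer c = 1000, d = 1000;
        System.out.println(c == d);      // false: two distinct boxed objects
        System.out.println(c.equals(d)); // true: value comparison, as intended
    }
}
```

A student who tests only the first case may conclude that == compares values; the misunderstanding surfaces only in a new situation, which is exactly the kind of gap the assessment changes described below were designed to expose.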
Understanding at this level is an important developmental step: success in Programming 2, and beyond it, relies on students being able to adapt what they have learned about object-oriented programming to broader applications, where they will tackle architectures and systems that span networks and multiple computers. Teaching and learning strategies in this context can be understood by first considering an idealised depiction of how professional programmers work:

1. Find a code pattern and re-use it (copy and paste).
2. Modify the code and add some original code. This may require looking up documentation and examples.
3. Test the code; if it doesn't work, go back to step 2.
4. Still can't fix the error(s)? Ask colleagues or the internet for help.

In contrast, the Course Convenor has observed that when some students program they: repeat steps 2 and 3 many times until they eventually get it right; do a great deal of step 4; and/or complete step 1 by re-using some (or all) code from a friend doing the same assessment item(s).

Course history

Before 2008, and prior to the subsequent assessment review, disturbing trends were emerging: student attendance in laboratory classes was declining; overall student academic achievement was decreasing; and the number of plagiarism cases was increasing. In 2009, the Course Convenor introduced changes to teaching and assessment activities:

- "before-lab questions" were introduced that students needed to answer before attempting the practice "in-lab questions";
- one-hour, open-book assessed-lab tests were conducted, each testing what was practised the week before; and
- marking occurred on the spot to provide immediate feedback.

Assessment

Historical changes to course assessment

Before 2008, the course assessment consisted of four formative projects and an end-of-semester summative exam. Overall student achievement was poor due to excessive effort spent getting the project programs to work and insufficient focus on understanding the fundamental concepts; as a result, some students fell behind early in the course. To address this, in 2008 the number of projects was reduced from four to two, and a system of practice laboratory exercises (formatively assessed) was introduced for students to complete before they were required to submit responses for marking. Further changes to the assessment schedule were introduced in 2009, as per the assessment plan below:

1. Five practice labs: 5 x 2% = 10%
2. Five assessed labs: 5 x 4% = 20%
3. Two projects (building medium-sized programs): 10% + 20% = 30%
4. Five in-lecture quizzes: 5 x 2% = 10%
5. End-of-semester exam: 30%

Description of assessment items

1. Practice laboratory exercises

Five practice labs are assessable, at 2% each. Practice-lab exercises contain a set of before-lab questions that must be completed before students are allowed to work on the in-lab practice questions. If students can show their tutor at the start of the class that they have completed the required before-lab questions, they are awarded 1% for that component of the work. If students complete the in-lab questions before the end of class, they are awarded another 1%, bringing the potential marks for the practice-lab exercises to 2%. Part marks are allowed. Given that these exercises are meant to be formative in nature, students can ask tutors for help before and during the lab class.
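As a hedged illustration of this before-lab/in-lab pairing, consider the following hypothetical exercise (invented for this review; the actual lab sheets are not reproduced here). The before-lab question asks students to trace a loop on paper; the in-lab question asks them to implement the method and check it against their prediction:

```java
// Hypothetical practice-lab pairing (illustrative only, not from the course).
// Before-lab question: predict, on paper, what the main method prints.
// In-lab question: implement sumOfSquares and check the program's output
// against your prediction.
public class PracticeLab {
    // In-lab task: return 1^2 + 2^2 + ... + n^2 using a loop.
    static int sumOfSquares(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        // Before-lab trace: students should predict 1, 5, 14 before running.
        for (int n = 1; n <= 3; n++) {
            System.out.println(sumOfSquares(n));
        }
    }
}
```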
2. Assessed laboratory exercises

Each assessed-lab exercise is structured as an open-book programming test to be completed without asking for help (particularly from friends). Students are given one hour to produce a solution, then submit their work online at the end of class. Each test is a simpler and shorter version of the practice-lab exercises students were given the previous week, and students are permitted to re-use some of their practice-lab solutions. If students understood what they did in the practice lab, they should perform equally well (if not better) in the assessed-lab exercise. These exercises are marked during lab classes so students can receive immediate feedback.

3. Projects

The two individual programming projects test students' understanding and problem-solving skills by requiring them to solve increasingly complex programming problems. Students should be able to answer simple questions regarding their submissions, and may be required to demonstrate their work.

4. In-lecture quizzes

The five in-lecture quizzes test students' understanding of the basic concepts in programming. Each quiz consists of five to ten multiple-choice questions. While quizzes are held during lectures to encourage lecture attendance, students are required to demonstrate conceptual understanding, and hence this activity is a test of learning and achievement.

5. End-of-semester exam

The final examination tests the knowledge and skills students acquired during the course by requiring them to demonstrate their understanding of programming concepts and their ability to write Java methods, classes, and applications. To be eligible for a passing grade in this course, students must achieve an overall mark of 50% and a satisfactory standard (50%) in the final exam.

Outcomes from changes to assessment

Averaged assessment results from the practice and assessed labs can be compared to determine how much students understood from their practice labs and how well they can apply this understanding in other contexts. Because students cannot receive help from others during the assessed lab, the assessed-lab result is a more accurate indication of what students know. The Semester 2, 2009 lab assessment results are summarised in Table 1.

Table 1. Comparison of practice and assessed lab results in Programming 2, Semester 2 2009.

                      Item 1        Item 2        Item 3*       Item 4        Item 5
Practice lab result   96% (N=269)   77% (N=246)   80% (N=190)   74% (N=168)   70% (N=141)
Assessed lab result   89% (N=273)   63% (N=257)   63% (N=223)   56% (N=188)   67% (N=162)
Difference            -7%           -14%          -17%          -18%          -3%

(N = number of submissions; * Item 3 is discussed below.)

The difference in student marks between the practice and assessed labs is of interest. Negative results indicate that students perform better in practice labs than in assessed labs. Results from Item 3 (of Table 1) serve as an illustration. Here:

- 178 students attempted both the practice and assessed labs;
- 36 students did better in the assessed lab than in the practice lab;
- 39 students did equally well in both labs;
- 103 students (58%) did worse in the assessed lab; and
- the average difference for these students was 25%.

The Course Convenor concluded from these data that a significant number of students do not sufficiently understand the work they submit during the practice labs.
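A minimal sketch of this paired comparison, assuming per-student percentage scores for both labs are available (the course's actual record-keeping and tooling are not described in the source, and the scores below are invented for illustration):

```java
// Minimal sketch of the paired practice/assessed comparison (assumed data
// layout; not the course's actual tooling). Scores are percentages, with
// practice[i] and assessed[i] belonging to the same student.
public class LabComparison {
    public static void main(String[] args) {
        double[] practice = {80, 75, 90, 60, 85}; // invented sample data
        double[] assessed = {60, 75, 95, 40, 70};

        int better = 0, equal = 0, worse = 0;
        double totalDrop = 0; // practice-minus-assessed gap, "worse" students only

        for (int i = 0; i < practice.length; i++) {
            double diff = assessed[i] - practice[i];
            if (diff > 0) better++;
            else if (diff == 0) equal++;
            else { worse++; totalDrop += -diff; }
        }

        System.out.printf("better=%d equal=%d worse=%d%n", better, equal, worse);
        if (worse > 0) {
            System.out.printf("average drop for worse students = %.1f%%%n",
                              totalDrop / worse);
        }
    }
}
```

Run over the Item 3 cohort, this kind of tally yields the 36/39/103 split reported above.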
A more promising trend for student achievement in the assessed labs was identified across 2009. Table 2 summarises the averaged class results for Semesters 1 and 2, indicating a clear overall improvement.

Table 2. Assessed laboratory results for Semesters 1 and 2, 2009.

             Item 1   Item 2   Item 3   Item 4   Item 5
Sem 1        59%      53%      66%      48%      66%
Sem 2        89%      63%      63%      56%      67%
Difference   +30%     +10%     -3%      +8%      +1%

Course improvements

The assessment schedule for, and activities during, the practice laboratories have:

1) been associated with a substantial increase in lab attendance and student participation;
2) provided tutors and students with a concrete demonstration of student learning; and
3) provided examples of the types of problems students were experiencing, identifying which parts of the curriculum were producing difficulties for them.

Assessed-laboratory exercises were structured in ways that made it easier for tutors to identify those students who were having trouble understanding the concepts underpinning their work in the practice lab, and provided opportunities to help those students in subsequent practice laboratories.

Remaining challenges

The biggest challenge arising from this new design is tutors' capacity to help all students during both practice and assessed labs, given the increased calls on their time. Tutors now fulfil a number of roles during class: assisting students with their problems, marking solutions and providing feedback.

While the introduction of the practice and assessed labs has been successful, a number of challenging issues outside the scope of the assessment review remain. These include:

- variation in students' abilities when they enter Programming 2, and the prior knowledge and skills they should be bringing with them;
- tensions between ideal teaching conditions (such as staff-student ratios) and budgetary constraints;
- insufficient time in class to cover content so that students can assimilate knowledge and practise new skills; and
- the capacity to cover the topics and develop skill sets for the diverse student cohort in ways that are relevant and challenging to all, while maximising graduate employability.

A review of the three Programming courses has been proposed. The aim of this review is to take a more unified approach to course design so that there is a smoother transition for students between Programming 1, 2 and 3. This programmatic focus will address the challenges identified to date.

The Conferences in Research and Practice in Information Technology (CRPIT) are a useful source of assessment research in ICT (http://crpit.com/). The struggles that many students face when they learn to program have long been documented (Oman, Cook & Nanja, 1989). More recently, an analysis of how novice student programmers perform on summative assessment tasks illustrates the need to design assessment tasks that better reflect disciplinary needs (Shuhidan, Hamilton & D'Souza, 2009). Clearly, there is an urgent need for more scholarship around the learning, teaching and assessment of programming.

Principles of Good Practice illustrated by the practice

The review of assessment in Programming 2 can be situated within the context of the Statements and Quality Indicators of Good Practice in Assessment developed by Griffith University as part of its work on the national Teaching Quality Indicators Project in 2007-2008. The outcomes of, and resources produced for, this Australian Learning and Teaching Council initiative underpin the review of Griffith's Assessment Policy and support the work of this Good Practices in Assessment Project.
It is useful, therefore, to reproduce the Principle upon which the Statements and Quality Indicators were constructed, and to identify the two Statements and Quality Indicators relevant to this case study.

Principle

Griffith University's Teaching Quality Indicators Project has been guided by research into the theoretical literature on good principles and practices of assessment in higher education; assessment practice at Griffith University; and assessment policies used across Australia and overseas. This work has shown that assessment inevitably shapes how students approach learning, including what they focus on and how they go about learning it, and is used for a variety of purposes. Necessarily, assessment underpins the core values and principles of the University's learning and teaching strategic plans, and a clear enunciation of what drives assessment at the University is important for students, staff, and the broader community. It is accepted, therefore, that the primary purpose of assessment is to: promote student learning; and provide information upon which judgements are made about students' work and the standards their performances exhibit. In doing this, the University has a commitment to processes that are transparent, fair, reliable and valid. To articulate the purpose of assessment, Griffith adopts the following Statements of Good Practice.

It is clear that the Convenor of Programming 2 believed the pre-2009 assessment plan was no longer adequately serving its students and teachers. As such, he started a process of review that included examining the purpose of the assessment and what he wanted it to achieve for his students and his teaching team. This activity is reflected in the following Statement of Good Practice and shows the link between the stages of review and subsequent improvement.

Statement of Good Practice #4: Assessment policies and practices are planned, implemented, reviewed and improved

This occurs when:

4.1 assessment practices are given consideration in cyclical reviews of teaching, Programs, Courses and academic units; and
4.2 staff use feedback from peers and students to improve subsequent assessment practices.

It had become clear to the Convenor and teaching team members that many students did not have a deep understanding of the concepts and processes they used to produce answers in the practice-laboratory exercises. When the teaching team compared students' responses to matched pairs of practice- and assessed-laboratory exercises, it emerged that only half the students could reproduce the quality of their practice-laboratory work in the subsequent assessed-laboratory exercise.

The review of the course's assessment therefore produced changes in both the assessment plan and the design of the tasks. The new assessment tasks encourage and require students to engage with their course content on a progressive basis, offering them opportunities to practise exercises and test their knowledge in a formative environment before they are expected to complete exercises for marking. The new assessment plan reflects the research on the primary purpose of assessment and the principles underpinning the first of our Statements and Quality Indicators of Good Practice.
Statement of Good Practice #1: Assessment tasks are designed to advance student learning

This occurs when:

1.3 tasks appropriately test the increasing complexity of intellectual activity, and require students to demonstrate their growth in understanding and development of skills; and
1.5 there is an appropriate combination of formative and summative tasks to maximise learning opportunities.

The new assessment plan encourages students to start work earlier, work regularly and test themselves as they go, to make sure they understand the concepts they will need to employ later as the complexity of the tasks increases. Students need to demonstrate that they can build medium-sized programs, and the new assessment arrangements support the advance of students' learning throughout the semester.

Acknowledgements

The authors thank Professor Vladimir Estivill-Castro for referring us to the Conferences in Research and Practice in Information Technology.

References

Australian Council of Deans of Information and Communication Technology. http://www.acdict.edu.au/

Conferences in Research and Practice in Information Technology. http://crpit.com/

Foresighting Working Group (2006). Recommendations available at http://www.acs.org.au/news/181206.htm [viewed 10 April 2010].

Koppi, T., Naghdy, F., Chicharo, J., Sheard, J., Edwards, S. & Wilson, D. (2008). The crisis in ICT education: An academic perspective. In Hello! Where are you in the landscape of educational technology? Proceedings ascilite Melbourne 2008. http://www.ascilite.org.au/conferences/melbourne08/procs/koppi.pdf [viewed 10 April 2010].

Koppi, T. & Naghdy, F. (2009). Managing educational change in the ICT discipline at the tertiary education level. Support for the original work was provided by the Australian Learning and Teaching Council, an initiative of the Australian Government Department of Education, Employment and Workplace Relations. http://www.altc.edu.au/project-managing-educational-change-ict-uow-2006 [viewed 10 April 2010].

Oman, P. W., Cook, C. R. & Nanja, M. (1989). Effects of programming experience in debugging semantic errors. Journal of Systems and Software, 9(3), 197-207.

Schön, D. (1983). The Reflective Practitioner: How professionals think in action. London: Temple Smith.

Shuhidan, S., Hamilton, M. & D'Souza, D. (2009). A taxonomic study of novice programming summative assessment. In Hamilton, M. & Clear, T. (eds), Eleventh Australasian Computing Education Conference, Wellington, New Zealand, January. Conferences in Research and Practice in Information Technology, Vol. 95.