UG Exam Performance Feedback
Second Year - Semester 2
COMP20012 Algorithms and Data Structures DER GDG
No feedback received - please see DER.
Comments:
COMP20032 Distributed Computing RiS CCK
Q1 (a) IDL = Interface Definition Language. Getting the acronym wrong (in various ways) was 
too common!
b) The first lab exercise involved running a Java servlet using servletrunner. It is clear that 
some students didn't understand what they were doing, and weren't curious enough to find 
out.
c) The main assumption is that the time delay in sending a message to the server is equal to 
the time delay in the server sending it back again. If
this is true, Cristian's algorithm works perfectly! Other assumptions, such as shorter round-
trip times being better, are secondary.
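The symmetric-delay assumption can be sketched in a few lines. This is an illustrative example, not taken from the exam paper; all names are invented, and in a real system the server timestamp would arrive over the network.

```python
# Sketch of Cristian's algorithm under the symmetric-delay assumption.

def cristian_estimate(t_request, t_reply, server_time):
    """Estimate the server's clock at the moment the reply arrives.

    t_request   - client clock when the request was sent
    t_reply     - client clock when the reply arrived
    server_time - timestamp the server placed in the reply

    If the outward delay equals the return delay, the server stamped the
    reply exactly half a round trip before t_reply, so adding RTT/2 to
    the server's timestamp gives the server's clock at t_reply exactly.
    """
    round_trip = t_reply - t_request
    return server_time + round_trip / 2

# Request sent at t=100, reply received at t=110, server stamp 205:
print(cristian_estimate(100, 110, 205))  # -> 210.0
```

If the two delays are unequal, the estimate is off by half the difference between them, which is why the symmetry assumption is the crucial one.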
Q2) Shows that a large number of students still did not fully understand the concepts of 
ACID transactions - most students got questions 2.a and 2.c right but 2.d wrong. Also, most 
students were able to correctly answer questions related to messaging and service-oriented 
architectures. 
Q3) To come from RiS.
Q4 (a) Suggesting that semaphores be used to achieve distributed mutual exclusion is not 
sensible. It just leads to the question of how to implement semaphores in a distributed 
system - and one answer is to use a single coordinating server in the way this question 
required you to explain.
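The single-coordinator scheme can be sketched as a small simulation. This is an illustration only: the class and method names are invented here, and a real system would carry these calls as request, grant, and release messages over the network.

```python
from collections import deque

class Coordinator:
    """Central server granting mutual exclusion: one token, FIFO queue."""

    def __init__(self):
        self.holder = None       # process currently in the critical section
        self.waiting = deque()   # processes queued for the token

    def request(self, pid):
        """Grant immediately if the token is free, otherwise queue."""
        if self.holder is None:
            self.holder = pid
            return "GRANTED"
        self.waiting.append(pid)
        return "QUEUED"

    def release(self, pid):
        """Release the token and pass it to the next waiter, if any."""
        assert self.holder == pid, "only the holder may release"
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder       # pid now holding the token, or None

c = Coordinator()
print(c.request("A"))   # -> GRANTED
print(c.request("B"))   # -> QUEUED
print(c.release("A"))   # -> B   (B now enters its critical section)
```

Mutual exclusion holds because at most one process is ever recorded as the holder; the obvious weakness, which the bookwork parts cover, is that the coordinator is a single point of failure.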
(b) and (c) - bookwork.
Comments:
COMP20072 Computer Graphics TLJH
Section B.
All the candidates answered the Colour Model question, and no-one touched the Bezier 
Curves question. This surprised me, because I didn't perceive the two questions as being 
particularly different in terms of difficulty.
The Colour question was reasonably well answered, with an average of 13.2/20, or 66%. The 
best mark was 20/20, the worst 5/20.
About 1/3 of the candidates used no diagrams at all, which is plain daft, especially as the 
paper implored them to.
Many people simply didn't provide enough detail in their answers, and this was especially 
apparent in the section on JPEG, where people would list the steps of the algorithm but give 
no context or explanation at all. And this of course makes you doubt if there is any real 
understanding, or if it's simply regurgitated memorisation.
Overall I got a strong feeling of two "levels" of answers: those who wrote lucidly and 
explained things well; and those who listed facts in isolation. I don't know whether this 
corresponds to people who did or didn't attend lectures, but it would not surprise me.
Comments:
07 January 2008 Page 1 of 7
COMP20142 Logic in Computer Science AV
No feedback received - please see AV.
Comments:
COMP20212 Digital Design Techniques EWH PWN
COMP20212 Feedback Questions 1 & 2
Q1.
On the whole this question was answered poorly.
The operation of the required controller was tightly specified, yet a number of (incorrect) variants 
were produced. Two states which caused problems were the 'Idle' state and the 'Close Tray' 
state, where a number of required conditions were not tested for. It was apparent that there 
was a lack of understanding of how to design ASM charts from a textual description of the 
controller's operation.
The simplification of K-maps including map-entered variables was clearly difficult for a 
number of students, and it was clear that some students were unsure how to derive simplified 
equations from such maps. In addition, everyone who attempted the question failed to 
recognise that, as the design was one-hot, there were a considerable number of don't care 
states in the K-map, making simplification easier.
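The one-hot point is simple arithmetic: with n state flip-flops, only the n codes with exactly one bit set can ever occur, so every other code is a don't care. A quick illustration (the value of n here is arbitrary, not taken from the exam design):

```python
# With n one-hot state flip-flops, only codes with one bit set occur.
n = 4
total_codes = 2 ** n            # 16 possible flip-flop patterns
valid_codes = n                 # 0001, 0010, 0100, 1000
dont_cares = total_codes - valid_codes
print(dont_cares)               # -> 12 codes never occur: all don't cares
```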
Q2.
This question was relatively straightforward, with two possible solutions for part c).
However, there were a few problems:
The discussion of the design approaches was limited in most cases, and failed to discuss the 
technological differences between the three design approaches.
A number of students failed to discuss the differences between PLA, PAL and ROM devices; 
this was discussed in detail in the lectures. The advantage of the PAL is the availability of 
on-board flip-flops, making the implementation of sequential systems much easier.
The simplest implementation of the code converter of part c) is using the ROM, as the logic 
table can be implemented directly in memory. However, a number of students failed to 
recognise this. In addition, a few simply listed the Gray code on the decoded outputs of the 4 
to 16 line decoder, which is clearly incorrect as the output is BCD!
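The ROM implementation is just a table in memory: the input code addresses the ROM and the stored word is the output. Assuming the converter maps 4-bit Gray code to BCD (the direction the comment above implies), a sketch:

```python
# Build the ROM contents: address = 4-bit Gray code, data = BCD value.
# The reflected-binary formula b ^ (b >> 1) gives the Gray code of b.
rom = {}
for bcd in range(10):            # BCD covers 0-9 only
    gray = bcd ^ (bcd >> 1)
    rom[gray] = bcd

# "Reading" the ROM performs the conversion directly - no logic needed.
print(rom[0b0011])   # Gray 0011 -> BCD 2
print(rom[0b0111])   # Gray 0111 -> BCD 5
```

The addresses not written here correspond to input codes outside 0-9, which is exactly where the don't cares of the logic table end up.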
Comments:
COMP20252 Mobile Systems DAE
36 candidates sat the COMP20252 exam on May 30 2007.
The performance on the exam was generally encouraging, and indicated that the candidates 
had absorbed a considerable proportion of the core course material.
Question 1 was a compulsory 'shotgun' question, asking for answers to ten out of a dozen 
short questions each worth 2 marks and between them covering all aspects of the course. 
The great majority of candidates performed well on this question, achieving marks in double 
figures.
Question 2 asked for an essay on the material covered in the practical laboratories. In the lab 
students worked in teams with each individual
responsible for a component of the lab work, and here they were asked to write about their 
component. The essay nature of the question resulted in
a relatively narrow spread of marks, with no-one doing really well or really badly.
Question 3 was a technical question on error detection and correction.  Candidates confident 
enough to answer this generally did well.
Question 4 began with JPEG image compression bookwork but then moved on to mobile 
system issues, and required candidates to think outside the material presented in the course. 
Answers were generally encouragingly good.
This is the first year COMP20252 has run, and it has deliberately been set up in an 
experimental way involving teamwork, self-organisation, a technical laboratory, and lecture 
material presented in a very dense format in half the usual number of lectures. Although 
there have been problems, and some complaints about work-loads, the exam performance 
reinforces my view that the course has been effective in that the students have absorbed the 
material well and have emerged with real practical knowledge of the subject matter. They 
have worked hard, but their generally good marks reflect their achievement.
Comments:
COMP20312 Fundamentals of Databases SH
Overview
COMP20312 was taken by 118 undergraduate students mainly from the School of Computer 
Science but also including BCS, ABIS, with BM, Geology with IT, Physics with IT, and 3rd 
year Mathematics students. I'm pleased to say that this was one of the best answered 
COMP20312 exams I have set, with the average score being 37.86 (63.1%) and a standard 
deviation of 4.4. Students answered 3 out of 4 questions, with topic areas spread through all 
questions, avoiding the problem of answering single topic areas and better testing the 
spread of a student's knowledge. I am happy to say that the average standard deviation per 
student over 3 questions was only 3.0, meaning that students scoring low scored low over all 
questions, and students scoring high scored high over all questions. Some anomalies did 
exist, but these were mostly due to students running out of time and not answering all parts 
of their last question.
This year, however, there has been one major change regarding the exam format and two 
changes regarding the unit delivery; both have had an impact on the examination results for 
this year. Firstly, the examination has now been changed into two parts: one compulsory part 
comprising 10 multiple choice questions (MCQ) valued at 2 marks each, the whole 
representing one conventional 20 mark question; and one part comprising 3 longer style 
(conventional) questions, of which two must be answered. Part one aims to test Bloom's 
(Cognitive) Taxonomy on Knowledge, while part two aims to test Bloom's (Cognitive) 
Taxonomy on Comprehension, Application, and Synthesis over each question. Secondly, in 
last year's examination report I said:
"Q1 part c covering functional dependencies and normalisation proved most problematic for 
students. Normalisation always proves to be difficult for students and I intend to move it from 
an example topic to a required lab exercise next year to expose the students to it in a more 
practical setting...Q2 part c covering SQL and relational algebra proved most problematic for 
students. Relational algebra was the main problem and always proves to be difficult for 
students, again, I intend to move it from an example topic to a required lab exercise next year 
to expose the students to it in a more practical setting."
I therefore made substantial changes to the unit so that relational algebra, functional 
dependencies, and normalisation were included in both the examples clinics (by adding an 
additional class) and the laboratory (by removing some of the simpler SQL work). The effect 
of both these changes can now be seen in this summer's 2007 examination.
Part A
Questions 1-10 (2 marks each)
Marks: Average 15.36 / Mode 18.00 / Median 16.00 / Average % 76.78 / Standard Deviation 
3.32 - Answered by 118 (all) students.
This was the highest scoring question, mainly due to the fact that each question investigated 
'bookwork'. However, the question skewed the results slightly because the conventional 3 
questions previously allowed a total of 12 marks to be earned for bookwork overall - this year 
we allowed 20, but over a much broader spectrum of topics. This skewing will not occur next 
year, as I intend to create another 10 part MCQ testing Bloom's (Cognitive) Taxonomy - 
Application (re-evaluating laboratory and examples clinic work), move the testing to 2 
large MCQ questions testing all topics, and then offer a choice of 1 (from 2) conventional 
questions testing Bloom's (Cognitive) Taxonomy - Comprehension and Synthesis.
Part B
Question 1 (20 marks)
Marks: Average 10.80 / Mode 14.00 / Median 11.00 / Average % 54.01 / Standard Deviation 
3.48 - Answered by 96 students.
This was the third most answered question and tested relational schemas, functional 
dependencies and normalisation, and database performance. I must say that I think this 
question was a success and the decision to move functional dependencies and normalisation 
into a practical setting has been vindicated. Indeed, this part of the question was answered 
very well by most students. The main problem here was with the 'original thought' 
(synthesis) question, which always proves problematic.
Question 2 (20 marks)
Marks: Average 9.44 / Mode 6.00 / Median 10.00 / Average % 47.20 / Standard Deviation 
4.85 - Answered by 25 students.
The least answered question, for the most part seen as a question of last resort. Those who 
attempted it out of choice did well, but those who chose it out of desperation did very badly 
(borne out by the mode / median disparity). The question tested consistency and 
completeness, the SQL language (application in detail), and ER diagramming from an 
original thought aspect. The consistency and completeness question produced some 
differing interpretations, which I accepted and marked on their substance; the SQL question 
was answered well; again, the original thought question proved problematic.
Question 3 (20 marks)
Marks: Average 12.13 / Mode 13.00 / Median 12.00 / Average % 60.66 / Standard Deviation 
2.62 - Answered by 114 students.
By far the best answered long style question, and also the one chosen by the most students. 
There is very little to say on this question; most students answered it very well indeed. It 
tested transactions (taught last and so fresh), (E)ER diagramming (application in detail), and 
relational schema as original thought (synthesis).
FIRE ALARM DISRUPTION
This year's examination was disrupted by a fire alarm. Accurate details have been lodged with 
the School's Special Circumstances Committee; however, the general details are that the 
exam started on time at 09:45 on Tuesday 05th June 2007, paused between 10:30 - 11:05 
for a (false) fire alarm, and then restarted, finishing at 12:15. I can understand that some 
students' thought may have been disrupted, although with 30 minutes of extra revision time 
after knowing what the questions are, I would expect most students to see this disruption as 
more of a 'gift'. Looking across student answers and over question sets, the standard 
deviations are fairly constant, forming an approximately normally distributed population 
(following the classical central limit theorem). I see no reason to make adjustments to the 
scoring / grading or to re-run the examination.
In Summary
I was very pleased with the students' performance this year (well done) and I am further 
convinced that practical exposure to subjects, active learning, the full unit text (no ad-hoc 
lecture notes), the weekly SAQs, and a change in the exam style (as stated above) are the 
correct direction for the unit. I will continue to pursue these in the coming years to hopefully 
increase the students' comprehension of, and practical exposure to, databases in support of 
their future careers.
COMP20352 Software Engineering 2 SME JTL KKL
Report on Student Performance in COMP20352, Summer 2007
Question 1
About half the class answered this question.  The average mark was 13.2 out of 20.
a)(i) Most candidates correctly identified all 6 partitions, although a couple of people were 
confused about what a partition actually is.  A small number of people explained the principle 
of equivalence partitioning in general, without making reference to the specific example 
mentioned in the question.  Unfortunately, since the question did not ask for such general 
comments, no marks could be awarded for answers that did not get into the specifics of the 
example.
(ii) Most candidates managed to give a sensible list of test cases that covered all the 
partitions.  But very few correctly identified the minimum number of tests needed to cover all 
the partitions, which was just 3.
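Why can 6 partitions need only 3 tests? Each test case supplies a value for every input at once, so one test can cover one partition of each input simultaneously. A hypothetical illustration (these partitions are invented, not the exam's):

```python
# Hypothetical example: two inputs, three equivalence partitions each.
x_partitions = ["x < 0", "x == 0", "x > 0"]
y_partitions = ["empty list", "one element", "many elements"]

# 6 partitions in total, but each test picks one partition per input,
# so pairing them off covers everything in just 3 tests.
tests = list(zip(x_partitions, y_partitions))
print(len(x_partitions) + len(y_partitions))  # -> 6 partitions
print(len(tests))                             # -> 3 tests cover them all
```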
b)(i) Everyone who attempted this part of the question scored full marks.
(ii) Some good CFGs were given, but this question was also a source of lost marks for many, 
due to errors in identifying which lines of the program should appear as nodes in the CFG 
and in specifying the control flow between them.  The flow pattern for the for loop at line 11 
tripped up a number of students.  Note that after the statement on line 15, the flow of control 
returns to line 11, and that when the condition at line 11 is false, the flow of control jumps to 
the statement on line 17.
(iii) Answers to this part of the question were marked as though the CFG given in answer to 
part b (ii) was correct, so that students were not penalised twice for incorrect CFGs.  Marks 
were lost for incorrectly remembering the equation for computing the McCabe Cyclomatic 
Complexity Metric, and for failing to identify a complete set of independent paths.
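The equation in question is V(G) = E - N + 2P: edges minus nodes plus twice the number of connected components, with P = 1 for a single routine. A sketch on an invented CFG (not the program from the exam paper):

```python
def cyclomatic_complexity(nodes, edges, components=1):
    """McCabe's metric: V(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Invented CFG for a single if/else: entry, then-branch, else-branch, exit.
nodes = ["entry", "then", "else", "exit"]
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(nodes, edges))  # -> 2
```

The result, 2, is also the number of independent paths a complete basis set must contain, which ties parts (iii)'s two mark-losing mistakes together.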
(iv) Everyone who attempted this part of the question got full marks, except for a handful of 
candidates who failed to give an actual value for the test case input.  Recall that giving a 
description of the set of suitable values for the test case (e.g. saying "some value for n 
less than or equal to 1") is not sufficiently detailed for a true test case.  True test cases must 
be specified with actual literal values for inputs and outputs. (Check back to your COMP2034 
notes if you are unsure why this point is so crucial.)
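The difference between a description of suitable inputs and a true test case, shown on an invented function (not the program from the exam paper):

```python
# Hypothetical function under test - not the exam's program.
def classify(n):
    return "trivial" if n <= 1 else "nontrivial"

# NOT a test case: "some value for n less than or equal to 1" names a set.
# A true test case fixes a literal input AND a literal expected output:
test_input = 0
expected_output = "trivial"
assert classify(test_input) == expected_output
print("test passed")  # -> test passed
```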
Comments:
COMP20442 Artificial Intelligence Programming IEP
Generally, performance on the exam was much poorer than in previous years. There is no 
obvious explanation for this, as the course hardly
changed at all. The students complained throughout that the labs were very difficult; however, 
lab marks seemed to hold up quite well.
Q1. This was generally very badly done, and (mercifully) attempted by relatively few 
students.
    a Most students got this. I turned a blind eye to minor errors.
    b This was done well.
    c A few students got this: many struggled.
    d A few students got this: many struggled.
Q2. This question was attempted by all and exceedingly well done by most. There was 
some sloppiness in presenting grammar rules in c, and some students gave the semantics 
for sentence (i) instead of sentence (ii) as instructed, which simply lost them the marks.
Q3. a Pretty well everyone could do this, but a disturbing minority couldn't. Frankly, for this 
minority, I would say there is no hope.
    b A simple question, which most students made unnecessarily difficult for themselves. 
There was no requirement in the question to conform to any linguistic theory. The students 
had also seen a very similar example in the lectures!
    c I was pleased that quite a few students got this (or nearly got it).
    d There were few serious attempts at this.
Q4. This was generally badly done.
    a Easy, and well done.
    b Only about half the students got this; the remainder have no excuse.
    c Only the best students attempted the all-important second part of the question: I was 
looking for 'equisatisfiable, not logically equivalent'.
    d Most could do this.
    e Most could do this.
    f I was very gentle with people who had accumulated errors on part b.
    g A complete wipe-out. Only the top student really got this right.
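The distinction Q4 c was after can be checked mechanically on a small propositional example. The formulas here are invented (the exam formula is not reproduced); the point is that introducing a definition variable, as clausal-form translations do, preserves satisfiability but not logical equivalence:

```python
from itertools import product

BOOLS = [False, True]

def f(a, b, c):
    return a or (b and c)

def g(a, b, c, x):
    # Tseitin-style translation: x is defined to stand for (b and c).
    return (x == (b and c)) and (a or x)

# Equisatisfiable: each formula has at least one satisfying assignment.
print(any(f(*v) for v in product(BOOLS, repeat=3)))   # -> True
print(any(g(*v) for v in product(BOOLS, repeat=4)))   # -> True

# Not logically equivalent: some assignment makes them disagree
# (e.g. a=True, b=c=False, x=True makes f true but g false).
print(any(f(a, b, c) != g(a, b, c, x)
          for a, b, c, x in product(BOOLS, repeat=4)))  # -> True
```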
Comments: