Bulletin of Applied Computing and Information Technology
Prevalence of Online Assessment? Causative Factors

Irene Toki and Mark Caukill

Abstract

Online assessment tools are advertised with the promise that setting and marking assessment tasks can be more efficient. So why are they not prevalent at tertiary level? This paper investigates the reasons from both the online author's and the student's perspective. Separate focus groups of students and of lecturers were used to expose the issues behind this question. The focus groups raised factors such as computer screen design for text placement, appropriate or inappropriate cognitive domain choice, the provision (or lack) of professional development in online assessment writing, security, and student technophobia of the interface. The conclusions were that appropriate assessment tool selection and use is a function of the knowledge and skill of the lecturer, and of the context and purpose for which the assessment tools are used. Self-marking assessment tools were found to be useful for the speed with which items can be checked and results returned, and students can use them productively to monitor their own progress as they learn online. For online assessment it is recommended that students be given practice sessions to ensure they are familiar with the ways of responding and submitting; this may avoid inadvertently assessing computer competence rather than subject knowledge. Role-play and simulation based assessment tools were seen by both focus groups as excellent both for further learning and for assessing application, analysis, synthesis and evaluation; the problem is that they are not readily available because they are complex to create and hence expensive.

Keywords

Online assessment, self-marking assessments, collaborative feedback assessments, simulations

1. Introduction

Online assessment is a new horizon. Despite the wealth of research and studies on online learning, there is a serious lack of empirical research on what constitutes good practice in online learning and assessment in Australasia (Cashion & Palmieri, 2000; Rowlands, 2001). Here "new" dates from the late 1980s; the American Psychological Association, for instance, first published guidelines for the development, use and interpretation of computerized testing in 1986. The new horizon for tertiary lecturers and students to cross is acceptance of this assessment tool. The research question for this paper is: "Why is online assessment not prevalent at the tertiary level?" The specific aim of this research is to identify the student issues and lecturer issues influencing online assessment at degree level and to draw conclusions on strategies for the acceptance of online assessment. The following sections discuss the theoretical background, the study design, and the focus groups' findings.

2. Theoretical Background of the Research

Online testing is purported to reduce testing time, give instantaneous results, increase test security, and be more easily scheduled and administered than paper-and-pencil tests (Gretes & Green, 2000; Bugbee, 1996). Despite the reported benefits, online assessment is not prevalent at degree level in New Zealand tertiary institutions; this is reflected in the lack of research results found nationally. Research observations of causative factors identified such items as lack of care with the student-assessment interface (Ricketts & Wilks, 2002), student technophobia of computers, and inadequate or absent student "testwiseness" (Lee, 2001).
Inappropriate cognitive domain choice (Nichols, 2003) can be surmised to result from inadequate or absent staff development in online assessment writing (Zakrzewski & Steven, 2000). Online assessment, for the purposes of this research, is defined as self-marking assessment tools, simulation-based assessment tools, and collaborative and feedback-oriented tools (Nichols, 2003). This research discusses online assessment used to assess student learning in an online environment at tertiary level, the importance of these systems for lecturers, and the advantages and difficulties of the method for online students.

3. The Study Design

The study design used a grounded theory approach (Flick, 2002) in which focused interview groups provide the context for the empirical data to be collected (Patton, 1990). The data was collected using separate focused group interviews with students and with lecturers. This allowed independent identification of issues rather than confinement to discussion of the researched issues. Each focus group deliberated on the online assessment issues in the first part of the group interview, with open-ended questions drawing out the participants' personal experiences of those issues. The second half consisted of probing questions on the issues raised. The themes used to structure participants' responses to online assessment were:
- students' testwiseness
- students' computer competency
- the perceived benefits and disadvantages of online assessment (administration, assessment writing, and the student-assessment interface).
4. Lecturer Focus Group Findings

The following comments and discussion are the result of the lecturer focus group.

4.1 Students' Testwiseness

Lecturers expressed definite opinions that multiple choice construction, whether paper-and-pencil or online based, had limited application as the main assessment tool for their varied curriculum. They agreed that assessments must offer opportunities for learning. To create assessment instruments that move beyond recall and recognition, online assessment should, according to Marlowe and Page (1998), be framed along constructivist lines.
Simulation and collaborative-based assessment can provide tools for this constructivist approach.

4.2 Students' Computer Competency

Lecturers believed that a number of factors increased students' test anxiety.
Student mindset also suffers from the immediacy or finality of using online technology: there is no going back.

4.3 Perceived Benefits and Disadvantages

4.3.1 Online Assessment Administration

Online assessment controls that lecturers found effective were supervised test conditions, locked-in time frames, login and password access, practice tests and tutorials on how to answer online questions, and having a subject expert on hand to clarify questions. Lecturers doubted that mimicking classroom exam techniques online was effective, and felt that the technology could provide learning and assessment experiences beyond self-marking tests. The benefits included the instant response from self-marking test banks of questions and the marking time saved. Some programs flagged answers that were not explicit (i.e. short answers) for the lecturer to mark. In terms of reduced marking, one lecturer took 45 minutes to process 80 students who had completed multiple choice and short answer tests drawn from a randomized test bank. Results were then automatically generated by category, and short answers were flagged for the lecturer's attention.

4.3.2 Online Assessment Writing

"Once set up, it's a real plus - and the students come on board really fast. I offer online and paper testing for each test, and have around 99% compliance with the online option. I have 14 test bank based tests set up so far with around 250-300 students each year doing an average of 6 online tests per year, so after 3 years we're reasonably streamlined with it." (Private communication, UCOL lecturer, 2003)

This comment reflected the discussion that the initial development time for self-marking assessments is soon offset by reduced assessment writing and marking time. Often this up-front development time alone stopped lecturer acceptance of the tool.

4.3.2.1 Self-marking online assessment

Lecturers noted that the 'big picture' of the online assessment was often not evident to the student. Design features did not allow students to browse all questions or to attempt them in their preferred order; scrolling was detrimental to answering questions; no allowance was made for going back and changing answers; and self-marking options were not always provided. These comments reflected the lack of flexibility of the online assessment programs.

4.3.2.2 Role-play and simulated interviews

Situated learning, in the form of role-play and simulations, can be a stimulus for assessment items and had been utilized via online discussion and bulletin boards. Lecturers described setting up scenarios appropriate to the subject (systems analysis), assigning roles, and requiring students to refer to concepts and research. Participation was often in the form of asynchronous discussion. Actual assessment of online discussions was based on the number of times a student participated, or on students selecting their best examples of participation and justifying the selection. The cost and development time of the programs, and the planning needed to incorporate such tools into a course, meant this approach was infrequently used.

4.3.3 Student-Assessment Interface

Lecturers found that the programs used to create self-marking assessments were rife with design errors, including a lack of online feedback of results. One example was results that were automatically generated by category but gave no feedback on the actual incorrect answers. This has implications for re-sit examinations if competency-based assessment is used.
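To picture the workflow described in sections 4.3.1 and 4.3.3 (items drawn from a randomized test bank, multiple choice auto-marked, short answers flagged for hand marking, and results reported by category), the following minimal sketch may help. It is illustrative only: the Python data structures and function names are assumptions made for this discussion and do not represent the interface of any particular assessment package.

```python
import random
from dataclasses import dataclass

@dataclass
class Question:
    category: str      # reporting category, e.g. "systems analysis concepts"
    prompt: str
    kind: str          # "multichoice" (auto-marked) or "short" (lecturer-marked)
    answer: str = ""   # model answer, used for multichoice items only

def draw_test(bank, per_category=2, seed=None):
    """Randomize a test by drawing a fixed number of items from each category."""
    rng = random.Random(seed)
    test = []
    for category in sorted({q.category for q in bank}):
        items = [q for q in bank if q.category == category]
        test.extend(rng.sample(items, min(per_category, len(items))))
    return test

def mark(test, responses):
    """Auto-mark multiple choice items and flag short answers for the lecturer."""
    by_category = {}   # category -> [correct, attempted]
    flagged = []       # (prompt, response) pairs the lecturer marks by hand
    for question, response in zip(test, responses):
        if question.kind == "short":
            flagged.append((question.prompt, response))
            continue
        totals = by_category.setdefault(question.category, [0, 0])
        totals[0] += int(response.strip().lower() == question.answer.strip().lower())
        totals[1] += 1
    return by_category, flagged
```

The per-category totals correspond to the results "generated by category" that lecturers mentioned; note that, exactly as observed in section 4.3.3, nothing in such a minimal design tells the student which answers were wrong unless per-item feedback is added deliberately.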
5. Student Focus Group Findings

5.1 Students' Testwiseness

Students expressed definite opinions that self-marking assessments of multiple choice construction, whether paper-and-pencil or online based, had limited application as the main assessment tool for their varied curriculum because, in their opinion, such assessments tested rote learning that was not usually retained. A common theme from students was that online assessment provided no chance to mimic pen-and-paper test techniques. Paper-based tests allowed them to view all the questions at once, add notes as desired, and study the English construction for patterns. The design of the self-marking assessments often frustrated their learnt exam skills, mainly because of scrolling through text and unclear response and input instructions.

5.2 Students' Computer Competency

These students were comfortable with the use of computer technology and had used it from school age.

6. Online Assessment Writing and Design Guidelines

The focus group results will be analyzed further to aid the construction of a survey aimed at gathering good practice online assessment techniques.

6.1 Good Practice Checklist on Online Assessment

The focus groups' results were matched against a good practice checklist for online assessment (Rowlands, 2001).
7. Study Design Part Two

The focus group data collected will be analyzed to develop a survey on causative factors for online assessment acceptance by students and lecturers at degree level. The survey will be administered in 2004 to Association of Polytechnics in New Zealand (APNZ) members offering degree programmes. Respondents who request it, and who provide contact details on the returned survey, will receive information about the outcome of the activity in the form of further published results.

8. Conclusion

Appropriate assessment tool selection and use is a function of the knowledge and skill of the lecturer, and also of the context and purposes for which the assessment tools are used.

8.1 Self-Marking Assessment Tools

Quizzes can be useful for the speed with which items can be checked and the results returned. These types of assessment can be used productively by students to monitor their own progress as they learn online. For self-paced learning, incorrect responses can have a built-in suggestion for further learning. For assessment online, it is recommended that students be given practice sessions to ensure they are familiar with the ways of responding and submitting appropriately; this may avoid inadvertently assessing computer competence rather than subject knowledge. This research raised practical considerations related to assessment design guidelines that will be explored further in the form of empirical data. It was seen as obvious by all involved that self-marking assessments were suitable for items requiring knowledge and comprehension, not application, analysis, synthesis, and evaluation. This is supported by Nichols (2003).
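As an illustration of the built-in suggestion for further learning mentioned above, the short sketch below attaches a study hint to each incorrect option so that a self-paced quiz returns remediation pointers along with the score. The example question, option wording and module references are invented for this illustration and are not drawn from any actual course.

```python
# Illustrative only: the question, options and module references are invented.
REVIEW_QUIZ = [
    {
        "prompt": "Which diagram models the flow of data between processes?",
        "options": {
            "a": ("Entity-relationship diagram", "ER diagrams model data structure; review module 2."),
            "b": ("Data flow diagram", None),   # correct option, so no hint is needed
            "c": ("Gantt chart", "Gantt charts model schedules; review module 4."),
        },
        "answer": "b",
    },
]

def attempt(quiz, responses):
    """Return the score plus a study suggestion for every incorrect response."""
    score, suggestions = 0, []
    for item, choice in zip(quiz, responses):
        if choice == item["answer"]:
            score += 1
        else:
            _, hint = item["options"].get(choice, ("", None))
            suggestions.append((item["prompt"], hint or "Revisit this topic."))
    return score, suggestions

# A wrong answer ("c") comes back with its remediation hint rather than a bare mark.
print(attempt(REVIEW_QUIZ, ["c"]))
```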
8.2 Role-play and Simulation Based Assessment

These tools were seen by both focus groups as excellent both for further learning and for assessing application, analysis, synthesis and evaluation. The problem is that they are not readily available, because they are complex to create and hence expensive.

8.3 Causative Factors for Online Assessment Acceptance

We previously asked: "Does previous experience in sitting an online test influence the implementation of online assessment in education? Does previous experience in setting an online test influence the implementation of online assessment in education?" The answers arising from the focus group discussions were that these were not the major influences on online assessment acceptance. The answer may in fact lie at an institutional administrative level.

8.4 Further Research Implications

Good practice in the selection of online assessment tools and their matching to cognitive domains requires further research. Data gathering for this will be undertaken in 2004.

References

Bugbee, A. C. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of Research on Computing in Education, 28(3), 282-299.

Cashion, J., & Palmieri, P. (2000). The e-quality question. Australian Training Review, 36, 6-7.

Flick, U. (2002). An introduction to qualitative research (2nd ed.). London: SAGE Publications Ltd.

Gretes, J. A., & Green, M. (2000). Improving undergraduate learning with computer-assisted assessment. Journal of Research on Computing in Education, 33(1), 46-54.

Lee, G. (2001). The role of computer-aided assessment in health professional education: Comparison of student performance in computer-based and paper-and-pen multiple-choice tests. Medical Teacher, 23(2), 152.

Marlowe, B. A., & Page, M. L. (1998). Creating and sustaining the constructivist classroom. Thousand Oaks: Corwin Press Inc.

Nichols, M. (2003). Using eLearning tools for assessment purposes. Submitted for publication.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). California, USA: SAGE Publications Ltd.

Ricketts, C., & Wilks, S. J. (2002). Improving student performance through computer-based assessment: Insights from recent research. Assessment & Evaluation in Higher Education, 27(5), 475-479.

Rowlands, B. (2001). Good practice in online learning and assessment. Australia: TAFE NSW – Information Technology, Arts and Media Division.

Zakrzewski, S., & Steven, C. (2000). A model for computer-based assessment: The catherine wheel principle. Assessment & Evaluation in Higher Education, 25(2), 201-216.

Bulletin of Applied Computing and Information Technology Vol 1, Issue 2 (December 2003). ISSN 1176-4120. Copyright © 2003, Irene Toki, Mark Caukill.
Copyright © 2003 NACCQ. All rights reserved.