Bulletin of Applied Computing and Information Technology

Article B3: Report on the Fourth BRACElet Workshop

Volume 5, Issue 1
Jacqueline L. Whalley & Phil Robbins

Whalley, J., & Robbins, P. (2007, June). Report on the fourth BRACElet workshop. Bulletin of Applied Computing and Information Technology, 5(1). ISSN 1176-4120.

ABSTRACT

This paper reports on the activities of the fourth BRACElet workshop, held in conjunction with the 19th Annual NACCQ Conference. The BRACElet project is a longitudinal, multi-institutional, multinational investigation into the code reading, code comprehension and code writing skills of novice programmers.

KEYWORDS

Multi-institutional, Computer Science Education, Research, Novice programmers.

1. WORKSHOPS TO DATE

The workshop reported on in this paper follows on from three previous BRACElet workshops. In order to put this workshop in context, a brief outline of the outcomes from prior workshops follows.

During the second workshop, held at the 18th Annual NACCQ conference in 2005, participants analysed and evaluated the results from the pilot studies. The results were used to further hone the research instrument. The Bloom and SOLO taxonomies were used to try to gain a better understanding of the levels of difficulty of the multiple-choice questions (MCQs) and short-answer questions. The participants went away with a toolkit that allowed them to undertake a fully-fledged study at their respective institutions. The results from this work were disseminated at ACE 2006 (Whalley et al., 2006) and further papers were authored over the next few months by subgroups.

A third workshop was held at AUT in late March 2006. At this workshop we developed a prototype 'common framework' (Lister, Whalley, & Clear, 2006) that allows researchers to compare and contrast studies undertaken within BRACElet, but that also gives them the flexibility to tailor research to their particular interests.

2. THE GOALS OF THE FOURTH WORKSHOP

The main purpose of the fourth workshop was to review the common framework, which had been initiated at the third workshop, and to continue developing it so that it would be useful for the next phase of the project: an investigation into the writing skills of novice programmers. It was also intended that we would revisit and evaluate any results to date. However, given the organic nature of the working group, the work in a workshop tends to be driven largely by the participants themselves, with the agenda being used only as a loose guide.

3. PARTICIPANTS

The working group consisted of seven participants from four institutions within New Zealand (Table 1). All non-attending BRACElet participants were kept informed of the outcomes of the workshop by way of a report and during Skype™ meetings.

Table 1. Fourth workshop participants
4. GATHERING DATA FROM EXAM SCRIPTS

One of the working group participants reported back on the initial analysis of their second data collection phase for BRACElet. They had included a number of questions in their 2006 final exam that were based on problems developed during earlier BRACElet workshops. The BRACElet-style questions were mixed with more traditional questions. The students were not given access to a computer during the two-hour exam. Forty-five students took the exam, with marks ranging from 24.5% to 89% and an average of 67%.

Section A consisted of 12 compulsory short-answer questions. Unlike the first iteration of the BRACElet study (Whalley et al., 2006), no multiple-choice questions (MCQs) were used. On five of these short-answer questions the average mark was over 80%, suggesting that these were the questions most students found relatively easy. The questions were: deducing the number of parameters and the return types from method signatures (86%), identifying syntax errors (86%), completing code for a simple class* (82%), data types (82%), and identifying examples of terms (e.g. local variable) from the code for a simple class* (81%). Another five questions proved moderately difficult: two small pieces of code for which a numerical output had to be calculated* (77%), identifying the purpose of a method* (70%), two pieces of code to be explained in plain English* (69%), basic definitions (67%), and writing a test case (66%). This left two questions which students clearly found harder. The first asked students to explain the differences between two pairs of terms: (a) class and object, and (b) actual and formal parameters. The average mark for this was 52%. The hardest question in Section A, in terms of student performance, required students to explain some terms concerned with testing: positive and negative testing, and self-documenting data. This scored only 48%. The questions with the lowest average marks were the more traditional questions requiring an understanding of theory, rather than the questions developed through BRACElet (marked * in the description above).

Section B offered a choice of three from five longer-answer questions. Two questions were handled significantly better than the others: answering questions about code and writing a mutator method (68%), and writing test data for a method that required a date to be input (59%). Two other questions proved quite hard: defining iteration and giving examples (44%), and a question about coupling and cohesion (43%). The question that received the lowest overall mark was based on the question from the original BRACElet instrument that had also proved very hard to answer correctly. As in the original problem, we gave the students the code and described the aim of the code section. This time we asked them for the expected output, the actual output, and the change required to make the output correct. Twenty-five students opted to answer this question, with an average mark of only 38%. Nobody found the correct change to make the code work as required, and several could not work out the expected result from the description.

The workshop participants concluded that analysing BRACElet-style questions within the BRACElet framework, using naturally occurring data from exams, assignments and formative assessment, could provide a wealth of insight into the novice programmer.
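To make the style of this last question concrete, the sketch below is a hypothetical fixed-code question of the kind described: the aim of the code is stated, the code contains one deliberate bug, and students are asked for the expected output, the actual output, and the change required to make the output correct. The class, the method and the particular bug are assumptions for illustration only; the actual exam item is not reproduced in this report.

// Hypothetical fixed-code question (illustration only, not the actual exam item).
public class FixedCodeQuestion {

    // Stated aim: return the sum of all the values in the array.
    static int sum(int[] values) {
        int total = 0;
        for (int i = 1; i < values.length; i++) {   // deliberate bug: the loop skips index 0
            total += values[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // For {3, 5, 2} the expected output (per the stated aim) is 10,
        // the actual output is 7, and the required change is to start the loop at i = 0.
        System.out.println(sum(new int[] {3, 5, 2}));
    }
}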
Errol Thompson reported back to the working group on the most recent BRACElet work using variation theory and how this could be applied to the BRACElet classification-style questions within a teaching and learning context (Thompson et al., 2006). He also gave a tutorial on using the SOLO taxonomy to analyse short-answer questions for those participants who had not yet assisted in the data analysis phases of the project.

5. DEVELOPING THE COMMON ANALYTIC FRAMEWORK

In the BRACElet common framework design (Lister, Whalley, & Clear, 2006) we drew components from the psychology of programming literature while at the same time trying to ensure that the framework could be applied in a real teaching environment. During this workshop we identified the key components of our experimental toolkit and analytic framework. The framework now has four clear components. The components can be used in isolation or combined with one another, and are intended to contribute towards a shared understanding among BRACElet researchers. The framework should enable researchers to combine or compare data collected in different institutions in a meaningful way.

5.1 Component I: Bloom's Taxonomy

Bloom's taxonomy (framework component I) is used to categorise the cognitive domain and process implicit in questions: the questions that students are set are placed into categories that indicate the cognitive process required to complete them. This facilitates the comparison or combination of data obtained from different question sets. The taxonomy provides a way of categorising both MCQs and short-answer questions prior to test administration. When using Bloom's taxonomy we have experienced difficulty in reaching agreement on the cognitive process, and even the cognitive domain, in which a particular question lies. However, the taxonomy has been a useful tool in areas other than computer programming and appears to have the potential to become a useful component of the BRACElet framework. We are therefore continuing to work towards a version of the taxonomy that will prove reliable and useful for classifying computer-programming problems.

5.2 Component II: SOLO Classification

The SOLO analysis (framework component II) provides a way of classifying the reasoning level displayed in answers: the reasoning level students have used when solving a problem, i.e. what was going on in the students' heads (Biggs & Collis, 1982; Whalley et al., 2005; Lister et al., 2006; Thompson et al., 2006). Within the scope of the work envisaged for BRACElet studies, the SOLO analysis will be most useful as a way of categorising student responses to short-answer questions. As SOLO analysis requires information about how students have reasoned in reaching an answer, it is not useful as a way of classifying MCQs.

In prior workshops we had developed and examined two question types based on the SOLO taxonomy: the Classification and Code Purpose question types. Each type was designed to elicit a higher-order SOLO response (a relational or more abstract response), whereas many traditional assessment forms (e.g. fixed-code, skeleton-code and theory questions targeting the lower end of the Bloom taxonomy, such as identify, name, etc.) ask for a lower-order expression of learning (a more concrete response). When we talk about SOLO we can therefore simplify the discussion to whether or not students exhibit a level of reasoning or description above the concrete code.
For this reason our framework contains a small number of SOLO categories for comparative SOLO analysis. In this workshop we agreed upon four primary classifications (Table 2). This does not prevent individual researchers from analysing and classifying their data at a more detailed level, and they may extend these primary classes; however, comparative analysis across multiple studies should be made using the primary classes.

Table 2. The primary SOLO classes for the BRACElet common framework.
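As a concrete illustration of the distinction the primary SOLO classes are intended to capture, the sketch below shows the kind of fragment a Code Purpose ('explain in plain English') question might present, together with hypothetical responses at different levels of abstraction. The code and the sample responses are assumptions for illustration only; they are not items or answers drawn from a BRACElet study.

// Hypothetical "explain in plain English" (Code Purpose) fragment - illustration only.
public class ExplainInPlainEnglish {

    static boolean check(int[] numbers) {
        boolean result = true;
        for (int i = 0; i < numbers.length - 1; i++) {
            if (numbers[i] > numbers[i + 1]) {
                result = false;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(check(new int[] {1, 3, 3, 7}));  // true: the values never decrease
        System.out.println(check(new int[] {2, 5, 4}));     // false: 5 is followed by a smaller value
    }
}

// A relational (higher-order SOLO) response summarises the purpose of the whole:
// "check returns true only if the array is in non-decreasing (sorted) order."
// A lower-order response works line by line, describing the loop, the comparison and
// the flag, without ever stating that overall purpose.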
Development of the common framework has been an organic process. The framework has evolved as the study has progressed, but we now feel that the SOLO classification component of the framework has a fairly solid foundation.

5.3 Component III: Categorisation of Question Type

The set of question types, component III of the framework, provides another way of categorising questions and thereby helps researchers to work within a common research framework without having to adhere strictly to a prescribed set of questions. During the workshop the group identified problem types of particular interest for future data collection phases, and the problem pool was identified as a framework component that was itself categorised into classes based on question type. (The question types existing in the BRACElet framework to date are given in Table 3.)

Many of the questions in the BRACElet problem pool feature code tracing, to determine whether students can read and trace code (for example, skeleton-code and fixed-code questions (Lister et al., 2004)). Indeed, some deliberately contain small bugs, which may have discouraged students from reasoning at a high level and instead encouraged them to hand-execute the code concretely. This type of question has its role in the assessment of novice programmers, but it precludes assessment that encourages students to think like an expert. The literature shows that experts look for higher meaning in code (Chi et al., 1988). In light of the research on the differences between novices and experts, we believe that one of our teaching aims should be to encourage students to reason about their code at a level above the concrete code (Thompson et al., 2006, p. 291).

Table 3. Common Framework component IV: the Assessment Framework
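To contrast with the fixed-code example given earlier, the sketch below illustrates a hypothetical skeleton-code question, in which students supply a missing expression rather than locate a bug. The method, the wording and the blank are assumptions for illustration only, not items from the BRACElet problem pool.

// Hypothetical skeleton-code question (illustration only). In the question as set,
// the test marked below would be blanked out and students asked to supply it so that
// the method returns the largest value in a non-empty array.
public class SkeletonCodeQuestion {

    static int largest(int[] values) {
        int max = values[0];
        for (int i = 1; i < values.length; i++) {
            if (values[i] > max) {   // <-- this condition is the blank students must fill in
                max = values[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(largest(new int[] {4, 9, 2}));  // prints 9
    }
}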
5.4 Analysis of Student Script Annotations (Extending the Work by McCracken et al., 2005)

The use of some forms of annotation has been found to be common amongst expert programmers, and many novice programmers are taught to use annotations to facilitate problem solving. The procedure that a student uses to attempt to solve a given programming problem may often be apparent from the annotations the student writes on their script. The analysis of annotations can provide researchers with information about the way in which students have gone about solving particular problems. It can also provide further insight into the type of cognitive processes the students engage in when attempting to solve a problem.

6. FUTURE DIRECTIONS

6.1 Ethics and Data Collection Issues

Many of the BRACElet participants find ethical clearance difficult to obtain. In most cases they are limited to using volunteers drawn from small student cohorts. These two factors combined mean that these participants are unable to obtain sufficient data from within their own institutions to complete a piece of research. During the workshop we identified ways in which they could obtain sufficient data. Firstly, they could run a common problem set, under similar environmental conditions, and combine the data. Normally this would limit them to working with other institutions teaching the same programming language. However, by using appropriate components of the framework they could classify the questions during the development of the question set, to ensure that iterations of the question set presented in different languages were measuring the same cognitive processes. Secondly, they could use the framework during both the construction of question sets and the analysis of student responses, to make it possible to combine data from different question sets in a meaningful way. In this way, cooperating researchers could obtain sufficient data for meaningful analysis. They could also turn the fact that they have low numbers of subjects to their advantage by carrying out additional studies using think-out-loud style interviews (Lister et al., 2004). In this case, fewer students could result in greater depth of research and the ability to combine qualitative and quantitative methodologies to enrich their findings.

We also defined two acceptable data collection modes for participants, namely:
We anticipate that many people will readily be able to participate in, and get ethical clearance for, studies of type (1) but not type (2). We also anticipate that type (1) studies will prove to be a good way of recruiting people who may go on to join type (2) studies.

6.2 Future Problem Sets: The Research Instruments

During the workshop it was concluded that all future problem sets should aim to incorporate at least three elements of the common framework. The group proposed that the BRACElet team develop a sample set of questions, as exemplar anchor points that we all agree upon, and use the question-type categories to classify those exemplars. In conclusion, the working group additionally identified the following types of questions as key to any future problem sets developed within the common framework:
7. ACKNOWLEDGEMENTS

The authors would particularly like to thank the participants of the fourth BRACElet workshop. We would also like to acknowledge the entire BRACElet working group (past and present) for their contributions to the project as a whole and for their helpful discussions.

REFERENCES

Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (Eds.). (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman.

Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.

Lister, R., Adams, E. S., Fitzgerald, S., Fone, W., Hamer, J., Lindholm, M., McCartney, R., Mostrom, J. E., Sanders, K., Seppala, O., Simon, B., & Thomas, L. (2004). A multi-national study of reading and tracing skills in novice programmers. SIGCSE Bulletin, 36(4), 119-150.

Lister, R., Whalley, J., & Clear, T. (2006). For discussion: A framework for a meta-project on student programmers (BRACElet Technical Report No. 0106). Auckland: Auckland University of Technology.

Lister, R., Simon, B., Thompson, E., Whalley, J. L., & Prasad, C. (2006). Not seeing the forest for the trees: Novice programmers and the SOLO taxonomy. Proceedings of the 11th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (pp. 118-122). Bologna, Italy: ACM Press.

McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagen, D., Kolikant, Y., Laxer, C., Thomas, L., Utting, I., & Wilusz, T. (2001). A multi-institutional, multi-national study of assessment of programming skills of first-year CS students. SIGCSE Bulletin, 33(4), 125-140.

Parsons, D., & Haden, P. (2006). Parson's programming puzzles: A fun and effective learning tool for first programming courses. Proceedings of the Eighth Australasian Computing Education Conference (ACE2006), Hobart, Australia. CRPIT, 52, 157-163.

Soloway, E. (1986). Learning to program = learning to construct mechanisms and explanations. Communications of the ACM, 29, 850-858.

Thompson, E., Whalley, J. L., Lister, R., & Simon, B. (2006). Code classification as a learning and assessment exercise for novice programmers. Proceedings of the 19th Annual Conference of the National Advisory Committee on Computing Qualifications (NACCQ) (pp. 291-298). Wellington, New Zealand: NACCQ.

Whalley, J. L., Lister, R., Thompson, E., Clear, T., Robbins, P., Kumar, P. K. A., & Prasad, C. (2006). An Australasian study of reading and comprehension skills in novice programmers, using the Bloom and SOLO taxonomies. Proceedings of the Eighth Australasian Computing Education Conference (ACE2006), Hobart, Australia. CRPIT, 52, 243-252.

Copyright © 2007 Jacqueline L. Whalley and Phil Robbins