Bulletin of Applied Computing and Information Technology, 6(1)

Refereed Article A1: Incoming first year university students: Does confidence equate to computing competence?
Shirley Gibbs
Theresa McLennan

Gibbs, S., & McLennan, T. (2008). Incoming first year university students: Does confidence equate to computing competence? Bulletin of Applied Computing and Information Technology, 6(1). Retrieved March 14, 2012 from http://www.naccq.ac.nz/bacit/0601/2008Gibbs_1stYear.htm

Abstract

There is a perception that students entering tertiary education have appropriate computing skills for study purposes, and that there is therefore no longer a need for introductory computing courses in business programmes. First year cohorts were surveyed in 1999 and again in 2007. They were asked about their education and computing backgrounds and to solve some end user computing problems. Issues raised by the surveys were explored further through focussed interviews. Students nowadays have higher confidence in their computing abilities, one reason being increased access to computers and the internet. They are also likely to confuse knowledge and confidence. Surprisingly, the survey results showed no improvement in the specific end user skills tested in 2007 compared with the 1999 students. Therefore, removal of compulsory computing courses for business students is not recommended.

Keywords

First year business students, computing competence, computer literacy, skills perception level

1. INTRODUCTION

There is a general perception that first year university students are now much more computer literate than was the case seven or eight years ago. Increasingly, students are being exposed to a digital environment at younger ages than in the past. They have computers at home and at school, and they own cell phones and MP3 players. They are used to rapid change in technology. However, while they may be more adept at using internet resources for research and communication, this does not necessarily equate to them having good skills in basic calculations or in the management and manipulation of data (Kline & Strickland, 2004; Bartholomew, 2004; Hoffman & Vance, 2005).

Lincoln University is a small New Zealand university with about 4500 students. The course COMP101 Computing has been taught, in some form or other, for more than 20 years. During this time its content has been reviewed a number of times. The objective is to continue to provide appropriate practical skills and background computing knowledge to enable students to complete their degrees and successfully enter the workforce. COMP101 has always been compulsory for all commerce (business) degree students, who normally complete it in the first year of study.

There is a widespread view in the University that COMP101 should no longer be compulsory. It is felt that while the course is useful, particularly for mature aged students, the majority of new students will be much more computer literate than in the past (Eves & Dalziel, personal communication, 23 July 2007). This is not surprising, as academic groups at other universities have espoused the notion that there is no need to compel new business students to study dedicated end user computing (Case, MacKinnon & Dyer, 2004; Wallace & Clariana, 2005). However, the latter authors also concluded that because graduates now require more computing skills, a greater emphasis is needed on teaching them in undergraduate courses. Some years ago, McLennan, Churcher and Clemes (1998) warned of the pitfalls of not teaching end user computing in universities.
They said, “The problems encountered in constructing a complex spreadsheet are just as perplexing as those found during traditional programming, and some of the pitfalls and subtleties are arguably more difficult to come to terms with.” Creating spreadsheets to handle business problems, or designing databases so that data can be accessed in a useful way, is no less complicated in 2007 than it was in 1998. Nowadays graduates are likely to be asked to create commercial web pages as well.

It was decided to investigate the computer literacy of students entering COMP101 in 2007 by surveying semester one students. The survey instrument used was an updated version of one last used in 1999. We wanted to know whether the students of 2007 rated their computing abilities more highly than their counterparts of 1999 and whether levels of confidence had improved between these two groups of students. Both surveys contained the same specific end user computing questions, which could be used to indicate competence in the important areas of spreadsheeting and database management. As well as the survey, some current Lincoln University students were interviewed in depth. They were asked about their computing backgrounds and whether studying computing at university was beneficial.

The remainder of the paper describes the surveys and interviews that were conducted. Key results and conclusions are presented and discussed.

2. METHODOLOGY

Quantitative data were gathered using a survey instrument administered, in the first week of Semester 1, to COMP101 classes in 1999 and 2007. An abridged survey instrument for 2007 is given in Appendix 1. The 2007 instrument was essentially the same as the earlier survey except for an expanded question about computer usage (question 11). The surveys collected information about students’ genders, schooling (including previous computing education) and current computer usage. Students were also asked to rate both their knowledge of and confidence in using computers. Individual students who participated in the survey cannot be identified in any way.

Three knowledge based questions were included to give an estimate of students’ competencies in end user computing. For each student the number of correct answers to the problem solving questions (questions 12 to 14) was calculated. While it would be useful to ask further knowledge questions, only a short amount of lecture time is available for the survey. The specific end user computing questions asked in 1999 are still considered to be a relevant measure. Computer competency has been defined in a number of ways (see, for example, Yoon and Lee, 2007). In this study the authors use the term competency to mean having the knowledge and capability to complete specified spreadsheet and database tasks.

The 2007 survey, discussed below, raised some interesting issues. To gain further insights, qualitative data were collected from focussed interviews with seven current Lincoln University students. The questions asked in the interviews were based on the framework given in Appendix 2. The results are presented in the Interviews section.

3. SURVEY RESULTS

There were 345 responses in 1999 and 141 in 2007. In 1999 COMP101 was only taught in Semester 1, whereas in 2007 it was also offered in other semesters. This largely accounts for the much larger number of surveys completed in 1999. About 40% of the students in both years were female.
While more students had just left school in 1999 than in 2007 (47% compared to 35%), about 70% in both surveys had left school within the last three years. This suggests that both classes had similar proportions of mature aged students. Significantly more students in 2007, 46% compared with 34%, had completed at least one computer course at high school (question 3). No data were collected about students’ ethnicities. Comparative results for the two years are presented for the following topics: self rating of computing knowledge, perceived confidence in using computers, access to computers and the internet, and competence in end user skills.
3.1. Self Rating of Computing Knowledge

In both years survey respondents were asked to rate their knowledge of computers (question 5) by choosing one of the following categories: absolute beginner, some knowledge, average knowledge, pretty knowledgeable and expert. These choices were given a Likert-type score from one to five (one for absolute beginner through to five for expert) and averaged. The averages of 2.4 for 1999 and 2.6 for 2007, while not significantly different, suggest a perception of increased knowledge.

The graph in Figure 1 compares the 1999 and 2007 results. The distribution for 2007 appears to be shifted towards the higher levels compared with that for 1999. A surprisingly large percentage (16%) still classed themselves as absolute beginners in 2007, and three-quarters of these students had left school in the past two years.

Figure 1. A comparison of students’ perceptions of their computer knowledge for 1999 and 2007 COMP101 students

3.2. Perceived Confidence in Using Computers

In question 6, students were asked to rate their own level of confidence in computer use. The choice of categories was: not confident, a little confident, average, confident and very confident. The average scores, computed using a Likert-type score as in the previous section, were 2.6 in 1999 and 2.9 in 2007. While the difference is not statistically significant, the graph in Figure 2 suggests that there has been a shift from the lower categories towards average.

Figure 2. A comparison of students’ perceptions of their confidence using computers for 1999 and 2007 COMP101 students

3.3. Access to Computers and the Internet

As expected, many more students in 2007 had access to the Internet from home (question 9). In 1999 fewer than 60% had a computer they could use at home (question 7), whereas now almost everyone does. There has been a complete reversal in Internet access from home: only 12% had it in 1999, and by 2007 this had risen to 88%. Two-thirds of the 2007 students used broadband rather than dial-up for access (question 10).

In the 2007 survey, question 11 also asked students to indicate how they used computers for other than study purposes. As the results in Table 1 show, reported usage was strongly biased towards online activities. Email was the most common usage, with 81% spending at least some time daily on email.

Table 1. Use of computer applications by 2007 survey respondents. Results listed include usages of less than 1 hour per day.
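For concreteness, the Likert-type scoring used in sections 3.1 and 3.2 can be reproduced with a short Python sketch like the one below. The response counts here are hypothetical illustrations, not the actual survey data.

```python
# Minimal sketch of the Likert-type scoring described in sections 3.1
# and 3.2: each category is mapped to a score from 1 to 5 and the
# scores are averaged. The counts below are hypothetical.
CATEGORIES = {
    "absolute beginner": 1,
    "some knowledge": 2,
    "average knowledge": 3,
    "pretty knowledgeable": 4,
    "expert": 5,
}

counts = {  # hypothetical: category -> number of respondents
    "absolute beginner": 22,
    "some knowledge": 45,
    "average knowledge": 48,
    "pretty knowledgeable": 20,
    "expert": 6,
}

total = sum(counts.values())
average = sum(CATEGORIES[c] * n for c, n in counts.items()) / total
print(f"Average self-rating: {average:.1f} (n = {total})")  # e.g. 2.6
```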
3.4. Competence in End User Skills

Questions 12 to 14 were designed to test students’ basic knowledge of spreadsheet techniques and database query logic. Competence in solving these problems was considered to indicate sufficient knowledge of end user computing to be useful for university study. The competency measures used were considered indicative of the skills we would expect students completing the course to have mastered. Summarised results are given in Table 2.

The first of these questions, a comparatively straightforward spreadsheet IF function, was answered correctly by fewer than 10% in both years. It had a higher proportion of incorrect answers in 2007 than in 1999. The second, arguably trickier, question about copying a mixed reference cell in a spreadsheet was badly answered, with a disturbing 30% giving the same wrong answer (question 13, choice a) in both years. A higher, and similar, percentage in both years got the correct answer for the database query. (Note: this question is simpler than it might be, given that the obvious distracter using an AND rather than an OR is missing. It was deliberately left unchanged in 2007.)

Table 2. Summarised survey responses to the problem solving questions in 1999 and 2007
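The mixed reference behaviour tested in question 13 can be made concrete with a small sketch. The following Python function is an informal illustration (not part of the survey instrument) of the standard spreadsheet anchoring rule: a $ pins the column letter or row number so that it does not change when the formula is copied to another cell.

```python
import re

def shift_ref(ref: str, rows: int, cols: int) -> str:
    """Move a cell reference by (rows, cols), honouring $-anchors.

    A '$' before the column letter or the row number pins that
    coordinate, so it stays fixed when the formula is copied.
    """
    col_abs, col, row_abs, row = re.fullmatch(
        r"(\$?)([A-Z]+)(\$?)(\d+)", ref).groups()
    if not col_abs:                      # relative column: shift it
        n = 0
        for ch in col:                   # column letters -> number
            n = n * 26 + ord(ch) - ord("A") + 1
        n += cols
        col = ""
        while n:                         # number -> column letters
            n, r = divmod(n - 1, 26)
            col = chr(ord("A") + r) + col
    if not row_abs:                      # relative row: shift it
        row = str(int(row) + rows)
    return col_abs + col + row_abs + row

# Copying a formula one row down and one column right:
print(shift_ref("$A1", rows=1, cols=1))  # -> $A2 (column pinned, row moves)
print(shift_ref("A$1", rows=1, cols=1))  # -> B$1 (row pinned, column moves)
print(shift_ref("A1",  rows=1, cols=1))  # -> B2  (fully relative)
```

Predicting results like those in the final three lines without running anything is, in essence, what the question asks students to do.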
Figure 3. Percentage of responses for each of the options in question 12 by 1999 and 2007 COMP101 students

The total number of correctly answered questions, out of three, is given in Table 3. The two years have very similar distributions. This is probably not surprising given the results presented in Table 2. No one in 2007 got all questions correct. It was expected that more 2007 students would have answered all questions correctly, given their stronger computing education (question 3).

Table 3. Number of correct responses (out of 3) to problem solving questions in 1999 and 2007
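Similarly, the database query question (question 14) tests whether students can follow Boolean selection logic. The sketch below uses hypothetical records to show why the missing AND distracter noted in section 3.4 would have mattered: OR and AND conditions select quite different sets of rows.

```python
# Hypothetical records; the actual survey question and its data are
# not reproduced here.
people = [
    {"name": "Ann",  "age": 21, "major": "Commerce"},
    {"name": "Ben",  "age": 34, "major": "Commerce"},
    {"name": "Cath", "age": 19, "major": "Science"},
]

# OR: a row matches if either condition holds.
or_match = [p["name"] for p in people
            if p["age"] < 25 or p["major"] == "Commerce"]

# AND: a row matches only if both conditions hold.
and_match = [p["name"] for p in people
             if p["age"] < 25 and p["major"] == "Commerce"]

print(or_match)   # ['Ann', 'Ben', 'Cath']
print(and_match)  # ['Ann']
```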
4. DISCUSSION OF SURVEY RESULTS

The confidence and knowledge results can be compared with the end user problem solving results to establish whether there was any correlation between how people perceived their own abilities and the correctness of their answers to specific questions. Figures 4a and 4b below show the proportion of the people in each category who got from 0 to 3 correct answers. The results for the two years are fairly similar, except for the small percentage of people (Figure 1) who considered themselves experts. In 1999 a sizeable percentage of the people who thought themselves expert got all questions right; the rest got them all wrong. In 2007 all of the experts got only one question correct. In the middle three knowledge categories, in both years, between 40 and 50% got all questions wrong and a further 45 to 55% got one or two wrong. Therefore perceived knowledge is not a good predictor of end user skills, especially in 2007.

Figure 4a. Percentage of people in each knowledge category who got 0 to 3 end user computing questions correct in 1999
Figure 4b. Percentage of people in each knowledge category who got 0 to 3 end user computing questions correct in 2007

Self-ratings have been found elsewhere to be inaccurate. Studies have shown that students entering university from high school overrate their computing knowledge and competence (Karsten & Roth, 1998; Zhang & Espinoza, 1998; van Braak, 2004; Ballantine, Larres & Oyelere, 2007; Lim & Lee, 2000; Easton & Easton, 2004; Wallace & Clariana, 2005).

Comparing confidence with the number of correct responses is perhaps more interesting. Figures 5a and 5b show the distributions for the two years. Once again we need to be careful about the small numbers of people in the outer categories. For the 1999 data, increasing confidence more or less corresponds to a small increase in the average number of questions correct (from 0.5 questions correct for not confident to 0.9 for very confident). For the 2007 data there is no such trend. Those people who considered themselves more confident did not score any higher than those who were more self-effacing about their ability.

Figure 5a. Percentage of people for each confidence level who got 0 to 3 problem solving questions correct in 1999
Figure 5b. Percentage of people for each confidence level who got 0 to 3 problem solving questions correct in 2007

It is also interesting to note that the increased use of technology has not led to a corresponding increase in end user skills as we measured them. Other authors have also found this. Van Braak (2004, p. 309) said, “Access to computers, however, does not guarantee that all students are equally proficient in performing computer-related tasks.”

5. INTERVIEWS

The interviews were structured conversations based around the questions in Appendix 2. They were conducted in semester two, 2007. Seven students, from a variety of backgrounds, were interviewed; none was majoring in software or information technology. They were studying in a range of disciplines from foundation studies to agriculture. All but two had previously studied COMP101. Student A, of Chinese origin, had also completed 200 and 300 level COMP subjects at Lincoln. Students B, C and D had participated in this year’s COMP101 survey. These three, along with Student E, were currently enrolled in the second year end user computing subject for which COMP101 was a prerequisite. Student F was a third year student who had not studied computing at all.
Student G, the final interviewee, was an international student studying foundation studies computing (a lower level than COMP101). Students A and B were mature aged; the rest had gone to university directly from school. All the younger students except Student F had studied computing to NCEA Level 3. Three females and four males were interviewed.

The students were asked to rate their computer ability (questions 9 and 10) on a scale of one to ten. The results were not readily predictable. For example Student F, with no formal computing education, gave herself a higher (arbitrary) rating than Student A, who had studied end user computing at the 300 level. Student F said, “When I came to university probably three or four but now probably five or six,” whereas the more computing-educated Student A was more aware of his deficiencies: “I would have put myself at a three or four [on coming to university] now probably four or five.” Student G, enrolled in a sub-degree level introductory computing course, described his pre-university ability as, “Eight out of ten at least that. Now I would probably say seven out of ten. When I started doing computers here I discovered there were things that I didn’t know how to do.”

A key point which surfaced from the focused interviews was that all the students interviewed seemed to mistake confidence for knowledge. This was demonstrated on more than one occasion: when a student was asked how they rated their ability on entry to university, the first word spoken was “confident”. Student C said, “I was pretty confident … I could do most things” and Student D said, “I was confident but there was still a lot to learn.”

The survey identified much higher home use of computers in 2007 than in 1999. All interviewed students had access to a computer at home. When asked how these were used, the responses were strongly biased towards online uses. Student D said, “I go on the NZ Herald site quite a bit to look up the news. Just to keep up with what is happening around the world.” Social networking was mentioned by several. “I use it to keep in touch with them [friends] but I don’t really get into it that much” (Student B). “Facebook is incredibly handy these days with all my friends having access as well” (Student G).

The problem solving questions in the survey were not answered better by the 2007 cohort, even though more of these students had previous computing education. The five recent school leavers interviewed had expected that any computing they would need to do at university would be easy. They had all considered themselves to be confident and competent; however, the more they thought about it, the less convinced they were. Student D said, “I was pretty confident … I thought I could do most stuff on Excel but it just seems that there is so much more depth as you go on.” Student E had similar views: “At the start I thought that Comp maybe wasn’t a subject that I really needed. Later I realised that it was a handy subject to take. We had done Excel at High School but really quite basic sorts of things.” Studies elsewhere have also reported similar declines in students’ perceptions of their own confidence as they progress through their degree education (Johnson, Bartholomew & Miller, 2006). This type of error of omission is not uncommon in self assessment situations (Caputo & Dunning, 2005).

Student F had not taken any computing subjects as part of her degree.
On entering university she had expected to use computers quite a lot for course work, but had assumed that she would get some instruction: “I must admit I did think that if an assignment required some form of computing that we would be told how to use it within that class but that hasn’t really been the case.” This student abandoned the use of the computer for an assignment because using the application felt too difficult. “I hand wrote the whole lot - 28 pages - whereas a lot of others did it on Excel and they only had about four pages because all they had to do was to change one formula … to get the next result. I thought that it would take too long to figure out how to do it with Excel so I did it all by hand.”

Sadly, what often happens in these situations is that people are left to teach themselves. In some cases this works out well, but in others it leads to frustration, wasted time and the possibility of inaccurate work (Lim & Lee, 2000; Larres, Ballantine & Whittington, 2003). This point was well articulated by Wallace and Clariana (2005, p. 9) when they said, “While some students may eventually pick up some computer skills during the course of their degree program, they would most likely learn them imperfectly.”

6. CONCLUSION

This study investigated the perception that students entering university have a greater knowledge of, and competency in, computing skills than in the past. Survey data, last collected in 1999, were again collected from COMP101 introductory computing students in 2007. The resulting data show that while the 2007 students now own and use computers much more extensively, there are still gaps in their computing knowledge. The 2007 students consider themselves more knowledgeable and confident than their 1999 counterparts. No doubt this confidence is reinforced by their use of the Internet and all it has to offer. However, the results of the knowledge based questions suggest that they are not more competent at using office applications. The analysis also confirmed that self-rating is not a reliable indicator of competency. While the data presented are not exhaustive, they suggest that it may well be premature to remove COMP101 as a compulsory subject for commerce and IT students at Lincoln.

From the interviews we found that students who had previously studied computing at high school were surprised to find how much computing there was still to learn. Often little help is given to students when they are required to use new tools in non-computing classes, because the teachers themselves may believe the students have the necessary knowledge simply because they come from a generation born into a technological era.

The authors believe that in New Zealand tertiary institutions there is still a place for introductory computing subjects. The results presented here suggest that this will remain the case in the near future. Ongoing studies are needed to monitor the situation as the technological and education environments change.

7. ACKNOWLEDGEMENT

The authors wish to thank all the students who participated in the surveys and interviews.

REFERENCES

Ballantine, J., Larres, P., & Oyelere, P. (2007). Computer usage and the validity of self assessed computer competence among first year business students. Computers and Education, 49, 976-990.

Bartholomew, K. (2004). Computer literacy: Is the emperor still exposed after all these years? Consortium for Computing Sciences in Colleges, pp. 323-331.

Caputo, D., & Dunning, D. (2005). What you don’t know: The role played by errors of omission in imperfect self assessments. Journal of Experimental Social Psychology, 41, 488-505.
Case, T., MacKinnon, R., & Dyer, J. (2004). Computer literacy and the introductory student: An analysis of perceived and actual knowledge of computers and computer applications. Annual Conference of the Southern Association for Information Systems, pp. 278-284, Savannah, Georgia, February 27-28, 2004.

Easton, A., & Easton, G. (2004). Trends in self-assessment of computer literacy. Proceedings of the Academy of Educational Leadership, 9(1), 85-88.

Eves, C., & Dalziel, P. (2007). Personal communication from the Commerce Self Review Committee, Commerce Division, Lincoln University, 23 July 2007.

Hoffman, M., & Vance, D. (2005). Computer literacy: What students know and from whom they learned it. SIGCSE ’05, 356-360.

Johnson, D., Bartholomew, K., & Miller, D. (2006). Improving computer literacy of business management majors: A case study. Journal of Information Technology Education, 5, 77-92.

Karsten, R., & Roth, R. (1998). The relationship of computer experience and computer self-efficacy to performance in introductory computer literacy courses. Journal of Research on Computing Education, 31(1), 14-24.

Kline, D., & Strickland, T. (2004). Skill level assessment and multi-section standardization for an introductory microcomputer applications course. Issues in Information Systems, V(2), 572-578.

Larres, P., Ballantine, J., & Whittington, M. (2003). Evaluating the validity of self assessment: Measuring computer literacy among entry level undergraduates with accounting degree programmes at two UK universities. Accounting Education, 12, 97-112.

Lim, K., & Lee, J. (2000). IT skills of university undergraduate students enrolled in a first year unit. Australian Journal of Educational Technology, 16(3), 215-238.

McLennan, T., Churcher, C., & Clemes, S. (1998). Should end user computing be in the computing curriculum? Software Engineering: Education and Practice, pp. 346-352, Dunedin, New Zealand, January 26-29, 1998.

van Braak, J. (2004). Domains and determinants of university students’ self perceived computer competence. Computers & Education, 43, 299-312.

Wallace, T., & Clariana, R. B. (2005). Perception versus reality - Determining business students’ computer literacy skills and need for instruction in information concepts and technology. Journal of Information Technology Education, 4, 141-151.

Yoon, C. Y., & Lee, K. M. (2007). An end user evaluation system based on computing competency and complex indicators. IEEE International Conference on Information Reuse and Integration (IRI 2007), 13-15 August 2007.

Zhang, Y., & Espinoza, S. (1998). Relationships among computer self-efficacy, attitudes toward computers, and desirability of learning computing skills. Journal of Research on Computing in Education, 30(4), 420-436.
APPENDIX 1: COMP101 2007 Student Background Survey (Abridged*)

Tick the box next to your answer for each question. Choose the “don’t know” option if you cannot answer a question.
* Three questions from the original survey are not included here. These questions were not considered relevant for this study.

APPENDIX 2: Framework for Questions Asked in Student Interviews

(Note: these questions are in no particular order and, depending on answers given to preceding questions, may not have been asked of everyone.)