THE CHALLENGES AND STATISTICAL IMPLICATION OF COMPUTER-BASED TESTING (JAMB) ON NIGERIAN STUDENTS: THE NEED TO IMPLEMENT COMPUTER-ASSISTED LEARNING
TABLE OF CONTENTS
Table of contents
CHAPTER ONE: INTRODUCTION
1.1 Background of the research
1.2 Statement of research problem
1.3 Objectives of the study
1.4 Research Questions
1.5 Significance of the study
1.6 Scope of the study
1.7 Limitation of the study
1.8 Definition of terms
CHAPTER TWO: LITERATURE REVIEW
2.1 Review of concept
2.2 Review of related work
2.3 Empirical studies
2.4 Theoretical framework
2.5 Summary of the review
CHAPTER THREE: SYSTEM ANALYSIS AND DESIGN
3.2 Method of data collection
3.3 Data preparation
3.4 Program structure
3.5 File maintenance module
3.6 Main menu specification
3.7 Problem of the existing system
3.8 Justification for the new system
3.9 System modeling
3.10 Information flow diagram
3.11 System flow chart
3.12 Activity diagram
3.14 Program flow chart
3.15 Database specification and design
CHAPTER FOUR: SYSTEM TESTING AND DOCUMENTATION
4.2 Programming language justification
4.3 Systems requirement
4.4 Implementation details
4.5 Procedure testing plan
SUMMARY, CONCLUSION AND RECOMMENDATIONS
In recent times, Computer-Based Assessment (CBA) was introduced as a new assessment mode in some tertiary educational institutions in Nigeria. This is a sharp departure from the traditional paper-and-pencil mode of testing. The trailblazing tertiary educational institutions in Nigeria in the use of this innovation include the University of Ilorin, the University of Benin, the University of Lagos and the National Open University of Nigeria, to mention but a few. Some Polytechnics and Colleges of Education also introduced CBA for their yearly entrance examinations. These institutions started this form of examination with the Post-University Matriculation Examination (Post-UME).
Some of these institutions have started using CBA for their semester examinations, especially where classes are very large. For instance, the University of Ilorin has been using the system for the past three years for all levels of students (Jimoh, 2010). In the same vein, the National Open University of Nigeria (NOUN) examined its distance learners using CBA for the first time in the May/June 2010 semester examination. CBA used to be seen as an examination mode for developed nations, but it is now in practice in several developing nations of the world, including Nigeria.
Wikipedia (2010) defines Computer-Based Assessment (CBA), also known as e-assessment, computerized testing or computer-administered testing, as a method of administering tests in which the responses are electronically recorded, assessed, or both. As the name implies, computer-based assessment makes use of a computer or an equivalent electronic device such as a cell phone. CBA systems enable educators and trainers to author, schedule, deliver and report on surveys, quizzes, tests and examinations. Computer-Based Assessment may be a stand-alone system or part of a virtual learning environment, possibly accessed via the World Wide Web. Virtual learning environments work over the Internet and provide a collection of tools, such as those for assessment (particularly of types that can be marked automatically, such as multiple-choice test-item papers).
A good example of Computer-Based Assessment is the Business Language Testing Service (BULATS), an online test that pinpoints the candidate's ability quickly and accurately by using adaptive testing techniques. As the candidate progresses through the test, the computer selects the next question on the basis of the previous answers, so the questions become progressively easier or more difficult until a consistent level of ability is established.
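The adaptive selection described above can be illustrated with a short sketch. This is a toy model, not the actual BULATS algorithm: the 1-5 difficulty scale, the single-step adjustment rule and the logistic response model are all assumptions made for the example.

```python
import math
import random

def next_difficulty(current, correct, step=1, lo=1, hi=5):
    """Pick the next item's difficulty: harder after a correct answer,
    easier after a wrong one, clamped to the 1-5 scale."""
    return min(hi, current + step) if correct else max(lo, current - step)

def run_adaptive_test(ability, n_items=20, seed=42):
    """Simulate a candidate of the given ability working through an
    adaptive test; returns the (difficulty, correct) history."""
    rng = random.Random(seed)
    difficulty = 3  # start in the middle of the scale
    history = []
    for _ in range(n_items):
        # Logistic model: the further item difficulty exceeds the
        # candidate's ability, the less likely a correct response.
        p_correct = 1.0 / (1.0 + math.exp(difficulty - ability))
        correct = rng.random() < p_correct
        history.append((difficulty, correct))
        difficulty = next_difficulty(difficulty, correct)
    return history
```

After enough items the difficulty sequence tends to oscillate around the candidate's true ability, which is the consistent level of ability the paragraph above refers to.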
In 1998, TOEFL (Test of English as a Foreign Language) began switching from a paper-based test to a computer-based test (CBT) in many parts of the world. The test combines many of the same question types as the traditional paper-based test with new question types that can be offered only on the computer. While many examinations are still administered in the paper-based format, most testing companies in the United States of America are following the national trend of computer-based testing. Each computer test takes the candidate through a short tutorial to instruct the examinee on the use of the computer and how to answer test questions. The test administrator is available at all times for technical assistance. Many candidates find the individual, distraction-free environment and, in most cases, the immediate score report feedback very attractive features of computer-based testing.
1.1 BACKGROUND OF THE RESEARCH
During the past few years, technology has significantly reshaped the method of assessment.
In many academic domains, educational measurement has been moving towards the use of computer-based testing (CBT), defined as tests or assessments that are administered by computer, on either stand-alone machines or dedicated networks, or by other technology devices linked to the Internet or the World Wide Web, most of them using multiple-choice questions (MCQs) (Sorana-Daniela & Lorentz, 2007). Computer-based tests have been used since the 1960s to test knowledge and problem-solving skills (Peter, Bill & David, 2004). Computer-based assessment systems have enabled educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests and exams.
There are two main types of computer-based testing. The most familiar type is where candidates fill in their responses on a paper form, which is fed into a computer's optical mark reader. This reads the form, scores the paper, and may even report on the test's reliability. The second type of computer-based testing is where computers provide an assessment interface for students: they input their answers and receive feedback via a computer (Peter, Bill & David, 2004).
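For the second type, automatic marking of multiple-choice responses is conceptually simple. The sketch below is illustrative only; the function name, item labels and candidate data are invented for the example.

```python
def score_responses(answer_key, responses):
    """Mark each candidate's multiple-choice answers against the key,
    awarding one mark per correct item."""
    return {
        candidate: sum(
            1 for item, choice in answers.items()
            if answer_key.get(item) == choice
        )
        for candidate, answers in responses.items()
    }

# Hypothetical three-item test:
key = {"Q1": "A", "Q2": "C", "Q3": "B"}
responses = {
    "cand1": {"Q1": "A", "Q2": "C", "Q3": "D"},
    "cand2": {"Q1": "B", "Q2": "C", "Q3": "B"},
}
print(score_responses(key, responses))  # {'cand1': 2, 'cand2': 2}
```

Because marking is mechanical, the same routine serves both types: it can score forms read in by an optical mark reader or answers entered directly at a computer interface.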
An effective method of student assessment is necessary in chemistry, as in all areas and levels of education. Due to an increase in student numbers, ever-escalating work commitments for academic staff, and the advancement of Internet technology, the use of computer-assisted assessment has been an attractive proposition for many higher education institutions (Darrell, 2003). Since their first use, computer-assisted test construction systems have made a major impact on the design and generation of chemistry examinations at many universities and colleges.
Currently at the University of Ilorin, the traditional method (a combination of essay examinations and practical examinations) is the most widely used means of evaluating students' knowledge. In the past few years, the number of students has increased drastically, and the conventional examination method has become time-consuming in terms of the time required for evaluation and assessment. An automated testing system is one solution for examining large classes of students, and the University of Ilorin introduced such a system in 2008, primarily to address this concern among others.
Generally, the advantages of CBT systems over traditional paper-and-pencil testing (PPT) have been demonstrated in several comparative works and, as mentioned by Peter, Bill and David (2004), CBT is not just an alternative method for delivering examinations; it represents an important qualitative shift away from traditional methods such as paper-based tests. Despite the advantages of computerized test administration, however, it has been shown that CBTs are not intrinsically better than paper-and-pencil tests (John, Cynthia, Judith & Tim, 2002). A previous study (Fyfe et al., n.d.) even found that testing format does not affect test scores, and as such CBT can be considered a valid and acceptable testing mode.
As CBT began to be used for summative assessment, establishing whether computer based testing performance was comparable to that of paper based assessment became important.
Researchers have performed large-scale reviews of studies examining differences in performance between CBT and paper-based versions of tests and have generally found that when CBT is similar in format to pencil-and-paper tests, it has little if any effect on test performance (Darrell, 2003).
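Such comparisons are commonly made with a two-sample test on the scores obtained under each mode. The sketch below computes Welch's t statistic (which does not assume equal variances); the score lists are invented illustrative data, not results from the studies cited.

```python
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples; a t close to
    zero is consistent with no mode effect on scores."""
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical scores from the same cohort under each mode:
cbt_scores = [62, 70, 55, 68, 74, 60, 66]
ppt_scores = [60, 71, 57, 66, 72, 61, 64]
print(round(welch_t(cbt_scores, ppt_scores), 3))
```

A small |t| (as with the illustrative data here) would fail to reject the hypothesis that mode has no effect, mirroring the finding reported above; the statistic would normally be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.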
From the students' perspective, there have been a number of mixed reactions to CBT. Previous research showed that more people anticipated problems with computer-assisted assessment than actually had them (Darrell, 2003). That research also showed that, despite fewer students being confident about CBT before completing the assessment, more students stated a preference for CBT afterwards. A previous study likewise indicated a preference for CBT over PPT (Fyfe et al., n.d.). Some studies reported the main disadvantage as being increased anxiety among those unfamiliar with the use of computers (Darrell, 2003; Fyfe et al., n.d.), with such students describing themselves as “technophobic”.
The challenge to test examinees by means of microcomputers demands appropriate software design. To comply with this demand, students’ beliefs or perceptions on the advantages and disadvantages of a computerized test are important since user perceptions and criticism are crucial in the acceptance, implementation and improvement of computerized tests.
Furthermore, whilst recognising the system-level advantages associated with CBT, it is important to explore the relationship between assessment mode and the behaviour of the students being assessed. If the term “affordances” is used to describe what is made possible and facilitated, and what is made difficult and inhibited, by a medium of assessment (Johnson & Green, 2004), then it is possible that the affordances offered by computer-mediated assessment may affect the perceptions of students involved in computer-based assessment differently than if they were engaged in paper-based assessment (Johnson & Green, 2004).
In general, several areas appear worthy of investigation, including issues related to quality factors that may influence performance and student perceptions regarding computer-based tests.
Students' perception of CBT for chemistry courses is clearly an under-explored topic. This study describes the findings in this domain with a view to disseminating good practice, guidelines, and models of implementation and evaluation of a particular type of test mode, namely CBT for undergraduate chemistry courses.
1.2 STATEMENT OF RESEARCH PROBLEM
The management of the University of Ilorin has recently implemented the use of CBT to test students' knowledge. The advantages of using computer technology for educational assessment in a global sense have been recognised; these include lower administrative cost, time saving and less demand upon teachers, among others. Whilst recognising these system-level advantages, it is important to explore the challenges and their implications for the students being assessed (Johnson & Green, 2004), because assuming comparability between CBT and PPT without proper investigation within that particular testing context is inappropriate (Sorana-Daniela & Lorentz, 2007). Some test takers reported that it is more difficult to navigate back to rework problems. Some are resistant to the computerized testing process because they are accustomed to taking notes and circling questions and/or answers for later review (Sorana-Daniela & Lorentz, 2007). Others say that they read more quickly and more easily on paper than on a glaring computer screen.
The challenge for CBT designers and administrators is to construct CBTs that are fair and reliable and that produce valid test scores.
Furthermore, CBTs have to be designed to minimize examinees' frustration and to limit the sources of examinee anxiety. These additional test design steps are well worth taking because of the efficiency and measurement improvements they offer (John et al., 2002). CBT implementation should also be constructed to meet standard requirements such as those of the International Test Commission (ITC), which have been summarized under four issues: Technology, Quality, Control, and Security. It has also been stated that computerized administration of tests should normally provide test takers with at least the same degree of feedback and editorial control regarding their responses that they would experience in traditional test-taking formats.
1.3 OBJECTIVES OF THE STUDY
With the use of a new technology in town, the CBT, the researcher set out to identify the challenges this system is facing and to carry out a statistical evaluation of its effect on Nigerian students. Other objectives of this study are:
1. To carry out a statistical analysis of students' performance under CBT.
2. To ascertain the operational effectiveness of the system.
3. To introduce Computer-Assisted Learning (CAL) as a means of training students before their actual use of the CBT system.
1.4 RESEARCH QUESTIONS
The following five research questions were formulated to address the problems identified in this study:
1. What are the issues peculiar to the use of CBT among students?
2. What are the general constraints on the use of CBT for the assessment of students?
3. What are the effects of the test administration mode on students' performance, i.e., students' scores?
4. What is the relationship between prior computer experience and performance in computer-based testing?
5. What practices are helpful to improve the perception about CBT?
1.5 SIGNIFICANCE OF THE STUDY
The significance of this research work lies in the new technology or idea it introduces: the need for a CAL system to aid the learning and understanding of the CBT process. This study evaluates the challenges of the use of CBT and analyzes its statistical implications for students.
1.6 SCOPE OF THE STUDY
This research work uses the Joint Admissions and Matriculation Board (JAMB) as its case study. This study is also an evaluation research work that evaluates the challenges facing the use of CBT and its statistical implications. Anything else falls outside the scope of this study.
1.7 LIMITATION OF THE STUDY
In times such as these, when there is a financial crisis within the nation, finance has been the main source of limitation for this research work: the researcher was faced with financial constraints and could not successfully visit all the places needed for the course of this study. Access to the information needed for this study was also a problem.
1.8 DEFINITION OF TERMS
CBT: Computer Based Testing
CAL: Computer Assisted Learning
JAMB: Joint Admissions and Matriculation Board
COMPUTER: an electronic device capable of taking instructions, executing them and returning the results
IMPLICATION: The consequence that follows a misuse of a device
REFERENCES
American Psychological Association Committee on Professional Standards and Committee on Psychological Tests and Assessment (1986). Guidelines for computer-based tests and interpretations. Washington, DC.
Braun, H.I. & Wainer, H. (1988). Test Validity. New Jersey: Lawrence Erlbaum.
Bugbee, A.C. Jr. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of Research on Computing in Education, 28(3), 282-299.
Bunderson, C.V., Inouye, D.K., & Olsen, J.B. (1989). The four generations of computerized educational measurement in RL
Coniam, D. (1999). Subjects' reaction to computer-based tests. Journal of Educational Technology Systems, 27(3).
Darrell, L.B. (2003). The impact of computer-based testing on student attitudes and behaviour. The Technology Source.
Erle, L., Benjamin, O., Einar, W.S., & Raymond, S. (2006). Computer-based versus pen-and-paper testing: Students' perception. Ann Acad Med Singapore, 35, 599-603.
Fyfe, G., Meyer, J., Fyfe, S., Ziman, M., Sanders, K., & Hill, J. (n.d.). Self-evaluation of assessment performance can enhance students' perception of feedback on computer-generated tests.
Harmes, C.J. (1999). Computer-based testing: Toward the design and use of innovative items. University of Florida.
John, C.K., Cynthia, G.P., Judith, A.S., & Tim, D. (2002). Practical Considerations in Computer-Based Testing. New Jersey: Lawrence Erlbaum Associates.
Johnson, M. & Green, S. (2004). On-line assessment: The impact of mode on students' strategies, perceptions and behaviours. University of Cambridge.
McVay, R.B. (2002). An examination of computer anxiety related to achievement on paper-and-pencil and computer-based knowledge testing of United States Air Force technical training students. PhD Thesis, University of North Texas.
Mead, A.D. & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449-458.
Poggio, J., Glasnapp, D.R., Yang, X., & Poggio, J.A. (2005). A comparative evaluation of score results from computerized and paper-and-pencil mathematics testing in a large scale state assessment program. Journal of Technology, Learning, and Assessment, 3(6). Available from http://www.jtla.org
Powers, D.E. & O'Neill, K. (1993). Inexperienced and anxious computer users: Coping with a computer-administered test of academic skills. Education Assessment, 1(2), 153-173.
Roy, C. & Patricia, W. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
Sorana-Daniela, B. & Lorentz, J. (2007). Computer-based testing on physical chemistry topic: A case study. International Journal of Education and Development using Information and Communication Technology, 3(1), 94-95.