Showing posts with label Formative assessment. Show all posts

Saturday, November 9, 2013

Some Issues with Norm-referenced Tests

Below is part of a post by my classmate Ian Kevin Magabilin, dated 5 November 2013, on norm-referenced tests. I find his work worth sharing because it highlights some problems with traditional norm-referenced assessment. To view Ian's full post, please click http://myportal.upou.edu.ph/mod/forum/discuss.php?d=72638

"Here are some of the issues with norm-referenced tests:

Tests can be biased. Some questions may favor one kind of student or another for reasons that have nothing to do with the subject area being tested. Non-school knowledge that is more commonly learned by middle or upper class children is often included in tests. To help make the bell curve, test makers usually eliminate questions that students with low overall scores might get right but those with high overall scores get wrong. Thus, most questions which favor minority groups are eliminated.

NRTs usually have to be completed in a time limit. Some students do not finish, even if they know the material. This can be particularly unfair to students whose first language is not English or who have learning disabilities. This "speededness" is one way test makers sort people out.

The items on the test are only a sample of the whole subject area. There are often thousands of questions that could be asked, but tests may have just a few dozen questions. A test score is therefore an estimate of how well the student would do if she could be asked all the possible questions.

All tests have "measurement error." No test is perfectly reliable. A score that appears as an absolute number -- say, Jamal's 63 -- really is an estimate. For example, Jamal's "true score" is probably between 56 and 70, but it could be even further off. Sometimes results are reported in "score bands," which show the range within which a test-taker's "true score" probably lies.

There are many other possible causes of measurement error. A student can be having a bad day. Test-taking conditions often are not the same from place to place (they are not adequately "standardized"). Different versions of the same test are in fact not quite exactly the same.

Any one test can only measure a limited part of a subject area or a limited range of important human abilities. A "reading" test may measure only some particular reading "skills," not a full range of the ability to understand and use texts. Multiple-choice math tests can measure skill in computation or solving routine problems, but they are not good for assessing whether students can reason mathematically and apply their knowledge to new, real-world problems.

Most NRTs focus too heavily on memorization and routine procedures. Multiple-choice and short-answer questions do not measure most knowledge that students need to do well in college, qualify for good jobs, or be active and informed citizens. Tests like these cannot show whether a student can write a research paper, use history to help understand current events, understand the impact of science on society, or debate important issues. They don't test problem-solving, decision-making, judgment, or social skills.

Tests often cause teachers to overemphasize memorization and de-emphasize thinking and application of knowledge. Since the tests are very limited, teaching to them narrows instruction and weakens curriculum. Making test score gains the definition of "improvement" often guarantees that schooling becomes test coaching. As a result, students are deprived of the quality education they deserve.

Norm-referenced tests also can lower academic expectations. NRTs support the idea that learning or intelligence fits a bell curve. If educators believe it, they are more likely to have low expectations of students who score below average."
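The measurement-error point in the quoted passage (Jamal's observed 63 with a "true score" probably between 56 and 70) follows from the standard error of measurement in classical test theory. Below is a minimal sketch; the standard deviation of 10 and reliability of 0.87 are hypothetical numbers chosen only to roughly reproduce the example, not figures from any actual test:

```python
import math

def score_band(observed, sd, reliability, z=1.96):
    """Range within which a test-taker's 'true score' probably lies,
    using the standard error of measurement (SEM) from classical
    test theory: SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    half_width = z * sem  # z = 1.96 gives roughly a 95% band
    return observed - half_width, observed + half_width

# Hypothetical: observed score 63, SD = 10, reliability 0.87
# yields SEM of about 3.6, so the band is roughly 56 to 70.
low, high = score_band(63, sd=10, reliability=0.87)
```

This is why a single reported number such as 63 is better read as the center of a band than as an exact measurement.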

 

References:

Suskie, L. (2003). What is "Good" Assessment? Retrieved from http://faculty.ccp.edu/dept/viewpoints/f03v4n1/suskie.html

Merrell, A. (n.d.). Traditional Assessment. Retrieved from http://audreymerrell.net/INTASC/INTASC8/Assessment/traditionalassessment_files/traditional.html

FairTest: The National Center for Fair and Open Testing. (2007). Retrieved from http://www.fairtest.org/facts/nratests.html



Wednesday, October 16, 2013

The Impossible Dream?

Assessment fulfills its very purpose when teachers or instructors can determine precisely what their learners already understand, identify how these students approach their learning, provide adequate opportunities to improve the learning process, and use the information gathered appropriately to shape the next instructional design and teaching approach, in accord with the learning goals and with the activities students find meaningful.

DepEd NAT

Crafting a formative assessment to be used across the entire Philippines for elementary, high school, and even tertiary students is not only overwhelming but arguably impossible. Gauging the performance and skills of every student against a uniform standard or set of criteria is difficult because such an assessment can be unfair and biased: the students, the infrastructure of their schools, the availability of instructional materials, and the qualifications of their teachers vary widely.

SouthCotabatoPupils

It is not wise to evaluate and compare the understanding of the "Laws of Motion" of Juan in Quezon City with that of Khalid in South Cotabato, when the former has all the equipment and materials for conducting experiments on the subject while the latter does not even have a physics book. Even if both students had the same instructional materials and equipment, their assessment results could not truly represent their learning if the quality of teaching and the time their teachers spend with them differ: Juan's teacher has a master's degree in Science or Physics and teaches only that subject full time, while Khalid's instructor minored in Science and teaches Physics once a week alongside other subjects.

PhilScienceHS

However, formulating a formative assessment on a national scale is still possible if the assessment criteria are varied in accord with the learners' backgrounds and achievement, and implemented only for specific students and schools with the qualifications previously identified. For example, an assessment meant to improve the mathematical ability of Grade 4 pupils in La Union can also be given to Grade 4 pupils in Iloilo if both groups have similar opportunities and levels of instruction. In the same manner, the English proficiency of Grade 8 students in Quezon City can be evaluated and compared with that of students in Manila because they have similar learning and teaching backgrounds. In essence, a specific formative assessment should be given only to students and schools with the same learning and teaching experiences, so that the information collected will be fair and comparable.

Assessment FOR Learning cartoon

Devising a national formative assessment is a positive challenge if education officials, from the national government down to the schools, are willing to participate in its planning, execution, monitoring, and revision. The greatest challenge, however, lies in their readiness and eagerness to produce specific assessment tasks and to act on the assessment's outcomes. Most programs of the Philippine Department of Education to improve basic education start and end with pilot studies: they are often short-term and short-lived, dying when funds run out. The very aims of these assessment programs are not well defined, and even when they are, corrective measures often reach only the recommendation stage.

Standard Test

To be successful, a national formative assessment should aim primarily to identify students' weaknesses and strengths in particular topics or subjects, and to determine how instructional strategies and learning activities can be revised and improved in accord with learning goals previously articulated and discussed with the different stakeholders.

TestingToTheTest cartoon

This assessment must never be used to evaluate teachers' performance and competence. If it is, it loses its very purpose: instead of serving students' learning, it may be used by teachers for their own benefit. Teachers could then be expected to resort to teaching to the test, covering only the lessons to be tested or even the answers to the exact questions of the assessment.

Images from

1) http://www.nscb.gov.ph/headlines/StatsSpeak/2009/071309_3.gif

2) http://www.mindanews.com/wp-content/plugins/dynpicwatermark/DynPicWaterMark_ImageViewer.php?path=2013/06/07andap02.jpg

3) http://mc.pshs.edu.ph/wp-content/uploads/2012/04/sub_phy.jpg

4) http://kbarnstable.files.wordpress.com/2009/10/eportfoliocartoon-346082.png

5) https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVsfoztllNHOFjX6sKT2Eg0WoX3u45BHAaKXWv0mZF9KiYGqVKVQgXl1mOu8_O9vREkWdgnY-PYTZNWnwcScufO1zIXdBKyir_6S9kDQq94eSk_4_0DV0RilqHeHmKIvYNkwIjuG9XAwUe/s1600/multiple+choice+cartoon.png

6) http://www.roanokecountyva.gov/images/pages/N584/test-cartoon005.gif
