As a follow-up to Mary Black’s post about assessments, we’ve asked Wendy Hart, a friend of IAHE Action, to share how they may be rigged.
We all know that polls can be skewed and that ‘what everybody knows’ may not be so. Similarly, assessments and assessment data can be gathered, used, and presented in various ways to feed an agenda. Just because a child is said to be proficient on a state assessment doesn’t mean he or she actually is ‘proficient’ in the way parents want him or her to be.
When I was in school, my teachers would give us tests to help figure out how much of what they were teaching we had actually learned. Then the state stepped in and started giving assessments to make sure teachers were teaching what the state wanted them to teach. And now? We’re told the assessments are great, but we are simply supposed to trust. We can’t see the assessment questions. The algorithms (the scoring rules) that determine which questions come next, or whether you get a higher or lower score, are kept secret. The State Boards of Education, or the assessment vendors themselves, can move and change the ‘proficiency’ levels at will.
We take it on faith that when a student passes a math assessment, it means the student is proficient. Is it possible to rig an assessment? Not only is it possible, it is being done all the time. I have four examples of how assessments have been manipulated to produce results different from what most people expect. This is being done without oversight, without insight into what is occurring, and certainly without permission from parents.
The first example is assessing not just whether a student knows the material, but whether the student does the problem in a particular way. Ask yourself: does this create a disadvantage for a child who knows the math facts but hasn’t been shown that particular method?
Below is a Common Core Math Standard from First Grade:
Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 – 4 = 13 – 3 – 1 = 10 – 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 – 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).
This question doesn’t just assess whether a student knows how to do an addition word problem, but it assesses whether a student has been trained on the Making Ten Strategy as outlined in the standard. Could a student solve 8+6 without knowing the Making Ten Strategy? Yes, of course. Does using the Making Ten Strategy indicate critical thinking? Or does it simply indicate a student has been instructed in this strategy? Would you be able to succeed as a mathematician without learning this Making Ten Strategy in First Grade? Have you successfully used addition in your life without thinking about the Making Ten Strategy?
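To make the strategy itself concrete, here is a short sketch of the Making Ten steps from the standard’s own example (8 + 6 = 8 + 2 + 4 = 10 + 4 = 14). This is only an illustration of the strategy; it is not taken from any actual assessment:

```python
def make_ten_sum(a, b):
    """Add two numbers using the 'Making Ten' strategy:
    move just enough from b over to a to complete a ten,
    then add on whatever is left of b.
    Assumes a is at most 10 and b is big enough to fill the gap."""
    to_ten = 10 - a        # how much a needs to reach 10 (8 needs 2)
    leftover = b - to_ten  # what remains of b after completing the ten (6 - 2 = 4)
    return 10 + leftover   # 10 + 4 = 14

print(make_ten_sum(8, 6))  # 14, matching 8 + 2 + 4 = 10 + 4 = 14
```

The point the steps make plain: the strategy is purely mechanical. A student drilled in it reproduces the decomposition, while a student who simply knows that 8 + 6 = 14 arrives at the same answer without it. An assessment question built around the decomposition measures exposure to the strategy, not the sum.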
Many parent complaints about Common Core Math come from having to show the various methods for getting the answer or having to explain why an answer is correct.
Parent: “When I was in school, we did it this way.”
Child: “I have to do it this other way or it will be marked wrong.”
One mother asked her child’s teacher if he could simply do the standard algorithm on all his math homework because the multiple strategies were causing him stress. The teacher said if he didn’t learn the strategies, he wouldn’t do well on the state assessment. Once the mother indicated her child would not be taking the assessment, the teacher readily agreed to give credit for just the standard algorithms. The reason for the multiple methods? To do well on the assessment.
In a 2011 review of the Common Core SBAC test (then under development), Dr. Stephen Wilson of Johns Hopkins University writes, “It appears that the assessments will focus on communication skills and Mathematical Practices over content knowledge.”
Furthermore, “Mathematical Practices, or what was usually called ‘process’ standards in most states, do little more than describe how someone pretty good at mathematics seems to approach mathematics problems. As stand-alone standards, they are neither teachable nor testable. Mathematics is about solving problems, and anyone who can solve a complex multi-step problem using mathematics automatically demonstrates their skill with the Mathematical Practices, (whether they can communicate well or not).”
In short, we see Dr. Wilson’s concerns demonstrated in the above example: the process of getting the answer is of greater importance than the actual mathematical abilities most people think the assessment should be assessing.
A second example comes from Utah’s SAGE (end-of-year) sample assessment for Third Grade. This question is supposed to assess a deeper understanding of division than simply asking whether a child knows the answer to 12 ÷ 4. Unfortunately, in making the problem more convoluted, the test writers produced a question that can be solved without knowing anything more than how to count and how to write a division problem. Division facts themselves are not necessary.
There are lots of kids who can divide things equally by putting them in different boxes without knowing 12 ÷ 4 = 3. Supposedly, by having the student drag the stars and drag the numbers, the question assesses higher-order thinking. But what it really assesses is the child’s familiarity with the software interface and the format of the problem, and whether the child can count and relate counting to division. The child does not have to know 12 ÷ 4 = 3.
Would a child who knows her division facts be able to do this problem anyway? Most likely. However, it is also true this question doesn’t distinguish the child who does know her math facts from the one who does not.
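The “divide by dealing” shortcut described above can be demonstrated directly. The sketch below (an illustration, not the actual SAGE question) distributes stars into boxes one at a time, and the answer to 12 ÷ 4 simply emerges from counting:

```python
def share_equally(items, groups):
    """Split items among groups by dealing them out one at a time,
    like dealing cards -- pure counting, no division facts needed."""
    boxes = [[] for _ in range(groups)]
    for i in range(items):
        boxes[i % groups].append("*")  # drop the next star into the next box
    return [len(box) for box in boxes]

print(share_equally(12, 4))  # [3, 3, 3, 3] -- the quotient appears without ever computing 12 / 4
```

Nothing in this procedure requires knowing that 12 ÷ 4 = 3, which is exactly why a question built this way cannot distinguish a child who knows her division facts from one who can only count.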
A third example has to do with reading comprehension. It dates back to the 1980s but illustrates that what is on an assessment, and how it is asked, can be used to manipulate and ‘direct’ a student’s thought processes. I quote Dr. Peg Luksik, who worked for Pennsylvania’s Department of Education. From her video:
‘A sample question said: “There’s a group called the Midnight Marauders and they went out at night and did vandalism. I (the child) would join the group IF…”
“…my best friend was in the group.”
“…my mother wouldn’t find out.”
There was no place to say they would not join the group. They had to say they would join the group.’
Dr. Luksik states that while this was listed as a citizenship assessment, the internal documents stated, “We’re not testing objective knowledge. We are testing and scoring for the child’s threshold for behavior change without protest.”
Additionally, Dr. Luksik discusses another state’s Reading Assessment question: “If you found a wallet with money in it, would you take it?”
She asked, ‘Do you read better if you say “yes”? Or do you read better if you say “no”? Or were they assessing a child’s honesty on a state assessment with their name on it…?’
Clearly, these are examples of assessment questions that were not assessing either citizenship or reading as you and I would define them.
And finally, before a single Utah student took the state’s SAGE assessment in 2014, the head of state assessments warned local school board members that student test scores were going to drop by 10 or 20 points. He also stated there was no way to correlate the previous test results with the SAGE results. So how did he know scores would drop? Because the target proficiency rate was already known: Utah was looking for a proficiency rate in the 40s. The proficiency cut scores were set after the first round of testing, and the scoring was then adjusted to make sure the result fell within that 40% range*. So, in one year, did Utah kids lose 20 points of knowledge? Or does it simply mean the Powers That Be decided only about 40% of the kids got to be labeled ‘proficient,’ regardless of what they actually knew?
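The mechanics of setting a cut score after the results are in are easy to demonstrate. In the sketch below, the scores are invented (random numbers, not actual SAGE data), the target rate is chosen first, and the cut score is then placed at whatever percentile yields that rate. By construction, about 40% of students come out ‘proficient’ no matter what any of them actually knew:

```python
import random

random.seed(1)
# Hypothetical scale scores for 1,000 students -- invented data,
# not actual SAGE results.
scores = [random.gauss(200, 25) for _ in range(1000)]

# Step 1: decide the target proficiency rate in advance.
target_rate = 0.40

# Step 2: AFTER the scores are in, set the cut score at the
# percentile that produces exactly that rate.
cut = sorted(scores)[int(len(scores) * (1 - target_rate))]

proficient = sum(s >= cut for s in scores) / len(scores)
print(f"cut score: {cut:.1f}, proficient: {proficient:.0%}")  # about 40% proficient
```

Run it with any distribution of scores you like: the proficiency rate stays pinned near 40%, because the cut chases the scores rather than the scores clearing a fixed bar.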
The only sure way of knowing that an assessment truly measures academic content, and grades it appropriately, is transparency: open assessment questions, open assessment methodology, and independent verification procedures.
Instead of wondering how kids are doing on state assessments, and whether a school is “good” based on the assessment scores, we need to be asking what these assessments are supposed to be measuring and how we know they really measure what they claim.
*Alpine School Board Study Session Audio September 23, 2014, Additional Media->Study Session @ 45 minutes. http://board.alpineschools.org/2014/09/18/september-23-2014-board-meeting/
Wendy Hart is the mother of three children. She and her husband Scott have lived in Highland, UT for 17 years. She was raised in Cupertino, CA, and moved to Utah to pursue her B.S. in Mathematics from Brigham Young University. She has worked as a programmer and manager in several high-tech companies in Utah, and owns her own database migration company. Wendy is honored to serve the citizens of Highland, Alpine, and Cedar Hills, UT as a member of the Alpine School District Board of Education.