When I was in primary school I remember (on more than one occasion) being inundated with information that turned out to be completely irrelevant to the task at hand. My teacher was creating an opportunity for the class to engage in critical analysis and debate over what was relevant to our learning. Many of us simply took what we were given and tried to make the most of it, but some figured out what was going on. It was an effective way to help us see that not all the information we receive is to be believed, and that we need to think for ourselves in order to prioritise and seek relevance. I believe that schools sometimes fall into the same trap that my peers and I did all those years ago. The current trend of results-by-testing is shaping the world of education, and the information being delivered to schools, in many cases, lacks relevance.
PISA, ISA, TIMSS and other similar assessments are widely considered the best 'progressive' measures of 'how a school is doing'. In the absence of alternative measures, schools bend their philosophies to accommodate these sorts of tests. The most often argued strength of these assessments is that they bring an element of comparison between schools: they can be used to see how your school matches up against world or regional trends. Certainly in my experience, through working in international IB schools around the world, it is primary parents who find this information most interesting and useful, as it helps them keep track of whether their children will be able to assimilate cognitively into their home environment should they move back. Other suggested benefits range from teacher accountability to predicted future success and student motivation.
However, is this accurate?
Perhaps. It is fairly easy to construct thoughtful discourse, based on current research, to counter the arguments about motivation, feedback and the effects of autonomy on productive output. Instead, I would like to discuss the comparison argument. It is one I hear more and more frequently, as if it serves as a justification for this sort of assessment practice to take place even when it does not align with a school's goals or strategic vision.
Governments in many countries use the sort of data that PISA et al. provide as a basis for reform. If their curricula are designed around these models of statistical analysis, then there is every chance that the best way to elicit data from their students is through subject-specific standardised achievement tests. Some national curricula fit this model. Some used to and are changing. Some are quite the opposite. Yet an overwhelming majority, regardless of their policies on education, use these tests to track their progress against each other. They are also used, as mentioned before, by many international schools (which are by and large free of government regulations and restrictions) to see how they stack up against each other.
The basis of the PYP is a theoretical and experiential belief in qualitative research. The early initiators of the ISCP (which later became the PYP) looked to theorists such as Bruner, Gardner, Vygotsky and Piaget: those who aligned with their perspectives, experiences and beliefs as educators. For schools that follow a true PYP approach, these tests are foreign. The time spent preparing for them is wasted, and the data that comes back can easily be misconstrued. Some schools look at the setup: it is criterion-referenced (heaven forbid that a school use norm-referenced tests); it will provide feedback on our students' performance; it helps us validate our curriculum. Tick, tick, tick, let's do it. If we look closer, however, we see that there are not as many common threads as first thought:
(1) Criterion-referenced tests are usually written for specific grades, whereas the PYP supports the notion of non-linear phases of learning. The method is not aligned with beliefs about learning, and there is a clear gap in the planning-teaching-assessing cycle. *Note:* I accept that there are PYP schools that organise their curricula into grade-level expectations.
(2) These sorts of large-scale assessments are graded externally, and it can take a long time between students sitting the test and the results being returned. There is little value in receiving feedback on a performance completed three months prior.
(3) Assessment beliefs in the PYP are built around the notion that skills and knowledge learned in context lead to the construction of conceptual understandings, and it is these that teachers evaluate. There is little hope of a standardised assessment offering any sort of authenticity or context when it is administered in many different places around the world. PYP teachers construct contextual scenarios for their assessments, based in real-life problems and issues, yet they then receive data from cookie-cutter assessments given completely out of context and, in some cases, that judge only one's ability to memorise rather than to think critically. Then they are expected to respond to it.
(4) Many of these tests collect information on Language, Mathematics and Science, and sometimes Social Studies, but the first three are almost always there. These are valuable disciplines, but there are many others that also offer a great deal of opportunity for cognitive stimulation. Are those not important? The PYP maintains a very strong stance on the importance of a holistic education. By subscribing to these assessments, a school is valuing certain types of skills, knowledge and understanding as more important than others.
Research that quotes data from these sorts of tests should not be used to make decisions about the future directions of these programmes. I'm not anti-data. If we are able to collect relevant and accurate data about the successes of other schools and/or countries, in ways that support our beliefs about teaching, learning and assessment, and it can be made available so that others can learn from it, then we should. My questioning lies in our ability to measure achievement qualitatively, rather than in the current obsession with quantitative measures. Why don't we collect and use data based on the types of assessment practices that we believe in?