On ecologically valid assessments

At some point during the mini-conference on the future of assessment (held on April 11, 2011), Ray McDermott raised questions about the validity of the kind of tests that the Educational Testing Service and similar organizations design.  He told of the work he conducted in the late 1970s as part of Michael Cole’s Laboratory of Comparative Human Cognition.  At that time, McDermott, Cole, and others wondered about the relationship between tests and the settings about which the tests were supposed to say something.  As they showed (1979, 1998: Chapter 1), the relationship between, for example, a reading test and baking banana bread by reading a recipe is tenuous, at best.  In the setting of a cooking club, so much else happens (from confusing writing to interpersonal tensions) that the ability to read is the least of the problems the children have to deal with.  The generalizability of these observations across settings and populations is now well established through repeated observations.

What has been left open in this work is the question of what ecologically valid assessments would actually look like.

Soon after the conference, another participant, Katie Anderson-Levitt (U. of Michigan-Dearborn), suggested we look at Paradise and Rogoff’s recent paper about ongoing learning in families (2009).  In that paper, Paradise and Rogoff review much of the work done in the Cole tradition over the past 30 years, with a new twist that fits well with my own sense of what I call ‘education.’  In everyday life, at home, “learning” is not a simple automatic matter proceeding below deliberation or symbolic expression.  In everyday life, “teaching” (and assessing) is—probably—ubiquitous.

As I reflected on all this, I saw a route I have not yet quite explored, one that could lead to further research expanding on the Cole, Lave, etc., traditions.  Starting with an expansion of the point Paradise and Rogoff made, I suspect that the movement through publicized ignorance is accompanied by all sorts of speech acts, many of which fit in the paradigm of knowledge assessment.  Developing all this is also an expansion on Garfinkel, as I read him.

Garfinkel kept arguing that maintaining any order requires ongoing work, including the work of figuring out what is going on.  Conversation analysts have given abundant evidence that this is indeed correct.  More recently, Garfinkel wrote about ‘instructions’ as a necessary aspect of this work.  The paper ends with one of my favorite quotes about screwing around and getting instructed (2002: 257).  What I do not think Garfinkel noted, and what I know I never noted myself, is that the instruction moments either proceed from an earlier assessment, or themselves constitute an assessment.  This is also an implication of Gus Andrews’s recent dissertation (2010) on blog comments when these are assessed as being “wrong” in some way that is specified by a later comment (“this comment does not belong here,” “you should not write your social security number here,” etc.).  In an interactional sequence (conversation?), utterances of the type “Do X differently!” are probably essential mechanisms for maintaining order, constituting emerging orders, moving participants into new positions, etc.

I am quite sure that such ongoing assessment is ubiquitous and should probably be added as a function in Jakobson’s model of communication (1960 — though he might have classified it as an aspect of the metalingual function).  Much of the recent work on metapragmatics may also fit here.

In brief, and for our purposes, we could say that ethno-methodology is at the service not only of ethno-science (what is the world made of?) and ethno-politics (how do we maintain the order within which we are now caught?), but also of ethno-assessment. [or should we say that (ethno) Methodology is at the service of (ethno) Science, (ethno) Politics, and (ethno) Assessment?]

If this proves a useful direction for inquiry, it suggests that assessment is not an extra-ordinary task.  It also suggests how school assessment has drifted away from the ordinary [I am not sure that ‘drifted’ is the right word, but it will do for today].  The well-known school-based QAE model (Mehan 1979) is formally equivalent to what might become known as the SARS model (Statement, Assessment, Re-statement), except that the former starts with the assessor’s question while the latter starts with a seeker’s request that may then lead to an assessment (though this proposal may not have been presented as such).  In other words, the sequence starts with ignorance grounded in the here and now (“ecologically valid ignorance”?) and proceeds with statements of local knowledge that are themselves proposals for what it is that the seeker may plausibly not know (I am using the word ‘seeker’ rather than ‘learner’ since it will remain a question whether the subject whose ignorance is marked will learn anything from the encounter).  This sequence is what I would now say my earlier statements about “productive ignorance” were about.

The question for designers of future tests is something like: how might you produce assessments that are triggered by acknowledgments of ignorance, whether generated by the subject (“I would like to know about X”) or by a co-participant in the polity (“you really should learn more about X”)?  The challenge is to find the moment in the sequence of a life when the co-participant teachers will enter.  In everyday life it is a non-problem to the extent that co-participants or “consociates” have the built-in or self-generated (legitimate) authority to assess (as siblings may have).  When social distance increases, that is, when the network links between those who set what is to be assessed, what is to count as ignorance, and what should be done about it include many persons in many institutions, then the problem gets acute.  It may even be unsolvable unless we find ways to reposition the official assessors within the network so they are closer to the performance, in such a way that they can get a better sense, in real time, of the feedback that the seeker (learner) provides.

(More on what I am trying to formulate about network linkages later)


One thought on “On ecologically valid assessments”

  1. This blog entry’s reference to the notion that “assessment is ubiquitous…not an extraordinary task” is of particular interest to me. In counterpoint to this, I recently read an article in which Marilyn French was quoted as saying, “Only extraordinary education is concerned with learning, whereas most is concerned with achieving,” suggesting a clear distinction between learning and performance. In an attempt to connect the dots, so to speak, my thoughts turn to whether assessment is valid only when it is conducted both ubiquitously and under conditions that ensure social connectedness between the assessor and the assessed. As you point out, as social distance increases between the learner and his/her needs, and those who determine what counts as knowing and ignorance, the potential for exploiting “productive ignorance” to create learning is limited or foreclosed.

    I’m also intrigued by your suggestion that assessments be triggered by “acknowledgements of ignorance,” and wonder how this approach might work in the context of the assessment of writing skills (as an English teacher, this tends to be my focus). Lave and Wenger’s discussion about universal learning mechanisms being understood only in terms of acquisition and assimilation (52) seems particularly relevant to writing development, where knowing about writing (mechanics, as well as rhetoric) may not translate into being able to write (I wanted to add “effectively,” but hesitate on the basis of having to provide a definition). To get more to my point: are there, and if so, what are the, unintended consequences of standardized assessments (of anything, but especially) of writing? Is what is being learned (in this case, perhaps, “performance” writing with a focus on achievement) quite distinct from what we are intending to teach, that is, presumably “good,” “honest,” “quality” writing that reveals excitement/engagement in the topic, and promotes growth? If Lave and Wenger are correct that “learning involves the construction of identities” (53), it seems this would be particularly true in the uniquely intimate and personal art of writing (self-expression that seems more akin to painting a portrait than sewing a garment). In which case, the establishment of evaluative criteria should begin with, and consistently return to, the learner. But how is this accomplished in school settings replete with standardized achievement requirements, to say nothing of diverse student needs, understandings, and goals?
