On anthropology, education, culture, and more … - http://varenne.tc.columbia.edu/blgs/hhv

On ecologically valid assessments

At some point during the mini-conference on the future of assessment [1] (held on April 11, 2011), Ray McDermott raised questions about the validity of the kinds of tests that the Educational Testing Service and similar organizations design.  He told of the work he conducted in the late 1970s as part of Michael Cole [2]’s Laboratory of Comparative Human Cognition [3].  At the time, McDermott, Cole, and others wondered about the relationship between tests and the settings about which the tests were supposed to say something.  As they showed (1979 [4], 1998 [5]: Chapter 1), the relationship between, for example, a reading test and baking banana bread by reading a recipe is tenuous, at best.  In the setting of a cooking club, so much else happens (from confused writing to interpersonal tensions) that the ability to read is the least of the problems the children have to deal with.  The generalizability of these observations across settings and populations is now well established through repeated observations.

What has been left open in this work is the question of finding out what ecologically valid assessments would actually look like.

Soon after the conference, another participant, Katie Anderson-Levitt (U. of Michigan-Dearborn), suggested we look at Paradise and Rogoff’s recent paper about ongoing learning in families (2009 [6]).  In that paper, Paradise and Rogoff revisit the work done in the Cole tradition over the past 30 years, adding a new twist that fits well with my own sense of what I call ‘education.’  In everyday life, at home, “learning” is not a simple, automatic matter proceeding below deliberation or symbolic expression.  In everyday life, “teaching” (and assessing) is, probably, ubiquitous.

As I reflected on all this, I saw a route I have not yet quite explored, one that could lead to further research expanding on the Cole, Lave, etc., traditions.  Starting with an expansion of the point Paradise and Rogoff made, I suspect that the movement through publicized ignorance is accompanied by all sorts of speech acts, many of which fit the paradigm of knowledge assessment.  Developing all this would also expand on Garfinkel, as I take him.

Garfinkel kept arguing that maintaining any order requires ongoing work, including the work of figuring out what is going on.  Conversation analysts have given abundant evidence that this is indeed correct.  More recently, Garfinkel wrote about ‘instructions’ as a necessary aspect of this work.  That discussion ends with one of my favorite quotes about screwing around and getting instructed (2002 [7]: 257).  What I do not think Garfinkel noted, and what I know I never noted myself, is that instruction moments either proceed from an earlier assessment or themselves constitute an assessment.  This is also an implication of Gus Andrews’s recent dissertation (2010 [8]) on blog comments, when these are assessed as being “wrong” in some way that is specified by a later comment (“this comment does not belong here,” “you should not write your social security number here,” etc.).  In an interactional sequence (conversation?), utterances of the type “Do X differently!” are probably essential mechanisms for maintaining order, constituting emerging orders, moving participants into new positions, etc.

I am quite sure that such ongoing assessment is ubiquitous and should probably be added as a function in Jakobson’s model of communication (1960 — though he might have classified it as an aspect of the metalingual function).  Much of the recent work on metapragmatics may also fit here.

In brief, and for our purposes, we could say that ethno-methodology is at the service of ethno-science (what is the world made of?) and of ethno-politics (how do we maintain the order within which we are now caught?); it is also at the service of ethno-assessment. [Or should we say that (ethno) Methodology is at the service of (ethno) Science, (ethno) Politics, and (ethno) Assessment?]

If this proves a useful direction for inquiry, it suggests that assessment is not an extra-ordinary task.  It also suggests how school assessment has drifted away from the ordinary [I am not sure that ‘drifted’ is the right word, but it will do for today].  The well-known school-based QAE model (Mehan [9] 1979 [10]) is formally equivalent to what might get known as the SARS model (Statement, Assessment, Re-statement), except that the former starts with the assessor’s question while the latter starts with a seeker’s request that may then lead to an assessment (though this proposal may not have been presented as such).  In other words, the sequence starts with ignorance grounded in the here and now (“ecologically valid ignorance”?) and proceeds with statements of local knowledge that are themselves proposals for what it is that the seeker may plausibly not know (I am using the word ‘seeker’ rather than ‘learner’ since it will remain a question whether the subject whose ignorance is marked will learn anything from the encounter).  This sequence is what I would now say my earlier statements about “productive ignorance” were about.

The question to designers of future tests is something like: how might you produce assessments that are triggered by acknowledgments of ignorance, whether generated by the subject (“I would like to know about X”) or by a co-participant in the polity (“you really should learn more about X”)?  The challenge is to find the moment in the sequence of a life when the co-participant teachers will enter.  In everyday life it is a non-problem to the extent that co-participants or “consociates” have the built-in or self-generated (legitimate) authority to assess (as siblings may have).  When social distance increases, that is, when the network links between those who set what is to be assessed, what is to count as ignorance, and what should be done about it include many persons in many institutions, then the problem becomes acute.  It may even be unsolvable unless we find ways to reposition the official assessors within the network so they are closer to the performance, in such a way that they can get a better sense, in real time, of the feedback that the seeker (learner) provides.

(More on what I am trying to formulate about network linkages later)