- On anthropology, education, culture, and more … - http://varenne.tc.columbia.edu/blgs/hhv -

On the limits of human rationality when confronted with human practical intelligence

The programmers at Google (mostly human, I will grant) have a problem: how to make their robot (car) make eye contact with non-robotic drivers so the robot does not get paralyzed at four way stops.

Actually, some humans (particularly of the French kind, at apéritif) are sure all humans must have this problem since it is rationally impossible to determine who has priority when four cars approach a four way stop together.

Practically, of course, there is no problem: humans, in each case, make up a way to solve the “problem,” one four way stop at a time, using all their tools (eye contact, inching forward to assert right of way, withdrawal to avoid possible confrontation, etc.).

Anthropologically (in the broadest sense of finding out what humanity is all about), all this is about the tension between rationalism and pragmatism: do human beings act from rules or do they make it up as needed?

This is a tension that must have haunted Durkheim more than a century ago and led him to devote a full course to it (“Pragmatisme et sociologie.” Cours dispensé à La Sorbonne en 1913-1914 [1]).

As I understand it, Durkheim granted pragmatism what it said about the ongoing constitution of humanity and its local and temporary truths (culture), but returned to scientific rationalism as the ground for saying that, precisely, pragmatism (cultural anthropology, ethnomethodology, etc.) must be granted primacy when the goal is systematic understanding. Affirming that, on the basis of a century of research, human beings more likely “make it up” than follow rules learned earlier is itself an act of scientific rationalism. (Scientific rationalism being, by this very argument, a historical product of attempts to deal with new conditions, from the ‘0’ to the printing press to the … robot car!)

Where does that leave the Google programmers?

And how are we to talk about the many who, soon I suspect, will want to prevent error-prone, “irrational” if not criminal humans from driving now that rationality (in the guise of Google programmers) has triumphed?

The first question is a question about communication theory that it will be a lot of fun to ponder and discuss.  The robot car is also an ethnomethodological experiment for delving more deeply into the conduct of everyday practical life on the highways of life (hint to doctoral students: there are many dissertations here).  But first the programmers will have to refrain from blaming humans for not following the letter of the law…

Which leads to the second question and the probable development of new arbitrary forms enforced by new forms of arbitrary powers-that-be.  Among these:

. Insurance companies keen to lessen their losses (“bonuses” for people who let their cars drive);

. Advocacy groups for a safer world free from “bad” drivers (get ready for much moralizing);

. State agents reacting to the others and developing authoritative regulations for what is to count as bad (if not now illegal) driving;

. Lawyers, …………….

Along with all this, imagine the many forms of resistance.  Imagine what will happen when resistance gets institutionalized.  Imagine the resulting rules, regulations, and customs that transform what happened earlier and become, for a population, that which is the real they must now deal with (see, for example, the multiplication of responses to global warming across the globe)…  Negotiating the institutionalization of robots will not be a rational process, but one more akin to driving through a four way stop, and, for a few seconds, making a uniquely adequate and multiply arbitrary immortal social fact (culture).

Coda to my earlier post about non-robotic driving in Haiti: Dany Laferrière on his friend driving a new Jeep in Port-au-Prince [2] (1997: 171-72)


Laferrière, Dany. 1997. Pays sans chapeau. Montréal: Lanctôt Editeur.