On hackathons, machines, and flamingos

Recently, Audrey Le successfully defended a most interesting dissertation about “hackathons.” Like me a while ago, you may have no idea what those might be… Well, they are events at which (very much mostly) young (mostly) men play/work over a weekend at developing some “thing” (app, process, and who knows what else) that involves some computer programming (or can be analogized to computer design). Until Le started teaching me about them, I had never heard of hackathons, just as I had never heard of DoItYourself biology labs, venture capitalists, equine therapies, video badge games, and so much other wonder-inspiring stuff that first appeared in the late 20th century. There is indeed much that is “new” here for anthropologists looking for the odd human beings they thought they could only encounter up the Amazon or the Congo. An anthropologist just has to go down the corridors of Columbia (Harvard, MIT, etc.) to meet never-yet-imagined “others.”

On the other hand, as Boas or Lévi-Strauss told us, what one actually finds in the jungles of Columbia (University) is … humans being themselves and, altogether, not surprising. Much of Le’s dissertation is about the actual interactions during which ‘hacks’ are produced (which are not the same moments as those when ‘hackathons’ are produced). Anyone who has read Charles Goodwin (1995, 1996) and others on the moment-to-moment production of science will recognize the ongoing, difficult efforts to make this do that. Everywhere we see the people working off the “etc.” principle (“you know what I mean”), using deictics in their speech and bodies, giving meta-instructions when noticing someone screwing around, etc. All that work sometimes produces a “thing” but, in any event, leads to a concluding statement. Hackathons are also occasions for jokes of the kind Harvey Sacks wrote about.

But there is something else to notice. Hackers, and the Large Multinational Corporations who fund them, are trying to produce … machines, using tools like computer languages and “team work” that are very much not machines, even when the people happen to treat them as if they were machines. Computer languages (C++, JavaScript, Python, etc.) are not so different from “natural” languages in that they are the product of cultural arbitraries imposed by arbitrary processes. To say hello to the world, one says Hello world in English. In computerese, one might say, in C++, std::cout << "Hello World";. One might say, in Python, print("Hello World"). Whether one uses English, C++, or Python depends on who has power and authority in the setting. But there is a major difference between a natural language and a human-made (machine) language. In English, when writing, one can put the phrase as Hello World! or Hello, World? or even perhaps as Hll Wrld. No such play is allowed in computer languages, where one missing comma can make all the difference between communication and catatonic silence.
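The contrast can be made concrete. A minimal sketch (my own illustration, not from Le's dissertation): Python, like any programming language, will refuse to parse a greeting that is off by a single character, where an English reader would shrug and read on. The strings and the helper function below are invented for the demonstration.

```python
# The post's point: natural languages tolerate "Hll Wrld";
# machine languages answer the smallest slip with silence.
valid = 'print("Hello World")'   # well-formed Python
broken = 'print("Hello World"'   # one missing parenthesis

def accepts(source: str) -> bool:
    """Return True if Python will even parse this source."""
    try:
        compile(source, "<string>", "exec")
        return True
    except SyntaxError:
        return False

print(accepts(valid))   # True  -- communication
print(accepts(broken))  # False -- catatonic silence
```

The machine does not negotiate: either the human has made themselves fit its grammar, or nothing happens at all.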

I got to ponder all this as I was reading Le’s dissertation at the same time as I was working on the concluding chapter to When is Education? and framing it in terms of Bateson’s musings about the wonderful croquet game imagined by a 19th-century mathematician to amuse little girls (and very many adults). Bateson summarized the problem as follows:

F: The point is that the man who wrote Alice was thinking about the same things that we are. And he amused himself with little Alice by imagining a game of croquet that would be all muddle, just absolute muddle. So he said they should use flamingos as mallets because the flamingos would bend their necks so the player wouldn’t know even whether his mallet would hit the ball or how it would hit the ball.

D: Did everything have to be alive so as to make a complete muddle?
F: No—he could have made it a muddle by . . . no, I suppose you’re right. … It’s curious but you’re right. Because if he’d muddled things any other way, the players could have learned how to deal with the muddling details. I mean, suppose the croquet lawn was bumpy, or the balls were a funny shape, or the heads of the mallets just wobbly instead of being alive, then the people could still learn and the game would only be more difficult—it wouldn’t be impossible. But once you bring live things into it, it becomes impossible.

And then Bateson summarized the fundamental problem for all social sciences, particularly when confronting arbitrariness, learning, working out what to do next, education:

Something about living things and the difference between them and the things that are not alive—machines, stones, so on. Horses don’t fit in a world of automobiles. And that’s part of the same point. They’re unpredictable, like flamingos in the game of croquet.
D: What about people, Daddy?
F: What about them?
D: Well, they’re alive. Do they fit? I mean on the streets?
F: No, I suppose they don’t really fit—or only by working pretty hard to protect themselves and make themselves fit. Yes, they have to make themselves predictable, because otherwise the machines get angry and kill them.
(Bateson [1953] 1972:40-41)

This is very much the problem for the hacking teams in Le’s dissertation. Consider that one of the teams was trying to control flamingos (doctors) as they attempted to keep track of hedgehogs (patients), even as they themselves had to play a game (hackathon) set up by some Queen (Large Multinational Corporation) where the stakes may not have involved one’s head, but could be almost as high, since some of the players imagined they might join the Queen’s court, for some fun and, perhaps, much profit. In Bateson’s terms, the hackers were working hard to make themselves fit the machines they were using so that they could make a machine to which others would have to make themselves fit. After all, machines can kill unwary humans.

As a journalist for the New York Times noted: “As I read her statement, my eyes lingered over one line in particular: ‘We never intended or anticipated this functionality being used this way — and that is on us,’ Ms. Sandberg wrote” (Kevin Roose, NYT Sept. 21, 2017). I am not sure that it is “on them,” the designers of machines, who will never have the power of preventing human beings … from being alive!


Bateson, Gregory   [1953] 1972   “Metalogue: Why do things have outlines?” In Steps to an ecology of mind. New York: Ballantine Books. pp. 37-42.

Goodwin, Charles   1995   “Professional vision.” American Anthropologist 96:606-633.

Goodwin, Charles   1996   “Seeing as a situated activity: Formulating planes.” In Cognition and communication at work. Edited by Y. Engestrom and D. Middleton. New York: Cambridge University Press. pp. 61-95.
