Incoming links
from Agency Made Me Do It
  • I've been circling around the topic of agency for a few decades now. I wrote a dissertation on how metaphors of agency are baked into computers, programming languages, and the technical language engineers use to talk about them. (See Agency at the Media Lab).
from Vivarium Project
from animism
  • My dissertation had a chapter on animacy and its application to computing (see Agency at the Media Lab). The work was based on a set of animism-related ideas:
    • that animism is built into our everyday cognitive models via metaphor;
    • that these metaphors are implicitly built into the ideas, languages and tools we use to build computational systems;
    • that we ought to understand this better and start to use animist ideas explicitly.

Agency at the Media Lab

18 Jan 2021 01:35 - 11 Apr 2021 01:23

    • I glommed onto the topic of agency in graduate school, and wrote a dissertation that explored its relationship to computation. Though it garnered some praise, I've mostly thought of that work as a failure. It opened up some interesting questions, but didn't really provide much in the way of answers. It tried to do too many different things (which might indicate problems with my own agency).
    • Most notably, it failed to come up with any kind of novel computational theory of agency. Other people have tried that, but for some reason I wasn't sufficiently interested in that aspect of things – as if to capture agency in a mathematical formalism was to miss the point. The concept of agency was for me (perhaps unfortunately) endowed with certain spiritual qualities. It was too important and cosmic to be merely a term in some formal language.
    • Because this was the Media Lab and because I've always hovered somewhere between the user interface design and language design ways of approaching things, I ended up making a series of visual or semi-visual systems for building very simple models of agent-like behavior. Most of this work was done under the aegis of the Vivarium Project, an Apple-sponsored research program, directed by Alan Kay, which had the intent of inventing some new models for novice programming environments.
    • It wasn't hard to build artificial systems that demonstrated something that seemed agent-like. When I started off on this particular journey, Valentino Braitenberg's book Vehicles was an important inspiration. He'd done just that – constructed (on paper) a series of simple abstract machines that demonstrated primitive forms of agency (starting with simple reflex/phototaxis and moving on to machines with more complexity and internal state).
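Braitenberg's basic move can be sketched in a few lines: two light sensors drive two motors, and the wiring pattern alone determines whether the vehicle approaches or avoids a stimulus. This is my own minimal illustration of his Vehicle 2, not code from any of the systems described here:

```python
import math

def vehicle_step(x, y, heading, light, crossed=True, dt=0.1):
    """One update of a two-sensor, two-motor vehicle in the spirit of
    Braitenberg's Vehicle 2. Each sensor reads light intensity with
    inverse-square falloff; each motor runs at a speed proportional to
    one sensor's reading. Crossed wiring steers the vehicle toward the
    light ("aggression"); uncrossed wiring steers it away ("fear")."""
    lx, ly = light
    # Sensors sit slightly to the left and right of the vehicle's nose.
    left_angle, right_angle = heading + 0.5, heading - 0.5
    readings = []
    for a in (left_angle, right_angle):
        sx, sy = x + math.cos(a), y + math.sin(a)
        d2 = (sx - lx) ** 2 + (sy - ly) ** 2
        readings.append(1.0 / (1.0 + d2))
    left_sensor, right_sensor = readings
    if crossed:
        # Contralateral wiring: left sensor drives the right motor.
        left_motor, right_motor = right_sensor, left_sensor
    else:
        # Ipsilateral wiring: each sensor drives the motor on its side.
        left_motor, right_motor = left_sensor, right_sensor
    # Differential drive: a faster right motor turns the vehicle left.
    heading += (right_motor - left_motor) * 5.0 * dt
    speed = (left_motor + right_motor) / 2.0
    return (x + math.cos(heading) * speed * dt,
            y + math.sin(heading) * speed * dt,
            heading)
```

Iterating vehicle_step traces out the phototaxis Braitenberg starts from; adding internal state (say, a hunger variable modulating motor gain) moves toward his later, more complex vehicles.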
    • I started off trying to turn these ideas into visual programming languages, and built a series of prototypes:
      • Brainworks was basically a construction environment for Vehicles-level creatures.
      • Agar (built for my master's degree) had a somewhat more sophisticated modelling framework for behavioral rules, inspired mostly by Tinbergen's ethological theories.
      • LiveWorld was a system I built for my PhD dissertation, and it tried to do too many things:
        • a prototype-based visual programming environment combining ideas from Self and Boxer
        • an agent language building on the earlier work
        • assorted other ideas like agent-based debugging tools
      • Behave! went in a different direction. It was a radically simplified visual behavior-rule programming system designed for a museum exhibit (the Virtual Fishtank at the Boston Computer Museum). I believe it had some influence on Scratch, the hugely popular novice programming system that emerged from the Media Lab a bit later.
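The behavior-rule style these systems shared can be illustrated in a few lines. This is my own hedged sketch of the general idea (an ordered priority list of condition/action rules, where the first matching rule fires each tick), not the actual rule language of Agar or Behave!:

```python
def run_rules(rules, state):
    """Fire the first rule whose condition holds; return the new state."""
    for condition, action in rules:
        if condition(state):
            return action(state)
    return state  # no rule matched; state unchanged

# Hypothetical fish behavior, loosely Virtual Fishtank-flavored.
# Earlier rules take priority over later ones.
fish_rules = [
    (lambda s: s["predator_near"], lambda s: {**s, "doing": "flee"}),
    (lambda s: s["hunger"] > 5,    lambda s: {**s, "doing": "seek food"}),
    (lambda s: True,               lambda s: {**s, "doing": "wander"}),
]

state = run_rules(fish_rules, {"predator_near": False, "hunger": 8})
# state["doing"] is now "seek food"
```

The visual systems described above essentially let novices assemble structures like fish_rules by direct manipulation instead of text.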
    • This was all lots of fun, and the systems were successful as academic projects go. But it wasn't leading me to the Grand Insights I thought I should be having. The implicit vision behind these efforts was something that could scale up to something more like Marvin Minsky's Society of Mind, which was a mechanical model not just of animal behavior but of human thought. I don't think that ever happened, and while I might blame my own inadequacies it might also be that Minsky's theories were not very language-like. A good language like Lisp is built around basically a single idea, or maybe two. Minsky's theory was a suite of many dozens of ideas, each of which was at least in theory mechanizable, but they didn't necessarily slot together cleanly as concepts do in a pristine language design.
    • Or it could be that I just focused on the wrong aspect – I got hung up on the word "agent" and agency, and as you can see I'm still hung up three decades later. In The Emotion Machine, a sequel to Society of Mind, Minsky abandoned the word "agent" for the components of his mental theory, substituting "resource":
    • This book uses the term "resource" where my other book, The Society of Mind, used "agent". I made this change because too many readers assumed that an "agent" is a personlike thing (like a travel agent) that could operate independently, or cooperate with others in much the same way that people do.
    • That's perfectly understandable but makes the whole enterprise seem less interesting to me. "Resources" immediately calls up an implied user of resources, which is the very self we are trying to explain (or demolish). It's retreating from the radically parallel architecture of a society to a single-strand-of-control architecture, and so pushes aside the questions that most interest me.