Marvin Minsky/vs Situated Action

02 Feb 2023 - 05 Nov 2024
    • Marvin kind of hated the trend towards embodiment and situated action that was on the rise at the AI lab in the 80s. He didn't think building robots was the way to AI; he thought it was a distraction. This was a bit curious since he did robotics in his early days, including a famous robotic arm. But now he was focused on common sense and the Society of Mind, and embodiment seemed to him beside the point. And he very much did not like philosophers, and the emphasis on Heidegger probably triggered a lot of hostility (with roots dating back way before my time, to his interactions with Hubert Dreyfus).
    • I was presumptuous enough to think he was wrong about this. It seemed to me that embodiment and the Society of Mind went together very well, very naturally in fact. My own work tried to bridge this gap by making agent systems that were tightly coupled to a virtual environment and embodied behavior (see Agency at the Media Lab). I thought the two camps should unite against their common enemy, the logic-and-rationality view of mind. Both were alternatives to that desiccated and boring view; both shared a certain appreciation for the actual machinery of thought, the energetics of it. I at least was interested in them both and was looking for synergy (sorry to use that term, but it is actually what I mean).
    • At one point, Pattie Maes (Media Lab faculty and a former student of Brooks) and I convened a semi-formal meeting to try to convince Marvin not to be so hostile, and that his work and this new strain should not be enemies. A few other people participated.
    • It didn't work. Marvin left as grumpy on these topics as when he started, and my attempt at intellectual diplomacy was a failure. Unfortunately no records were kept that I know of; I can't even recall who else was there, or what year it was – probably the mid 90s.
    • Oops, he really hated situated action even more than I thought. This is from an interview in 2001: Hal's Legacy
      • To give you one idea of some of the dumb things that have diverted the field, you need only consider situated action theory. This is an incredibly stupid idea that somehow swept the world of AI. The basic notion in situated action theory is that you don't use internal representations of knowledge. (But I feel they're crucial for any intelligent system.) Each moment the system—typically a robot—acts as if it had just been born and bases its decisions almost solely on information taken from the world—not on structure or knowledge inside. For instance, if you ask a simple robot to find a soda can, it gets all its information from the world via optical scanners and moves around. But it can get stuck cycling around the can, if things go wrong. It is true that 95 percent or even 98 percent of the time it can make progress, but for other times you're stuck. Situated action theory never gives the system any clues such as how to switch to a new representation.
      • When the first such systems failed, the AI community missed the important implication; that situated action theory was doomed to fail. I and my colleagues realized this in 1970, but most of the rest of the community was too preoccupied with building real robots to realize this point.