LWMap/Agency: Introduction

08 Mar 2022 - 08 Mar 2025
    • Each volume of A Map That Reflects the Territory has a short introduction to its theme. I'm going to dissect a few quotes from the introduction to Agency, because they seem to compactly and precisely embody my issues with Rationalism in general:
    • There's something very strange about being human. We do not have a single simple goal, but rather a weird amalgam of desires and wants that evolution hacked together for some unrelated purpose.
    • First, what is so strange about not having "a single simple goal"? What is this implied but unnamed sort of intelligence that humans are strange in comparison to?
    • Second, that word "unrelated" is just hanging there without naming what it is that biology is unrelated to. Presumably some non-biological goal, but whose? And if it's not created by biology, where does it come from? Is this the same as the single simple goal that we don't have?
    • We would like to live with fire in our hearts, and rise to the virtues of honesty, courage, and integrity. But it is unclear how they can be grounded in the cold-seeming math of decision-making.
    • Who says they have to? It's only because of the axiomatic allegiance to a sort of rational decision-making model of the mind that this problem arises.
    • I don't mean to attack the author's character, of which I know nothing, but there is something almost craven going on here – like saying, oh sure, we could be courageous and have integrity, but damn it, that would violate rationality and we can't have that. Rationality seems to be serving as an excuse for cowardice. That's not quite what the author is saying, but he is saying that rationality is not a rich enough ideology to encompass these important values.
    • These passages carry a large weight of theoretical presumption: There's an implicit dichotomy between humans and some imaginary inhuman intelligence whose nature better reflects the rationalist theory of mind. This construct is single-minded and pitiless (cold), with purposes unrelated to those of mere life, and quite likely hostile to humanity and human values. Humans are a poor approximation to this powerful agent; they should strive to be more like it (if only in self-defense), but are hampered by their biological nature.
    • While this imaginary intelligence is only implied here, the rationalists have a worked-out explicit image for it: the paperclip maximizer. Despite this model exerting a powerful gravitational pull on rationalist thought, it is wholly imaginary. There are no pitiless maximizer engines, unless you count capitalism.
    • Rationalists know this, of course, but they are convinced that their mathematical models are so powerful (for a somewhat indeterminate definition of power) that they will inevitably be realized in computational systems with their own agency. Their mission is to guide this process so that the goals of this immensely powerful agent will be compatible with human goals (this is the Alignment problem).
    • There's a whole lot of this that I disagree with (see Rationalism), but here I just want to point out how it leads to a distorted and arguably harmful view of human agency as somehow deficient because it is not pitiless and single-minded.
    • This makes Rationalists sound like a bunch of sociopathic humorless gradgrinds, which couldn't be further from the case. They may idealize maximization, but being real people they in fact do what people do – they seek to construct human values for themselves.
    • My intuition is that Rationalism is an inadequate platform for doing this, but I do have to admire the sincere efforts on display here.