AMMDI is an open-notebook hypertext writing experiment, authored by Mike Travers aka mtraven. It's a work in progress and some parts are more polished than others. Comments welcome!
That's why the concept of "alignment" sounds so weaksauce to me. It's like the protagonist of a Lovecraft story encountering Yog-Sothoth and saying "hm, how can I work with this guy?"
This neatly encapsulates a central point of late-Landism: that intelligence (artificial or otherwise) is a sort of cosmic force, wholly independent of and oblivious to human needs. Rationalists share this view but think they have some ability to contain and control this force with their feeble alignment incantations; Land is laughing at them.
A buzzword from recent AI discourse that refers to the practice or goal of making an artificial agent's goals compatible with human ones. This is not an obviously stupid idea. I happen to think that the way rationalism and AI think about goals is kind of stupid (see orthogonality thesis), and as a consequence the efforts at alignment seem mostly misguided to me.