To give you one idea of some of the dumb things that have diverted the field, you need only consider situated action theory, an incredibly stupid idea that somehow swept the world of AI. The basic notion in situated action theory is that you don't use internal representations of knowledge, which I feel are crucial for any intelligent system. At each moment the system, typically a robot, acts as if it had just been born: it bases its decisions almost entirely on information taken from the world, not on structure or knowledge inside itself. For instance, if you ask such a simple robot to find a soda can, it gets all its information from the world through optical scanners as it moves around. Most of the time that works, but when something goes wrong it can get stuck circling the can. It may make progress 95 or even 98 percent of the time, but the rest of the time it is simply stuck, and situated action theory never gives the system any clue about what to do then, such as how to switch to a new representation.
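To make the failure mode concrete, here is a toy sketch in Python. It is not any actual system from that era; the grid world, the wall layout, and the visited-set memory are assumptions invented purely for illustration. The point is only that a memoryless reactive policy, which maps the current percept straight to an action, can repeat itself forever, while even a tiny internal representation (here, a set of visited cells) is enough to break the loop.

```python
# Hypothetical illustration only: a purely reactive controller versus one
# with a minimal internal representation. The grid, sensor model, and wall
# are made up for this sketch.

from typing import Tuple, Set

Pos = Tuple[int, int]
MOVES = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # the four grid directions


def sense(robot: Pos, can: Pos) -> Pos:
    """Return only the direction toward the can: no history, no map."""
    dx, dy = can[0] - robot[0], can[1] - robot[1]
    return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))


def reactive_step(robot: Pos, can: Pos, blocked: Set[Pos]) -> Pos:
    """Purely reactive: choose a move from the current percept alone.
    Because the same percept always yields the same response, a wall in
    the way can trap the robot in an endless cycle."""
    sx, sy = sense(robot, can)
    for move in [(sx, 0), (0, sy)] + MOVES:
        nxt = (robot[0] + move[0], robot[1] + move[1])
        if move != (0, 0) and nxt not in blocked:
            return nxt
    return robot


def stateful_step(robot: Pos, can: Pos, blocked: Set[Pos],
                  visited: Set[Pos]) -> Pos:
    """Same sensors, plus a tiny internal representation: a memory of
    visited cells. Refusing to revisit them breaks the cycle."""
    sx, sy = sense(robot, can)
    for move in [(sx, 0), (0, sy)] + MOVES:
        nxt = (robot[0] + move[0], robot[1] + move[1])
        if move != (0, 0) and nxt not in blocked and nxt not in visited:
            visited.add(nxt)
            return nxt
    return robot


def run(step_fn, robot: Pos, can: Pos, blocked: Set[Pos],
        steps: int = 30, **kw):
    """Simulate a controller for a fixed number of steps and return its path."""
    trace = [robot]
    for _ in range(steps):
        if robot == can:
            break
        robot = step_fn(robot, can, blocked, **kw)
        trace.append(robot)
    return trace


if __name__ == "__main__":
    can = (5, 0)
    wall = {(3, -1), (3, 0), (3, 1)}  # a short wall between robot and can

    # The reactive agent ends up shuttling between (2, 0) and (2, 1):
    # same percepts, same responses, forever.
    print("reactive:", run(reactive_step, (0, 0), can, wall))

    # The stateful agent's visited-set memory forces it to try new cells,
    # so it walks around the wall and reaches the can.
    print("stateful:", run(stateful_step, (0, 0), can, wall,
                           visited={(0, 0)}))
```

The contrast is the whole argument in miniature: nothing about the sensors changes between the two controllers; only the presence of a small piece of internal state does.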
When the first such systems failed, the AI community missed the important implication: that situated action theory was doomed to fail. My colleagues and I realized this as early as 1970, but most of the rest of the community was too preoccupied with building real robots to see the point.