Recently, thanks to a chance remark on an IRC channel, I found a page (well, the first of four pages; arstechnica.com obnoxiously, and pretty pointlessly, hides the easily readable version of their content behind a paywall[1]) talking about a program (the Overmind, its creators called it) that won a StarCraft competition.
This is really interesting stuff. But I take issue with their calling this thing AI. Perhaps I'm just succumbing to the "the core of AI is that which has not yet been implemented" syndrome, but it seems to me more like an unusually elaborate expert system.
If you haven't already seen it, you may want to go give the arstechnica.com piece a read, because some of what I'm about to say won't make much sense otherwise.
I find the remarks about the one opponent, Krasi0, that gave the Overmind trouble highly significant, because that too is a program that exploits a computer's ability to micromanage. But neither Krasi0 nor the Overmind was truly intelligent: able to notice similarities and problems and to figure out ways to counter them. For example, I don't know StarCraft—pretty much everything I know about it was learned reading that piece—but, from reading that, I would expect that a human in the Overmind's position (ie, with the ability to micromanage that way) dealing with Krasi0 would, after the first couple of encounters, start taking out the workers first, then the units they're protecting.
This is where the difference between the Overmind and a (hypothetical) real AI shows. The Overmind expert system had no foundation coded into it on which to build that strategy; it just kept grinding away at a war of attrition. An entity capable of noticing similarities and deducing probable causes, such as a human, would notice that attacks weren't working as well as expected and, at the very least, revise its anticipated success figures; a good one would notice the workers responsible for the difference and start dealing with them.
Not that this is a difficult concept, for a human; for example, in King's Quest VIII, a heuristic I sometimes find useful is "take out the opposing healers first". And that the Overmind didn't come up with it is not, per se, a problem. That it wasn't capable of even trying to come up with it, though, is a failing, and is pretty close to my core point.
I fully expect that some people would react by adding code for "take out supporting healers first" as a possible tactic. This might well result in a stronger player, but it entirely misses my real point: it's just adding another rule to the expert system, which will still remain limited to the toolbox it's been given. Until it becomes capable of inventing tactics and strategies and reasoning about them, it will remain vulnerable to being blindsided by the unexpected.
An acquaintance of mine who plays StarCraft also remarked that the Overmind would fare badly against someone who built up anti-air defences to levels that would normally be excessive, because it isn't capable of recognizing when a strategy isn't working and switching to another, such as attacking with ground units instead. This is just another example of the same basic problem: it has nothing like actual understanding, just a few canned techniques. Fairly elaborate techniques, mind you, but it is not capable of things such as similarity recognition or creative experimentation.
I suspect games this complex are AI-complete. Very deep, and very deeply reflexive, behaviours (such as modeling opponents' strategies, hypothesizing possible holes, probing for them, and exploiting them when found) seem to me to be necessary.
Perhaps I'm not giving the Overmind and its creators enough credit. Things like tuning the contributions to its field potentials are the first fumbling steps towards inventing strategies and tactics. But there seems to me to be a qualitative difference between tuning a handful of parameters to a canned strategy and inventing a strategy; what would really impress me is a system that invented and developed the field-potential control mechanism itself.
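For readers who haven't met the technique, the field-potential idea itself is easy to sketch. What follows is a toy reconstruction of my own, not the Overmind's actual code; the constants k_goal and k_threat, and the inverse-square falloff, are invented stand-ins for exactly the sort of hand-tuned contribution weights I mean.

```python
import math

# Toy potential-field steering (illustrative only; not the Overmind's code).
# A unit feels a constant-strength pull toward its goal and an
# inverse-square push away from each threat; it moves along the net field.

def net_force(pos, goal, threats, k_goal=1.0, k_threat=8.0):
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy) or 1e-9
    fx, fy = k_goal * gx / d, k_goal * gy / d      # unit pull toward goal
    for tx, ty in threats:
        dx, dy = pos[0] - tx, pos[1] - ty          # points away from threat
        d = math.hypot(dx, dy) or 1e-9
        push = k_threat / (d * d)                  # inverse-square falloff
        fx += push * dx / d
        fy += push * dy / d
    return fx, fy

def step(pos, goal, threats, speed=0.5):
    """Advance one tick along the net field direction."""
    fx, fy = net_force(pos, goal, threats)
    n = math.hypot(fx, fy) or 1e-9
    return (pos[0] + speed * fx / n, pos[1] + speed * fy / n)

# A unit heading for (10, 0) detours around a turret sitting near its path.
pos = (0.0, 0.0)
for _ in range(60):
    pos = step(pos, goal=(10.0, 0.0), threats=[(5.0, 1.5)])
```

Tuning k_goal, k_threat, and the falloff exponent changes how boldly units dive in or how widely they kite, which is impressive engineering; but note that the strategy itself—approach, avoid, repeat—is fixed in the code.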
One of my past friends (a good friend back in the '80s, though we're rather out of touch now) once built a Rubik's Cube solver that came as close to this as anything I've heard of in that space: it would experiment, developing its own macros. Still a canned strategy (it had no chance of coming up with Kociemba's algorithm, for example), but, like the Overmind here, early steps towards creativity.
Don't get me wrong. On rereading, this feels critical of the Overmind. It's not; this is very impressive and interesting work, akin to the development of Deep Blue for chess, only more interesting. It contains what I suspect may be seeds of some great things. I just think it's a bit excessive to call it AI, except in the watered-down sense in which any expert system's canned knowledge counts as AI.
[1] Or at least a registerwall, which is really just a flavour of paywall where the payment is non-monetary.