I think you'll find that if you limit your approach based on semantics, you'll get limited success, but that's just my take. For a game like AI War, I've had to blend together just about every kind of game-style AI (plus a few not often used in games at all) to get a good result. Checkers is a solved game, as you mention, and that's why that sort of no-processing master table works. It also has a vastly smaller number of board states than, say, Chess.
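Just to make "no-processing master table" concrete, here's a tiny sketch in Python. The move histories and replies below are invented for illustration; a real solved-game table would be enormous and generated offline:

```python
# A toy "master table": for a solved game, move selection can be a straight
# dictionary read, with no search or evaluation at all. Keys and replies
# here are made up for illustration.
BEST_REPLY = {
    "start":       "11-15",
    "11-15 23-19": "8-11",
}

def table_move(history: str) -> str | None:
    # No look-ahead, no scoring: just consult the precomputed table.
    return BEST_REPLY.get(history)

print(table_move("start"))  # -> 11-15
```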
That said, from an AI standpoint there are only three ways I can think of to intentionally make a bad move (there's a rough sketch of all three in code after the list):
1. Choose from a bad-move list.
2. Do something random (which has just as much chance of being a good move as a bad one, in the end). I lump reducing the search depth in with this, since a shallower search at least increases the random component of the move.
3. Programmatically determine at runtime that condition X exists and that it would be wonderfully stupid to do Y, then do Y anyway.
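Here's a rough sketch of how those three might hang together around a single difficulty knob. This is not anything out of AI War: it assumes your real search has already produced (move, score) pairs, and the `blunder_test` predicate, the probabilities, and the example moves are all hypothetical:

```python
import random

def choose_move(scored_moves, difficulty, blunder_test=None):
    """Deliberately weakened move selection.

    scored_moves: (move, score) pairs from your real search; higher is better.
    difficulty:   0.0 (very weak) through 1.0 (always plays its best move).
    blunder_test: optional predicate for option 3 -- a runtime check that
                  recognizes "it would be stupid to do Y" situations.
    """
    # Option 2: sometimes just play randomly. A random move still has a
    # chance of being good, which keeps the weakness from looking scripted.
    # (Shrinking the search depth that produced scored_moves has a similar
    # randomizing effect.)
    if random.random() > difficulty:
        return random.choice(scored_moves)[0]

    # Option 3: if the game code can recognize a situational blunder,
    # occasionally commit one on purpose at lower difficulties.
    if blunder_test is not None and random.random() > difficulty:
        blunders = [move for move, _ in scored_moves if blunder_test(move)]
        if blunders:
            return random.choice(blunders)

    # Option 1, inverted: with a curated bad-move list you'd sample from
    # the bottom of the table here instead of taking the top.
    return max(scored_moves, key=lambda pair: pair[1])[0]

# Hypothetical usage: three candidate moves with made-up scores.
candidates = [("develop", 0.8), ("waiting_move", 0.2), ("hang_queen", -3.0)]
print(choose_move(candidates, difficulty=0.3,
                  blunder_test=lambda move: move == "hang_queen"))
```

The point of routing everything through one `difficulty` knob is that the same code path covers the whole scale: at 1.0 it just plays the best-scored move, and as the knob drops, the random and blunder branches fire more often.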
In AI War, I used 2 and 3 from that list. It sounds like you want a big move-list database in general, which is why option 1 seemed like a good solution for you. Beyond that I'm not quite sure what you're looking for, since you seem to want the AI to play along a difficulty scale without explicitly handicapping it. You can't "coerce" an AI into doing anything, because it doesn't really think. Remember, you're not making a true AI in the first place -- you're making a simulation of one, like all the rest of us. It's all just algorithms, math, and randomness, and how you choose to apply those components to get a smart or a stupid result.
It's easy to anthropomorphize the result, which is great, but one of my biggest lessons with AI War -- and why the AI there seems better than in most other games, despite being designed and coded in a fraction of the usual time -- is this: It doesn't matter what the AI actually does, just what the player thinks the AI is doing. Truly. It's like being a stage magician, not like teaching Bicentennial Man to walk.