The AI in AI War is not learning; it is emergent ("nondeterministic AI that was not dependent on having any historical knowledge"):
http://christophermpark.blogspot.ru/2009/07/designing-emergent-ai-part-4.html
The fact that it is producing emergent results (as in, not expected) does not make it an AI. If you throw 50 stones at a wall, you get an emergent hit pattern: one rock might even ricochet, one rock will land on the moon, and another will cause the end of the dinosaurs. But that is not a sign of your intelligence, it's just a sign that you have a lot of rocks to throw and really bad aim. Which means I could deduce just from that that you are human. An AI would not have bad aim. It would aim 50 times at the same spot, hitting the same spot, and thus create the "hole in the wall" problem. Any scripted AI falls prey to this, although some RTS games have managed to nearly entirely hide it.
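To make the analogy overly concrete, here's a throwaway Python sketch (mine, not anything from AI War's actual code): noisy throws give you an "emergent" scatter for free, while a deterministic script drills the same hole every single run.

```python
import random

def human_throws(target_x, target_y, bad_aim=5.0, throws=50):
    # 50 throws with noisy aim: the hit pattern is "emergent", but the only
    # ingredients are lots of rocks and randomness, not intelligence.
    return [(target_x + random.gauss(0, bad_aim),
             target_y + random.gauss(0, bad_aim)) for _ in range(throws)]

def scripted_ai_throws(target_x, target_y, throws=50):
    # A deterministic script hits the exact same spot every time: the
    # "hole in the wall" problem, perfectly predictable after one game.
    return [(target_x, target_y)] * throws

print(human_throws(0, 0)[:3])        # scattered, different every run
print(scripted_ai_throws(0, 0)[:3])  # identical, forever
```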
Flowery analogies aside, to me this easy predictability of current pseudo-AI scripts is the primary reason I get bored of RTS games really quickly, and yes, that includes AI War 1, though here the many different scripts (AI personalities) gave me a lot of fun after all, and a lot of suffering with certain combos... But my point is that there isn't anything intelligent about anything the AI in AI War 1 does. It really just throws a lot of rocks at you, and it doesn't just have bad aim, it flat out does not aim (as in, it does not care what you do, aside from reinforcement and the doom counter). That produces emergent results, true, but is it a sign of intelligence? ;P
But all this is kinda moot, because I honestly do not believe machine learning could make an AI that would be a fun enemy AI personality in AI War... How would you ever pre-train it? You'd need to play through at least 30,000 to 100,000 generations. That is an unfathomable scale of work. And at the end, it would be heavily debatable whether what you produced is actually capable of playing AI War against a human, or whether it is only capable of playing against a CERTAIN human playstyle. (And even that is questionable, because I would argue that you can machine-learn an AI, but you would spoil the results with a shortsighted or bad definition of goals, and this is why there may be a Go AI, but there isn't any AI that has passed the Turing test.)
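Just to put that scale into numbers, a quick back-of-envelope calculation: the generation count is the lower bound from above, while the campaign length is purely my assumption (AI War games easily run 10+ hours).

```python
generations = 30_000           # lower bound from above
games_per_generation = 1       # wildly optimistic: one full campaign each
hours_per_game = 10            # my assumption for an average AI War campaign

total_hours = generations * games_per_generation * hours_per_game
print(total_hours)                    # 300000 hours of play
print(round(total_hours / 24 / 365))  # ~34 years of non-stop games
```

And that is with a single game per generation and no restarts or tuning passes.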
And if you now say, "but ERE, you unfathomable beautiful being that types a lot of text, there was an AI in the news that passed the Turing test..." Well, read the actual conversation: http://www.scottaaronson.com/blog/?p=1858 And then weep, weep hard. That passed as a human response to an actual human being, which really tells us more about the human than about the "pseudo" intelligent chatbot.
30% thought this chatbot was a human??
Which reminded me of a quote I read somewhere, probably on an off-link from that AI researcher's page about the Turing test:
"A Turing Test is a conversation with a goal: for the subject, to prove that it’s human; for the interrogator, to learn the truth. And both of them know that this is the goal, and both know that the other knows it. So what’s the use pretending otherwise?"
Think hard about how you would prove you are human just by text. This doesn't require a vastly sourced database or huge intelligence; you could just ask "how do you feel about Trump?" and an AI would give you a response, and so would a human.
And then you kill the AI by asking: WHY?
And Watson? It crawled the net for responses, and you can imagine how that turned out. (If you imagine porn and cat pics, that is exactly not how it turned out, because, imagine that, an AI has no notion of what a cat pic or porn is; instead it became a hateful troll, because it was sourced from Twitter.)
Anyway, lots of text. It's a funny discussion that I enjoy thinking about, to be honest, but machine learning does not lead to anything intelligent; it is just a more complicated way to script responses to situations you expect. Because let's face it, the crux of a machine-learned AI is the teacher, and how would you even begin to train an AI for AI War?? You'd have to play against a godlike human player who plays as the AI, so that the AI learns from that??? But the moment you encounter a player that plays totally differently, your trained AI is totally borked... (Anyone still remember transport ship raids? The AI could never cope with those, so the devs nerfed transports, removing player agency from the game and thus reducing the amount of weirdness the AI encounters. ;P)
So yeah, a machine-learned AI for AI War... would never work, imo. I would be happy to learn how ye would envision it working, though. Just think about the AI in AI War: it's a lot of stuff an AI has to do, and even a good human player would be terrible at managing 80 planets at the same time. So right there you would already have a problem: you as a human couldn't teach a proper self-learning AI anything about how to play as the AI. Unless I misunderstand on what level you want to employ machine learning.
When you say pre-trained, on what level would you employ pre-training, and how would you formulate the decision tree?
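For what it's worth, the only level at which I can even picture "pre-trained" meaning something is not a model that plays the whole game end to end, but a learned scoring function plugged into an otherwise scripted decision, e.g. picking a wave target. Purely hypothetical Python sketch; every name in it is made up, and nothing like this exists in AI War:

```python
from dataclasses import dataclass

@dataclass
class Planet:
    name: str
    defense_strength: float
    economic_value: float

def learned_score(planet):
    # Stand-in for the pre-trained part (the bit that would eat those tens of
    # thousands of generations). The hand-picked features are exactly the
    # "shortsighted goal definition" trap from above.
    return planet.economic_value - 0.5 * planet.defense_strength

def choose_wave_target(planets):
    # The outer decision loop stays a plain script; only the scoring is
    # "learned", so the hole-in-the-wall predictability never really goes away.
    return max(planets, key=learned_score)

planets = [Planet("Murdoch", 8.0, 3.0), Planet("Tyr", 2.0, 5.0)]
print(choose_wave_target(planets).name)  # -> Tyr
```

And even there, all the actual judgment lives in which features you hand-pick, which is the goal-definition problem all over again.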
Ps.: I am also aware that AI researchers don't care about chatbots currently, as the fundamental technology is still so far away that it isn't even worth contemplating. I just wanted to give these examples of things that the media called AIs which were really nothing of the sort.