Ye gods, that's a lot of text. Ugh. Normally I try to read absolutely everything in a thread, but...
Now you know how I feel! Minus the ugh part, but there is just an overwhelming amount of stuff.
Yep, I used to feel this way too. Now that I don't work here, I get to pick and choose what I want to read.
Happy belated birthday, by the way!
With the major events, I started a whole thread discussing what I thought would be a good idea for them: http://www.arcengames.com/forums/index.php?topic=15846.0
In short, the idea is that these events are the things that should flavor each game and make it distinct. They force the player to respond and change their playstyle, so that once you figure out one strategy that works you can't just repeat that strategy and always win. Tying how frequently these events can happen to the difficulty level, or to a separate option in the menu, isn't a bad idea, but they really are needed, and at least one or two should trigger near the start of the game. That gives you a good reason to keep playing after you've beaten the game once or twice, without having to intentionally sabotage yourself.
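Just to make that suggestion concrete, here's a rough sketch of what that kind of gating could look like. All of the names and numbers are made up for illustration, not anything from the actual game:

[code]
import random

# Rough sketch only -- hypothetical names and numbers, not the game's code.
# Idea: major-event frequency scales with difficulty, but one event is
# still guaranteed to land early so every playthrough gets flavored.
EVENTS_PER_CENTURY = {"easy": 1, "normal": 2, "hard": 4}

def should_fire_major_event(year, difficulty, events_fired_so_far, rng=random):
    # Guarantee an early event so even a first game has to adapt to one.
    if events_fired_so_far == 0 and year >= 10:
        return True
    # Otherwise roll once per simulated year against the difficulty rate.
    return rng.random() < EVENTS_PER_CENTURY[difficulty] / 100.0
[/code]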
Nice!
As for the buildings, earlier in the beta they were a flat addition to RCI amounts, which turned out to be a really bad model. So we shifted them to the current model, but made them conservatively slow.
- In my search for the history of RCI I don't recall seeing why that turned out to be a bad model. Can you clarify?
Hmm... I'm not sure how much was documented, and when it was. May have been during alpha. For a long while it was a "+20 to RCI" value on buildings, and it would just be a flat boost. I shifted away from that because the very concept of that is just a little odd.
- Rare Event Rarity
- I believe the major problem is that games still don't last long enough, or reach dire enough situations, to force the interesting Rare Events.
- Instead we get: the Burlusts are once again able to defend their rock with 1TB of troops, and you have to spend 20 years of intense population destruction to make any progress.
- Part of that problem comes from the primarily multiplicative combat-strength mechanism being combined with the high birthrates of the Burlust (and their super building).
- If players could either finish games more quickly or get an endgame more interesting than Burlust eradication, more of them would see the rare endgame events.
Right, this is true. A lot of the rare events are actually time-gated, too, so that they don't stomp new players.
- Traditional Decision Tree AI versus Individual Agent AI versus hybrid AI.
- I feel I'm being mischaracterized here. (I don't think that means we have to duel to the death, but I might be mistaken. If so, please consult my second for further arrangements.)
- The initial description was system-neutral, neither favoring nor endorsing any system of AI organization.
- The second, more in-depth characterization was definitely hybrid in nature, the same system you speak about in your AI War thesis.
- I'm blaming it on my own lack of clarity; I need to use more bullet points.
- Racial Memory - I suppose it depends on what you mean by memory. This is getting far afield, so I won't pursue it.
- Randomness - This one seems to be a hot issue, given your posts and articles on AI. There is a range of AI behaviors for which you want some randomness. However, truly random behavior is not desirable (a point you bring up in your own article). Yet on a simple reading you come across as saying that you don't want predictability at all.
"having memory is demonstrably bad, because it makes them predictable"
- I believe you are caricaturing your case here. You want AIs that provide a range of behaviors when there are many reasonable options; when there are not, you do want a predictable AI. That's inherent in the probabilistic agent-based AI approach. However, as written, it sounds like you are one of those GOG extremists (per the other thread, where "GOG extremist" was inadvertently misused, leading to massive walls of text over the misunderstanding).
Apparently we are both being mischaracterized.
In terms of memory, I really am not a fan of it for AI in the main, because it makes them slow to react. I prefer AI that looks at the current state moment by moment and makes decisions based on that. Obviously some memory is needed in terms of "what was I just doing?" so that the AI doesn't just flail about. But that's more about following through on one action than anything else.
The way I understood your note was to use data from past actions of the player and the other AIs to predict their future actions. That... gets very dangerous. It's easily abusable, because players can do one thing for a long time and then immediately switch what they are doing, and the AI will react slowly because the vast bulk of its data tells it to. Either that, or you end up adding special overrides to make it react to specific cases. Either way, you wind up with mounds more code, mounds more bugs, and typically more exploits. If the AI makes choices that are in the range of optimal rather than always optimal, and then remembers and follows through on those once chosen (to a certain extent), then you get the ideal behavior, in my opinion.
The example from dumpsterKEEPER, if I recall it correctly, was the AI sending ships back and forth through wormholes indecisively. That was a great example of the AI itself not remembering what IT was trying to do and thus causing problems by not committing to a decision. Versus trying to anticipate traps based on your past decisions, which instead tends to make it actually easier for you to set counter-traps if you are a good player. Even with the whole "commit to a decision" thing, there has to be some threshold after which the AI abandons a decision if it no longer thinks it is a good one. Otherwise, again, you get an AI that is sluggish to react.
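To make the "commit, but with an abandon threshold" idea concrete, here is a minimal sketch. The names and the numeric threshold are made up for illustration, not anything from AI War's actual code:

[code]
# Minimal sketch -- hypothetical names and threshold, not AI War's real code.
# A fleet keeps its committed order until that order has become markedly
# worse than the best alternative, which prevents wormhole ping-ponging.
ABANDON_MARGIN = 25.0  # how much better an alternative must score to win

def choose_order(current_order, candidate_orders, score):
    """score(order) -> higher is better for this fleet right now."""
    best = max(candidate_orders, key=score)
    if current_order is None:
        return best
    # Only abandon the committed order once it trails the best option by
    # a real margin; small fluctuations don't cause indecisive flip-flops.
    if score(best) - score(current_order) > ABANDON_MARGIN:
        return best
    return current_order
[/code]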
True randomness has never been my goal, and nor have I meant to sound like it is. But programming decision trees is perhaps the antithesis of my general approach.
- I do not accept your response that the AI behavior is good (splitting and sending some ships to die).
- I tentatively accept that it may be good enough(for an AI with the level of resources AI Wars AI has) and for the time commitment to further AI development.
- I will suggest that the better answer is that the AI needs to have agent-accessible information about all systems.
- To limit it to the specific issue, it needs to have "retreat plans" for every single system.
- It should know how dangerous it is to cross through any system with ships (roughly along the lines of the sketch after this list).
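As a rough illustration of what per-system retreat plans might look like (the data shapes and names here are assumptions, not how the game stores anything), you could treat the galaxy as a graph whose edge costs are per-system danger estimates and precompute the safest next hop toward home from everywhere:

[code]
import heapq

# Sketch only -- hypothetical data shapes, not the game's actual structures.
# Treat the galaxy as a graph, use each system's danger estimate as the cost
# of crossing it, and run Dijkstra outward from the AI's "home" system.
def retreat_plan(neighbors, danger, home):
    """neighbors(sys_id) -> adjacent system ids
       danger(sys_id)    -> estimated cost of moving ships through it
       Returns {system: next hop toward home along the safest route}."""
    best_cost = {home: 0.0}
    next_hop = {home: home}
    frontier = [(0.0, home)]
    while frontier:
        cost, here = heapq.heappop(frontier)
        if cost > best_cost.get(here, float("inf")):
            continue  # stale queue entry
        for adj in neighbors(here):
            new_cost = cost + danger(adj)
            if new_cost < best_cost.get(adj, float("inf")):
                best_cost[adj] = new_cost
                next_hop[adj] = here  # from adj, retreat toward `here`
                heapq.heappush(frontier, (new_cost, adj))
    return next_hop
[/code]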
Interesting that you don't find the AI good in AI War. But, ah well. I seem to recall you are one of the uber players, so I'm not surprised you would hold that opinion. AI War is designed to be good up to a point for most players, and then be situationally difficult for people like yourself. Basically it gets into the "good enough given the circumstances" area there.
None of what you're saying there about agent-accessible information like escape routes and danger estimates is at all at odds with what I am saying. The AI already does that sort of thing, and that's not a decision tree. Rather, that's weighting of information. The thing to do then is take something like the top 10% of good options, choose one of those at random, and then stick with it. If the quality of options has a massive drop-off past some point, then ignore everything past that point, which may of course leave you with only one option.
That's actually more or less how the AI in AI War operates in a number of circumstances, but it hasn't been built up more because it hasn't really seemed needed by the playerbase.
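In rough code form, that selection rule looks something like this. The percentages and the drop-off cutoff are illustrative, and it assumes scores are positive; this is a sketch of the idea, not the actual AI War code:

[code]
import random

# Sketch of the selection rule above -- illustrative numbers, not real code.
# Assumes scores are positive; higher is better.
def pick_option(options, score, keep_fraction=0.10, dropoff=0.5, rng=random):
    ranked = sorted(options, key=score, reverse=True)
    best = score(ranked[0])
    # Ignore everything past a massive quality drop-off relative to the best.
    viable = [o for o in ranked if score(o) >= best * dropoff]
    # Keep roughly the top 10% of what's left (always at least one option)...
    top = viable[:max(1, int(len(viable) * keep_fraction))]
    # ...and commit to one of those at random.
    return rng.choice(top)
[/code]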
Anyway, I guess this sums up the argument we were pseudo-having (where we each thought the other was saying something they were not, so actually we were more or less arguing on the same side against imaginary copies of ourselves):
False you: "As an example of the racial AI, the AI should calculate retreat paths, then choose one, and then stick to it. This should be done very rigidly, and precalculated too early, so that there is a high chance of the path becoming outdated and non-optimal by the time it actually needs to be used. Also, always find the exact ideal path that is obvious, and take that."
False me: "No, it's stupid to make plans. The AI should just run in random directions and see what happens. Any sort of planning or target evaluation is holding me back, man!"
Actual both of us: "Given time and resources (which is always an iffy given), doing more complex and accurate evaluations of a larger variety of things is a good thing."
Actual me: "And that should be something that is done on the fly, and then kept as the instructions for a certain amount of time, unless the situation turns X amount bad, in which case things should override the normal 'stick with this' logic."