Author Topic: Ai easly tricked.  (Read 6449 times)

Offline Nalgas

  • Hero Member
  • *****
  • Posts: 680
Re: Ai easly tricked.
« Reply #15 on: July 13, 2011, 09:57:54 am »
Yeah, I'm familiar with the difficulty of making something like that convincingly imperfect (and I've only made things that are a good bit less convincing than that myself).  It's usually easy to make an AI do the right thing every time, or something incredibly stupid every time, but it's very hard to make it screw up convincingly, and to do so with limited resources in real time.  The way the AI handles this particular situation has definitely been heading in the right direction, though, and it's already pretty good.

I'm not sure where I'd start if I were trying to improve on it, either.  It doesn't feel like it'd be something simple or straightforward; it seems to involve enough different factors that any fix could end up being fairly subtle and complex.  I'll see if anything comes to mind during our weekly games, when I'm actually in one of those situations watching it happen.

Offline Cyborg

  • Master Member Mark III
  • *****
  • Posts: 1,957
Re: Ai easly tricked.
« Reply #16 on: July 13, 2011, 10:50:18 pm »
Quote from: x4000
Fair enough on the tractor beams, mines, etc -- now I see what you meant there.  But, the alternative in the past was them getting clobbered with that anyhow.  And... really, if the AI never falls for any of your traps, where's the fun in making traps?  The AI makes plenty of traps that players tend to fall for, too, so I think that balances out.

That might sound like a cop-out answer, but I'm actually very serious.  If you read about, say, programming games of Billiards, you'll see that one of the biggest challenges there is making the AI play imperfectly.  FPS games have the same problem

I thought AI War was more akin to Chess or Go (which is a better example)? Those games aren't as simple as billiards or shooting games: they aren't practically solvable with current algorithms, even though both are theoretically solvable problems. I don't mind an AI that makes those kinds of mistakes on lower levels, but on upper levels shouldn't we be taking on the equivalent of 'Chessmaster'?

I think the general problem is that players have unraveled the algorithm to the point that it's abusable. I'm sure we all have our little tactics that we use to farm capturable ships from the AI and keep the buildups down. That's part of the game, in a sense. However, maybe the best defense for the AI is a better offense?

I think it would be a good discussion to talk about different strategies the AI should be using. What I think would be fair and effective would be for the AI to actually scout and develop an idea of the player's planets, to record player behavior (picking up on repetitive tactics), to learn persistently so that it gets better the more you play across games (that would probably be the subject of an entire expansion), and to make decisions beyond just border planets.




Kahuna strategy guide:
http://www.arcengames.com/forums/index.php/topic,13369.0.html

Suggestions, bugs? Don't be lazy, give back:
http://www.arcengames.com/mantisbt/

Planetcracker. Believe it.

The stigma of hunger. http://wayw.re/Vi12BK

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #17 on: July 14, 2011, 09:46:16 am »
I think that any realtime RTS game is going to fall somewhere between Chess and Billiards, but it's neither.  There are three main forces going on:

1. The "board state" is vastly more complex than Chess, so even with a supercomputer it's not something that you can just compute all the possibilities and see if you win.  This makes the AI fundamentally different from Deep Blue, because it has to make guesses and take logic shortcuts no matter what.  Same as a human.

2. Unlike Billiards, therefore, it also can't just compute "perfect shots" every time.  So I figure this supports your argument.  But on the flip side, what it CAN do is control thousands of ships on multiple planets all at once, giving them all different orders, whereas a human's attention span can't be segmented that way.

3. This is also a highly emergent system, by necessity, because the game state is so complex.  The more rules that get introduced, the less emergent and more predictable it gets.  In some cases adding a rule is a good thing, when the emergent logic was consistently doing something stupid, but it also leads to situations -- such as this one -- where the AI does something with such regularity that players figure out how to exploit it.  In this example the overall effect on the game is still better than the purely emergent style, but that has to be weighed with care.


Also, I would add: who here really wants to play against Deep Blue?  It would just mop the floor with you every time, unless you're one of the top two or three grandmasters in the world.  So Deep Blue would not be a fun general Chess opponent, is what I posit.  Anyway, on the highest difficulties, sure, there are some things the AI does that it doesn't do on diff 7 or 8.  Not grand new strategies, but it is allowed a bit more accuracy in calculating certain things, and it also masses its forces a bit differently, so that you see no encounters until the one encounter that it crushes you with.

At any rate, there's no chance of a persistent learning AI -- that's the antithesis of how I designed this specific AI, and it wouldn't work in multiplayer, etc.  This AI is all about looking at current board state, with no past knowledge of what has happened in the game (it literally doesn't remember ten seconds ago), and then making decisions with a lot of randomization that lead to emergent/flocking behaviors with its ships.  I wrote a whole series of articles on this, actually, if you haven't seen those.  But it would have to be a completely new AI system to incorporate learning elements, which I have no desire to do -- that database would get massive, and that sort of approach leads to many, many problems in RTS games in particular in my opinion.  In a lot of respects, these learning databases wind up being like a self-built decision tree, which I think is unsustainable.
Have ideas or bug reports for one of our games?  Mantis for Suggestions and Bug Reports. Thanks for helping to make our games better!

Offline keith.lamothe

  • Arcen Games Staff
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 19,505
Re: Ai easly tricked.
« Reply #18 on: July 14, 2011, 10:48:09 am »
I think one potential gain would come in making the AI's approach to forted-up worlds a bit less predictable.  The current approach seems fine, but if you didn't know 100% that it was going to take that approach in a given case it could be more interesting. 

The example I have in mind where this worked is the exo-waves: for a while it was a given that an exo-wave would use a certain percent of its points on a lead ship, a certain percent on escorts, and a certain percent on pickets, and would split the total budget across several battlegroups going for several targets.  People complained that this became very predictable and that they were able to develop tailored tactics to separate the smaller ships out and so on.  So I added a random chance (which varies based on the "source" of the exo-wave) of using an alternate allocation that heavily favors a big lead ship and/or bigger escorts and/or concentrating everything into a single battlegroup that goes for a single objective (generally a human home command station).  So you don't know ahead of time whether you're going to get a fairly solid, balanced fleet thrown at you on multiple fronts or one big fleet led by a golem/whatever trying for a deep strike.  It's not complex at all, but it did help the fun factor as far as I can see.
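For illustration, the kind of per-source weighted allocation choice described above might look roughly like the following sketch; the profile names, percentages, and chances here are all invented, not the actual exo-wave code.

```python
import random

# Hypothetical allocation profiles: fractions of an exo-wave's budget spent
# on the lead ship, escorts, and pickets, plus how many battlegroups the
# total budget is split across.
PROFILES = [
    {"name": "balanced",   "lead": 0.30, "escorts": 0.45, "pickets": 0.25, "groups": 3},
    {"name": "hammerblow", "lead": 0.60, "escorts": 0.35, "pickets": 0.05, "groups": 1},
]

def pick_profile(source, rng=random):
    """Pick an allocation profile, with the odds of the concentrated
    'hammerblow' profile varying by the source of the exo-wave."""
    alt_chance = {"fallen_spire": 0.25, "hybrid": 0.40}.get(source, 0.20)
    return PROFILES[1] if rng.random() < alt_chance else PROFILES[0]

def build_wave(budget, source):
    """Split the budget into battlegroups according to the chosen profile."""
    profile = pick_profile(source)
    per_group = budget / profile["groups"]
    return [{"lead": per_group * profile["lead"],
             "escorts": per_group * profile["escorts"],
             "pickets": per_group * profile["pickets"]}
            for _ in range(profile["groups"])]

print(build_wave(10000, "fallen_spire"))
```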

But it's trickier to do something like that here, since the "do I attack" and "do I retreat" logic isn't very punctiliar (it's not a one-shot decision, but a question that gets asked over and over as time goes on and the situation changes), and randomizing those individual decisions would probably be like throwing all the colors on the palette together when painting: you start with variety, but you always just get brown ;)

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #19 on: July 14, 2011, 11:05:52 am »
Quote from: keith.lamothe
I think one potential gain would come in making the AI's approach to forted-up worlds a bit less predictable.  The current approach seems fine, but if you didn't know 100% that it was going to take that approach in a given case it could be more interesting.

Yep, this had been my main thought on what to do with this, too.

Quote from: keith.lamothe
But it's trickier to do something like that here, since the "do I attack" and "do I retreat" logic isn't very punctiliar (it's not a one-shot decision, but a question that gets asked over and over as time goes on and the situation changes), and randomizing those individual decisions would probably be like throwing all the colors on the palette together when painting: you start with variety, but you always just get brown ;)

And this is exactly what's been holding me up with it so far. :)  It's possible that what we need to do is introduce a system of "overarching AI orders" into the game at some point for things like this.  So the AI could secretly mark (on the main simulation of the game, where it's visible to both the main sim and the AI thread, and thus saved into savegames and usable by ship autotargeting) things like:

- Hold Here For The Next Hour, or until (x) number of ships are ready to go.
- Hold Here Until We Outnumber Them Goodly (the current logic)

And then, by varying the first order a lot, things could get much less predictable -- the number of ships could be 5, or 50, or 500, etc.  Each time one of those orders expires, or some amount of time passes without it expiring (5 hours?), the AI picks a new order at random to replace the prior one.  Then we've suddenly got an AI that is unpredictable again, although by nature it will tend to land on the second option for long stretches, since the first kind of order expires more quickly and since the randomizer would eventually settle on the second option whenever the AI doesn't outnumber you.  But it would certainly be something we could tune, longer-term.
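A rough, hypothetical sketch of such a per-planet order, with randomized thresholds and an expiration that triggers a re-roll; none of this is actual game code, and every number is made up:

```python
import random

class HoldOrder:
    """A per-planet 'overarching order': hold until some condition is met."""
    def __init__(self, mode, threshold=0, expires_at=0.0):
        self.mode = mode              # "hold_for_count" or "hold_until_outnumber"
        self.threshold = threshold    # ships required before attacking
        self.expires_at = expires_at  # game-time at which the order lapses

    def expired(self, now):
        return now >= self.expires_at

    def should_attack(self, own_ships, own_strength, enemy_strength):
        if self.mode == "hold_for_count":
            return own_ships >= self.threshold
        return own_strength > enemy_strength * 1.5   # "outnumber them goodly"

def roll_new_order(now, rng=random):
    """Pick a new order at random; varying the thresholds a lot is what
    makes the resulting behavior hard to predict."""
    if rng.random() < 0.5:
        return HoldOrder("hold_for_count",
                         threshold=rng.choice([5, 50, 500]),
                         expires_at=now + 3600)          # give up after an hour
    return HoldOrder("hold_until_outnumber",
                     expires_at=now + 5 * 3600)          # re-roll after ~5 hours

def plan_for_planet(order, now, own_ships, own_strength, enemy_strength):
    """Called periodically: re-roll expired orders, then ask the current one."""
    if order is None or order.expired(now):
        order = roll_new_order(now)
    return order, order.should_attack(own_ships, own_strength, enemy_strength)
```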

We could then also start having some very basic AI memory for things, too -- like tracking "times I have tried to kill X thing on Y planet."  As that goes up, so does the AI's estimation of what it needs for that planet, which could make it play smarter at higher difficulties.  And past a certain number of failures, it stops treating that X thing as a high-priority target (assuming it's not a home command station), so that it can get past things like ion cannons that players currently use to trap the AI (it already randomizes its way out of some of those, but this could go even further).

I don't want to get TOO crazy with a system like that, but something along those lines could be the next step in the evolution of the AI War AI.  Definitely something for further down the road when we're in a full development cycle on AI War, but I think we could do some interesting things with that.  And I guess technically it would be adding in the very barest bits of "learning AI" per game, though as little of it as we can get away with, since it does impact our ability to keep the AI emergent.

Offline Hearteater

  • Core Member
  • *****
  • Posts: 2,334
Re: Ai easly tricked.
« Reply #20 on: July 14, 2011, 11:36:33 am »
A possibly simple method (given a broad enough definition of simple) would be to give the AI a single optional objective it can set.  Normally, it would have no objective.  But sometimes, if something is causing it a lot of trouble (tractor beams keep locking down most of its waves, waves get wiped out very quickly by optimal counter ships, and so on) it sets itself an objective to counter that problem.

Then many of the other decisions can check if there is an objective set, and if so weight their picks based on that objective.  This could be both "what planet to attack" strategic decisions and "what units to focus in battle" tactical decisions.  After some random time period, if the objective isn't overwritten by a new objective, it gets cleared.
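A very rough sketch of the shape of that idea follows; the trouble tallies, thresholds, and lifetimes are all invented for illustration and are not from the game.

```python
import random

class Objective:
    """A single optional objective: set when something keeps hurting the AI,
    cleared after a random lifetime unless overwritten by a new one first."""
    def __init__(self, counter_what, now, rng=random):
        self.counter_what = counter_what                 # e.g. "tractors"
        self.expires_at = now + rng.uniform(600, 3600)   # random lifetime (seconds)

def update_objective(current, now, trouble_counts, rng=random):
    """trouble_counts is a hypothetical tally of what has been wrecking
    recent waves, e.g. {"tractors": 7, "counter_ships": 2}."""
    if current is not None and now < current.expires_at:
        return current                                   # keep it until it expires
    worst, count = max(trouble_counts.items(),
                       key=lambda kv: kv[1], default=(None, 0))
    return Objective(worst, now, rng) if count >= 5 else None

def weight_choice(base_weight, tags, objective):
    """Strategic ('which planet') and tactical ('which units to focus')
    decisions can nudge their weights toward the current objective."""
    if objective is not None and objective.counter_what in tags:
        return base_weight * 2.0   # favor, but never force, the counter-pick
    return base_weight
```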

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #21 on: July 14, 2011, 11:38:39 am »
Something along those lines, yeah -- though it needs to be a per-planet objective, for the most part.  And really I don't want to make it too much of an objective, but more of a general "this is the behavior mode we're going to use for a while" or "let's weight this factor differently here from now on, or for a while."  Teaching emergent systems to do things better is a tricky thing, and the more you try to make them rules-bound the less you get of the emergent benefits.  Which, at core, is one of the problems here: right now they are bound by one rule 100% of the time, and that's never good.

Offline keith.lamothe

  • Arcen Games Staff
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 19,505
Re: Ai easly tricked.
« Reply #22 on: July 14, 2011, 12:08:01 pm »
Quote from: x4000
Teaching emergent systems to do things better is a tricky thing
Yeah, very much so.  My thought is to just focus on getting it to do things _differently_, rather than trying particularly hard to pick which alternative seems "better".  Basically, just probe the player's defenses with different attacks and see if one gets through; trying to specifically pick an attack based on player deployments would turn it into a game of the player figuring out how to manipulate the AI into picking a supposed "counter" that the player could actually defeat easily.  But even that could work okay, as long as the "I think this would work" factor only increases the random chance of picking that option by some modest amount rather than making it 100% certain, etc.
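A toy sketch of that last point, where an "I think this would work" estimate nudges the random pick by a modest amount rather than dictating it (the plan names and weights are invented for illustration):

```python
import random

def pick_attack_plan(plans, estimated_counter=None, rng=random):
    """plans: hypothetical attack-plan names.  estimated_counter: the plan
    the AI guesses best fits the player's current deployments, if any."""
    weights = {plan: 1.0 for plan in plans}
    if estimated_counter in weights:
        weights[estimated_counter] += 0.5    # a modest boost, never a sure thing
    roll = rng.uniform(0, sum(weights.values()))
    for plan, weight in weights.items():
        roll -= weight
        if roll <= 0:
            return plan
    return plans[-1]

# e.g. pick_attack_plan(["raid", "siege", "deep strike"], estimated_counter="siege")
```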

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #23 on: July 14, 2011, 12:08:55 pm »
Yep!

Offline Nalgas

  • Hero Member
  • *****
  • Posts: 680
Re: Ai easly tricked.
« Reply #24 on: July 14, 2011, 12:14:54 pm »
Yeah, really that's what it comes down to.  It doesn't need to specifically do something better, it just needs to not be 100% predictable in what it does, which it is in that particular situation.  What it does now is "better" than what it used to do, with the exception of it being the exact same thing every time, so you always know what to expect and what to do in response.  Mix it up a bit, and even if it's not necessarily "smarter", it'd still keep people on their toes more, at least as long as it's not actually stupider.  Heh.

Offline chemical_art

  • Core Member Mark IV
  • *****
  • Posts: 3,952
  • Fabulous
Re: Ai easly tricked.
« Reply #25 on: July 14, 2011, 12:25:28 pm »
Quote from: Nalgas
Yeah, really that's what it comes down to.  It doesn't need to specifically do something better, it just needs to not be 100% predictable in what it does, which it is in that particular situation.  What it does now is "better" than what it used to do, with the exception of it being the exact same thing every time, so you always know what to expect and what to do in response.  Mix it up a bit, and even if it's not necessarily "smarter", it'd still keep people on their toes more, at least as long as it's not actually stupider.  Heh.

There are examples in history where a general passed up the obviously sound, all-around good plan in favor of a generally worse one, and it surprised the enemy precisely because it was not the best choice.  In sci-fi, it's a classic way sentient creatures defeat an AI.
Life is short. Have fun.

Offline Cyborg

  • Master Member Mark III
  • *****
  • Posts: 1,957
Re: Ai easly tricked.
« Reply #26 on: July 14, 2011, 07:18:46 pm »
When I think of emergence, I think of behaviors that appear from small minutiae. Imagine looking at a pixelated image that, when you zoom out, looks like Mario. And yet, if you zoom out even more, it happens to be Luigi throwing a fireball. And if you zoom out even more... well, you get the idea. That, to me, is what it means to be emergent: a system that is the sum of its parts. That doesn't mean random, and it doesn't mean predictable. It just means it adds up to a system or representation at some point.

When I see Chris say emergent, I'm never quite sure what he means. Especially when in this thread it sounds like randomness, but in other threads it sounds like a brain-like quality.

It must seem unfair, this kind of complaint, when it actually takes many hours of gameplay to pull back the curtain on the rules of the AI. But the game is designed to be long, so it's unavoidable that after many hours of gameplay some of the threads become bare and exposed to the player. This is not always a bad thing, except when it begins to feel repetitive and makes the AI look easily manipulated.

Do not be so quick to discount learning. If the AI is not allowed to have a hippocampus, it can't poke and prod the player to see where he/she fails, and that means a loss of fun factor at higher levels. It's one thing to just throw bigger and better enemies at the player to increase the challenge, but the most fun I have in the game is when the AI does something that suspends the player's disbelief, if only for moments. Advanced hybrids continue to be the best enemy in the game, along with the thief AI.

I will patiently wait until the next development cycle.

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #27 on: July 14, 2011, 07:37:37 pm »
Quote from: Cyborg
When I think of emergence, I think of behaviors that appear from small minutiae. Imagine looking at a pixelated image that, when you zoom out, looks like Mario. And yet, if you zoom out even more, it happens to be Luigi throwing a fireball. And if you zoom out even more... well, you get the idea. That, to me, is what it means to be emergent: a system that is the sum of its parts. That doesn't mean random, and it doesn't mean predictable. It just means it adds up to a system or representation at some point.

That's not at all what I mean when I say emergent.  The example you're talking about is more related to fractals, near as I can tell.  I'm not referring to fractals, specifically.

Quote from: Cyborg
When I see Chris say emergent, I'm never quite sure what he means. Especially when in this thread it sounds like randomness, but in other threads it sounds like a brain-like quality.

Emergent: when new behaviors result from a set of simpler rules.  Flocking is one example.  There is no leader in a flock, and yet the group tends to stay and move together, sort of like one organism, based on a set of simple rules.  Each fish (or whatever) moves based on external inputs (or randomly, if there are no particular external inputs) while also reacting to its neighbors.  So they wind up moving around as a school.

When it comes to this AI, there are three sources of input: randomness, board state, and your actions.  If the random component is not high enough, it is not emergent; it is just rules-bound or purely reactive.
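As an aside, the classic "boids" flocking toy illustrates that kind of emergence in a few lines: three simple local rules plus a small random component, and no leader anywhere. This is the textbook sketch with arbitrary coefficients, not anything from AI War.

```python
import random

def flock_step(fish, neighbors, rng=random):
    """fish and neighbors are dicts with 'pos' and 'vel' as (x, y) tuples.
    Returns the fish's new velocity from three local rules plus noise."""
    vx, vy = fish["vel"]
    px, py = fish["pos"]
    if neighbors:
        n = len(neighbors)
        cx = sum(f["pos"][0] for f in neighbors) / n   # neighbors' center of mass
        cy = sum(f["pos"][1] for f in neighbors) / n
        ax = sum(f["vel"][0] for f in neighbors) / n   # neighbors' average heading
        ay = sum(f["vel"][1] for f in neighbors) / n
        # 1. Cohesion: drift toward the group's center.
        vx += (cx - px) * 0.01
        vy += (cy - py) * 0.01
        # 2. Alignment: match the group's average velocity.
        vx += (ax - fish["vel"][0]) * 0.05
        vy += (ay - fish["vel"][1]) * 0.05
        # 3. Separation: back away from anyone too close.
        for f in neighbors:
            if abs(f["pos"][0] - px) + abs(f["pos"][1] - py) < 2:
                vx += (px - f["pos"][0]) * 0.1
                vy += (py - f["pos"][1]) * 0.1
    # Plus a small random component, as with the AI's other decisions.
    return (vx + rng.uniform(-0.1, 0.1), vy + rng.uniform(-0.1, 0.1))
```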

Quote from: Cyborg
It must seem unfair, this kind of complaint, when it actually takes many hours of gameplay to pull back the curtain on the rules of the AI. But the game is designed to be long, so it's unavoidable that after many hours of gameplay some of the threads become bare and exposed to the player. This is not always a bad thing, except when it begins to feel repetitive and makes the AI look easily manipulated.

It doesn't feel unfair at all; there is no harsher critic of the AI than me.  However, I'm also not in a rush to correct every flaw, because in AI design the correction can often cause more damage than the flaw itself -- such as the "gap in the wall" example that I usually cite.  Check out this series I wrote, unless you've seen it before: http://christophermpark.blogspot.com/2009/06/designing-emergent-ai-part-1.html

Quote from: Cyborg
Do not be so quick to discount learning. If the AI is not allowed to have a hippocampus, it can't poke and prod the player to see where he/she fails, and that means a loss of fun factor at higher levels. It's one thing to just throw bigger and better enemies at the player to increase the challenge, but the most fun I have in the game is when the AI does something that suspends the player's disbelief, if only for moments. Advanced hybrids continue to be the best enemy in the game, along with the thief AI.

Don't be too quick to discount randomness.  What the AI always needs to do is come up with five or so reasonable options and then choose randomly between them -- NOT choose the best option every time.  If it chooses the best option every time, that leads to big problems.  Where memory can come in handy is in helping the AI make better decisions about what the five-ish best options are; that's all.  Using it to pick the single best option is suicide for the AI.  The hybrids are intensely random, despite being somewhat rules-bound as well.  They are the most traditional of the various AIs in this game, but they still have a lot of randomization in the targets they choose, etc.  And part of the reason they are so effective is that they are moving against the backdrop of the rest of the AI.  If they were not, they would seem less impressive.
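Schematically, that "shortlist of reasonable options, then a random pick" idea is as simple as something like the following; the scoring function and candidate list are placeholders the caller would supply.

```python
import random

def choose_action(options, score_fn, shortlist_size=5, rng=random):
    """Score all candidate actions, keep the top handful of reasonable ones,
    then pick among that shortlist at random -- never just the single 'best'."""
    shortlist = sorted(options, key=score_fn, reverse=True)[:shortlist_size]
    return rng.choice(shortlist)
```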

The problem with the current behavior of the AI around wormholes has nothing to do with emergence: rather, it's a very explicitly-programmed bit of behavior, like the thief logic with tractors.  But the difference with the thief logic is that it still has many random components, such as where to take the ships it finds, and so on and so forth.  The problem with the stalking behavior is that there is no random component in there to make it seem more intelligent.

Gamers and society at large latch onto learning AIs as the thing of the future.  But the reality of how AIs work is a lot more complex.  I'd rather not get into a full discussion of that here, but suffice it to say that I think the only real solution is a very hybridized one.  And when it comes to, say, a Chess AI, knowledge of the past is not important in its ability to beat you.  It only needs to do a sufficient job of projecting into the future.  That's more or less (part of) the approach I've chosen.  But check out the long series of blog articles I wrote on the AI in the game, if you've not seen them; they should make the methodology I'm using much clearer.  The reason the AI in this game is considered by many to be the best around is this unorthodox approach.
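For the chess point, the classic way to "project into the future" from nothing but the current board is minimax search. A bare-bones, generic sketch (obviously nothing like a real chess engine; the move-generation and evaluation functions are placeholders):

```python
def minimax(board, depth, maximizing, legal_moves, apply_move, evaluate):
    """Everything the search consults is the current board; nothing about
    earlier positions in the game is needed.  legal_moves, apply_move, and
    evaluate are placeholder callables supplied by the caller."""
    if depth == 0:
        return evaluate(board), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in legal_moves(board, maximizing):
        score, _ = minimax(apply_move(board, move), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```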

Throwing larger and larger ships into the mix doesn't have much to do with it, by the way; the only reason we did that with 4.0 was to cut down on the massive scale of battles the AI had.  The bigger guard posts and such were added as replacements for the masses of ships and turrets we took out (the AI's population caps per planet were roughly cut to a third).  In terms of other big enemies, well, those are fun extras that people seem to enjoy -- Avengers, etc.  I think Keith has been behind all of those, and he's great with them, but they are layers added onto the game and not really part of the core AI logic.

Hope that helps clear things up!

Offline Cyborg

  • Master Member Mark III
  • *****
  • Posts: 1,957
Re: Ai easly tricked.
« Reply #28 on: July 15, 2011, 08:58:50 am »
Quote from: x4000
And when it comes to, say, a Chess AI, knowledge of the past is not important in its ability to beat you.

I think we are on the same page with most of it, although maybe I put more emphasis on learning. The Chessmaster does have a hippocampus; it has a database full of moves precomputed from past games. Part of the disillusionment with chess is that the first several moves have all been done before, with various statistics attached to them, leading to a fairly deterministic opening. Ever heard of chess openings being an object of study? The reason is that players memorize them before a match.

I think we have the same understanding of emergent, although you phrased it somewhat differently. I don't mind randomness, but it has its drawbacks when you are choosing randomly from very few options, or from options that are unfit for gameplay -- such as trickling in a few ships at a time into a well-defended turret forest, or, in the case of this thread, being predictably baited. You are planning on solving it by adding randomness, but just consider for a moment what would happen if the AI learned through memory that the player was baiting it: this thread never would have come up, because the trick would only work a couple of times, maybe. Under the randomness scenario, it could occur many times. Infinitely many, even.

Offline x4000

  • Chris McElligott Park, Arcen Founder and Lead Dev
  • Arcen Staff
  • Zenith Council Member Mark III
  • *****
  • Posts: 31,651
Re: Ai easly tricked.
« Reply #29 on: July 15, 2011, 09:11:29 am »
Quote from: Cyborg
I think we are on the same page with most of it, although maybe I put more emphasis on learning.

I think so, too.  You put more emphasis on learning; I put more emphasis on accurate in-the-moment analysis.

Quote from: Cyborg
The Chessmaster does have a hippocampus; it has a database full of moves precomputed from past games. Part of the disillusionment with chess is that the first several moves have all been done before, with various statistics attached to them, leading to a fairly deterministic opening. Ever heard of chess openings being an object of study? The reason is that players memorize them before a match.

Generally I agree with what you're saying there, except the one thing I'd add is that for a (theoretical) suitably powerful brain (or computer), you can calculate all those moves and statistics when you first look at the board, without prior knowledge.  For the rest of us, sure, I have plenty of chess openings memorized, myself.

Quote from: Cyborg
I think we have the same understanding of emergent, although you phrased it somewhat differently. I don't mind randomness, but it has its drawbacks when you are choosing randomly from very few options, or from options that are unfit for gameplay -- such as trickling in a few ships at a time into a well-defended turret forest, or, in the case of this thread, being predictably baited.

Sure, you definitely need the various options to be fit for gameplay, and for there to be enough options.  In terms of being predictably baited, the operative word is predictable.  Being baited some of the time is fine, and the AI can't tell if you're baiting it anyway (nor could another player), because it's not clear if you want to just bring your forces right back in, or what you plan to do now that they are gone.

Quote from: Cyborg
You are planning on solving it by adding randomness, but just consider for a moment what would happen if the AI learned through memory that the player was baiting it: this thread never would have come up, because the trick would only work a couple of times, maybe. Under the randomness scenario, it could occur many times. Infinitely many, even.

And see, then we'd have a different thread: that once you bait the AI a couple of times, it learns that's what you're doing and then doesn't go in when you withdraw your forces, making it so that you can protect two planets with one fleet on a further-away planet.  That's what I mean by "squeezing a handful of sand" in my blog posts: the more rules you add, the more likely you are to lose any real emergence and end up with a decision tree that doesn't work well.  It's SO easy for learning AIs to learn the wrong thing; in fact, that's one of the chief issues discussed in most academic or professional writing on the subject.  That's at the core of why I prefer analysis and randomization: it more effectively and more cheaply (in terms of development time, runtime CPU costs, and runtime storage costs) simulates something similar to the result of a moderately well-trained learning AI.

Now, having some memory of past failures can add a bit of learning without much CPU/storage/development-time cost, and that's basically what I was thinking about above in the thread.  But even that has to be done with care, because if the AI remembers that it's failed four times to get the ion cannon and then ignores it forever after, that's a big problem.  Hence it has to have an expiration time on what it's learned, or the memory has to be just one factor in a larger weighted calculation about whether or not to attack that target, with a healthy dose of randomness added to the weighting.  So that really is not so simple either, but it's a lot more feasible for a staff of our size (and in general more to my taste in style of AI programming).
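A rough sketch of the kind of thing described there -- failure counts that expire, folded into a weighted decision with a healthy random factor -- might look like this (all numbers and names invented for illustration):

```python
import random

class FailureMemory:
    """Remembers recent failed attempts against a target, and forgets them."""
    def __init__(self, forget_after=3600.0):
        self.failures = {}            # (planet, target) -> list of timestamps
        self.forget_after = forget_after

    def record_failure(self, planet, target, now):
        self.failures.setdefault((planet, target), []).append(now)

    def recent_failures(self, planet, target, now):
        stamps = [t for t in self.failures.get((planet, target), [])
                  if now - t < self.forget_after]        # expired entries drop out
        self.failures[(planet, target)] = stamps
        return len(stamps)

def attack_weight(base_value, memory, planet, target, now, rng=random):
    """Failures lower the target's priority, but only as one term in a
    weighted roll with a healthy random component -- never a hard 'ignore'."""
    penalty = 0.2 * memory.recent_failures(planet, target, now)
    return max(0.1, base_value - penalty) * rng.uniform(0.5, 1.5)
```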

Anyway, I'm not discounting the power of learning AIs -- they are amazing -- but they are also fraught with error as they learn, to a degree that I don't think would be suitable here.  And in general I think that people who haven't messed with AI a fair bit underestimate the complexity of learning AIs.  There's a reason that a bunch of games don't have that sort of AI (which has been well-known for decades), and it isn't sloth! ;)

 
