GPU computing is awesome, and it's great to see them opening up their library so that others can refine it and/or build on it.
I'm pretty sure you were joking about the GPU AI, but I'll give a few good reasons why it's a bad idea anyway.
1. Graphics card dependence: CUDA currently requires an NVIDIA GPU. (Though I'm sure someone will create a fully "software rendering"/CPU-only implementation of the CUDA API at some point, even though its performance would be in the toilet.) Using it would therefore create a dependency on the graphics card, locking out everyone not running an NVIDIA card (see the device-check sketch after this list).
2. GPU operations: IIRC, GPUs are great at floating point and matrix operations (though CPUs are getting better and better at floating point math). If you aren't using those heavily, you're better off skipping the overhead of shipping an operation to the GPU and reading the result back, and just doing it on the CPU (see the transfer-overhead sketch below). I don't have the source code, so I can't be sure, but I doubt the AI thread does many floating point or matrix operations, and I don't think many of its algorithms could be converted to a matrix representation and operations without tons of overhead. So it is almost certainly not worth it.
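
To illustrate point 1, here's a minimal sketch of what that hardware dependence looks like in practice: any CUDA-using program has to probe for an NVIDIA device at runtime and keep a CPU fallback around for everyone else. The CUDA runtime calls are real; the surrounding scaffolding is hypothetical.

```cuda
// Probe for a CUDA-capable (i.e. NVIDIA) device at runtime.
#include <cstdio>
#include <cuda_runtime.h>

bool cudaAvailable()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    // On a machine without an NVIDIA GPU or driver this returns an error
    // (e.g. cudaErrorNoDevice) instead of a device count.
    return err == cudaSuccess && count > 0;
}

int main()
{
    if (cudaAvailable())
        printf("NVIDIA GPU found: CUDA path usable\n");
    else
        printf("No CUDA device: must fall back to a CPU implementation\n");
    return 0;
}
```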
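And to illustrate the overhead in point 2, here's a minimal sketch (the kernel and workload size are made up for the example) of the round trip a GPU offload pays for: two cudaMemcpy calls across the bus bracketing a kernel launch. For a tiny amount of non-matrix work like this, the copies cost far more than the arithmetic, which is exactly why a plain CPU loop wins.

```cuda
// Shipping a trivial operation to the GPU: the two memcpys dominate.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] += 1.0f;  // trivial per-element work
}

int main()
{
    const int n = 16;         // tiny workload, on purpose
    float host[n] = {0};

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // overhead
    addOne<<<1, n>>>(dev, n);                                         // almost no work
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // overhead

    printf("%f\n", host[0]);  // for 16 floats, a CPU loop beats this easily
    cudaFree(dev);
    return 0;
}
```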
So really, the only place for GPU use in AI War is (appropriately enough) graphics rendering. Thankfully, Unity is smart enough to use a graphics card for graphics if there is one (I think).