Getting the optimal SI unit
That's probably simpler than all this: presumably there's a relatively small set of possible SI scales you're going for, probably 1/K/M/G for a lot of things. Are you often wanting to display more than 3 digits "under" the decimal place, on an in-units-of-one number?
Anyway, if all you're trying to do in a particular case is testing for 1/K/M/G, then just: if ( amount >= 1000000000 ) then G, else if ( amount >= 1000000 ) then M, else if ( amount >= 1000 ) then K, rounding the value to as many digits as there is space for in the UI.
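That threshold chain is simple enough to sketch directly. Here's a minimal Python version (the function name and the `max_digits` parameter are illustrative, not from the post):

```python
def format_si(amount, max_digits=4):
    """Pick a 1/K/M/G suffix by simple thresholds, then round to fit.

    A sketch of the threshold-chain approach described above;
    max_digits stands in for "how much space is left in the UI".
    """
    if amount >= 1_000_000_000:
        value, suffix = amount / 1_000_000_000, "G"
    elif amount >= 1_000_000:
        value, suffix = amount / 1_000_000, "M"
    elif amount >= 1_000:
        value, suffix = amount / 1_000, "K"
    else:
        value, suffix = amount, ""
    # Round to however many digits remain after the integer part.
    int_digits = len(str(int(value)))
    decimals = max(0, max_digits - int_digits)
    return f"{round(value, decimals):g}{suffix}"
```

So `format_si(1_500_000)` gives `"1.5M"` and `format_si(999)` stays `"999"` with no suffix.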
From a usability standpoint, do you want it to display more than a certain number of digits?
One of the things that really threw people about the numbers in TLF was that many of them were only significant in units of 1 (or higher), yet the display would show two or more places past the decimal due to the generic handling of the prediction/result display logic (there is a ridiculous amount of stuff to predict and resolve in that game, so a generic framework was necessary).
One of the most popular changes I've ever made to AI War was dividing all the HP and attack (and a few other) figures by 1000.
Overall my point is that the question of "how many figures do I show?" is not "how many will fit?" but "how many does the player want?". And once you know that, it's probably more a matter of designing your GUI to provide the necessary space at the minimum supported resolution and just going with that (plus whatever left/right/center alignment is necessary to make it look right at smaller values) rather than measuring it every frame.
So please do tell Thog about ways to rig it up with rocks and tree trunks, he's not good at math.
If problem hard, make it a different problem until rocks and tree trunks work.
Gah...why don't Math.Log and so on do this already? It seems like a straight-up improvement to have that as an option in there!
The general answer to that sort of question with generic libraries is this: because they're generic libraries, they cannot make very many assumptions about what's actually expected of them. That limits the crazy unshielded-12-foot-diameter-buzzsaw optimizations they can pull without breaking spec on some subset of allowed inputs.
So if you call Math.Pow(double,double), expect it to treat the inputs as doubles and not to try int-specific optimizations on it. Even with ints it has to consider overflow/underflow problems with bitshifting operations. I'm not sure if those are problems that could actually be encountered in the case of Draco18s's approach above, but there are probably at least some edge cases that could bite it. A few erroneous edge cases like that in your own code aren't a big deal. A few erroneous edge cases like that in a core math function in .NET or Mono make people flip the fleep out.
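As an illustration of the kind of edge case that bites double-based math in integer-oriented code, here's a Python analogue (the same hazard exists with doubles in .NET): computing the SI exponent via a generic change-of-base log can land just under an exact power of ten, and truncating then picks the wrong bucket.

```python
import math

# log(1000)/log(10) comes out fractionally below 3.0 in IEEE-754
# double arithmetic, so truncation drops it to the wrong exponent.
naive_exponent = int(math.log(1000, 10))  # truncates to 2, not 3

# The dedicated log10 happens to get exact powers right here,
# though threshold comparisons (as above) avoid the issue entirely.
safe_exponent = int(math.log10(1000))

print(naive_exponent, safe_exponent)
```

This is exactly the sort of input a generic library has to get right for every caller, which is why it can't cut corners the way your own special-cased code can.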
Yet another reason to write stuff yourself if you really care about how well it does its job.
Though that has its own host of potential problems, of course.