Quick'un: How much of a performance-hog are things like Mathf.Log10 / System.Math.Log10?
I didn't have a number offhand (before running the experiment below, at least), but a few thoughts:
1) Off the cuff, raising x to the power y where y is not an integer is simply bound to be painful CPU-wise. And taking the log of an arbitrary number likely involves something similar: i.e., no simple reduction to multiplication or division.
2) If all you want is the number of digits (of an integer, or of the integer part of a real number, right? Not talking places after the decimal?), realize that Log10 is doing a lot more work than telling you that (note that its result is a floating-point value). More work generally means more time. So it's a matter of asking "how do I answer only the question I'm actually asking?" In that case, the for-loop-dividing-by-10 that Draco18s suggested is pretty much your go-to method. I wouldn't suggest the ToString() approach, as that involves a heap allocation and, again, does a lot more work than is actually necessary.
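For concreteness, the divide-by-10 approach amounts to something like this (a minimal sketch; DigitCount is just a name I'm using here, and it's plain C# rather than Unity code):

```csharp
using System;

class DigitCountDemo
{
    // Counts the digits of the integer part of n.
    // Works on the absolute value, so -1234 reports 4 digits.
    static int DigitCount( int n )
    {
        if ( n == 0 )
            return 1;
        int digits = 0;
        // widen to long so Math.Abs doesn't overflow on int.MinValue
        long remaining = Math.Abs( (long)n );
        while ( remaining > 0 )
        {
            remaining /= 10;
            digits++;
        }
        return digits;
    }

    static void Main()
    {
        Console.WriteLine( DigitCount( 0 ) );      // 1
        Console.WriteLine( DigitCount( 9 ) );      // 1
        Console.WriteLine( DigitCount( 1234 ) );   // 4
        Console.WriteLine( DigitCount( -56789 ) ); // 5
    }
}
```

No allocation, integer math only: it answers exactly the question being asked and nothing more.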
3) On the other hand, depending on how often you're doing this, it may not matter _at all_ how inefficient you make this particular computation. Is it just for, say, 20 numbers on the GUI, once per frame? Probably not going to notice the difference, and it's not going to get worse in more-intense gamestates (a key point). But if it's multiple times per ship per frame and there could be thousands of ships in the more-intense gamestates, then some attention may be worthwhile.
4) If you really want to know the answer to "how fast is method XYZ" one of the best ways is simply to measure it yourself, by putting something like this in your program's startup code:
System.Diagnostics.Stopwatch stopwatch = new System.Diagnostics.Stopwatch();
int iterations = 100000000;
stopwatch.Start();
for ( int i = 0; i < iterations; i++ )
    Mathf.Log10( i );
stopwatch.Stop();
Debug.Log( "stopwatch reported " + stopwatch.ElapsedMilliseconds + "ms for " + iterations + " iterations, average of " +
    ( (float)stopwatch.ElapsedMilliseconds / (float)iterations ) + "ms per iteration" );
In my case, the result was:
stopwatch reported 4932ms for 100000000 iterations, average of 4.932E-05ms per iteration
Of course, if your compiler is feeling cheeky it may simply optimize out those Log10 calls since the result is not used. So I tried an alternate version:
System.Diagnostics.Stopwatch stopwatch = new System.Diagnostics.Stopwatch();
int iterations = 100000000;
double dummyResult = 0;
stopwatch.Start();
for ( int i = 0; i < iterations; i++ )
{
    float thisResult = Mathf.Log10( i );
    dummyResult += thisResult;
}
stopwatch.Stop();
Debug.Log( "stopwatch reported " + stopwatch.ElapsedMilliseconds + "ms for " + iterations + " iterations, average of " +
    ( (float)stopwatch.ElapsedMilliseconds / (float)iterations ) + "ms per iteration (dummyResult=" + dummyResult + ")" );
That should force the computation of Log10, since the compiler can't say "well, he doesn't really need to see that value in the console log".
And that gave:
stopwatch reported 5426ms for 100000000 iterations, average of 5.426E-05ms per iteration (dummyResult=-Infinity)
So the first result looks pretty accurate for the cost of the actual Log10 call (the cost only went up from 4.9 somethings to 5.4 somethings when I added in that += nonsense). Incidentally, the dummyResult of -Infinity comes from Log10(0) on the first iteration, which then swamps the whole sum. Unless the compiler was feeling really cheeky and put in something like "if it's infinity, then we don't need thisResult for this iteration, which means we don't need the Log10 call from here on out", but I think that's kind of unlikely.
So that's 0.00004932ms, or 0.04932 microseconds, or 49.32 nanoseconds per call. 50ns is a fairly good chunk for a single mathematical operation. Even Sqrt, which is semi-mythical in how long it can take, only came in at about 20ns per call in a similar test on my machine. That said, it's all a question of how many calls we're talking about.
I also tried a version that made sure the parameter to Log10 was a non-integer (by replacing it with "(float)i + ( (float)i * 0.001f )") but the result was very similar. Just checking to see if maybe Log10 was using a more efficient branch for integer cases.
Amusingly, however, I found that the for-loop-dividing-by-10 approach took about 90ns per iteration on float inputs and 56ns on int inputs (fdiv being much worse than integer div at the ALU level). So Log10 actually doesn't look bad here.
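For the curious, the two loop variants I timed were along these lines (a reconstruction, not the exact code; plain C# with Console.WriteLine standing in for Debug.Log so it runs outside Unity, and with fewer iterations than above to keep the run short):

```csharp
using System;
using System.Diagnostics;

class DivideLoopTiming
{
    static void Main()
    {
        int iterations = 10000000;

        // int variant: count digits with integer division by 10
        long dummy = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for ( int i = 0; i < iterations; i++ )
        {
            int n = i;
            int digits = 1;
            while ( n >= 10 )
            {
                n /= 10;
                digits++;
            }
            dummy += digits; // keep the result live so the loop isn't optimized away
        }
        sw.Stop();
        Console.WriteLine( "int version: " + sw.ElapsedMilliseconds + "ms (dummy=" + dummy + ")" );

        // float variant: same loop shape, but with floating-point division
        dummy = 0;
        sw = Stopwatch.StartNew();
        for ( int i = 0; i < iterations; i++ )
        {
            float n = i;
            int digits = 1;
            while ( n >= 10f )
            {
                n /= 10f;
                digits++;
            }
            dummy += digits;
        }
        sw.Stop();
        Console.WriteLine( "float version: " + sw.ElapsedMilliseconds + "ms (dummy=" + dummy + ")" );
    }
}
```

Absolute numbers will vary by machine and runtime, of course; the point is the relative gap between the int and float versions of the same loop.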
So to sum up:
- Ask yourself: "does it matter?"
- If it matters, then don't guess. Measure.