The list of properties was created to summarize important characteristics of all previously suggested formulas.
Your specific formula does have several of those properties.
By definition, your formula does change the size of a gain or loss in proportion to the current position. This is one of the features that motivated its suggestion in the first place.
It cannot return to its original starting point by taking the same-sized "step":
> r.fun(0,20,1) = 10
> r.fun(10,-20,1) = 0.0467884
As R approaches the limits, the step size decreases:
> r.fun(0,20,1) = 10
> r.fun(10,20,1) = 19.94654
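(For concreteness, a one-step update that reproduces those numbers looks something like the sketch below; I'm guessing at the r.fun signature, and the actual implementation may differ.)

```r
# Hedged reconstruction of the one-step update, consistent with the quoted values:
# the change shrinks as R approaches the +/-100 boundaries, and 'a' skews the
# step size depending on the sign of R and the direction of the push.
r.fun <- function(R, d, a) {
  if (d >= 0) {
    R + d * exp( a * R / 100) * (100 - R) / 200  # pushing R up
  } else {
    R + d * exp(-a * R / 100) * (100 + R) / 200  # pushing R down
  }
}

r.fun(0, 20, 1)    # 10
r.fun(10, 20, 1)   # 19.94654
r.fun(10, -20, 1)  # 0.0467884
```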
These are intentional properties and ostensibly the main reason why you'd use this system instead of a linear scale.
Likewise, a single big step and multiple little steps do not add up to the same total change: the value accelerates when moving towards 0 and decelerates when moving towards the boundaries, though this only shows up with multiple steps in the same direction.
This, however, is an artifact of approximating the 'sector' system with a single simultaneous formula. If you want to add 20, the right way to do it is to add 1 twenty times. It's basically approximating an integral with finite step sizes, and it's not too computationally expensive to make that adjustment and improve the approximation.
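A minimal sketch of that adjustment, using the one-step update reconstructed above (the sub-step count n is arbitrary):

```r
# One-step update from the earlier sketch.
r.fun <- function(R, d, a) {
  if (d >= 0) return(R + d * exp(a * R / 100) * (100 - R) / 200)
  R + d * exp(-a * R / 100) * (100 + R) / 200
}

# Apply a total change 'd' as n small sub-steps instead of one big step.
# Larger n gives a better approximation of the underlying integral.
r.fun.stepped <- function(R, d, a, n = 20) {
  for (i in seq_len(n)) R <- r.fun(R, d / n, a)
  R
}

r.fun(0, 20, 1)          # one step of +20:    10
r.fun.stepped(0, 20, 1)  # twenty steps of +1: slightly less than 10 (~9.98)
```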
The actual integral would be something like the solution of:
dR/dF = exp(a*R/100) * (100-R)/200 for increasing R
dR/dF = exp(-a*R/100) * (100+R)/200 for decreasing R
(F is the total forcing needed to achieve a given change of R)
The resulting integral doesn't have a closed-form solution afaik, but it can be approximated via 'stepping' in F.
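Something like this crude forward-stepping in F gives the flavour of it (increasing-R branch only; the step size dF and the function name are just for illustration):

```r
# Euler-style stepping in F for the increasing-R equation:
# accumulate the total forcing needed to move R from 'start' up to 'target'.
forcing.needed <- function(start, target, a, dF = 0.01) {
  R <- start
  f.total <- 0
  while (R < target) {
    R <- R + dF * exp(a * R / 100) * (100 - R) / 200
    f.total <- f.total + dF
  }
  f.total
}

forcing.needed(0, 10, 1)  # roughly 20, consistent with r.fun(0, 20, 1) = 10
forcing.needed(0, 99, 1)  # much larger - the last stretch near the cap is expensive
```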
It does have symmetrical step sizes, so a step moves the same amount as long as the starting point is the same (given a fixed R, d or -d result in the same |R'-R|).
It has a hard cap of 100.
Yes, the hard cap is an intentional aspect.
Assuming R = 0, a = 2, and delta = a series of random steps of -10 or +10, a simulation demonstrates that it takes ~175 steps for 99%+ of the simulations to reach the +90 or -90 marks and then hover around that range until the maximum step number is reached. Reducing delta to -1 or +1 results in 82% of the population reaching the stability points of +90/-90 after 1000 trials. Random steps therefore push the function towards the boundaries, not towards 0.
Yes, because you're using a non-zero 'a'. That's what that parameter does in that variation of the equations. In the physical system, having 'a' sufficiently large causes a transition from the unmagnetized state (average <R> = 0) to a magnetized state (average <R> is either +X or -X depending on which direction it randomly went first, where X scales with a).
If you go to a=0, it should push towards zero because of entropy effects. There's some critical 'a' where it switches over, where the system is maximally sensitive to perturbations (the 'critical point'). I think a_critical=1 in this case, incidentally.
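For anyone who wants to poke at that, here's a rough version of that kind of simulation, again using the reconstructed one-step update (thresholds and step counts mirror the description above; exact percentages will vary from run to run):

```r
# One-step update from the earlier sketch.
r.fun <- function(R, d, a) {
  if (d >= 0) return(R + d * exp(a * R / 100) * (100 - R) / 200)
  R + d * exp(-a * R / 100) * (100 + R) / 200
}

# Fraction of random walks (steps of +/-step) that reach |R| >= 90
# within max.steps, starting from R = 0.
frac.pinned <- function(a, step = 10, trials = 1000, max.steps = 175) {
  hits <- 0
  for (t in seq_len(trials)) {
    R <- 0
    for (s in seq_len(max.steps)) {
      R <- r.fun(R, sample(c(-step, step), 1), a)
      if (abs(R) >= 90) { hits <- hits + 1; break }
    }
  }
  hits / trials
}

frac.pinned(a = 2)  # close to 1: the walk pins near the boundaries
frac.pinned(a = 0)  # near 0: without 'a' the walk stays close to the middle
```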
Edit: Anyhow, as to the 'why' for these properties:
- A hard cap means that you have concrete endpoints to understand the scale. With a soft cap or no cap, you have the problem that if you think -100 is really bad, you also have to conceive of -1000, -10000, etc., which makes -100 look not so bad, and so on. That makes it hard to develop an intuition about the numbers. With a hard cap you know -100 is the worst possible and +100 is the best possible.
- With a hard cap, the problem is that if it's too easy to push things, either intentionally or via random events, then you get to the cap really quickly, which can make it feel like there isn't true variation. So having the system become less responsive to pushes as you approach the hard cap makes it so that you don't just drop and lock into -100 in the first few days of a plague - things can always get worse, but they get worse in a diminishing-returns fashion. Diminishing returns also encourages diversity of approach.
- The existing system has the problem that it can get stuck out in the -10000 boonies and take forever to 'fix' back to zero, even if the player puts a lot of effort in. This is a consequence of an open, linear scale; even if you transform that scale with a sigmoid, you still have the issue. Essentially this comes down to a conflict between 'extreme events should make this aspect of the planet suck' and 'linear adjustments per time that make the planet suck in the short term will make it unrecoverable in the long term'. So by making it easier to return towards zero (or towards some stable point) you resolve this issue.
- One feature of the a < a_critical model is that the resting point can be driven by persistent bias in the random walk. That is to say, if buildings are always adding a little bit or subtracting a little bit, it will pin the RCI values at some non-zero equilibrium average (see the sketch after this list). Whereas in the open system, if the net change per time is positive then it will wander off towards +infinity, and if the net change per time is negative then it will wander off towards -infinity.
- An added bonus: the 'sectors' imagery has a direct connection to the implied underlying structure of planets having regions/etc. That lets you do things like associate buildings with particular sectors and create higher-level organization and ties between the RCI concept and the other aspects of the simulation, in a way that's physically concrete (though with a fair amount of dev work).
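To illustrate the persistent-bias point, a rough sketch using the reconstructed one-step update (the bias and noise sizes are arbitrary, and a = 0.5 is just a sub-critical choice given the a_critical ≈ 1 guess above):

```r
# One-step update from the earlier sketch.
r.fun <- function(R, d, a) {
  if (d >= 0) return(R + d * exp(a * R / 100) * (100 - R) / 200)
  R + d * exp(-a * R / 100) * (100 + R) / 200
}

# Long-run average of R when every step carries a small constant bias on top
# of symmetric noise, with 'a' below the critical value.
biased.average <- function(bias, a = 0.5, noise = 5, n.steps = 5000, burn = 1000) {
  R <- 0
  total <- 0
  for (s in seq_len(n.steps)) {
    R <- r.fun(R, bias + sample(c(-noise, noise), 1), a)
    if (s > burn) total <- total + R
  }
  total / (n.steps - burn)
}

biased.average(bias = 1)   # settles at a positive equilibrium well short of +100
biased.average(bias = -1)  # mirror image: a negative equilibrium
biased.average(bias = 0)   # hovers around 0
```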