What algorithm should I use to simulate a continual stream of N increments each second, not by writing a loop, but with timed-interval events firing no more than M times per second?
I am implementing an incremental game, and the rate of increase for some resource, Scrog, varies widely. The system I'm using has an event timer, and that's the mechanism I want to use to generate the resource over time.
Design constraints
The inputs to this algorithm are: the effective rate of Scrog increase (e.g. "14 per second", "578 per second"), and the maximum event frequency, which sets a lower bound on each timer's period (e.g. "no more than 10 events per second", "no more than 100 events per second").
The outputs of this algorithm are: a small set of (timer interval, Scrog quantity) tuples, often just one and typically no more than a handful, that taken together effectively produce the specified rate of Scrog increase per second.

I want to simulate anything from 1 increment every few seconds, all the way to trillions per second.

Incrementing the Scrog resource is to be done by constant integer amounts. I want to precalculate the amounts and not deal with fractions of the resource.

The events should be fired by repeatinginterval timers.

The generated timer intervals should be as large as possible, so the handler runs as infrequently as possible. Given a specified rate of increase, I don't want a polling function doing useless "is it time yet?" checks; the rate is known beforehand, so this algorithm needs to set up timers that avoid polling.

Each generated timer interval should be no shorter than the specified minimum bound (corresponding to "at most M events per second"), while the set as a whole must in aggregate simulate the steady rate "N per second".

No state can be kept in a loop; instead, the algorithm must precompute a collection of timers that will each fire an “increment Scrog by n” event, periodically. The period of each timer, and the integer amount of Scrog produced by each timer, are then constant.

The events should fire steadily, simulating a continual flow; but not indefinitely often, so that the event handler is not overloaded.

It's acceptable if the algorithm only approximates the specified rate, within the tolerance implied by the M-events-per-second limit.
So I am looking for a generic algorithm that will simulate a continual flow of Scrog at whatever rate (N per second) is specified, by setting up repeating events at a small number of fixed intervals, each firing no more often than M times per second.
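For concreteness, here is a rough sketch (Python; the function name and structure are my own guess at a solution, not an established algorithm) of the kind of decomposition I have in mind: one fast timer at the maximum allowed frequency carries the bulk of the rate, and a second, slower timer carries any leftover rate.

```python
def decompose(rate, max_events_per_sec):
    """Split a Scrog rate (per second) into (interval_seconds, amount) timer tuples."""
    if rate <= max_events_per_sec:
        # The rate fits under the cap: one timer firing `rate` times
        # per second, producing 1 Scrog each time.
        return [(1.0 / rate, 1)]
    # One timer at the maximum allowed frequency carries the bulk of the rate.
    per_tick = int(rate // max_events_per_sec)
    timers = [(1.0 / max_events_per_sec, per_tick)]
    # A second, slower timer carries the leftover rate, 1 Scrog per tick.
    remainder = rate - per_tick * max_events_per_sec
    if remainder > 0:
        timers.append((1.0 / remainder, 1))
    return timers
```

For example, with a cap of 10 events per second, `decompose(14, 10)` yields `[(0.1, 1), (0.25, 1)]`, matching the worked example below; but I don't know whether this two-timer approach is the right general answer.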
Example: up to 10 events per second
If I limit the actual event rate so that no timer fires more often than every 0.1 seconds (10 times per second), that would mean:
* When the rate is "1 Scrog per 5 seconds", the algorithm may produce the set { (5.0 seconds, 1 Scrog) }.
* When the rate is "1 Scrog per second", the algorithm may produce the set { (1.0 seconds, 1 Scrog) }.
* When the rate is "7 Scrog per second", the algorithm may produce the set { (0.143 seconds, 1 Scrog) }.
* When the rate is "10 Scrog per second", the algorithm may produce the set { (0.1 seconds, 1 Scrog) }.
* When the rate is "500 Scrog per second", the algorithm may produce the set { (0.1 seconds, 50 Scrog) }.
But I'm confused about how the algorithm should handle rates faster than 10 per second yet slower than hundreds per second.
* When the rate is "14 Scrog per second", the algorithm may produce the set { (0.1 seconds, 1 Scrog), (0.25 seconds, 1 Scrog) }, because events triggered at those intervals together result in 14 Scrog per second (10 per second plus 4 per second).
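Checking the arithmetic: the aggregate rate of a tuple set is the sum of amount/interval over its members, so that set works out as intended (the helper name here is hypothetical).

```python
def aggregate_rate(timers):
    """Total Scrog per second produced by a set of (interval_seconds, amount) timers."""
    return sum(amount / interval for interval, amount in timers)

# The proposed set for "14 Scrog per second":
print(aggregate_rate([(0.1, 1), (0.25, 1)]))  # 14.0
```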
A simple arithmetic problem?
Stripped of the context of events, timers, etc., this boils down to taking some numbers-with-units and producing other numbers-with-units.
So this is apparently a fairly simple (?) problem: design a general algorithm which, given these inputs and the above constraints, produces those outputs.
But my abstract arithmetic isn't powerful enough on its own. How should the algorithm be written so that it produces all of these results, given only the constraints and the current effective Scrog rate?