Is this actually optimising code at all?


  • Posts: 213
The Always event is executed 100 times per second. That's a lot, and it gives good precision if you want to do something very quickly, but sometimes it may be heavy for the program and you don't need that precision at all. Is it better to use the [do every 0.1 sec] block instead of Always? Always executes 100 times per second, while [do every 0.1 sec] runs only 10 times per second. Does this actually optimise the code, or does running a timer (every 0.1 sec is essentially an infinitely repeating timer) cost just as much?


  • Posts: 2576
The problem with solving an optimization issue is that if you just 'barely' fix it, you then have to wonder if the fix is good enough for every user. If you have a problem in your game, and you find a way to make it run twice as fast, is that good enough? I'd be more likely to wager 'no'. There's likely to be a large enough fraction of potential players who would be left out that it would be worth my time to write better code.

One of the most effective ways to think about code speed is to consider its time complexity (using big-O notation). Big-O is most useful when designing algorithms, but lowering time complexity also yields real, measurable improvements in running code.
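As a concrete illustration (my example, not from the original post), here's a Python sketch of what lowering time complexity buys you: membership tests against a list scan the whole list (O(n) per lookup), while the same tests against a set use hashing (O(1) on average).

```python
import timeit

items = list(range(10_000))
needles = list(range(0, 10_000, 100))   # 100 values to look up

def find_in_list():
    # each "in items" scans the list: O(n) per lookup
    return sum(1 for n in needles if n in items)

item_set = set(items)

def find_in_set():
    # each "in item_set" is a hash lookup: O(1) on average
    return sum(1 for n in needles if n in item_set)

# both count the same 100 matches; only the speed differs
assert find_in_list() == find_in_set() == 100

# on a typical machine the set version is orders of magnitude faster
print(timeit.timeit(find_in_list, number=10))
print(timeit.timeit(find_in_set, number=10))
```

Nothing about the result changed; only the data structure did, and that alone can be worth several orders of magnitude.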

Time complexity is a difficult subject, but there's an easier shortcut that I've found works just as well. If you need to improve code efficiency, try to improve it by an order of magnitude. If you have a problem, and you can make your code 10x faster, then it's probably solved for enough cases that you can move on. I wouldn't say it's always solved, though. If you have a really bad problem, then you may need to shoot for two or even three orders of magnitude faster. But if you're at that threshold where things are working okay, but not quite good enough, then one order of magnitude should solve the problem for you and all of your potential players.

Going from 100 updates per second to 10 updates per second would be one order of magnitude faster.
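That throttling can be sketched outside any particular engine. Here's a hypothetical Python simulation of one second of game steps (my own illustration, assuming the 100-steps-per-second rate mentioned above); counting steps instead of accumulating 0.01 s increments avoids floating-point drift:

```python
STEPS_PER_SECOND = 100     # the Always event fires once per 0.01 s step
HEAVY_EVERY_N_STEPS = 10   # 10 steps * 0.01 s = 0.1 s, like [do every 0.1 sec]

def heavy_logic():
    """Stand-in for whatever the event block actually does."""
    pass

def run_one_second():
    calls = 0
    for step in range(STEPS_PER_SECOND):
        if step % HEAVY_EVERY_N_STEPS == 0:
            heavy_logic()   # fires on steps 0, 10, 20, ..., 90
            calls += 1
    return calls

print(run_one_second())  # -> 10 calls instead of 100
```

Same one-second window, one tenth of the calls -- exactly the order-of-magnitude reduction described above.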

The last question is--does it really matter? Optimization comes up a lot, often unnecessarily. Very large and very small numbers can be hard to visualize, so it's easy to imagine a problem that isn't there. Optimization often comes at the cost of simplicity, which is also important. In particular, simplicity matters for large-scale projects, where you're more likely to need to revisit code you haven't looked at in a few months. If you have to waste time re-learning what you did--or worse, redoing it because you have no clue--then that stretches out your development time. (Been there!)

A good analogy is to think about an organization that has one billion dollars per year to spend--like a state government. (One billion, by the way, is a rough approximation of the number of instructions a computer can run in 0.01 seconds.) Suppose you find out that a handful of employees have been taking a few extra post-it notes for personal use, and that the per-year expenditure on notes is $100 instead of $10. That's 10x as expensive! So you come up with some byzantine process to dole out office supplies that works, but has to be re-explained every time someone wants a post-it note. Maybe that extra $90 is best considered the price of having a system people can understand.

On the other hand, if you're comparing spending $100 versus $10 every second of the year, efficiency becomes more important than simplicity. With over 31 million seconds in a year, that's the difference between staying in budget and going over.
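The arithmetic is easy to check (the 365-day year here is my simplification):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000, ignoring leap years
BUDGET = 1_000_000_000                  # one billion dollars per year

cheap_total = 10 * SECONDS_PER_YEAR     # spending $10 every second
costly_total = 100 * SECONDS_PER_YEAR   # spending $100 every second

print(cheap_total)    # 315360000 -- comfortably inside the budget
print(costly_total)   # 3153600000 -- more than three times the budget
```

At that scale the 10x difference isn't a rounding error; it's the whole budget, three times over.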

If you don't have an efficiency problem, just go with whatever solution feels natural. That's the solution you're most likely to understand if you revisit your code months later. 100 executions per second is a lot, but only if the contents of the snippet being executed are heavy enough to make a dent in that one-billion-instruction budget.
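If you want to find out whether your snippet actually makes a dent, just time it. A minimal Python sketch, with a made-up stand-in function in place of a real event body:

```python
import time

def light_update():
    # stand-in for a cheap Always-event body: a handful of arithmetic ops
    x = 0
    for i in range(10):
        x += i * i
    return x

start = time.perf_counter()
for _ in range(100):   # one second's worth of Always executions
    light_update()
elapsed = time.perf_counter() - start

print(f"100 light updates took {elapsed:.6f} s")
```

If the total for a whole second's worth of calls is a tiny fraction of that second, there's no problem to solve, and the simpler Always version is the one to keep.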

« Last Edit: May 20, 2020, 08:45:39 am by merrak »


  • Posts: 213
Thanks for the explanation, you're right.