Class Best Practices

pgimeno
Party member
Posts: 3544
Joined: Sun Oct 18, 2015 2:58 pm

Re: Class Best Practices

Post by pgimeno »

Jaston wrote: Tue Jan 29, 2019 5:17 am When you are saying clearing, are you referring to looping through the list and setting each element to nil?
Setting, yes.
Jaston wrote: Tue Jan 29, 2019 5:17 am Also would the speed still be the same if it is just an array with table references and no explicit key? (i.e. {#eA23dc, #dsf234, etc..})
If you mean {table1, table2, ...} then the key, albeit implicit, is present, numeric, increasing, hole-free and starting with 1. If that's the case, then yes, it still applies. But then the contained tables themselves are also potentially eligible for reuse, as they are subject to the same potential problem of allocation/reallocation.

If you mean {[table1]=value, [table2]=value, ...} I haven't tested that, but the pairs()/next() needed to iterate over the elements are compilation blockers so it can potentially be much slower.
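A minimal Lua sketch of the reuse idea discussed above: clear the list by setting its slots to nil and keep the same table around, instead of allocating a fresh one each frame. The name `clearList` is illustrative, not from the thread.

```lua
-- Reuse one list table across frames rather than creating a new one.
local pool = {}

local function clearList(t)
    -- Clearing from the end keeps #t well-defined at every step; the
    -- table (and its allocated storage) stays alive for reuse.
    for i = #t, 1, -1 do
        t[i] = nil
    end
end

-- fill, use, then clear and refill next frame
pool[1], pool[2], pool[3] = "a", "b", "c"
clearList(pool)
```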
Jaston wrote: Tue Jan 29, 2019 5:17 am Those links had me lost. They use functions way beyond what I understand from Lua. All I can grasp is that this gets at which functions the JIT compiles quickly vs. interprets, and that one could leverage these very specific techniques to ensure the unoptimized functions are avoided.
The gist of it is that using certain functions slows things down significantly, notably pairs, next, select with a non-constant first argument, unpack, and others, as well as certain operations, notably creating functions and concatenating strings (the last one is no longer true on LuaJIT 2.1).
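A hedged sketch of two patterns that tend to be friendlier to LuaJIT's compiler than the constructs listed above; the variable names are illustrative only.

```lua
-- 1) Iterate the array part with a numeric for instead of pairs()/next(),
-- which are compilation blockers on LuaJIT 2.0.
local items = { 10, 20, 30 }
local sum = 0
for i = 1, #items do
    sum = sum + items[i]
end

-- 2) Collect pieces in a table and join once with table.concat, instead
-- of s = s .. piece inside the loop (mainly relevant on LuaJIT 2.0;
-- 2.1 compiles string concatenation).
local pieces = {}
for i = 1, 3 do
    pieces[i] = tostring(i)
end
local joined = table.concat(pieces, ",")  -- "1,2,3"
```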
Jaston
Prole
Posts: 18
Joined: Sun Nov 25, 2018 5:43 pm

Re: Class Best Practices

Post by Jaston »

@pgimeno - I use next in my sort algorithm for my objects :(. I have much to revise. I will go back to the drawing board and let you know what I find out.

Thank you everyone for your help thus far. I will begin the experiments.
pgimeno
Party member
Posts: 3544
Joined: Sun Oct 18, 2015 2:58 pm

Re: Class Best Practices

Post by pgimeno »

Well, before going down that rabbit hole, try to profile. You may not gain as much as you expect.
Jaston
Prole
Posts: 18
Joined: Sun Nov 25, 2018 5:43 pm

Re: Class Best Practices

Post by Jaston »

How do I make the profiler look at a specific function only, to figure out how much time it takes? So far, profilers like piefiller don't work well in real time for my game. Should I just run it on a smaller subset to test the problem areas?

That is, start wide, then narrow it down? Also, how can one detect whether they have done anything to cause memory leaks?
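One simple way to time a single function, shown here as an illustrative sketch (not from the thread): wrap the call with clock reads. This uses plain Lua's os.clock; inside LÖVE you could use love.timer.getTime() the same way.

```lua
-- Measure how long one specific function call takes.
local function timeIt(fn, ...)
    local t0 = os.clock()
    local result = fn(...)
    local elapsed = os.clock() - t0
    return elapsed, result
end

-- Hypothetical workload standing in for the function under suspicion.
local function work()
    local s = 0
    for i = 1, 1e6 do s = s + i end
    return s
end

local elapsed, total = timeIt(work)
print(string.format("work took %.4f s", elapsed))
```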

Thanks
Jaston
Prole
Posts: 18
Joined: Sun Nov 25, 2018 5:43 pm

Re: Class Best Practices

Post by Jaston »

grump wrote: Sun Jan 27, 2019 11:59 am Maybe try binary insertion sort? Keeps your array sorted at all times, without having to iterate over all elements. Implementing the __lt metamethod for depth-sorted objects may result in a small perf boost too.
@grump - I did this and my sort time went down by 30x!!! :crazy: This is just normal insertion sort :)! That, combined with spreading my offscreen zombie updates evenly between frames rather than every 1/5 seconds, resulted in an FPS increase of up to 75 FPS on top of what I had!!! Thank you so much grump!!!

Now I just have to get rid of the random stutter in my game and I can have 4500 to 6500 zombies depending on the fps I want. :)
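For reference, a minimal sketch of the binary insertion grump suggested: binary-search the already-sorted array for the insertion point, then insert once. The `depth` field is an assumed sort key for depth-sorted objects, not taken from Jaston's code.

```lua
-- Insert obj into sorted array t, keeping t sorted by obj.depth.
local function binaryInsert(t, obj)
    local lo, hi = 1, #t + 1
    while lo < hi do
        local mid = math.floor((lo + hi) / 2)
        if obj.depth < t[mid].depth then
            hi = mid
        else
            lo = mid + 1
        end
    end
    table.insert(t, lo, obj)
end

local sorted = {}
binaryInsert(sorted, { depth = 3 })
binaryInsert(sorted, { depth = 1 })
binaryInsert(sorted, { depth = 2 })
```

Compared with re-sorting the whole list every frame, this keeps the array sorted at all times at the cost of one table.insert shift per new element.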
grump
Party member
Posts: 947
Joined: Sat Jul 22, 2017 7:43 pm

Re: Class Best Practices

Post by grump »

Jaston wrote: Tue Feb 12, 2019 5:03 am @grump - I did this and my sort time went down by 30x!!! :crazy: This is just normal insertion sort :)! That combined with spreading my offscreen zombie updates evenly between frames rather than every 1/5 seconds resulted in an FPS increase of up to 75 FPS on top of what I had!!!
Nice!
Now I just have to get rid of the random stutter in my game and I can have 4500 to 6500 zombies depending on the fps I want. :)
If the GC is responsible for the stuttering, the already mentioned reuse of tables may get rid of it. To find out if it's the GC, periodically print the result of collectgarbage('count') (like once a second or so). If the value is repeatedly growing and shrinking, you will probably gain something from reusing tables. If the shrinking coincides with the stuttering, you have definitely found the culprit.
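A sketch of that sampling idea: accumulate dt and print collectgarbage('count') roughly once per second. The helper name `reportHeap` is illustrative; in a game you would call it from love.update(dt).

```lua
-- Sample the Lua heap size about once per second and watch for a
-- grow-then-shrink (sawtooth) pattern, which points at the GC.
local acc = 0

local function reportHeap(dt)
    acc = acc + dt
    if acc >= 1 then
        acc = acc - 1
        -- collectgarbage('count') returns the Lua heap size in KB.
        print(("Lua heap: %.1f KB"):format(collectgarbage("count")))
        return true   -- sampled on this call
    end
    return false
end

-- in love.update(dt): reportHeap(dt)
```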
Jaston
Prole
Posts: 18
Joined: Sun Nov 25, 2018 5:43 pm

Re: Class Best Practices

Post by Jaston »

grump wrote: Tue Feb 12, 2019 8:13 am If the GC is responsible for the stuttering, the already mentioned reuse of tables may get rid of it. To find out if it's the GC, periodically print the result of collectgarbage('count') (like once a second or so). If the value is repeatedly growing and shrinking, you will probably gain something from reusing tables. If the shrinking coincides with the stuttering, you have definitely found the culprit.
I turned off the garbage collector and it wasn't the guilty party; my recycling of lists and removal of quadtrees had already prevented that. The spikes I found were caused by my tilemap collision checks spiking at random times. I figured that out by commenting out that line.

However, a bigger revelation is that LÖVE itself is stuttering randomly; it even stutters on the no-game screen. I read in another post that this is because the main thread and the GPU work share the same thread, so Windows can interrupt that thread at random, causing a stutter in your graphics even though the FPS is high (250+) and there are no frame-processing spikes.

Has anyone found a workaround for this in LÖVE 11.2? (I'm on Windows 10.)