Simple FFT demo

Show off your games, demos and other (playable) creations.
User avatar
HDPLocust
Citizen
Posts: 64
Joined: Thu Feb 19, 2015 10:56 pm
Location: Swamp
Contact:

Simple FFT demo

Post by HDPLocust » Fri Feb 14, 2020 12:13 pm

This is not very optimised (an FFI-based FFT would be preferable), but it looks cool and covers the basics of realtime sound analysis.
(this post is a promotion of sound analysis in Lua+LÖVE)
Image

UPD: FFT v0.2 looks OK and is (probably) normalized, but low-frequency sound is damped (too normalized : ) ) and high-frequency sounds are not displayed.
Attachments
fft.love
v0.2
(3.25 MiB) Downloaded 34 times
fft.love
v0.1
(3.71 MiB) Downloaded 25 times
Last edited by HDPLocust on Fri Feb 14, 2020 10:00 pm, edited 7 times in total.
Science and violence

User avatar
zorg
Party member
Posts: 2776
Joined: Thu Dec 13, 2012 2:55 pm
Location: Absurdistan, Hungary
Contact:

Re: Simple FFT demo

Post by zorg » Fri Feb 14, 2020 2:31 pm

I can get behind this, especially since i'm also working with such things. :3 There are two issues with this implementation, though:

- The FFT result seems incorrect (either that, or something related), since it is repeated 8 times at the bottom... at most, it should mirror around the Nyquist limit in the middle (22050 Hz for a 44100 Hz sampling rate), which you could deal with by only showing the first n/2 returned values.
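The mirroring can be checked with a tiny, language-agnostic sketch (plain Python here, purely illustrative and unrelated to the demo's code): the spectrum of a real-valued signal is conjugate-symmetric, so the bins above n/2 carry no new information.

```python
import cmath

def dft(x):
    # Naive DFT; fine for a demo this small (use a real FFT library otherwise).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A real-valued 8-samplepoint test signal.
signal = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
spectrum = dft(signal)

# For real input, bin k and bin N-k are complex conjugates, so the magnitudes
# above N/2 just mirror the lower half: display only the first N/2 bins.
for k in range(1, 4):
    assert abs(abs(spectrum[k]) - abs(spectrum[8 - k])) < 1e-9
```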

You also don't seem to window the data, which should be done to get better results.
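For reference, windowing is just a per-frame multiply. A minimal sketch in plain Python (illustrative only, not tied to the demo's code), using the Hann window:

```python
import math

def hann(N):
    # Hann window: tapers both ends of an analysis frame towards zero,
    # which reduces spectral leakage in the FFT output.
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

# Multiply one buffer of samplepoints by the window before the FFT.
frame = [1.0] * 8                 # stand-in for one decoded buffer
w = hann(len(frame))
windowed = [s * c for s, c in zip(frame, w)]

# Both ends are forced to (near) zero, so consecutive frames join smoothly.
assert abs(windowed[0]) < 1e-12 and abs(windowed[-1]) < 1e-12
```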



- Depending on what one wants, your current implementation isn't the best.
Using a Decoder means you can't really set the buffer size for either the oscilloscope or the FFT input.
Using Source:tell to keep track of playback will never be samplepoint-exact, which might be something one wants.
Loading the song into a SoundData would mean you can keep track of exactly where the "playback pointer/cursor" is: copy that data into a smaller buffer SoundData object of your chosen size, then pass that into a QueueableSource. In turn, this could also serve as input for any visualizer, whether it's time-domain like the oscilloscope, or frequency-domain like the spectrum analyzer you have.

Also, just a bit of a weirdness: why exactly are you filtering for mp3 and ogg files only? Löve can handle wav, flac and tons of module formats as well; if you want error handling, just return false if löve fails to detect the dropped file as an audio format.
Me and my stuff :3True Neutral Aspirant. Why, yes, i do indeed enjoy sarcastically correcting others when they make the most blatant of spelling mistakes. No bullying or trolling the innocent tho.

User avatar
HDPLocust
Citizen
Posts: 64
Joined: Thu Feb 19, 2015 10:56 pm
Location: Swamp
Contact:

Re: Simple FFT demo

Post by HDPLocust » Fri Feb 14, 2020 5:01 pm

The FFT result seems incorrect
Thanks for the explanation. Currently, I have some problems with functional analysis, and I would love to read additional materials.
In addition, this post is still promoting sound analysis stuff, so if someone is interested and wants to start doing better things, my work is partially done : )
Using a Decoder means you really can't set the buffer size for neither the oscilloscope, nor the input size for the FFT.
I can set the buffer size with

Code: Select all

Decoder('filename', buffersize)
but it should be preprocessed (by extracting specific samples from it).
..and then pass that into a QueueableSource
Windowing and per-samplepoint transformation are for sound equalizers, not for visual stuff or realtime beat detection etc. :<
Because we work with frame rendering, a non-hard-realtime Lua VM and probable lags (unstable fps etc.), we need to track things by exact time, not byte offsets. Also, a QueueableSource has big problems reporting the "current byte offset" position, so we have to use independent byte/time counters. Otherwise we end up too dependent on the frame rate, and also have to feed the source only the next frame's data with a single SoundData slot, so a microlag will pause our composition (moving the window will stop everything too). But you can see that parallel decoding with a time dependency works fine in my demo: the waveform represents what is audible.

I tried using this, and the best I could come up with was something like the following. And if you grab the window and hold it until the current SoundData in the queue has played, the system breaks:

Code: Select all

function a:update()
	if self.state == 'play' then
		if self.source:isPlaying() then
			-- A QueueableSource's tell only reports the position within
			-- the currently playing SoundData
			local t  = self.source:tell()
			local dt = t - self.oldtell

			-- If the source has moved on to the "next" SoundData
			if t < self.oldtell then
				-- Remove the already-played chunk from our storage
				local chunk = table.remove(self.chunks, 1)
				-- ...and compute the real delta time
				dt = chunk:getDuration() - self.oldtell + t
			end
			self.oldtell = t
		end

		-- Fill the queue with fresh new chunks
		while self.source:getFreeBufferCount() > 0 do
			local data = self.decoder:decode()
			table.insert(self.chunks, data)
			self.source:queue(data)
			self.source:play()
		end
	end
end
And I would be glad to learn of a better solution.

User avatar
zorg
Party member
Posts: 2776
Joined: Thu Dec 13, 2012 2:55 pm
Location: Absurdistan, Hungary
Contact:

Re: Simple FFT demo

Post by zorg » Sat Feb 15, 2020 10:31 pm

(Sorry for the gigantic wall of text beforehand :D)
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
Thanks for the explanation. Currently, I have some problems with functional analysis, and I would love to read additional materials.
There are quite a few different fast discrete Fourier (or related) transform implementations, but those should be more-or-less equivalent in terms of getting near-identical output from the same input. (granted, they still make some assumptions, hence windowing -is- a must for accuracy)
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
In addition, this post is still promoting sound analysis stuff, so if someone is interested and wants to start doing better things, my work is partially done : )
Yeah, i have been working on such things for a few years now; i have been using this library, but it is not the best, so i have been trying to optimize it: https://github.com/h4rm/luafft (currently i can do a frequency resolution of 16k bins for a mono channel without slowdown)
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
I can set buffer size...
...but it should be preprocessed (by extracting specific samples from it).
Whoops, my mistake, that one. :v Then that point of mine can be disregarded.
Still, i do not really agree with the second part, since calling :decode() already "extracts the specific samples" from the file itself...
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
Windowing and per-samplepoint transformation are for sound equalizers, not for visual stuff or realtime beat detection etc. :<
It is for visual stuff; if you don't window/envelope, a sonograph that would use FFT data will have vertical discontinuities because the input wasn't tapered off to zero on both ends.
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
Because we work with frame rendering, a non-hard-realtime Lua VM and probable lags (unstable fps etc.), we need to track things by exact time, not byte offsets.
Yes, all commercial operating systems as far as i know are non-realtime (win, osx, linux); as for the rest, the exact reason for buffering based on samplepoint offsets (not necessarily byte offsets) is that we can know exactly how much data was processed at any one time... granted, if the code has something like "if there are empty buffers in this QSource, then process and queue up the small SoundData buffer(s) we're using". I use this method in the tracker music replayer i wrote; it absolutely works.
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
Also, a QueueableSource has big problems reporting the "current byte offset" position, so we have to use independent byte/time counters.
If you were the one making that bitbucket issue, i already answered there :P but to reiterate, you should not get the offset from the QSource; store it in a variable after you process a SoundData buffer.
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
Otherwise we end up too dependent on the frame rate, and also have to feed the source only the next frame's data with a single SoundData slot, so a microlag will pause our composition.
The solution to that of course is to queue up at least as much data as needed for one "frame" to happen. If you don't want to sacrifice vsync on the main thread, you could use another thread that doesn't have that issue. (In fact, my advanced source library works like that...) There's no micro-lag pausing my processing in my tracker replayer either so idk...
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
(window moving will stop everything too)
That is a "windows is shit" thing though, and we can not do anything about it, as far as i know...
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
But you can see that parallel decoding with a time dependency works fine in my demo: the waveform represents what is audible.
Yes, note that i did not say your method is wrong, i only said that there are other ways that might work better depending on what one wants.
HDPLocust wrote:
Fri Feb 14, 2020 5:01 pm
I tried using this, and the best I could come up with was something like the following. And if you grab the window and hold it until the current SoundData in the queue has played, the system breaks:
<snip>
And I will be glad to know better solution.
I'm at work at the moment, so sadly i can not give a better solution, but i will try a few things when i get back home.

Also, i am working on a visualization library myself, so i will probably share progress on that soon.

User avatar
raidho36
Party member
Posts: 2050
Joined: Mon Jun 17, 2013 12:00 pm

Re: Simple FFT demo

Post by raidho36 » Sun Feb 16, 2020 12:04 am

There might be a logical error somewhere in your code. You can avoid it by not having logic at all: complete your actions immediately instead of deferring them to a later moment detected through some condition.

User avatar
HDPLocust
Citizen
Posts: 64
Joined: Thu Feb 19, 2015 10:56 pm
Location: Swamp
Contact:

Re: Simple FFT demo

Post by HDPLocust » Sun Feb 16, 2020 4:26 am

since one calling :decode() already "extracts the specific samples" from the file itself
"Specific" means "we can skip samplepoints".
Our music has 44100 samplepoints per second, so the max displayed freq is the Nyquist limit, ~22050 Hz.
The window size defines the minimal frequency that can be caught. But if we skip some samplepoints, we can display low-frequency data with a smaller window size.
For example, if we need to display freqs in the range 16 Hz – 16 kHz, we can use something like this:
16 Hz means a period of 1/16 sec, and 44100 / 16 ≈ 2756 gives the window size.
16 kHz means we can skip every second samplepoint of the 44.1k and get the same result (the window size also shrinks by two).
So we can do something like this:

Code: Select all

-- decoder buffer size is 2756
-- our sound is mono
local data_for_fft = {}
local data = decoder:decode()
for i = 0, 2756 - 1, 2 do -- skip every second samplepoint
  table.insert(data_for_fft, data:getSample(i))
end

for i = #data_for_fft + 1, 2048 do -- pad with zeroes
  data_for_fft[i] = 0
end
-- only 2048 samplepoints for the fft
local f = fft(data_for_fft)
So our freq range is ~16–22050 Hz; we should trim the data and display what we got.
The calculations may be incorrect due to the Nyquist limit.
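The arithmetic above can be sanity-checked directly (plain Python, illustrative only): the bin spacing of an N-point FFT is sample_rate / N, and decimating by two halves the effective rate, and with it the Nyquist limit; which is indeed why the 16 kHz figure is suspect.

```python
sample_rate = 44100
window = 2756

# Bin spacing (lowest resolvable frequency) of an N-point FFT.
resolution = sample_rate / window        # ~16 Hz, matching 44100 / 16 ≈ 2756

# Dropping every second samplepoint halves the effective rate...
decimated_rate = sample_rate // 2        # 22050
# ...so the new Nyquist limit is only a quarter of the original rate:
new_nyquist = decimated_rate / 2         # 11025.0 Hz, i.e. well below 16 kHz

assert 15.9 < resolution < 16.1
assert new_nyquist == 11025.0
```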
Science and violence

User avatar
raidho36
Party member
Posts: 2050
Joined: Mon Jun 17, 2013 12:00 pm

Re: Simple FFT demo

Post by raidho36 » Sun Feb 16, 2020 5:35 am

HDPLocust wrote:
Sun Feb 16, 2020 4:26 am
The calculations may be incorrect due to the Nyquist limit.
I call attention to the fact that you're basically doing resampling using the absolute lowest-quality algorithm in existence. You should not do any resampling at all unless you're dying for performance*; just feed the entire thing into the Fourier transform function.

*in which case you still don't resample, because that will create distortion which will readily show up in the transformed data; resampling is your last-resort option after you have exhausted all other optimization options, such as writing GPU compute shaders.
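The distortion in question is aliasing, and it is easy to demonstrate in plain Python (illustrative only): after keeping every second samplepoint, a tone above the new Nyquist limit becomes indistinguishable from a lower, folded tone.

```python
import math

rate = 44100
f = 15000                        # above the post-decimation Nyquist of 11025 Hz

# Sample one tone, then crudely decimate: keep every second samplepoint.
tone = [math.sin(2 * math.pi * f * n / rate) for n in range(64)]
decimated = tone[::2]

# At the halved rate, those samplepoints equal (up to sign) a tone folded down
# to 22050 - 15000 = 7050 Hz, so the FFT would report energy in the wrong bin.
folded = [math.sin(2 * math.pi * (rate / 2 - f) * n / (rate / 2))
          for n in range(32)]
assert all(abs(d + a) < 1e-9 for d, a in zip(decimated, folded))
```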

User avatar
HDPLocust
Citizen
Posts: 64
Joined: Thu Feb 19, 2015 10:56 pm
Location: Swamp
Contact:

Re: Simple FFT demo

Post by HDPLocust » Sun Feb 16, 2020 8:22 am

using the absolute lowest quality algorithm in existence
The specific algorithm does not matter, since it can be changed at any time. I am primarily interested in the functional analysis itself, including the reduction of the samplepoints.
Also, thanks to this, you can make almost any sound look like an eight-bit one (resampling => fft => normalization => rfft) : )

By the way, any audio player that consumes more than 5% of the CPU for everything, including visualization, is terrible. Obviously, in the real app I'll do a lot of optimization. I can do it.

User avatar
zorg
Party member
Posts: 2776
Joined: Thu Dec 13, 2012 2:55 pm
Location: Absurdistan, Hungary
Contact:

Re: Simple FFT demo

Post by zorg » Sun Feb 16, 2020 2:11 pm

5% really doesn't say much if you don't also state on what CPU; my i7-4820K's 5% might be equivalent to a P4's 25%, or a hypothetical futuristic intel consumer CPU's 0.5%... i do agree with the optimization sentiment here, though: if we're not linking to already well-tested and well-optimized fft libraries and are writing our own instead (in lua, even), we do need to squeeze out as much performance as we can (including not generating garbage at all)

