ssqueezepy has a bunch of optimizations around FFT and CWT:

- numba jit
- FFTW if available
- optional parallel processing for FFT
- torch
**jit**

I tried this once and it was slower than what I had already. However, it's possible that I was using something that numba couldn't handle, so it fell back to plain Python (object mode). I should try again with `nopython=True` to make sure I'm not feeding numba incompatible code.
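A minimal sketch of what that retry could look like. `njit` is numba's shorthand for `jit(nopython=True)`, which raises a `TypingError` instead of silently falling back to object mode; `morlet_window` is a hypothetical stand-in for whatever inner loop is being compiled, not code from this project:

```python
import numpy as np

try:
    # njit == jit(nopython=True): compilation fails loudly on unsupported code
    from numba import njit
except ImportError:
    # no-op fallback so the sketch still runs without numba installed
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

@njit
def morlet_window(n, w0=6.0):
    """Toy kernel in the spirit of a CWT inner loop (illustrative only)."""
    t = np.linspace(-4.0, 4.0, n)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.exp(-0.5 * t[i] * t[i]) * np.cos(w0 * t[i])
    return out
```

If the decorated function uses anything numba can't type (e.g. Python objects, unsupported numpy calls), `@njit` will raise at compile time rather than quietly running slow.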
**fftw**

This should be an easy win. The first sample will take much longer to process while FFTW runs its planner, but subsequent transforms will be faster.
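One low-effort way to try this, assuming pyfftw is the FFTW binding in play: register `pyfftw.interfaces.scipy_fft` as a `scipy.fft` backend and enable pyfftw's plan cache, so the planning cost is paid once. A sketch, with a fallback to scipy's own pocketfft when pyfftw isn't installed:

```python
import numpy as np
import scipy.fft

try:
    import pyfftw
    pyfftw.interfaces.cache.enable()  # keep FFTW plans alive between calls
    _backend = pyfftw.interfaces.scipy_fft
except ImportError:
    _backend = None  # fall back to scipy's built-in pocketfft

def fft_with_fftw(x):
    """FFT via FFTW when available, otherwise scipy's default."""
    if _backend is not None:
        with scipy.fft.set_backend(_backend):
            return scipy.fft.fft(x)
    return scipy.fft.fft(x)
```

Because downstream code keeps calling `scipy.fft.fft`, nothing else in the pipeline has to change.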
**parallel**

`scipy.fft.fft(..., workers=N)` -- does this actually help? Isn't the threading overhead more than the savings?
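The question is easy to answer empirically. A quick benchmark sketch; the array shape is an arbitrary choice for illustration (many medium-size rows is the case where `workers` is most likely to pay off, since scipy parallelizes over the batch dimension):

```python
import numpy as np
from scipy import fft
from timeit import timeit

x = np.random.rand(64, 4096)  # batch of rows: workers split work across them

t1 = timeit(lambda: fft.fft(x, workers=1), number=20)
tN = timeit(lambda: fft.fft(x, workers=4), number=20)
print(f"workers=1: {t1:.4f}s  workers=4: {tN:.4f}s")
# Whether tN < t1 depends on input size; for small single transforms
# the thread overhead can indeed exceed the savings.
```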
**torch**

We will need all the nodes in a standard pipeline to handle torch tensors before the overhead of moving data to GPU/MPS is worth it.
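To make the overhead concrete: each boundary between a numpy node and a torch node costs a host/device transfer. A hypothetical sketch of a single FFT node that picks a device and pays that round trip (`fft_pipeline` is an illustrative name, not project code); it falls back to numpy when torch is absent:

```python
import numpy as np

try:
    import torch
    HAS_TORCH = True
except ImportError:
    HAS_TORCH = False

def fft_pipeline(x):
    """FFT on GPU/MPS via torch when available, else numpy."""
    if HAS_TORCH:
        device = ("cuda" if torch.cuda.is_available()
                  else "mps" if torch.backends.mps.is_available()
                  else "cpu")
        # MPS has no float64, so cast down; this is one hidden cost
        t = torch.from_numpy(np.asarray(x, dtype=np.float32)).to(device)
        out = torch.fft.fft(t)          # stays on-device
        return out.cpu().numpy()        # crossing back is a second transfer
    return np.fft.fft(x)
```

If every node in the pipeline consumed and produced torch tensors, the `.to(device)` / `.cpu()` pair would happen once at the ends instead of once per node, which is the condition described above.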