Bug #13334
openredux not using fftw in python3
redux defaults to using the slower fftpack if it cannot load fftw.
Description
redux in the myami-python3 branch currently only works with scipy.fftpack. It is slower, especially compared with fftw using wisdom on odd sizes like those from the K3 camera. The Python hook to fftw needs to be updated to use fftw3.
Please work on this in the myami-python3-fftw branch.
You can force redux to use fftw by commenting out the part that attempts fftpack in pyami/fft/registry.py; a rough sketch of that fallback pattern follows.
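For reference, a hypothetical sketch of the kind of fallback registration described above. The actual pyami/fft/registry.py is likely organized differently; the module and class names here are illustrative only:

    # Hypothetical illustration only -- module and class names are made up;
    # see the real pyami/fft/registry.py for the actual structure.
    calculators = []

    try:
        # Preferred backend: FFTW via pyfftw.
        from pyami.fft import fftw_calc                 # assumed module
        calculators.append(fftw_calc.FFTWCalculator())  # assumed class
    except ImportError:
        pass

    # Commenting out the block below removes the fftpack fallback and
    # therefore forces redux to use fftw (it fails loudly if fftw is missing).
    try:
        from pyami.fft import fftpack_calc                    # assumed module
        calculators.append(fftpack_calc.FFTPACKCalculator())  # assumed class
    except ImportError:
        pass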
Updated by Anchi Cheng about 2 years ago
Sargis, https://emg.nysbc.org/projects/leginon/repository/revisions/570d989a5d7727f03a9f100288a7a054886c85b1 still uses numpy fftpack.
Updated by Sargis Dallakyan about 2 years ago
Thank you Anchi. I fixed the import.
Updated by Anchi Cheng about 2 years ago
- File test_fft.py added
- Assignee changed from Sargis Dallakyan to cesar mena
The code works. However, it is not using fftw, but scipy_fft. Cesar, could you compare the timings I got from the testing script I attached with those you get from your Python 2.7 fftw version that has wisdom set up?
times (seconds):
0.5560429096221924
0.5202159881591797
0.5268056392669678
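For anyone reproducing these numbers without the attached script, a minimal stand-in harness might look like the sketch below; it uses random data and an assumed frame size instead of the .mrc files the real test_fft.py reads:

    # Minimal stand-in for test_fft.py: times three forward 2D FFTs of a
    # K3-sized frame. Random data here; the real script reads .mrc files.
    import time
    import numpy as np
    import scipy.fft

    image = np.random.rand(5760, 4092).astype(np.float32)  # assumed frame size

    for _ in range(3):
        t0 = time.time()
        scipy.fft.rfft2(image)
        print(time.time() - t0)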
Updated by Anchi Cheng about 2 years ago
O.K. I misread the manual. This is indeed using fftw; it just has an interface like scipy_fft. However, please still test that it is fast enough. It looks good from what I saw. Maybe this new pyFFTW already has wisdom-like performance.
Updated by cesar mena about 2 years ago
TIL that it is possible to use the fftw library as a DFT backend for scipy. See here: https://pyfftw.readthedocs.io/en/latest/source/tutorial.html#quick-and-easy-the-pyfftw-interfaces-module
However, it has to be set up, i.e. scipy.fft.set_backend(pyfftw.interfaces.scipy_fft).
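A minimal sketch of that registration, following the linked pyfftw tutorial (the thread count and transform shape below are just examples):

    import multiprocessing
    import numpy as np
    import scipy.fft
    import pyfftw
    import pyfftw.interfaces.scipy_fft  # the scipy.fft-compatible interface

    # Let pyfftw use all available cores (the default is single-threaded).
    pyfftw.config.NUM_THREADS = multiprocessing.cpu_count()

    image = np.random.rand(5760, 4092).astype(np.float32)

    # Route scipy.fft calls through FFTW within this block.
    with scipy.fft.set_backend(pyfftw.interfaces.scipy_fft):
        # Cache FFTW plans so repeated transforms of the same shape are fast.
        pyfftw.interfaces.cache.enable()
        spectrum = scipy.fft.rfft2(image)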
Did we do that, or are you measuring straight scipy performance?
Also, is there a particular .mrc you want me to test against?
-cm
Updated by Anchi Cheng about 2 years ago
I did this on anchi1 without specific setup. The mrc files referenced in the test script are still there. Please use them.
Updated by cesar mena about 2 years ago
This is using the setup in memcredux (py2). I don't think this sample is representative of a bad case. I've seen wisdom make the difference between 30 s and 1 s in practice. Anything under 1 s is OK. Even nccat's redux, on bare metal, takes around 1 s.
w/o wisdom:
calc_fftw3: 8 CPUs found, setting threads=8
0.775215148926
0.472229003906
0.493533849716
w/ wisdom:
calc_fftw3: 8 CPUs found, setting threads=8
calc_fftw3: local wisdom imported
0.779235124588
0.477630853653
0.496325016022
wisdom setup:
./fftwsetup.py 1 7676 7420
./fftwsetup.py 1 5760 4092
./fftwsetup.py 1 2880 2046
./fftwsetup.py 1 1440 1023
./fftwsetup.py 1 1008 1008
./fftwsetup.py 1 11520 8184
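For context, a rough sketch of what such a wisdom pre-planning step does in terms of the public pyfftw API. The actual fftwsetup.py may differ; the wisdom file path and helper names here are assumptions:

    # Illustrative only: plan transforms for the shapes above and save the
    # resulting FFTW wisdom so later runs can import it and skip planning.
    import pickle
    import pyfftw

    WISDOM_FILE = 'fftw_wisdom.pkl'  # assumed location

    def plan_shape(shape, threads=8):
        # FFTW_MEASURE benchmarks candidate algorithms for this shape and
        # records the winner as wisdom inside the FFTW library.
        a = pyfftw.empty_aligned(shape, dtype='float32')
        pyfftw.builders.rfft2(a, threads=threads, planner_effort='FFTW_MEASURE')

    def save_wisdom(path=WISDOM_FILE):
        with open(path, 'wb') as f:
            pickle.dump(pyfftw.export_wisdom(), f)

    def load_wisdom(path=WISDOM_FILE):
        # Later runs import the saved wisdom so planning the same shape is fast.
        with open(path, 'rb') as f:
            pyfftw.import_wisdom(pickle.load(f))

    if __name__ == '__main__':
        for shape in [(7676, 7420), (5760, 4092), (2880, 2046),
                      (1440, 1023), (1008, 1008), (11520, 8184)]:
            plan_shape(shape)
        save_wisdom()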
Updated by Anchi Cheng about 2 years ago
Thanks for running the test. If you have better examples, please try them out.