Bug #13334

redux not using fftw in python3

Added by Anchi Cheng over 1 year ago. Updated over 1 year ago.

Status: Assigned
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 08/19/2022
Due date: -
% Done: 0%
Estimated time: -
Affected Version: Appion/Leginon 4.0
Show in known bugs: No
Workaround: redux defaults to the slower fftpack if it cannot load fftw.


Description

redux in the myami-python3 branch currently only works with scipy.fftpack. It is slower than fftw with wisdom, especially on odd sizes like those from the K3. The Python hook to fftw needs to be updated to use fftw3.

Please work on this in the myami-python3-fftw branch.

You can force redux to use fftw by commenting out the part that attempts fftpack in pyami/fft/registry.py.
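For context, a minimal sketch of the preference pattern described above, assuming a registry that records whichever backends import cleanly; the names are illustrative and this is not the actual pyami/fft/registry.py code:

# Illustrative sketch only, not the actual pyami/fft/registry.py.
_backends = []

def _probe_backends():
    try:
        import pyfftw  # fftw hook; preferred when it loads
        _backends.append('fftw')
    except ImportError:
        pass
    try:
        # The fallback redux ends up on when the fftw hook fails to load;
        # commenting this out forces the fftw path and surfaces its errors.
        import scipy.fftpack
        _backends.append('fftpack')
    except ImportError:
        pass

def preferred_backend():
    """Return the first backend that imported successfully."""
    if not _backends:
        _probe_backends()
    return _backends[0]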


Files

test_fft.py (241 Bytes), Anchi Cheng, 08/24/2022 08:27 PM
Actions #1

Updated by Anchi Cheng over 1 year ago

  • Workaround updated (diff)
Actions #3

Updated by Sargis Dallakyan over 1 year ago

Thank you Anchi. I fixed the import.

Actions #4

Updated by Anchi Cheng over 1 year ago

The code works. However, it is not using fftw, but scipy_fft. Cesar, could you compare the timings I got from the attached test script with the ones you get from your Python 2.7 fftw version that has wisdom set up?

time:
0.5560429096221924
0.5202159881591797
0.5268056392669678
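
The attached test_fft.py is not reproduced here; purely to illustrate the kind of timing loop behind numbers like these, a hypothetical sketch with random data standing in for the .mrc files (the shape is just an example):

# Hypothetical timing sketch; the real test reads .mrc files. Random data of
# an arbitrary K3-like odd shape stands in for them here.
import time
import numpy
import scipy.fft

image = numpy.random.random((5760, 4092)).astype(numpy.float32)

for _ in range(3):
    t0 = time.time()
    scipy.fft.rfft2(image)
    print(time.time() - t0)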

Actions #5

Updated by Anchi Cheng over 1 year ago

OK, I misread the manual. This is indeed using fftw; it just has an interface like scipy_fft. However, please still test that it is fast enough. It looks good from what I saw. Maybe this new pyFFTW already has wisdom-like performance.

Actions #6

Updated by cesar mena over 1 year ago

TIL that it is possible to use the fftw library as a DFT backend for scipy. See here: https://pyfftw.readthedocs.io/en/latest/source/tutorial.html#quick-and-easy-the-pyfftw-interfaces-module

However, it has to be set up, i.e. scipy.fft.set_backend(pyfftw.interfaces.scipy_fft).
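
A minimal sketch of that setup, assuming the pyFFTW scipy_fft interface from the tutorial above (the cache call and the example shape are illustrative additions of mine, not something redux configures yet):

# Sketch: route scipy.fft calls through pyFFTW's scipy-compatible interface.
import numpy
import scipy.fft
import pyfftw.interfaces.cache
import pyfftw.interfaces.scipy_fft

# Keep planned FFTW objects between calls for wisdom-like reuse (an assumption
# here, not part of the redux code).
pyfftw.interfaces.cache.enable()

image = numpy.random.random((1008, 1008))

# set_backend is a context manager; inside the block scipy.fft dispatches to fftw.
with scipy.fft.set_backend(pyfftw.interfaces.scipy_fft):
    spectrum = scipy.fft.rfft2(image)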

Did we do that, or are you measuring straight scipy performance?

Also, is there a particular .mrc you want me to test against?

-cm

Actions #7

Updated by Anchi Cheng over 1 year ago

I did this on anchi1 without any specific setup. The .mrc files referenced in the test script are still there; please use them.

Actions #8

Updated by cesar mena over 1 year ago

This is using the setup in memcredux (py2). I don't think this sample is representative of a bad case; I've seen wisdom bring a 30 s case down to 1 s in practice. Anything under 1 s is OK. Even nccat's redux, on bare metal, takes around 1 s too.

w/o wisdom:
calc_fftw3: 8 CPUs found, setting threads=8
0.775215148926
0.472229003906
0.493533849716

w/ wisdom:
calc_fftw3: 8 CPUs found, setting threads=8
calc_fftw3: local wisdom imported
0.779235124588
0.477630853653
0.496325016022

wisdom setup:
./fftwsetup.py 1 7676 7420
./fftwsetup.py 1 5760 4092
./fftwsetup.py 1 2880 2046
./fftwsetup.py 1 1440 1023
./fftwsetup.py 1 1008 1008

./fftwsetup.py 1 11520 8184
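
For reference, the pyFFTW calls for pre-planning these shapes could look roughly like the sketch below; this is a guess at the kind of work fftwsetup.py does, not the script itself, and the wisdom file name is made up:

# Hypothetical wisdom pre-planning sketch with pyFFTW; not fftwsetup.py itself.
import pickle
import pyfftw

shapes = [(7676, 7420), (5760, 4092), (2880, 2046),
          (1440, 1023), (1008, 1008), (11520, 8184)]

for shape in shapes:
    a = pyfftw.empty_aligned(shape, dtype='float32')
    # Planning with FFTW_MEASURE records wisdom for this shape and thread count.
    pyfftw.builders.rfft2(a, planner_effort='FFTW_MEASURE', threads=8)

# Save the accumulated wisdom so later runs can import it.
with open('fftw_wisdom.pkl', 'wb') as f:
    pickle.dump(pyfftw.export_wisdom(), f)

# A later process would restore it with pyfftw.import_wisdom(...) before planning.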

Actions #9

Updated by Anchi Cheng over 1 year ago

Thanks for running the test. If you have a better example, please try it out.
