Bug #3892

Makestack Average Stack is Slow

Added by Neil Voss almost 9 years ago. Updated almost 9 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: Image Processing
Target version: -
Start date: 01/21/2016
Due date:
% Done: 0%
Estimated time:
Affected Version: Appion/Leginon 3.2
Show in known bugs: No
Workaround:

Description

Scott, I seem to remember you reporting that the average stack is slow. Is this during the stack making or just the final step? The stack making itself should be really fast. I am working on/reviewing the finish part. Do you have any comments?


Related issues: 1 (0 open, 1 closed)

Related to Appion - Bug #3753: makestack2 not inverting average (Closed, Gabriel Lander, 11/10/2015)

Actions #1

Updated by Neil Voss almost 9 years ago

  • Related to Bug #3753: makestack2 not inverting average added
Actions #2

Updated by Gabriel Lander almost 9 years ago

Checked with my group - it's not makestack that's super slow, it's alignSubStack.py that's really slow for large datasets. I guess it's because we use proc2d to make the averaged substack.

Actions #3

Updated by Neil Voss almost 9 years ago

  • Status changed from Assigned to In Test
  • Assignee changed from Scott Stagg to Gabriel Lander

This should fix that.
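For context, the speed-up presumably comes from averaging the stack in memory in chunks (the processStack output in the later comments reflects this) rather than writing an averaged stack through proc2d. A minimal sketch of the approach, assuming numpy; read_chunk and the 5% memory fraction are stand-ins, not the actual apImagicFile code:

import numpy as np

def average_stack_chunked(read_chunk, num_parts, box_size, free_bytes,
                          mem_fraction=0.05):
    # read_chunk(first, last) is a hypothetical helper returning an
    # (n, box, box) float array for 1-based particle numbers first..last.
    per_part = box_size * box_size * 4                # float32 bytes/particle
    allowed = max(1, int(free_bytes * mem_fraction) // per_part)
    total = np.zeros((box_size, box_size), dtype=np.float64)
    for first in range(1, num_parts + 1, allowed):    # numbering starts at 1
        last = min(first + allowed - 1, num_parts)
        total += read_chunk(first, last).sum(axis=0)  # accumulate chunk sums
    return total / num_parts                          # mean image for the web page

This keeps at most 'allowed' particles resident at once, which appears to be what the "Particles allowed in memory" line in the logs reports.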

Actions #4

Updated by Gabriel Lander almost 9 years ago

Get this error when making a substack now:

 ... Keeping 15830 and excluding 32267 particles
 ... writing to keepfile /gpfs/group/em/appion/asong/16feb01g/stacks/alignsub67/keepfile-16feb02s01.list
 ... creating a new stack
    /gpfs/group/em/appion/asong/16feb01g/stacks/alignsub67/start.hed
from the oldstack
    /gpfs/group/em/appion/asong/16feb01g/stacks/stack1/start.hed

!!! WARNING: Assuming apix is 1.0 A/pixel
 ... averaging stack for summary web page
 ... processStack: Free memory: 44.6 GB
 ... processStack: Box size: 96
 ... processStack: Memory used per part: 36.0 kB
 ... processStack: Max particles in memory: 1300253
 ... processStack: Particles allowed in memory: 65012
 ... processStack: Number of particles in stack: 15830
 ... processStack: Particle loop num chunks: 1
 ... processStack: Particle loop step size: 15830
 ... processStack: partnum 1 to 15830 of 15830
 ... processStack: actual partnum 0 to 48095

['/gpfs/home/glander/myami/appion/appionlib', '/gpfs/home/glander/myami/appion']
connecting
Wrote 15830 particles to file 
Traceback (most recent call last):
  File "/gpfs/home/glander/myami/appion/bin/alignSubStack.py", line 325, in <module>
    subStack.start()
  File "/gpfs/home/glander/myami/appion/bin/alignSubStack.py", line 307, in start
    apStack.averageStack(stack=oldstack,outfile=outavg,partlist=includeParticle)
  File "/gpfs/home/glander/myami/appion/appionlib/apStack.py", line 332, in averageStack
    avgStack.start(stackfile, partlist)
  File "/gpfs/home/glander/myami/appion/appionlib/apImagicFile.py", line 1013, in start
    stackarray = readParticleListFromStack(stackfile, sublist, msg=False)
  File "/gpfs/home/glander/myami/appion/appionlib/apImagicFile.py", line 859, in readParticleListFromStack
    apDisplay.printError("particle numbering starts at 1")
  File "/gpfs/home/glander/myami/appion/appionlib/apDisplay.py", line 65, in printError
    raise Exception, colorString("\n *** FATAL ERROR ***\n"+text+"\n\a","red")
Exception:
 *** FATAL ERROR ***
particle numbering starts at 1
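For reference, the guard that fires here is a 1-based numbering check inside readParticleListFromStack. A minimal reconstruction of that check; the surrounding code is an assumption based only on the traceback:

def check_particle_numbering(partlist):
    # Appion particle numbers are 1-based, so a 0 (or negative) entry in
    # the keep list is fatal -- hypothetical stand-in for the check at
    # appionlib/apImagicFile.py line 859.
    if min(partlist) < 1:
        raise ValueError("particle numbering starts at 1")

# A keep list built from 0-based proc2d indices trips it:
# check_particle_numbering([0, 1, 2])  ->  ValueError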

Actions #5

Updated by Neil Voss almost 9 years ago

I'll take a look.

Actions #6

Updated by Neil Voss almost 9 years ago

  • Status changed from In Test to Assigned
  • Assignee changed from Gabriel Lander to Neil Voss
Actions #7

Updated by Neil Voss almost 9 years ago

  • Assignee changed from Neil Voss to Gabriel Lander

Hi Gabe,
I cannot reproduce your error, it works every time for me. The problem is that a while ago I set up our substack maker to start at index 1, while proc2d started at 0, so I have to add one to each particle number. When I do an alignment substack of GroEL, it correctly separates the side and top views into separate stacks, and I made sure to include particle #1.
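The off-by-one he describes amounts to shifting 0-based proc2d indices up to the 1-based numbering the stack reader expects. A minimal sketch; to_one_based is a hypothetical name, not the actual apStack helper:

def to_one_based(include_particles):
    # proc2d counts particles from 0; the substack maker and stack reader
    # count from 1, so every index must be shifted up by one.
    return [p + 1 for p in include_particles]

# e.g. a 0-based keep list [0, 1, 2] becomes [1, 2, 3], so particle #1
# (the boundary case mentioned above) is kept rather than rejected.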

projectid=1 --expid=2 --jobtype=makestack ta/appion/06jul12a/stacks/alignsub1 --runname=alignsub1 -- 

 ... Time stamp: 16feb05e03
 ... Function name: alignSubStack
 ... Appion directory: /usr/lib64/python2.6/site-packages
 ... Processing hostname: c020f51827bd
 ... Using split database
Connected to database: 'ap1'
 ... Committing data to database
 ... Run directory: /emg/data/appion/06jul12a/stacks/alignsub1
 ... Writing function log to: alignSubStack.log
 ... Uploading ScriptData....
 ... Found 3 processors on this machine
 ... Running Appion version 'trunk'
 ... Getting stack data for stackid=1
Old stack info: 'testing average stack'
 ... Exclude list: []
 ... Include list: [0, 1]
 ... Querying database for particles
connecting
 ... Completed in 6.04 msec

 ... Parsing particle information
 ... Completed in 1.93 msec

 ... Keeping 595 and excluding 0 particles
 ... writing to keepfile /emg/data/appion/06jul12a/stacks/alignsub1/keepfile-16feb05e03.list
 ... creating a new stack
    /emg/data/appion/06jul12a/stacks/alignsub1/start.hed
from the oldstack
    /emg/data/appion/06jul12a/stacks/stack1/start.hed

!!! WARNING: Assuming apix is 1.0 A/pixel
594 of 595
Wrote 595 particles to file
 ... averaging stack for summary web page
 ... processStack: Free memory: 6.1 GB
 ... processStack: Box size: 160
 ... processStack: Memory used per part: 100.0 kB
 ... processStack: Max particles in memory: 64081
 ... processStack: Particles allowed in memory: 3204
 ... processStack: Number of particles in stack: 595
 ... processStack: Particle loop num chunks: 1
 ... processStack: Particle loop step size: 595
 ... processStack: partnum 1 to 595 of 595
 ... processStack: actual partnum 1 to 595
 ... processStack: finished processing stack in 1.34 sec
got old stackdata in 1.19 msec
 ... created new stackdata in 5.16 msec

 ... Getting list of particles to include
 ... Completed in 0.59 msec

 ... Retrieving original stack information
 ... Completed in 13.25 msec

 ... Assembling database insertion command
 ... Inserting particle information into database

Inserted 595 stack particles into the database in 40.54 msec
 ... 
Inserting Runs in Stack
 ... finished
 ... creating Stack Mean Plot montage for stackid: 2
 ... Getting stack data for stackid=2
Old stack info: 'sefsdfsf ... 595 particle substack with 0,1 classes included'
 ... binning stack by 2
 ... {'minmean': 608.80609374999995, 'maxstdev': 105.94217173, 'minstdev': 73.776147910000006, 'maxmean': 1006.6071875}
 ... 00:     0     5 
 ... 01:     0   207 
 ... 02:    31    38 
 ... 03:   188     0 
 16 of  16, 03x03:      0
 ... writing stack to disk from memory: /emg/data/appion/06jul12a/stacks/alignsub1/montage2.hed
 ... wrote 16 particles to header file
 ... finished in 15.29 msec

 ... assembling pngs into montage
 ... reading stack from disk into memory: montage2.hed
 ... read 16 particles equaling 400.0 kB in size
 ... finished in 0.62 msec
 ... montaging
EMAN: montage -geometry +4+4 00x00.png 00x01.png 00x02.png 00x03.png 01x00.png 01x01.png 01x02.png 01x03.png 02x00.png 02x01.png 02x02.png 02x03.png 03x00.png 03x01.png 03x02.png 03x03.png /emg/data/appion/06jul12a/stacks/alignsub1/montage2.png
 ... /bin/mv -v montage2.??? /emg/data/appion/06jul12a/stacks/alignsub1
 ... finished in 1.45 sec
 ... Closing out function log: alignSubStack.log
 ... Ended at Fri, 05 Feb 2016 04:03:20
Total run time:    3.68 sec
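As an aside, the processStack lines in both logs follow one memory budget: bytes per particle = box size squared times 4 (float32 pixels), the max that fits is free memory divided by that, and roughly one twentieth of the max is allowed per chunk. A quick check of the arithmetic; the 5% fraction is inferred from the logged values, not read from the source:

def process_stack_budget(free_bytes, box_size, fraction=0.05):
    # fraction=0.05 is an inference from the logs, not confirmed in code
    per_part = box_size * box_size * 4    # 160^2 * 4 = 102400 B = 100.0 kB
    max_in_mem = free_bytes // per_part
    allowed = int(max_in_mem * fraction)
    return per_part, max_in_mem, allowed

# ~6.1 GiB free and box 160 gives roughly (102400, 63963, 3198), close to
# the logged 100.0 kB / 64081 / 3204 (the log rounds the free memory).
print(process_stack_budget(int(6.1 * 2**30), 160))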
Actions #8

Updated by Gabriel Lander almost 9 years ago

Couldn't reproduce this either, how odd. There are some upgrades to the cluster going on; maybe something got screwed up.

Actions #9

Updated by Neil Voss almost 9 years ago

  • Status changed from Assigned to Closed
  • Assignee deleted (Gabriel Lander)

I am going to close this issue. Open it again if the problem resurfaces.
