Bug #2366


Particle Clustering is not committing to Database

Added by Julia Luciano almost 11 years ago. Updated almost 11 years ago.

Status: Assigned
Priority: Normal
Assignee: Sargis Dallakyan
Category: -
Target version:
Start date: 05/20/2013
Due date:
% Done: 0%
Estimated time:
Affected Version: Appion/Leginon 2.2.0
Show in known bugs: No
Workaround:
Description

Using goby (emportal.scripps.edu/).
I've tried to "Run particle clustering" on 2 different "Feature Analysis" runs a few times each.
It looks like it is not committing to the database, since I cannot see the results when the job is done.
Thanks



#1

Updated by Sargis Dallakyan almost 11 years ago

Sorry about that. For which session does this happen? I can see that you have one job currently running on goby at /ami/exdata/appion/12dec05b/rctvolume/rct7align35class0. Please let us know the path to the job that is not committing to the database so we can troubleshoot this.
Thanks.

#2

Updated by Amber Herold almost 11 years ago

Sargis,
I see 2 jobs in the database that show an error status instead of done or running. One of the jobs' run directory is:

/ami/exdata/appion/12dec05b/align/coran3_original_stack_Keap1

It died with an error.
Spider failed to create an expected output file. I checked the spider log and the only thing I see is:
 *** UNDEFINED ENVIRONMENTAL VARIABLE:SPBIN_DIR
 PUT DEFINITION IN YOUR STARTUP FILE.  E.G.
 FOR C SHELL, ADD FOLLOWING TO .cshrc FILE 
 setenv SPBIN_DIR DIRECTORY_OF_BINARY_FILES

I checked the appion wrapper that emportal is using for processing on Goby at /opt/myami-2.2/bin/appion, and it looks like none of the 3rd-party package paths are set.
I'm not sure this is the issue, though; it seems like this would have caused problems with other commands as well. But I am curious where all the environment variables are coming from, if not from the wrapper?
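
For reference, here is a minimal sketch of the kind of wrapper behavior described above: export the SPIDER-related variables (SPBIN_DIR is the one the log complains about) before dispatching the processing command, so any child process inherits them. The directory values and the wrapper layout are placeholders for illustration, not the actual configuration on goby.

#!/usr/bin/env python
# Hypothetical sketch of a processing wrapper that exports 3rd-party paths
# before dispatching the requested Appion command. The directories below are
# placeholders, not the real install locations on goby.
import os
import subprocess
import sys

THIRD_PARTY_ENV = {
    "SPBIN_DIR": "/opt/spider/bin/",    # SPIDER binaries (the variable reported as undefined)
    "SPMAN_DIR": "/opt/spider/man/",    # SPIDER manual/doc files
    "SPPROC_DIR": "/opt/spider/proc/",  # SPIDER procedure files
}

def main(argv):
    env = os.environ.copy()
    for name, default in THIRD_PARTY_ENV.items():
        # Respect values already present in the caller's environment.
        env.setdefault(name, default)
    # Run the requested command with the augmented environment so that
    # programs it spawns (e.g. SPIDER) see SPBIN_DIR and friends.
    return subprocess.call(argv, env=env)

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))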

#3

Updated by Amber Herold almost 11 years ago

Just to keep all the info with the issue, here is an email from Julia.

Hi There

The output directory should have been: /ami/exdata/appion/12dec05b/align/

and I've tried on both stack files:
/ami/exdata/appion/12dec05b/align/substack_Keap1_spiderref-based_3/aligned.hed
/ami/exdata/appion/12dec05b/align/Keap1_spiderref-based_3_original_stack/aligned.hed

I hope this is the information you are looking for. If not, let me know.

Thanks again

Julia

#4

Updated by Sargis Dallakyan almost 11 years ago

Thanks Amber and Julia. Julia, could you please rerun this job? I wasn't able to find anything in our logs that would help troubleshoot this. Thanks.

#5

Updated by Julia Luciano almost 11 years ago

Sure.
I just did, and so far it shows as a running job (2 running jobs) in my Appion job status. It should be in the same directory as I stated above: /ami/exdata/appion/12dec05b/align/
Thanks

#6

Updated by Julia Luciano almost 11 years ago

Hi there,
Please see the attached screenshot of the error I encountered again.
Any guidance will be much appreciated.

#7

Updated by Sargis Dallakyan almost 11 years ago

  • Status changed from New to Assigned
  • Assignee set to Sargis Dallakyan
  • Target version set to Appion/Leginon 2.2.0
  • Affected Version changed from Appion/Leginon 2.1.0 to Appion/Leginon 2.2.0

Hi Julia,

Thank you for the screenshot. I can confirm this bug now and will look into it. So far, from the error log at /ami/exdata/appion/12dec05b/align/coran3_original_stack_Keap1/coran3_original_stack_Keap1.appionsub.log:

...
============================
processing class averages for 84 classes
============================

 ... Using the hierarch clustering method

 \__`O O'__/        SPIDER  --  COPYRIGHT
 ,__xXXXx___        HEALTH RESEARCH INC., ALBANY, NY.
  __xXXXx__
 /  /xxx\  \        VERSION:  UNIX  18.10 ISSUED: 03/23/2010
   /     \          DATE:     20-MAY-2013    AT  13:43:05

 **** SPIDER NORMAL STOP ****
finding threshold<<<<<<<<>><>>executing command: ('CL HE', 0.01005651510599379, 'cluster/dendrogramdoc1-4', 'cluster/classdoc_13may20m37_****')
delete existing filescreate class averages.....................................................................................waiting for spider
EMAN: proc2d cluster/classavgstack_13may20m37_084.spi cluster/classavgstack_13may20m37_084.hed
EMAN: proc2d cluster/classvarstack_13may20m37_084.spi cluster/classvarstack_13may20m37_084.hed

14 rounds for 85 classes

Traceback (most recent call last):
  File "/opt/myami-2.2/bin/clusterCoran.py", line 205, in ?
    clusterCoran.start()
  File "/opt/myami-2.2/bin/clusterCoran.py", line 196, in start
    self.insertClusterStack(classavg, classvar, numclass, insert=True)
  File "/opt/myami-2.2/bin/clusterCoran.py", line 110, in insertClusterStack
    apDisplay.printError("could not find average stack file: "+imagicfile)
  File "/opt/myami-2.2/lib/appionlib/apDisplay.py", line 57, in printError
    raise Exception, colorString("\n *** FATAL ERROR ***\n"+text+"\n\a","red")
Exception: 
 *** FATAL ERROR ***
could not find average stack file: /ami/exdata/appion/12dec05b/align/coran3_original_stack_Keap1/cluster/classavgstack_13may20m37_084.hed

I see that it failed to create cluster/classavgstack_13may20m37_084.spi. From myami-2.2/appion/appionlib/apSpider/classification.py#L359 it seems that SPIDER's AS R command is failing quietly. I'll need to read some docs and code to understand why.
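
As a sketch of how the quiet failure could be surfaced where it happens rather than later in insertClusterStack(): check for the expected SPIDER output right after issuing the AS R command and raise a visible error that includes whatever SPIDER logged. The helper below and its logfile argument are hypothetical, not existing appionlib code.

import os

def require_spider_output(path, logfile=None):
    """Raise a visible error if SPIDER did not create an expected output file.

    Hypothetical helper: call it on cluster/classavgstack_*.spi right after the
    'AS R' command, so a silent SPIDER failure stops the job immediately
    instead of surfacing later as a missing .hed stack in insertClusterStack().
    """
    if os.path.isfile(path) and os.path.getsize(path) > 0:
        return
    msg = "SPIDER did not produce expected output file: %s" % path
    if logfile and os.path.isfile(logfile):
        # Include the tail of the SPIDER results/log file for context.
        with open(logfile) as logf:
            msg += "\n--- tail of %s ---\n%s" % (logfile, logf.read()[-2000:])
    raise RuntimeError(msg)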
