Bug #4414
array too big for generating miss stack
Description
Generating the missing stack does not work on the SEMC head.
Here is the error message:
Connected to database: 'nyap_141'
... Getting stack data for stackid=195
Old stack info: 'tst ... 83193 particle substack with 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,23,32,34,46 classes excluded'
... Querying original particles from stackid=195
... original stackid: 163
connecting
Original stack: /gpfs/appion/zzhang/16jul15c/stacks/stack27/start.hed
... Getting stack data for stackid=195
Old stack info: 'tst ... 83193 particle substack with 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,23,32,34,46 classes excluded'
... Stack 195 pixel size: 1.097
... generating stack: '/gpfs/appion/zzhang/16jul15c/stacks/alignsub97/start.hed' with 83193 particles
Traceback (most recent call last):
File "/opt/myamisnap/bin/generateMissingStack.py", line 35, in <module>
apVirtualStack.generateMissingStack(params['stackid'])
File "/opt/myamisnap/lib/appionlib/apVirtualStack.py", line 32, in generateMissingStack
a.run()
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 46, in run
self.approc2d.start()
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 400, in start
indata = self.readFileData(self.params['infile'])
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 193, in readFileData
data = imagic.read(filename)
File "/opt/myamisnap/lib/pyami/imagic.py", line 73, in read
a = readImagicData(pair['img'], header_dict, frame)
File "/opt/myamisnap/lib/pyami/imagic.py", line 43, in readImagicData
a = numpy.memmap(filename, dtype=dtype, mode='r', offset=start, shape=shape, order='C')
File "/usr/lib64/python2.6/site-packages/numpy/core/memmap.py", line 226, in new
offset=offset, order=order)
ValueError: array is too big.
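For reference, the failing call is the numpy.memmap in pyami/imagic.py line 43, which maps the entire start.img file as one array. A minimal sketch of that failure mode, with hypothetical image dimensions (the real shape comes from the IMAGIC .hed header):

import numpy

# Hypothetical stack geometry for illustration only; the actual values
# are read from the IMAGIC header by pyami/imagic.py.
nimg, nrow, ncol = 83193, 420, 420

# Essentially what imagic.py line 43 does: one memmap over the whole
# .img file. When nimg * nrow * ncol * 4 bytes exceeds what the host
# can map, this numpy raises "ValueError: array is too big."
data = numpy.memmap('/gpfs/appion/zzhang/16jul15c/stacks/stack27/start.img',
                    dtype=numpy.float32, mode='r', offset=0,
                    shape=(nimg, nrow, ncol), order='C')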
Updated by Anchi Cheng about 8 years ago
- Subject changed from "generate miss stack do not work in SEMC head" to "array too big for generating miss stack"
Updated by Neil Voss about 8 years ago
This is a known problem and is why I created the StackClass, but I have not ported StackClass to apProc2dLib.
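For context, the idea behind StackClass is to touch only a bounded number of particles at a time instead of mapping the whole file. A minimal sketch of that chunked-read approach, assuming float32 IMAGIC data (read_in_chunks is a hypothetical helper, not the actual StackClass API):

import numpy

def read_in_chunks(img_path, nimg, nrow, ncol, chunk=1024):
    # Yield blocks of at most `chunk` images from an IMAGIC .img file,
    # so only chunk * nrow * ncol * 4 bytes are held in memory at once.
    framesize = nrow * ncol * 4  # bytes per float32 image
    with open(img_path, 'rb') as f:
        for start in range(0, nimg, chunk):
            count = min(chunk, nimg - start)
            f.seek(start * framesize)
            raw = f.read(count * framesize)
            yield numpy.frombuffer(raw, dtype=numpy.float32).reshape(count, nrow, ncol)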
Updated by Anchi Cheng about 8 years ago
- Assignee changed from Sargis Dallakyan to Neil Voss
Any workaround suggestions for the user?
Updated by Giovanna Scapin about 8 years ago
Same error during a simple stack generation.
Updated by Neil Voss about 8 years ago
This change was made at the 2015 Workshop; what has happened for this to become a problem now?
Updated by Sargis Dallakyan about 8 years ago
I'm looking for a test case to reproduce this. Giovanna or Zhening, can you please copy the output directory, or use the "Just Show Command" button and copy/paste the result here? I've searched for this error in all *.log files under /gpfs/appion/zzhang/16jul15c and can't find it there.
Updated by Zhening Zhang about 8 years ago
Here is the command: generateMissingStack.py --projectid=141 --expid=2309 --stackid=194
Here are the error messages:
Connected to database: 'nyap_141'
... Getting stack data for stackid=194
Old stack info: 'tst2 ... 59661 particle substack with 1,2,3,4,5,7,8,9,10,11,12,14,15,16,17,18,20,21 classes included'
... Querying original particles from stackid=194
... original stackid: 163
connecting
Original stack: /gpfs/appion/zzhang/16jul15c/stacks/stack27/start.hed
... Getting stack data for stackid=194
Old stack info: 'tst2 ... 59661 particle substack with 1,2,3,4,5,7,8,9,10,11,12,14,15,16,17,18,20,21 classes included'
... Stack 194 pixel size: 1.097
... generating stack: '/gpfs/appion/zzhang/16jul15c/stacks/alignsub97a/start.hed' with 59661 particles
Traceback (most recent call last):
File "/opt/myamisnap/bin/generateMissingStack.py", line 35, in <module>
apVirtualStack.generateMissingStack(params['stackid'])
File "/opt/myamisnap/lib/appionlib/apVirtualStack.py", line 32, in generateMissingStack
a.run()
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 46, in run
self.approc2d.start()
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 400, in start
indata = self.readFileData(self.params['infile'])
File "/opt/myamisnap/lib/appionlib/proc2dLib.py", line 193, in readFileData
data = imagic.read(filename)
File "/opt/myamisnap/lib/pyami/imagic.py", line 73, in read
a = readImagicData(pair['img'], header_dict, frame)
File "/opt/myamisnap/lib/pyami/imagic.py", line 43, in readImagicData
a = numpy.memmap(filename, dtype=dtype, mode='r', offset=start, shape=shape, order='C')
File "/usr/lib64/python2.6/site-packages/numpy/core/memmap.py", line 226, in new
offset=offset, order=order)
ValueError: array is too big.
Updated by Giovanna Scapin about 8 years ago
Mine was identical to Zhening's (trying to generate a missing stack), but I don't have the output, sorry. It also happened when I tried to start a CL2D alignment on the SEMC-Head cluster.
Updated by Sargis Dallakyan about 8 years ago
I've looked into this; the reason it gives this error is that SEMC-Head does not have enough free memory. It works on the nodes when you run it after doing qsub -I.
It's trying to read a 56G file into memory, and on SEMC-head only 23G of free memory is currently available:
[root@SEMC-head generic_webservice]# ls -alh /gpfs/appion/zzhang/16jul15c/stacks/stack27/start.img
-rw-r--r-- 1 zzhang emg 56G Aug 5 00:09 /gpfs/appion/zzhang/16jul15c/stacks/stack27/start.img
[root@SEMC-head generic_webservice]# free -m -h
             total       used       free     shared    buffers     cached
Mem:          252G       228G        23G        13M       1.1G       195G
-/+ buffers/cache:        32G       219G
Swap:          15G       473M        15G
On the node that it works, there is enough free memory:
[sargis@node34 ~]$ free -m -h
             total       used       free     shared    buffers     cached
Mem:          252G       7.3G       244G        12K       170M       537M
-/+ buffers/cache:       6.7G       245G
Swap:          15G       105M        15G
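Based on this, a possible workaround until apProc2dLib is changed: run the job from an interactive session on a compute node rather than on the head node, e.g. (the prompts are illustrative; the arguments are the ones Zhening posted above):

$ qsub -I
$ generateMissingStack.py --projectid=141 --expid=2309 --stackid=194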