Bug #4248: crash on large substack database insertion

Added by Neil Voss over 8 years ago. Updated over 8 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 06/08/2016
Due date:
% Done: 0%
Estimated time:
Affected Version: Appion/Leginon 3.3
Show in known bugs: No
Workaround:

Description

I was running the mean stdev substack program on a 128 box stack with 20,876 particles. Command:

stackFilter.py --old-stack-id=1 --description="remove extreme mean stdev" --minx=2200 --maxx=5300 --miny=340 --maxy=700 --commit --rundir=/emg/data/appion/artemia2/stacks/meanfilt1 --runname=meanfilt1 --projectid=1 --expid=3 --jobtype=makestack 

Most of the particles were kept:

 ... received 20876 stack particles in 5.69 sec
20263 included

Error message:

got old stackdata in 2.47 msec
 ... created new stackdata in 8.05 msec

 ... Getting list of particles to include
 ... Completed in 49.61 msec

 ... Retrieving original stack information
 ... Completed in 53.38 msec

 ... Assembling database insertion command
 ... Inserting particle information into database
connecting
Traceback (most recent call last):
  File "/emg/sw/myami/appion/bin/stackFilter.py", line 102, in <module>
    subStack.start()
  File "/emg/sw/myami/appion/bin/stackFilter.py", line 91, in start
    apStack.commitSubStack(self.params, newname, oldstackparts=stackparts)
  File "/usr/lib64/python2.6/site-packages/appionlib/apStack.py", line 590, in commitSubStack
    sinedon.directq.complexMysqlQuery('appiondata',sqlcmd)
  File "/usr/lib/python2.6/site-packages/sinedon/directq.py", line 24, in complexMysqlQuery
    results = cur.selectall(query)
  File "/usr/lib/python2.6/site-packages/sinedon/sqldb.py", line 42, in selectall
    self.c.execute(strSQL, param)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1153, "Got a packet bigger than 'max_allowed_packet' bytes")

Not sure who I should assign this to, but any suggestions for a fix?


Related issues: 1 (1 open, 0 closed)

Related to Appion - Bug #3938: Aligned particle problem (New, 02/08/2016)

Actions #1

Updated by Neil Voss over 8 years ago

  • Status changed from New to Won't Fix or Won't Do

I think it was caused by a version mismatch between trunk appionlib and myami-3.2 sinedon.

Actions #2

Updated by Anchi Cheng over 8 years ago

Carl and Sargis are the two who know the most about databases. This is an error on the MySQL side.

Found this from a Google search of the error:

http://stackoverflow.com/questions/93128/mysql-error-1153-got-a-packet-bigger-than-max-allowed-packet-bytes

If you wonder what query causes the error, you can have the query printed in the sinedon/directq.py function complexMysqlQuery.
This function bypasses sinedon and sends the query directly through the cursor.
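One way to log the query without dumping all of it (it can be megabytes, as it turns out below) is a small summary helper. This is only a sketch; `summarize_query` is a hypothetical name, not a function in sinedon, and where it gets called inside complexMysqlQuery is up to you:

```python
def summarize_query(query, nchars=100):
    """Return a log-friendly summary of a possibly huge SQL query:
    its total length plus the first and last nchars characters."""
    if len(query) <= 2 * nchars:
        # Short query: just report it whole.
        return "Length of Query %d\n%s" % (len(query), query)
    return "Length of Query %d\n%s\n%s" % (
        len(query), query[:nchars], query[-nchars:])
```

Printing the summary instead of the raw query keeps the log readable while still showing whether the statement is anywhere near max_allowed_packet.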

Actions #3

Updated by Neil Voss over 8 years ago

  • Status changed from Won't Fix or Won't Do to New

Opening again, because I am now using 100% myami-3.2, and it still fails.

Instead of printing the query, I printed len(query) and the first and last 100 characters:

Length of Query 2908513
INSERT INTO `ApStackParticleData` (`particleNumber`,`REF|ApStackData|stack`,`edgemean`,
','4333.1982','2805.2053','347.8909222','344.6026','1','356.2032','0.1305',

And I am glad I did, because it is 2.9 million characters long.

Actions #4

Updated by Neil Voss over 8 years ago

  • Status changed from New to Assigned
  • Assignee set to Neil Voss

I found the problem in apStack.py and I am working to limit the length of the query and break it up into chunks.
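The chunking approach described above can be sketched as follows. This is illustrative only, not the actual apStack.py code: `chunked_insert` is a hypothetical name, and production code should use parameterized queries (e.g. `executemany`) rather than string formatting, which is shown here only to mirror the assembled-SQL style in the traceback:

```python
def chunked_insert(cursor, table, columns, rows, chunksize=100000):
    """Insert rows in batches so that no single multi-row INSERT
    statement exceeds the server's max_allowed_packet limit."""
    colsql = ",".join("`%s`" % c for c in columns)
    for start in range(0, len(rows), chunksize):
        chunk = rows[start:start + chunksize]
        # Build one multi-row VALUES clause per chunk.
        values = ",".join(
            "(%s)" % ",".join("'%s'" % v for v in row) for row in chunk)
        sqlcmd = "INSERT INTO `%s` (%s) VALUES %s" % (table, colsql, values)
        cursor.execute(sqlcmd)
```

With 20,263 particles and a chunk size of, say, 5,000, this issues five INSERTs of a few hundred kilobytes each instead of one 2.9 MB statement.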

Actions #5

Updated by Gabriel Lander over 8 years ago

stack insertions are currently broken up into groups of 100K inserts at a time. This appears to have been working for everyone since the last appion workshop - are you sure this isn't something you can update in your mysql settings?

Actions #6

Updated by Gabriel Lander over 8 years ago

Could this be related to Bug #3938?
The solution there was to increase max_allowed_packet to 32M.

Actions #7

Updated by Neil Voss over 8 years ago

Hi Gabe, yeah, I just found it. Initially, I copied the MySQL "huge" settings as the default:

cp -fv /usr/share/mysql/my-huge.cnf /etc/my.cnf

But it did have:

# The MySQL server
[mysqld]
...
max_allowed_packet = 1M

so I changed it to

max_allowed_packet = 16M

and restarted mysqld. As your comments in the file note:

# break up command into groups of 100K inserts
# this is a workaround for the max_allowed_packet at 16MB
n = 100000
 ... Assembling database insertion command
 ... Inserting particle information into database
Length of Query 2908513
connecting

Inserted 20263 stack particles into the database in 56.56 sec

But the default for mysql huge is max_allowed_packet at 1MB.
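The arithmetic checks out: the failing query was 2,908,513 bytes, which is over the 1M default from my-huge.cnf but comfortably under 16M. A minimal sketch, assuming a hypothetical `parse_mysql_size` helper for my.cnf-style size suffixes:

```python
def parse_mysql_size(value):
    """Convert a my.cnf size string like '1M' or '32M' to bytes.
    Sketch only: handles K/M/G suffixes and plain byte counts."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip().upper()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)

query_bytes = 2908513  # length of the failing INSERT reported above
assert query_bytes > parse_mysql_size("1M")   # too big for the my-huge.cnf default
assert query_bytes < parse_mysql_size("16M")  # fits once the limit is raised
```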

Actions #8

Updated by Neil Voss over 8 years ago

  • Related to Bug #3938: Aligned particle problem added
Actions #9

Updated by Neil Voss over 8 years ago

yep. definitely the same.

Actions #10

Updated by Neil Voss over 8 years ago

  • Status changed from Assigned to Closed
  • Assignee deleted (Neil Voss)
