Optimal Storage System Setup for DD frame saving
Added by Sebastian Scherer about 10 years ago
Hi
I wonder about the optimal storage system configuration for DD frame saving.
The space on our highly backed-up NAS is limited, so I don't want to store all the stacks there.
My current plan is to install an additional server without data duplication and store the stacks there temporarily.
If a user is interested in the aligned stacks, he/she has to rescue them before a cron job deletes them.
I checked your storage system advice and now I have the following questions:
We have the two folders mount_point/whatever/leginon and mount_point/whatever/frames.
Do they have to be on the same storage server? I hope not ;)
What is, in your opinion, the optimal way to deal with a huge number of stacks?
Can you share some best-practice advice?
Thanks a lot and best,
Sebastian
Replies (4)
RE: Optimal Storage System Setup for DD frame saving - Added by Anchi Cheng about 10 years ago
Hi, Sebastian,
The released code does require the same storage server because the path switch retains "mount_point/whatever". Our backup system allows separate backup rules for different folders, so we have not had the problem you describe. You do have a good point, though; I will make it configurable in the next release.
Since the main code is written in Python, you can make local changes to your myami/leginon/ddinfo.py to modify this behavior for now. Find the function getRawFrameSessionPathFromSessionPath(session_path) and modify
legjoin = legdir.join(legsplit[:-1])
to your liking.
legjoin is equivalent to "mount_point/whatever/" in your example.
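To sketch the idea (this is not the exact released code, just a minimal illustration; FRAMES_BASE is a hypothetical setting you would define for your own setup), a local override could look something like this:

import os

FRAMES_BASE = '/scratch/frames'  # hypothetical frames location on another server

def getRawFrameSessionPathFromSessionPath(session_path):
    # Keep everything after the 'leginon' directory component and graft it
    # onto the configurable frames base instead of a sibling 'frames' folder.
    legsplit = session_path.split('leginon')
    if len(legsplit) < 2:
        # No 'leginon' component (e.g. already a frames path); leave it unchanged.
        return session_path
    tail = legsplit[-1].lstrip(os.sep)
    return os.path.join(FRAMES_BASE, tail)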
Regarding the optimal way to deal with a huge number of stacks, I have described what we do on the wiki page Minimum_Requirements_and_current_NRAMM_setup, and in my opinion it is the best practical approach. If anything, I would add one step to keep space free on the frames path: compress the raw stacks after however many days people need to finish a default round of frame alignment; 10 days is what would work for us. Only the person running the experiment has priority on the GPU clusters; if they wait too long, their processing drops to a lower priority in the queue. A well-organized person with available computers finishes the alignment within 2-3 days after the experiment session if not using K2 super-resolution mode. The reason for the compression step is to help users with backing up and to keep the space more usable.

The other huge space saving would be the practice of deleting the aligned full image stack after the particle movie stack is made. Not sure what users would say to a cron job doing that, though ...
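If you wanted to automate that compression step, a cron-driven script along these lines could work (only a rough sketch, not part of Leginon/Appion; the frames path, the 10-day threshold, and the .mrc pattern are placeholders to adapt):

import os
import time
import gzip
import shutil

FRAMES_ROOT = '/scratch/frames'   # hypothetical frames location
MAX_AGE_DAYS = 10                 # grace period for a default round of frame alignment
cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600

for dirpath, dirnames, filenames in os.walk(FRAMES_ROOT):
    for name in filenames:
        if not name.endswith('.mrc'):
            continue
        path = os.path.join(dirpath, name)
        if os.path.getmtime(path) > cutoff:
            continue
        # gzip the raw stack and remove the uncompressed original
        with open(path, 'rb') as src, gzip.open(path + '.gz', 'wb') as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)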
RE: Optimal Storage System Setup for DD frame saving - Added by Sebastian Scherer about 10 years ago
Hi Anchi
Thanks for your fast reply.
Good to know that I have to change only one location.
Anyhow, the documentation of this function (getRawFrameSessionPathFromSessionPath) is a bit misleading, as it currently reads:
Possible senerios:
1. input: /mydata/leginon/mysession/rawdata; output: /mydata/leginon/mysession/rawdata.
2. input: /mydata/leginon/myuser/mysession/rawdata; output: /mydata/leginon/myuser/mysession/rawdata.
3. input: /mydata/frames/mysession/rawdata; output=input
Shouldn't it rather be the following (see the quick check after the list)?
1. input: /mydata/leginon/mysession/rawdata; output: /mydata/frames/mysession/rawdata.
2. input: /mydata/leginon/myuser/mysession/rawdata; output: /mydata/frames/myuser/mysession/rawdata.
3. input: /mydata/frames/mysession/rawdata; output=input
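To make the expectation explicit, here is a quick check of the three scenarios, assuming the mapping simply swaps the leginon directory for frames on the same mount point (illustration only, not the shipped code):

cases = {
    '/mydata/leginon/mysession/rawdata': '/mydata/frames/mysession/rawdata',
    '/mydata/leginon/myuser/mysession/rawdata': '/mydata/frames/myuser/mysession/rawdata',
    '/mydata/frames/mysession/rawdata': '/mydata/frames/mysession/rawdata',
}
for session_path, expected in cases.items():
    mapped = session_path.replace('/leginon/', '/frames/', 1)
    assert mapped == expected, (session_path, mapped, expected)
print('all three scenarios map as expected')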
Thanks again for your help; I will quickly update ddinfo.py on all the machines of our setup once the new server is in place.
Best,
Sebastian
RE: Optimal Storage System Setup for DD frame saving - Added by Sebastian Scherer about 10 years ago
Hi Anchi
We came up with a handy workaround:
mount_point/whatever/frames
will be a symlink to our scratch storage system.
Leginon will not even realise that the frames are on a different physical server, and since our backup system works at block level, it does not back up the content of this folder.
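Setting it up is a one-off step on the storage host; a minimal sketch with placeholder paths (in practice just a symlink created once by the admin):

import os

scratch_frames = '/scratch/frames'             # hypothetical scratch target
frames_path = '/mount_point/whatever/frames'   # path Leginon expects

if not os.path.islink(frames_path):
    os.symlink(scratch_frames, frames_path)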
Best,
Sebastian
RE: Optimal Storage System Setup for DD frame saving - Added by Anchi Cheng about 10 years ago
Cool !