Minimum Requirements and current NRAMM setup » History » Revision 59
Revision 58 (Anchi Cheng, 09/13/2017 09:32 PM) → Revision 59/75 (Anchi Cheng, 09/14/2017 04:48 PM)
h1. Minimum Requirements and current NRAMM setup

h1. Hardware

h2. Electron Microscope/Camera with their Controlling Computers (Windows)

The microscope needs the capacity for external control and must be network connected (see the [[Network Configuration]] section for details). Here are known examples of Leginon implementations:

* FEI: [[FEI TecnaiTitan installation specifics|Tecnai, Polara, Titan Krios]] (film recording available with [[Windows_Camera_Package_Requirement#Film_(Only_for_FEI_Tecnai/Titan)|adaexp.exe]])
* JEOL: [[JEOL installation specifics|1230, 1400, JEM3100FSC, 3200]]

h2. Digital Camera

Gatan ([[Gatan_on_Windows-32|CCD]] and [[Gatan_K2_installation_and_setup|K2 Summit]]), [[Tietz_camera_installation_and_setup|Tietz]], FEI ([[Eagle_camera_installation_and_setup|Eagle]], [[Ceta camera support|Ceta]], and [[Falcon camera support|Falcon]]), Direct Electron ([[Adding_DE-12_to_Leginon|DE-12, DE-20]]).

*Note: Falcon integration does not include the frame processing pipeline.*

h2. A Second Computer Running Linux (CentOS at NRAMM)

At NRAMM, we separate the processing, database, and web-serving activities onto different computers that serve about 15 people with three microscopes that could be running at the same time. All scopes share the same database, web server, and file server. Each microscope has its own processing computer.

*We do not recommend a Windows PC as the second computer.* One group was able to use an unusually powerful Windows PC (the one that came with their Gatan K2 Summit) as the processing server; however, this is suspected as the reason their frame-saving super-resolution acquisitions failed.

h3. CPU

Minimum 2 GHz.

* The Python instance of Leginon runs on only one core of the processing server. If you have multiple cores, the rest are useful only if you want to do other things on the machine, such as running the database server, web server, or basic Appion processing.

NRAMM:

* Processing server: single quad-core Intel Xeon E5-1607 v2 @ 3.00 GHz.
One computer per microscope.

* Database server: 6 Xeon E5-2689 v4 cores @ 3.1 GHz.
* Web server: 8 Intel Xeon X5667 cores @ 3.00 GHz.

h3. RAM

The whole system, with its image processing, database queries, and web serving, needs significant memory. Realistically, you will need a minimum of 4 GB of memory for all processing + database + web server activities for one-microscope operation with a 4k camera serving two people at the same time (one operating the scope, one just looking at the images in the web viewers). We know of at least one successful daily usage at this configuration. For a 2k camera, an all-in-one computer with 3 GB of memory has also been used successfully. If you are buying a new computer, getting at least 6 GB of memory would be a good idea.

At NRAMM, to serve about 15 people viewing images, with three microscopes that could be running at the same time:

* Processing server: 4 GB physical memory and 2 GB swap for years, now at 8 GB and 8 GB, respectively. One computer per microscope.
* Database server: 64 GB memory and 18 GB swap.
* Web server: 12 GB memory and 12 GB swap.

h3. Display

Pretty much any display sold today will work for data acquisition. The GPU server for frame alignment of direct detector cameras is separate; see below.

h3. File server

10 GB covers the software and maybe a few hours' worth of data collection; much larger is needed for routine use. NRAMM: 45 TB on RAID and growing, although some data are archived.

h3. Network connection speed

100 Mbps might be possible. NRAMM: 1 Gbps.

h2. Additional needs for frame-saving direct detection cameras

h3. File server

Frame-saving cameras such as the DE-12 and K2 Summit are capable of saving the movie of an exposure in addition to returning an integrated image to Leginon. As a result, if this function is used, the disk space required is a multiple of that of the image alone. Leginon saves the frames as non-gain-corrected 16-bit integers rather than dark/gain-corrected 32-bit float MRC.
Therefore, the additional storage requirement is approximately *number_of_frames/2* times larger. Typical number_of_frames is 10-50 for DE cameras and 20-30 for the K2 Summit. These frames should be off-loaded from the camera computer, or saved to a network drive, as soon as possible so as not to overload the camera computer.

In addition, to use the information in the frame movies, the raw frames must be [[appion:GainDark_correction_of_the_raw_frame_with_or_without_drift_correction|gain/dark corrected]] and saved as a 32-bit float MRC stack. For K2 Summit counted/super-resolution modes, alignment of the frames is also essential. This means that at some point, the data related to one image will be

<pre>
number_of_frames * (0.5 + 1 + 1)
</pre>

times larger than for a non-frame-saving acquisition. Factoring in that hundreds of such images may be acquired within a 24-hour session, it is important to take this into account when allocating the data storage system for the long term.

h4. NRAMM's current setup and policy for file storage:

# 10 Gbps network between the frame-saving camera and the file server.
# Raw frames are transferred off the camera with rsync using [[DDD raw frame file transfer|rawtransfer.py]], which also removes the finished frame stacks on the camera to make room for more to come.
# These raw frames are needed if the default frame processing does not give optimal results. We keep them on a network drive for 30 days and make it the user's responsibility to archive them on external drives. bzip or bzip2 are commonly used by the users to compress the files. Since the raw frames are integers, they can easily be compressed to 4-10 times smaller.
# In the [[appion:GainDark_correction_of_the_raw_frame_with_or_without_drift_correction|frame processing]] Appion script, if frame alignment is performed (usually finished within a day or two after data acquisition), the aligned frame stack is removed right away after integration into a single sum image, to save space.
# Users then use the summed aligned movie saved in the Leginon database as the "-a" images. 95% of users do not require the frame-aligned stack after this point.

h3. DD frame-alignment server

A good GPU is needed for frame alignment using the program described in Li et al. (2013) Nature Methods 10:584-590 and its variants. If real-time speed is desired, parallel processing on multiple hosts may be needed.

*For MotionCor2*

h4. Minimum:

A CUDA 8.0-capable standard Linux computer whose monitor you do not need to access (and hence whose GPU is not needed for display) during the alignment computation.

h4. Recommended for direct file server connection:

Have one primary frame-processing computer per direct detector, used by the person currently on the scope, plus an overflow frame-processing computer shared among scopes and among users not able to finish during their session. Our experience is that the primary frame-processing computer cannot keep up with super-resolution data collection if binning is not applied ahead of time. The shared GPU can also be used for gCTF, gAutomatch, and other lightweight GPU programs. Counted-mode movies can be processed fast enough with just the primary frame-processing computer.

Use a strong GPU computer with two GPU cards, NVIDIA PNY GeForce GTX 1070 or better, connected to the file server over the fastest network you can; there will be a lot of traffic. Unless the network is optimized, adding more GPUs to the same computer does not speed things up proportionally.

h4. NRAMM's current setup:

A dedicated buffer server, one per microscope with a DD camera, supplemented with a shared GPU resource. The buffer server is reserved for the user currently collecting data on the scope. Our current buffer server specs: 2U dual 2.1 GHz Intel E5-2620 v4 with 128 GB memory (8x 16 GB), 9x 8 TB 7.2K SATA drives, 1x 120 GB SSD drive, 2x NVIDIA PNY GeForce GTX 1080, and 1x dual 10GbE SFP+ card (2 ports).
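The per-exposure storage arithmetic from the file-server section above can be sketched as a quick estimator. This is only an illustration: the function name and the 57 MB summed-image size are assumptions for the example, not NRAMM figures.

```python
# Quick per-exposure storage estimate for frame-saving collection,
# following the number_of_frames * (0.5 + 1 + 1) multiplier in the text.
# storage_per_exposure_mb is a hypothetical helper, not part of Leginon.

def storage_per_exposure_mb(number_of_frames, image_mb):
    raw = 0.5 * number_of_frames * image_mb        # raw 16-bit integer frames
    corrected = 1.0 * number_of_frames * image_mb  # gain/dark-corrected 32-bit stack
    aligned = 1.0 * number_of_frames * image_mb    # aligned 32-bit stack
    return raw + corrected + aligned

# Example: a 30-frame exposure whose 32-bit float summed image is 57 MB
# (roughly a 4k x 4k image; purely illustrative)
print(storage_per_exposure_mb(30, 57))  # 4275.0 MB, vs 57 MB without frame saving
```

Multiplying such totals by the hundreds of exposures collected in a 24-hour session gives the long-term allocation the text recommends planning for.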
Our buffer server is connected to the camera computer by a direct fiber link, with an SFP+ optical module for 10GBASE-LR installed on the buffer server. We also have the following card installed on the buffer server to connect it to the DDN/GPFS file system through InfiniBand switches: ConnectX-3 VPI adapter card, dual-port QSFP, FDR IB (56 Gb/s) / 40GigE, PCIe 3.0 x8 8GT/s.

h5. Shared GPU resource:

Connected to the DDN/GPFS file system, with one or more NVIDIA PNY GeForce GTX 1070 or better GPU cards. If made stronger, it can also be used for the GPU versions of Relion and/or CryoSPARC.

*For CPU programs that do frame alignment*

NRAMM: a 12-core computer with the Torque scheduler is used specifically for DE-20 frame alignment.

h2. An example of Configuration A that includes GPU frame-alignment capacity

We do not do this at NRAMM, as our scale needs a distributed system, but it is doable, at least for a while. We designed this as our backup data collection system so that it supports several days of data collection in case the network and rack-mounted resources go offline. This is basically the frame-alignment GPU server.

- Advanced HPC Mercury GPU408 4U rack-mountable workstation
- Dual 2.1 GHz 8-core CPUs (16 cores total can run the database, web server, Leginon, and very minimal Appion processing)
- 128 GB RAM
- 1 TB SSD scratch space
- GTX 1050 GPU for graphical display
- Two GTX 1070 GPU cards (will support real-time frame alignment in counted mode)
- 12 TB of storage in RAID configuration (will support 3-4 days of data collection)
- 10 Gb network card to support direct transfer from the K2 camera and connection to the scope

h1. Software

h2. Leginon system components developed at Leginon home

Leginon Home: "http://www.leginon.org/":http://www.leginon.org/

h2. Supporting packages and programs available through the internet or your Linux distribution

There are a minimum of ten packages or single programs; some of them are included in your Linux distribution.

h2. Leginon supporting programs available upon request

*adaexp.exe*, which is required if film exposures are to be made through Leginon on FEI Tecnai machines, is available by request. Please contact Max Otten (mto at nl.feico.com) and let him know what version of the Tecnai user interface you are using.

______

[[Graphical User Interface|< Graphical User Interface]] | [[Getting Started|Getting Started >]]

______