Option for maxlike alignment using a radio button; <input type='text' > box to define the out dir.

We want to upload images, including defocal pairs, into the database:

--images-per-tilt-series=3 --images-per-defocal-group=3

Per-image parameters are given as:

imagename <tab> defocus <tab> angle

--angle-list=-45,0,45 --defocus-list=-2.0e-6,-4.0e-6,-6.0e-6
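As a sketch of the tab-separated parameter file format above (imagename, defocus, angle), the snippet below writes one row per image. The file name uploadparams.txt and all image names and values are made up for illustration; they are not part of the Appion upload tool.

```python
# Hypothetical example: write the per-image parameter file described above,
# one "imagename <tab> defocus <tab> angle" row per line.
rows = [
    ("tilt_a_001.mrc", -2.0e-6, -45),   # made-up image names and values
    ("tilt_a_002.mrc", -2.0e-6, 0),
    ("tilt_a_003.mrc", -2.0e-6, 45),
]

with open("uploadparams.txt", "w") as f:
    for imagename, defocus, angle in rows:
        f.write("%s\t%s\t%s\n" % (imagename, defocus, angle))
```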
inc — include directory, common libraries like appionlib
js — javascript directory, contains help.js, the help pop-up window text
img — image directory, contains images like program logos (e.g., EMAN, Appion, Spider) and check, cross, and ext icons for the image assessor
css — style sheets, custom look and feel of the website, usually not modified

The inc folder:
appionloop.inc — contains the common interface for all appionLoop programs (CTF estimators, Particle pickers, and Make Stack)
euler.inc
menuprocessing.php — a single, giant function that generates the menu on the left side of all appion pages
particledata.inc — collection of all database queries, e.g., getStackIds(), getParticlesFromImageId(), getPixelSizeFromImgId(), or getStackIdFromReconId()
processing.inc — functions that are common to all Appion pages, e.g., processing_header(), referenceBox(), submitAppionJob(), getProjectId()
summarytables.inc — provides summary tables that are common on many pages, e.g., stacksummarytable(), alignstacksummarytable(), and modelsummarytable()
checkAppionJob.php — follow the progress of the job (common to all scripts, except reconstructions; see checkRefineJobs.php)
index.php — provides the final report with methods section for any exemplar reconstructions
config.php — the customization file for the web interface
viewstack.php — the famous stack viewer
selectParticleAlignment.php — a selection page for each program that provides a detailed description to help the user decide which one to use
runDogPicker.php — setup parameters and run the program
checkAppionJob.php — follow the progress of the job (common to all scripts, except reconstructions; see checkRefineJobs.php)
stackhierarchy.php — show all runs of that process type, e.g., stacks
stackreport.php — report on a particular run of that process type, e.g., stack id 12

We have two major folders in the appion folder:
Other useful directories:
see also source:trunk/leginon/leginondata.py and source:trunk/leginon/projectdata.py
setupParserOptions, checkConflicts, start

--runname, --commit/--no-commit, --rundir, --projectid, and --description

--sessionname, --preset, --limit, --continue, --shuffle, and --no-rejects

--runname to be defined

svn checkout http://ami.scripps.edu/svn/myami/trunk/ myami/
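The three methods named above (setupParserOptions, checkConflicts, start) form the life cycle of an Appion script. Real scripts subclass appionScript.AppionScript from appionlib; since that package is not importable here, the sketch below uses a toy stand-in base class purely to show the flow, and the ExampleScript name and option handling are illustrative, not the real API.

```python
# Toy sketch of the Appion script life cycle: parse options, check for
# conflicts, then do the work. The base class here is a stand-in for
# appionScript.AppionScript so the example runs on its own.
from optparse import OptionParser

class ToyAppionScript(object):
    def __init__(self, argv):
        self.parser = OptionParser()
        self.setupParserOptions()                 # add script-specific options
        options, _ = self.parser.parse_args(argv)
        self.params = vars(options)
        self.checkConflicts()                     # validate the combination

    def run(self):
        self.start()                              # do the actual work

class ExampleScript(ToyAppionScript):
    def setupParserOptions(self):
        self.parser.add_option("--runname", dest="runname")
        self.parser.add_option("--description", dest="description", default="")

    def checkConflicts(self):
        if not self.params["runname"]:
            raise ValueError("--runname must be defined")

    def start(self):
        print("running %s" % self.params["runname"])

if __name__ == "__main__":
    ExampleScript(["--runname=run1"]).run()
```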
<center>
At AMI, you do not normally need a sinedon.cfg file, but if you have your own testing environment you will need one.
</center>
h4. Web page
At AMI, put myamiweb into your ami_html directory and it will be available on both cronus3 and fly as http://cronus3.scripps.edu/~username/myamiweb/
You will need to run Eric's web setup wizard to get it working: http://cronus3.scripps.edu/~username/myamiweb/setup/
Your PYTHONPATH should contain both the myami folder and the myami/appion folder.
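One quick way to verify the path is set correctly is to check that the two top-level packages can be found. This sketch assumes the package names leginon and appionlib, which live under myami/ and myami/appion/ respectively.

```python
# Sanity check that the two source trees are importable once PYTHONPATH
# is set; prints "found" or "MISSING" for each assumed package name.
import importlib.util

for module in ("leginon", "appionlib"):
    spec = importlib.util.find_spec(module)
    print("%-10s %s" % (module, "found" if spec else "MISSING"))
```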
2010 ADW/Add a new option to a program
2010 ADW/Organization of the Python programs
2010 ADW/Organization of the PHP programs
2010 ADW/Create an alignment program from scratch
2010 ADW/Create an image uploader program from scratch
The 2 Way Viewer allows you to view the selected image in two image view panes side by side. The following example shows the original mrc image next to its Fourier transform. For more details see Image Viewer Overview.
2 Way Viewer Screen:
< Image Viewer | 3 Way Viewer >
The 3 Way Viewer allows you to view the selected image in 3 adjacent Image View panes. The following example shows the original mrc image along with a heat map view and Fourier transform. For more details see Image Viewer Overview.
3 Way Viewer Screen
< 2 Way Viewer | Dual Viewer >
This section contains procedures for calculating initial models from tilted and untilted datasets.
< Particle Alignment | Refine Reconstruction >
Hi everybody,
I think we are close to getting a working version of ACEMAN -- ACE for EMAN.
Scott - If you want to test it you can copy the directory ace_cvs and run
aceman
from inside matlab. It basically reads in an imagic file and writes out another imagic file which has ctf information embedded in it. So all you need to do to check how good the fits are is do
ctfit outputfilename.hed
A typical fit looks like this
I have changed the way envelope is calculated. See for example
http://graphics.ucsd.edu/~spmallick/ctf ... envfit.png
I have also gone through EMAN's source code to figure out how exactly the parameters of ACE and EMAN are related. There are a few things, though, which I do not understand yet -- noise_const is off by around 1% (I have hardcoded a compensation). Secondly, it is not clear to me if we can embed the astigmatism parameter in the imagic files.
I will work on ACEMAN again this evening/night to find the story behind the 1% error.
Satya
Hello everybody,
I was wondering if people want to test an experimental version of ACEMAN -- ACE for EMAN. ACEMAN takes in a stack of picked particles in imagic ( hed/img ) format and embeds the ctf parameters into it. You could then use
ctfit output_file.hed
to see how good the ctfits are. ACE and EMAN define the Envelope function in a slightly different way and so I had to make some changes in the core ace function.
Here are the steps for installation
1. Change to your ace directory.
cd ace_directory
2. Download the above file in the ace directory.
3. Untar and unzip it
tar -zxvf aceman.tgz
A few files will be extracted into the ace directory.
4. Start MATLAB
5. Inside MATLAB do
aceman
There is no documentation yet, but if you have used acedemo, it should be straightforward. Please let me know if you get good/bad results.
Again, this is an experimental version, so I would not recommend it for real experiments yet.
Regards
Satya
This algorithm is faster than ACE 1 and includes astigmatism estimation.
< CTF Estimation | Create Particle Stack >
Hello everybody,
A few people had asked me for scripts (instead of the GUI) to run ace. So here it is.
I will add it to the next ace release if I do not hear any complaints. You will have to edit the script to use it. So start MATLAB and do
edit acescript.m
Read the comments, and if you have used acedemo, it should be fairly easy to follow. If you do
help acescript
it will give you the list of variables you might want to edit. If you do not understand what a particular variable means, just start acedemo and do a side by side comparison.
Regards
Satya
After you have added a new refinement job class, it needs to be added to the job running agent by editing the file apAgent.py in appionlib.
Ex.

elif "newJobType" == jobType:
    jobInstance = newModuleName.NewRefinementClass(command)
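The if/elif chain in the agent amounts to a mapping from jobtype string to the class that runs it, which can be sketched as below. The names newJobType and NewRefinementClass mirror the hypothetical example above, and make_job_instance is an illustrative helper, not the real apAgent.py API.

```python
# Sketch of the dispatch idea behind the agent's if/elif chain: look up
# the jobtype string and construct the matching refinement class.
class NewRefinementClass(object):          # stand-in for a refinement job class
    def __init__(self, command):
        self.command = command

def make_job_instance(jobType, command):
    dispatch = {
        "newJobType": NewRefinementClass,
        # ...one entry per supported refinement job type
    }
    if jobType not in dispatch:
        raise ValueError("unknown jobtype: %s" % jobType)
    return dispatch[jobType](command)
```

A real agent would populate the table from the refinement modules it imports.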
If your web server installation is successful, a number of tables will be propagated in the databases. There were several options for setting up database user privileges recommended in Database Server Installation. The following additional steps should be taken, depending on which option you previously used.
GRANT DELETE ON leginondb.ViewerImageStatus TO usr_object@'localhost';
GRANT DELETE ON projectdb.shareexperiments TO usr_object@'localhost';
GRANT DELETE ON projectdb.projectowners TO usr_object@'localhost';
GRANT DELETE ON projectdb.processingdb TO usr_object@'localhost';

GRANT DELETE ON projectdb.gridboxes TO usr_object@'localhost';
GRANT DELETE ON projectdb.grids TO usr_object@'localhost';
GRANT DELETE ON projectdb.gridlocations TO usr_object@'localhost';

GRANT DELETE ON leginondb.ViewerImageStatus TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.processingdb TO usr_object@'%.mydomain.edu';

GRANT DELETE ON projectdb.gridboxes TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.grids TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.gridlocations TO usr_object@'%.mydomain.edu';
< Web Server Installation | Create a Test Project >
name: my_scope
hostname: whatever
type: Choose TEM

name: my_scope
hostname: whatever
type: Choose CCDCamera
[Note] If you use Leginon, and still want to upload non-Leginon images, make sure that you create a pair of fake instruments like these on a host solely for uploading. It will be a disaster if you don't, as the pixelsize of the real instrument pair will be overwritten by your upload.
Note: The Administration tool is only available to users who belong to a Group with administrative privileges.
After a new installation, you will have to input Groups, Users, and Instruments into the database you have just created. Applications will need to be imported, too. These tasks can be performed through the web-based Administration Tools.
The user named "administrator" is a special user in Leginon. Once the administrator defines the setting preferences in a node of a given class and alias, all newly created users get these settings when they launch the node, until they make changes themselves. This allows a faster setup per database (institute) for beginners. Therefore, the first user should be named "administrator", and this account should be used in a way that keeps these default preferences stable.
A Leginon user set up in the administration tool defines his/her own preferences once they are changed from the "administrator" user defaults above. The Leginon user is also not related to the computer login user. Therefore, it is necessary to go through the steps outlined in the "<link linkend="admin_adduser">Set up for a new regular user</link>" section.
See <link linkend="Inst_Adm">Installation Troubleshooting</link>, and search the Leginon Bulletin Board for "admin", if you run into problems.
Open a web browser. Go to http://localhost/myamiweb/admin.php
Groups are used to associate users together. At the moment, Leginon does not use the group association for anything.
See the section on <link linkend="instrument_names">Instrument Tool</link> for more details.
<blockquote>
The most commonly used Leginon applications are included as part of the Leginon download. These XML files are in a subdirectory of your Leginon download and installation called "applications". The XML files should be imported using the web-based application import tool. Each application includes "(1.5)" in its name to indicate that it will work with this new version of Leginon. The applications that carry the older version name are compatible with the older Leginon.
To find Leginon installation path on Linux:
>start-leginon.py -v
On Windows, you should find a shortcut to your Leginon installation folder in "Start Menu/All Programs/Leginon". If not, it is likely
C:\Python25\Lib\site-packages\Leginon\
<link linkend="runleg_chapter">Leginon test runs</link> test the TEM/CCD controls and network communications. The rest of this chapter is for reference.
A Leginon user set up in the administration tool defines his/her own preferences once they are changed from the "administrator" user defaults above. The Leginon user is also not related to the computer login user. Therefore, it is necessary to go through the following steps to set up an existing computer user as a new Leginon user:
Copy <link linkend="leginon_cfg">leginon.cfg</link> and <link linkend="sinedon_cfg">sinedon.cfg</link> (if not set globally for all users) from an existing user to the home directory of the new user.
Modify the [user] "Fullname" field in <command moreinfo="none">leginon.cfg</command> to correspond to the "full name" field in the Leginon Administration User Tools.
Groups are used to associate users together. At the moment, Leginon does not use the group association for anything.
This is used to add details about the microscope and CCD Leginon will be connected to. More than one instrument can be added with different configurations. The "import" function is useful if the instrument information has been stored on different machines in different Leginon databases.
Applications define how nodes are linked together in order to form a specialized Leginon application or program. Because Leginon uses a nodal, or modular, architecture, multiple applications can be created by linking together nodes in different fashions suitable for the current experiment. Several default Leginon applications are distributed with the release. This section enables the Leginon user to import and export applications.
It should contain tables of Application Data, NodeSpec Data, and likely BindingSpec Data.
Good calibrations are absolutely essential to running Leginon. They can also be very time consuming. As a rudimentary way of starting up without calibrating the current instrument specifically, or to revert to a previously saved calibration, this import/export calibration tool can be quite useful.
The goniometer movement must be modeled for finer movements. Leginon calibrates this movement through the Gon Modeler node. Through this feature, the models for these movements can be viewed graphically.
Use this option if you want to create substacks of already aligned particles. This option is useful for cleaning your dataset by excluding bad class averages.
< More Stack Tools | Particle Alignment >
Use this tool if you want to correlate and box the particles of a Random Conical Tilt session manually. Before you can run this program you need to pick the particles with one of the available picking tools: Dog Picking, Manual Picking, or Template Picking.
< Particle Selection | CTF Estimation >
Currently, there are three working methods for automated or semi-automated alignment:
For developers:
appiondata tables involved in this process
< Tomography | Create Full Tomogram >
First, you can try the executable that already has the plugins installed. Get eclipse.tar.gz. Just copy it to your machine and uncompress it; the executable is available within the eclipse directory. No further installation steps are required.
If you want to install things yourself, this is what you need:
See also: PHP: Debugging with Eclipse
There are 2 types of development that you will most often do with the MyAMI code, Python for core processing and PHP for the web interface.
Go to Window -> Preferences -> PyDev -> Editor -> Interpreter – python. Press the Auto Config button then press OK.
There are two ways to view the web applications that you are developing in your home directory. If you are developing on a machine that does not have a local Apache server, you can use the Cronus3 web server. The advantage of this is that all the image processing plugins are already installed on Cronus3, so you don't have to worry about them and you don't have to worry about making Apache work. If you do run Apache locally, you can take advantage of integrated debugging tools in Eclipse and learn more about how all the pieces of the project fit together, since you will have to set more things up.
Also note that the directions below will not get project_tools running. It is currently undergoing many changes and directions will be added when that process is complete.
Follow Use Cronus3 or Fly to view your web app
Use the setup wizard to create the config file by browsing to myamiweb/setup.
At AMI, you would go to cronus3/~YOUR_HOME_DIR/myamiweb/setup.
IMPORTANT: Never check your local copy of the config files into Subversion. We don't want to share our database user information with the world. You can right click on the config file and select team -> svn:ignore to tell svn to ignore this file.
If you want to work on the databases and you would prefer to have a local copy to play with, read How to set up a local copy of AMI databases.
You also have an option of creating a copy of the database that you wish to work with on the fly server. You will name your DB with your name prepended to the name of the DB that is copied. You will need to update your Config file accordingly. You can work with your DB without affecting formal testing on fly or the production databases.
Merge from trunk r14376 and r14383 to 2.0 branch, Fix for Post-processing does not work with FREALIGN jobs , refs #657
If you have not registered, click on the Register link in the top right corner of the website. After submitting the information requested, an AMI administrator will be notified via email that your registration is pending approval. When you are approved, Sign in using the Sign in link at the top right corner of the website.
Once you have registered, you can post questions on the Forums.
Select Projects from the top left corner of the website. You will see projects that you are a member of, as well as those that are public.
Although there is not a clear division in the software code between Appion and Leginon, we use these as project names to be consistent with how the products are presented to the rest of the world.
The single svn repository holding the code for both products is available from both Redmine projects.
When you are ready for the next level of issue settings check out the Issue Workflow Tutorial.
To edit, click the Edit link at the top of the page.
To create a new wiki page, edit an existing one and include double brackets around the name of the page that you wish to create like this:
[[My new wiki page]]
It will appear as a red link when it is saved.
Click on the link and add content to your new page.
Remember to save it before navigating away or you will lose all your hard work!
More information on Wiki creation is here.
Appion includes software from the following packages (more information on most of these packages can be found in the Wikibook describing EM software packages):
addplugin("processing");
// Check if IMAGIC is installed and running, otherwise hide all functions
define('HIDE_IMAGIC', false);
// hide processing tools still under development.
define('HIDE_FEATURE', true);
$PROCESSING_HOSTS[] = array(
	'host' => 'LOCAL_CLUSTER_HEADNODE.INSTITUTE.EDU', // for a single computer installation, this can be 'localhost'
	'nproc' => 32, // number of processors available on the host, not used
	'nodesdef' => '4', // default number of nodes used by a refinement job
	'nodesmax' => '280', // maximum number of nodes a user may request for a refinement job
	'ppndef' => '32', // default number of processors per node used for a refinement job
	'ppnmax' => '32', // maximum number of processors per node a user may request for a refinement job
	'reconpn' => '16', // recons per node, not used
	'walltimedef' => '48', // default wall time in hours that a job is allowed to run
	'walltimemax' => '240', // maximum hours in wall time a user may request for a job
	'cputimedef' => '1536', // default cpu time in hours a job is allowed to run (wall time x number of cpu's)
	'cputimemax' => '10000', // maximum cpu time in hours a user may request for a job
	'memorymax' => '', // the maximum memory a job may use
	'appionbin' => 'bin/', // the path to the myami/appion/bin directory on this host
	'appionlibdir' => 'appion/', // the path to the myami/appion/appionlib directory on this host
	'baseoutdir' => 'appion', // the directory that processing output should be stored in
	'localhelperhost' => '', // a machine that has access to both the web server and the processing host file systems to copy data between the systems
	'dirsep' => '/', // the directory separator used by this host
	'wrapperpath' => '', // advanced option that enables more than one Appion installation on a single machine, contact us for info
	'loginmethod' => 'SHAREDKEY', // Appion currently supports 'SHAREDKEY' or 'USERPASSWORD'
	'loginusername' => '', // if this is not set, Appion uses the username provided by the user in the Appion Processing GUI
	'passphrase' => '', // if this is not set, Appion uses the password provided by the user in the Appion Processing GUI
	'publickey' => 'rsa.pub', // set this if using 'SHAREDKEY'
	'privatekey' => 'rsa' // set this if using 'SHAREDKEY'
);
// --- Please enter your processing host information associate with -- //
// --- Maximum number of the processing nodes                       -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host1.school.edu', 'nproc' => 4); -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host2.school.edu', 'nproc' => 8); -- //
// $PROCESSING_HOSTS[] = array('host' => '', 'nproc' => );
$DEFAULTCS = "2.0";
Appion is closely related to its sister product, Leginon.
They are built on the same code base, require a similar installation procedure, and share user, project and administration management tools.
Because of these similarities, Appion and Leginon also share a single user Forum.
The Forum can be found in the Leginon product's Forums tab.
To post or reply to a message, you must be logged into this site. If you have not yet registered, first go to the Registration page.
Appion is a "pipeline" for processing and analysis of EM images. Appion is integrated with "Leginon":http://leginon.org data acquisition but can also be used stand-alone after uploading images (either digital or scanned micrographs) or particle stacks using a set of provided tools. Appion consists of a web based user interface linked to a set of python scripts that control several underlying integrated processing packages. All data input and output within Appion is managed using tightly integrated SQL databases. The goal is to have all control of the processing pipeline managed from a web based user interface and all output from the processing presented using web based viewing tools.
The underlying packages integrated into Appion include EMAN, Spider, Frealign, Imagic, XMIPP, IMOD, ProTomo, ACE, CTFFind and CTFTilt, findEM, DogPicker, TiltPicker, RMeasure, EM-BFACTOR, and Chimera. These packages must be acknowledged by appropriate citations when used within Appion. Appropriate citations are provided on the individual pages in Appion as well as here.
Follow the Appion installation instructions to download and install Appion.
If you download Appion we strongly encourage you register as an Appion user.
This will allow us to keep you informed of new releases, bug fixes, and other useful information, and also allow us to keep track of the user base which is important to ensure future support of the software.
The developers guide is the primary resource for getting started with code development.
Appion is an open source project. You are free to contribute to it.
Appion is released under the Apache License, Version 2.0
View the entire collection of Appion citations.
Please email appion@scripps.edu with any questions.
< Appion and Leginon Database Tools
related issues: #671
There is a framework in place to write a test script for a particular data set that can be run on demand. The tests may be launched from the website and results of each executed command may be viewed from the web.
The following steps should be taken to add a new test.

class Test_zz07jul25b(testScript.TestScript):

if __name__ == "__main__":
    tester = Test_zz07jul25b()
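The overall shape of such a test script can be sketched as follows. This is a toy stand-in for the testScript.TestScript framework: the subclass queues the commands for a particular data set, and the base class records a pass/fail result per executed command (which the real framework exposes on the web). All names besides Test_zz07jul25b are illustrative, not the real API.

```python
# Toy stand-in for the test framework: run each command in order and
# record its outcome so the results can be reported afterwards.
class ToyTestScript(object):
    def __init__(self):
        self.results = []

    def runCommand(self, name, func):
        # run one processing command and record pass/fail
        try:
            func()
            self.results.append((name, "ok"))
        except Exception as exc:
            self.results.append((name, "failed: %s" % exc))

class Test_zz07jul25b(ToyTestScript):
    def start(self):
        self.runCommand("upload images", lambda: None)   # pretend step
        self.runCommand("dog picker", lambda: 1 / 0)     # pretend failure

if __name__ == "__main__":
    tester = Test_zz07jul25b()
    tester.start()
    for name, status in tester.results:
        print(name, status)
```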
Applications are for use with the Leginon image acquisition software.
If you are not using Leginon, you may ignore the Applications settings. If you are using Leginon, please refer to the Leginon user manuals section on Applications.
< Revert Settings | Goniometer >
Current Team
Bridget Carragher, Anchi Cheng, Amber Herold, Gabe Lander, Dmitry Lyumkis, Arne Moeller, Clint S. Potter, Jim Pulokas, Joel Quispe, Scott Stagg, Neil R. Voss, Craig Yoshioka, Lauren Fisher
Alumni
Jonathan Brownell, Satya Mallick, Sunita Nayak, Denis Fellmann, Eric Hou, Christopher Irving, Pick-wei Lau, Anke Mulder
From the EM community
Appion exists to provide an integrated interface to the following Image Processing software packages:
Appion also depends on several community supported Open Source packages, including:
< System Requirements | Version Change Log >
Use this tool if you want to correlate and box the particles of a Random Conical Tilt session automatically. Before you can run this program you need to pick the particles with one of the available picking tools: Dog Picking, Manual Picking, or Template Picking.
< Particle Selection | Multi Image Assessment >
This function centers the particles in a stack based on a radial average of all the particles in the stack. This program functions iteratively, using only integer shifts to avoid interpolation artifacts. Particles that do not consistently center are removed from the stack.
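The integer-shift idea above can be sketched in a few lines. Note the real program aligns each particle to a radial average of the whole stack; the toy version below uses a simple intensity center of mass instead, purely to illustrate centering by whole-pixel shifts without interpolation. All function names are illustrative.

```python
# Minimal sketch: find the intensity center of mass, then circularly
# shift rows/columns by whole pixels so it lands on the image center.
def center_of_mass(img):
    total = float(sum(v for row in img for v in row))
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    return cy, cx

def integer_roll(img, dy, dx):
    # shift by whole pixels only, wrapping around the edges
    ny, nx = len(img), len(img[0])
    return [[img[(y - dy) % ny][(x - dx) % nx] for x in range(nx)]
            for y in range(ny)]

def center_particle(img):
    ny, nx = len(img), len(img[0])
    cy, cx = center_of_mass(img)
    dy = int(round(ny // 2 - cy))   # nearest whole-pixel shift
    dx = int(round(nx // 2 - cx))
    return integer_roll(img, dy, dx)
```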
< Filter by MeanStdev | Sort Junk >
If you have a new computer (or computers) for your Leginon/Appion installation, we recommend installing CentOS because it is considered more stable than other varieties of Linux.
CentOS is the same as Red Hat Enterprise Linux (RHEL), except that it is free and supported by the community.
We have the most experience installing on CentOS, and this installation guide has specific instructions for the process.
See Linux distribution recommendation for more.
Latest version tested at NRAMM: CentOS 5.8
Note: All formally released versions of Appion (versions 1.x and 2.x) run on CentOS 5.x. Appion developers, please note that the development branch of Appion is targeting CentOS 6.x and Appion 3.0 will run on CentOS 6.x.
Perform a SHA1SUM confirmation:
sha1sum CentOS-5.8-i386-bin-DVD-1of2.iso
The result should be the same as in the sha1sum file provided by CentOS. This is found at the same location you downloaded the .iso file.
For example:
Use dvdrecord in Linux to burn the disc:
dvdrecord -v -dao gracetime=10 dev=/dev/dvd speed=16 CentOS-5.8-i386-bin-DVD-1of2.iso
Note: This step is optional; however, you will need root access to complete the Appion installation.
Make sure you have root permission.
Open the file in an editor, e.g. vi /etc/sudoers
Look for the line: root ALL=(ALL) ALL.
Add this line below the root version:
your_username ALL=(ALL) ALL
Logout and log back in with your username.
The CentOS installation is complete.
Create the following info.php in your web server document root directory (/var/www/html on CentOS; /srv/www/htdocs on SuSE; you can find its location in httpd.conf, mentioned above, under the line starting with DocumentRoot).
sudo nano /var/www/html/info.php
Copy and paste the following code into info.php:
<?php phpinfo(); ?>
Restrict access to your info.php file.
sudo chmod 444 /var/www/html/info.php
Visit this page at http://HOST.INSTITUTE.EDU/info.php or http://localhost/info.php
You will see comprehensive tables of PHP and Apache information, including the location of the additional .ini files, the extensions, the include path, and which extensions are enabled.
Here is an example screen shot of the part of the info.php page that tells you where php.ini and other configuration files are. This information will be used while installing components of the Web Server.
< Install Apache Web Server | Download Appion and Leginon Files >
A list of variables to set in your_cluster.php.
These are variables you need to set in your_cluster.php, which you create based on the default_cluster.php we provide. The example in default_cluster.php should work if the appion processing disk can be accessed directly from the cluster.
Each define(var, value) either sets a number shown in your emanJobGen.php form or restricts the value that can be entered there, so that you do not accidentally request impossible numbers for your cluster when you later use the form to submit a job.
Most variables end with _DEF or _MAX: _DEF is the default value shown in the web form, and _MAX is the physical limit, normally set by your cluster hardware. The web form will complain if you enter a number larger than that.
C_NAME means cluster name.
C_NODES means number of nodes used by your job.
C_PPN_DEF means the default number of processors per node shown on the web form.
C_PPN_MAX means the maximum number of processors per node.
C_RPOCS_DEF means the number of processors per node the web form will enforce if you did not specify how many processors you want to use per node. This is most likely equal to either C_PPN_MAX or C_PPN_DEF, depending on whether you want to keep people from wasting processors. We recommend setting C_RPOCS_DEF to C_PPN_MAX.
C_WALLTIME (in hours) means the maximum real time your job is allowed to run. If your cluster is configured properly, it will suspend the job after that time so that it does not delay others.
C_CPUTIME (in hours) means the maximum cpu time your job is allowed to run.
When everyone uses the same coding style, it is much easier to read code that someone else wrote. That said, style is not important enough to enforce during a code review. It is much more important to ensure that best practices are followed, such as implementing error handling.
This one is a bit old, but it has lots of good stuff that goes beyond style. Some things are questionable: I prefer Getters/Setters over Attributes as Objects (at least how the example shows it) to allow for better error handling, and I prefer no underscores in naming except for constants that use all caps... but that is only a style issue.
From the Zend framework folks:
http://framework.zend.com/manual/en/coding-standard.html
An intro:
http://godbit.com/article/introduction-to-php-coding-standards
Nice Presentation:
http://weierophinney.net/matthew/uploads/php_development_best_practices.pdf
PHP Unit testing
http://www.phpunit.de/pocket_guide/
For automatically checking code against the Pear standards use CodeSniffer:
http://pear.php.net/package/PHP_CodeSniffer/
Best Practices:
http://www.odi.ch/prog/design/php/guide.php
Improved performance:
http://blog.monitis.com/index.php/2011/05/15/30-tips-to-improve-javascript-performance/
Use this option if you want to combine stacks you already created (for example stacks from two different sessions). Simply select the stacks you want from the list and submit the job.
< More Stack Tools | Particle Alignment >
Image Viewers allow you to view the images that are associated with a particular session (or experiment). You may select a project from a drop down list, then select a session in that project. The images belonging to that session appear in an Image List and the selected image is displayed in the Image View.
Name | Description |
---|---|
Project Drop Down List | Projects that you own or have been shared will appear in the list. Select one to view. |
Session Drop Down List | Sessions that belong to the currently selected Project will appear in the list. Select one to view. |
Image List | The images belonging to the currently selected Session will appear in the Image List. The selected image will appear in the Image View. The total number of images is displayed at the top of the list. |
Image View | The selected image will be displayed in the Image View. The Image View may be configured using the Image Tools controls located directly above the Image View. |
Image Tools | Includes many basic image manipulation features such as filtering and Fourier Transform. |
Image Viewer Screen:
Used to remove queued targets on the image shown in the viewer and queued targets chosen on its direct descendant images. Clicking on it gives the number of active queued targets and the user can choose to remove them from the active list.
Viewer Name | Viewer Features |
---|---|
Image Viewer | provides a single image pane |
2 Way Viewer | provides 2 image panes for viewing the same mrc file in different ways side by side |
3 Way Viewer | provides 3 image panes for viewing the same mrc file in different ways |
Dual Viewer | provides 2 image panes for viewing separate mrc images side by side |
RCT | provides 2 images panes for viewing Tomography tilt images |
^ Image Viewers | Image Viewer >
cd ~/myami/myamiweb/processing
cat *.php | grep '\$[A-Za-z]' | sed 's/\$_[A-Za-z]*//' | sed 's/[^$]*\(\$[A-Za-z0-9]*\)[^$]*/\1 \
/g' | sort | uniq -c | sort -rn | head -50
<# of occurrences> <variable name>
1066 $command
1001 $particle
 943 $
 854 $expId
 630 $i
 387 $formAction
 385 $html
 366 $javascript
 349 $outdir
 337 $projectId
 327 $runname
 326 $sessionId
 299 $extra
 213 $description
 200 $graph
 198 $stackid
 198 $sessioninfo
 186 $apix
 180 $sessiondata
 162 $display
 160 $title
 158 $templatetable
 157 $user
 136 $line
 131 $javafunctions
 127 $heading
 126 $numpart
 125 $jobinfo
 117 $errors
 114 $stackinfo
 110 $t
 110 $key
 109 $s
 108 $templateinfo
 101 $sessionpath
  98 $bin
  96 $tomogram
  96 $sub
  96 $nproc
  96 $filename
  94 $stackId
  91 $headinfo
  90 $sessionname
  90 $data
  89 $j
  89 $cmd
  89 $box
  89 $alignId
  86 $r
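The shell pipeline above can also be expressed in a few lines of python, which is easier to tweak. This is an assumed equivalent, not from the source: it drops PHP superglobals (the `$_...` names) the same way the sed does, then counts the remaining `$variable` occurrences across the .php files in a directory.

```python
# Count PHP variable occurrences across *.php files in a directory,
# mirroring the grep/sed/sort/uniq pipeline shown above.
from collections import Counter
from pathlib import Path
import re

def count_php_vars(directory):
    counts = Counter()
    for path in Path(directory).glob("*.php"):
        text = path.read_text(errors="replace")
        text = re.sub(r"\$_[A-Za-z]*", "", text)          # drop superglobals
        counts.update(re.findall(r"\$[A-Za-z][A-Za-z0-9]*", text))
    return counts.most_common(50)
```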
Appion presents the user with a menu of processing options that is dynamically updated as each step is completed. When the user clicks a menu option, Appion generates a web page specific to the selected operation that requests inputs and allows the user to launch jobs on one of several processing machines or clusters. Job progress is reflected by updates to the menu, and the user can follow a job through its log file, accessible through the web pages once the job has been launched. The user can also kill the job from the web page by clicking the "kill job" button while viewing the log file. (Note: if the job is killed manually from the terminal, the database does NOT get updated; the user must run the updateAppionDB.py script manually [usage: updateAppionDB.py jobid status [projectid]], e.g. "updateAppionDB.py 1234 D 1".) Once a completed job shows up in the menu, the user may click on its entry to generate a web page that reports the results. Most input options are provided with defaults, and help for each input is provided as pop-ups on the Appion web pages. Detailed step-by-step instructions for most procedures are available within the Appion documentation.
< Terminology | Step by Step Guide >
Name: | Download site: | yum package name | SuSE rpm name |
---|---|---|---|
gcc-objc | gcc-objc | ||
fftw3-devel | fftw3-devel | ||
gsl-devel | gsl-devel |
It is recommended that you use FFTW version 3.2 or later: optimizations in FFTW 3.2 make Ace2 run significantly faster than with FFTW 3.1 (which is distributed with CentOS). This is, however, more work: you will need to install FFTW 3.2 from source code and then add the -DFFTW32 flag to the CFLAGS line in the Makefile.
cd programs/ace2
make
./ace2.exe -h
./ace2correct.exe -h
sudo cp -v ace2.exe ace2correct.exe /usr/local/bin/
Name: | Download site: | yum package name | SuSE rpm name |
---|---|---|---|
compat-gcc-34-g77 | compat-gcc-34-g77 | ||
gcc-gfortran | gcc-gfortran |
Both 32 and 64 bit findem binaries are already available in the myami/appion/bin directory.
Test it by changing directories to myami/appion/bin and type the following commands:
./findem64.exe (64 bit version) or ./findem32.exe (32 bit version)
If the binary included with Appion does not work, or you wish to compile it yourself, follow the instructions to install FindEM from source.
< Install Grigorieff lab software | Install Ace2 >
sudo yum install libgomp
cd myami/modules/radermacher
$ python ./setup.py build
$ sudo python ./setup.py install
$ python
>>> import radermacher
>>> <Ctrl-D>
< Compile Ace2 | Install Xmipp >
There are three main components of the Appion system: a Database Server, a Processing Server and a Web Server. These may be installed on separate computers, or on the same computer. Several installation options are listed below. If you are unsure which installation option to choose for your situation, please inquire on the Software Installation Forum. There are also instructions to register for a Redmine account which is needed to make a Forum post.
The Automatic Installation Script installs a fully functional demo version of Appion. This script is intended for a single computer running a fresh installation of the CentOS operating system. The process is quick and easy and includes GroEL images so you can begin processing right away.
The Manual Installation Instructions are intended for a production system. We recommend using the CentOS operating system, but we include instructions for Fedora under the Alternative Options below. Also under Alternative Options you will find instructions for installing Appion with an existing Leginon installation.
< Version Change Log | Upgrade Instructions >
ProcessingHostType=Torque
Shell=/bin/csh
ScriptPrefix=
ExecCommand=/usr/local/bin/qsub
StatusCommand=/usr/local/bin/qstat
AdditionalHeaders= -m e, -j oe
PreExecuteLines=
< Configure sinedon.cfg | Install External Packages >
ProcessingHostType=Torque
Shell=/bin/csh
ScriptPrefix=
ExecCommand=/usr/local/bin/qsub
StatusCommand=/usr/local/bin/qstat
AdditionalHeaders= -m e, -j oe
PreExecuteLines=
For older versions of Appion and Leginon (pre-2.2), please use the following instructions:
Instructions for Appion and Leginon versions prior to 2.2
python
>>> import sys
>>> sys.path
configcheck.py
A skeleton (default) configuration file is available:
/path/to/myami/leginon/leginon.cfg.template
Copy leginon.cfg.template to leginon.cfg.
sudo cp -v /path/to/myami/leginon/leginon.cfg.template /etc/myami/leginon.cfg
Edit the newly created file and add a directory for images. Make sure you have permission to save files at this location. See File Server Setup Considerations for more details.
You may put in a fake path on the microscope PC installation and ignore the error message at the start of Leginon, following our general rule of never saving any image directly from the microscope PC.

[Images]
path: your_storage_disk_path/leginon

The rest of the configuration options are fine left at their defaults. Individual users do not need their own leginon.cfg for Appion purposes.
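Before starting Leginon you can sanity-check the configured image path; a minimal sketch (the helper function is ours, not part of Leginon):

```python
import os
import tempfile

def check_image_path(path):
    """True if the [Images] path exists and we have permission to write there."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

# demonstrate on a throwaway directory standing in for your_storage_disk_path/leginon
demo_dir = tempfile.mkdtemp()
writable = check_image_path(demo_dir)                       # True
missing = check_image_path(os.path.join(demo_dir, "nope"))  # False
```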
< Install Appion/Leginon Packages | Configure sinedon.cfg >
python
>>> import sys
>>> sys.path
A skeleton (default) configuration file is available:
$PYTHONSITEPKG/leginon/config/default.cfg
Copy default.cfg to leginon.cfg.
sudo cp -v $PYTHONSITEPKG/leginon/config/default.cfg $PYTHONSITEPKG/leginon/config/leginon.cfg
Edit the newly created file and add a directory for images. Make sure you have permission to save files at this location. See File Server Setup Considerations for more details.
You may put in a fake path on the microscope PC installation and ignore the error message at the start of Leginon, following our general rule of never saving any image directly from the microscope PC.

[Images]
path: your_storage_disk_path/leginon
Edit the following items in php.ini (found at /etc/php.ini on CentOS and /etc/php5/apache2/php.ini on SuSE)
sudo nano /etc/php.ini
so that they look like the following:
error_reporting = E_ALL & ~E_NOTICE & ~E_WARNING
display_errors = On
register_argc_argv = On
short_open_tag = On
max_execution_time = 300 ; Maximum execution time of each script, in seconds
max_input_time = 300 ; Maximum amount of time each script may spend parsing request data
memory_limit = 256M ; Maximum amount of memory a script may consume (8MB)
You may want to increase max_input_time and memory_limit if the server is heavily used. At NRAMM, max_input_time=600 and memory_limit=4000M.
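If you want to double-check a php.ini against the recommended values above, a small sketch (using Python's configparser; php.ini is close enough to INI format for these simple keys):

```python
import configparser

# the recommended values from the list above, as a php.ini-style fragment
fragment = """
display_errors = On
register_argc_argv = On
short_open_tag = On
max_execution_time = 300
max_input_time = 300
memory_limit = 256M
"""

# php.ini keeps these keys outside any [section], so prepend a dummy header
parser = configparser.ConfigParser()
parser.read_string("[PHP]\n" + fragment)
settings = dict(parser["PHP"])
print(settings["memory_limit"])  # → 256M
```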
You should also set the timezone using one of the valid strings found at http://www.php.net/manual/en/timezones.php, like this:
date.timezone = 'America/Los_Angeles'
< Install Web Server Prerequisites | Install Apache Web Server >
Sinedon is an object relational mapping library designed to interact with the Leginon and Appion databases.
For older versions of Appion and Leginon (pre-2.2), please use the following instructions:
Instructions for Appion and Leginon versions prior to 2.2
python
>>> import sys
>>> sys.path
configcheck.py
[global]
#host: your_database_host
host: localhost
user: usr_object
passwd:

[projectdata]
db: projectdb

[leginondata]
db: leginondb
Note: If you are a developer, and you need to use sinedon.cfg settings that are different from the global settings, you may create your own sinedon.cfg file and place it in your home directory. This version will override the global version located in the site packages directory.
The Appion database is assigned dynamically through the project database, so no module entry is needed here.
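The home-directory override described in the note above can be illustrated with a small sketch (this mimics the precedence rule only; it is not Sinedon's actual lookup code):

```python
import os
import tempfile

def find_sinedon_cfg(home_dir, global_dir):
    """Return the sinedon.cfg that takes effect: a copy in the user's
    home directory overrides the global one in the site packages."""
    for directory in (home_dir, global_dir):
        candidate = os.path.join(directory, "sinedon.cfg")
        if os.path.isfile(candidate):
            return candidate
    return None

# demonstrate with throwaway directories standing in for $HOME and site-packages
home = tempfile.mkdtemp()
site = tempfile.mkdtemp()
open(os.path.join(site, "sinedon.cfg"), "w").close()
global_only = find_sinedon_cfg(home, site)   # only the global copy exists
open(os.path.join(home, "sinedon.cfg"), "w").close()
overridden = find_sinedon_cfg(home, site)    # home copy now wins
```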
< Configure leginon.cfg | Configure .appion.cfg >
This file is no longer needed as of Appion version 2.2. The information that was configured in the cluster.php file is now set in the main config.php file in the $PROCESSING_HOSTS array. (instructions for editing the config.php file)
Edit the file /var/www/html/myamiweb/config.php
and ensure the following changes are made:
addplugin("processing");
// Check if IMAGIC is installed and running, otherwise hide all functions
define('HIDE_IMAGIC', false);
// hide processing tools still under development.
define('HIDE_FEATURE', true);
$PROCESSING_HOSTS[] = array(
	'host' => 'LOCAL_CLUSTER_HEADNODE.INSTITUTE.EDU', // for a single computer installation, this can be 'localhost'
	'nproc' => 32,               // number of processors available on the host, not used
	'nodesdef' => '4',           // default number of nodes used by a refinement job
	'nodesmax' => '280',         // maximum number of nodes a user may request for a refinement job
	'ppndef' => '32',            // default number of processors per node used for a refinement job
	'ppnmax' => '32',            // maximum number of processors per node a user may request for a refinement job
	'reconpn' => '16',           // recons per node, not used
	'walltimedef' => '48',       // default wall time in hours that a job is allowed to run
	'walltimemax' => '240',      // maximum hours in wall time a user may request for a job
	'cputimedef' => '1536',      // default cpu time in hours a job is allowed to run (wall time x number of cpu's)
	'cputimemax' => '10000',     // maximum cpu time in hours a user may request for a job
	'memorymax' => '',           // the maximum memory a job may use
	'appionbin' => 'bin/',       // the path to the myami/appion/bin directory on this host
	'appionlibdir' => 'appion/', // the path to the myami/appion/appionlib directory on this host
	'baseoutdir' => 'appion',    // the directory that processing output should be stored in
	'localhelperhost' => '',     // a machine that has access to both the web server and the processing host file systems to copy data between the systems
	'dirsep' => '/',             // the directory separator used by this host
	'wrapperpath' => '',         // advanced option that enables more than one Appion installation on a single machine, contact us for info
	'loginmethod' => 'SHAREDKEY', // Appion currently supports 'SHAREDKEY' or 'USERPASSWORD'
	'loginusername' => '',       // if this is not set, Appion uses the username provided by the user in the Appion Processing GUI
	'passphrase' => '',          // if this is not set, Appion uses the password provided by the user in the Appion Processing GUI
	'publickey' => 'rsa.pub',    // set this if using 'SHAREDKEY'
	'privatekey' => 'rsa'        // set this if using 'SHAREDKEY'
);
// --- Please enter your processing host information associated with -- //
// --- maximum number of the processing nodes -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host1.school.edu', 'nproc' => 4); -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host2.school.edu', 'nproc' => 8); -- //
// $PROCESSING_HOSTS[] = array('host' => '', 'nproc' => );
$DEFAULTCS = "2.0";
< Install SSH module for PHP | Testing job submission >
This page will help you convert the mass of a macromolecule into a diameter in a micrograph.
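The conversion can be sketched as follows, assuming a spherical particle and a typical protein density of about 1.35 g/cm³ (both assumptions are ours; the page's actual formula may differ):

```python
import math

AVOGADRO = 6.022e23      # molecules per mole
PROTEIN_DENSITY = 1.35   # g/cm^3, a common assumption for globular proteins

def diameter_angstroms(mass_kda, density=PROTEIN_DENSITY):
    """Approximate particle diameter in Angstroms for a given mass in kDa,
    modeling the particle as a solid sphere."""
    mass_g = mass_kda * 1000.0 / AVOGADRO  # grams per single molecule
    volume_cm3 = mass_g / density
    volume_a3 = volume_cm3 * 1e24          # 1 cm^3 = 1e24 cubic Angstroms
    radius = (3.0 * volume_a3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 2.0 * radius
```

For GroEL (roughly 800 kDa) this gives a diameter of about 120 Å, in the right range for a complex measured at roughly 140 Å; real particles are not solid spheres, so treat the result as a lower-bound estimate.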
Densities: This option allows you to regenerate a stack from the original images. It is extremely useful if you want to change the box size of the particles, or to use different filter or binning parameters.
< More Stack Tools | Particle Alignment >
If you want to test Appion processing, you need a session to work with. There is a script available to create a session for you, loaded with GroEL images.
Use appion/test/CreateTestSession.py found in the subversion repository.
Use -h to see help on how to use it.
You'll want to supply it with a project id and a run directory.
testsuite.py - executes processing modules up to but not including reconstruction
teststack.py - reads and writes stacks in all possible ways
processing db: not set (create processing db)
db name: ap1

You can create the default numbered database name (ap...) or give it a new name with the same prefix. If you want to specify a database name that does not use the default prefix, note that the db user specified in project_1_2/config.php needs the necessary privileges for that database. You may additionally want to change the value assigned to $DEF_PROCESSING_PREFIX in project_1_2/config.php if you want to use your new prefix all the time.
processing db: ap1
See the next section on troubleshooting if you get the original page instead.
If you want all your processing databases combined in one single database (not recommended, as this becomes large very fast), just use the same name for all your projects.
The above procedure not only creates the database, but also creates some of the tables that you need to start processing.
A name for the database is automatically assigned. You do not need to edit this name.
To change the database assigned to this project, select the unlink button.
< Edit Project Owners | Unlink a Project Processing Database >
Use the following guidelines for creating your first Appion project.
The url will vary based on your host name.
With the username "administrator" and the administrator password created in the wizard, log into myamiweb as shown below.
If you did not enable user login in the setup wizard, you will not be prompted for a password.
You will then see the default layout for the administrator.
More about Users.
Follow the instructions in Create a Processing Database. This will hold all the Appion processing data for your project.
You can download sample images from here.
Then follow the steps in Upload Images.
You can use a pixel size of 0.83, binning of 1, magnification of 100,000, high tension of 120, and defocus of -0.89.
Once your images have been uploaded to a session, you can view them in the Image Viewer application.
From the image viewer, click on the [processing] button at the top of the screen. This will open the Appion processing pipeline application.
From there, follow the directions in Process Images to confirm that your installation is functioning properly.
< Additional Database Server Setup | Setup Remote Processing >
< Align Tilt Series | Upload Tomogram >
< View Projects | Edit Project Description >
This function allows creation of substacks.
< Sort Junk | More Stack Tools >
In order to extract subvolumes from a full-size tomogram, the user first needs to use the particle selection tool to pick the portions of the projected full tomogram on which the subvolumes are centered. Subtomograms of uniform size can then be extracted using the create subtomogram option under the tomography tab.
You are most likely to use "manual picking" for a unique object, which is what we outline here. If there are multiple copies of the particle projected onto the same plane and you plan to do 3D averaging of the subtomograms, you may be able to use the other particle selection methods as well.
< Create Full Tomogram | Average Tomogram Subvolumes >
< Region Mask Creation | Manual Masking >
This algorithm works on tilted images.
< CTF Estimation | Create Particle Stack >
Image formation in EM is distorted by the modulation of a contrast transfer function (CTF). The distortion depends on the physical parameters of the microscope, such as the accelerating voltage (keV) and lens aberrations. Correcting for these aberrations is done by comparing the experimentally observed power spectral density (PSD) of EM images to a theoretically generated CTF.
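A textbook 1D form of that theoretical CTF can be written compactly; a sketch under the weak-phase approximation (parameter defaults are illustrative, not Appion's):

```python
import math

def ctf_1d(s, kv=120.0, cs_mm=2.0, defocus_um=1.0, amp_contrast=0.07):
    """Theoretical 1D CTF at spatial frequency s (in 1/Angstrom).
    Defaults here are illustrative values only."""
    v = kv * 1e3                                              # accelerating voltage, volts
    wavelength = 12.2639 / math.sqrt(v + 0.97845e-6 * v * v)  # relativistic electron wavelength, Angstrom
    cs = cs_mm * 1e7                                          # spherical aberration, mm -> Angstrom
    dz = defocus_um * 1e4                                     # defocus, um -> Angstrom (underfocus > 0)
    gamma = math.pi * wavelength * s * s * (dz - 0.5 * cs * wavelength ** 2 * s * s)
    return -(math.sqrt(1.0 - amp_contrast ** 2) * math.sin(gamma)
             + amp_contrast * math.cos(gamma))
```

At zero frequency the value is minus the amplitude contrast; the oscillations at higher frequency are the Thon rings that the estimation programs fit against the observed PSD.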
If you have a large range of defocus in your data, one parameter set, or even one estimation method, may not work for all images. In this case, you can start a separate run with different parameters that processes only the images with low-confidence results. Subsequent Appion processing (stack making) will then pick, image by image, the result with the highest confidence value. Note that ACE and ACE2 have in general equivalent confidence values, while CtfFind values tend to be lower, which currently makes it harder to mix and match. See forum http://ami.scripps.edu/redmine/boards/13/topics/990
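The per-image selection described above amounts to taking the maximum-confidence estimate over all runs; a sketch (field names are illustrative, not Appion's actual schema):

```python
# toy CTF estimates from two runs over two images
estimates = [
    {"image": "img_001", "method": "ace2",    "confidence": 0.92},
    {"image": "img_001", "method": "ctffind", "confidence": 0.81},
    {"image": "img_002", "method": "ace2",    "confidence": 0.55},
    {"image": "img_002", "method": "ctffind", "confidence": 0.64},
]

# for each image, keep the estimate with the highest confidence value
best = {}
for est in estimates:
    current = best.get(est["image"])
    if current is None or est["confidence"] > current["confidence"]:
        best[est["image"]] = est
```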
< Particle Selection | Create Particle Stack >
The following is for the computer that hosts the databases. This involves installing MySQL server and creation/configuration of the leginondb and projectdb databases.
Note: You may already have MySQL Server and Client installed. Check by typing mysql at the command line.
If you see a MySQL prompt (mysql>), you may skip this step.
To install MySQL on Linux you have two options (the first option is better):
sudo yum install mysql mysql-server
yast2 -i mysql mysql-client
The sample MySQL configuration files are usually located in /usr/share/mysql.
ls /usr/share/mysql/my*
/usr/share/mysql/my-huge.cnf    /usr/share/mysql/my-innodb-heavy-4G.cnf
/usr/share/mysql/my-large.cnf   /usr/share/mysql/my-medium.cnf
/usr/share/mysql/my-small.cnf
locate my | egrep "\.cnf$"
/etc/my.cnf
/usr/share/mysql/my-huge.cnf
/usr/share/mysql/my-innodb-heavy-4G.cnf
/usr/share/mysql/my-large.cnf
/usr/share/mysql/my-medium.cnf
/usr/share/mysql/my-small.cnf
sudo cp -v /usr/share/mysql/my-huge.cnf /etc/my.cnf
Add the following lines to the [mysqld] section:

query_cache_type = 1
query_cache_size = 100M
query_cache_limit = 100M
default-storage-engine=MyISAM
For CentOS/Fedora/RHEL system use the service command:
sudo /sbin/service mysqld start
For other Unix systems:
sudo /etc/init.d/mysqld start
or on some installations (Suse),
sudo /etc/init.d/mysql start
For future reference: start | stop | restart MySQL Server with similar commands:
For Centos, Fedora
sudo /etc/init.d/mysqld start
sudo /etc/init.d/mysqld stop
sudo /etc/init.d/mysqld restart

sudo /sbin/service mysqld start
sudo /sbin/service mysqld stop
sudo /sbin/service mysqld restart

sudo /etc/init.d/mysql start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/mysql restart
sudo /sbin/chkconfig mysqld on
sudo /sbin/chkconfig --add mysql
ls /var/lib/mysql
ibdata1  ib_logfile0  ib_logfile1  mysql  mysql.sock  test
sudo mysqladmin create leginondb
sudo mysqladmin create projectdb
If starting from scratch, the mysql root user will have no password. This is assumed to be the case and we will set it later.
mysql -u root mysql
You should see a mysql prompt: mysql>
You can view the current mysql users with the following command.
select user, password, host from user;
+------+----------+-----------+
| user | password | host      |
+------+----------+-----------+
| root |          | localhost |
| root |          | host1     |
|      |          | host1     |
|      |          | localhost |
+------+----------+-----------+
4 rows in set (0.00 sec)
Create and grant privileges to a user called usr_object for the databases on both the localhost and any other hosts involved. For example, use the wild card '%' for all hosts. You can grant specific privileges (ALTER, CREATE, DROP, DELETE, INSERT, RENAME, SELECT, UPDATE) or ALL privileges to the user. See the MySQL Reference Manual for details. The following examples demonstrate some of the options available.
CREATE USER usr_object@'localhost' IDENTIFIED BY 'YOUR PASSWORD';
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON leginondb.* TO usr_object@'localhost';
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON projectdb.* TO usr_object@'localhost';

CREATE USER usr_object@'localhost';
GRANT ALL PRIVILEGES ON leginondb.* TO usr_object@'localhost';
GRANT ALL PRIVILEGES ON projectdb.* TO usr_object@'localhost';

CREATE USER usr_object@'%.mydomain.edu' IDENTIFIED BY 'YOUR PASSWORD';
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON leginondb.* TO usr_object@'%.mydomain.edu';
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON projectdb.* TO usr_object@'%.mydomain.edu';

# if your web host is local
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON `ap%`.* TO usr_object@localhost;
# for all other hosts if you are accessing the databases from another computer
GRANT ALTER, CREATE, INSERT, SELECT, UPDATE ON `ap%`.* TO usr_object@'%.mydomain.edu';
To set the root password use the command:
sudo mysqladmin -u root password NEWPASSWORD
Or you can do it from within mysql
update user set password=password('your_own_root_password') where user="root";
Query OK, 2 rows affected (0.01 sec)
Rows matched: 2  Changed: 2  Warnings: 0

# run the flush privileges command to avoid problems
flush privileges;
^D or exit;
From now on, you will need to specify the password to connect to the database as root user like this:
mysql -u root -p mysql
# at the command prompt, log into the leginon database
mysql -u usr_object -p leginondb

# At the mysql prompt show variables that begin with 'query'.
# Check that the changes you made to my.cnf are in place.
SHOW VARIABLES LIKE 'query%';
+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| ft_query_expansion_limit     | 20        |
| have_query_cache             | YES       |
| long_query_time              | 10        |
| query_alloc_block_size       | 8192      |
| query_cache_limit            | 104857600 |   ---This should correspond to your change
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 104857600 |   ---This should correspond to your change
| query_cache_type             | ON        |   ---This should correspond to your change
| query_cache_wlock_invalidate | OFF       |
| query_prealloc_size          | 8192      |
+------------------------------+-----------+
10 rows in set (0.00 sec)
exit;

If you do not see your changes, try restarting mysql.
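Note that my.cnf accepts suffixed sizes like 100M while SHOW VARIABLES reports plain bytes; a small sketch of the correspondence (the helper function is ours, for illustration):

```python
def mysql_size_to_bytes(value):
    """Convert a my.cnf size such as '100M' to the byte count
    that SHOW VARIABLES reports."""
    suffixes = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in suffixes:
        return int(value[:-1]) * suffixes[suffix]
    return int(value)

print(mysql_size_to_bytes("100M"))  # → 104857600
```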
sudo /etc/init.d/mysqld restart
mysqlshow -u root -p
+--------------+
|  Databases   |
+--------------+
| mysql        |
| leginondb    |
| projectdb    |
+--------------+
Be sure to edit PASSWORD to the one you previously set for usr_object.
php -r "mysql_connect('localhost', 'usr_object', 'PASSWORD', 'leginondb'); echo mysql_stat();"; echo ""
Expected output:
Uptime: 1452562 Threads: 1 Questions: 618 Slow queries: 0 Opens: 117 Flush tables: 1 Open tables: 106 Queries per second avg: 0.000
If there are any error messages, mysql may be configured incorrectly.
Note: If you do not have the php and php-mysql packages installed, you will need to install them to run the above command. The yum installation is:
sudo yum -y install php php-mysql
< Download additional Software | File Server Setup Considerations >
If your web server installation is successful, a number of tables will be propagated in the databases. Several options for setting up database user privileges were recommended in Database Server Installation. The following additional steps should be taken, depending on which option you previously used.
GRANT DELETE ON leginondb.ViewerImageStatus TO usr_object@'localhost';
GRANT DELETE ON projectdb.shareexperiments TO usr_object@'localhost';
GRANT DELETE ON projectdb.projectowners TO usr_object@'localhost';
GRANT DELETE ON projectdb.processingdb TO usr_object@'localhost';

GRANT DELETE ON projectdb.gridboxes TO usr_object@'localhost';
GRANT DELETE ON projectdb.grids TO usr_object@'localhost';
GRANT DELETE ON projectdb.gridlocations TO usr_object@'localhost';

GRANT DELETE ON leginondb.ViewerImageStatus TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.processingdb TO usr_object@'%.mydomain.edu';

GRANT DELETE ON projectdb.gridboxes TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.grids TO usr_object@'%.mydomain.edu';
GRANT DELETE ON projectdb.gridlocations TO usr_object@'%.mydomain.edu';
The tables that will be affected are in the dbemdata database and the project database.
Migrate the user data from project to dbemdata because dbemdata is already in Sinedon format.
dbemdata
project
Future:
Eventually, we would like to have three databases: appion, leginon, and project. The user-related tables in dbemdata would be moved to project.
All the tables in project still need to be converted to Sinedon format.
Add:
Leave the existing columns as is. Use of "name" and "full name" (with a space) will be phased out.
From users, copy username, firstname, lastname to UserData.
UPDATE UserData, project.users, project.login
SET UserData.username=project.users.username,
    UserData.firstname=project.users.firstname,
    UserData.lastname=project.users.lastname,
    UserData.email=project.users.email
WHERE UserData.`full name` like concat(project.users.firstname, ' ', project.users.lastname)
  and project.login.userId = project.users.userId
  and project.users.userId not in (63,211)
  and UserData.DEF_id != 54
-- Palida?
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 42 AND UserData.DEF_id = 25;
-- Gabe?
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 65 AND UserData.DEF_id = 29;
-- Edward Bridgnole
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 78 AND UserData.DEF_id = 41;
-- Pickwei
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 122 AND UserData.DEF_id = 57;
-- Mark Daniels
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 199 AND UserData.DEF_id = 65;
-- Chris Arthur
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 35 AND UserData.DEF_id = 67;
-- Fei Sun
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 233 AND UserData.DEF_id = 76;
-- Chi-yu Fu
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 245 AND UserData.DEF_id = 78;
-- Otomo Takanori uId=79 puId=252
UPDATE UserData, projectdata.users SET UserData.username=projectdata.users.username, UserData.firstname=projectdata.users.firstname, UserData.lastname=projectdata.users.lastname, UserData.email=projectdata.users.email WHERE projectdata.users.userId = 252 AND UserData.DEF_id = 79;
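The name-based matching used by these merges can be sketched outside MySQL. The following SQLite example is illustrative only: the table shapes and the sample user are made up, and MySQL's multi-table UPDATE is emulated with correlated subqueries, but it shows how a legacy UserData row is matched by reconstructing the full name from firstname and lastname.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical, minimal versions of the two tables involved in the merge.
cur.execute("CREATE TABLE UserData (id INTEGER PRIMARY KEY, fullname TEXT, username TEXT)")
cur.execute("CREATE TABLE users (userId INTEGER PRIMARY KEY, username TEXT, firstname TEXT, lastname TEXT)")
cur.execute("INSERT INTO UserData VALUES (1, 'Jane Doe', NULL)")
cur.execute("INSERT INTO users VALUES (10, 'nramm_jdoe', 'Jane', 'Doe')")

# Match legacy rows by the reconstructed full name, as the MySQL UPDATE does
# with `full name` LIKE concat(firstname, ' ', lastname).
cur.execute("""
    UPDATE UserData
    SET username = (
        SELECT u.username FROM users u
        WHERE UserData.fullname = u.firstname || ' ' || u.lastname)
    WHERE EXISTS (
        SELECT 1 FROM users u
        WHERE UserData.fullname = u.firstname || ' ' || u.lastname)
""")
print(cur.execute("SELECT username FROM UserData").fetchone()[0])  # prints: nramm_jdoe
```

Note that matching on a reconstructed full name is exactly why the manual exclusions above are needed: two distinct accounts with the same first and last name would both match.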
This inserts users that have a corresponding project.login entry and have not already been merged into existing dbemdata.UserData entries.
NRAMM usernames with no login entry are not transferred.
INSERT INTO dbemdata.UserData (username, firstname, lastname, email)
SELECT projectdata.users.username, projectdata.users.firstname, projectdata.users.lastname, projectdata.users.email
FROM projectdata.users
WHERE projectdata.users.userId IN (SELECT projectdata.login.userId FROM projectdata.login)
  AND (projectdata.users.userId NOT IN (
         SELECT projectdata.users.userId userId
         FROM dbemdata.UserData, projectdata.users, projectdata.login
         WHERE dbemdata.UserData.`full name` LIKE concat(projectdata.users.firstname, ' ', projectdata.users.lastname)
           AND projectdata.login.userId = projectdata.users.userId)
       AND projectdata.users.userId NOT IN (42, 65, 78, 122, 199, 35, 233, 245, 252, 63, 211));
UPDATE dbemdata.UserData, projectdata.login SET dbemdata.UserData.password=projectdata.login.password WHERE dbemdata.UserData.username = projectdata.login.username
Remove the email column from the userdetails table.
From users, copy all needed fields.
INSERT INTO projectdata.userdetails (`REF|leginondata|UserData|user`, title, institution, dept, address, city, statecountry, zip, phone, fax, url)
SELECT dbemdata.UserData.DEF_id, projectdata.users.title, projectdata.users.institution, projectdata.users.dept, projectdata.users.address, projectdata.users.city, projectdata.users.statecountry, projectdata.users.zip, projectdata.users.phone, projectdata.users.fax, projectdata.users.url
FROM dbemdata.UserData, projectdata.users
WHERE dbemdata.UserData.username = projectdata.users.username
  AND projectdata.users.userId NOT IN (216, 224, 107, 204, 219, 241, 261);
Ignore the following entries:

project.users.userId | username |
---|---|
216 | nramm_hetzer (dup w/less data) |
224 | nramm_hjing |
107 | nramm_jlanman |
204 | nramm_rkhayat |
219 | nramm_rkhayat |
241 | nramm_vinzenz.unger |
261 | nramm_vinzenz.unger |
Move the data from the pis table to a new projectowner table in the project database. This table will refer to users in the UserData table.
We will phase out use of the pis table.
Insert users that are project owners and do not have login info and do not have a dbem user name.
Set the passwords to the username.
Add the following project owners to dbemdata.UserData:
nramm_mbevans
nramm_erica
nramm_erwright
nramm_mgfinn
nramm_pucadyil
nramm_abaudoux
nramm_kuzman
nramm_my3r
nramm_liguo.wang
nramm_bbartholomew
nramm_cciferri
nramm_galushin
nramm_nachury
nramm_mfisher1
nramm_nicoles
nramm_gokhan_tolun
nramm_rkirchdo
INSERT INTO dbemdata.UserData (username, firstname, lastname, email, password)
SELECT projectdata.users.username, projectdata.users.firstname, projectdata.users.lastname, projectdata.users.email, projectdata.users.username
FROM projectdata.users
WHERE projectdata.users.username IN ("nramm_mbevans", "nramm_erica", "nramm_erwright", "nramm_mgfinn", "nramm_pucadyil", "nramm_abaudoux", "nramm_kuzman", "nramm_my3r", "nramm_liguo.wang", "nramm_bbartholomew", "nramm_cciferri", "nramm_galushin", "nramm_nachury", "nramm_mfisher1", "nramm_nicoles", "nramm_gokhan_tolun", "nramm_rkirchdo");
Add their details into the userdetails table
INSERT INTO projectdata.userdetails (`REF|leginondata|UserData|user`, title, institution, dept, address, city, statecountry, zip, phone, fax, url)
SELECT dbemdata.UserData.DEF_id, projectdata.users.title, projectdata.users.institution, projectdata.users.dept, projectdata.users.address, projectdata.users.city, projectdata.users.statecountry, projectdata.users.zip, projectdata.users.phone, projectdata.users.fax, projectdata.users.url
FROM dbemdata.UserData, projectdata.users
WHERE dbemdata.UserData.username = projectdata.users.username
  AND projectdata.users.username IN ("nramm_mbevans", "nramm_erica", "nramm_erwright", "nramm_mgfinn", "nramm_pucadyil", "nramm_abaudoux", "nramm_kuzman", "nramm_my3r", "nramm_liguo.wang", "nramm_bbartholomew", "nramm_cciferri", "nramm_galushin", "nramm_nachury", "nramm_mfisher1", "nramm_nicoles", "nramm_gokhan_tolun", "nramm_rkirchdo");
Update the pis table with the correct usernames.
The correct usernames are the ones that the users actually use to login to the system.
They have been found by manual inspection.
UPDATE projectdata.pis SET projectdata.pis.username="chappie" WHERE projectdata.pis.username="nramm_chappie";
UPDATE projectdata.pis SET projectdata.pis.username="carthur" WHERE projectdata.pis.username="nramm_Christopher.Arthur";
UPDATE projectdata.pis SET projectdata.pis.username="cpotter" WHERE projectdata.pis.username="nramm_cpotter";
UPDATE projectdata.pis SET projectdata.pis.username="craigyk" WHERE projectdata.pis.username="nramm_craigyk";
UPDATE projectdata.pis SET projectdata.pis.username="dfellman" WHERE projectdata.pis.username="nramm_dfellman";
UPDATE projectdata.pis SET projectdata.pis.username="dlyumkis" WHERE projectdata.pis.username="nramm_dlyumkis";
UPDATE projectdata.pis SET projectdata.pis.username="southworth" WHERE projectdata.pis.username="nramm_dsouthwo";
UPDATE projectdata.pis SET projectdata.pis.username="fapalida" WHERE projectdata.pis.username="nramm_fapalida";
UPDATE projectdata.pis SET projectdata.pis.username="feisun" WHERE projectdata.pis.username="nramm_feisun";
UPDATE projectdata.pis SET projectdata.pis.username="glander" WHERE projectdata.pis.username="nramm_glander";
UPDATE projectdata.pis SET projectdata.pis.username="haoyan" WHERE projectdata.pis.username="nramm_hao.yan";
UPDATE projectdata.pis SET projectdata.pis.username="jaeger" WHERE projectdata.pis.username="nramm_jaeger";
UPDATE projectdata.pis SET projectdata.pis.username="koehn" WHERE projectdata.pis.username="nramm_koehn";
UPDATE projectdata.pis SET projectdata.pis.username="mmatho" WHERE projectdata.pis.username="nramm_mmatho";
UPDATE projectdata.pis SET projectdata.pis.username="moeller" WHERE projectdata.pis.username="nramm_moeller";
UPDATE projectdata.pis SET projectdata.pis.username="muldera" WHERE projectdata.pis.username="nramm_mulderam";
UPDATE projectdata.pis SET projectdata.pis.username="paventer" WHERE projectdata.pis.username="nramm_paventer";
UPDATE projectdata.pis SET projectdata.pis.username="rharshey" WHERE projectdata.pis.username="nramm_rasika";
UPDATE projectdata.pis SET projectdata.pis.username="nramm_langlois" WHERE projectdata.pis.username="nramm_rl2528";
UPDATE projectdata.pis SET projectdata.pis.username="rmglaeser" WHERE projectdata.pis.username="nramm_rmglaeser";
UPDATE projectdata.pis SET projectdata.pis.username="rtaurog" WHERE projectdata.pis.username="nramm_rtaurog";
UPDATE projectdata.pis SET projectdata.pis.username="sstagg" WHERE projectdata.pis.username="nramm_sstagg";
UPDATE projectdata.pis SET projectdata.pis.username="tgonen" WHERE projectdata.pis.username="nramm_tgonen";
UPDATE projectdata.pis SET projectdata.pis.username="vossman" WHERE projectdata.pis.username="nramm_vossman";
UPDATE projectdata.pis SET projectdata.pis.username="ychaban" WHERE projectdata.pis.username="nramm_ychaban";
Add project co-owners (the people who actually access the project).
Many of the project owners do not actually access the data. Add the users who actually work with the project.
INSERT INTO projectdata.pis (projectId, username) VALUES (200,"nramm_fazam"), (230,"glander"), (190,"jlee"), (231,"glander"), (203,"Ranjan"), (181,"kubalek"), (84,"strable"), (222,"nramm_barbie"), (199,"joelq")
Insert rows into projectowners.
All project owners now have usernames in dbemdata.UserData and all projects have an active owner in project.pis.
INSERT INTO projectdata.projectowners (`REF|projects|project`, `REF|leginondata|UserData|user`) SELECT projectdata.pis.projectId, dbemdata.UserData.DEF_id FROM dbemdata.UserData, projectdata.pis WHERE dbemdata.UserData.username = projectdata.pis.username
UPDATE dbemdata.UserData SET dbemdata.UserData.`REF|GroupData|group`= 4 WHERE dbemdata.UserData.`REF|GroupData|group` IS NULL
UPDATE dbemdata.GroupData SET dbemdata.GroupData.`REF|projectdata|privileges|privilege`=3 WHERE dbemdata.GroupData.`REF|projectdata|privileges|privilege` IS NULL
Set the full name in dbemdata.UserData.
UPDATE dbemdata.UserData SET dbemdata.UserData.`full name` = concat(dbemdata.UserData.firstname, ' ', dbemdata.UserData.lastname) WHERE dbemdata.UserData.`full name` IS NULL;
UPDATE dbemdata.UserData SET dbemdata.UserData.username = dbemdata.UserData.name WHERE dbemdata.UserData.username IS NULL;
UPDATE dbemdata.UserData SET dbemdata.UserData.password = dbemdata.UserData.username WHERE dbemdata.UserData.password IS NULL;
UPDATE dbemdata.UserData SET dbemdata.UserData.firstname = "" WHERE dbemdata.UserData.firstname IS NULL;
Update the shareexperiments table.
UPDATE project.shareexperiments SET project.shareexperiments.`REF|leginondata|SessionData|experiment` = project.shareexperiments.experimentId WHERE project.shareexperiments.`REF|leginondata|SessionData|experiment` IS NULL;
Add usernames where they are missing.
UPDATE project.shareexperiments, project.users SET project.shareexperiments.username = project.users.username WHERE project.users.userId = project.shareexperiments.userId AND project.shareexperiments.username IS NULL
Update users who have a matching username in dbemdata.
UPDATE project.shareexperiments, dbemdata.UserData SET project.shareexperiments.`REF|leginondata|UserData|user` = dbemdata.UserData.DEF_id WHERE dbemdata.UserData.username = project.shareexperiments.username AND project.shareexperiments.`REF|leginondata|UserData|user` IS NULL
Group name | Description | Privilege |
---|---|---|
administrators | may view and modify all groups, users, projects and experiments in the system | All at administration level |
power users | may view and modify anything that is not specifically owned by the default Administrator User | View all but administrate owned |
users | may view and modify projects that they own and view experiments that have been shared with them | Administrate/view only owned projects and view shared experiments |
guests | may view projects owned by the user and experiments shared with the user | View owned projects and shared experiments |
Revert Settings is a tool for use with the Leginon image acquisition software.
Leginon settings for its applications are saved in the database during installation. The first time a user runs Leginon, the settings of the Appion/Leginon administrator user are loaded. The user can change them, and Leginon will remember the new values from then on.
If a user incorrectly modifies Leginon application settings, the user or an administrator may revert all of that user's settings to the default values.
< Instruments | Applications >
Username | Firstname Lastname Displayed | Group | Description |
---|---|---|---|
administrator | Leginon-Appion Administrator | administrators | Default leginon settings are saved under this user |
anonymous | Public User | guests | If you want to allow public viewing to a project or an experiment, assign it to this user |
This guide is primarily intended to help newcomers to both Appion and programming in general get up and running in the development environment we have created at AMI.
It is a good place to add notes, however basic, that may help someone else accomplish a task related to Appion software development.
Parts of this guide are specific to machines and the environment that we have at AMI. Our apologies.
Different Linux flavors often put web-server and MySQL-related files in different locations, which can be confusing. Below we list the CentOS and SuSE equivalents for reference. If your system uses different locations and you are willing to share your experience, please send us your list and we will add it here:
Table Different File locations and Commands on CentOS vs SUSE
File or Command Head | CentOS | SuSE |
---|---|---|
php.ini | /etc/ | /etc/php5/apache2/ |
httpd.conf | /etc/httpd/conf/ | /etc/apache2/ |
default document_root | /var/www/html/ | /srv/www/htdocs/ |
apache start/stop/restart command head | /sbin/service httpd | /etc/init.d/apache2 |
mysql start/stop/restart command head | /sbin/service mysqld | /etc/init.d/mysql |
For a more detailed comparison of Apache file layout on different Linux distributions, see http://wiki.apache.org/httpd/DistrosDefaultLayout
Install Web Server Prerequisites >
Notes and open questions from a first-time CentOS install:
- Change from CentOS 5.4 to 5.5.
- How can you tell what computer you have?
- What is SHA1SUM confirmation and how do you find it?
- Links not found: http://centos.mirrors.tds.net/pub/linux/centos/5.4/isos/x86_64/sha1sum.txt (64-bit) and http://centos.mirrors.tds.net/pub/linux/centos/5.4/isos/i386/sha1sum.txt (32-bit).
- Should I check the box for packages from CentOS Extras under "please select any additional repositories that you want to use for software installation"?
- How do I add myself as a sudoer?
- How do I make sure I have root permission?
- There are many commands to know for the sudoers part; use a vi cheat sheet found via Google.
Use Dog Picker if you have no accurate idea of what your particle looks like or you simply want to pick everything (this will include blobs of noise).
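Dog Picker is based on difference-of-Gaussians (DoG) blob detection: the image is blurred at two nearby scales and subtracted, and blob-sized features stand out in the difference. The following numpy-only sketch is illustrative, not the actual Appion implementation; the synthetic image, sigmas, and helper function are made up.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian convolution (minimal sketch, edge effects ignored).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# Synthetic image: one bright disk (a "particle") on Gaussian noise.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 36] += 1.0

# Difference of Gaussians at two scales tuned to the blob size.
dog = gaussian_blur(img, 3.0) - gaussian_blur(img, 4.5)
peak = np.unravel_index(np.argmax(dog), dog.shape)
print(peak)  # the strongest DoG response should be near the disk center (32, 32)
```

Because the DoG response fires on anything blob-shaped at the chosen scale, it also picks up noise blobs, which is why Dog Picker "picks everything".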
< Particle Selection | CTF Estimation >
There are several additional CentOS repositories that you can install. These repositories provide extra packages, such as patented software (MP3 players), closed-source applications (the Flash plugin, Adobe Acrobat Reader), and lesser-used packages (python numpy, the GNU Scientific Library). However, some repositories install packages over other packages, which can cause problems and conflicts (ATrpms is bad at this), so we recommend installing only EPEL and RPM Fusion. Read more here:
CentOS Additional Repositories
Particularly, pay attention to the note about protecting yourself from unintended updates from 3rd party packages. The following yum plugin may help you:
yum-priorities plugin
Download repository rpm and install
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/`uname -i`/epel-release-5-4.noarch.rpm
or CentOS 6:
wget 'http://mirrors.cat.pdx.edu/epel/6/i386/epel-release-6-7.noarch.rpm'
sudo yum --nogpgcheck localinstall epel-release-6-7.noarch.rpm
Download repository rpms and install
sudo rpm -Uhv http://download1.rpmfusion.org/free/el/updates/testing/5/`uname -i`/rpmfusion-free-release-5-0.1.noarch.rpm
sudo rpm -Uvh http://download1.rpmfusion.org/nonfree/el/updates/testing/5/`uname -i`/rpmfusion-nonfree-release-5-0.1.noarch.rpm
Update the updater to make life easier
sudo yum -y update yum*
sudo yum -y update
NOTE
The download was over 129 MB (in July 2009) and 333 MB (in May 2010). If you have a slow internet connection you can set up presto/deltarpms; see this email and this email for more information.
NOTE
Sometimes there are problems with 32-bit packages, so uninstall them:
# count the installed 32-bit packages
rpm -qa --qf "%{NAME}.%{ARCH}\n" | grep i.86 | wc -l
# remove them
sudo yum remove `rpm -qa --qf "%{NAME}.%{ARCH}\n" | grep i.86`
NOTE
You can also remove large packages like openoffice, java, and gimp to save space if you are just making a server:
sudo yum remove openoffice* gimp* java*
You will want to restart your computer when this completes.
sudo reboot
General instructions for installation and configuration of some of these packages (such as mysql) are found later in this manual. It may be faster to install them now as a group rather than individually, but it is not necessary.
If you are using an RPM based system (e.g., SuSE, Mandriva, CentOS, or Fedora) this website is good for determining the exact package name that you need. For CentOS 5, just type:
sudo yum -y install \
    python-tools python-devel python-matplotlib \
    subversion ImageMagick grace gnuplot \
    wxPython numpy scipy python-imaging \
    gcc-gfortran compat-gcc-34-g77 \
    gcc-objc fftw3-devel gsl-devel \
    mysql mysql-server MySQL-python \
    httpd php php-mysql phpMyAdmin \
    gcc-c++ openmpi-devel libtiff-devel \
    php-devel gd-devel re2c fftw3-devel php-gd \
    xorg-x11-server-Xvfb netpbm-progs \
    libssh2-devel
If you have an nVidia video card and have set up RPM Fusion, install the nVidia binary driver; it will speed things up, especially for UCSF Chimera. This command works on Fedora:
sudo yum -y install nvidia-x11-drv
For CentOS you will have to download and install the nVidia driver from the nVidia website.
sudo yum clean all
sudo updatedb
sudo /sbin/chkconfig httpd on
sudo /sbin/chkconfig mysqld on
You can further configure this with the GUI and turn off unnecessary items
system-config-services
sudo reboot
< Instructions for installing CentOS on your computer | Database Server Installation >
Unlike RHEL/CentOS, Fedora comes with an Extras repository by default that contains all of the open source software needed by Appion/Leginon.
That said, there are several additional Fedora repositories that you can install. These repositories provide packages that are not allowed in the default Fedora package list, such as patented software (MP3 and movie players) and closed-source applications (the nVidia video driver, the Flash plugin, Adobe Acrobat Reader). However, some repositories install packages over other packages, which can cause problems and conflicts (ATrpms is especially bad at this), so avoid those; we recommend installing only RPM Fusion.
Download repository rpms and install
sudo rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
sudo rpm -Uvh http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm
Update the updater to make life easier
sudo yum -y update yum
sudo yum -y update
You will want to restart your computer when this completes.
sudo reboot
General instructions for installation and configuration of some of these packages (such as mysql) are found later in this manual. It may be faster to install them now as a group rather than individually, but it is not necessary.
If you are using an RPM based system (e.g., SuSE, Mandriva, CentOS, or Fedora) this website is good for determining the exact package name that you need. For CentOS 5, just type:
sudo yum -y install \
    python-tools python-devel python-matplotlib \
    subversion ImageMagick grace gnuplot \
    wxPython numpy scipy python-imaging \
    gcc-gfortran compat-gcc-34-g77 \
    gcc-objc fftw3-devel gsl-devel \
    mysql mysql-server MySQL-python \
    httpd php php-mysql phpMyAdmin \
    gcc-c++ openmpi-devel libtiff-devel \
    php-devel gd-devel re2c fftw3-devel php-gd \
    xorg-x11-server-Xvfb netpbm-progs \
    xorg-x11-drv-nvidia
If you have an nVidia video card and have set up RPM Fusion, install the nVidia binary driver; it will speed things up, especially for UCSF Chimera. This command works on Fedora:
sudo yum -y install nvidia-x11-drv
For CentOS you will have to download and install the nVidia driver from the nVidia website.
sudo yum clean all
sudo updatedb
sudo /sbin/chkconfig httpd on
sudo /sbin/chkconfig mysqld on
You can further configure this with the GUI and turn off unnecessary items
system-config-services
sudo reboot
< Instructions for installing Fedora on your computer | Complete Installation ^
If you have not already downloaded the Appion and Leginon files, download Myami 2.2 (contains Appion and Leginon) using one of the following options:
This is a stable supported branch from our code repository.
Change directories to the location that you would like to checkout the files to (such as /usr/local) and then execute the following command:
svn co http://ami.scripps.edu/svn/myami/branches/myami-2.2 myami/
This contains features that may still be under development. It is not supported and may not be stable. Use at your own risk.
svn co http://ami.scripps.edu/svn/myami/trunk myami/
< Check php information | Install the MRC PHP Extension >
< Install supporting packages | Perform system check >
The Dual Viewer splits the browser window to allow two instances of the Image Viewer to appear side by side. The following example shows images from two different Projects being displayed side by side. For more details see Image Viewer Overview.
Dual Viewer Screen:
This method uses multiple iterations of the Spider AP SR (reference-free) and Spider AP SH (reference-based) commands to align your particles. It is a set of batch files used for analysis of conformational flexibility of fatty acid synthase during catalysis in Brignole et al., Nature Structural & Molecular Biology 16, 190-197 (2009).
< Run Alignment | Run Feature Analysis >
< Create New Project | Edit Project Owners >
< Edit Project Description | Create a Project Processing Database >
This method relies on the central section theorem, which permits identification of identical intersecting 1D lines for all combinatorial pairs of 2D projections to assign Euler angles needed for 3D reconstruction. This method is only applicable when the specimen does not exhibit preferred orientation.
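The common-lines approach rests on the projection-slice (central section) theorem; a sketch of the statement in LaTeX follows (the notation is ours, not from the original page):

```latex
% The 2D Fourier transform of the projection of a density \rho along
% direction \mathbf{n} equals the central slice of the 3D Fourier
% transform on the plane through the origin perpendicular to \mathbf{n}:
\[
  \mathcal{F}_2\!\left[P_{\mathbf{n}}\,\rho\right](\mathbf{k})
  = \mathcal{F}_3\!\left[\rho\right](\mathbf{k}),
  \qquad \mathbf{k}\cdot\mathbf{n} = 0 .
\]
```

Any two such central planes intersect in a line through the origin (the "common line"), and locating that shared 1D line in each pair of projections constrains their relative Euler angles.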
Note: EMAN Common Lines can be accessed directly from the Appion sidebar, or by clicking on the "Run Common Lines" button displayed above class averages generated through 2D Alignment and Classification
< Ab Initio Reconstruction | Refine Reconstruction >
< Refine Reconstruction | Quality Assessment >
The user can retrieve EMDB models from the Electron Microscopy Data Bank.
To launch:
Output:
< PDB to Model | Upload Particles >
To enable or disable user authentication, run the setup wizard at http://YOUR_SERVER/myamiweb/setup.
If Appion/Leginon is configured to enable user authentication, the myamiweb user interface will require a login. When a user points a browser to http://YOUR_SERVER/myamiweb, the following login screen is displayed:
Myamiweb Login Screen
The example config file below may be out of date.
<?php
/**
 * The Leginon software is Copyright 2010
 * The Scripps Research Institute, La Jolla, CA
 * For terms of the license agreement
 * see http://ami.scripps.edu/software/leginon-license
 */
/**
 * Please visit http://yourhost/myamiwebfolder/setup
 * for automatically setup this config file for the
 * first time.
 */
require_once 'inc/config.inc';
define('WEB_ROOT', dirname(__FILE__));
// --- define myamiweb tools base --- //
define('PROJECT_NAME', "myamiweb");
define('PROJECT_TITLE', "Ambers Trunk Appion and Leginon Tools");
// --- define site base path -- //
// --- This should be changed if the myamiweb directory is located -- //
// --- in a sub-directory of the Apache web directory. -- //
// --- ex. myamiweb is in /var/www/html/applications/myamiweb/ then -- //
// --- change "myamiweb to "applications/myamiweb" -- //
define('BASE_PATH', "~amber/myamiweb");
define('BASE_URL', "/~amber/myamiweb/");
define('PROJECT_URL', "/~amber/myamiweb/project/");
// --- myamiweb login --- //
// Browse to the administration tools in myamiweb prior to
// changing this to true to populate DB tables correctly.
define('ENABLE_LOGIN', true);
// --- Administrator email title and email address -- //
define('EMAIL_TITLE', "asdfasf");
define('ADMIN_EMAIL', "amber@scripps.edu");
// --- When 'ENABLE_SMTP set to true, email will send out -- //
// --- via ADMIN_EMIL's SMTP server. -- //
define('ENABLE_SMTP', false);
define('SMTP_HOST', "");
// --- Check this with your email administrator -- //
// --- Set it to true if your SMTP server requires authentication -- //
define('SMTP_AUTH', false);
// --- If SMTP_AUTH is not required(SMTP_AUTH set to false, -- //
// --- no need to fill in 'SMTP_USERNAME' & SMTP_PASSWORD -- //
define('SMTP_USERNAME', "");
define('SMTP_PASSWORD', "");
// --- Set your MySQL database server parameters -- //
define('DB_HOST', "cronus4.scripps.edu");
define('DB_USER', ask someone);
define('DB_PASS', ask someone);
define('DB_LEGINON', "dbemdata");
define('DB_PROJECT', "project");
// --- default URL for project section --- //
define('VIEWER_URL', BASE_URL."3wviewer.php?expId=");
define('SUMMARY_URL', BASE_URL."summary.php?expId=");
define('UPLOAD_URL', BASE_URL."processing/uploadimage.php");
// --- Set cookie session time -- //
define('COOKIE_TIME', 0); //0 is never expire.
// --- defaut user group -- //
define('GP_USER', 'users');
// --- XML test dataset -- //
$XML_DATA = "test/viewerdata.xml";
// --- Set Default table definition -- //
define('DEF_PROCESSING_TABLES_FILE', "defaultprocessingtables.xml");
define('DEF_PROCESSING_PREFIX', "ap");
// --- Set External SQL server here (use for import/export application) -- //
// --- You can add as many as you want, just copy and paste the block -- //
// --- to a new one and update the connection parameters -- //
// --- $SQL_HOSTS['example_host_name']['db_host'] = 'example_host_name'; -- //
// --- $SQL_HOSTS['example_host_name']['db_user'] = 'usr_object'; -- //
// --- $SQL_HOSTS['example_host_name']['db_pass'] = ''; -- //
// --- $SQL_HOSTS['example_host_name']['db'] = 'legniondb'; -- //
$SQL_HOSTS[DB_HOST]['db_host'] = DB_HOST;
$SQL_HOSTS[DB_HOST]['db_user'] = DB_USER;
$SQL_HOSTS[DB_HOST]['db_pass'] = DB_PASS;
$SQL_HOSTS[DB_HOST]['db'] = DB_LEGINON;
// --- path to main --- //
set_include_path(dirname(__FILE__).PATH_SEPARATOR
    .dirname(__FILE__)."/project".PATH_SEPARATOR
    .dirname(__FILE__)."/lib".PATH_SEPARATOR
    .dirname(__FILE__)."/lib/PEAR");
// --- add plugins --- //
// --- uncomment to enable processing web pages -- //
addplugin("processing");
define('DEFAULT_APPION_PATH', "/ami/data00/appion/");
// --- Add as many processing hosts as you like -- //
// --- Please enter your processing host information associate with -- //
// --- Maximum number of the processing nodes -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host1.school.edu', 'nproc' => 4); -- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host2.school.edu', 'nproc' => 8); -- //
$PROCESSING_HOSTS[] = array(
    'host' => 'guppy.scripps.edu',
    'nproc' => 8,
    'nodesdef' => '2',
    'nodesmax' => '8',
    'ppndef' => '8',
    'ppnmax' => '8',
    'reconpn' => '8',
    'walltimedef' => '2',
    'walltimemax' => '2',
    'cputimedef' => '2',
    'cputimemax' => '2',
    'memorymax' => '30',
    'appionbin' => '/opt/myamisnap/bin/appion/',
    'baseoutdir' => DEFAULT_APPION_PATH,
    'localhelperhost' => 'guppy.scripps.edu',
    'dirsep' => '/'
);
$PROCESSING_HOSTS[] = array(
    'host' => 'garibaldi.scripps.edu',
    'nproc' => 8,
    'nodesdef' => '16',
    'nodesmax' => '280',
    'ppndef' => '4',
    'ppnmax' => '8',
    'reconpn' => '4',
    'walltimedef' => '240',
    'walltimemax' => '240',
    'cputimedef' => '240',
    'cputimemax' => '240',
    'memorymax' => '30',
    'appionbin' => '~bcarr/appionbin/',
    'baseoutdir' => '', //sends appion procession output to a location under the users home directory on the remote host
    'localhelperhost' => 'amibox03.scripps.edu',
    'dirsep' => '/'
);
// --- register your cluster configure file below i.e (default_cluster) --- //
// --- $CLUSTER_CONFIGS[] = 'cluster1'; -- //
// --- $CLUSTER_CONFIGS[] = 'cluster2'; -- //
//$CLUSTER_CONFIGS[] = 'guppy_cluster';
//$CLUSTER_CONFIGS[] = 'garibaldi';
//$CLUSTER_CONFIGS[] = 'test1_cluster';
//$CLUSTER_CONFIGS[] = 'test2_cluster';
// --- Microscope spherical aberration constant -- //
// --- Example : 2.0 --- //
define('DEFAULTCS', "2.0");
// --- Restrict file server if you want --- //
// --- Add your allowed processing directory as string in the array -- //
$DATA_DIRS = array();
// --- Enable Image Cache --- //
define('ENABLE_CACHE', false);
// --- caching location --- //
// --- please make sure the apache user has write access to this folder --- //
// --- define('CACHE_PATH', "/srv/www/cache/"); --- //
define('CACHE_PATH', "");
define('CACHE_SCRIPT', WEB_ROOT.'/makejpg.php');
// --- define Flash player base url --- //
define('FLASHPLAYER_URL', "/flashplayer/");
// --- define python commands - path --- //
// to download images as TIFF or JPEG
// $pythonpath="/your/site-packages";
// putenv("PYTHONPATH=$pythonpath");
// To use mrc2any, you need to install the pyami package which is part
// of myami. See installation documentation for help.
// --- define('MRC2ANY', "/usr/bin/mrc2any" --- //
define('MRC2ANY', "/usr/bin/mrc2any");
// --- Check if IMAGIC is installed and running, otherwise hide all functions --- //
define('HIDE_IMAGIC', false);
// --- Check if MATLAB is installed and running, otherwise hide all functions --- //
define('HIDE_MATLAB', false);
// --- hide processing tools still under development. --- //
define('HIDE_FEATURE', false);
// --- temporary images upload directory --- //
define('TEMP_IMAGES_DIR', "/tmp");
// --- use appion warpper --- //
define('USE_APPION_WRAPPER', true);
// --- define('APPION_WRAPPER_PATH', ""); --- //
define('APPION_WRAPPER_PATH', "/opt/myamisnap/bin/appion");
// --- sample tracking ---//
define('SAMPLE_TRACK', false);
// --- exclude projects in statistics. give a string with numbers separated by ',' ---//
// --- for example, "1,2" ---//
define('EXCLUDED_PROJECTS', "");
// --- hide processing tools still under development. --- //
define('HIDE_TEST_TOOLS', false);
$TEST_SESSIONS = array(
    'zz07jul25b',
    'zz06apr27c',
    'zz09feb12b',
    'zz09apr14b',
    'zz09feb18c'
);
?>
MySQL database usernames, Leginon/Appion username, and your linux login username are all different. Each serves its own purpose.
Database names, user, and password need to be entered during web server and sinedon.cfg setup.
In our example, we have:
purpose | example name |
database name for leginon parameters and metadata | leginondb |
database name for project management | projectdb |
database name prefix for appion processing | ap |
database user name | usr_object |
database user password | (not set) |
The username used for Leginon image viewing and Appion processing/reporting from the web is the same username registered for running Leginon.
The first name and last name used in the registration, joined as "Firstname Lastname", form the full name entered in leginon.cfg.
Relevant Topics:
Database Server Installation
Configure sinedon.cfg
Web Server Installation
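Using the example names above, a sinedon.cfg might look like the following. This is only a sketch; the exact key names and sections may differ between versions, so check the Configure sinedon.cfg page before copying it.

```ini
[global]
; hypothetical database server hostname
host: your.mysqlserver.edu
; database user name and password from the table above
user: usr_object
passwd:

[leginondata]
; database for Leginon parameters and metadata
db: leginondb

[projectdata]
; database for project management
db: projectdb
```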
Appion/Leginon assumes a specific file tree structure by default. Until the v2.2 release, this cannot be altered. Here is a description of it:
The following permission rule is required for multi-unix-user usage of Leginon/Appion:
For Leginon:
For Appion:
< Database Server Installation | Processing Server Installation >
Filter particle stack by mean and stdev values.
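Conceptually, the filter computes each particle's mean and standard deviation and keeps only the particles whose values fall inside chosen cutoffs. The numpy sketch below is illustrative only (the random stack and the 2-sigma cutoffs are made up), not the actual Appion implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
stack = rng.normal(0.0, 1.0, size=(100, 64, 64))  # fake particle stack

# Per-particle statistics.
means = stack.mean(axis=(1, 2))
stdevs = stack.std(axis=(1, 2))

# Keep particles within 2 sigma of the population distribution of each statistic.
def within(values, nsig=2.0):
    return np.abs(values - values.mean()) < nsig * values.std()

keep = within(means) & within(stdevs)
filtered = stack[keep]
print(len(filtered), "of", len(stack), "particles kept")
```

Outlier particles (ice contamination, aggregates, empty boxes) tend to have extreme means or standard deviations, which is why this simple cut is effective.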
<View Stacks | Center Particles >
Once the stack is created, go back to "Frealign Refinement"
- Step 2. Select the same stack used for the EMAN reconstruction from the "Stacks" drop down menu.
- Step 4. Make the boxsize and binning factor the same as the selected stack in step 2.
- Step 10. UNcheck the "invert image density" box if you collected ice data. Frealign requires a black on white stack; check or uncheck the box accordingly.
- Step 11. UNcheck "Ctf Correct Particle Images". Frealign requires a non-CTF-corrected stack.
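The density convention in step 10 amounts to a sign flip about the stack mean. As a minimal numpy sketch (a hypothetical helper, not an Appion function), inverting image density looks like this:

```python
import numpy as np

def invert_density(stack):
    """Flip image contrast (white-on-black <-> black-on-white).

    Reflecting values about the mean inverts the contrast while keeping
    the overall intensity level of the stack unchanged.
    """
    mean = stack.mean()
    return 2.0 * mean - stack
```

Whether you need this depends on how your data were collected; the checkbox in step 10 performs the equivalent operation inside the pipeline.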
< Refine Reconstruction|Quality Assessment >
1. Copy the attached scripts to your working directory.
2. Open and edit lines 47 - 49 of f90c_ubuntu64simple.csh to include the path to the following MRC library commands:
47: /usr/local/image2010/lib/imlib2010.a
48: /usr/local/image2010/lib/misclib.a
49: /usr/local/image2010/lib/genlib.a
3. Compile:
./f90c_ubuntu64simple.csh fastfreehand_v1_01.f90
./f90c_ubuntu64simple.csh totsumstack.f90
These will compile into fastfreehand_v1_01.exe and totsumstack.exe, respectively.
Goniometer settings are for use with the Leginon image acquisition software.
If you are not using Leginon, you may ignore the Goniometer settings. If you are using Leginon, please refer to the Leginon user manuals section on Goniometer.
The grid management tools handle registration, modification, and deletion of grid boxes and grids. A grid registered in the database can easily be associated with images. Currently, the management tool is primarily used for grids handled by grid insertion/extraction robots coupled with Leginon's "Robot-MSI screen" applications. Apart from that, only the "Manual" application has an interface for selecting grids from the database. Appion-only users can ignore this management tool.
This tool is located at http://your_myamiweb/project/gridtray.php
Grid registration starts with registering a new grid box.
After the grid is added, it can be assigned to a location on an existing grid box by choosing the box and clicking on a location not yet occupied by a grid.
< View a Summary of a Project Session
Groups are used to associate Users with common privileges.
Several default groups are included with your installation and correspond to the available privilege levels.
Group name | Description | Privilege |
---|---|---|
administrators | may view and modify all groups, users, projects and experiments in the system | All at administration level |
power users | may view and modify anything that is not specifically owned by the default Administrator User | View all but administrate owned |
users | may view and modify projects that they own and view experiments that have been shared with them | Administrate/view only owned projects and view shared experiments |
guests | may view projects owned by the user and experiments shared with the user | View owned projects and shared experiments |
Groups may be viewed and managed within the Administration tool by clicking on the Groups Icon:
A list of the available groups is displayed.
Click on a group in the list to show the group information.
Note: This feature is currently disabled.
Helical or tubular crystals, which often occur upon high-density reconstitution of membrane proteins into lipid membranes, offer some unique advantages over 2D crystals. A single image of a helical tube provides all of the information required for calculating a 3D map, and data from many tubes can be combined to improve the resolution without the need to tilt the sample in the microscope. Helical processing can be performed in real space or in Fourier space. Appion encourages the use of independent methods, as every dataset is different and therefore responds differently to various protocols. In addition, the use of multiple packages can be a tool to improve the reliability of reconstructions, as each method should converge on a similar result.
NOTE: Steps requiring user feedback/interaction are in BLUE, steps detailing what the program is doing are in GREEN, warning messages are in RED.
After completing steps 1-7 in the General Workflow, complete these additional steps:
This method clusters particles using k-means or hierarchical ascendancy according to metrics obtained via feature analysis procedures such as correspondence analysis.
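As an illustration of the first option, a bare-bones k-means over per-particle feature vectors (such as correspondence-analysis factors) can be written as follows. This is a conceptual sketch of the algorithm, not the code Appion runs:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster feature vectors (lists of floats) into k groups.

    points: list of equal-length feature vectors, one per particle.
    Returns the final cluster centers and the grouped points.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # recompute each center as the mean of its cluster (skip empty clusters)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, clusters
```

Hierarchical ascendant classification differs in that it merges clusters bottom-up rather than iterating assignments against k fixed centers, but both operate on the same feature-space coordinates produced by the feature analysis run.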
Note: "Run Particle Clustering" can be accessed directly from a feature analysis run, or via the "Run Particle Clustering" link in the Appion sidebar menu. In the latter case, you will be taken to the list of feature analyses that have been completed, where you can select the feature analysis run which you wish to cluster.
<Run Particle Clustering | Ab Initio Reconstruction >
The Hole Template Viewer tool allows Leginon users to view the templates used to find grid holes.
cp -v runAppionScript.php.template runMyProgram.php
$pub = new Publication('appion');
echo $pub->getHtmlTable(); // returns the html reference to the "appion" publication

$simpleParamsForm = new SimpleParamsForm('','','','CHECKED','','','10','30','','20','100','2','2','10','','0.8','40','3','3');
echo $simpleParamsForm->generateForm();

$pub = new Publication('appion');
echo $pub->getHtmlTable();

$simpleParamsForm = new SimpleParamsForm();
$errorMsg .= $simpleParamsForm->validate( $_POST );

/* ******************* PART 2: Create program command ******************** */
$command = "runSimpleCluster.py ";
// add run parameters
$command .= $runParametersForm->buildCommand( $_POST );
// add simple parameters
$command .= $simpleParamsForm->buildCommand( $_POST );
The current database scheme for every refinement method (both single-model and multi-model) is shown below:
database architecture for refinements
For reference, below is a diagram of the modifications to the refinement pipeline that have been performed for the refactoring. Color coding is as follows:
changes to the database architecture for refinements
The ReconUploader base class takes care of many different functions, specifically:
After you have added the new refinement method's job class, it needs to be added to the job-running agent by editing the file apAgent.py in appionlib.
Ex.

elif "newJobType" == jobType:
    jobInstance = newModuleName.NewRefinementClass(command)
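In context, the dispatch you are editing looks roughly like this sketch. Everything except the "newJobType" branch is an illustrative stand-in, not the verbatim apAgent.py source:

```python
# Sketch of how the agent maps a jobtype string to a job class.
class ExistingRefinementJob(object):
    def __init__(self, command):
        self.command = command

class NewRefinementClass(object):  # your new refinement job class
    def __init__(self, command):
        self.command = command

def createJobInstance(jobType, command):
    if jobType == "existingJobType":
        jobInstance = ExistingRefinementJob(command)
    elif "newJobType" == jobType:
        # the branch you add when registering a new refinement method
        jobInstance = NewRefinementClass(command)
    else:
        raise ValueError("unknown jobType: %s" % jobType)
    return jobInstance
```

The agent then calls methods on the returned job instance to launch and monitor the refinement.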
The script should be titled 'uploadYourPackageRefine.py'
This script performs all of the basic operations needed to upload a refinement to the database so that it can be displayed in AppionWeb. The bulk of the work is performed by the ReconUploader.py base class, which is inherited by each new uploadYourPackageRefine.py subclass script. This means that the developer's job is simply to make sure that all of the particle / package parameters are passed in a specific format. Effectively, the only things that need to be written in this script are:
def __init__(self):
    ### DEFINE THE NAME OF THE PACKAGE
    self.package = "external_package"
    super(uploadExternalPackageScript, self).__init__()

#=====================
def start(self):
    ### determine which iterations to upload; last iter is defaulted to infinity
    uploadIterations = self.verifyUploadIterations()
    ### upload each iteration
    for iteration in uploadIterations:
        for j in range(self.runparams['numberOfReferences']):
            ### general error checking, these are the minimum files that are needed
            vol = os.path.join(self.resultspath, "recon_%s_it%.3d_vol%.3d.mrc" % (self.params['timestamp'], iteration, j+1))
            particledatafile = os.path.join(self.resultspath, "particle_data_%s_it%.3d_vol%.3d.txt" % (self.params['timestamp'], iteration, j+1))
            if not os.path.isfile(vol):
                apDisplay.printError("you must have an mrc volume file in the 'external_package_results' directory")
            if not os.path.isfile(particledatafile):
                apDisplay.printError("you must have a particle data file in the 'external_package_results' directory")
            ### make chimera snapshot of volume
            self.createChimeraVolumeSnapshot(vol, iteration, j+1)
            ### instantiate database objects
            self.insertRefinementRunData(iteration, j+1)
            self.insertRefinementIterationData(iteration, j+1)
    ### calculate Euler jumps
    self.calculateEulerJumpsAndGoodBadParticles(uploadIterations)

In the single-model refinement case (example Xmipp projection-matching):
def __init__(self):
    ### DEFINE THE NAME OF THE PACKAGE
    self.package = "Xmipp"
    self.multiModelRefinementRun = False
    super(uploadXmippProjectionMatchingRefinementScript, self).__init__()

def start(self):
    ### database entry parameters
    package_table = 'ApXmippRefineIterData|xmippParams'
    ### set projection-matching path
    self.projmatchpath = os.path.abspath(os.path.join(self.params['rundir'], self.runparams['package_params']['WorkingDir']))
    ### check for variable root directories between file systems
    apXmipp.checkSelOrDocFileRootDirectoryInDirectoryTree(self.params['rundir'], self.runparams['cluster_root_path'], self.runparams['upload_root_path'])
    ### determine which iterations to upload
    lastiter = self.findLastCompletedIteration()
    uploadIterations = self.verifyUploadIterations(lastiter)
    ### upload each iteration
    for iteration in uploadIterations:
        apDisplay.printColor("uploading iteration %d" % iteration, "cyan")
        ### set package parameters, as they will appear in database entries
        package_database_object = self.instantiateProjMatchParamsData(iteration)
        ### move FSC file to results directory
        oldfscfile = os.path.join(self.projmatchpath, "Iter_%d" % iteration, "Iter_%d_resolution.fsc" % iteration)
        newfscfile = os.path.join(self.resultspath, "recon_%s_it%.3d_vol001.fsc" % (self.params['timestamp'], iteration))
        if os.path.exists(oldfscfile):
            shutil.copyfile(oldfscfile, newfscfile)
        ### create a stack of class averages and reprojections (optional)
        self.compute_stack_of_class_averages_and_reprojections(iteration)
        ### create a text file with particle information
        self.createParticleDataFile(iteration)
        ### create mrc file of map for iteration and reference number
        oldvol = os.path.join(self.projmatchpath, "Iter_%d" % iteration, "Iter_%d_reconstruction.vol" % iteration)
        newvol = os.path.join(self.resultspath, "recon_%s_it%.3d_vol001.mrc" % (self.params['timestamp'], iteration))
        mrccmd = "proc3d %s %s apix=%.3f" % (oldvol, newvol, self.runparams['apix'])
        apParam.runCmd(mrccmd, "EMAN")
        ### make chimera snapshot of volume
        self.createChimeraVolumeSnapshot(newvol, iteration)
        ### instantiate database objects
        self.insertRefinementRunData(iteration)
        self.insertRefinementIterationData(package_table, package_database_object, iteration)
    ### calculate Euler jumps
    self.calculateEulerJumpsAndGoodBadParticles(uploadIterations)
    ### query the database for the completed refinements BEFORE deleting any files ... returns a dictionary of lists
    ### e.g. {1: [5, 4, 3, 2, 1]} means 5 iters completed for refine 1
    complete_refinements = self.verifyNumberOfCompletedRefinements(multiModelRefinementRun=False)
    if self.params['cleanup_files'] is True:
        self.cleanupFiles(complete_refinements)
def __init__(self):
    ### DEFINE THE NAME OF THE PACKAGE
    self.package = "XmippML3D"
    self.multiModelRefinementRun = True
    super(uploadXmippML3DScript, self).__init__()

def start(self):
    ### database entry parameters
    package_table = 'ApXmippML3DRefineIterData|xmippML3DParams'
    ### set ml3d path
    self.ml3dpath = os.path.abspath(os.path.join(self.params['rundir'], self.runparams['package_params']['WorkingDir'], "RunML3D"))
    ### check for variable root directories between file systems
    apXmipp.checkSelOrDocFileRootDirectoryInDirectoryTree(self.params['rundir'], self.runparams['cluster_root_path'], self.runparams['upload_root_path'])
    ### determine which iterations to upload
    lastiter = self.findLastCompletedIteration()
    uploadIterations = self.verifyUploadIterations(lastiter)
    ### create ml3d_lib.doc file; somewhat of a workaround, but necessary to make projections
    total_num_2d_classes = self.createModifiedLibFile()
    ### upload each iteration
    for iteration in uploadIterations:
        ### set package parameters, as they will appear in database entries
        package_database_object = self.instantiateML3DParamsData(iteration)
        for j in range(self.runparams['package_params']['NumberOfReferences']):
            ### calculate FSC for each iteration using split selfile (selfile requires root directory change)
            self.calculateFSCforIteration(iteration, j+1)
            ### create a stack of class averages and reprojections (optional)
            self.compute_stack_of_class_averages_and_reprojections(iteration, j+1)
            ### create a text file with particle information
            self.createParticleDataFile(iteration, j+1, total_num_2d_classes)
            ### create mrc file of map for iteration and reference number
            oldvol = os.path.join(self.ml3dpath, "ml3d_it%.6d_vol%.6d.vol" % (iteration, j+1))
            newvol = os.path.join(self.resultspath, "recon_%s_it%.3d_vol%.3d.mrc" % (self.params['timestamp'], iteration, j+1))
            mrccmd = "proc3d %s %s apix=%.3f" % (oldvol, newvol, self.runparams['apix'])
            apParam.runCmd(mrccmd, "EMAN")
            ### make chimera snapshot of volume
            self.createChimeraVolumeSnapshot(newvol, iteration, j+1)
            ### instantiate database objects
            self.insertRefinementRunData(iteration, j+1)
            self.insertRefinementIterationData(package_table, package_database_object, iteration, j+1)
    ### calculate Euler jumps
    self.calculateEulerJumpsAndGoodBadParticles(uploadIterations)
    ### query the database for the completed refinements BEFORE deleting any files ... returns a dictionary of lists
    ### e.g. {1: [5, 4, 3, 2, 1], 2: [6, 5, 4, 3, 2, 1]} means 5 iters completed for refine 1 & 6 iters completed for refine 2
    complete_refinements = self.verifyNumberOfCompletedRefinements(multiModelRefinementRun=True)
    if self.params['cleanup_files'] is True:
        self.cleanupFiles(complete_refinements)
http://ami.scripps.edu/svn/myami/trunk/appion/bin/uploadXmippRefine.py (simplest)
http://ami.scripps.edu/svn/myami/trunk/appion/bin/uploadXmippML3DRefine.py (simple multi-model refinement case)
http://ami.scripps.edu/svn/myami/trunk/appion/bin/uploadEMANRefine.py (complicated, due to additional features / add-ons)
Below is a list of necessary functions; everything else is optional:
In order to utilize the ReconUploader.py base class to upload all parameters associated with the refinement, the following files must exist:
Manual picking and mask making use GUIs and require user interaction.
When running these at AMI, you can use amibox02 or amibox03.
When you ssh, you need to use the -X flag to tell the terminal to display the GUI.
ssh -X amibox02
Then put the correct path to appion in front of the command, such as /ami/sw/bin/appion.
/ami/sw/bin/appion makestack2.py --single=start.hed --selectionid=1002 --invert --normalized --maskassess=manualrun1 --boxsize=16 --description="test" --projectid=5 --preset=upload --session=10may13l35 --runname=stack7 --rundir=/ami/data00/appion/10may13l35/stacks/stack7 --no-rejects --no-wait --commit --reverse --limit=1 --continue
Just had to do this so taking some notes:
I have images that I want to upload located in my home directory.
I want to use my sandbox on the web side of appion.
I want to use the wrapper/appion snapshot/beta appion for the python parts. However you want to call it, I just need a recent version.
I want to upload the images to my own private database that is located on the fly server.
When the images are uploaded, I want them stored in my home directory rather than on /ami/data00.
Make sure sinedon.cfg is in your home directory and the host, user and password correspond to your database.
Make sure the projectdata and leginondata settings are set to the name of your db.
Make sure leginon.cfg is in your home directory and the images path is set to a folder in your home directory.
Make sure the database information matches what is found in sinedon.cfg.
Fly does not have the python parts installed.
Guppy does not have access to your home directory to get the images.
example:
/opt/myamisnap/bin/appion uploadImages.py --projectid=268 --image-dir=/home/amber/uploadedimages/pairedimages --mpix=1E-09 --type=defocalseries --images-per-series=2 --defocus-list=-1E-10,-2E-10 --mag=50000 --kv=120 --description="defocal test" --jobtype=uploadimage
From the Appion and Leginon Tools start page, select Image Viewer to view images associated with your Project Sessions in a single viewing pane.
The following screen is displayed. For more details see Image Viewer Overview.
Image Viewer Screen:
< Image Viewer Overview | 2 Way Viewer >
< Project DB | LOI - Leginon Observer Interface >
Coming soon! Working out some bugs...
< Ab Initio Reconstruction | Refine Reconstruction >
This method uses the IMAGIC M-R-A command to align your particles.
<Run Alignment | Run Feature Analysis >
Coming Soon! Working out a few bugs...
< Refine Reconstruction|Quality Assessment >
Appion provides various tools for importing 2D images as well as 3D volumes, for uses ranging from templates for alignment to 3D models for model refinement.
< CTF Estimation | Image Assessment >
File Name | Path | Server | Test | Purpose | Install Script | Test Tool | Wizard | Added to |
---|---|---|---|---|---|---|---|---|
syscheck.py | myami/leginon/ | Processing | Package Installation | tells you which versions of python and third party python packages you have installed | X | checkprocessingserver.py | ||
check.sh | myami/appion/ | Processing | Appion Installation | imports Appion libraries and runs binaries | X | X | checkprocessingserver.py | |
test1.py and test2.py | myami/leginon/ | Networking | detect problems due to a firewall or host name resolution | X | X | |||
createtestsession.py | myami/appion/test | Web | Image Viewer, Pipeline (Manual) | loads up a session filled with sample images for one to test with | X | |||
testsuite.py | myami/appion/test | Web | Pipeline | executes processing pipeline | X | X | ||
teststack.py | myami/appion/test | Web | Pipeline | reads and writes stacks | X | |||
ex1.php, ex2.php, mymrc.mrc | myami/programs/php_mrc-5.3 | Web | MRC installation | show that mrc extensions have been correctly installed | X | X | checkwebserver.php |
Server | Test | Purpose | Install Script | Test Tool | Wizard | Added to... |
---|---|---|---|---|---|---|
DB | Package Installation | X | X | |||
DB | MySQL variables | make sure user modified my.cnf | X | |||
Processing | Scipy/Numpy | make sure scipy and numpy work correctly | X | checkProcessingServer.py | ||
Processing | leginon.cfg | check that the file was created by user | X | X | checkProcessingServer.py | |
Processing | sinedon.cfg | check that the file was created by user | X | X | checkProcessingServer.py | |
Processing | EMAN Installation | check that help window is displayed | X | checkProcessingServer.py | ||
Processing | Spider Installation | launch spider | X | |||
Processing | Xmipp Installation | launch Xmipp | X | checkProcessingServer.py | ||
Web | Package Installation | X | X | checkwebserver.php | ||
Web | mrc.so | check that it exists | X | X | ||
Web | mrc tools | verify installation with info.php | X | checkwebserver.php | ||
Web | config.php | ensure there are no extra lines at the end after the php tag | X | X | checkwebserver.php |
You can run all of our troubleshooting scripts in a terminal.
cd /path/to/myami/install python troubleshooter.py
Or, you can run the individual scripts described below.
In addition to the downloads from our svn repository, there are several other requirements that you will get either from your OS installation source, or from its respective website. The system check in the Leginon package checks your system to see if you already have these requirements.
cd myami/leginon/ python syscheck.py
If python is not installed, this, of course, will not run. If you see any lines like "*** Failed...", then something is missing. Otherwise, every check should result in "OK".
setup.py
you are ready to test out appion.

cd myami/appion/
./check.sh
You need to edit leginon.cfg.
cd myami/appion/test python check3rdPartyPackages.py
Note: check3rdPartyPackages.py is currently only available with a development svn checkout; it will be included in version 2.2.
A web server troubleshooting tool is available at http://YOUR_HOST/myamiweb/test/checkwebserver.php.
You can browse to this page from the Appion and Leginon Tools home page (http://YOUR_HOST/myamiweb) by clicking on [test Dataset] and then [Troubleshoot].
This page will automatically confirm that your configuration file and PHP installation and settings are correct and point you to the appropriate documentation to correct any issues.
You may need to configure your firewall to allow incoming HTTP (port 80) and MySQL (port 3306) traffic:
$ system-config-securitylevel
Security-enhanced linux may be preventing your files from loading. To fix this run the following command:
$ sudo /usr/bin/chcon -R -t httpd_sys_content_t /var/www/html/
see this website for more details on SELinux
Make sure you have the latest version of Leginon installed. Then follow the steps below to install image processing packages on your Processing Server.
The 64bit Ace2 binary is already available in the myami/appion/bin directory.
Test it by changing directories to myami/appion/bin and type the following commands:
./ace2.exe -h ./ace2correct.exe -h
If it is working the help commands will display.
It is highly recommended to use the Ace2 binary, if it works.
< Compile FindEM | Install Imod >
1. Install the Apache Web Server with the YaST or yum utility.
2. Find "httpd.conf".
This is /etc/httpd/conf/httpd.conf on CentOS and /etc/apache2/httpd.conf on SuSE
sudo nano /etc/httpd/conf/httpd.conf
3. Edit the "httpd.conf" configuration file with the following:
DirectoryIndex index.html index.php HostnameLookups On
Note: It may be possible to edit httpd.conf in YaST2 as well.
4. If you plan to enable the web interface user login feature, the ServerName directive should be set to a resolvable host name and UseCanonicalName should be turned on. This ensures that the link provided in the email to verify user registration is valid. Follow the example below, replacing YourServer.yourdomain.edu with your server's name.
ServerName YourServer.yourdomain.edu
UseCanonicalName On
5. Restart the web server.
apachectl restart
or
sudo /sbin/service httpd restart (on CentOS/RHEL/Fedora)
or
/etc/sbin/rcapache2 restart (on SuSE)

If you want to start the web server automatically at boot:

sudo /sbin/chkconfig apache2 on  #SuSE
sudo /sbin/chkconfig httpd on  #CentOS/RHEL/Fedora
< Configure php.ini | Check php information >
NOTICE: We are in the process of releasing a new version of the installation tool. There are a few bugs right now, so you might want to hold off on using it until this notice is removed.
As of Appion/Leginon 2.1.0, a quick installation script is available. It currently installs both Appion and Leginon on a single computer running the CentOS Linux operating system. This is not intended for production systems, but it is a useful way to evaluate the software prior to undertaking a more complex installation across multiple computers. Please note that the auto-installation is intended for use with a clean installation of CentOS and might fail if you already have several other packages installed.
Follow the instructions below to install Appion and Leginon using the auto-installation script.
/usr/sbin/selinuxenabled
echo $?
python centosAutoInstallation.py
If your installation completes and you see a Welcome to Leginon window, a second Leginon window that appears blank, and a web page with the Web Tools Setup wizard stuck in a "System updating" state as shown below, you have installed CentOS with Security-Enhanced Linux enabled. You need to disable SELinux:
Once SELinux is disabled, the configuration will complete on its own.
You may need to reconfigure the database if the Tools web page is not displayed.
If you see the errors shown in the screen below, you may have previously configured a database with mysql. You will need to re-install CentOS and run the installation script again.
Since the appion package includes many executable scripts, it is important that you know where they are being installed. To prevent cluttering up the /usr/bin directory, you can specify an alternative path, typically /usr/local/bin, or a directory of your choice that you will later add to your PATH environment variable. Install appion like this:
cd /path/to/myami-VERSION/appion sudo python setup.py install --install-scripts=/usr/local/bin/appion
cd /path/to/myami-VERSION/myami sudo ./pysetup.sh install
That will install each package, and report any failures. To determine the cause of failure, see the generated log file "pysetup.log". If necessary, you can enter a specific package directory and run the python setup command manually. For example, if sinedon failed to install, you can try again like this:
cd sinedon sudo python setup.py install
The Python installer puts the packages you installed into its site-packages directory. This enables all users on the same computer to access them. The easiest way to discover where python loads your installed package from is to import a module from the package using the interactive python command line, like this:
Start the python command line from shell:
python
Import a module from the package. Let's try sinedon here. All packages installed through the above setup.py script should go to the same place.
At the python prompt (python>) type:
import sinedon
If the module is loaded successfully, calling the module attribute __path__ (two underscores before "path" and two underscores after) will return the location it was loaded from:
sinedon.__path__
['/usr/lib/python2.4/site-packages/sinedon']
In this case, /usr/lib/python2.4/site-packages/ is your python-site-package-path. If you go to that directory, you will find all the packages you just installed.
Save this value as an environment variable for use later, for bash:
export PYTHONSITEPKG='/usr/lib/python2.4/site-packages'
For csh:

setenv PYTHONSITEPKG '/usr/lib/python2.4/site-packages'
Finally, you will need to set the MATLABPATH environment variable in order to get the Appion utilities that use Matlab to work.
For bash:
export MATLABPATH=$MATLABPATH:<your_appion_directory>/ace
For csh:

setenv MATLABPATH $MATLABPATH:<your_appion_directory>/ace
< Perform system check | Configure leginon.cfg >
cd /your_download_area/myami sudo ./pysetup.sh install
That will install each package, and report any failures. To determine the cause of failure, see the generated log file "pysetup.log". If necessary, you can enter a specific package directory and run the python setup command manually. For example, if sinedon failed to install, you can try again like this:
cd /your_download_area/myami/sinedon sudo python setup.py install
Important: You need to install the current version of the Appion packages to the same location where you installed the previous version. You may have used the flag shown below (--install-scripts=/usr/local/bin) in your original installation. If you did, you need to use it this time as well. You can check whether you installed your packages there by browsing to /usr/local/bin and looking for ApDogPicker.py. If the file is there, you should use the flag. If the file is not there, you should remove the flag from the command to install Appion to the default location.
The pysetup.py script above did not install the appion package. Since the appion package includes many executable scripts, it is important that you know where they are being installed. To prevent cluttering up the /usr/bin directory, you can specify an alternative path, typically /usr/local/bin, or a directory of your choice that you will later add to your PATH environment variable. Install appion like this:
cd /your_download_area/myami/appion sudo python setup.py install --install-scripts=/usr/local/bin
EMAN1 is a fundamental package used in Appion for general file conversion and image filtering.
Click the (download) link next to EMAN1 and download the 1.9 cluster version of EMAN1, then unpack it:

tar -zxvf eman-linux-x86_64-cluster-1.9.tar.gz
sudo mv -v EMAN /usr/local/
cd /usr/local/EMAN/ ./eman-installer
For BASH, create an eman.sh and add the following lines:

export EMANDIR=/usr/local/EMAN
export PATH=${EMANDIR}/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${EMANDIR}/lib
export PYTHONPATH=${EMANDIR}/lib
For C shell, create an eman.csh and add the following lines:

setenv EMANDIR /usr/local/EMAN
setenv PATH ${EMANDIR}/bin:${PATH}
setenv LD_LIBRARY_PATH ${EMANDIR}/lib
setenv PYTHONPATH ${EMANDIR}/lib
sudo cp -v eman.sh /etc/profile.d/eman.sh
sudo chmod 755 /etc/profile.d/eman.sh

- or -

sudo cp -v eman.csh /etc/profile.d/eman.csh
sudo chmod 755 /etc/profile.d/eman.csh
You may need to log out and log back in for these changes to take place.
Run proc2d
proc2d help
A window displaying help for proc2d should pop up.
< Install External Packages | Install EMAN2 >
It is best to install EMAN2/SPARX from source, so that you do not have conflicts from having two different versions of python on your system. Binaries of EMAN2/SPARX all come with their own pre-installed python.
This documentation assumes you are using CentOS 6 (written as of CentOS 6.2)
sudo yum install fftw-devel gsl-devel boost-python numpy \ PyQt4-devel cmake ipython hdf5-devel libtiff-devel libpng-devel \ PyOpenGL ftgl-devel db4-devel python-argparse openmpi-devel
Additionally, you need to install the python-bsddb3 library (not available via yum). I just use the pypi easy_install; yum will never know.
sudo easy_install bsddb3
tar zxvf eman-source-2.06.tar.gz
cd EMAN2/src/build
cmake ../eman2/
ccmake ../eman2/
and configure all the parameters.

make
sudo make install
sudo nano /etc/profile.d/eman2.sh
export EMAN2DIR=/usr/local/EMAN2 export PATH=${EMAN2DIR}/bin:${PATH} export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${EMAN2DIR}/lib export PYTHONPATH=${EMAN2DIR}/lib:${EMAN2DIR}/bin
sudo nano /etc/profile.d/eman2.csh
setenv EMAN2DIR /usr/local/EMAN2 setenv PATH ${EMAN2DIR}/bin:${PATH} setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${EMAN2DIR}/lib setenv PYTHONPATH ${EMAN2DIR}/lib:${EMAN2DIR}/bin
wget -O pydusa-1.15-sparx.tgz \ 'http://sparx-em.org/sparxwiki/MPI-installation?action=AttachFile&do=get&target=pydusa-1.15-sparx.tgz'
tar zxvf pydusa-1.15-sparx.tgz
cd pydusa-1.15-sparx
nano configure
elif test -d ${PY_PREFIX}/lib/python$PY_VERSION/site-packages/numpy/core/include; then
  PY_HEADER_NUMPY="-I${PY_PREFIX}/lib/python$PY_VERSION/site-packages/numpy/core/include"
elif test -d ${PY_PREFIX}/lib64/python$PY_VERSION/site-packages/numpy/core/include; then
  PY_HEADER_NUMPY="-I${PY_PREFIX}/lib64/python$PY_VERSION/site-packages/numpy/core/include"
setenv MPIROOT /usr/lib64/openmpi
setenv MPIINC /usr/include/openmpi-x86_64
setenv MPILIB ${MPIROOT}/lib
setenv MPIBIN ${MPIROOT}/bin
./configure
export MPIROOT=/usr/lib64/openmpi
export MPIINC=/usr/include/openmpi-x86_64
export MPILIB=${MPIROOT}/lib
export MPIBIN=${MPIROOT}/bin
./configure
make
sudo mkdir /usr/lib64/python2.6/site-packages/mympi/
sudo touch /usr/lib64/python2.6/site-packages/mympi/__init__.py
sudo cp -v src/mpi.so /usr/lib64/python2.6/site-packages/mympi/mpi.so
sudo nano /usr/lib64/python2.6/site-packages/mpi.py
import ctypes
mpi = ctypes.CDLL('libmpi.so.1', ctypes.RTLD_GLOBAL)
from mympi.mpi import *
python: symbol lookup error: /usr/lib64/openmpi/lib/openmpi/mca_paffinity_hwloc.so: undefined symbol: mca_base_param_reg_int
python -c 'import mpi'
python -c 'import sys; from mpi import mpi_init; mpi_init(len(sys.argv), sys.argv)'
sxisac.py start.hdf
see http://blake.bcm.edu/emanwiki/EMAN2/FAQ/EMAN2_unittest
cd EMAN2/test/rt
./rt.py
see http://sparx-em.org/sparxwiki/MPI-installation
or https://www.nbcr.net/pub/wiki/index.php?title=MyMPI_Setup
This fixes this problem:
from mpi import mpi_init ImportError: No module named mpi
This module was very difficult to get working; it seems to be a poorly supported python wrapper for MPI. So, what we are going to do is compile the module, rename it, and create a wrapper: essentially a wrapper around the wrapper. We can only hope they switch to mpi4py (http://mpi4py.scipy.org/) in the future.
Appion allows you to use and pass data between multiple image processing packages from one integrated user interface. The image processing packages must be installed on your computer so that Appion can interface with them. You do not need to have all the packages installed for Appion to run, but you must have the packages installed that support the specific operations you wish to execute.
If the binary included with Appion does not work, or you wish to compile it yourself follow these instructions.
$ make
$ make test
WARNING
Only if the first part fails do you need to add the path to the libg2c.so library file; otherwise skip to the next section.
$ ls /usr/lib/gcc/`uname -i`-redhat-linux/3.4.6/libg2c.so
$ locate libg2c.so
EXLIBS=-L/usr/lib/gcc/i386-redhat-linux/3.4.6/ -lg2c
These scripts originally came from John Rubinstein at University of Toronto and have been incorporated into Appion by Michael Cianfrocco with John's permission. Please get in touch with John for specific questions regarding issues related to compiling or the execution of these scripts.
An example command (run from /home/acheng/michaelappion):

uploadstack.py --session=12aug15x --file=/home/acheng/myami/test_freeHand/stack00_dc4_sel.img \
  --apix=6.02 --diam=380 --description="Untilted stack" --commit --normalize --not-ctf-corrected \
  --rundir=/ami/data00/appion/12aug15x/stacks/stack4 --runname=stack4 --projectid=371 --expid=10286 \
  --jobtype=uploadstack --synctype=tilt --syncstack=2
The Grigorieff lab at Brandeis provides several individual programs that are used in Appion. See their main software page to download the software.
IMOD is used for tomography processing developed primarily by David Mastronarde, Rick Gaudette, Sue Held, Jim Kremer, and Quanren Xiong at the Boulder Laboratory for 3-D Electron Microscopy of Cells.
Go to http://bio3d.colorado.edu/imod/ for download and installation instruction.
To use it with Appion, its bin directory needs to be in the user's path and its lib directory in LD_LIBRARY_PATH, in addition to the other IMOD environment variables set in a typical installation.
Appion scripts create and run vms-style IMOD command files like those eTomo produces; they do not use the GUIs (eTomo, 3dmod, etc.) from the IMOD package.
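Following the same profile.d pattern used for EMAN and SPIDER elsewhere in this guide, an environment script for IMOD might look like the sketch below. The install location /usr/local/IMOD is an assumption; a standard IMOD installer normally writes an equivalent script itself, so only create this if yours did not.

```shell
# /etc/profile.d/imod.sh -- sketch only; point IMOD_DIR at your actual install
export IMOD_DIR=/usr/local/IMOD
export PATH=${IMOD_DIR}/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${IMOD_DIR}/lib
```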
The following describes how we did myami-2.2 installation on guppy running CentOS 6.
[root@guppy opt]# cd /opt
[root@guppy opt]# ln -s myami-2.2 myami
You are not required to install phpMyAdmin for Appion or Leginon, however, it is a useful tool for interfacing with the mysql databases.
Name: | Download site: | yum package name | SuSE rpm name |
---|---|---|---|
PHP | http://php.net/downloads.php | php | |
php-mysql | | php-mysql | |
If you have not already installed phpMyAdmin, do so. The yum installation is:
sudo yum -y install phpMyAdmin
Edit the phpMyAdmin config file /etc/phpMyAdmin/config.inc.php
and change the following lines:
sudo nano /etc/phpMyAdmin/config.inc.php
$cfg['Servers'][$i]['AllowRoot'] = FALSE;
$cfg['Servers'][$i]['host'] = 'mysqlserver.INSTITUTE.EDU';
Edit the phpMyAdmin apache config file /etc/httpd/conf.d/phpMyAdmin.conf
and change the following lines:
sudo nano /etc/httpd/conf.d/phpMyAdmin.conf
<Directory /usr/share/phpMyAdmin/>
   order deny,allow
   deny from all
   allow from 127.0.0.1
   allow from YOUR_IP_ADDRESS
</Directory>
Note: If you want to access phpMyAdmin from another computer, you can also add it to this config file with an allow from tag.
Next restart the web server to pick up the new settings:
sudo /sbin/service httpd restart
To test the phpMyAdmin configuration, point your browser to http://YOUR_IP_ADDRESS/phpMyAdmin or http://localhost/phpMyAdmin and login with the usr_object user.
A common problem is that the firewall may be blocking access to the web server and mysql server. On CentOS/Fedora you can configure this with the system config:
system-config-securitylevel
Firewall configuration is specific to different Unix distributions, so consult a guide on how to do this on non-RedHat machines.
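On CentOS the same can be done non-interactively with iptables; this sketch opens the standard HTTP and MySQL ports (80 and 3306 are the defaults; adjust if your servers listen elsewhere, and note that opening MySQL to the network may not be appropriate for your site).

```shell
# Allow incoming HTTP (web server) and MySQL traffic, then persist the rules.
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
sudo /sbin/service iptables save
```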
The "protomo" package contains programs and shell scripts for electron tomography of thin specimens, developed by H. Winkler et al. at Florida State University.
References:
H. Winkler and K.A. Taylor, Ultramicroscopy 106, 240-254, 2006.
K.A.Taylor, J.Tang, Y.Cheng and H.Winkler, J. Struct. Biol. 120, 372-386, 1997.
The package is available at
http://www.electrontomography.org/software.html
To use it with Appion, the environment variable PROTOMOROOT needs to be set to the installed package, and its bin and (x86_64/bin or i686/bin) directories need to be in the user's path.
Appion scripts do not use the user interface that comes with the package, just the shell scripts and the programs they call.
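As a sketch, an environment script satisfying the requirements above might look like this (the install path is an assumption; uname -m picks the x86_64 or i686 subdirectory mentioned above):

```shell
# /etc/profile.d/protomo.sh -- example only; PROTOMOROOT must point at your install
export PROTOMOROOT=/usr/local/protomo
export PATH=${PROTOMOROOT}/bin:${PROTOMOROOT}/$(uname -m)/bin:${PATH}
```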
Get the libs and devel of these packages:
Try yum:
yum search ...
To test:
tomoalign-gui -help
This is the current, but not yet officially released version:
Marker-free tilt series alignment:
http://www.sb.fsu.edu/~winkler/protomo/protomo-2.2.0.tar.bz2
http://www.sb.fsu.edu/~winkler/protomo/protomo-users-guide-2.0.12.pdf
Tutorial:
http://www.sb.fsu.edu/~winkler/protomo/protomo-tutorial-2.0.12.pdf
http://www.sb.fsu.edu/~winkler/protomo/protomo-tutorial-2.0.12.tar.bz2
3rd-party libraries:
http://www.sb.fsu.edu/~winkler/protomo/deplibs.tar.bz2
mkdir /sw/packages/SIMPLE
cd /sw/packages/SIMPLE
mkdir downloads
cd downloads
wget http://simple.stanford.edu/binaries/simple_linux_120521.tar.gz
cd ..
tar zxf downloads/simple_linux_120521.tar.gz
cd simple_linux_120521
./simple_config.pl

Type "local" when prompted.
This actually modifies the files in simple_linux_120521 to configure them with the correct path. This cannot later be modified by running ./simple_config.pl again on the same directory, so if you want to install somewhere other than /sw/packages/SIMPLE/simple_linux_120521, then you have to unpack a new copy from the tar.gz and run simple_config.pl once it is in the new location.
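In other words, moving an already-configured tree is not supported; a relocation looks like this sketch (the target path /new/location is an example):

```shell
# Re-unpack a pristine copy at the new location and configure it there.
mkdir -p /new/location
cd /new/location
tar zxf /sw/packages/SIMPLE/downloads/simple_linux_120521.tar.gz
cd simple_linux_120521
./simple_config.pl   # answer "local" when prompted, as before
```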
To set up your environment to use the executables in this new SIMPLE installation, you need both the "apps" and "bin" subdirectories in your PATH. For example:
export PATH=$PATH:/sw/packages/SIMPLE/simple_linux_120521/apps:/sw/packages/SIMPLE/simple_linux_120521/bin
WARNING: I notice that some commands in the bin directory are fairly generic names, so just be aware of this when adding this to PATH. For example, there is a command "align", so hopefully there is no other "align" command which conflicts with this.
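One way to check for such collisions before committing to the new PATH is a small helper like this (check_conflict is a hypothetical name, not part of SIMPLE; it assumes bash, since type -a is a bash builtin):

```shell
#!/bin/bash
# Report whether a command name resolves to more than one place on PATH.
check_conflict() {
  local hits
  hits=$(type -a "$1" 2>/dev/null | wc -l)
  if [ "$hits" -gt 1 ]; then
    echo "warning: $1 found in $hits places on PATH"
  else
    echo "$1: no conflict"
  fi
}

check_conflict align
```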
The Wadsworth Institute provides detailed documentation on how to install SPIDER on various systems. Below we cover our way to get it working on your system.
Most of our SPIDER scripts were originally designed around SPIDER v14 and v15, but we are diligently working toward compatibility with SPIDER v18. That said, you are probably best off using the newest version of SPIDER (v18.10 as of May 2010) and then reporting any bugs to us.
tar -zxvf spiderweb.18.10.tar.gz
The archive will create 3 folders: spider, spire, and web. At this time only the spider program is used within Appion, you can safely ignore web and spire.
sudo mv -v spider /usr/local/
cd /usr/local/spider/bin
ls spider*

spider_linux           spider_linux_mp_intel64  spider_linux_mp_opt64  spider_osx_64_pgi
spider_linux_mp_intel  spider_linux_mpi_opt64   spider_osx_32_pgi
binary file | system information |
---|---|
spider_linux | AMD/Intel 32 (single processor) |
spider_linux_mp_intel | AMD/Intel 32 (multiple processors) |
spider_linux_mp_opt64 | AMD Opteron 64 (multiple processors) |
spider_linux_mp_intel64 | Intel xeon 64 (multiple processors) |
spider_linux_mpi_opt64 | AMD Opteron 64 (for MPI use) |
spider_osx_32_pgi | Intel Apple 32 bit (multiple processors) |
spider_osx_64_pgi | Intel Apple 64 bit (multiple processors) |
file spider_linux_mp_intel64

spider_linux_mp_intel64: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), statically linked, for GNU/Linux 2.6.4, not stripped
spider_linux_mp_intel for 32bit Intel/AMD systems
spider_linux_mp_opt64 for 64bit Intel/AMD systems

sudo ln -sv /usr/local/spider/bin/spider_xxxxx /usr/local/bin/spider
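The choice can be scripted from uname; this sketch maps the common machine types to the table above (the x86_64 case assumes the intel64 build, which in practice runs on Opteron as well; set SPIDER_BIN by hand if you want the opt64 or MPI build):

```shell
#!/bin/sh
# Pick a SPIDER binary name based on the machine architecture.
case "$(uname -m)" in
  x86_64)    SPIDER_BIN=spider_linux_mp_intel64 ;;
  i386|i686) SPIDER_BIN=spider_linux_mp_intel ;;
  *)         SPIDER_BIN=spider_linux ;;  # single-processor fallback
esac
echo "selected: $SPIDER_BIN"
# then: sudo ln -sv /usr/local/spider/bin/$SPIDER_BIN /usr/local/bin/spider
```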
For BASH, create a spider.sh and add the following lines:

export SPIDERDIR=/usr/local/spider
export SPMAN_DIR=${SPIDERDIR}/man/
export SPPROC_DIR=${SPIDERDIR}/proc/
export SPBIN_DIR=${SPIDERDIR}/bin/
export PATH=$PATH:${SPIDERDIR}/bin
For C shell, create a spider.csh and add the following lines:

setenv SPIDERDIR /usr/local/spider
setenv SPMAN_DIR ${SPIDERDIR}/man/
setenv SPPROC_DIR ${SPIDERDIR}/proc/
setenv SPBIN_DIR ${SPIDERDIR}/bin/
setenv PATH $PATH:${SPIDERDIR}/bin
sudo cp -v spider.sh /etc/profile.d/spider.sh
sudo chmod 755 /etc/profile.d/spider.sh

- or -

sudo cp -v spider.csh /etc/profile.d/spider.csh
sudo chmod 755 /etc/profile.d/spider.csh
You may need to log out and log back in for these changes to take place.
spider bat/spi
 \__`O O'__/     SPIDER -- COPYRIGHT
 ,__xXXXx___     HEALTH RESEARCH INC., ALBANY, NY.
  __xXXXx__
  / /xxx\ \      VERSION: UNIX 18.10   ISSUED: 03/23/2010
 /         \     DATE: 13-MAY-2010 AT 09:32:42

 Results file: results.bat.0

 Running: spider

 .OPERATION:
EN D
**** SPIDER NORMAL STOP ****
This installation occurs on the web server.
First, install the following prerequisites:
Name: | Download site: | CentOS yum package name | Fedora yum package name | SuSE rpm name |
---|---|---|---|---|
php devel | http://www.php.net | php-devel | php-devel | |
libssh2 devel | http://www.libssh2.org | libssh2-devel (found in epel repo) | libssh2-devel | |
SSH PECL extension | http://www.php.net/manual/en/ssh2.installation.php | - | php-pecl-ssh2 |
For newer systems the extension is available through the repository; e.g., on Fedora 12 type:

sudo yum install php-pecl-ssh2
This setup is almost identical to the Install the MRC PHP Extension. See http://www.php.net/manual/en/ssh2.installation.php for more information.
wget http://pecl.php.net/get/ssh2-0.11.3.tgz
tar zxvf ssh2-0.11.3.tgz
cd ssh2-0.11.3
phpize
./configure
make
sudo make install
sudo touch /etc/php.d/ssh2.ini
sudo chmod 666 /etc/php.d/ssh2.ini
echo "; Enable ssh2 extension module" > /etc/php.d/ssh2.ini
echo "extension=ssh2.so" >> /etc/php.d/ssh2.ini
sudo chmod 444 /etc/php.d/ssh2.ini
cat /etc/php.d/ssh2.ini
sudo /sbin/service httpd restart
Using the Required Supporting Packages table below, install any missing prerequisite packages by following the instructions for your specific Linux distribution.
For example, SUSE users can use YaST to install them; RedHat and CentOS users can use yum; Debian and Ubuntu use apt-get.
We highly recommend using pre-built binary packages to install the programs. Installing from source can quickly become a nightmare! See also the previous page, Instructions_for_installing_CentOS_on_your_computer for Red Hat based systems.
Name: | Download site: | yum package name | SuSE rpm name |
---|---|---|---|
Python 2.4 or newer | http://www.python.org | python | python-devel |
wxPython 2.5.2.8 or newer | http://www.wxpython.org | wxPython | python-wxGTK |
MySQL Python client 1.2 or newer | http://sourceforge.net/projects/mysql-python | MySQL-python | python-mysql |
Python Imaging Library (PIL) 1.1.4 or newer | http://www.pythonware.com/products/pil/ | python-imaging | python-imaging |
NumPy 1.0.1 or newer | http://numpy.scipy.org/ | numpy | numpy |
SciPy 0.5.1 (tested, others may work)* | http://www.scipy.org | scipy | python-scipy |
If you use Python 2.4 or earlier, you also need to install the PyXML module. For more recent versions of Python, it is already included in the main Python package.
For CentOS, see Download additional Software page, if you have trouble finding these packages.
You can test your numpy and scipy installs with their built-in test functions:
python -c 'import numpy; numpy.test(level=1)'
python -c 'import scipy; scipy.test(level=1)'
NumPy is more stable and its tests should pass. Expect to see lots of errors with scipy.
You can successfully install most of these packages on a Mac by downloading DMG files and clicking on the install programs, but this is not for the novice Mac user. Be warned: wxPython problems on the Mac will make your life difficult when using the Leginon GUI. Don't make your Mac the only processing server if Leginon is what you will mainly use.
Download the Mac OS X Installer Disk Image (v2.6 recommended) from http://python.org/download/ and install the newer version of python.
MRC Tools is installed as a php extension and is required for displaying mrc files live on the web browser.
Note: The MRC PHP Extension is not compatible with PHP 5.3 and greater. For this reason, Appion/Leginon version 3.0 and greater no longer require the MRC PHP extension. Appion/Leginon versions older than 3.0 still require the MRC PHP extension as well as PHP 5.2.x.
You may find installation information for the following packages under Install Web Server Prerequisites.
You can check whether php-devel is installed by typing:
phpize
Do not worry about any error message as long as the command is found.
Make sure that php-GD and FFTW 3 devel libraries are installed. Visit or refresh http://yourhost/info.php which you created earlier. It should have a section looking like this:
Note:
MRCtools are compiled and added to the php extensions with the php-devel package. MRCtools use GD and FFTW3, which need their development libraries present while the extension is compiled. If GD and FFTW3 sources were downloaded and compiled directly on your computer, these development files are included. If (as in most cases) GD and FFTW3 are installed from rpm, they are not included, and an error message will appear when you attempt to compile mrctools. In that case, you will need to download and install GD-devel and FFTW3-devel separately. Search http://rpmfind.net/linux/rpm2html/ for GD-devel and FFTW3-devel for the rpm distribution needed for your system. More information on the gd library can be found here. If you find that you can only view images as png instead of jpg, it may be that you do not have gd jpeg support installed.
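You can ask PHP directly what its gd build supports; gd_info() is a standard PHP function, so a one-liner like this (run on the web server) will show whether jpeg support is present:

```shell
php -r 'print_r(gd_info());' | grep -i jpeg
```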
cd myami/programs/php_mrc
phpize
./configure
make
sudo make install
The mrc.so module is installed in the php module directory (e.g., /usr/lib64/php/modules on 64bit CentOS/RHEL/Fedora). If you are unsure where the php module directory is, use http://localhost/info.php to find it under extension_dir.

ls /usr/lib64/php/modules
mrc.so

On SuSE:

ls /usr/lib64/php5/extensions
mrc.so
Create an mrc.ini in /etc/php.d so the extension is enabled without editing php.ini:

cd /etc/php.d
sudo touch /etc/php.d/mrc.ini
sudo chmod 666 mrc.ini
echo "; Enable mrc extension module" > mrc.ini
echo "extension=mrc.so" >> mrc.ini
sudo chmod 444 mrc.ini
cat mrc.ini
extension=mrc.so
#SuSE
/etc/init.d/apache2 restart

#CentOS
sudo /sbin/service httpd restart
sudo reboot
1. find in the info.php web page the location of "additional .ini files parsed" in the first table (such as /etc/php.d/conf.d/*).
2. Go to the directory and make a copy of any ini file to use as a template for mrc.ini
> cd [additional_ini_directory]
> cp gd.ini mrc.ini
3. Edit mrc.ini to the following
; comment out next line to disable mrc extension in php
extension=mrc.so
4. Comment out the mrc extension from php.ini (found at /etc/php.ini on a typical PHP installation)
;extension=mrc.so
5. Restart your webserver
> /etc/init.d/httpd restart
In the myami/php_mrc (or myami/programs/php_mrc if installing from trunk) directory, you will find two test scripts, "ex1.php" and "ex2.php" and a test MRC image "mymrc.mrc".
Copy them to your top level web directory (for example on CentOS: /var/www/html/):
cd myami/programs/php_mrc sudo cp ex1.php ex2.php mymrc.mrc /var/www/html/
Run the scripts with the following commands and visit the corresponding pages from the web server:
The expected results are shown below. If you get the same images, you've installed the extension properly.
Note: the "display" command is part of the ImageMagick package, which you may have to install.
web server: http://localhost/ex1.php
php -q ex1.php | display
web server: http://localhost/ex2.php
php -q ex2.php | display
Test files work but images not showing up in the ImageViewers?
Here's one way this was fixed.
Install Leginon and Appion web tools for viewing images and performing image processing through the web server.
TODO: put the prereqs for this in Web Preq page rather than linking to the processing page.
If you have not yet installed Leginon/Appion python packages on this server, the web interface will at least need the myami/pyami package to do MRC to JPEG conversion. First install the supporting packages. Then install myami/pyami as follows:
cd myami/pyami sudo python setup.py install
which mrc2any
You will need to know that location when configuring below.
Example:
cd myami
#CentOS example
sudo cp -vr myamiweb /var/www/html/
#this is temporary for setup, revert to 755 when finished with this page
sudo chmod 777 /var/www/html/myamiweb
#if you have SELinux enabled this command will help
sudo chcon -R --type=httpd_sys_content_t /var/www/html
There is a setup wizard available to help you set the configuration parameters for your installation. If you prefer not to use the wizard, there are instructions for manually editing the configuration file. If this is your first time creating the web tool configuration file, we recommend using the setup wizard.
The setup wizard will check your database connection, create required database tables, and perform default data initialization.
sudo egrep -iw --color=auto '^(user|group)' /etc/httpd/conf/httpd.conf
Go to Install the Web Interface Advanced for the advanced configuration.
sudo chmod 755 /var/www/html/myamiweb
Visit http://yourhost/myamiweb or http://localhost/myamiweb to confirm functionality.
You may also browse to the automatic web server troubleshooter at: http://localhost/myamiweb/test/checkwebserver.php
Edit the following items in php.ini (found as /etc/php.ini on CentOS and /etc/php5/apache2/php.ini on SuSE) so that they look like the following:
display_errors = Off
Note: You may skip this section if you configured your installation with the setup wizard at http://localhost/myamiweb/setup.
Copy config.php.template to config.php and edit the latter by adding these parameters:
"config.php" should be located in /var/www/html/myamiweb/ on CentOS and /srv/www/htdocs/myamiweb/ on SuSE.
define('BASE_PATH',"myamiweb");
// Browse to the administration tools in myamiweb prior to // changing this to true to populate DB tables correctly. define('ENABLE_LOGIN', false);
define('EMAIL_TITLE', 'The name of your institute'); define('ADMIN_EMAIL', "example@institute.edu");
define('ENABLE_SMTP', false); define('SMTP_HOST', 'mail.institute.edu'); //your smtp host
// --- Check this with your email administrator -- // // --- Set it to true if your SMTP server requires authentication -- // define('SMTP_AUTH', false); // --- If SMTP_AUTH is not required(SMTP_AUTH set to false, -- // // --- no need to fill in 'SMTP_USERNAME' & SMTP_PASSWORD -- // define('SMTP_USERNAME', ""); define('SMTP_PASSWORD', "");
define('DB_HOST', ""); // DB Host name define('DB_USER', ""); // DB User name define('DB_PASS', ""); // DB Password define('DB_LEGINON', ""); // Leginon database name define('DB_PROJECT', ""); // Project database name
// --- Enable Image Cache --- // define('ENABLE_CACHE', true); // --- caching location --- // // --- please make sure the apache user has write access to this folder --- // define('CACHE_PATH', '/srv/www/cache/'); define('CACHE_SCRIPT', WEB_ROOT.'/makejpg.php');
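The directory named in CACHE_PATH must exist and be writable by the web server. A sketch for CentOS, where apache typically runs as user apache (the user/group name is an assumption; the httpd.conf egrep command shown elsewhere on this page reveals the real one):

```shell
sudo mkdir -p /srv/www/cache
sudo chown apache:apache /srv/www/cache
sudo chmod 755 /srv/www/cache
```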
addplugin("processing");
// Check if IMAGIC is installed and running, otherwise hide all functions define('HIDE_IMAGIC', false); // hide processing tools still under development. define('HIDE_FEATURE', true);
$PROCESSING_HOSTS[] = array(
  'host' => 'LOCAL_CLUSTER_HEADNODE.INSTITUTE.EDU', // for a single computer installation, this can be 'localhost'
  'nproc' => 32,            // number of processors available on the host, not used
  'nodesdef' => '4',        // default number of nodes used by a refinement job
  'nodesmax' => '280',      // maximum number of nodes a user may request for a refinement job
  'ppndef' => '32',         // default number of processors per node used for a refinement job
  'ppnmax' => '32',         // maximum number of processors per node a user may request for a refinement job
  'reconpn' => '16',        // recons per node, not used
  'walltimedef' => '48',    // default wall time in hours that a job is allowed to run
  'walltimemax' => '240',   // maximum hours in wall time a user may request for a job
  'cputimedef' => '1536',   // default cpu time in hours a job is allowed to run (wall time x number of cpu's)
  'cputimemax' => '10000',  // maximum cpu time in hours a user may request for a job
  'memorymax' => '',        // the maximum memory a job may use
  'appionbin' => 'bin/',    // the path to the myami/appion/bin directory on this host
  'appionlibdir' => 'appion/', // the path to the myami/appion/appionlib directory on this host
  'baseoutdir' => 'appion', // the directory that processing output should be stored in
  'localhelperhost' => '',  // a machine that has access to both the web server and the processing host file systems to copy data between the systems
  'dirsep' => '/',          // the directory separator used by this host
  'wrapperpath' => '',      // advanced option that enables more than one Appion installation on a single machine, contact us for info
  'loginmethod' => 'SHAREDKEY', // Appion currently supports 'SHAREDKEY' or 'USERPASSWORD'
  'loginusername' => '',    // if this is not set, Appion uses the username provided by the user in the Appion Processing GUI
  'passphrase' => '',       // if this is not set, Appion uses the password provided by the user in the Appion Processing GUI
  'publickey' => 'rsa.pub', // set this if using 'SHAREDKEY'
  'privatekey' => 'rsa'     // set this if using 'SHAREDKEY'
);
// --- Please enter your processing host information --- //
// --- with the maximum number of processing nodes  --- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host1.school.edu', 'nproc' => 4); --- //
// --- $PROCESSING_HOSTS[] = array('host' => 'host2.school.edu', 'nproc' => 8); --- //
// $PROCESSING_HOSTS[] = array('host' => '', 'nproc' => );
$DEFAULTCS = "2.0";
We will not include the cluster registration now. It is covered in the last part of this document.
Go back to Install the Web Interface
There are two main versions available for Linux: the normal version and the headless version.

If you plan on using the install computer also as a desktop computer (i.e., you want to open MRC files and manipulate them in UCSF Chimera), then you should install version 1.5.3, which is known to work both in desktop mode and in background mode for generating images without opening a window.
chmod 755 chimera-1.5.3-linux_x86_64.exe
./chimera-1.5.3-linux_x86_64.exe
Install to the default location (/usr/local/chimera) and let it install all of its files, then create a symlink:

ln -s /usr/local/chimera/bin/chimera /usr/local/bin/chimera
1.2509.

On May 6, 2010, the UCSF Chimera team released a working headless version of the program. The headless version runs the program but does not allow any interaction from the user. This version is ideal for servers, because it allows UCSF Chimera to create images of your molecule without having to install X windows.
chmod 755 chimera-1.6.2-linux_x86_64_osmesa.exe
./chimera-1.6.2-linux_x86_64_osmesa.exe
Install to the default location (/usr/local/chimera) and let it install all of its files, then create a symlink:

ln -s /usr/local/chimera/bin/chimera /usr/local/bin/chimera
The only way to test if UCSF Chimera is working within Appion is to have Appion completely installed.
The myamiweb files are mostly php scripts that run at the web server. PHP, PHP-devel, gd, and fftw3 packages are required before installation of myamiweb and the mrc extension that handles the display of mrc files. Some of these packages may be found on the SuSE Linux DVD or included in common package repository. MySQL and the Apache Web Server can be downloaded from their respective websites.
CentOS> sudo yum install php-gd
SuSE 10.2 and above> zypper install php-gd
Prerequisite packages for myamiweb:
Name: | Download site: | yum package name | SuSE rpm name |
---|---|---|---|
Apache | www.apache.org | httpd | apache2 |
php | www.php.net | php | php |
php-devel* | rpmfind.net/linux/RPM/Development_Languages_PHP.html | php-devel | php-devel |
php-mysql* | rpmfind.net/linux/RPM/Development_Languages_PHP.html | php-mysql | php-mysql |
php-gd | www.php.net/gd (Use gd2) | php-gd | php-gd |
fftw3 library (including development libraries and header *) | www.fftw.org (Use fftw3.x) | fftw3-devel | fftw3-devel |
libssh2 developmental libraries | http://www.libssh2.org | libssh2-devel | |
phpMyAdmin (optional) | http://www.phpmyadmin.net | phpMyAdmin | |
GCC, the GNU Compiler Collection | http://gcc.gnu.org | gcc | |
Apache SSL module | mod_ssl |
#CentOS
sudo yum install \
  php-gd gcc phpMyAdmin libssh2-devel php-pecl-ssh2 \
  mod_ssl httpd php-mysql php-devel php fftw3-devel
Note: There are additional requirements for the Redux image server
The Biocomputing Unit at the Spanish National Center of Biotechnology (CNB-CSIC) provides detailed documentation on how to install Xmipp on various systems. Below we cover our way to get it working on your system.
Name: | Download site: | CentOS package name | Fedora package name | SuSE rpm name |
---|---|---|---|---|
gcc-c++ | gcc-c++ | |||
openmpi-devel | openmpi-devel | |||
libtiff-devel | libtiff-devel | |||
libjpeg-devel | libjpeg-devel | libjpeg-turbo-devel | ||
zlib-devel | zlib-devel |
We recommend installing Xmipp from source to properly use the openmpi libraries that allow you to run on multiple processors.
tar zxvf Xmipp-2.4-src.tar.gz
Alternatively, you may download from the svn repo:
svn co http://newxmipp.svn.sourceforge.net/svnroot/newxmipp/tags/release-2.4/xmipp/ Xmipp-2.4-src
As of Feb 2012, this was required to compile the 2.4 source code.
locate libmpi.so
/usr/lib/openmpi/1.2.7-gcc/lib/libmpi.so
Note: If you can not find the openmpi directory, make sure you have installed the openmpi package. The installation on CentOS using yum is: yum -y install openmpi-devel.
export PATH=$PATH:/usr/lib64/openmpi/1.3.2-gcc/bin
./scons.configure \
    MPI_LIBDIR=/usr/lib/openmpi/1.2.7-gcc/lib/ \
    MPI_INCLUDE=/usr/lib/openmpi/1.2.7-gcc/include/ \
    MPI_LIB=mpi
Note: If you are installing Xmipp on x86_64 CentOS 6, you can use the following commands instead.
export PATH=/usr/lib64/openmpi/bin:$PATH
./scons.configure \
    MPI_LIBDIR=/usr/lib64/openmpi/lib \
    MPI_INCLUDE=/usr/lib64/openmpi/include \
    MPI_LIB=mpi
* Checking for MPI ... yes
./scons.compile
/usr/local
sudo mv -v Xmipp-2.4-src /usr/local/Xmipp
export XMIPPDIR=/usr/local/Xmipp
export PATH=${XMIPPDIR}/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${XMIPPDIR}/lib
setenv XMIPPDIR /usr/local/Xmipp
setenv PATH ${XMIPPDIR}/bin:${PATH}
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${XMIPPDIR}/lib
sudo cp -v xmipp.sh /etc/profile.d/
sudo chmod 755 /etc/profile.d/xmipp.sh
- or -
sudo cp -v xmipp.csh /etc/profile.d/
sudo chmod 755 /etc/profile.d/xmipp.csh
You may need to log out and log back in for these changes to take place, or source the environment script:
source /etc/profile.d/xmipp.sh
Test Xmipp by running the ml_align2d program:
xmipp_ml_align2d -h
2104:Argument -i not found or invalid argument
File: libraries/data/args.cpp line: 502
Usage: ml_align2d [options]
    -i <selfile>            : Selfile with input images
    -nref <int>             : Number of references to generate automatically (recommended)
    OR -ref <selfile/image> : OR selfile with initial references/single reference image
  [ -o <rootname> ]         : Output rootname (default = "ml2d")
  [ -mirror ]               : Also check mirror image of each reference
  [ -fast ]                 : Use pre-centered images to pre-calculate significant orientations
  [ -thr <N=1> ]            : Use N parallel threads
  [ -more_options ]         : Show all possible input parameters
< Install SPIDER | Install UCSF Chimera >
If you have a new computer(s) for your Leginon/Appion installation, we recommend installing CentOS because it is considered to be more stable than other varieties of Linux.
CentOS is the same as Red Hat Enterprise Linux (RHEL), except that it is free and supported by the community.
We have the most experience with installation on CentOS, and this installation guide has specific instructions for the process.
see Linux distribution recommendation for more.
Latest version tested at NRAMM: CentOS 5.8
Note: All formally released versions of Appion (versions 1.x and 2.x) run on CentOS 5.x. Appion developers, please note that the development branch of Appion is targeting CentOS 6.x and Appion 3.0 will run on CentOS 6.x.
Perform a SHA1SUM confirmation:
sha1sum CentOS-5.8-i386-bin-DVD-1of2.iso
The result should be the same as in the sha1sum file provided by CentOS. This is found at the same location you downloaded the .iso file.
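If you prefer to script the checksum comparison, the same verification can be done with Python's standard hashlib module. This is a sketch; the filename and expected digest in the comment are placeholders for the values from CentOS's published sha1sum file:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-1 digest of a file, reading in chunks to limit memory use."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the value from CentOS's sha1sum file (placeholder digest shown):
# expected = "0123456789abcdef..."
# assert sha1_of_file("CentOS-5.8-i386-bin-DVD-1of2.iso") == expected
```

Reading in chunks matters here because a DVD image is several gigabytes and should not be loaded into memory at once.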
For example:
Use dvdrecord in Linux to burn disk.
dvdrecord -v -dao gracetime=10 dev=/dev/dvd speed=16 CentOS-5.8-i386-bin-DVD-1of2.iso
Note: This step is optional, however you will need root access to complete the Appion Installation.
Make sure you have root permission.
Open the file in an editor. ex. vi /etc/sudoers
Look for the line: root ALL=(ALL) ALL.
Add this line below the root version:
your_username ALL=(ALL) ALL
Logout and log back in with your username.
The CentOS installation is complete.
< Select Linux distribution to use | Download additional Software >
If you have a new computer(s) for your Leginon/Appion installation, Fedora is a cutting-edge system that has all the latest and greatest features.
While Fedora is great for a desktop computer, it can be a hassle for a server. Fedora recommends upgrading the system every 6 months and all but requires an upgrade every year. Fedora always has the latest versions, but with servers you generally want things to remain the same for longer periods of time. If you want something more stable, we recommend installing CentOS. See Instructions for installing CentOS on your computer.
Fedora is a cutting edge distribution produced by a community of programmers that is maintained by Red Hat.
wget -c http://download.fedoraproject.org/pub/fedora/linux/releases/13/Fedora/x86_64/iso/Fedora-13-x86_64-DVD.iso
wget -c http://download.fedoraproject.org/pub/fedora/linux/releases/13/Fedora/i386/iso/Fedora-13-i386-DVD.iso
Perform a SHA256SUM confirmation:
sha256sum Fedora-13-x86_64-DVD.iso
dab657e1475832129ea3af67fd1c25c91b60ff1acc9147fb9b62cef14193f8d2
The result should be the same as in the sha256sum file provided by Fedora:
Use dvdrecord in Linux to burn disk
dvdrecord -v -dao gracetime=10 dev=/dev/dvd speed=16 Fedora-13-x86_64-DVD.iso
The /dev/dvd device may not link to your DVD drive; on one machine it was called /dev/dvd1. Do a ls /dev/dvd* to look for alternate names.
Note: In one case we had to use the option "Install with basic video driver", because after doing the normal install the screen went blank and it was not usable.
Make sure you have root permission.
Open the file in an editor. ex. vi /etc/sudoers
Look for the line:
root ALL=(ALL) ALL
Add this line below the root version:
username ALL=(ALL) ALL
Logout and log back in with your username.
Your Fedora installation is complete.
Download additional Software (Fedora Specific) >
The Instruments tool is for use with Leginon, Appion's sister image capture software.
If you are using Leginon, you may find more information about Instruments in the Leginon user manual.
There are several types of Trackers available:
All fall under the general category of Issue. For each Tracker type, there is the ability to set a status. The status types available vary for each Tracker type.
The normal status work flow for a Bug or Feature request is New -> Assigned -> In Code Review -> In Test -> Closed.
New - The issue has been created, and there is not a particular person assigned to address the issue. In this case, the Issue Administrator (currently Amber) will review the new issue and assign it to the appropriate person.
Assigned - The issue has been assigned to someone. That person is responsible for addressing the issue. If it is a Bug, this person will fix it. If it is a feature, this person will implement it. This person will also indicate in the issue how their changes should be tested.
In Code Review - The person responsible for fixing or implementing the feature has completed the job and has checked the code into subversion. It is now ready for a code review. The person the issue was assigned to selects another person to perform a code review. The Assigned To field of the issue is changed to the person who will perform the code review.
In Test - The code has been reviewed and any potential problems have been addressed. Someone other than the person who implemented the code change is assigned to test the change. The person who implemented the change should indicate which Test Cases can be used to test the code changes.
Closed -> All testing of the code change is complete and successful.
There are several cases where the normal work flow will not apply:
Duplicate - indicates that there is already an issue addressing the same topic. In this case, be sure to make a reference to the existing issue.
Won't Fix / Won't Do - indicates that the bug or feature request will not be acted on. Please provide a detailed explanation for making this choice.
Good guide from Drupal, we could incorporate
This wiki page organizes my (Anchi's) findings on which modules do what, how options are passed and set, and what is logged into the database and when, in the two different ways of running Appion currently.
sys.argv[0]
This option allows you to exclude particles that are referred to as Euler jumpers. An Euler jumper is a particle whose assigned orientation changes from iteration to iteration. Using this method you can remove from your dataset the particles that do not converge to a single Euler angle assignment.
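As a sketch of the idea (not Appion's actual implementation; the function names and default threshold here are hypothetical, and orientation is simplified to a single angle rather than a full Euler triplet):

```python
import math

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def euler_jumpers(assignments, max_jump=20.0):
    """assignments: {particle_id: [angle at iter 1, angle at iter 2, ...]} in degrees.
    Return the set of particle ids whose orientation jumps more than max_jump
    degrees between any two successive iterations."""
    jumpers = set()
    for pid, angles in assignments.items():
        for prev, cur in zip(angles, angles[1:]):
            if angular_distance(prev, cur) > max_jump:
                jumpers.add(pid)
                break
    return jumpers
```

Cleaning the stack then amounts to keeping only the particles whose ids are not in the returned set.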
<More Stack Tools | Particle Alignment >
We list our experience and current progress here.
If you have a new computer(s) for your Leginon/Appion installation, we recommend installing CentOS because it is considered to be more stable than other varieties of Linux.
CentOS is the same as Red Hat Enterprise Linux (RHEL), except that it is free and supported by the community.
We have most experience in the installation of the supporting packages on CentOS and this installation guide has specific instruction for the process.
Start at Instructions for installing CentOS on your computer.
Start at Instructions for installing Fedora on your computer
After logging into the system, from any page you may press the [Logout] button at the top left corner of the screen.
This will end your session with the Appion and Leginon Tools.
< Modify Your Profile | User Management ^
The Leginon Observer Interface is a tool to view images being collected from a Microscope in real time. This is used for Leginon installations only and may be ignored in Appion only installations.
< Image Viewers | Tomography Tool >
This option allows you to create a stack of the picked particles: Stack Creation.
<More Stack Tools | Particle Alignment >
Manual masking is used to mask out regions of crud on micrographs so that particles picked in the masked-out regions will not be used in subsequent processing (e.g. stack creation).
Output:
The manual particle picker allows the user to select targets by eye. This can be extremely time consuming. However, if no starting model is available or the desired particles are present at very low concentration, it is sometimes worthwhile to spend some time selecting the particles manually. After several hundred particles have been collected, a preliminary initial model or 2D averages can be generated and used as templates. Use this option if you want to select particles manually from scratch or edit particle picks made by Dog Picking or Template Picking.
To run the user interface for manual picking, you will be asked to copy and paste the command into a terminal. If you are connecting to a processing server, you may need to ssh with a -X flag to enable display.
< Particle Selection | CTF Estimation >
You can check the Files tab for updated minor release versions of your installed release. These will include any critical bug fixes that have been addressed since the original release.
You may update by either downloading a released tar file or doing an svn update if your original installation was via svn checkout. To do the svn update, simply change directories to your myami installation and run
svn update.
cd /your_download_area/myami
sudo ./pysetup.sh install
That will install each package, and report any failures. To determine the cause of failure, see the generated log file "pysetup.log". If necessary, you can enter a specific package directory and run the python setup command manually. For example, if sinedon failed to install, you can try again like this:
cd /your_download_area/myami/sinedon
sudo python setup.py install
Important: You need to install the current version of the Appion packages to the same location that you installed the previous version of the Appion packages. You may have used the flag shown below (--install-scripts=/usr/local/bin) in your original installation. If you did, you need to use it this time as well. You can check whether you installed your packages there by browsing to /usr/local/bin and looking for ApDogPicker.py. If the file is there, you should use the flag. If the file is not there, remove the flag from the command to install Appion to the default location.
The pysetup.py script above did not install the appion package. Since the appion package includes many executable scripts, it is important that you know where they are being installed. To prevent cluttering up the /usr/bin directory, you can specify an alternative path, typically /usr/local/bin, or a directory of your choice that you will later add to your PATH environment variable. Install appion like this:
cd /your_download_area/myami/appion
sudo python setup.py install --install-scripts=/usr/local/bin
Copy the entire myamiweb folder found at myami/myamiweb to your web directory (ex. /var/www/html). You may want to save a copy of your old myamiweb directory first.
cp -rf myamiweb /var/www/html
Running the following script will indicate if you need to run any database update scripts.
cd /your_download_area/myami/dbschema
python schema_update.py
This will print out a list of commands to paste into a shell which will run database update scripts.
You can re-run schema_update.py at any time to update the list of which scripts still need to be run.
< Retrieve Forgotten Password | Logout >
<Stacks | Particle Alignment >
sudo mkdir -p /ami/amishare
sudo mount -o resvport -t nfs colossus.scripps.edu:/export/amishare /ami/amishare
The user is able to assess multiple images at one time on the web by using the "Multi Img Assessment" tool.
WARNING: This is a preliminary document. Use at your own risk!
This is mainly a log of various experiences installing myami on Ubuntu, which will gradually evolve into a more formal document. It attempts to demonstrate the installation of all of myami, including both Leginon and Appion, on a single Ubuntu host. This also includes running MySQL, Apache, Torque, etc. on this local host without needing to connect to any other host.
We use Ubuntu 12.04 LTS (Precise) Desktop 64 bit. Install a basic system from an image on CD or bootable USB drive. Use the default selections during the install process: no 3rd party repositories, no network configuration, and no updates during the installation. This makes it easier in the remainder of this document to know that we have started from a known base system. Reboot.
Following the initial reboot:
sudo apt-get update
sudo apt-get upgrade
You now have a basic up-to-date Ubuntu system
There are many additional packages to install, but we can try to condense that list to the smallest set necessary to pass to the update command which will then figure out all the other dependencies.
Here is a single command to install all the necessary packages:
sudo apt-get install subversion vim python-mysqldb python-wxgtk2.8 python-fs mysql-server php5 php5-mysql libssh2-php php5-gd
During installation, you will be prompted several times to create a mysql root password. This can optionally be left blank at the expense of less security. Note: installing the vim text editor is my own preference... use an inferior text editor if you wish :)
The Scipy package is also required, but the current version 0.9.0 that comes with Ubuntu 12.04 is broken. You need to grab the more recent version of Scipy from Ubuntu Quantal (development version). To make a clean installation that will not confuse the package manager we have prepared a mini-repository that includes this new scipy. You can download and install scipy using the following set of commands copied into your terminal.
sudo mkdir /usr/local/share/myami
That should get your password in the sudo cache so you can copy the rest of these without entering a password:
cd /usr/local/share/myami
sudo wget http://ami.scripps.edu/redmine/attachments/download/1488/ubuntu-scipy-0.10.1.tar.bz
sudo tar jxf ubuntu-scipy-0.10.1.tar.bz
sudo sh -c 'echo "deb file:///usr/local/share/myami/ubuntu-scipy-0.10.1 ./" > /etc/apt/sources.list.d/myami.list'
sudo apt-get update
sudo apt-get install python-scipy
Make sure that last line executes (you may have to hit enter). Also, you may have to confirm installing it without authentication.
#bind-address = 127.0.0.1
bind-address = your.fixed.ip.address
[mysqld]
default-storage-engine=myisam
query_cache_size = 100M
query_cache_limit = 100M
sudo service mysql restart
sudo mysqladmin create leginondb
sudo mysqladmin create projectdb
mysql -u root mysql
create user usr_object@'localhost';
grant all privileges on leginondb.* to usr_object@'localhost';
grant all privileges on projectdb.* to usr_object@'localhost';
grant alter, create, insert, select, update on `ap%`.* to usr_object@localhost;
Create /etc/php5/conf.d/myami.ini with the following contents:
error_reporting = E_ALL & ~E_NOTICE & ~E_WARNING
display_errors = On
register_argc_argv = On
short_open_tag = On
max_execution_time = 300
max_input_time = 300
memory_limit = 256M
Restart apache:
sudo service apache2 restart
cd
svn co http://ami.scripps.edu/svn/myami/trunk myami-trunk
cd myami-trunk
sudo ./pysetup.sh install
ADD APPION PYTHON PACKAGE INSTALL HERE
sudo mkdir /data
sudo mkdir /data/leginon
sudo chmod a+w /data/leginon
cp leginon/leginon.cfg.template ~/leginon.cfg
cp sinedon/examples/sinedon.cfg ~/sinedon.cfg
cp pyscope/instruments.cfg.template ~/instruments.cfg
Then edit and configure all three of the above files.
sudo cp -r myamiweb /var/www
sudo chmod 777 /var/www/myamiweb
sudo mkdir /var/cache/myami
sudo mkdir /var/cache/myami/redux
sudo chown -R www-data.www-data /var/cache/myami
sudo vim /usr/local/lib/python2.7/dist-packages/redux/pipeline.py
(set cache path and size)
start-leginon.py
sudo apt-get install nfs-common
sudo apt-get install nis
This is the query that I wanted to use because it is easy to understand (= easier to maintain).
Unfortunately there is a bug in MySQL (http://bugs.mysql.com/bug.php?id=10312) which makes this
so slow that no one has seen it complete. The subsequent query is a bit more difficult to follow
but gets around this problem.
SELECT DB_PROJECT.projectexperiments.projectId
FROM DB_PROJECT.projectexperiments
WHERE DB_PROJECT.projectexperiments.name IN (
    SELECT DB_LEGINON.SessionData.name
    FROM DB_LEGINON.SessionData
    WHERE DB_LEGINON.SessionData.`DEF_id` IN (
        SELECT DB_PROJECT.shareexperiments.`REF|leginondata|SessionData|experiment`
        FROM DB_PROJECT.shareexperiments
        WHERE DB_PROJECT.shareexperiments.`REF|leginondata|UserData|user` = ".$userId."
    )
);
SELECT DB_PROJECT.projectexperiments.projectId
FROM DB_PROJECT.projectexperiments
INNER JOIN (
    SELECT DB_LEGINON.SessionData.name AS SessionName
    FROM DB_LEGINON.SessionData
    INNER JOIN (
        SELECT DB_PROJECT.shareexperiments.`REF|leginondata|SessionData|experiment` AS SessionId
        FROM DB_PROJECT.shareexperiments
        WHERE DB_PROJECT.shareexperiments.`REF|leginondata|UserData|user` = ".$userId."
    ) AS SessionIds
    ON DB_LEGINON.SessionData.`DEF_id` = SessionIds.SessionId
) AS SessionNames
ON DB_PROJECT.projectexperiments.name = SessionNames.SessionName;
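The rewrite replaces each `IN (subquery)` with an `INNER JOIN` on the same key, which returns the same projects. A minimal stand-in demonstration using Python's sqlite3 module (the tables here are simplified single-column stand-ins, not the real Leginon/Project schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# Simplified stand-ins for projectexperiments, SessionData, and shareexperiments.
c.executescript("""
CREATE TABLE projectexperiments (projectId INTEGER, name TEXT);
CREATE TABLE sessiondata (id INTEGER, name TEXT);
CREATE TABLE shareexperiments (experiment INTEGER, user INTEGER);
INSERT INTO projectexperiments VALUES (1, 's1'), (2, 's2');
INSERT INTO sessiondata VALUES (10, 's1'), (20, 's2');
INSERT INTO shareexperiments VALUES (10, 99), (20, 7);
""")

# Nested-subquery form (the slow one under the MySQL bug).
subquery = """
SELECT projectId FROM projectexperiments WHERE name IN (
    SELECT name FROM sessiondata WHERE id IN (
        SELECT experiment FROM shareexperiments WHERE user = ?))"""

# Equivalent join form (the workaround).
join = """
SELECT p.projectId FROM projectexperiments p
INNER JOIN sessiondata s ON p.name = s.name
INNER JOIN shareexperiments sh ON s.id = sh.experiment
WHERE sh.user = ?"""

# Both forms return the same projects for a given user.
assert c.execute(subquery, (99,)).fetchall() == c.execute(join, (99,)).fetchall()
```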
See another example here.
/etc/init.d/mysqld restart
which mysql
To get access to our databases, you need to ask Christopher (or someone else with the correct privileges) to add you as a user.
Our databases reside on Cronus4 (Appion and Leginon things), ami (websites and tools like Redmine), and Fly (a copy of Cronus4 used for testing).
Once added, you will need to change your password from the default to something all your own on each server that you are added to. Here's how:
At a terminal type:
mysql -h cronus4 -u [yourUserName] -p
You are prompted to type your default password. Then, change your password with:
set password = password("[new password]");
show databases;
use [name of db]
show tables;
describe [name of table];
mysql> show processlist;
mysql> kill [process Id number];
Mysql Nested Subqueries Problem
An email confirmation will be sent to the user.
Myamiweb Registration Screen
NOTE:
If your Appion system is installed on multiple servers, each Appion user's Web Server and Processing Server user names and passwords should be identical.
< Enable User Authentication | Retrieve Forgotten Password >
Moving forward, refinements will all be split into 2 steps, prep and run.
The web server then calls prepRefine.py located on the local cluster to prepare the refinement.
Each processing host (e.g., Garibaldi, Guppy, Trestles) will define a class extended from a base ProcessingHost class.
The extended classes know what headers need to be placed at the top of job files and they know how to execute a command based on the specific cluster's requirements.
The base ProcessingHost class could be defined as follows:
abstract class ProcessingHost():
    def generateHeader(jobObject)
        # abstract, extended classes should define this, returns a string
    def executeCommand(command)
        # abstract, extending classes define this
    def createJobFile(header, commandList)
        # defined in base class, commandList is a 2D array,
        # each row is a line in the job file
    def launchJob(jobObject)
        # defined in base class, jobObject is an instance of the job class
        # specific to the jobtype we are running
        header = generateHeader(jobObject)
        jobFile = createJobFile(header, jobObject.getCommandList())
        executeCommand(jobFile)
Each type of appion job (eg Emanrefine, xmipprefine) will define a class that is extended from a base Job class.
The extending classes know parameters that are specific to the job type and how to format the parameters for the job file.
The base Job class could be defined as follows:
class Job():
    self.commandList
    self.name
    self.rundir
    self.ppn
    self.nodes
    self.walltime
    self.cputime
    self.memory
    self.mempernode
    def __init__(command)
        # constructor takes the command (runJob.py --runname --rundir ....)
        self.commandList = self.createCommandList(paramDictionary)
    def createCommandList(command)
        # defined by sub classes, returns a commandList which is a 2D array
        # where each row corresponds to a line in a job file
There will be an Agent class that is responsible for creating an instance of the appropriate job class and launching the job.
It will be implemented as a base class, where sub classes may override the createJobInst() function. For now, there will be only one sub class defined
called RunJob. The same runJob.py will be installed on all clusters. This implementation will allow flexibility for the future.
The base Agent class may be defined as follows:
class Agent():
    def main(command):
        jobType = self.getJobType(command)
        job = self.createJobInst(jobType, command)
        processHost = new ProcessingHost()
        jobId = processHost.launchJob(job)
        self.updateJobStatus()
    def getJobType(command)
        # parses the command to find and return the jobtype
    def createJobInst(jobType, command)
        # sub classes must override this to create the appropriate job class instance
    def updateJobStatus()
        # not sure about how this will be defined yet
Sub classes of Agent will define the createJobInst() function.
We could create a single subclass that creates a job class for every possible appion job type.
(We could make a rule that job sub classes are named after the jobtype with the word Job appended; then this function would never need to be modified.)
A sample implementation is:
class RunJob(Agent):
    def createJobInst(jobType, command)
        switch (jobType):
            case "emanrefine":
                job = new EmanJob(command)
                break
            case "xmipprefine":
                job = new XmippJob(command)
                break
        return job
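Pulling the sketches above together, a minimal runnable Python version of the design might look like the following. This is a sketch only: the class and method names follow the pseudocode, but TorqueHost and the header format are hypothetical, and command execution is stubbed to return the job file rather than submit it:

```python
class ProcessingHost(object):
    """Base class; real hosts override generateHeader and executeCommand."""
    def generateHeader(self, job):
        raise NotImplementedError
    def executeCommand(self, job_file):
        raise NotImplementedError
    def createJobFile(self, header, command_list):
        # command_list is a 2D array; each row becomes one line of the job file.
        return header + "\n".join(" ".join(row) for row in command_list)
    def launchJob(self, job):
        header = self.generateHeader(job)
        job_file = self.createJobFile(header, job.commandList)
        return self.executeCommand(job_file)

class TorqueHost(ProcessingHost):
    """Hypothetical PBS/Torque host, for illustration only."""
    def generateHeader(self, job):
        return "#PBS -l nodes=%d:ppn=%d\n" % (job.nodes, job.ppn)
    def executeCommand(self, job_file):
        return job_file  # a real host would submit this with qsub

class Job(object):
    def __init__(self, command, nodes=1, ppn=1):
        self.nodes, self.ppn = nodes, ppn
        self.commandList = self.createCommandList(command)
    def createCommandList(self, command):
        # one job-file line holding the runJob.py invocation
        return [command.split()]

class EmanJob(Job):
    pass  # would add EMAN-specific parameter formatting

class Agent(object):
    jobtypes = {"emanrefine": EmanJob}
    def main(self, jobtype, command):
        job = self.createJobInst(jobtype, command)
        return TorqueHost().launchJob(job)
    def createJobInst(self, jobtype, command):
        return self.jobtypes[jobtype](command)
```

A dictionary lookup replaces the switch statement from the pseudocode, which is the idiomatic Python equivalent and also makes the jobtype-to-class rule explicit.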
This page provides details of the major features of object oriented programming and definitions of terminology.
interface IAnimal {
    function getName();
    function talk();
}

abstract class AnimalBase implements IAnimal {
    protected $name;
    public function __construct($name) {
        $this->name = $name;
    }
    public function getName() {
        return $this->name;
    }
}

class Cat extends AnimalBase {
    public function talk() {
        return 'Meowww!';
    }
}

class Dog extends AnimalBase {
    public function talk() {
        return 'Woof! Woof!';
    }
}

$animals = array(
    new Cat('Missy'),
    new Cat('Mr. Mistoffelees'),
    new Dog('Lassie')
);

foreach ($animals as $animal) {
    echo $animal->getName() . ': ' . $animal->talk();
}
class Animal:
    def __init__(self, name):  # Constructor of the class
        self.name = name
    def talk(self):  # Abstract method, defined by convention only
        raise NotImplementedError("Subclass must implement abstract method")

class Cat(Animal):
    def talk(self):
        return 'Meow!'

class Dog(Animal):
    def talk(self):
        return 'Woof! Woof!'

animals = [Cat('Missy'), Cat('Mr. Mistoffelees'), Dog('Lassie')]
for animal in animals:
    print animal.name + ': ' + animal.talk()

# prints the following:
#
# Missy: Meow!
# Mr. Mistoffelees: Meow!
# Lassie: Woof! Woof!
The orthogonal tilt reconstruction method is an approach to generating single-class volumes with no missing cone for ab initio reconstruction of asymmetric particles (Leschziner & Nogales, 2005). The method involves collecting data at +45° and −45° tilts and only requires that particles adopt a relatively large number of orientations on the grid. One tilted data set is used for alignment and classification and the other set—which provides views orthogonal to those in the first—is used for reconstruction, resulting in the absence of a missing cone.
There are two general methods to run OTR Volume, just like how one would run RCT Volume
< Ab Initio Reconstruction | Refine Reconstruction >
Regardless of the method eventually utilized for 3D reconstruction, a good starting point for single particle EM investigations is 2D alignment and classification of the dataset. This type of analysis intimately acquaints the scientist with the types of particles, distribution of views, and the relative amount of "junk" contained in the dataset.
< Stacks | Ab Initio Reconstruction >
The first step in single particle analysis is to pick the particles within the micrographs. There are basically three main ways to do this, all of which are integrated within Appion. Based on the shape of the particle, prior knowledge, and the amount of data collected, the user has to decide which approach is best, or use different approaches simultaneously and see which works best.
< Processing Cluster Login | CTF Estimation >
The user is able to retrieve pdb models from the Protein Data Bank and generate a 3D density volume from the atomic model.
To launch:
Output:
< Import Tools | EMDB to Model >
In addition to the downloads from our svn repository, there are several other requirements that you will get either from your OS installation source, or from its respective website. The system check in the Leginon package checks your system to see if you already have these requirements.
cd myami/leginon/ python syscheck.py
If python is not installed, this, of course will not run. If you see any lines like "*** Failed...", then you have something missing. Otherwise, everything should result in "OK".
< Download Appion/Leginon Files | Install Appion/Leginon Packages >
This one is a bit old, but it has lots of good material that goes beyond style. Some things are questionable: I prefer getters/setters over Attributes as Objects (at least as the example shows it) to allow for better error handling, and I prefer no underscores in naming except for constants in all caps, but that is only a style issue.
From the Zend framework folks:
http://framework.zend.com/manual/en/coding-standard.html
An intro:
http://godbit.com/article/introduction-to-php-coding-standards
Nice Presentation:
http://weierophinney.net/matthew/uploads/php_development_best_practices.pdf
PHP Unit testing
http://www.phpunit.de/pocket_guide/
For automatically checking code against the Pear standards use CodeSniffer:
http://pear.php.net/package/PHP_CodeSniffer/
Best Practices:
http://www.odi.ch/prog/design/php/guide.php
< Testing job submission | Setup Remote Processing ^
A web server troubleshooting tool is available at http://YOUR_HOST/myamiweb/test/checkwebserver.php.
You can browse to this page from the Appion and Leginon Tools home page (http://YOUR_HOST/myamiweb) by clicking on [test Dataset] and then [Troubleshoot].
This page will automatically confirm that your configuration file and PHP installation and settings are correct and point you to the appropriate documentation to correct any issues.
You may need to configure your firewall to allow incoming HTTP (port 80) and MySQL (port 3306) traffic:
$ system-config-securitylevel
Security-enhanced linux may be preventing your files from loading. To fix this run the following command:
$ sudo /usr/bin/chcon -R -t httpd_sys_content_t /var/www/html/
see this website for more details on SELinux
If you want to run processing jobs directly from the Appion Data Processing interface, you must log into the processing server with the steps below. You may also choose not to log in. In this case you can copy and paste processing commands directly into an SSH session.
< Common Workflow | Particle Selection >
Appion and Leginon shared steps:
Continue with the following steps unique to Appion:
< File Server Setup Considerations | Web Server Installation >
The Project DB tool allows users to:
< User Management | Image Viewers >
This document is a list of python coding standards. To add a new standard copy the template below and modify it.
see also http://ami.scripps.edu/wiki/index.php/Appionscripts_formatting_rules
What is the coding standard
Why is the coding standard important
GOOD:
this is a good example code
BAD:
this is a bad example code
Use tabs instead of spaces for inline code
It is important to be consistent. People like different sizes of columns, some like 8 spaces, others 4, 3, or 2. With tabs each individual can customize their viewer.
GOOD:
if True:
<tab>while True:
<tab><tab>print "tab"
<tab>break
BAD:
if True:
    while True:
        print "tab"
    break
Use ''.startswith() and ''.endswith() instead of string slicing to check for prefixes or suffixes.
startswith() and endswith() are cleaner and less error prone.
GOOD:
if foo.startswith('bar'):
BAD:
if foo[:3] == 'bar':
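Worth knowing when applying this standard: both startswith() and endswith() also accept a tuple of alternatives, which replaces a chain of slice comparisons with a single call (the filename below is just an example):

```python
# Accept several image extensions without any string slicing.
filename = "image_00042.mrc"
is_image = filename.endswith((".mrc", ".tif", ".dm3"))
```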
Never use from module import *, use import module instead
It is hard to track where functions come from when import * is used.
GOOD:
import numpy
a = numpy.ones((3,3))
BAD:
from numpy import *
a = ones((3,3))
If you are consistent with your names people can read your code.
GOOD:
for imgdict in imgtree:
    imgarray = imgdict['image']
    imgname = imgdict['filename']
BAD:
for image in imgs:
    array = image['image']
    name = image['filename']
Use descriptive variables; asdf is not a variable.
No one understands shorthand variables.
GOOD:
imgarray = mrc.read('leginon_image.mrc')
particle1 = imgarray[47, 21]
particle2 = imgarray[10, 15]
stack = [particle1, particle2]
BAD:
i = mrc.read('x.mrc')
prtl1 = i[47, 21]
prtl2 = i[10, 15]
s = [prtl1, prtl2]
Functions that have a global use should go in the appionlib folder.
Functions that will only be used by a single program go into that program's file.
Upload-to-the-database functions are typically only used by a single program and should be within that program, not in appionlib.
Keep the code clean and organized.
GOOD:
from appionlib import commonFunctions

class AppionScript():
    def customUploadToDB(self):
        """stuff"""
    def run(self):
        commonFunctions.commonFunction()
        self.customUploadToDB()
BAD:
from appionlib import apUploadCustom

class AppionScript():
    def commonFunction(self):
        """stuff"""
    def run(self):
        self.commonFunction()
        apUploadCustom.customUploadToDB()
import sys
import traceback

exc_type, exc_value, tb = sys.exc_info()
traceback.print_tb(tb)
Purpose: Describe tools used for routine quality assessment in Leginon and Appion.
A. RCT reconstructions Summary Page lists all RCT reconstructions completed for the particular dataset
B. Clicking on a particular reconstruction opens a summary page displaying all relevant information for processing steps including and leading up to the reconstruction.
C. A link is provided to a plot summarizing the quality of the 2D alignment preceding the RCT reconstruction.
D. A link to the 2D alignment output opens a page that allows further processing, including selecting a particular stack (green), and viewing its corresponding raw particles (purple). Alternatively, another RCT reconstruction can be calculated, raw particles viewed, or an alternate method utilized to calculate another 3D reconstruction.
E. The raw particles for any stack can be viewed in the web browser with further processing options such as creating a substack by clicking on particular images to include or exclude (red) and then selecting "create substack" (blue).
< Step by Step Guide to Appion | Processing Cluster Login >
Purpose: Basic workflow for processing an RCT data set.
< Step by Step Guide to Appion | Processing Cluster Login >
If you ran a random conical tilt session, you have to pick the particles on the tilt pairs and correlate them (align the particle pairs). To find the particles you can use Template Picking, Manual Picking, or Dog Picking.
< Particle Selection | CTF Estimation >
The RCT viewer is used for viewing random conical tilt (RCT) or orthogonal tilt reconstruction (OTR) pairs of images.
This method relies on physical tilting of the specimen in the microscope to obtain 2D projection views for samples with preferred orientation. Images are taken at 0 and 45-60 degrees. Alignment and classification of the 0-degree data determines the orientation parameters to be applied to the tilted data. This method was originally described by Radermacher, M. et al., Journal of Microscopy 141, RP1-2 (1986).
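The geometry can be sketched numerically. In the toy code below (illustrative only; none of these names are Appion functions), the in-plane rotation found for an untilted particle is composed with the known stage tilt, with the tilt axis taken along y, to give the orientation of its tilted mate:

```python
import numpy as np

def rot_z(phi):
    """Rotation matrix about the z axis by phi radians."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Rotation matrix about the y axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rct_orientation(inplane_deg, tilt_deg):
    """Orientation assigned to a tilted particle: the in-plane rotation
    found for its untilted mate, followed by the known stage tilt."""
    return rot_y(np.radians(tilt_deg)) @ rot_z(np.radians(inplane_deg))
```

With zero tilt this reduces to the in-plane rotation alone, which is a quick sanity check of the composition order.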
Note: RCT Volume can be accessed directly from the Appion sidebar, or by clicking on the "Create RCT Volume" button displayed above class averages generated through 2D Alignment and Classification
< Ab Initio Reconstruction | Refine Reconstruction >
mysql -h localhost -u root

show databases;
drop database projectdb;
drop database leginondb;
create database projectdb;
create database leginondb;
chmod 777 /var/www/html/myamiweb/config.php
cd /myamiImages/leginon
rm -rf sample
http://localhost/myamiweb/setup/autoInstallSetup.php?password=your_root_password
Name | a.k.a. | Description | Used By | Make common to all? | Notes |
---|---|---|---|---|---|
General Refinement Parameters | |||||
Outer Mask Radius | Particle outer mask radius (RO), Outer Radius, mask | radius from center of particle to outer edge (frealign in angstroms, spider in pixels), radius of external map in pixels | frealign, spider, xmipp, eman, imagic | y | |
Inner Mask Radius | Inner Radius,Particle inner mask radius | inner radius for alignment in pixels | xmipp, frealign, spider | y | not xmipp |
Outer Alignment Radius | xmipp, eman, imagic | y | not eman, frealign | ||
Inner Alignment Radius | xmipp, eman, imagic | y | not eman, frealign | ||
Symmetry Group | sym | ex. c1, c2... | xmipp,eman,frealign,spider,imagic | y | |
Number of iterations | iteration number | xmipp | y | ||
Angular Sampling Rate | ang | angular step for projections in degrees | xmipp,EMAN | y | not frealign |
Percentage of worst images to discard | xmipp | y | not frealign | ||
Filter reconstructed volume to estimated resolution | flt3d | y | not frealign | ||
Filter reconstructed volume to resolution computed by fsc | Low Pass Filter the reference?, Constant to add to the estimated resolution | y | not frealign, eman | ||
stack preparation parameters | |||||
Last particle to use | frealign | y | specific to stack preparation, not refinement alg | ||
lp | filtering | low pass filter in angstroms | spider, imagic | y | specific to stack preparation, not refinement alg |
hp | filtering | high pass filter in angstroms | spider, imagic | y | specific to stack preparation, not refinement alg |
Algorithm Specific Refinement Parameters | |||||
imask | radius of internal mask (in pixels for eman and spider, Angstroms for frealign) | EMAN | |||
amask | amask=[r],[threshold],[iter] | eman | |||
Mask Filename | xmipp | ||||
Max angular change | xmipp | ||||
max change offset | xmipp | ||||
Search range for 5d transitional search | xmipp | ||||
Reconstruction Method | xmipp | ||||
Values of lambda for ART | xmipp | ||||
Initial max frequency used by reconstruct fourier | xmipp | ||||
Compute resolution? | xmipp | don't need this, should always be set to yes | |||
maxshift | max translation during image alignment in pixels | eman | |||
hard | hard limit for make3d program | eman | |||
clskeep | =[std dev multiplier] how many raw particles discarded for each class average | eman |
clsiter | iterative alignment to each other | eman |
xfiles | =[mass in kDa] | eman |
shrink | scale down by a factor of [n] before classification | eman | |||
euler2 | =[oversample factor] | eman | |||
median | use median value instead of average for each pixel | eman | |||
phscls | use signal to noise ratio weighted phase residual | eman | |||
refine | do subpixel alignment | eman | |||
tree | decimate reference population | eman |
coran | use coran algorithm | eman | |||
eotest | use even odd test | eman | remove this, should always be yes, takes place of Compute Resolution? | ||
amplitude contrast (WGH) | frealign | ||||
Standard deviation filtering | frealign |
Phase B-factor weighting constant (PBC) | frealign | ||||
B-factor offset (BOFF) | frealign | ||||
Number of randomized search trials (ITMAX) | frealign | ||||
Number of potential matches to refine (IPMAX) | frealign | ||||
Target phase residual (TARGET) | frealign | ||||
Worst phase residual for inclusion (THRESH) | frealign | ||||
Resolution limit of reconstruction (RREC) (in Ångstroms; default Nyquist) | frealign | ||||
Lower resolution limit or high-pass filter (RMAX1) (in Ångstroms) | frealign | ? is this the same as stack prep lp/hp filter | |||
Higher resolution limit or low-pass filter (RMAX2) (in Ångstroms; default 2*Nyquist) | frealign | ? is this the same as stack prep lp/hp filter | |||
B-factor correction (RBFACT) (0 = off) | frealign | ||||
Only use CTFFIND values | frealign | ||||
MSA - num class averages to produce from raw images | imagic |
MSA - num factors to use for classification | imagic | ||||
MSA - percentage of worst class members to ignore after classification | imagic | ||||
threed reconstruction | object size, low-pass filter 3d volume | imagic | |||
Center stack prior to alignment | imagic | ||||
Mirror references for alignment | imagic | ||||
MRA - min radius for rotational alignment in pixels | imagic | ||||
MRA - max radius for rotational alignment in pixels | imagic | ||||
MSA - fraction of final class averages to keep | imagic | ||||
MRA - angular increment of forward projections | imagic | ||||
MRA - max radial shift compared to original images | imagic | ||||
MRA - max radial shift during this iteration | imagic | ||||
MSA - percentage of images to ignore when calculating eigenimages | imagic |
Angular reconstitution | angular increment of forward projections, ang inc of euler search, fraction of best ordered images to keep | imagic | |||
firstring | similar to alignment radius | Any pixels this far away from center will not be used | spider | ||
lastring | similar to alignment radius | Only pixels this far away will be used | spider | ||
xysearch | translational search during projection matching will be limited to this many pixels from the center of the image | spider |
angular increments | Angular Sampling Rate | list of angular increments for projection mapping | spider | ||
keep | determines which particles are kept for back-projection, -1 is one standard deviation worse than mean | spider | |||
xyshift | particles only allowed to shift this far from center | spider | |||
Approximate mass in kDa | imagic
package | web launch | multi-node | garibaldi | follow progress | DB upload | ||||||
---|---|---|---|---|---|---|---|---|---|---|---|
EMAN | Yes | Yes | Yes | Yes | Yes | ||||||
FREALIGN | Yes | Yes | Yes | Yes | Almost | ||||||
Xmipp | Broken | No | No | No | Yes | ||||||
SPIDER | Broken% | No | No | No | No | ||||||
IMAGIC | Broken* | ?? | No | ?? | Almost# |
* data00 is hard-coded in, did not launch
# does not use refinement tables
% single iteration only
Other packages: EMAN2, SPARX, ...
Multi-model refinement: EMAN (Pick-wei)
For each refinement, the Reconstruction summary page automatically displays, for each iteration, information such as the FSC curve, Euler angle distributions, good and bad classes, and 3D snapshots of the model, and allows for the following post-processing procedures.
Remove Jumpers will remove particles with ambiguous orientation.
< Refine Reconstruction | Quality Assessment >
Most initial models establish nothing more than a preliminary sense of the overall shape of the biological specimen. In order to reveal structural information that can answer specific biological questions, the model requires refinement. In single particle analysis, a refinement is an iterative procedure that sequentially aligns the raw particles, assigns appropriate spatial orientations (Euler angles) to them by comparing them against a model, and then back-projects them into 3D space to form a new model. Effectively, a full refinement takes as input a raw particle stack and an initial model, and is usually carried out until no further improvement of the structure can be observed, often measured by convergence to some resolution criterion.
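The iterative scheme can be sketched in miniature with a 1D toy model (illustrative only: real refinements also search rotations and use weighted back-projection, and none of these names are Appion calls). Each iteration aligns the raw particles to the current model, then averages the aligned particles into the next model:

```python
import numpy as np

def best_shift(particle, model):
    """Exhaustive 1D translational search: the shift that maximizes
    the correlation between the particle and the current model."""
    n = len(model)
    scores = [np.dot(np.roll(particle, -s), model) for s in range(n)]
    return int(np.argmax(scores))

def refine(particles, model, n_iter=5):
    """Toy 1D analogue of iterative refinement: align the raw
    particles to the current model, then average the aligned
    particles to form the next model."""
    for _ in range(n_iter):
        aligned = [np.roll(p, -best_shift(p, model)) for p in particles]
        model = np.mean(aligned, axis=0)
    return model
```

Starting from a single noisy particle as the "initial model", the averaged model sharpens over iterations, which is the essence of the convergence behavior described above.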
When dealing with icosahedral particles such as viral capsids, particular attention should be given to properly orienting the model (the icosahedral axes) to match the specific conventions adopted by different software packages.
To convert a model from the Viper orientation to the Crowther one:

proc3d model_Viper.mrc model_Crowther.mrc rot=0,0,90
To convert a model from the Crowther orientation to the EMAN one, the following proc3d command is required:
proc3d model_Crowther.mrc model_EMAN.mrc icos2fTo5f
To assess visually which orientation a model is in, the simplest way is to use UCSF Chimera. After opening the map, press the Orient button in the Volume viewer to orient the z axis perpendicular to the screen. Then open a file (named axis.bild for example) containing the following lines:
.color red
.cylinder 500.0 0.0 0.0 -500.0 0.0 0.0 1
.color green
.cylinder 0.0 500.0 0.0 0.0 -500.0 0.0 1
.color blue
.cylinder 0.0 0.0 500.0 0.0 0.0 -500.0 1
Below are the conventions used by the different software packages available within Appion to generate and refine reconstructions (most allow the use of different conventions).
< Ab Initio Reconstruction|Quality Assessment >
The following class diagram shows the BasicForm class with its extended classes, as well as the FormParameter class and its extended classes.
It also shows associations among the classes.
Notice that the specific refine parameter classes use polymorphism to override the validate() function. This allows the extended classes to provide more complex validations than a typical form requires.
Other forms, such as RunParameters and stack prep, just use the base FormParameter class that their parent, BasicForm, uses.
The following sequence diagram shows how the Form and Parameter classes work together to display a form, validate the user input, and create a command string.
class XmippParams extends RefineFormParameters {
	function __construct( $id='', $label='', $outerMaskRadius='', $innerMaskRadius='',
			$outerAlignRadius='', $innerAlignRadius='', $symmetry='', $numIters='',
			$angSampRate='', $percentDiscard='', $filterEstimated='', $filterResolution='',
			$filterComputed='', $filterConstant='', $mask='', $maxAngularChange='',
			$maxChangeOffset='', $search5DShift='', $search5DStep='', $reconMethod='',
			$ARTLambda='', $doComputeResolution='', $fourierMaxFrequencyOfInterest='' )
	{
		parent::__construct($id, $label, $outerMaskRadius, $innerMaskRadius,
			$outerAlignRadius, $innerAlignRadius, $symmetry, $numIters, $angSampRate,
			$percentDiscard, $filterEstimated, $filterResolution, $filterComputed,
			$filterConstant );

		$this->addParam( "mask", $mask, "Mask filename" );
		$this->addParam( "maxAngularChange", $maxAngularChange, "Max. Angular change " );
		$this->addParam( "maxChangeOffset", $maxChangeOffset, "Maximum change offset " );
		$this->addParam( "search5DShift", $search5DShift, "Search range for 5D translational search " );
		$this->addParam( "search5DStep", $search5DStep, "Step size for 5D translational search " );
		$this->addParam( "reconMethod", $reconMethod, "Reconstruction method " );
		$this->addParam( "ARTLambda", $ARTLambda, "Values of lambda for ART " );
		$this->addParam( "doComputeResolution", $doComputeResolution, "Compute resolution? " );
		$this->addParam( "fourierMaxFrequencyOfInterest", $fourierMaxFrequencyOfInterest, "Initial maximum frequency used by reconstruct fourier " );

		// disable any general params that do not apply to this method
		$this->hideParam("innerMaskRadius");
	}

	function validate() {
		$msg = parent::validate();

		if ( !empty($this->params["mask"]["value"]) && !empty($this->params["outerMaskRadius"]["value"]) )
			$msg .= "<b>Error:</b> You may not define both the outer mask radius and a mask file.";

		return $msg;
	}
}
// based on the type of refinement the user has selected,
// create the proper form type here. If a new type is added to
// Appion, its form class should be included in this file
// and it should be added to this function. No other modifications
// to this file should be necessary.
function createSelectedRefineForm( $method, $stacks='', $models='' )
{
	switch ( $method ) {
		case emanrecon:
			$selectedRefineForm = new EmanRefineForm( $method, $stacks, $models );
			break;
		case frealignrecon:
			$selectedRefineForm = new FrealignRefineForm( $method, $stacks, $models );
			break;
		case xmipprecon:
			$selectedRefineForm = new XmippRefineForm( $method, $stacks, $models );
			break;
		case xmippml3drecon:
			$selectedRefineForm = new XmippML3DRefineForm( $method, $stacks, $models );
			break;
		default:
			throw new Exception("Error: Not Implemented - There is no RefineForm class available for method: $method");
	}
	return $selectedRefineForm;
}
A region mask can be created on images. Automated masking can also be assessed manually. During stack creation, particle selections that fall within the assessed masks will not be considered for the stack.
< Appion Processing|Manual Masking >
To request a software registration key, you must first register as an Appion/Leginon user.
If you have not already created an account on this website, please do so now.
Here you can find a list of all jobs previously run on this project. If you want to rerun a job from another session with identical settings on your current session (for example, rerun a particle picker), click on the specific job.
Lost Password Screen
< New User Registration | Modify Your Profile >
In order to extract quantitative information from the inherently low-SNR data obtained by EM, 2D averaging must be applied to homogeneous subsets of single particles. This requires the single particles to be brought into alignment with one another, so that the signal of common motifs is amplified. Alignment protocols typically operate by shifting, rotating, and mirroring each particle in the data set in order to find the orientation of particle A that maximizes a similarity function with particle B. Depending upon the existence of templates obtained from a priori information about the specimen, particle alignment algorithms are separated into reference-free and reference-based approaches.
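The core of a reference-based aligner is that maximize-a-similarity-function search. A toy version restricted to coarse 90-degree rotations and mirroring (illustrative only; real aligners search fine rotations and shifts, typically via Fourier cross-correlation) might look like:

```python
import numpy as np

def align_to_reference(particle, reference):
    """Exhaustive search over coarse in-plane rotations (90-degree
    steps) and mirroring, returning the transformed particle that
    maximizes a dot-product similarity with the reference."""
    best, best_score = particle, -np.inf
    for mirror in (False, True):
        img = np.fliplr(particle) if mirror else particle
        for k in range(4):
            candidate = np.rot90(img, k)
            score = float(np.sum(candidate * reference))
            if score > best_score:
                best, best_score = candidate, score
    return best
```

Because the eight mirror/rotation combinations form a closed group, any particle that is a mirrored and rotated copy of the reference is recovered exactly by this search.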
<Particle Alignment | Run Feature Analysis >
Running the following script will indicate if you need to run any database update scripts.
cd /your_download_area/myami/dbschema
python schema_update.py
This will print out a list of commands to paste into a shell which will run database update scripts.
You can re-run schema_update.py at any time to update the list of which scripts still need to be run.
Feature analysis refers to systematic techniques for extracting features from a series of aligned particles, with the intent of clustering images with similar features together. Feature analysis is closely related to multivariate statistics. All of these feature analysis techniques fall into two categories: principal component analysis (PCA) (Spider Coran and IMAGIC MSA) and neural networks (Xmipp KerDen SOM).
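The PCA branch of these techniques can be sketched with plain numpy (a toy illustration, not the Spider Coran or IMAGIC MSA implementation): the stack of aligned particles is centered, the top eigenimages are extracted, and each particle gets coordinates in the reduced factor space.

```python
import numpy as np

def pca_features(stack, n_factors=3):
    """Toy PCA on a stack of aligned particles.
    stack: (n_particles, h, w); returns per-particle coordinates in
    the space of the top eigenimages, plus the eigenimages."""
    n, h, w = stack.shape
    data = stack.reshape(n, -1).astype(float)
    data -= data.mean(axis=0)             # subtract the average image
    # SVD of the centered data: rows of vt are the eigenimages
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    eigenimages = vt[:n_factors]
    coords = data @ eigenimages.T         # reduced factor coordinates
    return coords, eigenimages.reshape(n_factors, h, w)
```

On a stack containing two distinct views, the first factor coordinate alone is typically enough to separate the two groups, which is exactly what the subsequent clustering step exploits.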
<Run Alignment | Run Particle Clustering >
The user is able to globally reject images that do not meet certain criteria.
The user chooses the options to reject "bad" images. This can be based upon different criteria:
After feature analysis, particles are ordered and summed according to their relative similarity (proximity in reduced multidimensional image point space).
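A minimal sketch of that clustering step, using plain k-means on the reduced factor coordinates (illustrative only, not the algorithm any of the packages actually runs):

```python
import numpy as np

def kmeans(coords, k, n_iter=10, seed=0):
    """Group particles by proximity in the reduced factor space
    (class averages would then be formed by summing the particles
    carrying each label)."""
    rng = np.random.default_rng(seed)
    centers = coords[rng.choice(len(coords), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each particle to the nearest class center
        dists = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members (keep it if empty)
        centers = np.array([coords[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
```

Particles whose coordinates sit close together in factor space end up with the same label and are summed into the same class average.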
<Run Feature Analysis | Ab Initio Reconstruction >
We list our experience and current progress here.
If you have new computers for your Leginon/Appion installation, we recommend installing CentOS because it is considered more stable than other varieties of Linux.
CentOS is the same as Red Hat Enterprise Linux (RHEL), except that it is free and supported by the community.
We have most experience in the installation of the supporting packages on CentOS and this installation guide has specific instruction for the process.
Start at Instructions for installing CentOS on your computer.
Start at Instructions for installing Fedora on your computer
Instructions for installing CentOS on your computer >
In this case, we are setting up a job submission server that will have all of the data directories mounted and external packages installed (EMAN, Xmipp, etc.) on the compute nodes. Most institutions have a job submission server already, but the data is not accessible. Appion is not set up for this scenario except for large reconstruction jobs.
PBS stands for Portable Batch System. It is a job submission system, meaning that users submit many jobs and the server prioritizes and executes each job as resources permit. Below we show how to install the popular open-source PBS system called TORQUE.
A TORQUE cluster consists of one head node and many compute nodes. The head node runs the pbs_server daemon and the compute nodes run the pbs_mom daemon. Client commands for submitting and managing jobs can be installed on any host (including hosts not running pbs_server or pbs_mom). More documentation about Torque is available here.
Torque is available with Fedora and CentOS 5.4 (through EPEL). For YUM-based systems type:
sudo yum -y install torque-server torque-scheduler torque-client
Make sure the directory containing the pbs_server executable is in your PATH. For CentOS this is usually /usr/sbin.
sudo pbs_server -t create
Enable the torque pbs_mom daemon on reboot:
sudo /sbin/chkconfig pbs_server on
sudo /sbin/service pbs_server restart
sudo /sbin/chkconfig pbs_sched on
sudo /sbin/service pbs_sched start
The format is:
node-name[:ts] [np=] [properties]
To add the localhost with two processors as a node, you would add:
localhost np=2
You should add every compute node to this file, e.g.,
node01.INSTITUTE.EDU np=2
node02.INSTITUTE.EDU np=4
node03.INSTITUTE.EDU np=2
Torque is available with Fedora and CentOS 5.4 (through EPEL). For YUM-based systems type:
sudo yum -y install torque-mom torque-client
see http://www.clusterresources.com/products/torque/docs/1.2basicconfig.shtml#initializenode for more details
Edit the /var/torque/mom_priv/config (CentOS 5) OR /var/lib/torque/mom_priv/config (CentOS 6) file:
$pbsserver headnode.INSTITUTE.EDU # hostname running pbs_server
For the localhost add:
$pbsserver localhost # hostname running pbs_server
Enable the torque pbs_mom daemon on reboot:
sudo /sbin/chkconfig pbs_mom on sudo /sbin/service pbs_mom start
http://www.clusterresources.com/torquedocs/1.3advconfig.shtml
Munge is an authentication service used to restrict job submission to authorized users and hosts:
sudo create-munge-key
sudo /sbin/chkconfig munge on
sudo service munge start
sudo qmgr -c 'set server authorized_users=user01@host01'
sudo qmgr -c 'set server authorized_users=user01@host02'
sudo qmgr -c 'set server authorized_users=user01@*'
On the head node, see if you can run qstat:
qstat
You can type:

pbsnodes

to check the state of the compute nodes.
On the head node, create a job and submit it:
echo "sleep 60" > test.job
echo "echo hello" >> test.job
qsub test.job
qstat
To get all settings:
sudo qmgr -c 'list server'
^ Setup Remote Processing | Install SSH module for PHP >
Follow the installation instructions.
Also install phpMyAdmin.
Note that phpMyAdmin version 2.11.10 works with older versions of PHP (that we happen to use).
This will grab the actual data that we use so you can play with it.
Log into cronus3 so that you can access cronus4.
$ ssh cronus3
Use mysqldump to get any table data that you want as in the example below.
Cronus4 is the host.
We do not lock the tables because we don't have permission to.
"project" is the name of the database and "login" is the name of the Table.
We make up a file name for the data to dump to.
$ mysqldump -h cronus4 -u usr_object --skip-lock-tables --extended-insert project login > ProjectLogin.sql
mysqldump -h cronus4 -u amber -p --skip-lock-tables --extended-insert project > Project.sql
The --extended-insert option causes mysqldump to generate multi-value INSERT commands inside the backup text file, which results in the file being smaller and the restore running faster.
More info on mysqldump is here.
Exit cronus3 when you are done dumping tables and load the dump files into your database.
If you followed the instructions for setting up MySQL in the Leginon Install guide, you have already created dbemdata and projectdata databases.
If you don't have them, create them first.
mysql -u root projectdata < ProjectLogin.sql
This is the Myami config file. It is being changed right now so this is in flux. Will update soon.
It should look like this:
// --- Set your leginon MySQL database server parameters
$DB_HOST = "localhost";
$DB_USER = "usr_object";
$DB_PASS = "";
$DB = "dbemdata";

// --- XML test dataset
$XML_DATA = "test/viewerdata.xml";

// --- Project database URL
$PROJECT_URL = "project";
$PROJECT_DB_HOST = "localhost";
$PROJECT_DB_USER = "usr_object";
$PROJECT_DB_PASS = "";
$PROJECT_DB = "projectdata";
Point your web browser to http://localhost/myamiweb/.
Navigate to the Administration page and then to the ProjectDB page.
Doing this will populate your database with the schema defined in myami/myamiweb/project/defaultprojecttables.xml.
If you need to repopulate tables, use phpMyAdmin to empty the Install table in the project DB. Then repeat the steps above.
The web interface for Appion allows one to log in directly to a computer and process Appion jobs, but this requires a job submission system to be installed.
Note: The Local Cluster and Refine Reconstruction Cluster can be the same machine, but you will still need to perform all the setup instructions below for each type of cluster.
The following applies to both the web-server computer (set up earlier) and a job submission system on a local cluster. The job submission system usually consists of a head node (main computer) for receiving and scheduling jobs and individual processing nodes (slave computers) for running jobs. All of these systems CAN exist on a single computer.
< Upload Images to a new Project Session | View a Summary of a Project Session >
Xmipp Sort by Statistics: This function sorts the particles in a stack by how closely they resemble the average. In general, this sorts the particles by how likely they are to be junk. After sorting the particles, a new stack will be created; you will then have to select the point at which the junk starts and Apply junk cutoff. The second function, Apply junk cutoff, will then create a third stack with no junk in it.
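The idea behind sorting by resemblance to the average can be sketched as follows (a toy criterion, not Xmipp's actual statistics):

```python
import numpy as np

def sort_by_resemblance(stack):
    """Order particles by normalized cross-correlation with the
    stack average, best first; likely junk ends up at the tail."""
    avg = stack.mean(axis=0)
    avg = (avg - avg.mean()) / avg.std()
    scores = np.array([np.mean(((img - img.mean()) / img.std()) * avg)
                       for img in stack])
    order = np.argsort(scores)[::-1]      # descending similarity
    return order, scores
```

Applying a junk cutoff then amounts to keeping only the head of the sorted order.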
< Center Particles | Create Substack >
This method uses the Spider CA S command to run correspondence analysis (coran), a form of principal components analysis, and classify your aligned particles.
Note: If you accessed "Run Feature Analysis" directly from an alignment run, you will be greeted by the screen displayed on the left below. Alternatively, if you accessed the "Run Feature Analysis Run" from the Appion sidebar menu, you will be greeted by the screen displayed on the right below.
<Run Feature Analysis | Run Particle Clustering >
This method uses the Spider AP MQ command to align your particles to the selected templates. Multiprocessing additions have made this extremely fast.
<Run Alignment | Run Feature Analysis >
This method uses the Spider AP SR command to align your particles.
<Run Alignment | Run Feature Analysis >
Coming Soon! Working out a few bugs...
< Refine Reconstruction|Quality Assessment >
After particle selection, individual particles are boxed out of the micrographs and placed into stack files for further processing.
< CTF Estimation | Particle Alignment >
This procedure boxes out particles and is also able to apply CTF and astigmatism correction.
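Stripped of CTF correction, boxing reduces to cropping a square window around each picked coordinate (a minimal sketch; the function name and edge-rejection policy are illustrative, not Appion's):

```python
import numpy as np

def box_particle(image, x, y, boxsize):
    """Crop a boxsize x boxsize window centered on the picked
    coordinate (x, y). Returns None when the box would extend past
    the micrograph edge, mirroring the usual rejection of edge picks."""
    half = boxsize // 2
    x0, y0 = x - half, y - half
    if (x0 < 0 or y0 < 0
            or y0 + boxsize > image.shape[0]
            or x0 + boxsize > image.shape[1]):
        return None
    return image[y0:y0 + boxsize, x0:x0 + boxsize]
```

The boxed windows are then stacked into a single file for the alignment steps that follow.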
<Stacks | Particle Alignment >
Appion Install Manual Referenced to the Leginon Installation
=====================
The Appion Team
email: appion@scripps.edu for any help you need.
This document describes a general installation of Appion, concentrating on the installation and setup of the database and web servers. Most sections refer to the Leginon installation documentation, since the two share the same general architecture. If you want to run real Leginon on the microscope, you just need to follow the additional installation steps starting from the Processing Server Windows Installation chapter of the Leginon installation manual.
We need to remove a table in the project database called "install". This will allow the new default tables to be defined when we set up on the web-server side.
$ mysql projectdata -u usr_object
mysql> drop table install;
mysql> exit
You need to decide what prefix you will use for the processing databases. We will be creating them on the fly later. Our default is ap followed by a project id number. More about this later.
At this point, you need to do the following to grant privileges to users for any database whose name starts with ap
$ mysql -u root -p
Note: If you didn't set a mysql root password, don't use -p option.
mysql> GRANT ALL PRIVILEGES ON `ap%`.* TO usr_object@"%";
mysql> exit
TODO: The following steps are most likely no longer needed here:
$DEF_PROCESSING_PREFIX = "ap";
addplugin("processing");
$PROCESSING_DB_HOST = "your_db_host";
$PROCESSING_DB_USER = "usr_object";
$PROCESSING_DB_PASS = "";
$PROCESSING_DB = "";

Remember that the last line should be kept empty, as this will be set dynamically.
We will not include the processing host or cluster registration now. It is covered in the last part of this document.
processing db: not set (create processing db)
db name: ap1

You can create the default numbered-style database ap... or give it a new name with the same prefix. If you want to specify a database name that does not use the default prefix, please note that the db user specified in the config.php in project_1_2 needs to have the necessary privileges for that database. You may additionally want to change the value assigned to $DEF_PROCESSING_PREFIX in project_1_2/config.php if you want to use your new prefix all the time.
processing db: ap1
See the next section on troubleshooting if you get the original page instead.
If you want all your processing databases combined in one single database (not recommended, as this becomes large very fast), just use the same name for all your projects.
The above procedure not only creates the database, but also creates some of the tables that you need to start processing.
use_processingdb_table = True
[appionData] user: usr_object
[Note] The module names in brackets are case sensitive and need to be exact.
The user name needs to match the name for which privileges have been granted on the `ap%` databases.
Find a list of these packages here.
Follow instructions for the individual packages.
Instruction for compiling Xmipp for OpenMPI is here.
name: my_scope hostname: whatever type: Choose TEM
name: my_scope hostname: whatever type: Choose CCDCamera
[Note] If you use Leginon and still want to upload non-Leginon images, make sure that you create a pair of fake instruments like these on a host used solely for uploading. It will be a disaster if you don't: the pixel size of the real instrument pair will be overwritten by your upload.
A more advanced way to run an Appion script is through an SSH session. This is equivalent to sshing into a computer and starting the Appion processes yourself.
There are two kinds of Appion processes. The first is a single-node process that can be run on a stand-alone workstation or the head node of a computer cluster without PBS. The second is a multiple-node process that requires PBS to run on the cluster. When you use the "Just Show Command" option, it is always a single-node process, but if you run through ssh it could be either, depending on the demands of the process. For example,
imageuploader.py always runs as a single-node process, while maxlikeAlignment.py either runs on a single node with "Just Show Command" or as a PBS job submission when you run it through ssh.
The extension module is added to PHP in the same way as the php-mrc module we distribute for viewing MRC images through PHP. To check whether it worked, and for an alternative way to make newer versions of PHP recognize the module, see http://ami.scripps.edu/documentation/leginon/bk02ch04s07.php under the sections "Check php information" and "Alternative approach if mrc module does not show up in info.php output".
$PROCESSING_HOSTS[]="your_stand-alone_processing_workstation"; $PROCESSING_HOSTS[]="your_cluster"; $CLUSTER_CONFIGS= array ( 'your_cluster' );
Follow these instructions to set up PBS on your processing server.
Check your info.php as you did with the mrctool installation. A correctly installed extension should show up in the output of info.php. Reference: http://ami.scripps.edu/documentation/leginon/bk02ch04s07.php under the sections "Check php information" and "Alternative approach if mrc module does not show up in info.php output".
Use the top right form on the processing page to log in as if doing an ssh session. The page will acknowledge that you have been logged in if the setup is correct. You will be able to edit description of a run and to hide failed runs when logged in. The option for submitting the job appears at the bottom of the processing form whenever available.
For reconstructions involving iterations of different parameters, such as EMAN reconstruction by refinement, your_cluster.php is used to generate the script. Examine the script created on the web form and modify your_cluster.php accordingly. You can copy the script to your cluster and test run/modify it until it is correct.
If you are installing on CentOS, this section outlines and streamlines the installation of prerequisite packages.
Purpose: Reconstruct a 3-dimensional electron density map in Appion using a streamlined protocol.
< Terminology | Quality Assessment >
$ svn copy http://ami.scripps.edu/svn/myami/trunk http://ami.scripps.edu/svn/myami/branches/myami-2.1 -m "Creating a branch for myami 2.1"
Committed revision 14869.
This section enables the user to create a synthetic projection dataset from an input 3D model with the application of randomized rotations and translations, white Gaussian noise, as well as an envelope and contrast transfer function.
This method uses projections of a 3D model in order to create a synthetic dataset. Although it can be modified according to the options specified, the scheme consists of 12 basic steps, as shown and summarized below:
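A stripped-down version of such a scheme, keeping only the random in-plane rotation, random shift, and white Gaussian noise steps (CTF and envelope application are omitted, and all names are illustrative, not Appion's):

```python
import numpy as np

def make_synthetic_dataset(volume, n_particles, noise_sigma, seed=0):
    """Project a cubic volume along z, then apply a random coarse
    in-plane rotation, a random small shift, and additive white
    Gaussian noise to each synthetic particle."""
    rng = np.random.default_rng(seed)
    images = []
    for _ in range(n_particles):
        proj = volume.sum(axis=0)                         # z projection
        proj = np.rot90(proj, k=int(rng.integers(4)))     # random rotation
        proj = np.roll(proj, int(rng.integers(-2, 3)), axis=0)  # y shift
        proj = np.roll(proj, int(rng.integers(-2, 3)), axis=1)  # x shift
        images.append(proj + noise_sigma * rng.standard_normal(proj.shape))
    return np.array(images)
```

Because the rotations and circular shifts preserve the total projected density, the noise-free images all integrate to the same value as the volume, a handy sanity check on the generator.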
Some Linux flavors have good package management systems, so it is to your advantage to use one of them.
We have a step-by-step installation guide for CentOS. If you use another flavor, it is up to you to find the details.
Check whether you already have these packages; if not, download and install them with your package management program.
You can find more detailed installation notes in the Complete Installation chapter of the Leginon installation manual.
Processing server

Name | Download site
---|---
Python 2.4 or newer | http://www.python.org
wxPython 2.5.2.8 or newer | http://www.wxpython.org
MySQL Python client 1.2 or newer | http://sourceforge.net/projects/mysql-python
Python Imaging Library (PIL) 1.1.4 or newer | http://www.pythonware.com/products/pil/
Python XML module 0.8.3 or newer | http://pyxml.sourceforge.net
NumPy 1.0.1 or newer | http://numpy.scipy.org
SciPy 0.5.1 or newer | http://www.scipy.org, http://repos.opensuse.org/science
Name | Download site
---|---
MySQL-Server 5.0 or higher | http://www.mysql.com
MySQL-Client 5.0 or higher | http://www.mysql.com
Name | Download site
---|---
Apache | www.apache.org
php | www.php.net
php-devel | rpmfind.net/linux/RPM/Development_Languages_PHP.html
php-gd (including GD library, its development libraries and header) | www.libgd.org (Use gd2)
fftw3-devel library (including development libraries and header) | www.fftw.org (Use fftw3)
Name | Which program needs it
---|---
ImageMagick | Appion stack creation
Grace | Appion summary reports
Matplotlib | Appion summary reports
GNU Plot | SPIDER
GCC Fortran95 | FINDEM
GCC Fortran77 | FINDEM
GCC Objective-C | ACE2
GNU Scientific Library | ACE2
Name | Where to get installation instructions
---|---
ssh2 extension for php | http://us.php.net/manual/en/book.ssh2.php
Since we are not up-to-date on all packages, we can't guarantee that the newest version you have will work.
Required Package | Version | Notes
---|---|---
EMAN | 1.9 cluster | download binary
UCSF Chimera | 1.2509 | download v1.2509 binary
rmeasure | 1.05 | download binary
Package | Version | Notes
---|---|---
SPIDER | 15 | download binary
Xmipp | 2.3 | download source
ctftilt | ? | download binary
FREALIGN | 8.08 | download binary
IMAGIC | 5 |
EM-BFACTOR | ? | download binary
Package | Version | Notes
---|---|---
Matlab | 7.5.0 | for ACE1 but not ACE2
NRAMM software is available from two separate sites on our local servers.
Template picking is usually the most accurate and convenient way to extract particles. Once an initial model or 2D averages have been acquired they can be used as templates to identify similar particles within the micrograph.
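At its core, template picking cross-correlates the template against the micrograph and reads particle positions off the correlation peaks. A toy sketch of that idea (real pickers such as FindEM use a locally normalized correlation, omitted here for brevity):

```python
import numpy as np
from scipy.signal import fftconvolve

def correlate_template(micrograph, template):
    """Cross-correlate a template with a micrograph; the peak of the
    returned correlation map marks the best match position."""
    # correlation is convolution with the template flipped in both axes
    return fftconvolve(micrograph, template[::-1, ::-1], mode="same")

# toy micrograph with one bright 5x5 "particle" centered at (20, 30)
mic = np.zeros((64, 64))
mic[18:23, 28:33] = 1.0
tmpl = np.ones((5, 5))
cc = correlate_template(mic, tmpl)
peak = np.unravel_index(np.argmax(cc), cc.shape)
print(peak)
```

In practice one would threshold the correlation map and take all local maxima, not just the single global peak.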
A. Project name, EM Session name for current dataset, and directory path for images
B. Appion SideBar is where all jobs are launched and tracked. It consists of several drop-down menus, organized in accordance with image processing stage. Only the processing steps possible for a given project at a given time are displayed (i.e. if you are working with untilted data, all tilt-data processing options are hidden). The top of this menu contains options to hide, expand, or contract the side bar.
C. Submenus contain links for running a particular process and also keep track of the jobs completed, queued, or running at a given time. Clicking on the link for running a particular procedure opens the options available for that procedure in a window next to the Appion sidebar. If the procedure has several packages associated with it (such as alignment), an initial page opens with clickable links to the Appion processing pages for the various algorithms available.
D. The left side of processing pages displays the minimal parameters that a user should check before running, and provides drop-down menus where appropriate.
E. The right side of processing pages is a gray box containing parameters that more experienced users are familiar with. Floating help boxes appear when mousing over a particular operation to guide the user in appropriately setting these parameters. Default parameters are entered automatically.
F. To run a procedure the user can click "Run Command" to submit the job or "Just Show Command" to copy and paste the command into a unix terminal. If the "Commit to Database" box is checked, either method will track the process in the database and display the results in the appion processing pages.
G. Underneath user defined parameter boxes appion displays any additional information relevant to the process. In this case, the template that was selected for reference based alignment is displayed.
H. References to the particular software used for any given procedure are provided. Please let us know if we have missed or need to update a reference!
Once setup.py has finished, you are ready to test out Appion:
cd myami/appion/
./check.sh
You need to edit leginon.cfg.
cd myami/appion/test
python check3rdPartyPackages.py
Note: check3rdPartyPackages.py is currently only available with a development svn checkout; it will be included in version 2.2.
The AMI database includes several test sessions that point to copies of collected images that can be used for testing purposes.
See issue #1229 for more information.
This section contains the step-by-step procedures for calculating tomograms.
For information about Tomography, see Tomography.
An alignment run is one or more iterations of alignment of images from one or more tilt series of the same area of interest.
A tilt series is defined as a group of images acquired during a single axis tilt sequence.
appiondata.ApTomoAlignmentRunData has references to one of the three tables that stores parameters for each method of alignment.
What do we want Tom to do while he is here:
The following is an internal e-mail describing a case in which we could run the php-mrc module from a text terminal with the php command but could not see the images it produced through a network-mounted drive.
The bottom line is that there was a permission issue; the tests eliminated the possibilities one by one.
Hi, Amber,
I got it to work.
The webserver user apache has no permission to serve files from
/home/linux_user/
What I did was:
(1) as root
$ cd /
$ mkdir data
$ chmod -R 755 data
This way, if you check with ls -l
you will get
drwxr-xr-x 5 root root 73 Dec 4 11:42 data
(2) change leginon.cfg in the installation
$ cd /usr/lib/python2.4/site-packages/leginon/config
$ vi leginon.cfg
[Images]
/data/leginon
This way, when I upload images to a new session, it will create a
directory under /data/leginon that is readable by everyone.
I figured it out by changing the user assigned
in /etc/httpd/conf/httpd.conf to linux_user, then, after restarting
apache, it could read the test images in /linux_user/Desktop/myami/myamiweb/test.
Then I realized that the system we use at linux_box allows read access
to all, and that a file is still not readable by others if its
parent directories are not readable by others.
It is probably something that we need to formulate better with
our system administrator. It is likely that we can do something more
acceptable by other groups. Apache has all kinds of permission
settings I didn't read through.
Anchi
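The key point in the e-mail above — a file is unreadable by "other" users unless every parent directory carries the o+x (search) bit and the file itself carries the o+r bit — can be checked with a short script. A minimal sketch (the function name is made up for illustration):

```python
import stat
from pathlib import Path

def first_unreachable_by_others(path):
    """Return the first ancestor directory (or the file itself) that
    blocks access for 'other' users, or None if the file is readable.
    A file is only readable by others if every parent directory has the
    o+x (search) bit and the file itself has the o+r bit."""
    p = Path(path).resolve()
    for parent in reversed(p.parents):  # walk from / down to the file's directory
        if not parent.stat().st_mode & stat.S_IXOTH:
            return str(parent)
    if not p.stat().st_mode & stat.S_IROTH:
        return str(p)
    return None
```

In the case above, calling this on an image under /home/linux_user/ would have pointed straight at the 700-mode home directory as the culprit.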
A web server troubleshooting tool is available at http://YOUR_HOST/myamiweb/test/checkwebserver.php.
You can browse to this page from the Appion and Leginon Tools home page (http://YOUR_HOST/myamiweb) by clicking on [test Dataset] and then [Troubleshoot].
This page will automatically confirm that your configuration file and PHP installation and settings are correct and point you to the appropriate documentation to correct any issues.
You may need to configure your firewall to allow incoming HTTP (port 80) and MySQL (port 3306) traffic:
$ system-config-securitylevel
Security-enhanced Linux may be preventing your files from loading. To fix this, run the following command:
$ sudo /usr/bin/chcon -R -t httpd_sys_content_t /var/www/html/
see this website for more details on SELinux
Unlinking a project and a database does not delete the database.
Download Myami 2.2 (contains Appion and Leginon) using one of the following options:
This is a stable supported branch from our code repository.
Change directories to the location that you would like to checkout the files to (such as /usr/local) and then execute the following command:
svn co http://ami.scripps.edu/svn/myami/branches/myami-2.2 myami/
This contains features that may still be under development. It is not supported and may not be stable. Use at your own risk.
svn co http://ami.scripps.edu/svn/myami/trunk myami/
cd /your_download_area/myami
sudo ./pysetup.sh install
That will install each package, and report any failures. To determine the cause of failure, see the generated log file "pysetup.log". If necessary, you can enter a specific package directory and run the python setup command manually. For example, if sinedon failed to install, you can try again like this:
cd /your_download_area/myami/sinedon
sudo python setup.py install
Important: You need to install the current version of Appion packages to the same location that you installed the previous version of Appion packages. You may have used the flag shown below (--install-scripts=/usr/local/bin) in your original installation. If you did, you need to use it this time as well. You can check whether you installed your packages there by browsing to /usr/local/bin and looking for ApDogPicker.py. If the file is there, you should use the flag. If the file is not there, you should remove the flag from the command to install Appion to the default location.
The pysetup.sh script above did not install the appion package. Since the appion package includes many executable scripts, it is important that you know where they are being installed. To prevent cluttering up the /usr/bin directory, you can specify an alternative path, typically /usr/local/bin, or a directory of your choice that you will later add to your PATH environment variable. Install appion like this:
cd /your_download_area/myami/appion
sudo python setup.py install --install-scripts=/usr/local/bin
Copy the entire myamiweb folder found at myami/myamiweb to your web directory (ex. /var/www/html). You may want to save a copy of your old myamiweb directory first.
Running the following script will indicate if you need to run any database update scripts.
cd /your_download_area/myami/dbschema
python schema_update.py
This will print out a list of commands to paste into a shell which will run database update scripts.
You can re-run schema_update.py at any time to update the list of which scripts still need to be run.
If you have a pre-2.0 Appion release and would like to Upgrade to 2.0:
The cs value that you enter during upload should be the correct value for the scope that the images were collected on. It is highly recommended that you make an effort to discover this value prior to uploading images to Appion. If you have uploaded images with the incorrect cs value, a system administrator with access to the Appion and Leginon databases can assist with changing the cs value associated with your images.
These are the steps that need to be done to change the cs value. (Applies to version 2.2 and later.)
The user is able to upload 3D models to be used in 3D refinement.
To launch:
Output:
The user is able to upload raw micrographs to either an existing or a new session to be used in Appion processing.
To launch:
/home/abc/xxx.mrc 1.63e-10 1 1 50000 -2e-06 200000 55
/home/abc/xyz.mrc 1.63e-10 1 1 50000 -2e-06 200000 10
For a series of untilted images, it might be easier to enter the parameters manually (assuming all your micrographs are in the same directory and all the other information is the same for each image).
For tilted images, the user will definitely need to create a parameter file to incorporate the stage alpha tilt information during upload.
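When many micrographs share every value except the path and the tilt angle, the parameter file can be generated with a few lines of Python. A minimal sketch, assuming the column order of the example lines above (the exact fields expected are defined by the upload form, so check its help text for your version):

```python
# Assumed column order, copied from the example lines above:
# path  pixel-size  binning-x  binning-y  magnification  defocus  high-tension  tilt
micrographs = [
    ("/home/abc/xxx.mrc", 55),
    ("/home/abc/xyz.mrc", 10),
]
with open("upload_params.txt", "w") as out:
    for path, tilt in micrographs:
        # constant columns reused from the example; only path and tilt vary
        out.write(f"{path} 1.63e-10 1 1 50000 -2e-06 200000 {tilt}\n")
```

The resulting upload_params.txt reproduces the two example lines shown above, one per micrograph.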
Output:
Users can upload particle picks from a manual picking session performed outside of Appion using EMAN's boxer program.
To launch:
Output:
An example command is:
uploadExternalRefine.py --rundir=/ami/data17/appion/11jan11a/recon/external_package_test --runname=external_package_test --description="testing out external upload on 11jan11 data, emanrecon11, first iteration" --projectid=224 --no-commit --expId=8397 --uploadIterations=1,2,3,4,5 --stackid=127 --modelid=19 --mass=3800 --apix=2.74 --box=160 --numberOfReferences=1 --numiter=1 --timestamp=11jul18z --symid=25
The user is able to upload a particle stack to be used within Appion processing.
To launch:
Output:
The user is able to upload 2D templates to be used in template-based particle picking or reference-based alignment.
To launch:
Output:
Link to top 10 cheat sheets.
chmod -R g+rw eman_recon14
chown -R <username> <folder>
qstat -au YOUR_USER_NAME
qstat -an
pbsnodes
xpbsmon
ssh fly
ps -ef |grep [your_username]
kill [process id]
rm [destination folder]
top
qsub <jobfilename>
qsub -I
df -h .
df -h
du
du -sch *
id <username>
# cat /etc/*release*
Do a lazy unmount followed by mount.
umount -l <drive>
mount <drive>
On Garibaldi at least:
module avail
From the Administration tool, an administrator may:
Note: Currently, users may not be removed from the database.
The following tasks may be completed with the Project tool:
Username | Firstname Lastname Displayed | Group | Description |
---|---|---|---|
administrator | Leginon-Appion Administrator | administrators | Default leginon settings are saved under this user |
anonymous | Public User | guests | If you want to allow public viewing to a project or an experiment, assign it to this user |
Users may be viewed and managed within the Administration tool.
You may sort the users by clicking on the column headers.
(Fly is a good choice since it is not a production server. You can create your own databases on Fly as well.)
basicReport.inc is a class that can be used to quickly display run information as well as parameters and results. It creates html tables by reading database tables.
The only input it needs is the expId, the jobtype, and a database table name. BasicReport is used for reporting results from automated testing.
This is a sample file using the class to display a list of all the makestack runs that have occurred for the session:
<?php
require_once "inc/basicreport.inc";

$expId = $_GET['expId'];
$summaryTable = "";

try {
    // Create an instance of the BasicReport class to display all the makestack runs from this session.
    $testSuiteReport = new BasicReport($expId, "makestack2", "ApStackRunData");

    if ($testSuiteReport->hasRunData()) {
        $runDatas = $testSuiteReport->getRunDatas(True);

        // For each testsuite run, set the URL for its report page and display its summary info.
        foreach ($runDatas as $runData) {
            $runReportPageLink = 'testsuiterunreport.php?expId='.$expId.'&rId='.$runData['DEF_id'];
            $summaryTable .= $testSuiteReport->displaySummaryTable($runData, $runReportPageLink);
        }
    } else {
        $summaryTable = "<font color='#cc3333' size='+2'>No Test Run information available</font>\n<hr/>\n";
    }
} catch (Exception $e) {
    $message = $e->getMessage();
    $summaryTable = "<font color='#cc3333' size='+2'>Error creating report page: $message </font>\n";
}

// Display the standard Appion interface header
processing_header("Test Suite Results", "Test Suite Results", $javascript, False);

// Display the table built by the BasicReport class or errors
echo $summaryTable;

// Display the standard Appion interface footer
processing_footer();
?>
Here is a sample file using the class to display all the information from a single makestack run:
<?php
require_once "inc/basicreport.inc";

$expId = $_GET['expId'];
$runId = $_GET['rId'];

try {
    // Create an instance of the BasicReport class using the makeStack jobtype and DB table
    $testSuiteReport = new BasicReport($expId, "makestack2", "ApStackRunData");

    // Get the run data for the specific test run we are reporting on
    $runData = $testSuiteReport->getRunData($runId);

    // The jobReportLink provides a link to another page for further information.
    // If there is no more data to display, this should link back to the current page.
    // If the 3rd param to displaySummaryTable() is True, sub tables will be parsed and displayed.
    $jobReportLink = 'testsuiterunreport.php?expId='.$expId.'&rId='.$runData['DEF_id'];
    $summaryTable = $testSuiteReport->displaySummaryTable($runData, $jobReportLink, True);
} catch (Exception $e) {
    $message = $e->getMessage();
    $summaryTable = "<font color='#cc3333' size='+2'>Error creating report page: $message </font>\n";
}

// Display the standard Appion interface header
processing_header("MakeStack Run Report", "MakeStack Run Report for $runData[stackRunName]");

// Display the table built by the BasicReport class or errors
echo $summaryTable;

// Display the standard Appion interface footer
processing_footer();
?>
Appion/Leginon 2.0.0 will be the initial deployment of Appion. Work toward this milestone will focus on ease of installation and user friendliness as well as robustness.
The Version number is 2.0 so that Appion and Leginon may continue with the same versioning.
Major Goals:
Appion/Leginon 2.2 will focus on the Extensibility of Appion.
The goal is to make it easy for outside labs to add new processing modules to the Image Pipeline.
Appion/Leginon 2.3 will expand the ways the user may interact with data. Add the ability to filter particles based on DB fields and create reports.
Use the following links to view new features, bug fixes and known bugs for all versions of Appion and Leginon.
New Features
Bug Fixes
Known Bugs
This is the destination for features to implement or bugs to fix that have no specific timeline.
This option takes the user to the stack summary page, which is also accessible from the "X Complete" link under the "Stack" submenu in the Appion SideBar.
The user is able to assess individual images on the web by either:
Output:
The following applies to the computer that will host the web-accessible image viewers and project management tools. This also provides the main user interface for Appion.
The purpose of the optional user authentication system, in combination with Project Management in the Leginon/Appion Database Tools on the web server, is to provide different levels of user privileges at institutions where the web server is available to all. In addition, by assigning project owners, users in the lower-privileged groups will not see projects from others. This makes finding an experiment session easier once data accumulate. Enabling the system is not required, but it is recommended if the web server can be accessed freely outside the intended group. Once enabled, no myamiweb pages can be accessed without logging in at the required privilege level.
Four levels of group privileges are included with the complete installation of Leginon/Appion 2.0, and four user groups are created by default to reflect them:
privilege level | default group name
---|---
All at administration level | administrators
View all but administrate owned | power users
Administrate/view only owned projects and view shared experiments | users
View owned projects and shared experiments | guests
These four default groups will not appear in the database of a system upgraded to 2.0 from an earlier version. During the upgrade, the group containing the "administrator" user is assigned the "All at administration level" privilege. All other groups are assigned the "Administrate/view only owned projects and view shared experiments" privilege. This can be changed after the database upgrade is completed.
Rule examples:
To enable or disable user authentication, run the setup wizard at http://YOUR_SERVER/myamiweb/setup.
Related Topics:
Install the Web Interface
Leginon upgrade instruction
User Guide on User Authentication/Management
Appion is a "pipeline" for single particle reconstruction. Appion is integrated with Leginon data acquisition but can also be used stand-alone after uploading images (either digital or scanned micrographs) or stacks using a set of provided tools. Appion consists of a web-based user interface linked to a set of python scripts that control several underlying integrated processing packages, including EMAN, SPIDER, FREALIGN, IMAGIC, Xmipp, FindEM, ACE, and Chimera. All data input and output is managed using tightly integrated MySQL databases. The goal is to have all control of the processing pipeline managed from the web-based user interface and all output from the processing presented using web-based viewing tools.
These notes are provided as a rough guide to using the pipeline but are not guaranteed to be up to date or accurate.
Appion users usually start off at a web page that presents them with a range of options for processing, reconstruction, analysis. This may look something like the following:
The user can select to proceed with any of the steps in the left-hand menu, but some of these may depend on earlier steps. For example, a stack cannot be made until particles have been selected. After any of the steps has been run, the user can choose to view the results by clicking on the "completed" or "available" labels.
Appion and Leginon depend on the same basic architecture so you can install either one or both together with almost no extra effort. You will need to perform the same basic three parts of system installation for either or both packages. Following this basic installation, if you want to run Leginon on the microscope, you will need to perform a few additional steps, and instructions can be found in the Leginon Manual.
The four basic parts of Appion are :
Installation instructions for all of these parts are included in the Appion installation instructions.
In addition, Appion also needs:
All 4 servers can run on the same machine. However, for an installation where a high volume of data, processing, and users is anticipated, it is recommended that the first three parts of the system be installed onto 3 separate computers.
Breadcrumbs are links that appear at the top of a wiki page that show the previous pages that you visited.
To add breadcrumbs you must set up parent/child relationships for wiki pages.
To set the parent of the page that you are currently viewing, select "Rename" at the top right.
Copy and paste the name of the parent page from the desired parent page's "Rename" section.
If your wiki page is long with multiple headings, you may want a table of contents.
{{toc}} adds it to the left hand side of your page.
{{>toc}} adds it to the right hand side.
[[AMI Redmine Quick Start Guide]] displays a link to the page named 'AMI Redmine Quick Start Guide': AMI Redmine Quick Start Guide
[[AMI_Redmine_Quick_Start_Guide|AMI Redmine QSG]] displays a link to the same page but with a different text: AMI Redmine QSG
[[AMI_Redmine_Quick_Start_Guide#1-Register-as-a-user|How to register]] displays a link to the header on the same page with a different text: How to register
Note that when linking to a header on another wiki page, the header must be labeled h1., h2., or h3. (h4. will not work.) Also, there may not be special characters such as period(.) or dash(-) in the header that you are linking to.
At the bottom of the wiki page is an "upload file" link. Use this to upload your image file to Redmine.
Then right click on the link to the file and select "Copy Link Location".
Next, edit the wiki page and paste the link location. Put an exclamation point(!) at the start and end of the URL.
You can also just put the name of the file you uploaded between the exclamation points, as long as you are referring to an image that is attached to the specific page you are editing. The full URL can be used on any wiki page.
Example: !myimage.png!
Move it to the right hand side of the page with a greater than symbol(>) after the first (!).
Example: !>myimage.png!
You can also turn the image into a link to a url by adding a colon (:) after the last (!) and then the url to link to.
Example: !myimage.png!:http://example.com
Since a Keynote file is actually a folder, it will not upload properly in Redmine. You will need to put the Keynote into an archive like Zip and then upload the zip file.
Reference: http://www.redmine.org/boards/2/topics/992?r=1032#message-1032
If you add refs and the bug number to your subversion message it automatically links them, e.g., 'refs #139'.
Issue #828
The following files contain a call to showOrSubmitCommand() or submitAppionJob(). These function calls need to be executed to verify the following:
1. Commands using showOrSubmitCommand should pre-pend the Appion Wrapper Path defined in the config file when both Show Command and Run Command are selected by the user.
2. Commands using submitAppionJob should pre-pend the path when Run Command is selected.
3. Commands using submitAppionJob should NOT pre-pend the path when Just Show Command is selected. In this case, the user will need to manually modify the command prior to executing it to include the wrapper path.
Appion wrapper path should look like: "/opt/myamisnap/bin/appion prepFrealign.py --stackid=1337 --modelid=22 ..."
To get to the trunk installation: http://cronus3.scripps.edu/betamyamiweb/
Filename | uses new showOrSubmitCommand() | uses old submitAppionJob() | Show Command (pass/fail) | Submit Command (pass/fail) | Notes |
---|---|---|---|---|---|
alignSubStack.php | X | pass | |||
applyJunkCutoff.php | X | ||||
bootstrappedAngularReconstitution.php | X | ||||
centerStack.php | X | ||||
coranSubStack.php | X | ||||
createmodel.php | X | ||||
createSyntheticDataset.php | X | ||||
emdb2density.php | X | ||||
imagicMSA.php | X | pass | pass | ||
jumpSubStack.php | X | ||||
makegoodavg.php | X | ||||
multiReferenceAlignment.php | X | ||||
pdb2density.php | X | ||||
postproc.php | X | FAIL (adding "appionlib" to Appion_Lib_Dir var in wrapper fixes this) | FAIL (same issue) | |
prepareFrealign.php | X | pass | pass | ||
runAppionScript.php.template | X | ||||
runMaskMaker.php | X | pass | pass | ||
runMaxLikeAlign.php | X | pass | pass | ||
sortJunk.php | X | ||||
subStack.php | X | pass | |||
uploadFrealign.php | X | ||||
uploadmodel.php | X | ||||
uploadParticles.php | X | ||||
uploadrecon.php | X | pass | pass | ||
uploadstack.php | X | ||||
uploadtemplate.php | X | ||||
uploadTemplateStack.php | X | ||||
uploadtomo.php | X | pass | |||
uploadXmippRecon.php | X | ||||
imagic3d0.php | X(Dmitry wants this left alone) | not working | |||
imagic3dRefine.php | a bit complicated to test | X (Dmitry wants this left alone) | not working | ||
manualMaskMaker.php | r15438 | pass | n/a | ||
runAce2.php | r15424 | ||||
runAffinityProp.php | r15439 | ||||
runClusterCoran.php | r15459 | ||||
runCombineStacks.php | r15440 | ||||
runCoranClassify.php | r15461 | pass | |||
runCtfEstimate.php | r15431 | ||||
runDogPicker.php | r15423 | pass | pass | ||
runEdIterAlignment.php | r15441 | getting a file error | getting a file error | ||
runEmanRefine2d.php | r15455 | ||||
runImgRejector.php | r15444 | ||||
runJpgMaker.php | r15490 | ||||
runKerDenSom.php | r15445 | pass | |||
runLoopAgain.php | r15446 | ||||
runMakeStack2.php | r15435 | pass | pass | ||
runOtrVolume.php | r15492 | need tilt pairs | |||
runPyAce.php | r15447 | has errors with matlab stuff | |||
runRctVolume.php | r15493 | need a tilted stack | |||
runRefBasedAlignment.php | r15451 | ||||
runRefBasedMaxlikeAlign.php | r15452 | ||||
runRotKerDenSom.php | r15448 | ||||
runSignature.php | r15491 | ||||
runSpiderNoRefAlignment.php | r15449 | ||||
runStackIntoPicks.php | r15450 | ||||
runSubTomogram.php | r15277 | pass | pass | ||
runTemplateCorrelator.php | r15475 | ||||
runTiltAligner.php | r15494 | ||||
runTiltAutoAligner.php | r15495 | ||||
runTomoAligner.php | r15275 | pass | pass | ||
runTomoAverage.php | r15277 | pass | pass | ||
runTomoMaker.php | r15277 | pass | pass | ||
runTopolAlign.php | r15454 | ||||
runUploadMaxLike.php | r15458 | error with database - student key | |||
uploadimage.php | X | pass | pass if from expt;fail if from project (disabled, see #864) | ||
imagicMSAcluster.php | r15496 |
Kerden SOM stands for 'Kernel Probability Density Estimator Self-Organizing Map'. It maps a set of high-dimensional input vectors (aligned particles) onto a two-dimensional grid, as described in Pascual-Montano et al., Journal of Structural Biology 133(2):233-245 (2001). Note that this method combines feature analysis and clustering into a single step.
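The idea of mapping high-dimensional particle vectors onto a 2D grid can be sketched with a plain self-organizing map. The snippet below is an illustrative sketch only (it omits the kernel density estimation that distinguishes KerDenSOM from a standard SOM), and the function names are my own, not Appion's:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a plain SOM: each input vector pulls its best-matching grid
    node toward it, and neighboring nodes follow with a Gaussian falloff.
    (Sketch only; KerDenSOM adds kernel density estimation on top.)"""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]  # node coordinates, for the neighborhood
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)  # shrinking neighborhood
        v = data[rng.integers(len(data))]    # pick a random input vector
        # best-matching unit: node whose weight vector is closest to v
        d = np.linalg.norm(weights - v, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood centered on the BMU
        g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (v - weights)
    return weights

def map_to_grid(data, weights):
    """Assign each input vector to its best-matching grid node."""
    d = np.linalg.norm(weights[None] - data[:, None, None, :], axis=3)
    idx = d.reshape(len(data), -1).argmin(axis=1)
    return np.unravel_index(idx, weights.shape[:2])
```

After training, `map_to_grid` places similar particles on the same or neighboring nodes, which is the sense in which the map performs clustering and feature analysis at once.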
Note: If you accessed "Run Feature Analysis" directly from an alignment run, you will be greeted by the screen displayed on the left below. Alternatively, if you accessed "Run Feature Analysis" from the Appion sidebar menu, you will be greeted by the screen displayed on the right below.
<Run Feature Analysis | Ab Initio Reconstruction >
This method is unbiased and very thorough, but also the slowest of the methods (~days). Maximum likelihood also performs only a coarse search (integer pixel shifts and ~5 degree angle increments), so it is best to generate templates with this method and then use reference-based alignment to obtain better alignment parameters.
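The granularity of such a coarse search can be illustrated with a brute-force sketch. This is my own illustrative code, not the maximum-likelihood implementation Appion invokes (real ML alignment marginalizes over these parameters rather than picking a single maximum); it simply scores every combination of rotation angle and integer shift against a template:

```python
import numpy as np
from scipy.ndimage import rotate

def coarse_align(image, template, max_shift=4, angle_step=5):
    """Exhaustive coarse search: integer pixel shifts and angle_step-degree
    rotations, scoring each candidate by its dot product with the template.
    Returns (score, angle, dy, dx) of the best candidate."""
    best = (-np.inf, 0, 0, 0)
    for angle in range(0, 360, angle_step):
        rot = rotate(image, angle, reshape=False, order=1)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                cand = np.roll(np.roll(rot, dy, axis=0), dx, axis=1)
                score = float((cand * template).sum())
                if score > best[0]:
                    best = (score, angle, dy, dx)
    return best
```

Note how the search space grows multiplicatively: 72 angles at 5-degree steps times 81 shift combinations for a +/-4 pixel range already gives 5832 candidates per particle, which is why coarse parameters are used for the first pass and a finer method refines them afterwards.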
<Run Alignment | Run Feature Analysis >
This method is similar to reference-free maximum likelihood alignment, except that you select the templates first.
<Run Alignment | Run Feature Analysis >
This function applies the Kerden SOM to rotationally symmetric particles after alignment. This is especially useful for classifying particles with different cyclic symmetries.
Note: If you accessed "Run Feature Analysis" directly from an alignment run, you will be greeted by the screen displayed on the left below. Alternatively, if you accessed "Run Feature Analysis" from the Appion sidebar menu, you will be greeted by the screen displayed on the right below.
<Run Feature Analysis | Ab Initio Reconstruction >