Notes from Recon meeting » History » Version 1
Amber Herold, 06/22/2011 03:39 PM
h1. Notes from Recon meeting

Moving forward, refinements will all be split into two steps: prep and run.
h2. Prepare refine

When the user selects to prep a refinement, a web form is provided to select the:

# refinement method - EMAN, Xmipp, Frealign, etc.
# stack
# model
# run parameters - runname, rundir, description
# stack prep params - lp (low-pass), hp (high-pass), last particle, binning

The web server then calls prepRefine.py, located on the local cluster, to prepare the refinement.
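The prep step above might be wired up as follows. This is a sketch only: the flag names and the prepRefine.py command-line interface shown here are assumptions for illustration, not the script's actual API.

```python
# Hypothetical sketch: assembling the prepRefine.py command line from the
# web form fields. All flag names are assumptions for illustration.

def build_prep_command(method, stack_id, model_id, run_params, prep_params):
    """Build the prepRefine.py invocation as an argument list."""
    cmd = ["prepRefine.py",
           "--method=%s" % method,          # eman, xmipp, frealign, ...
           "--stackid=%d" % stack_id,
           "--modelid=%d" % model_id,
           "--runname=%s" % run_params["runname"],
           "--rundir=%s" % run_params["rundir"],
           "--description=%s" % run_params["description"]]
    # stack prep params: low-pass, high-pass, last particle, binning
    for key in ("lp", "hp", "lastpart", "bin"):
        if key in prep_params:
            cmd.append("--%s=%s" % (key, prep_params[key]))
    return cmd

cmd = build_prep_command("eman", 12, 3,
                         {"runname": "refine1", "rundir": "/data/refine1",
                          "description": "test run"},
                         {"lp": 15, "bin": 2})
```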
h2. Run Refine

When the user selects to run a prepared refinement, a web form is provided to select the:

# prepped refinement
# cluster parameters - ppn, nodes, walltime, cputime, memory, mempernode
# refine parameters, both general and method-specific
The web server will then:

# verify the cluster parameters by checking default_cluster.php
# if needed, copy the stack and model to a location that the selected cluster can access
# verify that the user is logged into the cluster
# pass the list of commands to runJob.py (extended from the Agent class), located on the remote cluster
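Step 4 of the list above could be sketched as an ssh hand-off. The host name and the token format here are assumptions for illustration; the real web server may ship the commands differently.

```python
import shlex

# Hypothetical sketch of passing the command list to runJob.py on the
# remote cluster over ssh. Host name and token format are assumptions.

def build_remote_launch(host, command_tokens):
    """Return an ssh invocation that runs runJob.py with the given tokens."""
    remote = "runJob.py " + " ".join(shlex.quote(t) for t in command_tokens)
    return ["ssh", host, remote]

launch = build_remote_launch("guppy",
                             ["jobtype=emanrefine", "nodes=4", "ppn=8"])
# the web server would then execute this, e.g. with subprocess.call(launch)
```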
runJob.py will:

# format the command tokens into a dictionary of key-value pairs
# set the job type, which is passed in the command
# create an instance of the job class based on the job type
# create an instance of the processing host class
# launch the job via the processing host
# update the job status in the Appion database (do we have database access from the remote cluster?)
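Steps 1-3 of the runJob.py flow above could look like the following. The class and key names are assumptions for illustration, not the actual runJob.py code.

```python
# Hypothetical sketch of the first steps of runJob.py:
# parse tokens into a dict, read the job type, and pick the job class.

def parse_tokens(tokens):
    """Step 1: format the command tokens into a dict of key-value pairs."""
    return dict(t.split("=", 1) for t in tokens)

class EmanRefineJob(object):
    """Stand-in job class for one job type, for illustration only."""
    def __init__(self, params):
        self.params = params

# step 3: registry mapping job types to job classes
JOB_CLASSES = {"emanrefine": EmanRefineJob}

def run_job(tokens):
    params = parse_tokens(tokens)
    jobtype = params["jobtype"]            # step 2: job type from the command
    job = JOB_CLASSES[jobtype](params)     # step 3: instantiate the job class
    return job

job = run_job(["jobtype=emanrefine", "nodes=4", "ppn=8"])
```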
h2. Object Model

h3. Processing Host

Each processing host (e.g. Garibaldi, Guppy, Trestles) will define a class extended from a base ProcessingHost class.
The extended classes know what headers need to be placed at the top of job files, and they know how to execute a command based on the specific cluster's requirements.
The base ProcessingHost class could be defined as follows:
<pre>
abstract class ProcessingHost:
    def generateHeader(jobObject) - abstract, extending classes define this; returns a string
    def executeCommand(command) - abstract, extending classes define this
    def createJobFile(header, commandList) - defined in base class; commandList is a 2D array, each row is a line in the job file
    def launchJob(jobObject) - defined in base class; jobObject is an instance of the job class specific to the job type we are running
        header = generateHeader(jobObject)
        jobFile = createJobFile(header, jobObject.getCommandList())
        executeCommand(jobFile)
</pre>
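Rendered as runnable Python, the sketch above might look like this. The GuppyHost subclass, its PBS-style header, and the FakeJob stand-in are assumptions for illustration, not the real implementation.

```python
# Runnable Python rendering of the ProcessingHost sketch above.
# The GuppyHost subclass and FakeJob stand-in are illustrative assumptions.

class ProcessingHost(object):
    def generateHeader(self, jobObject):
        # abstract: extending classes return this cluster's job-file header
        raise NotImplementedError

    def executeCommand(self, command):
        # abstract: extending classes run the command per cluster requirements
        raise NotImplementedError

    def createJobFile(self, header, commandList):
        # commandList is a 2D array; each row becomes one line of the job file
        lines = [" ".join(row) for row in commandList]
        return header + "\n".join(lines) + "\n"

    def launchJob(self, jobObject):
        header = self.generateHeader(jobObject)
        jobFile = self.createJobFile(header, jobObject.getCommandList())
        return self.executeCommand(jobFile)

class GuppyHost(ProcessingHost):
    """Hypothetical host that writes a PBS-style header."""
    def generateHeader(self, jobObject):
        return "#PBS -l nodes=%d:ppn=%d\n" % (jobObject.nodes, jobObject.ppn)

    def executeCommand(self, command):
        # a real host would submit via the cluster's scheduler; here we just
        # return the job file so the flow can be inspected
        return command

class FakeJob(object):
    """Stand-in for a job-type class, for illustration only."""
    nodes, ppn = 4, 8
    def getCommandList(self):
        return [["runEman.py", "--iter=5"]]

jobFile = GuppyHost().launchJob(FakeJob())
```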