Notes from Recon meeting » History » Version 2
Amber Herold, 06/22/2011 03:59 PM
h1. Notes from Recon meeting

Moving forward, refinements will all be split into two steps: prep and run.

h2. Prepare refine

When the user selects to prep a refinement, a web form is provided to select the:

# refinement method - eman, xmipp, frealign, etc...
# stack
# model
# run parameters - runname, rundir, description
# stack prep params - lp, hp, last particle, binning

The web server then calls prepRefine.py, located on the local cluster, to prepare the refinement.
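The hand-off from the form to prepRefine.py could look roughly like the sketch below. The flag names and the buildPrepCommand helper are assumptions for illustration, not the actual prepRefine.py interface.

```python
# Hypothetical sketch: assembling the prep-refine web-form fields into a
# prepRefine.py command line. Flag names are assumptions, not the real CLI.

def buildPrepCommand(form):
    # form is a dict of the web-form selections listed above
    tokens = ["prepRefine.py"]
    for key in ("method", "stack", "model",
                "runname", "rundir", "description",
                "lp", "hp", "lastParticle", "bin"):
        if form.get(key) is not None:
            tokens.append("--%s=%s" % (key, form[key]))
    return " ".join(tokens)

cmd = buildPrepCommand({"method": "eman", "stack": 12, "model": 3,
                        "runname": "refine1", "rundir": "/data/refine1",
                        "description": "test", "lp": 20, "hp": 400,
                        "lastParticle": 5000, "bin": 2})
```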
h2. Run Refine

When the user selects to run a prepared refinement, a web form is provided to select the:

# prepped refine
# cluster parameters - ppn, nodes, walltime, cputime, memory, mempernode
# refine params, both general and method specific

The web server will then:

# verify the cluster params by checking default_cluster.php
# if needed, copy the stack and model to a location that can be accessed by the selected cluster
# verify the user is logged into the cluster
# pass the list of commands to runJob.py (extended from the Agent class), located on the remote cluster
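A minimal sketch of those four dispatch steps, with placeholder class and method names (the real checks live in the Appion web code and default_cluster.php):

```python
# Placeholder sketch of the web-server dispatch steps; names are illustrative.

class Cluster:
    def __init__(self, remote, loggedIn):
        self.remote, self.loggedIn = remote, loggedIn
        self.sent = None
    def copyFiles(self, paths):
        pass  # stand-in for staging stack/model to cluster-visible storage
    def send(self, script, commandList):
        self.sent = (script, commandList)  # stand-in for invoking runJob.py remotely

def dispatchRefine(prepped, clusterParams, cluster, defaults):
    # step 1: verify cluster params against the defaults from default_cluster.php
    for key, limit in defaults.items():
        if clusterParams.get(key, 0) > limit:
            raise ValueError("cluster param %s exceeds limit" % key)
    # step 2: if needed, copy stack and model where the selected cluster can see them
    if cluster.remote:
        cluster.copyFiles([prepped["stack"], prepped["model"]])
    # step 3: verify the user is logged into the cluster
    if not cluster.loggedIn:
        raise PermissionError("user not logged in to cluster")
    # step 4: pass the list of commands to runJob.py on the remote cluster
    cluster.send("runJob.py", prepped["commandList"])
```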
runJob.py will:

# format the command tokens into a dictionary of key-value pairs
# set the job type, which was passed in the command
# create an instance of the job class based on the job type
# create an instance of the processing host class
# launch the job via the processing host
# update the job status in the Appion database (do we have db access from the remote cluster?)
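The steps above can be sketched as follows; the registry and function names are assumptions for illustration, since the real classes extend Appion's Agent/Job/ProcessingHost code:

```python
# Rough sketch of the runJob.py flow; names are illustrative only.

JOB_TYPES = {}  # maps the jobtype passed in the command to a job class

def parseCommand(argv):
    # step 1: format the command tokens into a dictionary of key-value pairs
    params = {}
    for token in argv:
        key, _, value = token.partition("=")
        params[key] = value
    return params

def runJob(argv, hostClass):
    params = parseCommand(argv)
    jobClass = JOB_TYPES[params["jobtype"]]  # steps 2-3: job class from jobtype
    job = jobClass(params)
    host = hostClass()                       # step 4: processing-host instance
    host.launchJob(job)                      # step 5: launch via the host
    return job                               # step 6 (db status update) omitted
```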
h2. Object Model

h3. Processing Host

Each processing host (e.g. Garibaldi, Guppy, Trestles) will define a class extended from a base ProcessingHost class.
The extended classes know what headers need to be placed at the top of job files and how to execute a command based on the specific cluster's requirements.
The base ProcessingHost class could be defined as follows:
<pre>
from abc import ABC, abstractmethod

class ProcessingHost(ABC):
    @abstractmethod
    def generateHeader(self, jobObject):
        pass  # extending classes define this; returns a string

    @abstractmethod
    def executeCommand(self, command):
        pass  # extending classes define this

    def createJobFile(self, header, commandList):
        pass  # defined in base class; commandList is a 2D array, each row is a line in the job file

    def launchJob(self, jobObject):
        # defined in base class; jobObject is an instance of the job class
        # specific to the job type we are running
        header = self.generateHeader(jobObject)
        jobFile = self.createJobFile(header, jobObject.getCommandList())
        self.executeCommand(jobFile)
</pre>
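For illustration, a hypothetical extending class for a PBS/Torque-style scheduler might look like this. The TorqueHost name and the PBS directives are assumptions; a real Garibaldi or Guppy class would emit whatever its scheduler actually requires. A compact stand-in for the base class is included so the sketch is self-contained.

```python
from abc import ABC, abstractmethod

class ProcessingHost(ABC):  # minimal stand-in for the base class sketched above
    @abstractmethod
    def generateHeader(self, jobObject): ...
    @abstractmethod
    def executeCommand(self, command): ...
    def launchJob(self, jobObject):
        jobFile = self.generateHeader(jobObject) + "\n".join(
            " ".join(row) for row in jobObject.getCommandList())
        self.executeCommand(jobFile)

class TorqueHost(ProcessingHost):
    # Hypothetical PBS/Torque-style host; directive names are illustrative only.
    def generateHeader(self, jobObject):
        # headers placed at the top of the job file for this cluster
        return ("#PBS -N %s\n#PBS -l nodes=%s:ppn=%s\n#PBS -l walltime=%s\n"
                % (jobObject.name, jobObject.nodes, jobObject.ppn, jobObject.walltime))
    def executeCommand(self, command):
        self.lastCommand = command  # a real host would submit via qsub/ssh
```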
h3. Job

Each type of Appion job (e.g. Emanrefine, xmipprefine) will define a class that is extended from a base Job class.
The extending classes know the parameters that are specific to the job type and how to format the parameters for the job file.
The base Job class could be defined as follows:
<pre>
class Job(object):
    def __init__(self, paramDictionary):
        self.name = paramDictionary.get("name")
        self.rundir = paramDictionary.get("rundir")
        self.ppn = paramDictionary.get("ppn")
        self.nodes = paramDictionary.get("nodes")
        self.walltime = paramDictionary.get("walltime")
        self.cputime = paramDictionary.get("cputime")
        self.memory = paramDictionary.get("memory")
        self.mempernode = paramDictionary.get("mempernode")
        self.commandList = self.createCommandList(paramDictionary)

    def createCommandList(self, paramDictionary):
        # defined by subclasses; returns a commandList, a 2D array where
        # each row corresponds to a line in a job file
        raise NotImplementedError
</pre>
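A hypothetical extending class shows how a subclass might implement createCommandList. The EmanRefineJob name and the parameter and program names are illustrative, not EMAN's real interface; a compact stand-in for the base class keeps the sketch self-contained.

```python
class Job(object):  # minimal stand-in for the base class sketched above
    def __init__(self, paramDictionary):
        self.commandList = self.createCommandList(paramDictionary)

class EmanRefineJob(Job):
    # Hypothetical job-type class; parameter names are illustrative only.
    def createCommandList(self, params):
        # each row of the returned 2D array becomes one line in the job file
        return [
            ["cd", params["rundir"]],
            ["refine", str(params["iterations"]), "ang=%s" % params["ang"]],
        ]

job = EmanRefineJob({"rundir": "/data/refine1", "iterations": 10, "ang": 5})
```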