Notes from Recon meeting » History » Version 3
Amber Herold, 06/22/2011 04:38 PM
h1. Notes from Recon meeting

Moving forward, refinements will all be split into two steps: prep and run.

h2. Prepare refine

When the user selects to prep a refinement, a web form is provided to select the:

# refinement method - eman, xmipp, frealign, etc...
# stack
# model
# run parameters - runname, rundir, description
# stack prep params - lp, hp, last particle, binning

The web then calls prepRefine.py, located on the local cluster, to prepare the refinement.
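The prep step can be pictured as the web layer marshalling the form selections into a prepRefine.py command line. A minimal sketch, assuming a `--key=value` flag style; every flag name below is an illustrative placeholder, not the script's actual interface:

```python
# Sketch: turn the web form selections into a prepRefine.py command line.
# All flag names are hypothetical, not taken from the real script.
def build_prep_command(params):
    tokens = ["prepRefine.py"]
    for key, value in params.items():
        tokens.append("--%s=%s" % (key, value))
    return " ".join(tokens)

form = {
    "method": "eman",      # refinement method
    "runname": "refine1",  # run parameters
    "lp": 15,              # stack prep: low-pass filter
    "bin": 2,              # stack prep: binning
}
print(build_prep_command(form))
# prepRefine.py --method=eman --runname=refine1 --lp=15 --bin=2
```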

h2. Run Refine

When the user selects to run a prepared refinement, a web form is provided to select the:

# prepped refine
# cluster parameters - ppn, nodes, walltime, cputime, memory, mempernode
# refine params, both general and method specific

The web server will then:

# verify the cluster params by checking default_cluster.php
# if needed, copy the stack and model to a location that can be accessed by the selected cluster
# verify the user is logged into the cluster
# pass the list of commands to runJob.py (built on the Agent class), located on the remote cluster

runJob.py will:

# format the command tokens into a dictionary of key-value pairs
# set the job type, which was passed in the command
# create an instance of the job class based on the job type
# create an instance of the processing host class
# launch the job via the processing host
# update the job status in the appion database (do we have db access from the remote cluster?)
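Steps 1 and 2 above (tokens into a key-value dictionary, then reading the job type) might look like the following sketch; the `--key=value` flag format is an assumption:

```python
import shlex

# Sketch of runJob.py steps 1-2: split the command into tokens, build a
# key-value dictionary, then read the jobtype out of it.
def parse_command(command):
    params = {}
    for token in shlex.split(command)[1:]:  # skip the script name itself
        key, _, value = token.lstrip("-").partition("=")
        params[key] = value
    return params

command = "runJob.py --jobtype=emanrefine --runname=refine1 --nodes=4"
params = parse_command(command)
jobType = params["jobtype"]  # "emanrefine"
```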

h2. Object Model

h3. Processing Host

Each processing host (e.g. Garibaldi, Guppy, Trestles) will define a class extended from a base ProcessingHost class.
The extended classes know what headers need to be placed at the top of job files and how to execute a command according to the specific cluster's requirements.
The base ProcessingHost class could be defined as follows:

<pre>
class ProcessingHost(object):
    def generateHeader(self, jobObject):  # abstract; extending classes define this, returns a string
        raise NotImplementedError

    def executeCommand(self, command):  # abstract; extending classes define this
        raise NotImplementedError

    def createJobFile(self, header, commandList):  # commandList is a 2D array, each row is a line in the job file
        pass

    def launchJob(self, jobObject):  # jobObject is an instance of the job class specific to the jobtype being run
        header = self.generateHeader(jobObject)
        jobFile = self.createJobFile(header, jobObject.getCommandList())
        self.executeCommand(jobFile)
</pre>
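As an illustration, a concrete host for a Torque/PBS-style queue would only have to supply the header format and the submission call. The class name, PBS directives, and qsub usage below are assumptions, not taken from the real Garibaldi/Guppy/Trestles code; a minimal stand-in base class is included so the sketch is self-contained:

```python
import subprocess

class ProcessingHost(object):
    # minimal stand-in for the base class sketched above
    def generateHeader(self, jobObject):
        raise NotImplementedError
    def executeCommand(self, command):
        raise NotImplementedError

class TorqueHost(ProcessingHost):
    # Hypothetical PBS/Torque host: only the job-file header and the
    # submission command differ from other clusters.
    def generateHeader(self, jobObject):
        return ("#PBS -N %s\n" % jobObject.name
                + "#PBS -l nodes=%d:ppn=%d\n" % (jobObject.nodes, jobObject.ppn)
                + "#PBS -l walltime=%s\n" % jobObject.walltime)

    def executeCommand(self, command):
        # submit the generated job file to the queue
        return subprocess.call(["qsub", command])
```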

h3. Job

Each type of appion job (e.g. emanrefine, xmipprefine) will define a class that is extended from a base Job class.
The extending classes know the parameters that are specific to the job type and how to format the parameters for the job file.
The base Job class could be defined as follows:

<pre>
class Job(object):
    def __init__(self, command):  # constructor takes the command (runJob.py --runname --rundir ....)
        self.name = None
        self.rundir = None
        self.ppn = None
        self.nodes = None
        self.walltime = None
        self.cputime = None
        self.memory = None
        self.mempernode = None
        self.commandList = self.createCommandList(command)

    def createCommandList(self, command):  # defined by subclasses; returns a 2D array, each row a line in the job file
        raise NotImplementedError
</pre>
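A job subclass then mainly has to supply createCommandList(). The sketch below uses placeholder EMAN-style commands (the actual refine invocation is not specified here), with a minimal stand-in Job base class so it runs on its own:

```python
class Job(object):
    # minimal stand-in for the base Job class sketched above
    def __init__(self, params):
        self.commandList = self.createCommandList(params)

class EmanJob(Job):
    def createCommandList(self, params):
        # 2D array: each row becomes one line in the job file.
        # The refine command and its options are illustrative placeholders.
        return [
            ["cd", params["rundir"]],
            ["refine", "ang=5", "pad=256"],
        ]

job = EmanJob({"rundir": "/data/appion/refine1"})
```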

h3. Agent

There will be an Agent class that is responsible for creating an instance of the appropriate job class and launching the job.
It will be implemented as a base class, where sub classes may override the createJobInst() function. For now, there will be only one sub class defined,
called RunJob. The same runJob.py will be installed on all clusters. This implementation will allow flexibility for the future.
The base Agent class may be defined as follows:

<pre>
class Agent(object):
    def main(self, command):
        jobType = self.getJobType(command)
        job = self.createJobInst(jobType, command)
        processHost = ProcessingHost()
        jobId = processHost.launchJob(job)
        self.updateJobStatus()

    def getJobType(self, command):  # parses the command to find and return the jobtype
        pass

    def createJobInst(self, jobType, command):  # subclasses must override this to create the appropriate job instance
        raise NotImplementedError

    def updateJobStatus(self):  # not sure yet how this will be defined
        pass
</pre>

Sub classes of Agent will define the createJobInst() function.
We could create a single subclass that creates a job class for every possible appion job type.
(We could make a rule that job sub classes are named after the jobtype with the word "Job" appended; then this function would never need to be modified.)
A sample implementation is:

<pre>
class RunJob(Agent):
    def createJobInst(self, jobType, command):
        if jobType == "emanrefine":
            job = EmanJob(command)
        elif jobType == "xmipprefine":
            job = XmippJob(command)
        return job
</pre>
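Since Python has no switch statement, the same dispatch is often written as a jobtype-to-class mapping; combined with the naming rule suggested above, a new job type would only need a new dictionary entry. The job classes here are empty placeholders standing in for the real subclasses:

```python
# Sketch: dictionary-based dispatch as an alternative to a chain of if/elif.
class EmanJob(object):
    def __init__(self, command):
        self.command = command

class XmippJob(object):
    def __init__(self, command):
        self.command = command

JOB_CLASSES = {
    "emanrefine": EmanJob,
    "xmipprefine": XmippJob,
}

def createJobInst(jobType, command):
    return JOB_CLASSES[jobType](command)

job = createJobInst("emanrefine", "runJob.py --jobtype=emanrefine")
```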