pdgjob package¶
Submodules¶
pdgjob.mayarpc module¶
class pdgjob.mayarpc.CommandHandler(request, client_address, server)¶
Bases: SocketServer.StreamRequestHandler

handle()¶
pdgjob.mayarpc.exec_then_eval(code)¶
Executes the first N-1 lines and then evaluates and returns the result of the last line.
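A minimal sketch of the exec-then-eval pattern described above; the helper name and namespace handling are illustrative, not the actual mayarpc implementation:

    # Hypothetical illustration of the exec-then-eval pattern, not the real code.
    def exec_then_eval_sketch(code, namespace=None):
        namespace = {} if namespace is None else namespace
        lines = code.rstrip().split('\n')
        # Execute all but the last line as statements in the shared namespace ...
        exec('\n'.join(lines[:-1]), namespace)
        # ... then evaluate the final line as an expression and return its value.
        return eval(lines[-1], namespace)

    print(exec_then_eval_sketch("x = 2\ny = 3\nx * y"))  # prints 6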
pdgjob.mayarpc.get_ip_address()¶
Get the IP address that the primary network adapter is bound to.
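One common way to discover the primary adapter's address is the connected-UDP-socket trick sketched below; whether mayarpc uses this exact approach is an assumption:

    import socket

    def get_ip_address_sketch():
        # No packets are sent: connecting a UDP socket only selects the local
        # interface that would be used to route to the given address.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect(('8.8.8.8', 80))
            return s.getsockname()[0]
        finally:
            s.close()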
pdgjob.mayarpc.main()¶
pdgjob.mayarpc.recv_all_data(sock)¶
Receives all data from the given socket and returns a bytearray object.
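A hedged sketch of a receive-all loop consistent with that description; the chunk size and the reliance on the peer closing the connection are assumptions:

    def recv_all_data_sketch(sock, chunk_size=4096):
        data = bytearray()
        while True:
            chunk = sock.recv(chunk_size)
            if not chunk:
                # Peer shut down its side; everything has arrived.
                break
            data.extend(chunk)
        return data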
pdgjob.mayarpc.redirect_streams()¶
pdgjob.mayarpc.start_server(address, port)¶
pdgjob.mayarpc.substitute_scheduler_vars(data)¶
pdgjob.pdgcmd module¶
class pdgjob.pdgcmd.WorkItemMessage¶
Bases: object

ECancelled = '3'¶
ECheckReady = '4'¶
EFailed = '2'¶
EMaxMessageType = '5'¶
EResultData = '1'¶
EStartCook = '5'¶
ESuccess = '0'¶
clear()¶
datatag¶
static decodeFromJSON(payloadview, viewoffset)¶
duration¶
encodeAsJSON()¶
hashcode¶
jobid¶
kDataTag = '4'¶
kDuration = '6'¶
kHashCode = '5'¶
kJobID = '7'¶
kMessageType = '0'¶
kName = '1'¶
kResultData = '3'¶
kSubIndex = '2'¶
messagetype¶
n = 5¶
name¶
resultdata¶
subindex¶
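The E* constants enumerate message types and the k* constants are the field keys used by the JSON encoding. A hedged usage sketch, assuming WorkItemMessage takes no constructor arguments and that encodeAsJSON returns a serialized payload:

    from pdgjob.pdgcmd import WorkItemMessage

    msg = WorkItemMessage()
    msg.messagetype = WorkItemMessage.EResultData
    msg.name = 'my_workitem_12'        # made-up work item name
    msg.subindex = -1                  # assumed value for a non-batch item
    msg.resultdata = '/tmp/out.bgeo.sc'
    msg.datatag = 'file/geo'

    payload = msg.encodeAsJSON()       # serialize using the k* field keys
    msg.clear()                        # reset the fields for reuse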
pdgjob.pdgcmd.batchPollExp(item_name, batch_index_str, extra_vars=[], server_addr='__PDG_RESULT_SERVER__')¶
Returns a callback expression to use for polling whether a batch sub-item can begin cooking.
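A hedged usage sketch; the item name, index expression, and server address below are made-up values:

    from pdgjob.pdgcmd import batchPollExp

    expr = batchPollExp(
        'my_batch_item',                 # name of the batch work item
        '$F-1',                          # expression for the batch sub-index
        extra_vars=[],                   # extra variables for expression parsing
        server_addr='127.0.0.1:51234',   # callback server, 'IP:PORT'
    )
    # expr is an expression string to embed in the job's polling logic.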
pdgjob.pdgcmd.delocalizePath(local_path)¶
Delocalizes the given path so that it is rooted at __PDG_SHARED_ROOT__. Requires the PDG_SHARED_ROOT environment variable to be present.
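A hedged illustration; the shared-root location and file path are made up, and the exact rewritten form depends on the PDG environment:

    import os
    from pdgjob.pdgcmd import delocalizePath

    os.environ.setdefault('PDG_SHARED_ROOT', '/mnt/farm_share/project')

    # A path under the shared root is re-rooted at the __PDG_SHARED_ROOT__ token.
    shared = delocalizePath('/mnt/farm_share/project/geo/out.bgeo')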
pdgjob.pdgcmd.execBakeResultGenerated(item_name, server_addr, result_data, result_data_tag='', batch_index_str=None, extra_vars=[], and_success=False, to_stdout=True, duration=0.0)¶
DEPRECATED: Use reportResultData instead.
Executes an item callback directly to report when a file has been generated. This path will be added to the work item's resultData with a "file" tag.
item_name: name of the associated work item
server_addr: callback server in the format 'IP:PORT', or an empty string to ignore
result_data: string path to the baked result file. Must not include ';'
result_data_tag: result tag used to categorize the result, e.g. 'file/geo'. Default is empty, which means an attempt is made to categorize using the file extension.
and_success: if True, report success in addition to result_data
to_stdout: also emit status messages to stdout
If the item is a batch item: the batch_index_str value is used as the expression for the batch index itself, for example "$F-1". extra_vars can be used to pass in extra variables that should be used during expression parsing, for example batch_index_str = "$F-%i", extra_vars = ["upstream_item.index"].
pdgjob.pdgcmd.execItemFailed(item_name, server_addr, to_stdout=True)¶
Executes an item callback directly to report when an item has failed.
item_name: name of the associated work item
server_addr: callback server in the format 'IP:PORT', or an empty string to ignore
to_stdout: also emit status messages to stdout
If there is an error connecting to the callback server, an error will be printed but no exception raised.
Note: Batch items are not supported.
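A hedged usage sketch; reading the item name and server address from $PDG_ITEM_NAME and $PDG_RESULT_SERVER mirrors the defaults documented for reportResultData below:

    import os
    from pdgjob.pdgcmd import execItemFailed

    execItemFailed(
        os.environ.get('PDG_ITEM_NAME', ''),
        os.environ.get('PDG_RESULT_SERVER', ''),  # empty string -> stdout only
        to_stdout=True,
    )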
pdgjob.pdgcmd.execStartCook(item_name, server_addr, batch_index_str=None, extra_vars=[], to_stdout=True)¶
Executes an item callback directly to report that a work item with a specific index has started cooking.
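A hedged usage sketch for a batch sub-item; the '$F-1' index expression and the environment variable names are illustrative values:

    import os
    from pdgjob.pdgcmd import execStartCook

    execStartCook(
        os.environ.get('PDG_ITEM_NAME', ''),
        os.environ.get('PDG_RESULT_SERVER', ''),
        batch_index_str='$F-1',   # expression giving the batch sub-index
        to_stdout=True,
    )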
pdgjob.pdgcmd.localizePath(deloc_path)¶
Localizes the given path. This replaces any __PDG* tokens and expands environment variables with the values in the current environment.
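A hedged illustration of the token expansion; __PDG_SHARED_ROOT__ is the only token named in these docs, and the example path is made up:

    from pdgjob.pdgcmd import localizePath

    # With PDG_SHARED_ROOT=/mnt/farm_share/project this would be expected to
    # resolve to '/mnt/farm_share/project/geo/out.bgeo'.
    local = localizePath('__PDG_SHARED_ROOT__/geo/out.bgeo')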
pdgjob.pdgcmd.reportResultData(result_data, item_name=None, server_addr=None, result_data_tag='', batch_index_str=None, extra_vars=[], and_success=False, to_stdout=True, duration=0.0, hash_code=0, job_id=0)¶
Reports a result to PDG via the callback server.
result_data: result data; treated as bytes if result_data_tag is passed
item_name: name of the associated work item (default $PDG_ITEM_NAME)
server_addr: callback server in the format 'IP:PORT' (default $PDG_RESULT_SERVER). If the env var is not present, this falls back to stdout reporting only.
result_data_tag: result tag used to categorize the result, e.g. 'file/geo'. Default is empty, which means an attempt is made to categorize using the file extension.
and_success: if True, report success in addition to result_data
to_stdout: also emit status messages to stdout
duration: cook time of the item in seconds; only reported with and_success
hash_code: hash code for the result
job_id: scheduler-specific job id
If the item is a batch item: the batch_index_str value is used as the expression for the batch index itself, for example "$F-1". extra_vars can be used to pass in extra variables that should be used during expression parsing, for example batch_index_str = "$F-%i", extra_vars = ["upstream_item.index"].
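A hedged usage sketch that relies on the documented defaults, so the item name and callback server come from $PDG_ITEM_NAME and $PDG_RESULT_SERVER; the output path is made up:

    from pdgjob.pdgcmd import reportResultData

    # Report the generated file and, in the same call, mark the work item as
    # successfully cooked along with its cook time.
    reportResultData(
        '/mnt/farm_share/project/geo/out.bgeo',
        result_data_tag='file/geo',
        and_success=True,
        duration=12.5,
    )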
pdgjob.pdgcmd.reportResultDataPB(result_data, item_name=None, server_addr=None, result_data_tag='', batch_index_str=None, extra_vars=[], and_success=False, to_stdout=True, duration=0.0, hash_code=0)¶
Reports a result to PDG via the callback server, using the protobuf protocol.
result_data: result data; treated as bytes if result_data_tag is passed
item_name: name of the associated work item (default $PDG_ITEM_NAME)
server_addr: callback server in the format 'IP:PORT' (default $PDG_RESULT_SERVER). If the env var is not present, this falls back to stdout reporting only.
result_data_tag: result tag used to categorize the result, e.g. 'file/geo'. Default is empty, which means an attempt is made to categorize using the file extension.
and_success: if True, report success in addition to result_data
to_stdout: also emit status messages to stdout
duration: cook time of the item in seconds; only reported with and_success
hash_code: hash code for the result
If the item is a batch item: the batch_index_str value is used as the expression for the batch index itself, for example "$F-1". extra_vars can be used to pass in extra variables that should be used during expression parsing, for example batch_index_str = "$F-%i", extra_vars = ["upstream_item.index"].