Developer Q&A
Harvester has a multi-threaded architecture by default. Each thread invokes plugin functions synchronously. Plugins can spawn background processes and/or threads to perform tasks asynchronously; however, plugins must limit the number of background processes or threads and must call wait() or join() themselves.
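For illustration, a submitter-type plugin that parallelizes its work internally could follow a pattern like the minimal sketch below. The class name, the submit_workers interface (modeled on the dummy plugins), and the per-worker helper are assumptions; only the pattern of capping the thread count and joining before returning is the point.

# Minimal sketch (assumptions noted above): the plugin caps its own
# concurrency and joins all background threads before returning.
import threading

class ExampleSubmitter(object):
    n_threads = 4  # the plugin limits the number of its own threads

    def submit_workers(self, workspec_list):
        results = [None] * len(workspec_list)

        def do_submit(index, workspec):
            # ... submit one worker to the batch system here ...
            results[index] = (True, '')

        threads = []
        for i, workspec in enumerate(workspec_list):
            thr = threading.Thread(target=do_submit, args=(i, workspec))
            thr.start()
            threads.append(thr)
            if len(threads) >= self.n_threads:
                # wait for the current batch before spawning more
                for thr in threads:
                    thr.join()
                threads = []
        # the plugin itself must join the remaining threads
        for thr in threads:
            thr.join()
        return results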
Harvester uses SQLite and multi-threading by default to tightly limit resource usage, which is required to run on edge nodes in HPC centers. However, harvester instances can be configured to use a more powerful database backend and multi-processing based on Apache+WSGI, as shown in this section, which could be better for use cases where harvester instances run on dedicated machines.
Each worker is identified by a unique identifier in the batch system, such as a batch job ID or condor job ID. Plugins take actions for the worker using that identifier.
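As a sketch, a monitor-type plugin could use that identifier to query the batch system directly. The check_workers interface below is modeled on the dummy plugins, workspec.batchID is assumed to hold the condor job ID, and the condor_q parsing is deliberately simplified.

# Sketch only: query condor with the worker's batch identifier and map
# the result to a simple status string (simplified on purpose).
import subprocess

class ExampleMonitor(object):
    def check_workers(self, workspec_list):
        ret_list = []
        for workspec in workspec_list:
            proc = subprocess.Popen(['condor_q', str(workspec.batchID),
                                     '-format', '%d', 'JobStatus'],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            if out.strip() == b'2':      # HTCondor JobStatus 2 = Running
                new_status = 'running'
            elif out.strip() == b'1':    # 1 = Idle
                new_status = 'submitted'
            else:                        # gone from the queue
                new_status = 'finished'
            ret_list.append((new_status, ''))
        return True, ret_list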
Harvester's propagator agents send heartbeats every 30 min for running jobs or immediately for finished/failed jobs.
It uses external components as libraries, i.e. in the same process and the same memory space.
Normally pilots kill themselves once they get the kill command from PanDA through heartbeats. However, even if pilots stop sending heartbeats, Harvester can get the list of stuck pilots from PanDA and kill them directly using condor_rm etc.
Each harvester instance has a unique identifier. Config files for harvester instances are stored on PanDA; a config file is downloaded using the identifier when the instance comes up. The config file contains the list of PQs for which the instance works.
It is possible to have multiple harvester instances per PQ. For example, the queue depth can be set dynamically by PanDA in a harvester instance. The easiest solution would be to set the queue depth to 1000 when only one instance is running, and reduce it to 500 when another instance comes up for the same PQ.
In the pull-model workflow, ultimately only the status would be enough, since the pilot directly reports other information to PanDA. In the push-model workflow, all information that the pilot reports would be desirable.
Job attributes are stored in the harvester DB as a serialized dictionary, so that it is easy to add new attributes.
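As a toy illustration (the real column name and serialization format in harvester may differ, e.g. pickle instead of json), adding an attribute is just adding a key to the dictionary before it is serialized:

# Toy example: job attributes live in one serialized dictionary,
# so a new attribute is just a new key, with no schema change.
import json

job_attributes = {'nEvents': 1000, 'cpuConsumptionTime': 5432}
blob = json.dumps(job_attributes)        # what gets stored in the DB

restored = json.loads(blob)
restored['myNewAttribute'] = 42          # hypothetical new attribute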
The idea is to periodically upload the contents of the harvester DB to Oracle. There will be a full or slimmed mirror table of the harvester DB in Oracle. BigPandaMon will show views on that table, providing harvester-based resource monitoring. There will also be monitoring of harvester instances which shows the aliveness of those instances.
See dummy plugins like DummySubmitter and DummyMonitor.
Two pickle files, jobspec.pickle and workspec.pickle, are available in the examples directory. For example:
$ wget https://github.com/HSF/harvester/raw/master/examples/workspec.pickle
$ python -i -c "import pickle; obj = pickle.load(open('workspec.pickle', 'rb'))"
Note that you need to set up the runtime environment beforehand; otherwise, pickle cannot import the XyzSpec classes.
harvester_id is specified in panda_harvester.cfg. It is an arbitrary string (max 50 characters, without whitespace) that identifies the harvester instance. Mechanisms such as auto boot and pandamon views work with this ID. It is automatically registered in the central database when you run your harvester instance.
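For example, assuming the identifier lives in the [master] section as in the config template (check your own panda_harvester.cfg), it is set like this:

[master]
# arbitrary string, max 50 characters, no whitespace
harvester_id = my_site_harvester_01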
External packages are specified in the install_requires attribute of setup.py only when they are used by the harvester core or by many plugins. If a plugin needs additional external packages, they must be specified on the plugin spec wiki page.
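For reference, a dependency used by the core would go into setup.py roughly as below; the package name listed here is purely illustrative and not a statement of harvester's actual dependency list.

# Illustrative only: core/common dependencies are declared via
# install_requires, while plugin-specific packages are documented
# on the plugin spec wiki page instead.
from setuptools import setup

setup(
    name='example-package',
    version='0.0.1',
    install_requires=[
        'requests',   # example of a package used by the core
        ],
    )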
When launching harvester, you may see error messages like:
File ".../harvestercore/db_proxy.py", line 48, in __init__
if harvester_config.xyz.blah == 1:
AttributeError: _SectionClass instance has no attribute 'blah'
This error implies that your panda_harvester.cfg is incomplete and you need to add the blah attribute to the xyz section. Here is an example of the latest config file.
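In other words, using the placeholder names from the error message above, the fix is to add something like

[xyz]
blah = 1

to panda_harvester.cfg (the actual section, attribute, and value depend on the message you got).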
How to convert a resource-specific Panda Queue to a pseudo Panda Queue to work with unified Panda Queue + pull
The name of a pseudo Panda Queue is composed of the name of the unified Panda Queue and the name of the resource type, concatenated with a slash, e.g. FZK-LCG2/MCORE. Currently four resource types are defined: SCORE, SCORE_HIMEM, MCORE, and MCORE_HIMEM. In the pull mode, harvester works with pseudo Panda Queues, which means that harvester has four pseudo queues for each unified Panda Queue and submits workers for each pseudo queue. Plugins should be aware that the names of unified Panda Queues need to be used when retrieving data from the information system. Also, the names of unified Panda Queues need to be passed as the argument of the -s option of pilot.py, and the resource type needs to be set via -R <resourceType> when getting jobs from Panda.
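Since the naming convention is plain string composition, a plugin can convert back and forth with something like the sketch below. Only the "unified name + slash + resource type" rule comes from the convention above; the helper names are illustrative.

# Minimal sketch of composing/decomposing pseudo Panda Queue names.
RESOURCE_TYPES = ['SCORE', 'SCORE_HIMEM', 'MCORE', 'MCORE_HIMEM']

def make_pseudo_queues(unified_pq):
    # one pseudo queue per resource type, e.g. FZK-LCG2/MCORE
    return ['{0}/{1}'.format(unified_pq, rt) for rt in RESOURCE_TYPES]

def split_pseudo_queue(pseudo_pq):
    # unified name -> information system lookups and pilot.py -s
    # resource type -> pilot.py -R
    unified_pq, _, resource_type = pseudo_pq.partition('/')
    return unified_pq, resource_type

print(make_pseudo_queues('FZK-LCG2'))
print(split_pseudo_queue('FZK-LCG2/MCORE'))   # ('FZK-LCG2', 'MCORE')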
First, you should check the log files of the monitor and propagator agents, since they are generally the busiest. For example, if you see something like
2018-03-14 10:45:16,092 panda.log.propagator: DEBUG run <id=140637210924800> done : took 230.705 sec
2018-03-14 10:46:06,828 panda.log.propagator: DEBUG run <id=140637219317504> done : took 226.866 sec
in the propagator's log, this means that the propagator took ~230 sec to process a single cycle. Note that you need a lockInterval larger than 230 sec to prevent another thread from picking up the same jobs while a thread is still processing them. Concerning lockInterval, see this section. At the same time, you may consider decreasing the number of jobs per cycle, decreasing the sleep interval, or increasing the number of threads.
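As a hedged illustration, the tuning happens in the agent's section of panda_harvester.cfg; lockInterval is the parameter named above, while the other entries are commented placeholders whose real names should be checked against the config template.

[propagator]
# must exceed the measured cycle time (~230 sec in the log above)
lockInterval = 300
# placeholders -- check the template for the actual parameters that
# control jobs per cycle, sleep interval, and number of threads
#maxJobsPerCycle = ...
#sleepInterval = ...
#nThreads = ...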