This repository provides a reinforcement learning solution to the Optimal Camera Placement (OCP) problem.
```shell
# build virtual environment
python -m venv .
# activate it (Linux/macOS: `source bin/activate`; Windows: `Scripts\activate`)
# install required packages
pip install -r requirements.txt
pip install -e .
```
- `pyntcloud`
  - For fewer warnings, change the code in `Lib\site-packages\pyntcloud\io\ply.py` from

    ```python
    def describe_element(name, df):
        # ......
        else:
            for i in range(len(df.columns)):
                # get first letter of dtype to infer format
                f = property_formats[str(df.dtypes[i])[0]]  ### change this line !!!
                element.append('property ' + f + ' ' + df.columns.values[i])

        return element
    ```

    to

    ```python
    def describe_element(name, df):
        # ......
        else:
            for i in range(len(df.columns)):
                # get first letter of dtype to infer format
                f = property_formats[str(df.dtypes.iloc[i])[0]]  ### add `.iloc` !!!
                element.append('property ' + f + ' ' + df.columns.values[i])

        return element
    ```
Make sure your current working directory is `labsurv/`, then:
```shell
# train a new agent for cart pole
python tools/train.py --config configs/cart_pole/cart_pole.py

# test an existing agent for cart pole
# should change `load_from` in the config file first
python tools/test.py --config configs/cart_pole/cart_pole.py
```
This repository uses `black`, `isort` and `flake8` for code formatting.
To use shell scripts on Windows, add the `git/bin/` directory of your Git installation to your system environment variables. The `sh` command will then be available in terminals.
To format changed files, run

```shell
sh infra/scripts/format.sh --git
```

Find more formatting options with `--help`.
The project mainly uses the `mmcv` registry module as its module administration system.
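As a quick illustration of the registry pattern (the `AGENTS` registry and `DummyAgent` class are made up for this example; note that in mmcv 2.x these utilities live in `mmengine` instead):

```python
from mmcv.utils import Registry, build_from_cfg

# a registry maps `type` names used in config files to Python classes
AGENTS = Registry("agent")


@AGENTS.register_module()
class DummyAgent:
    def __init__(self, lr=1e-3):
        self.lr = lr


# build an instance from a config dict, as the config-driven entry points do
agent = build_from_cfg(dict(type="DummyAgent", lr=5e-4), AGENTS)
```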
An abstraction of the project structure is provided below.
The `Runner` class is the information processing center of the learning process. It deals with all the information from the other components and checks whether the episode truncates due to specific conditions.
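Conceptually, one episode of the loop the `Runner` coordinates looks roughly like the sketch below. This is a hedged illustration, not the actual `Runner` code: the component interfaces and the return signature of `step()` are assumptions.

```python
def run_episode(env, agent, replay_buffer, batch_size=32):
    """One episode of the learning loop (illustrative only)."""
    observation = env.reset()
    terminated = False
    while not terminated:
        action = agent.take_action(observation)
        # `step()` takes the current observation and the action (see below);
        # the 3-tuple return value here is an assumption
        observation, reward, terminated = env.step(observation, action)
        replay_buffer.add((observation, action, reward, terminated))
        if len(replay_buffer) >= batch_size:
            agent.update(replay_buffer.sample(batch_size))
        # a real Runner would also check its truncation conditions here
```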
The `Agent` is the class that performs RL algorithms. It is recommended to inherit from the `BaseAgent` class when creating a new `Agent`.

An `Agent` must implement the `take_action()` and `update()` methods (see the sketch after this list).

- `take_action()`: the `Agent` takes an action according to the observation input and its exploration strategy.
- `update()`: the `Agent` updates its behaviour strategy (which is the target of optimization in RL) according to some experience sample(s), usually taken from the `ReplayBuffer`.
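A minimal sketch of a new agent, assuming `BaseAgent` is importable as below (the import path and the uniform-random policy are illustrative assumptions):

```python
import random

from labsurv.models.agents import BaseAgent  # hypothetical import path


class RandomAgent(BaseAgent):
    """Toy agent: uniform-random exploration, no learning."""

    def __init__(self, action_num: int):
        self.action_num = action_num

    def take_action(self, observation):
        # exploration strategy: ignore the observation, pick a random action
        return random.randrange(self.action_num)

    def update(self, samples):
        # a real agent would optimize its behaviour strategy on `samples`
        pass
```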
The `Environment` is the class that models the extrinsic reward and the terminating condition. It is designed to be a little different from the `gym.Env` class, to better decouple the `Agent` and the `Environment` and to adapt to dynamically changing `Environment`s. It is recommended to inherit from the `BaseEnv` class when creating a new `Env`.
An `Environment` must implement the `step()` and `reset()` methods (see the sketch after this list).

- `step()`: runs one timestep of the environment's dynamics. The `Environment` takes in the current observation and action to perform a step. The observation is designed to be a sort of information spreading across the `Agent` and the `Environment`, rather than a static state of the `Environment`.
- `reset()`: resets a dynamic environment to the initial state and returns an initial observation according to the initial state distribution. For static `Environment`s, there is no need to do state initialization. An `Environment` should specify its initial state distribution to generate `reset()` outputs.
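A minimal static environment might look like the sketch below (the import path and the `step()` return values are illustrative assumptions):

```python
from labsurv.envs import BaseEnv  # hypothetical import path


class CountdownEnv(BaseEnv):
    """Toy static environment that terminates after a fixed number of steps."""

    MAX_STEPS = 10

    def __init__(self):
        self.steps = 0

    def step(self, observation, action):
        # run one timestep of the environment's dynamics
        self.steps += 1
        next_observation = observation  # a real env would update the shared info
        reward = 1.0 if action == 0 else 0.0
        terminated = self.steps >= self.MAX_STEPS
        return next_observation, reward, terminated

    def reset(self):
        # static env: the initial state distribution collapses to a single point
        self.steps = 0
        return 0  # initial observation
```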
The `ReplayBuffer` is the class that stores experience and samples batches for the `Agent` to update its strategy. A `BaseReplayBuffer` class is provided for further use.
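The store-then-sample pattern such a buffer supports can be illustrated with a toy stand-in (this is not the `BaseReplayBuffer` API, just the general idea):

```python
import random
from collections import deque


class ToyReplayBuffer:
    """Minimal illustration of the store-then-sample pattern."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest experience evicted first

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling; prioritized schemes would weight transitions
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```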
This class generates the `SurveillanceRoom` structure for the environment `BaseSurveillanceEnv`. To build a sample room, run
```shell
python tools/demo/room/build_surveil_room.py
```
Find the output in `output/surv_room/`. You may also find generated samples in `tools/demo/room/samples/`.
The `SurveillanceRoom` class implements the `add_block()` and `add_cam()` methods as its main functionality. You may add `occupancy`, `install_permitted` and `must_monitor` blocks with `add_block()`, and add multiple sorts of cameras with `add_cam()`. The intrinsics of the cameras should be saved in `cfg_path` beforehand; a usage sketch is given below.
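In this sketch, the import path, argument names and the `std_cam` type are illustrative assumptions, not the exact signatures:

```python
from labsurv.physics import SurveillanceRoom  # hypothetical import path

# intrinsics for each camera type are loaded from `cfg_path`
room = SurveillanceRoom(cfg_path="configs/surveillance/_base_/envs/std_surveil.py")

# mark voxels as solid, installable, or requiring surveillance coverage
room.add_block([0, 0, 0], block_type="occupancy")
room.add_block([1, 0, 0], block_type="install_permitted")
room.add_block([2, 2, 1], block_type="must_monitor")

# install a camera of a configured type at an install-permitted position
room.add_cam([1, 0, 0], direction=[0, 1, 0], cam_type="std_cam")
```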
An example of the camera configuration is given in `configs\surveillance\_base_\envs\std_surveil.py`. You may simply add new types of cameras as value dicts of `cam_intrinsics`, and it is recommended to choose from the existing intrinsic parameter dicts `clips`, `focals` and `resols`.
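Based on that description, adding a new camera type might look like the sketch below (an assumed structure for illustration, not the verbatim contents of `std_surveil.py`):

```python
# hypothetical sketch of the intrinsics config structure
clips = dict(std=[0.3, 8.0])        # assumed near/far clip distances
focals = dict(std=[0.004, 0.006])   # assumed focal length options
resols = dict(std=[1920, 1080])     # assumed sensor resolutions

cam_intrinsics = dict(
    # each key is a camera type; each value dict picks from the param dicts
    std_cam=dict(
        clip=clips["std"],
        focal=focals["std"],
        resol=resols["std"],
    ),
    # add new camera types as additional value dicts here
)
```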