Merge pull request #1 from TDHTTTT/tony/resturcture
Tony/resturcture
iitaku authored Nov 20, 2019
2 parents 15f6594 + f929e84 commit b5c8606
Showing 13 changed files with 1,053 additions and 26 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,2 +1,5 @@
__pycache__
output/
/data/
log/
*.swp
.ipynb_checkpoints/
39 changes: 21 additions & 18 deletions README.md
@@ -1,8 +1,8 @@
## CARLA-DeepDriving
# CARLA-DeepDriving
Implementing [DeepDriving][dd-url] with [CARLA simulator][carla-url].


### Background
## Background
**DeepDriving**:

DeepDriving shows that by using a CNN to extract certain information (i.e. affordance indicators) from an image taken by a typical RGB dash cam, one can control a vehicle in highway traffic, including adjusting speed and changing lanes.
@@ -25,7 +25,7 @@ Despite the somewhat narrow scope, DeepDriving still demonstrates some interesti
CARLA is an open urban driving simulator focused on supporting the development of autonomous driving systems. Various measurements (e.g. the location of the car, the width of the lane) are readily available during simulation thanks to its convenient [PythonAPI][carla-py-url] and fully annotated maps. Various sensors and cameras (e.g. RGB camera, depth camera, lidar) are also available. Other nice features include configurable vehicles, maps, and weather, a synchronous mode, and a no-rendering mode. The synchronous mode turns out to be critical for recording the data in the way we want.
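For example, synchronous mode can be toggled through the PythonAPI. A minimal sketch follows (the host/port values are assumptions; the settings calls mirror the ones used in `generate_data.py`):

```python
import carla

# Connect to a running CARLA server (default host/port assumed)
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Enable synchronous mode so the server only advances when the client ticks
settings = world.get_settings()
settings.synchronous_mode = True
world.apply_settings(settings)
```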


### Data Collection
## Data Collection

[comment]: # (I am not sure if I should write "how to use the code" or "how did I implement this" kind of documentation. Also, I need to update the usage once cli flag is supported)

@@ -37,7 +37,7 @@ To start the simulation, execute `<carla_dir>/CarlaUE4.sh Town04 --benchmark -f

Since DeepDriving is based on highway driving, [Town04][town04-url] is used. Because Town04 also includes some non-highway roads, frames and ground truth are not recorded while the ego vehicle is off the highway. Note that the vehicle may later return to the highway, at which point recording resumes, so you might notice some discontinuity in the collected frames.
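One plausible way to implement such a check with the PythonAPI (an assumption for illustration; see `generate_data.py` for the actual logic) is to query the annotated map for the waypoint under the ego vehicle:

```python
# Hypothetical sketch: find the lane the ego vehicle is currently on
waypoint = world.get_map().get_waypoint(vehicle.get_location())

# road_id identifies the road segment; frames could be skipped whenever the
# segment is not part of the highway (HIGHWAY_ROAD_IDS is a made-up set)
on_highway = waypoint.road_id in HIGHWAY_ROAD_IDS
```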

To start collecting data, execute `generate_data.py`
To start generating data, execute `src/data/generate_data.py`

All parameters, such as the number of ego vehicles, the number of NPCs, and the simulation time limit, can be configured through the CLI. The only required arguments are the `duration` and `name` of the simulation; any other missing argument falls back to a tested default value that should work.

@@ -64,34 +64,37 @@ python3 generate_data.py --duration 300 --name exp --debug
When the simulation ends, you get (e.g. for 5 ego vehicles):

```bash
output/
├ images
│   ├ v0
│   ├ v1
│   ├ v2
│   ├ v3
│   └ v4
└ labels.csv
data/{name}/
├── {name}_labels.csv
├── v0
├── v1
├── v2
├── v3
└── v4
```

The labels.csv file has the following header:
The `{name}_labels.csv` file has the following header:

```
image-id,angle,toMarking_L,toMarking_M,toMarking_R,dist_L,dist_R,toMarking_LL,toMarking_ML,toMarking_MR,toMarking_RR,dist_LL,dist_MM,dist_RR,velocity(m/s),in_intersection
```

The frame number together with the ego vehicle number is used as the unique identifier for the image-id. The `in_intersection` boolean can be used later, in the deep learning stage, to filter out images that violate the assumptions made by DeepDriving.
The frame number together with the ego vehicle number and the experiment name is used as the unique identifier for the image-id. The `in_intersection` boolean can be used later, in the deep learning stage, to filter out images that violate the assumptions made by DeepDriving.
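A minimal pandas sketch of that filtering step (the file path is an assumption based on the layout above):

```python
import pandas as pd

# Load the labels of a hypothetical experiment named "exp"
df = pd.read_csv("data/exp/exp_labels.csv")

# Drop frames recorded inside intersections, per DeepDriving's assumptions;
# the flag may be stored as booleans or as "True"/"False" strings
df = df[df["in_intersection"].astype(str) != "True"]
```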

Once you have enough data and are ready to train the neural networks, execute `merge.sh` to merge the labels from multiple experiments into a single dataset. Additionally, you can use the `--verbose` flag to print some useful information about the dataset and the `--remove-files` flag to have the original label files removed.

[comment]: # (**Details on how the `generate_data.py` script works:** I will add how the code works later, probably in another md file like contributions.md)


### Neural Network
## Neural Network

Jupyter notebooks used for quick exploration are included in `notebook/`. The corresponding Python code is included in `src/models/`.

Neural Network part is not included yet. It is a work in progress.
Following [DeepDriving's][dd-url] suggestions, the standard AlexNet was tried. However, due to time constraints, not enough data was collected to effectively evaluate the model. Note that you can check `notebook/train.ipynb` for some **preliminary** results.
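As a rough illustration of the model shape, here is a PyTorch sketch of an AlexNet-based affordance regressor (the class name and output count are assumptions; the actual code lives in `src/models/` and `notebook/train.ipynb`):

```python
import torch
import torch.nn as nn
from torchvision import models

class AffordanceNet(nn.Module):
    """Hypothetical sketch: AlexNet backbone regressing affordance indicators."""

    def __init__(self, n_outputs=14):
        super().__init__()
        self.backbone = models.alexnet(pretrained=False)
        # Swap the 1000-way ImageNet classifier head for an affordance regressor
        self.backbone.classifier[6] = nn.Linear(4096, n_outputs)

    def forward(self, x):
        # Sigmoid keeps outputs in (0, 1), matching the normalized labels
        return torch.sigmoid(self.backbone(x))
```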


### Reference
## Reference
+ DeepDriving: [Website][dd-url] | [Paper][dd-paper]
+ CARLA: [Website][carla-url] | [Paper][carla-paper]

@@ -105,5 +108,5 @@ Neural Network part is not included yet. It is a work in progress.
[town04-url]: http://carla.org/2019/01/31/release-0.9.3/
[town04-fig]: https://www.ics.uci.edu/~daohangt/img/town04.PNG "Beautiful Town04 with highway"

### Remark
## Remark
This source code is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
27 changes: 27 additions & 0 deletions doc/demo.md
@@ -0,0 +1,27 @@
## DEMO

### Automatically

Many configurations can be customized (e.g. a 300-second simulation named demo-a with 5 ego cars and 300 NPC cars):
```bash
python3 generate_data.py --duration 300 --name demo-a --ego-cars 5 --npc-cars 300
```

The cameras' angle, location (relative to the ego vehicle), orientation, and resolution are fully configurable:
```bash
python3 generate_data.py --duration 300 --name demo-b --resolution-x 200 --resolution-y 100 --cam-yaw 90 --cam-pitch 10 --cam-z 1.4 --fov 115
```

### Manually

Since CARLA's built-in autopilot is somewhat limited, you might want to control the ego vehicle manually.

First do
```bash
python3 manual_control.py --filter <EGO_CAR_TYPE>
```

Then
```bash
python3 generate_data.py --duration 300 --name demo-c --debug 2 --npc-cars 10 --ego-cars 1 --ego-type <EGO_CAR_TYPE>
```
File renamed without changes.
621 changes: 621 additions & 0 deletions notebook/train.ipynb


File renamed without changes.
File renamed without changes.
23 changes: 17 additions & 6 deletions generate_data.py → src/data/generate_data.py
@@ -3,6 +3,16 @@
import os
import sys
import shutil
import glob

try:
    sys.path.append(glob.glob('CARLA_0.9.5/PythonAPI/carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla
import random
import time
@@ -54,7 +64,7 @@
CAM_ROT = (args['cam_yaw'],args['cam_pitch'],args['cam_roll'])
FPS = 10
INTERVAL = 1/FPS
NAME = "{}-{}".format(args['name'],MAX_TIME)
NAME = "{}".format(args['name'])
CSV_NAME = "{}_labels.csv".format(NAME)
MAXD = args['max_dist']
DEBUG = args['debug']
@@ -91,6 +101,7 @@ def main():
    hero += 1
    vx = None
    for a in world.get_actors():
        print(a.type_id,EGO_TYPE)
        if EGO_TYPE in a.type_id:
            vx = a
            actor_list.append(vx)
@@ -157,7 +168,7 @@
    avs = avss[1]
    avs = [x if x != None else -1 for x in avs]
    # e.g. v1-frame#
    avs.insert(0,"v{}-{}".format(i,timestamp.frame_count))
    avs.insert(0,"{}-v{}-{}".format(NAME,i,timestamp.frame_count))

    # Filter out entries that are (not on highway|front cars too far away)
    lanes = avs[-1]
@@ -194,10 +205,10 @@
    settings.synchronous_mode = False
    world.apply_settings(settings)
    csvfile.close()
    if not os.path.exists('./output'):
        os.mkdir('./output')
    shutil.move("./{}".format(NAME),"./output")
    shutil.move("./{}".format(CSV_NAME),"./output")
    if not os.path.exists('../../data'):
        os.mkdir('../../data')
    shutil.move("./{}".format(NAME),"../../data")
    shutil.move("./{}".format(CSV_NAME),"../../data/{}".format(NAME))
print('done.')


2 changes: 1 addition & 1 deletion manual_control.py → src/data/manual_control.py
@@ -56,7 +56,7 @@


try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
    sys.path.append(glob.glob('CARLA_0.9.5/PythonAPI/carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
22 changes: 22 additions & 0 deletions src/data/merge.sh
@@ -0,0 +1,22 @@
#!/bin/bash


while [[ "$#" -gt 0 ]]; do case $1 in
-r|--remove-files) remove=1; shift;;
-v|--verbose) verbose=1;;
*) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

cat ../../data/*/*.csv > ../../data/all.csv

if [ "$verbose" = "1" ]
then
read lines words chars <<< $(wc ../../data/all.csv)
echo "Done!"
echo "Total number of samples: $lines"
fi

if [ "$remove" = "1" ]
then
rm ../../data/*/*.csv
fi
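
# Usage sketch (run from src/data/): merge every experiment's labels into
# data/all.csv, print the sample count, then delete the per-experiment CSVs:
#   ./merge.sh --verbose --remove-files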
94 changes: 94 additions & 0 deletions src/models/dataset.py
@@ -0,0 +1,94 @@
import sys
import os
import numpy as np
from pandas import read_csv
from skimage import io, transform
from torch.utils.data import Dataset

value_range = [
    (-0.5, 0.5), # angle
    (-7, -2.5),  # toMarking_L
    (-2, 3.5),   # toMarking_M
    ( 2.5, 7),   # toMarking_R
    ( 0, 75),    # dist_L
    ( 0, 75),    # dist_R
    (-9.5, -4),  # toMarking_LL
    (-5.5, -0.5),# toMarking_ML
    ( 0.5, 5.5), # toMarking_MR
    ( 4, 9.5),   # toMarking_RR
    ( 0, 75),    # dist_LL
    ( 0, 75),    # dist_MM
    ( 0, 75),    # dist_RR
    ( 0, 1)      # fast
]

min_nv = 0.1
max_nv = 0.9

def normalize(av):
    def f(v, r):
        v = float(v)
        min_v = float(r[0])
        max_v = float(r[1])
        v = (v - min_v) / (max_v - min_v)
        v = v * (max_nv - min_nv) + min_nv
        v = min(max(v, 0.0), 1.0)
        return v

    for (i, v) in enumerate(av):
        av[i] = f(v, value_range[i])

    return av

def denormalize(av):
    def f(v, r):
        v = float(v)
        min_v = float(r[0])
        max_v = float(r[1])
        v = (v - min_nv) / (max_nv - min_nv)
        v = v * (max_v - min_v) + min_v
        return v

    for (i, v) in enumerate(av):
        av[i] = f(v, value_range[i])

    return av
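
# Worked example (values chosen for illustration): angle = 0.0 sits at the
# midpoint of its range (-0.5, 0.5), so normalize() maps it to
# 0.5 * (max_nv - min_nv) + min_nv = 0.5 * 0.8 + 0.1 = 0.5, and
# denormalize() maps 0.5 back to 0.0.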

class CarlaDataset(Dataset):
    """CARLA dataset."""

    def __init__(self, csv_file, root_dir, valid, transform=None):
        self.metadata = read_csv(csv_file, header=None)
        self.root_dir = root_dir
        self.transform = transform
        self.valid = valid

    def __len__(self):
        return len(self.metadata)

    def __getitem__(self, idx):
        # image-id has the form {name}-v{vehicle}-{frame}; split from the right
        # so experiment names containing '-' (e.g. demo-a) stay intact
        img_id = self.metadata.iloc[idx, 0].rsplit('-', 2)
        img_name = os.path.join(self.root_dir, img_id[0], img_id[1], "{}.png".format(img_id[2]))
        image = io.imread(img_name)

        # Delete alpha channel
        if image.shape[-1] == 4:
            image = np.delete(image, 3, 2)

        # Scale to 280x210
        image = transform.resize(image, (210, 280, 3), mode='constant', anti_aliasing=True)

        # Make it CHW
        image = image.transpose(2, 0, 1).astype('float32')

        av = self.metadata.iloc[idx, 1:].values
        av = av.astype('float32')
        av = av[self.valid]
        av = normalize(av)
        sample = {'image': image, 'affordance_vector': av}

        if self.transform:
            sample = self.transform(sample)

        return sample
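
# Usage sketch (paths and indices are assumptions): with merged labels in
# data/all.csv and images under data/{name}/v{i}/{frame}.png:
#   dataset = CarlaDataset('data/all.csv', 'data', valid=list(range(14)))
#   sample = dataset[0]  # {'image': CHW float32 array, 'affordance_vector': ...}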

