Roadmap

Introduction

Many people have heard of AWS DeepRacer, but if you haven't, here is a short description from the AWS site:

AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other machine learning methods. Its super power is that it learns very complex behaviors without requiring any labeled training data, and can make short term decisions while optimizing for a longer term goal.

Developers of all skill levels can get hands-on with machine learning through a cloud-based 3D racing simulator, a fully autonomous 1/18th scale race car driven by reinforcement learning, and a global racing league.

But… the AWS Free Tier only gives you 10 free hours over 30 days… Maybe there is something that can be run locally?

deepracer-for-cloud provides a quick and easy way to get up and running with a DeepRacer training environment in AWS or Azure, using either the Azure N-Series Virtual Machines or AWS EC2 Accelerated Computing instances, or locally on your own desktop or server.

That is a good option: train your car locally for as many hours as you like and then just upload the model.

But what if you want more control over your car? Or you want to incorporate behavioral cloning and train your car with manual steering and velocity inputs, in the hope that it will perform better than with the random exploration of RL? Or you simply want to simulate a car and a race track and play with them?

Finished simulation

If you just want to run the simulation, you can clone my repository and run it with Docker.

# Download this repository
$ git clone https://github.com/CatUnderTheLeaf/deepRacerSim.git

# change dir to this repository
$ cd deepRacerSim

# Build docker image with
$ docker build -f deepRacerSim.Dockerfile -t deep-simulator .

# change directory to the workspace (you are already inside deepRacerSim)
$ cd deep_ws

# launch a Docker container
$ ./launch_docker.sh

# launch simulation
$ roslaunch simulation simulation.launch
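Once the simulation is running, you can sanity-check it from another terminal inside the same container. This is a minimal sketch using the standard ROS introspection tools; the exact node and topic names depend on the launch files:

```shell
# Confirm the simulation nodes are up (Gazebo usually appears as /gazebo)
$ rosnode list

# List the topics the simulation publishes
$ rostopic list

# Gazebo publishes simulated time on /clock while the simulation is running,
# so a steady rate here means the world is actually ticking
$ rostopic hz /clock
```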

To control the car, launch another terminal:

# change directory
$ cd deepRacerSim/deep_ws

# launch a Docker container
$ ./launch_docker.sh

# launch keyboard teleoperation
$ roslaunch teleop_ackermann key_teleop.launch

# or launch joy teleoperation
$ roslaunch teleop_ackermann joy_teleop.launch
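To verify that the teleop node is actually sending commands, you can echo its output topic. The topic name below is an assumption for illustration; check `rostopic list` in your container for the real one:

```shell
# Find the Ackermann command topic published by the teleop node
$ rostopic list | grep -i ackermann

# Print the steering angle and speed as you press keys or move the joystick
# (/ackermann_cmd is a hypothetical name - substitute the one you found above)
$ rostopic echo /ackermann_cmd
```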

If you don’t want to use Docker, read the instructions on GitHub.

But if you want to build it from scratch, continue reading.

Software setup

It turned out there are three possible software configurations:

  • DeepRacer: Ubuntu 16.04, ROS1 Kinetic (EOL) and Python 2 (EOL)
  • DeepRacer with update (see note): Ubuntu 20.04, ROS2 Foxy Fitzroy (EOL) and Python 3.8
  • deepracer-for-cloud: Ubuntu 20.04, ROS1 Noetic and Python 3.8

As can be seen, the last configuration is the right option for simulating and training a car locally: Noetic is the last release of ROS1, and Python 3.8 reaches its EOL in October 2024.

Note from AWS docs: “Update is required to run AWS DeepRacer open-source projects but is otherwise optional. AWS DeepRacer only supports Ubuntu 20.04 Focal Fossa and ROS2 Foxy Fitzroy.”

The reasons why they chose to update to ROS2 Foxy Fitzroy, which has already reached its EOL, are unclear. There are still ROS1 Noetic (EOL May 2025) and stable, supported ROS2 distros (Humble and Iron).

The ROS version should not affect training or using the RL model, as it takes only four velocity values and two steering values as inputs, and you can pass those using whichever ROS version you want.
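As an illustration of how simple that interface is, a single speed/steering pair can be sent to the car straight from the command line with `rostopic pub`. The topic name `/ackermann_cmd` is an assumption here, and the message type assumes the simulation uses the standard `ackermann_msgs` package:

```shell
# Publish a constant drive command at 10 Hz:
# speed in m/s, steering_angle in radians.
# Replace /ackermann_cmd with the topic your simulation actually subscribes to.
$ rostopic pub -r 10 /ackermann_cmd ackermann_msgs/AckermannDriveStamped \
    '{drive: {speed: 1.0, steering_angle: 0.2}}'
```

Because the model's interface boils down to values like these, the same commands can be produced by a ROS1 node, a ROS2 node, or any other bridge.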

Installation

# Install Git and Python3 if not installed
$ apt-get update && apt-get install -y git python3-pip

# Source the ROS Noetic environment (assumes ROS Noetic is already installed)
$ source /opt/ros/noetic/setup.bash
# or add it to ~/.bashrc so it is sourced in every new shell:
# echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
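If ROS Noetic is not installed yet, the setup on Ubuntu 20.04 looks roughly like this (these are the steps from the official ROS wiki; check it for the current instructions before running them):

```shell
# Add the ROS package repository for your Ubuntu release
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

# Add the ROS package signing key
$ sudo apt install curl
$ curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

# Install the full desktop variant (includes Gazebo and rqt tools)
$ sudo apt update
$ sudo apt install ros-noetic-desktop-full
```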

Next: Launch race track in Gazebo