Just what the heck is DeepRacer? Well, according to Amazon Web Services (AWS, the web services and cloud computing arm of Amazon), it’s: “…an integrated learning system for users of all levels to learn and explore reinforcement learning and to experiment and build autonomous driving applications.” Huh? Right. In a nutshell, DeepRacer is a training course for developers. Specifically, DeepRacer teaches reinforcement learning, in a fun way, through the development of small-scale autonomous racing vehicles. Autonomous programming is the essence of self-driving vehicle development, and I’m on board with anything, especially something as relevant as programming, that helps me get a grasp of it in an interesting way rather than through the tedium of classroom-type study. So, let’s take a look at what you should know about the AWS DeepRacer EVO.
You do not have to already be a developer to get involved with DeepRacer. It was built specifically to teach the basics of machine learning (ML) to anyone interested in the field, whether new to development or already an established developer. DeepRacer is a fun way to encourage newcomers to get involved with the technology and the ML process. To get involved, all you need is an interest in this type of technology.
Operated by reinforcement learning (RL), the vehicle can be controlled manually on a physical track or deployed to drive autonomously on a virtual or physical track. The DeepRacer EVO is an advancement on the original DeepRacer in that it has left and right front-facing stereo cameras instead of a single camera. Stereo cameras help the vehicle learn depth information that can be used to sense and avoid objects. The DeepRacer EVO also comes with a LiDAR (Light Detection and Ranging) sensor that uses lasers to detect objects beside and behind the vehicle, i.e. no blind spots. The vehicle can be purchased on Amazon as the original DeepRacer, the more advanced DeepRacer EVO, or just the EVO sensor kit.
DeepRacer is a learning module composed of three elements:
- The DeepRacer Console, a machine learning service to develop, train, and evaluate reinforcement learning (RL) models in a simulated driving environment. It is a graphical user interface for interacting with the DeepRacer service. With the Console, you can create a training job to train your vehicle with RL, select a virtual track, submit your vehicle to virtual races, and more.
- The DeepRacer vehicle, where you get your hands dirty and dive right into learning autonomous driving.
- The AWS DeepRacer League, where you can compete with other DeepRacer enthusiasts to win prizes including the DeepRacer Championship Cup.
RL is a machine learning method that learns through experience. The focus of RL is autonomous decision-making by an agent (the thing being programmed to learn to operate on its own) in order to achieve goals in a particular environment. It is a trial-and-error method of learning; think of RL as one of the ways AI (Artificial Intelligence) learns. DeepRacer has a reward function built in to encourage the algorithm toward the behavior you want.
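To make the reward idea concrete, here is a minimal sketch of a DeepRacer-style reward function written in Python. It assumes the convention that the function receives a dictionary of simulation parameters (names like `track_width`, `distance_from_center`, and `all_wheels_on_track` follow AWS’s documented inputs) and returns a number the training algorithm tries to maximize; your own function could reward any behavior you like.

```python
def reward_function(params):
    """Reward the car for staying on the track, close to the center line.

    `params` is assumed to carry the simulator's state for the current
    step; the keys used here mirror AWS's documented input parameters.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]
    all_wheels_on_track = params["all_wheels_on_track"]

    # Near-zero reward if the car has left the track.
    if not all_wheels_on_track:
        return 1e-3

    # Reward shrinks in steps as the car drifts from the center line.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3
```

Through trial and error, the agent learns which actions tend to produce high rewards (here, hugging the center line) and which do not (driving off the track), which is exactly the feedback loop RL is built on.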
Once you get the grasp of RL and how to train your DeepRacer EVO, you can compete online or offline in the DeepRacer League. There are three race formats in the League: Head-to-Head, where you complete laps while avoiding AWS bots on the track; Object Avoidance, where the goal is the fastest time while avoiding a number of objects on the track; and Time Trial, where you complete a certain number of laps while staying on the track. There are a variety of prizes to be won by competing in the DeepRacer League, and anyone can register to enter.
The software is Ubuntu-based and has been open-sourced to allow developers to prototype new skills and uses for the DeepRacer EVO, including new robotic apps and even games. In AWS’ own words, “…anyone with the car and an idea can make new uses for their device a reality.”