The Self-Driving Car Research Studio
Posted on December 11, 2019

The self-driving car craze continues to gain traction as more players step into the race to build the first fully-functioning autonomous vehicle. A simple Google search of “self-driving cars” pulls up articles published by the likes of CNBC, Forbes, and Automotive News all within the last week.

Big names like Tesla, Waymo, Google, Cruise (GM) and Aurora (Amazon) tout advancements in technology and hours logged by their vehicles. Even so, none of these companies have met original projections for consumer-ready vehicles.

There are a number of factors behind the delay: Accidents have instilled a greater sense of caution in testing. Legislation has a long way to go before it is ready for autonomous vehicles. And perhaps most importantly, there are simply too many unique driving situations for on-road testing to cover them all.

Some companies have turned to simulations to test more real-world situations. But what if we took this experimentation to academia, where the next generation of innovators are given space to create and test?

Self-Driving Car Research Studio

Quanser is doing just that with its new Self-Driving Car Research Studio.

The vision behind the Self-Driving Car Research Studio is to provide researchers with a flexible platform and enough tools to test their theories. With the right combination of hardware and software, this product could be the catalyst that enables your students to design the very vehicles we will be driving in the near future.

Hardware

The QCar comes equipped with a number of high-tech components.

GPU Power: The NVIDIA Jetson TX2 connects to a custom PCB, making it well suited to real-time image processing and AI workloads. There is also a USB 3.0 hub for additional devices, including an Intel RealSense D435 depth camera.

Vision and Navigation: Four onboard, wide-angle CSI color cameras provide 360 degrees of vision with close to 4K resolution at 10 bpp per camera (or 120 fps at lower resolution). There will also be a 2D LIDAR on top of the QCar for 360-degree ranging, which can be used in conjunction with the CSI cameras or on its own.

Audio: Stereo microphones are an experimental component that can be used to design automated responses to road surface noise, emergency vehicle sirens, honking horns, and other audio cues.
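As a toy illustration of what an audio-cue detector might look like, the sketch below flags a possible siren when most of a microphone block's energy falls in a rough siren frequency band. The band limits, the threshold, and the random stand-in samples are placeholders, not part of Quanser's software.

# Illustrative only: a naive siren detector that checks whether a block of
# microphone samples carries unusual energy in the 500-1500 Hz band where
# many emergency sirens sweep. Band and threshold are placeholder values.
import numpy as np

def siren_energy_ratio(samples: np.ndarray, sample_rate: int = 44100) -> float:
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    band = (freqs >= 500) & (freqs <= 1500)
    return spectrum[band].sum() / (spectrum.sum() + 1e-9)

samples = np.random.randn(44100)          # stand-in for one second of audio
if siren_energy_ratio(samples) > 0.6:     # flag when the band dominates
    print("possible siren detected")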

Signals and Lights: Standard brake lights, turn signals, reverse indicators, and headlights let users design algorithms that detect and respond to the light patterns that signal another vehicle's movements. The headlights also let researchers study how night driving affects image data and processing for safe autonomous driving.
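As a toy illustration of that kind of light-pattern detection, the sketch below uses OpenCV to measure the fraction of bright red pixels in a camera frame as a crude brake-light cue. The color thresholds and the frame.jpg path are placeholders; a real pipeline would first localize the vehicle ahead.

# Illustrative only: a crude brake-light cue with OpenCV. Count bright red
# pixels in a frame and treat a large red fraction as a possible brake light.
import cv2
import numpy as np

def red_light_fraction(bgr_image: np.ndarray) -> float:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two hue ranges
    low = cv2.inRange(hsv, (0, 120, 150), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 120, 150), (180, 255, 255))
    mask = cv2.bitwise_or(low, high)
    return float(np.count_nonzero(mask)) / mask.size

frame = cv2.imread("frame.jpg")            # stand-in for a CSI camera frame
if frame is not None and red_light_fraction(frame) > 0.02:
    print("bright red region detected, possible brake lights ahead")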

Ports for Customization: To allow for customized additions, the QCar is equipped with keyboard, mouse, and HDMI ports for direct access to the Ubuntu OS. There are also ports for SPI, I2C, CAN bus, serial, Ethernet, and USB 3, as well as general-purpose digital I/O, 4 user encoder channels with quadrature decoding, and PWM outputs that support standard servo PWM, DShot, OneShot, and MultiShot.

Read Quanser’s full blog post on the hardware of the Self-Driving Car Research Studio

Software

Quanser’s proprietary software, QUARC, is the basis of all its products, and the QCar runs on QUARC as well, primarily in conjunction with MATLAB/Simulink. QUARC deploys code to remote targets (in this case the NVIDIA Jetson TX2), lets Simulink display data from the vehicle in real time, and allows parameters on the vehicle to be changed in real time.

QUARC also provides support for Java, C, C++, C#, and VB, and with the development of the Self-Driving Car Research Studio, Quanser is adding Python 3 to that list.

HIL (Hardware in the Loop): Using the HIL API, users can take algorithms written for one piece of hardware and reuse them on another. Functions such as HIL Open let users open a board, read analog inputs, write PWM outputs, set encoder counts, and so on, creating a single unified interface for every target, on every platform, in every language.
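As a rough sketch of what that unified interface could look like from Python once Python 3 support lands, the snippet below opens a board, reads two analog inputs, and writes a PWM command. The quanser.hardware module, the "qcar" board name, the method names, and the channel numbers are assumptions used for illustration, not confirmed API details.

# Sketch only: assumes a Quanser-style Python HIL binding. Module, class, and
# method names here are illustrative, not confirmed API.
import numpy as np
from quanser.hardware import HIL   # assumed module and class name

card = HIL("qcar", "0")            # open the target board ("HIL Open")
try:
    analog_channels = np.array([0, 1], dtype=np.uint32)
    voltages = np.zeros(2, dtype=np.float64)
    card.read_analog(analog_channels, len(analog_channels), voltages)

    pwm_channels = np.array([0], dtype=np.uint32)
    duty = np.array([0.1], dtype=np.float64)   # placeholder throttle command
    card.write_pwm(pwm_channels, len(pwm_channels), duty)
finally:
    card.close()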

Stream: The Stream API simplifies communication across languages, channels, and hardware. It lets you add custom sensors or equipment with an accompanying library of code, run multiple processes across different channels, or use a different language for each hardware process on the QCar.
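The Stream API itself is Quanser-specific, so the sketch below uses plain TCP sockets as a neutral stand-in to show the kind of cross-process exchange it streamlines: one process publishing a sensor reading while another consumes it. Host names and port numbers are placeholders.

# Stand-in illustration using plain TCP sockets (not the Stream API itself):
# one process publishes a sensor reading, another process consumes it.
import json
import socket

def publish_reading(host: str, port: int, reading: dict) -> None:
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(reading).encode() + b"\n")

def serve_once(port: int) -> dict:
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            data = conn.makefile().readline()
    return json.loads(data)

# e.g. a lidar process could call publish_reading("qcar.local", 18000,
# {"range_m": 2.4}) while a controller process blocks in serve_once(18000).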

Media: The Media API allows you to extract raw camera data from any of the built-in or custom cameras and work with it in your language of choice.
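As a stand-in for the Media API (whose exact calls are not shown here), the snippet below uses OpenCV to pull one raw frame from a camera device as a NumPy array, the same grab-then-process pattern the Media API exposes across languages. The device index is a placeholder.

# Stand-in illustration with OpenCV (not the Media API itself): grab one raw
# frame from a camera device and report its shape.
import cv2

capture = cv2.VideoCapture(0)              # device index is a placeholder
ok, frame = capture.read()
capture.release()

if ok:
    # frame is a NumPy array (height x width x 3, BGR) ready for processing
    print("captured frame with shape", frame.shape)
else:
    print("no camera available on this device index")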

HIL Simulation: HIL Simulation allows for greater depth of research; for example, virtual objects can be injected into the physical environment a QCar is driving in.
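One way to picture this: a virtual obstacle can be blended into a real LIDAR scan before the controller sees it, so the physical QCar reacts to an object that exists only in simulation. The sketch below shows that blending step with placeholder geometry; it is not Quanser's implementation.

# Illustrative only: inject a virtual obstacle into a measured LIDAR scan so
# the controller reacts to an object that exists only in simulation.
import numpy as np

def inject_virtual_obstacle(ranges_m: np.ndarray, angles_rad: np.ndarray,
                            bearing_rad: float, distance_m: float,
                            half_width_rad: float = 0.1) -> np.ndarray:
    fused = ranges_m.copy()
    in_sector = np.abs(angles_rad - bearing_rad) <= half_width_rad
    # the virtual obstacle wins wherever it is closer than the real return
    fused[in_sector] = np.minimum(fused[in_sector], distance_m)
    return fused

angles = np.linspace(-np.pi, np.pi, 360)   # stand-in for a 2D LIDAR sweep
real_scan = np.full(360, 5.0)              # placeholder: clear 5 m all around
fused_scan = inject_virtual_obstacle(real_scan, angles, bearing_rad=0.0, distance_m=1.5)
print("closest return straight ahead:", fused_scan[angles.size // 2], "m")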

Read Quanser’s full blog post on the software of the Self-Driving Car Research Studio

Get a Quote

We know you want this self-driving car research studio in your lab. Let’s chat about it! Fill out the form below and our team will be in touch shortly.
