Applanix joins with NOAA on hurricane assessment imagery

Hurricane Ida assessment imagery and data are now available. Ida was a Category 4 Atlantic hurricane from Aug. 26 to Sept. 4. (Screenshot: NOAA)

Applanix, a Trimble Company, and the National Oceanic and Atmospheric Administration (NOAA) have collaborated to provide critical information to first responders in the wake of Hurricanes Henri and Ida.

Applanix’s high-accuracy direct georeferencing (DG) technology enabled NOAA to quickly collect aerial mapping imagery to

  1. provide valuable disaster remediation information to first responders
  2. demonstrate the value of mapping technology in preparing for and responding to emergency situations such as hurricanes, tornadoes and other disasters.

Within hours of Hurricanes Henri and Ida making landfall, NOAA’s National Geodetic Survey collected post-storm imagery using the latest generation Digital Sensor System (DSS). The sixth-generation DSS, designed and manufactured for Applanix by Lead’Air, is the most powerful to date, thanks to several new features introduced within the solution:

  • simultaneous full color and near-infrared image capture using high-performance Phase One iXM 100 MP NIR and 150 MP RGB cameras
  • option to fly the cameras in wide coverage oblique or traditional overhead (straight line down) mode for mapping with uninterrupted measurement
  • embedded Trimble AP60 GNSS + inertial OEM DG solution for mapping without the need for ground control or aerial triangulation
  • Applanix POSPac post-processing software featuring the Trimble post-processed CenterPoint RTX correction service (PP-RTX) for centimeter-level mapping without GNSS reference stations
  • in-air development of raw imagery to JPEG-ready files for creating map products immediately upon landing
  • Lead’Air’s X-Track flight management, which enables the system to be flown outside of planned flight lines to follow roads, rivers and coastlines.

Applanix’s DG technology suite provides direct GNSS inertial georeferencing, meaning that all pixels in the aerial images taken by NOAA are mapped at their exact location on the ground.
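
To illustrate the principle of direct georeferencing (this is a minimal sketch, not Applanix's implementation; the focal length, pixel offsets and flat-ground assumption are hypothetical placeholders), the snippet below projects a single image pixel to a ground coordinate using the camera position and attitude that a GNSS+inertial solution would supply.

```python
import numpy as np

def pixel_to_ground(cam_pos_enu, R_cam_to_enu, pixel_xy, focal_px, ground_height=0.0):
    """Intersect a pixel's line of sight with a flat ground plane.

    cam_pos_enu  : camera position [E, N, U] from the GNSS+inertial solution (m)
    R_cam_to_enu : 3x3 rotation from camera frame to local ENU frame (from INS attitude)
    pixel_xy     : pixel offset (x, y) from the principal point (pixels)
    focal_px     : focal length in pixels
    ground_height: assumed terrain height (m); a real workflow would use a DEM
    """
    # Line-of-sight vector in the camera frame (camera looks along -Z here)
    los_cam = np.array([pixel_xy[0], pixel_xy[1], -focal_px], dtype=float)
    los_enu = R_cam_to_enu @ los_cam                         # rotate into the mapping frame
    scale = (ground_height - cam_pos_enu[2]) / los_enu[2]    # intersect with U = ground_height
    return cam_pos_enu + scale * los_enu                     # ground point [E, N, U]

# Hypothetical example: camera 1,500 m above ground, pointed straight down
print(pixel_to_ground(np.array([0.0, 0.0, 1500.0]), np.eye(3), (250.0, -130.0), 12000.0))
```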

“We have worked with Applanix for nearly 20 years,” said Michael L. Aslaksen Jr., chief of the remote-sensing division, NOAA’s National Geodetic Survey. “The level of sophistication they bring to aerial imagery and mapping keeps our team at the forefront of the industry. Their customer support team is always open to new ideas, new innovations and doing whatever it takes to get the job done.”

First responders have access to this imagery and mapping within 24 hours via the cloud (as does anyone at storms.ngs.noaa.gov) and can map detailed response plans based on highly accurate data highlighting where the greatest need lies.

Access to this turnkey emergency response imagery is available to any federal agency, municipality, insurance company or other entity that depends on highly accurate information to plan for and recover from disasters.

Komatsu adds Smart Construction Drone and Field to line-up

Image: Komatsu

Heavy-equipment maker Komatsu has added two new “smart” products to its job-site solutions for construction contractors: Smart Construction Field and Smart Construction Drone.

Smart Construction Field

Komatsu has partnered with Moovila, an experienced provider of project management software, to develop Smart Construction Field, a mobile app that allows contractors to easily record job site activity and analyze operational efficiencies in near real time.

Reports generated by Smart Construction Field can track daily job-site conditions. Task progress can be broken down by labor, equipment and materials, including machine utilization and fuel distribution, receipts, timecards and subcontractor work. Regardless of equipment brand, Smart Construction Field can collect machine data from an entire fleet.

Smart Construction Drone

Smart Construction Drone survey technology collects accurate topography, including quantities for production tracking and billing, without personnel walking the job site to do a manual survey.

Contractors can gather and analyze data throughout each project phase with topographic surveys that incorporate hundreds of thousands of data points. With the capability to take still photos from up to 400 feet above ground level or under bridge decks, Smart Construction Drone can be used as pre-job verification or to keep stakeholders up to date.

Smart Construction Drone is designed to work with Smart Construction Dashboard. Built to combine data from multiple sources into one comprehensive picture, Smart Construction Dashboard combines 3D design data with aerial mapping and intelligent machine data, allowing contractors to confirm quantities and visualize job-site progress.

Smart Construction Dashboard is powered by the 3D visualization capabilities and geospatial accuracy of Cesium, a platform that visualizes, analyzes and shares 3D data.

Both Smart Construction Drone and Smart Construction Field are part of Komatsu’s Smart Construction solutions, an umbrella of smart applications created to help construction customers optimize their business remotely and in near real time.

“When the Smart Construction group came in, they integrated everything, and the transition felt seamless,” said Kevin Hawkinson, vice president of operations, A.W. Oakes & Son. “Now, we can take the data, transfer it to the machines, get data back from the machines to the office, and utilize all of that information across the board for bidding, customer reference and billing.”

Seen & Heard: Grizzlies, ports and autonomous trucks

“Seen & Heard” is a monthly feature of GPS World magazine, traveling the world to capture interesting and unusual news stories involving the GNSS/PNT industry.


Photo: g01xm/iStock/Getty Images Plus/Getty Images

Supply Chain Snafus

GNSS technology aids in tracking cargo across the globe, but it can’t defeat a shortage of goods, or of the trucks, railcars and ships needed to move them from ports to their destinations. Nevertheless, some solutions are seeking to help. One company, CallPass, is offering a 3D imaging system that it says eliminates noise from images, providing more accurate cargo measurement. 3D imaging enables shipping companies to better optimize the space inside trailers and containers. In addition to a high-precision GPS/GLONASS receiver, the Lana Vision uses an ultrasonic cargo sensor.


Photo: Gregory_DUBUS/E+/Getty Images

Scouting Radioactivity

Azur Drones and AVNIR Energy have developed a drone package for detecting radioactivity, designed for environmental monitoring of nuclear sites both in France and abroad. The “drone-in-a-box” product integrates a radioactivity sensor into Azur’s Skeyetech drone, the first drone system approved in Europe for beyond-visual-line-of-sight (BVLOS) flights without a remote pilot. AVNIR’s Ionized Zone Inspection Device scintillation detector measures radioisotopes at operational nuclear sites, both routinely and during alerts.


Photo: U.S. Geological Survey

Stay Safe, Mama Bear

Two yearling cubs of world-famous Grizzly 399 have been fitted with GPS-enabled tracking collars near Jackson Hole, Wyoming. Grizzly 399 and her four cubs — an extraordinarily large litter — have been frequenting developed areas for food, but with the collars tracking their movements, the U.S. Fish and Wildlife Service is better positioned to keep the unique family alive and out of trouble until they hibernate for the winter. At age 25, Grizzly 399 is the oldest known female with offspring in the Greater Yellowstone Ecosystem.


Photo: DeepRoute.ai

Nighttime Special Deliveries

DeepRoute.ai has begun operating self-driving, medium-duty trucks in Shenzhen, China. The trucks drive only at night, when there is far less competing traffic. The company expects official operation to launch in 2022 after driverless regulations loosen. The company is also testing Robotaxi service in Shenzhen, to train and validate its algorithm. The current fleet of five trucks could grow to dozens as the company partners with a logistics company to deliver goods.

Navigating Urban Roads

From its very first issues, 31 years ago, this magazine has covered the role of GPS, now GNSS, in guiding ships, trains and automobiles. What were then some of the most aspirational visions of future applications are now routine. For all forms of transportation, navigation is a safety-critical issue. This is particularly true in the case of cars on public roads, which is also where the technical challenges are the greatest. Ships mostly travel in deep waters, far away from other traffic and fixed obstructions, and nearly always enjoy an unobstructed line-of-sight to GNSS satellites. So do trains, which have the additional advantages of being kept, literally, on track and of operating in controlled environments, with hardly any concerns for unexpected intrusions on their path. Cars, trucks and buses, on the other hand, must contend with many other vehicles, including those with distracted, drowsy, drunk or drugged drivers, as well as cyclists, pedestrians, accidents, construction and a bedeviling myriad of sudden and often unpredictable circumstances. Additionally, their view of the sky is often limited by overpasses, tunnels and tall buildings, which challenge GNSS-based navigation with signal occultation and multipath, and their view of their surroundings is often blurred by weather conditions.

Currently, prototype autonomous vehicles carry cameras, lidar scanners, radars and ultrasonic sensors to provide positioning relative to mapped features, as well as for collision avoidance. However, some use cases require absolute positioning sensors, consisting of GNSS receivers coupled with inertial sensors. For example, autonomy levels 3 and 4 require dynamic error bounds of no more than a few meters most of the time under challenging highway conditions, and levels 4 and 5 will require this level of accuracy even in deep urban canyons.

This month’s cover story highlights progress in several transportation-related GNSS/PNT applications:

  • u-blox partners with Bird e-scooters
  • Domino’s delivers with Nuro
  • u-blox shares autonomous insights
  • Hexagon guides Indy Autonomous Challenge
  • Swift Navigation provides precise corrections
  • Skytraq Technology modules meet market needs
  • SBG Systems drives GNSS+inertial in Paris

Research Roundup: Meeting urban navigation challenges

Photo: eli_asenova/E+/Getty Images

Researchers presented hundreds of papers at the 2021 Institute of Navigation (ION) GNSS+ conference, which took place virtually and in person Sept. 20–24 in St. Louis, Missouri. The following five presentations focused on the challenges of urban navigation. The papers are available at www.ion.org/publications/browse.cfm.

Integrating Autonomous Air Vehicles

The emergence and development of advanced technologies and vehicle types have created a growing demand for the introduction of new forms of flight operations. These new and increasingly complex operational paradigms, such as Advanced and Urban Air Mobility (AAM/UAM), present regulatory authorities and the aviation community with several design and implementation challenges, particularly for highly autonomous vehicles.

An overarching and daunting task is finding methods to integrate these emerging operations without compromising safety or disrupting traditional airspace operations. Predictive risk mitigation is critical to meeting this challenge. The authors of this study focus on the development and testing of a prognostic service that estimates the quality of GNSS performance for an autonomous aircraft in complex environments. Flight operators could factor an estimate of GNSS quality into pre-flight and in-flight route planning, anticipating poor or unacceptable navigation system performance. The authors describe methodologies for producing the quality estimates and present results for selected simulation and flight-test cases.
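
One ingredient of any such quality prediction is satellite geometry. As a minimal sketch (not the authors' method), the snippet below computes position dilution of precision (PDOP) from predicted receiver-to-satellite unit vectors at a planned waypoint; the line-of-sight vectors are hypothetical, and satellites blocked by terrain or buildings along the route would be removed before the calculation.

```python
import numpy as np

def pdop(unit_vectors_enu):
    """PDOP from the unit line-of-sight vectors of the satellites predicted visible."""
    G = np.hstack([unit_vectors_enu, np.ones((len(unit_vectors_enu), 1))])  # geometry matrix
    Q = np.linalg.inv(G.T @ G)                   # covariance factor for unit-variance ranges
    return float(np.sqrt(np.trace(Q[:3, :3])))   # position components only

# Hypothetical satellites predicted visible at one waypoint
los = np.array([[0.0, 0.7, 0.7], [0.6, -0.4, 0.7], [-0.5, 0.5, 0.7], [0.3, -0.8, 0.5], [-0.7, -0.2, 0.7]])
los = los / np.linalg.norm(los, axis=1, keepdims=True)
print(f"predicted PDOP: {pdop(los):.1f}")
```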

Citation. Dill, Evan, Gutierrez, Julian, Young, Steven, Moore, Andrew, Scholz, Arthur, Bates, Emily, Schmitt, Ken, Doughty, Jonathan, “A Predictive GNSS Performance Monitor for Autonomous Air Vehicles in Urban Environments,” Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 125–137. https://doi.org/10.33012/2021.18138

Processing Scheme for Integrity Monitoring

Integrity monitoring is of great importance for GNSS applications. Unlike classical approaches based on probabilistic assumptions, the interval-based integrity approach depends on deterministic interval bounds as inputs. In contrast to quadratic variance propagation, the interval approach intrinsically uses linear uncertainty propagation, which is adequate for describing remaining systematic uncertainty.

To properly characterize all ranging error sources and determine improved observation interval bounds, the authors propose a processing scheme. The team validated that sensitivity analysis is a feasible way to determine uncertainty intervals for residual ionospheric and tropospheric errors, taking advantage of long-term statistics against reference data. By transforming the navigation problem into a convex optimization problem, they propagate the interval bounds from the range domain to the position domain. The authors implemented this strategy for multi-GNSS positioning in an experiment with static data from International GNSS Service (IGS) station Potsdam (POTS) and an experiment with kinematic data from a measurement campaign conducted in the urban area of Hannover, Germany, on Aug. 26, 2020.
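
The underlying intuition of range-to-position interval propagation can be sketched as follows; this is the simpler linear-propagation picture only, under a hypothetical least-squares estimator, not the paper's convex-optimization formulation.

```python
import numpy as np

def position_interval_bounds(G, range_halfwidths):
    """Propagate deterministic range-domain interval half-widths to the position domain.

    G               : (N, 4) geometry matrix (unit vectors plus a clock column)
    range_halfwidths: (N,) non-negative interval half-widths per satellite (m)

    For a linear estimator x_hat = pinv(G) @ rho, interval arithmetic gives
    |delta_x| <= |pinv(G)| @ range_halfwidths  (element-wise absolute values).
    """
    S = np.linalg.pinv(G)
    return np.abs(S) @ np.asarray(range_halfwidths)

# Hypothetical usage: 6 satellites, 1.5 m deterministic bound each
# bounds = position_interval_bounds(G, np.full(6, 1.5))   # [E, N, U, clock] half-widths
```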

Citation. Su, Jingyao, Schön, Steffen, “Improved Observation Interval Bounding for Multi-GNSS Integrity Monitoring in Urban Navigation,” Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 4141–4156. https://doi.org/10.33012/2021.18078

Removing Multipath Errors

In the urban environment, multipath and non-line-of-sight reception cause measurement errors and signal power loss. In urban canyons, multi-GNSS may provide the required number of satellites to obtain a position, but the signals may be affected by gross multipath errors, leading to a potentially unsafe position. In this paper, the authors use machine-learning techniques to model multipath error distributions. The features assessed are commonly used parameters such as elevation, signal-to-noise ratio and user speed.

The authors drove a sensor-equipped vehicle in Toulouse, France, collecting hours of experimental data to evaluate their model’s validity. The multipath error component was extracted from single-frequency GNSS receiver data using measurement differencing, clock-bias estimation and other techniques. The quantile of the multipath error was then modeled using a neural-network-based regression technique. Results of the proposed method are validated by an integrity assessment of the experimental data.
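
The quantile-modeling step can be sketched with an off-the-shelf quantile regressor. The features (elevation, signal strength, speed) follow the paper, but the model choice and data below are placeholders standing in for the authors' neural-network regressor.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder training data: features = [elevation_deg, cn0_dbhz, speed_mps],
# target = multipath error extracted from reference-aided processing (m)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(5, 90, 2000),     # elevation
                     rng.uniform(25, 50, 2000),    # C/N0
                     rng.uniform(0, 15, 2000)])    # speed
y = rng.normal(0, 3.0 / np.sqrt(X[:, 0]), 2000)    # synthetic: noisier at low elevation

# Model the 95th-percentile multipath error as a function of the features
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
print(q95.predict([[10.0, 30.0, 5.0]]))  # predicted overbound for a low, weak satellite
```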

Citation. No, Heekwon, Milner, Carl, “Machine Learning Based Overbound Modeling of Multipath Error for Safety Critical Urban Environment,” Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 180–194. https://doi.org/10.33012/2021.17874

GNSS/INS/Radar Sensor Fusion

Autonomous driving has attracted much interest in recent years, with significant research directed at solving the localization problem. To enable a fully autonomous platform, the navigation system must provide accurate solutions at high rates, be reliable, and be available in all types of environments. These requirements necessitate the use of multiple sensors while remaining cost-effective to enable widespread adoption.

To maintain accurate positioning in GNSS-challenged areas, perception sensors such as cameras, lidar or radar provide another source of absolute positioning information. This paper presents a multi-radar integrated version of AUTO, a real-time integrated navigation system that provides an accurate, reliable, high-rate and continuous (always available) navigation solution for autonomous platforms by integrating INS, GNSS-RTK, odometer and multiple radar sensors with high-definition maps. AUTO uses a tight nonlinear integration scheme to fuse information from multiple imaging radars with the INS/GNSS/odometer solution. The HD maps may come from a map provider or be crowdsourced from radar data.

The results in this paper compare multi-radar configurations of one to five imaging radars on a vehicle and demonstrate the accuracy achieved through the tightly integrated system. Key performance indices are presented for multi-radar configurations of AUTO on a vehicle and a robot. The results show how radar data, combined with the other sensors, contributes significantly to providing a high-rate, accurate, reliable and robust navigation solution in GNSS-degraded environments and adverse weather conditions.

Citation. Krupity, Dylan, Ali, Abdelrahman, Chan, Billy, Omr, Medhat, Salib, Abanob, Al-Hamad, Amr, Wang, Qingli, Georgy, Jacques, Goodall, Christopher, “AUTO: Multiple Imaging Radars Integration with INS/GNSS for Reliable and Accurate Positioning for Autonomous Vehicles and Robots,” Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 77–92. https://doi.org/10.33012/2021.17903

Feature Matching for Visual Nav

Typical feature matching on aerial imagery results in a majority of features being placed on trees and other seasonally variable features. The researchers tested the effectiveness of using semantic segmentation to create and force robust features onto desired areas of an image for the purpose of visual navigation. The process involves testing several segmentation algorithms to achieve state-of-the-art segmentation results and evaluating the effectiveness of feature matching on segmented imagery. The aim is to develop a near state-of-the-art semantic segmentation model for aerial imagery that can extract desired buildings from an image.

The research will then focus on feature-selection and feature-matching algorithms to compare the segmented aerial key features with a database of features from satellite imagery. So far, results show that feature-selection algorithms such as SIFT fail to overcome the differences among multisource aerial imagery. Improving the feature-selection algorithm should increase the quantity and quality of matches, ultimately yielding a camera pose estimate accurate enough to serve as a reliable alternative to GPS.
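
For context, the conventional SIFT matching baseline that the authors find insufficient across multisource imagery can be sketched with OpenCV; the file names below are placeholders.

```python
import cv2

# Placeholder images: an aerial frame and a satellite reference tile
aerial = cv2.imread("aerial_frame.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(aerial, None)
kp2, des2 = sift.detectAndCompute(reference, None)

# Lowe's ratio test to keep only distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches")  # typically few or poor across multisource imagery
```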

Citation. Hussey, Tyler, Leishman, Robert C., Woodburn, David, “Towards More Robust Vision-based Map Matching Through Machine Learning & Improved Feature Matching,” Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 1647–1653. https://doi.org/10.33012/2021.17911

Editorial Advisory Board Q&A: Should all GNSS follow NavIC?

Would it be beneficial for GNSS constellations to transmit signals at higher frequencies, such as in the S-band or the C-band, following the example of the Indian NavIC?

Jean-Marie Sleewaegen

“The S- and C-bands refer to frequency bands centered around 2492 MHz and 5020 MHz. The main advantage compared to L-band is the reduced effect of the ionosphere. However, this comes at the expense of higher propagation losses, increased phase jitter due to the shorter wavelength, and extra cost in the receiver and antenna when combined with L-band. The added value for existing GNSS systems already transmitting multiple signals in L-band is probably low. However, because they are less congested than L-band, those bands could be attractive to new space-based PNT services.”
— Jean-Marie Sleewaegen, Septentrio
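
As a back-of-the-envelope check of the ionospheric advantage Sleewaegen describes, the first-order ionospheric group delay scales as 40.3·TEC/f². The sketch below compares L1, S-band and C-band for an assumed slant total electron content of 50 TECU (a hypothetical moderate value).

```python
# First-order ionospheric group delay: d = 40.3 * TEC / f**2  (TEC in electrons/m^2)
TEC = 50e16  # 50 TECU, an assumed moderate slant value

for name, f_hz in [("L1", 1575.42e6), ("S-band", 2492e6), ("C-band", 5020e6)]:
    delay_m = 40.3 * TEC / f_hz**2
    print(f"{name:7s} {delay_m:5.1f} m")
# roughly 8 m at L1, 3 m at S-band and 0.8 m at C-band for the same TEC
```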


Alison Brown

“The main challenge with adding additional bands to GNSS constellations (other than getting frequency allocations) is that these will not be compatible with any existing GNSS chip sets or fielded antennas. The cost/benefit analysis is unlikely to be attractive for most GNSS chip vendors to develop products with this capability.”
— Alison Brown, NAVSYS Corporation


Ellen Hall

“There are benefits that the higher bands can offer in GNSS; however, the constellation and system must be designed to take advantage of them, which makes it very difficult for legacy systems that were designed around L-band only to tap into any of these benefits. Higher bands have lower ionospheric distortion, which enables better single-frequency accuracy and unlocks some interesting multi-frequency capability, while shorter wavelengths can allow for smaller antennas in user equipment. However, tropospheric/atmospheric distortion worsens, as do the spreading losses. Another consideration for the higher bands is spectrum interference, as the S-band area especially is extremely busy.”

— Ellen Hall, Spirent Federal Systems

SBG Systems drives GNSS+inertial in Paris

Autonomous vehicles require lane-level accuracy at all times and in all conditions. However, under many conditions, such as in urban canyons and tunnels, they may lose line-of-sight to enough GNSS satellites to achieve accurate and robust positioning or may have no signal at all. In these situations, they need data from other sensors, including an odometer and an inertial measurement unit (IMU). Creating reliable and safe autonomous navigation requires fusing GNSS and inertial technology in a multi-layered system.

SBG Systems and its partners, LeoDrive.ai and Intempora, have been doing just that, developing solutions for autonomous vehicles. SBG’s technology enables multi-sensor integration while addressing such autonomous navigation challenges as time synchronization, integrity, precise positioning and high-definition mapping.

“To ensure performance and build trust, we assemble our own IMUs from carefully selected industrial-grade parts, then we calibrate all our products individually,” said Laurent Le Thuant, business manager for SBG, in a recent webinar.

For safe operation, Le Thuant explained, the vehicle’s true positional error (PE) must be smaller than its protection level (PL), which in turn must be smaller than its alert limit (AL): PE < PL < AL. Otherwise, the solution is declared unavailable or reports misleading information.
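
A minimal sketch of how that condition is typically applied at runtime follows. In operation the true PE is unknown, so the system compares PL against AL; PE appears below only to illustrate the "misleading information" case seen in post-processing or testing.

```python
def integrity_state(pl_m, al_m, pe_m=None):
    """Classify the navigation solution for a given epoch.

    pl_m: protection level computed by the receiver (m)
    al_m: alert limit required by the operation (m)
    pe_m: true position error (m), only known in post-processing or testing
    """
    if pl_m > al_m:
        return "unavailable"              # system declares itself unusable
    if pe_m is not None and pe_m > pl_m:
        return "misleading information"   # error exceeded the bound without warning
    return "available"

print(integrity_state(pl_m=1.2, al_m=2.0, pe_m=0.4))  # available
print(integrity_state(pl_m=3.5, al_m=2.0))            # unavailable
```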

In automotive tests conducted in a business district near Paris, an SBG vehicle carried both a GNSS-only, automotive-grade multiband RTK receiver with a PL determination algorithm and an RTK GNSS receiver tightly coupled with an IMU and an odometry input. A comparison showed that the former was not suited for self-driving, while the latter significantly improved solution availability, accuracy and protection levels.

For self-driving in the most severe conditions, even this solution requires integration of supplementary sensors, such as cameras, lidars and radars for precise localization.

Skytraq Technology modules meet market needs

SkyTraq Technology, a fabless semiconductor company, develops GPS/GNSS chipsets and modules for meter-level accuracy vehicle navigation and tracking applications and for centimeter-level accuracy real-time kinematic (RTK) surveying and precision guidance applications.

The company’s chipset design is driven by market trends, said Oliver Huang, the company’s general manager. He explained the company has moved from single-frequency to dual-frequency devices.

SkyTraq’s chipset is designed to be common hardware for different target applications enabled by customized software. Traditionally, in the automotive market, vehicle navigation systems have relied on fusing GNSS receivers with dead-reckoning technology that uses microelectromechanical systems (MEMS) inertial measurement units (IMUs) and wheel-tick data. “We are now seeing more aftermarket vehicle tracking applications that take advantage of superior GNSS/DR performance using untethered dead-reckoning technology that uses sensor fusion of GNSS receiver and MEMS IMUs without the need for wheel-tick data,” Huang said. “GNSS receivers with decimeter or better accuracy, combined with dead-reckoning that uses low-drift IMUs, will be important in emerging autonomous vehicle applications.”
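
A loosely coupled GNSS/IMU fusion of the kind untethered dead reckoning relies on can be sketched very simply. This is a one-dimensional, two-state Kalman filter under hypothetical noise values, far simpler than a production GNSS/DR engine, but it shows how IMU propagation carries the solution between GNSS fixes without any wheel-tick input.

```python
import numpy as np

# 1-D GNSS + accelerometer fusion with a two-state [position, velocity] Kalman filter.
dt = 0.01                        # 100 Hz IMU
F = np.array([[1, dt], [0, 1]])  # constant-velocity transition
B = np.array([0.5 * dt**2, dt])  # acceleration input mapping
H = np.array([[1.0, 0.0]])       # GNSS measures position only
Q = np.diag([1e-4, 1e-3])        # process noise (accelerometer drift), assumed
R = np.array([[1.0]])            # GNSS position variance (m^2), assumed

x = np.zeros(2)                  # state estimate [position, velocity]
P = np.eye(2)                    # state covariance

def predict(accel):
    """Propagate the state with an IMU acceleration sample (dead reckoning)."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gnss_pos):
    """Correct accumulated drift whenever a GNSS fix is available."""
    global x, P
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (np.array([gnss_pos]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
```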

SkyTraq’s PX100 chipset for L1 meter-level accuracy applications and centimeter-level accuracy RTK applications uses L1 and L1/L2 signals from all four major GNSS constellations (GPS, GLONASS, Galileo and BeiDou).

Because of the trend toward high precision, which requires good carrier-phase raw measurement data, the biggest challenge in receiver design is the antenna, Huang explained. “Using an advanced semiconductor process, one can have low-power, small-size chipsets taking advantage of all the available GNSS signals, yet there is no small antenna capable of producing high-quality carrier-phase data for high-precision GNSS applications. So far, we have only seen bulky RTK antennas capable of generating high-precision results.”

Engaging data for scooters, cars and trains

Photo: Swift Navigation

Swift Navigation designs, manufactures and integrates GNSS receivers, as well as providing the Skylark wide-area GNSS corrections service. Its markets are automotive, transportation (last mile delivery, commercial trucking, rail), robotics/machine control (construction, mining, precision agriculture, landscaping), UAVs, micromobility and mobile devices and applications.

The company’s technology is compatible and interoperable with most major GNSS receivers for multiple markets. Its Starling positioning engine and Skylark corrections “are scalable to bring precision to legacy low-cost single-frequency receivers, all the way to the most sophisticated state-of-the-art triple-frequency multi-constellation systems,” said Joel Gibson, Swift’s executive vice president of Automotive. “By working with a multitude of receiver vendors for different applications, Swift leverages all constellations and all signals and maximizes the performance required for the application.”

The most accurate and reliable navigation system for every application would take advantage of all available GNSS signals, as well as all available corrections, dead reckoning and fused data from other sensors, such as cameras, lidar and radar. That is often not possible, however, due to cost, size, weight and power considerations. Swift’s approach to the required trade-offs depends on each use case.

Micromobility

In the area of micromobility (such as scooters), the main constraints for implementing a positioning solution are cost and power, coupled with the challenge of satellite signal outages and multipath in dense urban environments where these vehicles primarily operate, Gibson explained. “Cost-effective dual-frequency GNSS receivers are now showing up in micromobility architectures. Pairing them with our Starling positioning engine, which integrates inertial sensor data and wheel ticks, and augmenting them with Skylark corrections data, makes it possible to meet such compliance requirements as geofencing and limiting sidewalk use.”
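
A minimal illustration of how a geofencing check might consume such a precise fix follows. The no-ride zone and coordinates are made up for the example; a real deployment would also account for the reported accuracy or protection level, as hinted at here by inflating the fix by its horizontal accuracy.

```python
from shapely.geometry import Point, Polygon

# Hypothetical no-ride zone (e.g., a pedestrian plaza), in local ENU meters
no_ride_zone = Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])

def allowed_to_ride(east_m, north_m, horiz_accuracy_m):
    """Block riding if the fix, inflated by its accuracy, can touch the zone."""
    return not no_ride_zone.intersects(Point(east_m, north_m).buffer(horiz_accuracy_m))

print(allowed_to_ride(45.0, 10.0, 0.2))   # True: decimeter fix clearly outside the zone
print(allowed_to_ride(41.0, 10.0, 3.0))   # False: meter-level fix could be inside it
```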

Additionally, by achieving decimeter-level positioning, Swift’s micromobility solution makes it easier for both users and service staff to find scooters, which increases the scooter companies’ revenues.

Automotive

In the automotive industry, inertial sensors and wheel odometry are ubiquitous and pair naturally with GNSS to mitigate satellite signal outages, Gibson pointed out. Likewise, cameras and radar — cornerstones of ADAS — are very complementary to GNSS for safety applications, and lidar further complements GNSS in feature-rich environments such as dense urban areas.

Rail

Rail applications, such as Positive Train Control, have traditionally needed an accuracy of one or two meters, coupled with ruggedized hardware. “Swift’s precise positioning solution is deployed across continental rail systems today, and we are now engaging rail OEM and operator programs requiring sub-meter accuracy to ensure track-to-track accuracy and safety requirements in support of the transition to more autonomous rail operations,” said Gibson. “Leading rail companies are also looking for operational efficiencies by transitioning away from the high operational costs of maintaining reference base stations along track routes, instead moving to the more cost effective, reliable and seamless Skylark corrections coverage.”

Racing to an autonomous finish

Photo: Penske Entertainment / Walt Kuhn

Flipping the traditional scenario, in which car racers risk their lives on a racetrack, the Indy Autonomous Challenge (IAC) aimed to help save lives by improving collision-avoidance systems, to train future automotive engineers, and to make the public more comfortable with autonomous cars. Held Oct. 23 at the Indianapolis Motor Speedway and organized by Energy Systems Network, the race saw 21 universities from nine countries forming nine teams to compete for a $1 million grand prize. Following in the footsteps of the DARPA Grand Challenge, first held in 2004, and the subsequent DARPA Urban Challenge, the IAC was the world’s first high-speed autonomous race. The winning team was TUM Autonomous Motorsport from the Technical University of Munich, Germany.

All competing teams were given an identical vehicle to work with, a Dallara AV-21, modified to carry no one in the cockpit and equipped with two Hexagon | NovAtel PwrPak7-D multi-frequency, multi-constellation GNSS receivers, six cameras (two of which faced backward), three lidar scanners and four radars. Each team had to develop its own autonomy-enabling software stack, including the algorithms and neural networks. All the components except the computer had to be commercial off-the-shelf; no sensors could be custom-made.

Since 2001, Dallara has been the sole supplier of the Indy Lights series, a championship to prepare drivers for the NTT IndyCar Series. The Dallara AV-21 is a collaboration between Dallara’s Italian headquarters in Varano Melegari (Parma) and Dallara IndyCar Factory in Speedway, Indiana. The new car offers a modern, stylish appearance and provides the proper training required for drivers as the final step on the ladder to the NTT IndyCar Series.

The process by which the automated vehicle sensors and computers were fused into a singular package and integrated into the AV-21 was led by Clemson University’s International Center for Automotive Research’s Deep Orange 12 (DO12) project. The Deep Orange process mirrors that of automotive original equipment manufacturers (OEMs), and the DO12 project scope allowed for engineering and innovation across multiple subsystems. Student groups within the DO12 team explored solutions within and across multiple subsystems, including:

  • vehicle-to-vehicle communications
  • perception systems
  • onboard computing
  • drive-by-wire chassis control systems
  • vehicle dynamics
  • vehicle-to-infrastructure communications
  • powertrain design and integration
  • vehicle demonstration based on high precision GPS.

Hexagon’s Autonomy & Positioning division provided GNSS receivers and subject-matter experts to the Deep Orange 12 team. The team architected the sensor kit for the Dallara reference vehicle, which AutonomousStuff then replicated 10 times. The team did not compete in the IAC to avoid a conflict of interest and allow students to work closely with competitor teams from universities around the world. The PwrPak7-E1 contains a MEMS IMU to deliver Hexagon | NovAtel’s SPAN technology, a deeply coupled GNSS + inertial engine in a single-box solution. Each GNSS receiver has two antennas to provide heading. The Deep Orange 12 team used HxGN SmartNet RTK corrections, which brought the accuracy down to a few centimeters.
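
A minimal sketch of the dual-antenna heading idea: with both antenna positions resolved precisely (for example, with RTK), the baseline vector between them gives heading directly, independent of vehicle motion. The baseline values below are placeholders.

```python
import math

def heading_from_baseline(east_m, north_m):
    """Heading (degrees clockwise from north) of the primary-to-secondary
    antenna baseline, expressed in local east/north coordinates."""
    return math.degrees(math.atan2(east_m, north_m)) % 360.0

# Hypothetical 2 m baseline pointing roughly northeast
print(f"{heading_from_baseline(1.41, 1.41):.1f} deg")   # ~45.0
```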

Although the Clemson students did not develop a driverless racing algorithm of their own, they tested the vehicle with the help of a high-precision positioning system, developing a control algorithm that tracks the optimal line around the Indianapolis Motor Speedway so that all vehicle systems could be validated in a simulated racing environment. Data from these tests were shared with the competition teams to aid in their development of driverless algorithms.
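
For flavor, one common, textbook way to track a precomputed racing line from a high-precision position fix is pure pursuit; the sketch below is a generic illustration of that technique, not Clemson's actual controller, and the geometry values are placeholders.

```python
import math

def pure_pursuit_steering(pos_xy, heading_rad, target_xy, wheelbase_m):
    """Steering angle that drives the vehicle toward a look-ahead point
    on the reference line (standard pure-pursuit geometry, bicycle model)."""
    dx = target_xy[0] - pos_xy[0]
    dy = target_xy[1] - pos_xy[1]
    # Look-ahead point expressed in the vehicle frame
    x_v = math.cos(-heading_rad) * dx - math.sin(-heading_rad) * dy
    y_v = math.sin(-heading_rad) * dx + math.cos(-heading_rad) * dy
    ld2 = x_v**2 + y_v**2                       # squared look-ahead distance
    curvature = 2.0 * y_v / ld2                 # pure-pursuit curvature
    return math.atan(wheelbase_m * curvature)   # steering angle (rad)

# Hypothetical: car at the origin heading along +x, target 10 m ahead and 1 m left
print(pure_pursuit_steering((0.0, 0.0), 0.0, (10.0, 1.0), 3.0))
```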

Energy Systems Network will host a head-to-head, high-speed autonomous racecar passing competition at the Las Vegas Motor Speedway on Jan. 7, 2022, during the Consumer Electronics Show. Several of the teams that competed in the IAC, including the winner and finalists, will participate. The primary goal is to advance technology to speed commercialization of fully autonomous vehicles and deployments of advanced driver-assistance systems.