Auterion and Phase One partner to integrate open drone ecosystem

Auterion, the company building an open and software-defined future for enterprise drone fleets, has partnered with Phase One, a developer and manufacturer of medium- and large-format aerial photography systems. The companies will make the Phase One P3 Payload lineup easily accessible through a plug-and-play integration with Auterion’s open drone ecosystem.

Enterprise inspections today are limited to periodic checks of selected assets in a small geographic area. Enterprises are forced to rely on internal drone operators, or on external operators trained in the organization’s workflow, to collect pertinent data effectively. Scaling from tens of assets to thousands requires a platform-agnostic, end-to-end, streamlined workflow that lets drone operators conduct inspections across a large region, lowering cost and increasing coverage.

Known for its image quality in high-precision and time-critical inspections, Phase One’s P3 Payload consists of a high-resolution 100-megapixel iXM camera designed specifically for UAVs, featuring a BSI sensor with an industry-leading 83 dB dynamic range, a rangefinder with smart focus, and a broad array of lenses including 35 mm, 80 mm and 150 mm. The partnership joins the P3 Payload’s inspection capabilities with the versatility of Auterion’s ecosystem of software-defined and connected drones, enabling customers to integrate real-time inspection data into their existing applications and workflows. The P3 Payload is Phase One’s first payload compatible with the Auterion ecosystem.

Drones leveraging the Phase One P3 Payload and the power and connectivity of Auterion’s Skynode and Suite can dramatically scale high-value, high-risk and time-critical inspections, including those of wind turbine fields (on land and offshore), oil refineries and offshore rigs, power masts and utility lines, bridges, dams, nuclear facilities, large infrastructure projects and other use cases. The combination also speeds geospatial mapping, bringing very high resolution, dynamic range, color fidelity and geometric accuracy to projects.
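
For a rough sense of what those lens options mean for inspection detail, the short sketch below estimates ground sample distance (GSD) for the three focal lengths mentioned above at a few standoff distances. The pixel pitch (about 3.76 µm) and the standoff distances are assumptions made for illustration, not specifications supplied by Phase One or Auterion.

    # Rough GSD estimate: gsd = pixel_pitch * range / focal_length
    # The pixel pitch is an assumed value for a 100MP medium-format sensor;
    # check the manufacturer's data sheet for the actual figure.
    PIXEL_PITCH_M = 3.76e-6  # meters (assumption)

    def gsd_cm(focal_length_mm: float, range_m: float) -> float:
        """Ground sample distance in centimeters at a given standoff range."""
        focal_length_m = focal_length_mm / 1000.0
        return PIXEL_PITCH_M * range_m / focal_length_m * 100.0

    for focal_mm in (35, 80, 150):        # lenses mentioned in the article
        for range_m in (30, 60, 120):     # example standoff distances (m)
            print(f"{focal_mm} mm lens at {range_m} m: GSD ~ {gsd_cm(focal_mm, range_m):.2f} cm")

Longer lenses trade field of view for finer detail at the same standoff, which is why a broad lens lineup matters for inspection work.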

Phase One A/S researches, develops, and manufactures medium format and large format digital cameras and imaging systems. Auterion provides enterprise and government with an ecosystem of software-defined drones, payloads, and third-party applications within a single, easy-to-use platform based on open-source standards.

Bathymetric surveys dip into Dead Sea

A drone equipped with an echo sounder surveys the Dead Sea. (Photo: SPH Engineering)

Israeli drone service provider ERELIS has conducted a number of pilot projects using a drone equipped with a single-beam echo sounder in the Mediterranean Sea and the Dead Sea. The data was validated by authorized local surveyors and reports from previous surveys of the same areas by the Michmoret Campus, Faculty of Marine Science.

The reference bathymetric data was collected from a crewed boat using multi-beam and single-beam echo sounders; the comparison demonstrated a good match between the new drone-based method and traditional methods.

The bathymetric system consisted of a standard commercial DJI drone fitted with a UgCS SkyHub onboard computer and radar-altimeter-based terrain-following system, plus an Echologger ECT400 single-beam echo sounder, provided by SPH Engineering of Latvia. The Eye4Software Hydromagic software package was used for data processing.

“I was surprised by the maneuverability of the system and how easy it is to conduct bathymetric surveys using a UAV equipped with an echo sounder,” said Roman Kirsanov, CEO of ERELIS. “Some of our survey areas were 400 to 500 meters away from take-off and landing positions, and that means that remote sensing comes to the world of hydrography and becomes available to any drone service company.”

Screenshot: SPH Engineering

“It was good to see the applicability of our system with a single-beam echo sounder validated in conditions outside of its initial focus on small-scale surveys of inland water bodies,” said Alexey Dobrovolskiy, CTO of SPH Engineering. “We can now recommend our system for small-scale surveys in coastal areas and in virtually any liquid. The density of water in the Dead Sea is 1.24 kg/l.”
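
Dobrovolskiy’s point about density matters because a single-beam echo sounder converts the echo’s two-way travel time to depth using an assumed speed of sound, which increases with salinity and density. The sketch below shows that basic conversion; the sound-speed values are illustrative assumptions only, not calibration figures from SPH Engineering or Echologger, and real surveys use a measured sound-velocity profile.

    # Single-beam echo sounding: depth = sound_speed * two_way_travel_time / 2
    # Sound speeds below are illustrative assumptions; using the wrong value
    # scales every measured depth by the same ratio.
    SOUND_SPEED_M_S = {
        "fresh water (~20 C)": 1480.0,
        "typical seawater": 1500.0,
        "dense hypersaline brine (rough placeholder)": 1800.0,
    }

    def depth_m(two_way_time_s: float, sound_speed_m_s: float) -> float:
        """Depth below the transducer from the echo's two-way travel time."""
        return sound_speed_m_s * two_way_time_s / 2.0

    echo_time_s = 0.020  # example return time in seconds
    for label, c in SOUND_SPEED_M_S.items():
        print(f"{label}: {depth_m(echo_time_s, c):.1f} m")

Because a sound-speed error is systematic, validation against independent surveys, such as the Michmoret Campus reference data mentioned above, is a straightforward check.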

In May, SPH Engineering launched a UAV integrated with an echo sounder, as a new product for bathymetric surveys of inland and coastal waters. This data-collection method has since been used in Denmark and the UAE, and is suitable for mapping, measuring and inspections, as well as environmental monitoring.

Seabed 2030 and Kongsberg Maritime enter partnership

The Nippon Foundation-GEBCO Seabed 2030 Project and Kongsberg Maritime have entered into a memorandum of understanding (MOU) in support of the global initiative to produce a complete map of the ocean floor. Under the terms of the MOU, the two parties will work together to advance understanding of ocean bathymetry. The effort complements the goals of the United Nations Decade of Ocean Science for Sustainable Development.

Seabed 2030 is a collaborative project between The Nippon Foundation and GEBCO to inspire the complete mapping of the world’s ocean by 2030 and to compile all bathymetric data into the freely available GEBCO Ocean Map. GEBCO is a joint project of the International Hydrographic Organization (IHO) and the Intergovernmental Oceanographic Commission (IOC) and is the only organization with a mandate to map the entire ocean floor.

Kongsberg Maritime provides solutions for safe, efficient and sustainable maritime operations. Its solutions serve offshore energy, seaborne transportation, hydrography, science, navies, coastal marine operations, aquaculture, training services and more. Kongsberg Maritime is the largest business area within Kongsberg Gruppen ASA. The Group has an integrated portfolio of solutions for businesses, partners and nations operating from the depths of the sea to outer space and to the digital frontier.

All data collected and shared with the Seabed 2030 Project is included in the GEBCO global grid, which is free and publicly available.

The Seabed 2030 Project, launched at the United Nations Ocean Conference in 2017 by Chairman Sasakawa of The Nippon Foundation, coordinates and oversees the sourcing and compilation of bathymetric data from different parts of the world’s ocean through its five centers into the freely available GEBCO Grid.

Kongsberg Maritime is a global marine technology company providing technology solutions for all marine industry sectors including merchant, offshore, cruise, subsea and naval.

UAV Navigation integrates Iris Automation’s Detect and Avoid System Casia with its VECTOR autopilots

Autopilot platform developer UAV Navigation is integrating Iris Automation’s Casia detect-and-avoid software into its advanced autopilot solution, VECTOR. UAVs equipped with VECTOR and Casia can now detect non-cooperative crewed aircraft in their airspace and take corrective action, autonomously or manually, to avoid potential collisions.

The integration comes as Iris Automation releases Casia Software v2.2. The release also includes improvements to performance, track fusion and flight data uploads. Casia Software is embedded in all Casia systems and uses computer vision and artificial intelligence to detect and classify intruder aircraft, much as a human pilot would.

VECTOR autopilots are specifically designed to execute flight completely autonomously, even if the remote-control datalink becomes unavailable or fails. They are used by a wide range of commercial clients flying rotary wing, target drone, fixed wing, and VTOL uncrewed aerial vehicles, worldwide.

UAV Navigation specializes in the design of guidance, navigation and control solutions for unmanned aerial vehicles (UAVs). Iris Automation is a safety avionics technology company pioneering detect-and-avoid (DAA) systems and aviation policy services that enable customers to build scalable operations for commercial drones.

National Academies Proposes Team to Study FCC Ligado Decision

The National Academies has announced its proposed team to examine the analysis and decision-making process by the Federal Communications Commission (FCC) in the matter of Ligado Networks. Individuals and organizations wishing to comment on the appropriateness of any of the members of that team or on any other aspect of this study have until September 19.

The April 2020 decision by the FCC has generated significant controversy and opposition among the public and in Congress. This resulted in, among other things, seven separate petitions for reconsideration being filed, all of which are still pending, and several provisions in the National Defense Authorization Act for 2021. One of those provisions requires the Department of Defense to sponsor a study of the technical assumptions and analyses that went into the FCC’s decision to allow Ligado Networks to operate.

According to the post on the National Academies website, the study will consider:

“(1) Which of the two prevailing proposed approaches to evaluating harmful interference concerns — one based on a signal-to-noise interference protection criterion and the other based on a device-by-device measurement of the GPS position error — most effectively mitigates risks of harmful interference with GPS services and DOD operations and activities.

(2) The potential for harmful interference from the proposed Ligado network to mobile satellite services including GPS and other commercial or DOD services including the potential to affect Department of Defense (DOD) operations, and activities.

(3) The feasibility, practicality, and effectiveness of the mitigation measures proposed in the FCC order with respect to DOD devices, operations, and activities.”

This announcement is the first significant public step for the effort, which is expected to take approximately 12 to 18 months. Sources say there will likely be public and classified versions of the report. The classified version is likely to take significantly longer to compile.

The proposed study team members are:

Chair: J. Michael McQuade

Members:

  • Jennifer Lacroix Alvarez
  • Kristine M. Larson
  • John L. Manferdelli
  • Preston F. Marshall
  • Y. Jade Morton
  • Richard Reaser, Jr.
  • Jeffrey H. Reed
  • Nambirajan Seshadri
  • Stephen J. Stafford

Staff Officer: Jon Eisenberg

Individuals and organizations wishing to comment on these proposed team members may do so through the project web page or at https://www8.nationalacademies.org/pa/feedback.aspx?type=committee&key=DEPS-CSTB-21-02

Dana A. Goward is president of the non-profit Resilient Navigation and Timing Foundation.

The surveyor and augmented reality – ready for the future

Photo: ipopba/iStock / Getty Images Plus/Getty Images

The surveying profession has experienced a plethora of advancing technologies over the past two decades, and it does not look like there will be a slowdown any time soon. From robotic total stations to laser scanning to the use of multiple GNSS constellations, the profession is constantly adapting these emerging technologies into useful tools for daily applications. For most practicing surveyors, it is a challenge to keep up with not just the hardware of these advancements, but also the software, which is developed in parallel. Have you tried to open and draw a simple figure in any of the industry-standard CAD programs lately?

The complexity of these programs, while advancing the capability of many technical professions, forces even the casual user to maintain a regular habit of software education and training. While it may seem primitive to say that a practitioner is a “practicing” surveyor, on-the-job training never stops. Just when the profession thinks there are no more significant advancements, something comes out of left field that truly blindsides us. (See the adoption of UAVs by the surveying profession compared to the public sector…) What do I think will be one of the next “big things” to revolutionize surveying? The technology is already here, and we need to get serious about adoption before we miss another opportunity to highlight the expertise of the profession.

VIRTUAL REALITY & AUGMENTED REALITY (VR & AR)

First, we need to know that virtual reality (VR) and augmented reality (AR) are different, even though many people use these terms interchangeably. The differences are as follows:

Virtual Reality (VR)

  • VR is a virtual world generated by computers and programming.
  • VR is a closed environment that is fully immersive.
  • VR requires a device (specialized glasses and/or a headset).
  • Users in the VR experience are limited by the programming and their computer’s abilities.
  • The VR experience may be based upon real-world conditions but is a fictional setting.
  • Users of VR can travel and experience conditions in real and fictitious places.
  • VR can allow users to have experiences that are not physically possible in the real world.
  • VR is 75% virtual + 25% real (industry “rule of thumb”)

Augmented Reality (AR)

  • AR is typically based on actual physical places.
  • AR is an open environment that is partly immersive.
  • In AR, the user controls the environment.
  • AR combines virtual elements and experiences with real world conditions.
  • Experiences in AR can be accessed by computer, tablet, and smartphones.
  • AR is useful for product visualization and evaluation.
  • AR is 75% real + 25% virtual (industry “rule of thumb”)

It is important to know the differences between the two technologies in order to implement the correct one for the task at hand. However, both will play an important role in surveying for generations to come.

Photo: Georgijevic/iStock / Getty Images Plus/Getty Images

USES OF VIRTUAL REALITY TECHNOLOGY FOR SURVEYING

One of the surveyor’s biggest responsibilities is to produce an accurate existing-conditions model using topographic methods. Once the topographic survey is completed, site designers will utilize this information to create a unique project that works with the existing site conditions. Advances in CAD software and technology allow engineers and architects to design in 3D and blend the new site with the existing conditions, drainage, and utilities. These designs can be further refined into virtual reality models to give the project’s stakeholders a better indication of what the final product will be when construction is completed.

The key takeaway here is that the surveyor is responsible for delivering the existing-conditions model. A model that accurately represents the subject site in digital form enables the project design to be more efficient and realistic, better meeting the client’s expectations. Surveyors, however, will not use virtual reality as much as augmented reality, for many good reasons.

USES OF AUGMENTED REALITY TECHNOLOGY FOR SURVEYING

AR is still in its infancy. Because surveyors have an interest in both the existing and proposed conditions of sites, AR becomes an important tool for the future. Merging proposed information with existing site conditions can become the norm, but, as with many emerging technologies, the profession will need to learn how to embrace it.

To get a better idea of how the technology works and why surveyors need to consider using it, let us look at an application that showcases AR: Pokémon Go. Yes, the smartphone game app that took the world by storm in 2016 and captivated many “trainers,” sending them into the streets in search of Ultra Balls and characters. (There are still more than 100 million active players worldwide.) Players of all ages have continued to search for elusive items and characters in a high-tech scavenger hunt that is constantly changing, all based upon the real world around us. By merging a real-time camera view with game entities placed at geographic locations, players move about our world using one of the best examples of AR.

How does this apply to the surveying profession? Surveyors could utilize AR in everyday tasks, but that would require a fully developed 3D design model that could be merged with existing conditions in their visual device. There are a variety of devices for utilizing AR, including smartphones and tablets. Many of the new data collectors running Windows and Android operating systems can also be used to incorporate AR into field operations. Here are some examples of how AR can be utilized for surveying tasks (a short sketch of the underlying projection math follows the list):

  • While construction staking, AR can be used to assist with structure and improvement location. A quick visual check can help confirm staking calculations are consistent with engineering design.
  • Use AR to visually check installed improvements, including curbs, utility structures, and paving. Any deviation from the proposed design should be quite evident.
  • When establishing property corners, AR will help the field crew quickly determine whether the calculated location is accessible. This can be used for staking out pre-calculated boundary points and/or proposed lot corners in a new subdivision.
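
To make the idea concrete, here is a minimal sketch of the projection an AR staking overlay performs: a proposed design point, expressed in site coordinates, is mapped into the pixel frame of a field device whose position and orientation are known (for example, from RTK GNSS and an IMU). The pose, camera intrinsics and coordinates are invented for illustration; commercial AR toolkits handle this projection, plus lens distortion and tracking, internally.

    import numpy as np

    # Pinhole-camera sketch of an AR overlay: project a design point given in
    # site coordinates (easting, northing, elevation) into device pixels.
    def project_point(point_site, cam_pos, R_cam_from_site, fx, fy, cx, cy):
        """Return pixel (u, v) for a 3D site point, or None if it is behind the camera."""
        p_cam = R_cam_from_site @ (point_site - cam_pos)  # site frame -> camera frame
        if p_cam[2] <= 0:
            return None
        u = fx * p_cam[0] / p_cam[2] + cx
        v = fy * p_cam[1] / p_cam[2] + cy
        return u, v

    design_corner = np.array([1000.8, 2050.0, 101.5])  # proposed lot corner (E, N, elev), invented
    device_pos = np.array([1000.0, 2045.0, 102.8])     # RTK position of the tablet, invented
    R_cam_from_site = np.array([                       # camera held level, looking toward site north
        [1.0, 0.0,  0.0],                              # camera x-axis = site east
        [0.0, 0.0, -1.0],                              # camera y-axis = down
        [0.0, 1.0,  0.0],                              # camera z-axis (look direction) = north
    ])
    print(project_point(design_corner, device_pos, R_cam_from_site, fx=1500, fy=1500, cx=960, cy=540))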
Photo: AnnaFrajtova/iStock / Getty Images Plus/Getty Images

Here are a few ideas as to how surveyors could utilize AR in everyday tasks in the future:

  • As public utility data becomes more available in GIS shapefiles with geographic locations, it could be used with AR to help establish locations visually in the field (see the reprojection sketch after this list). Mainline utilities and service lines would become easier to verify physically using AR.
  • Another GIS shapefile entity, the parcel line layer, could be used to help the surveyor understand where the property owner believes the line(s) to be as opposed to the actual monumented location.
  • All reference monuments and benchmarks established by public agencies using geographic location information could enhance the “treasure hunt” of confirming local datum points.
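
A prerequisite for all three ideas is getting GIS features, usually stored in geographic coordinates, into the projected system the field crew works in. The snippet below is a minimal sketch of that reprojection using the open-source pyproj library; the EPSG codes and the parcel corner are examples only, and a real workflow would use the zone and datum realization appropriate to the project.

    from pyproj import Transformer

    # Reproject a GIS feature from geographic coordinates (longitude, latitude)
    # into a projected system used in the field. EPSG codes are examples only.
    to_field = Transformer.from_crs("EPSG:4326", "EPSG:26917", always_xy=True)  # WGS84 -> NAD83 / UTM 17N

    parcel_corner_lon_lat = (-81.6557, 30.3322)  # made-up parcel corner
    easting, northing = to_field.transform(*parcel_corner_lon_lat)
    print(f"E {easting:.2f} m, N {northing:.2f} m")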

SURVEYING USING AR TO PROTECT THE PUBLIC

Geospatial information has revolutionized our world, so using AR to help when trouble strikes can potentially be a lifesaver. Recently, an oceanfront condominium in Florida collapsed due to structural failure. While the age of the structure precluded it from having any digital geographic location data, any new similar development could be measured and recorded to assist with future emergency needs. Almost all new development is surveyed, engineered, and designed digitally, and must use local horizontal and vertical datums. By combining the proposed design information with post-construction record drawings for verification, a digital record can be created.

It doesn’t take a design flaw to create a public hazard. For instance, a gas leak could render any building, such as the Florida condo, susceptible to catastrophic damage. By having a digital model of the underground structure, emergency crews could use AR to help locate potential open spaces in the building. As is the case with installing fire suppression systems and emergency exits, the cost to create a digital model of a completed building will be well worth it to save lives.

Underground utility corridors within cities, campuses, or manufacturing facilities could also utilize geospatial locations to establish a digital map for future use with AR. It will take time and significant cost to map existing facilities, yet it should be required for new sites to provide this information for emergencies and for use when designing expansions within the site. Having this utility information to use with AR during the design phase could lead to identifying potential problems before construction starts.

Haiti after an earthquake. (Photo: 1001nights/E+/Getty Images)

Another reason to plan for future safety is how much uncertainty we face in today’s society. At press time, we are coming up on the 20th anniversary of 9/11. We also just watched Haiti suffer another devastating earthquake. The 2021 hurricane season has also been very active, so that danger looms large, too. Disasters happen all the time with little to no warning. Our world is much more advanced than it was at the turn of the century, so we can use these advancements to map our infrastructure. Let us hope we never need to use the digital information for another disaster akin to 9/11. Instead, let us use it to ensure that we can get to someone in a remote spot if necessary.

THE ROAD TO FUTURE MAPPING AND AUTOMATION

As previously discussed, establishing a digital twin of our world could help provide a better map for establishing parcel ownership, reducing construction conflicts, and offering better planning tools for future expansion. Will it be completed within my lifetime? No, and I doubt it will be done within the next couple of generations after me.

We can, however, get a significant start on capturing the necessary information to begin the process of digitization. Technology has exceeded my expectations just within the past decade, so I can only hope that more advancements will help with building this digital beast. More architects and engineers are utilizing BIM (building information modeling) for 3D design and collaboration. Most municipalities and counties have built some form of GIS that uses one of the standard geographic datums. Surveyors have fully embraced GNSS technology, so state plane and national geographic coordinate systems have become the norm. In addition, we are seeing a growing number of consultants use autonomous vehicles (aerial, hydro, and terrestrial) with photogrammetry, lidar, and SLAM remote sensing. Another bit of good news is that computing power is higher than ever and storage space is cheap for all this data. We should also note how 5G has expanded our reach and, with cloud storage, we can work from just about anywhere. We can do so much more than most of us ever dreamed of, so we need to leverage that into creating a digital entity that can be helpful.

Photo: RyanJLane/E+/Getty Images

HOW TO IMPLEMENT THE LATEST TECHNOLOGY

Augmented reality is one of many new technologies surveyors need to introduce into their toolbox. Many of you may be asking where to begin; my answer, depending on your age, may offend you.

Hire a Gen Zer. Really.

As a Gen Xer, I have come to realize my limitations with technology and my ability to fully implement it. The Z generation, while lacking the experience of us wily old guys, sees things much differently. Smartphones, tablets, computers, and even the latest data collectors are designed with them in mind. They grew up playing computer games based in virtual reality, developed excellent hand-eye coordination, and find efficient ways of getting things done. Our surveying world is almost completely digital (when is the last time a client only wanted paper copies of a plat?), so now is the time to make the leap and ditch the drafting table. We have as much to learn from them as they do from us. Together, we can get the surveying profession ready for the next generations. It has been a great profession for us, so let us hand it off to the Z generation. They will (eventually) be glad we did.

UAVs speed surveying and construction projects in United Kingdom

Screenshot: Propeller

For a major project, surveying with traditional GPS equipment would normally take many days. Learn how Trimble and Propeller helped speed progress.

Wills Bros, a family-run contractor based in the UK and Ireland, has begun work on the £29 million (USD $40 million) Maybole Bypass project in Scotland. The 6-km (~3.75-mi) project involves 900,000 cubic meters of earth removal and a further 15,000 cubic meters of rock that need to be excavated and removed. In addition, Wills Bros is responsible for the construction of 10 culverts to deal with water flow in the area.

For a project this size, surveying the entire site with traditional, ground-based GPS equipment would normally take six days, estimates Jonathan Wills, who was instrumental in the company’s recent investment in Trimble and Propeller equipment. But considering the tighter accuracy tolerances required for some of the structural elements involving the culverts, getting useful survey data from the ground would actually take weeks for this project.

As an alternative, Wills Bros is using Propeller PPK, a drone surveying workflow that combines DJI’s Phantom 4 RTK drone; AeroPoints’ “smart” ground-control points; offloaded data processing; and the Propeller Platform software that allows measuring of the site using 3D models generated from drone images. Wills Bros also is using Trimble Stratus for cloud-based drone survey processing, visualization and analytics with Propeller Platform.

Wills Bros was able to complete an initial earthwork takeoff of the Maybole project area in a fraction of the time.

“Savings on labor costs alone have been considerable given the fact that on so many occasions we can now obtain detailed project data within a second rather than sending a man on site to survey for information,” Wills said. “The drone comes in a backpack and is up in the air doing its thing within minutes. From the outset, the time savings are immense.”

Once the drone and ground-control data are uploaded, Propeller transforms them into a 3D terrain model that can be measured in the cloud-based Propeller Platform.
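
Behind a report like that, an earthwork takeoff reduces to differencing gridded elevation surfaces. The sketch below shows the basic cut-and-fill arithmetic on two tiny elevation grids; the numbers and cell size are invented, and it is only an illustration of the principle, not the Propeller Platform’s actual processing.

    import numpy as np

    # Toy cut/fill takeoff: difference a design surface from a drone-derived DEM.
    # Grids and cell size are invented; real platforms handle grid alignment,
    # data gaps and breaklines before reporting any volume.
    CELL_AREA_M2 = 0.25 * 0.25  # 25-cm grid spacing

    existing = np.array([[101.2, 101.4, 101.1],
                         [100.9, 101.0, 100.8],
                         [100.5, 100.6, 100.4]])  # surveyed elevations (m)
    design = np.full_like(existing, 100.8)        # proposed flat grade (m)

    diff = existing - design
    cut = np.clip(diff, 0, None).sum() * CELL_AREA_M2    # material to remove (m^3)
    fill = np.clip(-diff, 0, None).sum() * CELL_AREA_M2  # material to place (m^3)
    print(f"cut ~ {cut:.3f} m^3, fill ~ {fill:.3f} m^3")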

Two years since the Tesla GPS hack

Photo: Regulus Cyber

In June 2019, Regulus Cyber’s experts successfully spoofed the GPS-based navigation system of a Tesla Model 3 vehicle. This experiment provided an important warning for all companies using GNSS location and timing: these technologies, on which they depend, are highly vulnerable to spoofing attacks. In the two years since the experiment, companies and governments have continued to research the potential harm that can be caused by spoofing attacks and are learning more about how to defend themselves from them.

The Tesla experiment was groundbreaking because it was the first time that a level 2.5 autonomous vehicle was exposed to a sophisticated GPS spoofing attack and its behavior recorded.

We chose Tesla’s Model 3 because it had the most sophisticated advanced driver assistance system (ADAS) at the time, called Navigate on Autopilot (abbreviated NOA or Autopilot), which uses GPS to make several driving decisions. However, this experiment exposed several cybersecurity issues potentially affecting all vehicles relying on GPS as part of their sensor fusion for autonomous decision making.

NOA makes lane changes and takes interchange exits once a destination is determined, without requiring any confirmation by the driver. Its other features include autonomous deceleration and acceleration according to the speed limit, autonomous lane changing, and adaptive cruise control.

These features use a variety of sensors, including cameras, radar, speedometers and more. The researchers wanted to test the extent to which the Model 3 relied on its GNSS receiver to make these driving decisions and how it behaved when receiving contradicting information from its GNSS receiver and its other sensors.

The researchers used hardware and software purchased online to mimic the tools potential hackers would use. The experiment involved two software-defined radio (SDR) devices purchased online, one to spoof GPS and one to jam all other constellations, connected to an external antenna to simulate an external attack. The software used to simulate the GPS signal was downloaded from an online source, available for free.

The test included three scenarios the researchers assumed would involve usage of GNSS, each one using a different spoofing pattern:

Scenario 1. Exiting the highway at the wrong location

Scenario 2. Enforcing an incorrect speed limit

Scenario 3. Turning into oncoming traffic

 

A Tesla Model 3 was remotely hacked in a test of a GPS spoofing attack. Photo: Regulus Cyber

Scenario 1: Exiting the Highway at the Wrong Location


The car was driving normally at a constant speed of 95 km/h with NOA enabled. The destination set for this ride was a nearby town, and the car designated a certain interchange for an autonomous exit maneuver. The experiment began 2.5 km before the vehicle reached that interchange; however, the researchers’ fake GPS signal reported the coordinates of a location on the same highway only 150 m before the exit.

As soon as its GNSS receiver was spoofed, the car assumed it had reached the correct exit and began to maneuver to the right, activating the blinker, slowing down, turning the wheel, and crossing a dotted white line on its right side, exiting into an emergency pit-stop area and confusing it with the exit 2.5 km ahead.

To be clear, this would not have happened at any location along the highway, because sensor fusion with the radar and the camera enables the car to avoid physical obstacles and ensures that it does not cross a solid white line that makes a turn illegal.

The spoofing attack succeeded, in that it enabled the attacker to remotely manipulate the car’s sensor fusion and make it exit the highway at the wrong location.

Scenario 2: Enforcing an Incorrect Speed Limit

The car was being driven on a highway toward a distant city at a constant speed of 90 km/h, 10 km/h below the highway’s speed limit, with NOA enabled. The researchers generated a fake GPS signal with the coordinates of a nearby town road that has a speed limit of 33 km/h. Shortly thereafter, the vehicle assumed the speed limit had just changed to 33 km/h and instantly began decelerating. Each time the driver attempted to accelerate using the gas pedal, the car braked heavily as soon as he lifted his foot, quickly decelerating back to 33 km/h.

To be clear, this would not have happened if NOA had been turned off. The cruise mode can be disabled by either using the touch screen or by pressing the brakes, which would allow the driver to regain full manual control over the vehicle’s speed.

Again, the spoofing attack succeeded, in that it allowed the attacker to remotely manipulate the car’s speed and made it enforce a speed limit much lower than the actual one on the highway.

Scenario 3: Turning into Oncoming Traffic

The car was being driven manually on a two-lane road with one lane in each direction, the type of road on which NOA cannot be used. The researchers generated a fake GPS signal, with coordinates of a nearby three-lane highway, with all lanes in the same direction. Furthermore, the spoofed location was 150 m from a designated exit that the vehicle’s navigation system was programmed to take, requiring a left turn.

Shortly after the car’s GNSS receiver was spoofed, the vehicle assumed it was on a highway and engaged NOA. Next, it triggered the exit maneuver, which began with activating the left blinker, followed by turning the wheel to the left. The driver had to quickly grab the wheel and manually drive the car back to its lane to avoid a collision with oncoming traffic.

To be clear, this kind of scenario would not be possible without the driver having enabled NOA. Once a Tesla driver enables NOA, it automatically engages once the vehicle is on a highway with a set destination. This is why the researchers assumed that NOA would be turned on by default; as long as NOA is active, the vehicle is susceptible to the attacks described in the experiment.

Once again, the spoofing attack was successful in that it enabled the attacker to remotely steer the vehicle into the opposing lane, placing it on a direct collision course with oncoming traffic. Out of the three scenarios described, this one proved that GNSS spoofing can endanger lives.

The hardware used for the GPS spoofing test. Photo: Regulus Cyber

GPS Cybersecurity for Automotive Applications

The NOA system in the Tesla Model 3, being an ADAS, allows drivers to rely on the car and its sensors for basic driving functions. Therefore, it enables drivers to briefly take their hands off the wheel and reduces the number of actions they are required to take. Nevertheless, drivers are still required to be fully attentive to the road so that they can take control of the vehicle at any time.

However, since this spoofing attack had such a sudden and instant impact on the car’s driving behavior, a driver who is not fully attentive and aware would not be prepared to take control quickly. By the time the driver notices that something is wrong and reacts, it might be too late to prevent an accident. Drivers have already been found sleeping at the wheel, driving under the influence of alcohol, and doing other inappropriate things with NOA engaged.

Furthermore, this situation assumes a level 2.5 autonomous vehicle like the one tested. But what happens in level 3 vehicles, in which driver engagement is limited, or levels 4 and 5, in which driver response is non-existent? This research provides us with a glimpse into the crucial importance of sensor cybersecurity and particularly of GNSS cybersecurity.

The Tesla hack experiment and its results were eye-opening for the autonomous vehicle sector: the danger is real and rising as more and more vehicles depend on GNSS technology as part of their sensor suites for assisted or automated driving. Up to 97% of new vehicles sold since 2019 incorporate GNSS receivers, and most, if not all, are still vulnerable to the same spoofing attacks presented in this research.

In January 2021, the UN’s World Forum for Harmonization of Vehicle Regulations (WP.29) issued Regulation No. 155, which sets guidelines for cybersecurity in the automotive industry with the goal of addressing every cyber threat a vehicle might encounter. Annex 5 of the regulation defines cyber attacks and states that, to obtain approvals in the future, vehicle manufacturers will need to provide solid evidence that their vehicles are sufficiently protected against them.

Among the cyber threats mentioned in the Annex is spoofing of data received by the vehicle, including both Sybil spoofing attacks and spoofing of messages. The Annex also lists the appropriate protections that manufacturers should implement and states that they will be required to provide evidence of the effectiveness of the mitigation measures they choose. These upcoming regulatory requirements can make the difference between life and death in situations caused by GNSS spoofing and ensure that only reliable and resilient positioning is used within vehicles, both today and in the future.
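
One family of mitigations the regulation points toward is cross-checking the GNSS solution against independent vehicle sensors. The sketch below illustrates the idea with a simple plausibility check that compares the speed implied by consecutive GNSS fixes against wheel-odometry speed; it is a generic example of the technique with invented thresholds and data, not Regulus Cyber’s detection method or any manufacturer’s implementation.

    import math

    # Generic GNSS plausibility check: flag a fix whose implied speed disagrees
    # strongly with wheel odometry. Threshold and values are illustrative only.
    MAX_SPEED_MISMATCH_MS = 5.0  # tolerated GNSS-vs-odometry difference (m/s)

    def implied_speed(prev_xy, curr_xy, dt_s):
        """Speed implied by two consecutive fixes in a local metric frame."""
        return math.hypot(curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]) / dt_s

    def fix_is_plausible(prev_xy, curr_xy, dt_s, odometry_speed_ms):
        return abs(implied_speed(prev_xy, curr_xy, dt_s) - odometry_speed_ms) <= MAX_SPEED_MISMATCH_MS

    # A fix consistent with the wheels passes; a sudden multi-kilometer jump
    # (similar in spirit to the spoofed highway-exit scenario) is rejected.
    print(fix_is_plausible((0.0, 0.0), (26.0, 0.0), 1.0, 26.0))    # True
    print(fix_is_plausible((0.0, 0.0), (2350.0, 0.0), 1.0, 26.0))  # False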


Please note: Tesla released a statement saying that it is “taking steps to introduce safeguards in the future which we believe will make our products more secure against these kinds of attacks.” Regulus Cyber researchers have not performed any further experiments with the Tesla Model 3 since this research was published two years ago.

See the Tesla GPS spoofing experiment from the driver’s point of view:

61st CGSIC meeting scheduled for Sept. 20-21

The U.S. Department of Transportation (DOT) and the U.S. Coast Guard Navigation Center (NAVCEN) will hold the 61st meeting of the Civil GPS Service Interface Committee (CGSIC) on Sept. 20-21.

The meeting will be conducted at the St. Louis Union Station Hotel in St. Louis, Missouri, in conjunction with the Institute of Navigation’s 2021 ION GNSS+ conference.

The 61st CGSIC meeting will also be broadcast live online to provide a virtual option. This is a unique opportunity for anyone in the world with access to a computer to attend these public meetings of the U.S. Civil GPS program. CGSIC meetings are free and open to the public.

The three subcommittees of the CGSIC will meet on Sept. 20: Timing; International Information; and Surveying, Mapping, and Geosciences.

Summaries of the subcommittee meetings will be presented at the CGSIC plenary session on Sept. 21, which will also feature a keynote address by Juliana Blackwell, director of NOAA’s National Geodetic Survey (NGS).

The CGSIC agenda, still in development, can be found in the CGSIC section of GPS.gov.

Inertial Labs explains lidar, GPS-aided INS and data management

A new blog offered by Inertial Labs discusses the work required to turn collected lidar point-cloud data into actionable deliverables. The blog, “Providing Actionable LiDAR Point Cloud Deliverables and the Inertial Labs RESEPI” by Luke Wilson, is also available as a downloadable PDF.

A digital terrain model, a digital surface model, and a digital elevation model (from top). (Image: Inertial Labs)

The blog introduces lidar and creation of point clouds, then discusses the use of GPS-aided inertial navigation systems (INS). “A lidar point cloud is the product of sensor fusion across a GPS-aided INS and a lidar scanner. Each sensor plays a critical role in how a lidar payload functions and the applicability of its point cloud output,” explains Wilson.

Wilson describes the complications of converting between datum reference frames, both traditional datums and reference ellipsoids such as WGS84. He also discusses projected coordinate systems. He concludes with analysis of the data using point classification, the foundation for creating digital terrain, surface and elevation models (DTM, DSM and DEM, respectively).
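
As a rough sketch of the sensor fusion Wilson describes, the snippet below georeferences a single lidar return by rotating it from the scanner/body frame into a local level frame using the INS attitude and adding the GNSS-derived position. Boresight and lever-arm calibration, time synchronization and the datum and projection conversions the blog covers are all omitted, and every value is invented for illustration.

    import numpy as np

    # Minimal georeferencing of one lidar return:
    #   p_ground = R_body_to_local(roll, pitch, yaw) @ p_scanner + p_ins
    def rotation_from_rpy(roll, pitch, yaw):
        """Body-to-local rotation from roll, pitch, yaw in radians (ZYX convention)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    p_scanner = np.array([12.4, -3.1, -25.0])       # return in the scanner/body frame (m)
    p_ins = np.array([512034.2, 4276881.5, 310.7])  # INS position in a projected CRS (m)
    R = rotation_from_rpy(np.radians(1.5), np.radians(-2.0), np.radians(47.0))

    p_ground = R @ p_scanner + p_ins
    print(p_ground)  # georeferenced point in the same projected CRS as the INS position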

Finally, Wilson explains how the Inertial Labs RESEPI offers a quick and efficient way to generate models of an environment, with applications in fields such as construction and utility management.

Read the full blog.