
PNT by Other Means: Xona Space Systems and Spirent

An exclusive interview with Jaime Jaramillo, Director of Commercial Services, Xona Space Systems. For more exclusive interviews from this cover story, click here.



SpaceX launch. (Image: Xona Space Systems)

It has been said that “the only alternative to a GNSS is another GNSS.” Your website’s homepage claims that Xona will be “[t]he next generation of GNSS.” Will it provide all the positioning, navigation, and timing services that the four existing GNSS provide?

JJ: The answer at a high level is “Yes, it will provide all the services that legacy GNSS provides.” GPS has a military service called SAASM/M-Code. Since we’ve designed our constellation and signals from the ground up, we will be providing our signals in an encrypted format. We’ve designed into the system the commercial equivalent of M-Code, which will be an encrypted signal using strong, commercially available encryption algorithms.
For modern applications, we’re going to be able to provide all the services that GNSS provides plus a couple of additional benefits. Pulsar, which is the name of our service, will transmit secure signals. For autonomous applications, security is very important. If you’re riding in an autonomous car, you certainly don’t want somebody to be able to spoof the GNSS signal and veer your car off course. With commercial-grade encryption, we’ll be able to virtually eliminate that type of situation.

So, the short answer to my question is, “Yes. All of that, and then some.”

JJ: Yes, all of that. The traditional GNSS constellations provide some sort of secure signal for defense and military applications; Pulsar will offer the same thing for commercial applications.

How many satellites and orbital planes will the full constellation have?

JJ: The target is approximately 300 satellites. That will include a small number of spares. There will be about 20 orbital planes and a combination of polar and inclined orbits.

So, about 15 satellites per plane?

JJ: Roughly, yes.

When all the satellites are up, their locations and broadcast frequencies will be public, right? They will have to be disclosed to various regulatory bodies.

JJ: You hit it on the head. Because we’re in the process of going through all the regulatory approvals, we can’t talk a lot about our frequencies and a lot of the specifics. Once we get the approvals, then we will make an ICD publicly available.

Roughly, when do you expect to achieve initial operational capability (IOC)? And when do you expect to achieve full operational capability (FOC)?

Image: Xona Space Systems

JJ: Unfortunately, the answer is, “it depends.” As you can imagine, it is expensive to put up all 300 satellites. We’re a startup, a commercial company, not a government. So, we’re working diligently on funding. Our target now to launch our first satellites into operation is the end of 2024 or beginning of 2025. We’ll have a phased roll-out. In our first phase, we’re going to offer services that only require one satellite in view — for timing services and GNSS augmentation. Then, as we roll out to phase two, we’ll be able to start to offer positioning services in mid-latitudes. As we move to phase three, it will include PNT and enhancements globally. We designed the constellation with polar orbits to provide much better coverage in the polar regions, which will be an improvement over what GNSS provides today. That’s why our system has a combination of polar and inclined orbits.

With climate change and more traffic through the Arctic, that’s going to become more important.

JJ: Exactly. When we talk to potential customers today, that question comes up.

What will be the capabilities at various points between initial capability and full capability?

JJ: There will be three phases. The first one, when we can use one satellite in view; the second one, when there’ll be at least four satellites in view in the mid-latitudes and some in the polar regions some of the time; and the third one, when we have more than four satellites in view, basically anywhere on the globe.

When do you expect to complete your constellation?

JJ: Our target for full operational capability is the end of 2026 or beginning of 2027.

So, two or three years to fill out the constellation.

JJ: It really depends on how the funding goes. We have basically locked down our signal and our hardware technology. Now, it’s a matter of getting enough funding to launch. If we get more funding sooner, then those dates will come sooner. If it takes longer to achieve the funding, then those dates may slip. But those are the targets that we have today.

Who will launch your satellites?

JJ: That decision has not been made yet, but the demo satellite that we have in space was launched last year in May on a SpaceX Falcon 9 rocket.

What is your business model? Will you have different tiers of service? Will your rate structure enable mass adoption?

Image: Xona Space Systems

JJ: We are targeting both mass market applications and high-performance ones. LEO brings many benefits in comparison to MEO in just about every industry to which it can be applied. For the mass market applications, we are targeting a business model that includes what we call a lifetime fee: a customer or OEM pays a fee one time and the service works for the life of the device. If you’re using GNSS on your cell phone, you don’t want to have to pay to renew that every year. Nobody does that today and nobody’s expecting to do that in the future. Our lifetime fee model will apply usually for mass market applications. For higher performance applications that have more capabilities associated with them there’ll be different tiers, each with different services. We can provide all kinds of subscription plans.

What will be the differentiators between the different tiers?

JJ: The base service, initially, will be a timing service. Once we have enough satellites to provide four in view, we will expand that to a positioning service. Those will be the base services. We have bandwidth to transmit some data over the constellation and we could use it to provide enhancements. For example, if you make surveying receivers, you could sell your customers enhancement services from the sky, rather than from a network. We’ll also be offering a service where, if security is very important, we’ll be able to provide all the keys for the different services over the air rather than over the internet or a network connection. That’ll be a separate service, with a separate fee.

Lastly, we are also planning to provide an integrity service that will verify that the signal meets certain performance thresholds. Critical applications that need certain levels of performance will be able to rely on the signal. If it drops below those thresholds, we will flag that to the device so that it knows that, even though it is receiving a signal, it should not continue to use it due to signal degradation.

With legacy GNSS, satellites in MEO broadcast signals to receivers. There’s no need for two-way communication and, anyway, transmitting to the satellites would require too much power. With LEO satellites, however, you need a lot less power from the ground to talk to the satellites. Would two-way communication benefit certain applications?

JJ: The initial service does not consider two-way capabilities today. However, we are leaving room in the signal and hardware designs to potentially offer that in the future.

Your business model is the exact opposite of the gift from U.S. taxpayers to the world that is GPS.

JJ: Yes, but there is value we offer that cannot be obtained from GPS.

Who will build the receivers? Do you expect that “if you build it, they will come”?

JJ: We are in discussions with just about every tier one manufacturer out there. What’s publicly announced is that we have a strong relationship with Hexagon | NovAtel. They have been supportive of us for a long time now and are the most advanced in their development and support for our signals.

I assume that, at least for a transitional period of several years, we’re talking about adding Xona to the traditional GNSS on the receivers — just like, many years ago, we went from GPS-only to GPS and GLONASS, and then, more recently, to multifrequency receivers that use all the satellites in view. Would there be any reason, at some point, to have Xona-only receivers?

JJ: We have designed our signals to make it as easy as possible for receiver manufacturers to support them. We want to design the signals so that most receivers can support them with just a firmware upgrade. Many receiver manufacturers ask the same question that you just asked. For certain applications, maybe Xona Pulsar-only makes sense or maybe it’s just GPS and Xona or GPS and some other constellation and Xona. There are initiatives looking at all these scenarios but most of them today are GNSS plus Xona as a complement. However, there are those that are also looking at potentially Xona only.

Image: Xona Space Systems

It’s interesting what you said about firmware as opposed to needing new hardware.

JJ: Correct. Given that we’re a startup we want to facilitate that. For some of the advanced features — for example, encryption — the receiver needs much more horsepower. So, it depends on the receiver. Some very optimized ASIC types of receivers don’t have the horsepower to run an encryption engine.

Of course, that horsepower is increasing anyway…

JJ: Exactly. And there are other techniques, right? For example, some IoT receiver manufacturers are offloading a lot of the processing power to the cloud. So, the device is designed to have some sort of network connection. Then, if it needs to do heavy processing, it can do that in the cloud. That can be done in different ways. For future applications, some receiver manufacturers are looking to potentially add this capability to next generation receivers.

Of course, the cloud introduces some lag…

JJ: Right. It depends on the application. If it’s an IoT device or an asset tracker, maybe it’s not mission-critical. It just depends on the application.

What markets or applications are you targeting first?

JJ: Timing is a big area of focus for us for initial applications; the other is enhancements, which will depend on our partners. We are looking to partner with companies that will help us provide enhancements such as signal integrity. Those are the two services that we’re looking to provide first, because we can do that with one satellite in view.

Will the timing you provide be good enough for cell phone base stations? For television broadcasts? For financial transactions?

JJ: Based on our architecture, our system will provide better timing accuracy than what GNSS provides today, because we have several advantages. Our constellation architecture is patented. One of its key pieces is that our satellites are designed to use GPS and Galileo signals, as well as inputs from ground stations, for timing reference. Also, all our satellites will share their time amongst themselves. We will average all these timing inputs and build a clock ensemble on the satellites. That enables much higher accuracies than just having a few single inputs.

That raises a critical question, especially in the context of complementary PNT: will your satellites have their own atomic clocks or will they rely entirely on GNSS? If the latter, any problem with GNSS would also affect your system.

JJ: This was one of the key points that we kept in mind when we architected the constellation. It keeps its own clock ensemble. It uses the time between the satellites, as well as time inputs from GNSS and from the ground, as references, but the time is kept on the satellites themselves in this clock ensemble. So, that clock ensemble runs independently, even if you lose GNSS.

It will degrade…

JJ: Right, it won’t have the same accuracy as when it has all the inputs, but it will keep working because it’s functioning independently. If we have all the references, then the output is very, very high accuracy. If we lose some of those references, it’ll degrade, and continue to operate. That design — which makes it independent of GNSS — makes it a very resilient system. So, it is complemented by GNSS, but not dependent on it.

The devil’s in the details. What kind of frequency standard will be on the satellites? How fast will their time degrade? How long will it remain sufficiently accurate for certain applications?

JJ: A quartz clock, not an atomic clock. I know where you’re going because I come from the timing industry. Since we’re a commercial company, one of the goals of the constellation design was to keep the cost of the satellites themselves as low as possible, so that we can deploy them at a low cost. We will leverage the very high-quality atomic clocks in GNSS satellites and ground stations in which governments have already invested. Each GPS satellite has two rubidium clocks and one cesium clock. The type of oscillator that we use costs much less to keep the satellite cost down, but that is the advantage of the clock ensemble. Also, if you look at our architecture, all the satellites talk to each other. So, the time is being shared between many different oscillators and they’re taking inputs from GNSS and from ground stations. So, that clock ensemble becomes very strong, because it has many inputs from the adjacent satellites and from the different references.
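To make the clock-ensemble idea concrete, here is a minimal sketch (an illustration only, not Xona’s actual algorithm) of how several time references of different quality can be combined by inverse-variance weighting, so that a noisy input such as a local quartz oscillator counts for less than stable ones such as GNSS-derived time or crosslinked neighbors:

# Illustrative clock-ensemble sketch: combine several estimates of this clock's
# offset from "true" time, weighting each by the inverse of its variance.
def ensemble_offset(references):
    """references: list of (offset_seconds, variance) pairs."""
    weights = [1.0 / var for _, var in references]
    total = sum(weights)
    offset = sum(w * off for (off, _), w in zip(references, weights)) / total
    variance = 1.0 / total  # the ensemble is tighter than any single input
    return offset, variance

# Example inputs: local quartz (noisy), GPS, Galileo, two crosslinked neighbors
refs = [(3e-7, 1e-14), (2.0e-8, 1e-17), (2.2e-8, 1e-17), (2.1e-8, 4e-17), (1.9e-8, 4e-17)]
print(ensemble_offset(refs))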

If GPS goes down entirely, we’ll have bigger problems. Your system would continue to work and, even if degraded, will be a lot better than nothing. Your architecture, however, leaves room for people to say that we also need ground-based systems.

JJ: That’s a really good point. The idea of having another LEO-based constellation is to take advantage of what can be done in LEO for GNSS. It’s not intended to replace ground-based systems or alternative systems. If you want the most resilient time and position, you need to use a combination of everything. GNSS alone will not give you the best combination. We always like to say that we’re complementing GNSS.

Jan, what is the role of simulation in building a new GNSS with a very different constellation and very different orbits than existing ones?

JA: Before the Xona constellation or any other emerging constellation has deployed any satellites, simulation is the only way for any potential end-user or receiver OEM to assess its benefits. Before you can do live sky testing, a key part of enabling investment decisions — both for the end users as well as the receiver manufacturers, and everybody else — is to establish the benefits of an additional signal through simulation. Once it’s all up there and running, there are still benefits to simulation, but then there’s an alternative. Right now, there really isn’t an alternative to simulation.

With existing GNSS, you can record the live sky signals and compare them with the simulated ones. It’s a different challenge when it’s all in the lab or on paper.

Image: Xona Space Systems

JA: Yes, but it is not an entirely novel one, at least to us at Spirent. We went through it with other constellations and signals — for example, with the early days of Galileo. It’s often the case that ICDs or services are published before there is a live-sky signal with which to compare them. So, we do have mechanisms in terms of first generating it from first principles, putting out the RF, running tests with that RF, and then seeing that what we put out is what we expect based on our inputs and the ICD. Obviously, we always work off the ICD, which is essentially our master. Then, a lot of work needs to happen to turn what’s written in the ICD into an actual full RF signal, overlay motion, and all those things. So, we have a well-established qualification mechanism to make sure that whole chain works for signals when we don’t have a real-world constellation.

Another very important check is when you work with some of the leading receiver manufacturers who have done their own implementation and you bring the two things together and see if they marry up. Then there’s always a bit of interesting conversation happening when things don’t line up, but we have a lot of experience in resolving that. So, there’s the internal (mathematical) validation of things — which we do before we bring something to market — and then there is validation with partners, be they the constellation developer or a receiver manufacturer, or both.

JJ: Then, one step beyond the receiver manufacturers are what we call the OEMs, who want to validate that the receiver is doing what it’s supposed to do. The best way to do that is with a simulator. You can try to get a live-sky signal, but it can be difficult: you must get on a roof, and it may not offer an optimal environment for that. The best way to prove it in a controlled environment is with a simulator. Spirent works with two levels of customers: first, the receiver manufacturers, then all the application vendors or OEMs that use those receivers.

JA: What we’ve done with the SimXona product recently follows very closely along those lines. First, we did validation ourselves. Then, we worked in a close partnership with Xona for them to certify that against some of their own developments. So, we follow that same proven development approach. It’s just that, in this case, the signal comes out of a LEO.

What is the division of labor here between Spirent Communications and Spirent Federal? In particular, which device comes into play with Xona?

PC: Spirent Federal has provided support to Xona but the equipment is the COTS equipment provided from the UK by Spirent Communications.

JA: This Xona product does not currently implement any restricted technology only accessible through Spirent Federal. That is very much the case, especially for the aspects of secure GPS, for which we have the proxy company, Spirent Federal. However, the SimXona product is a development through Spirent Communications, albeit heavily aided by Spirent Federal, from a technical perspective and others, but there are no Spirent-Federal-specific restricted elements to SimXona or the current Xona offering.

PC: If we ever had to go into a U.S. government facility to demonstrate SimXona or to sell it to them, Spirent Federal would be involved.


PNT by Other Means: Oxford Technical Solutions

An exclusive interview with Paris Austin, Head of Product – New Technology, Oxford Technical Solutions. For more exclusive interviews from this cover story, click here.


What are your title and role?

I’m the head of product for core technology at OxTS. My role now is focused on R&D innovation. So, the research side, developing prototypes and taking new technology to market effectively. One of the key things we’re examining is GNSS-denied navigation: how we can improve our inertial navigation system via other aiding sources and what other aiding sensors can complement the IMU or inertial measurement unit to give you good navigation in all environments. Use GNSS when it’s good, don’t rely on it when it’s bad or completely absent.

We rely increasingly on GNSS but are also increasingly aware of its weaknesses and vulnerabilities. What do you see as the main challenges?

Excessive reliance on anything leads to people exploiting it, which is where the spoofing, the jamming, and the intentional denial come in. We all rely on technology nowadays to do all our menial tasks; then, if we lose the technology, we don’t have the skills to do the task ourselves and we’re in trouble. Reliance on a mass global scale on GNSS is a good and a bad thing. It is good for technology because costs come down. Access to GNSS data is increasingly easy and devices that use it are increasingly cost-effective. But if your commercial, industrial, or military operations rely too much on that one sensor, they can fall over. That’s where complementary PNT comes in: if you can put your eggs in other baskets, so that you have that resilience or redundancy, then you can continue your operation — be it survey, automotive or industrial — even if GNSS falls or is intermittently unavailable or unavailable for a long period of time.

However, you can fully replace a GNSS only with another GNSS.

You cannot replace GNSS with anything that has all the pros and none of the cons. You could use something like lidar or an IMU to navigate relative to where you started. However, you would not know where you are in the world without reference to a map, which would have been made with respect to GNSS global coordinates. The best thing you can do is use other things alongside GNSS to plug the gaps, or rely on it less often than multiple updates per second: start with a global reference, navigate relative to that for a period of time, then get another global update. In between, you can navigate either via dead reckoning or via local infrastructure that has been referenced with respect to the global frame. That way, you can transition between GNSS and localized aiding without any dropouts in your operation or functionality, and without relying on completely clean GNSS data all the time.

As you say, you can’t replace it. If you do claim to be breaking free from GNSS you’re really playing a different game and just describing it in a way that sounds as good as GNSS, but in reality you’re saying, “I can navigate in this building but I don’t know where this building is” until you start saying, “Well, I’ve referenced it with respect to a survey point that used a GNSS survey pole.” At that point, you’re not breaking free from GNSS, you’re just using it differently.

INS-GNSS integration has been around for a long time and the two technologies are natural partners because each one compensates for the other’s weaknesses. What have been some of the key recent developments in that integration?

The addition of new GNSS constellations has helped a lot because you need four satellites for a position or time lock and six satellites to get RTK. The 12 to 14 satellites from GPS and GLONASS that were previously visible at any one time have doubled with the addition of Galileo and BeiDou. So, your requirement for six satellites at any one time has become a much more reasonable proposition in terms of maintaining that position lock in the first place. Meanwhile, IMU sensors have been coming down in price. So, you can make a more cost-effective IMU than ever, or you can spend the same and get a much better sensor than you ever could before. Your period between the GNSS updates is also less noisy and you have less random walk and more stability.

With less drift you can also go for longer periods without re-initializing your IMU.

Yeah, exactly. Your dead reckoning period can go longer, while still taking advantage of tight coupling wherein you use the ambiguity area of the IMU to reduce the search area for the satellites. So, a better IMU means that you can use GNSS more readily when you go under a bridge or go through a tunnel. You can lock on to satellites quicker again because of the advancements that have been made with the IMU technology.
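As an illustration of the “reduce the search area” point, the sketch below (an assumption-laden example, not any vendor’s implementation) shows how an INS prediction of position and velocity bounds the code-phase and Doppler window a receiver has to search when reacquiring a satellite after a bridge or tunnel; a tighter IMU means a smaller window and a quicker lock:

import numpy as np

C = 299_792_458.0        # speed of light, m/s
F_L1 = 1_575_420_000.0   # GPS L1 carrier frequency, Hz

def reacquisition_window(ins_pos, ins_vel, sat_pos, sat_vel, pos_sigma_m, vel_sigma_mps):
    # Predict range and Doppler from the INS state, then size the search
    # windows from the INS position/velocity uncertainty (3-sigma).
    los = sat_pos - ins_pos
    rng = np.linalg.norm(los)                      # predicted range (clock terms ignored)
    u = los / rng                                  # unit line-of-sight vector
    range_rate = np.dot(sat_vel - ins_vel, u)      # m/s, positive = receding
    predicted_doppler = -range_rate * F_L1 / C     # Hz
    code_window_m = 3.0 * pos_sigma_m
    doppler_window_hz = 3.0 * vel_sigma_mps * F_L1 / C
    return rng, code_window_m, predicted_doppler, doppler_window_hz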

What have been some of the key advances in IMU technology in the last five or ten years?

As with GNSS receivers, the market has become more competitive; there are now more options than ever before. People being disruptive in the space has allowed us to use lower-cost sensors for the same performance, or mix and match gyroscopes and accelerometers to get the most complementary combination. Previously, you may have had an accelerometer that far outweighed the performance level of the gyroscope. So, you would have very good velocity drift over time. But if your heading drifts, you still end up in the wrong place when you haven’t had GNSS for a while.

So, that’s allowed us to pick a much more complementary combination of sensors and produce an IMU that we manufacture and calibrate ourselves, while using off-the-shelf gyroscopes and accelerometers. That allows us to make an IMU that is effectively not bottlenecked in any one major area. I think previously, with IMUs, you took what you could get and some of that technology was further ahead than others. So, it’s a good thing for us because the sensors that we’re getting do not cause single-source bottlenecks and we can achieve a higher level of performance than we ever could, without having to significantly increase our prices.

The way we’ve always seen it, either you add features or performance level and maintain the price, because the technology is maturing over time, or you disruptively lower your price with the same technology. On occasion, we have done that in the survey space. That’s where the performance level requirements are far tighter because people are moving from static survey using GNSS, where they’re used to millimeter-level surveys, into the mobile mapping space, where they still rely entirely on RTK GNSS.

However, they also rely on high accuracy heading, pitch, and roll to georeference points from a lidar scan at a distance instead of only exactly where they are. Where new IMU technology has helped us is to get the better heading, pitch, and roll performance for georeferencing as well as reducing the drift while we dead reckon in a GNSS outage.

What is the typical performance of IMU accelerometers and gyros these days?

It boils down to what it gives us in terms of position drift or heading, pitch, and roll drift over 60 seconds. Real-time heading, pitch, and roll is heavily affected by gyroscope performance.
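As a rough illustration of how those 60-second figures follow from the sensors (the bias values below are assumed for the example, not anyone’s specification), constant uncompensated biases grow into position error roughly quadratically with outage time:

# Back-of-envelope dead-reckoning drift for an outage of t seconds.
def position_drift(accel_bias_mps2, gyro_bias_radps, speed_mps, t_s):
    along_track = 0.5 * accel_bias_mps2 * t_s**2              # from accelerometer bias
    cross_track = 0.5 * speed_mps * gyro_bias_radps * t_s**2  # from heading drift at speed
    return along_track, cross_track

# Example: 1 mg accelerometer bias, 1 deg/hr gyro bias, 20 m/s vehicle, 60 s outage
print(position_drift(9.8e-3, 4.85e-6, 20.0, 60.0))  # roughly (17.6 m, 0.17 m)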

How much more do you have to pay to get that increase in performance?

There are definitely diminishing returns. When you look at some of the Applanix systems that have very good post-processing performance in terms of drift, you’re talking about something like $80,000 for a mobile mapping survey system that is maybe 50% better on roll and pitch in normal conditions, let alone an outage, vs. $30,000 to $40,000 for our top system, which offers 0.03° roll and pitch, for example. If you go down to 0.015°, you can pay double for the INS. Similarly, if you go the other way, and you go cheaper, you can probably get a 0.1° roll-and-pitch system for $1,000.

So, it’s a very steep curve. The entry-level systems are very disruptively low priced now but, given the requirements for certain applications — particularly survey — that 0.1° means that you can never achieve centimeter-level point cloud georeferencing. And that’s where people are still justifying spending $80,000 or more on the INS. They also spend similar amounts on their RIEGL lidar scanners and other profilers. So, it’s complementary to the quality of the other sensors. However, it really doesn’t make sense to spend a few thousand dollars on your INS and then $80,000 on your lidar, because you’re going to be bottlenecking the point cloud that you get out of it at the end anyway.

The same goes for autonomous vehicles, where people are now spending sub-$1,000 on their lidar or their camera, and they don’t want to spend $30,000 to $40,000 on their INS for a production level, autonomous vehicle. So, there needs to be that similar complementary pricing for sensors in that space, where you can offer an INS for hundreds of dollars, for example, that performs maybe only a few percent worse than INSs do today.

For an autonomous vehicle to stay in lane, it still needs these building blocks to be high accuracy, because they’ve only got tens of centimeters with which to play. However, they are doing it from the point of view that they don’t care where they are in the global frame at that moment in time to stay in their lane, only where the lane markings are. However, they will care where they are in the global frame when they come to navigate off of a map that someone else has made and they’re looking for features within the map, such as traffic signs, stoplights, and things that are out of sight or occluded by traffic, so that they know if they’re approaching them and the camera is just blocked at that time. That’s where the global georeferencing comes in and where GNSS effectively remains critical. Right?

It ranges price-wise. The top-end systems — Applanix and NovAtel — in the open-road navigation sense, are not orders of magnitude better, but you do end up paying double very quickly. If you look at the datasheet, positioning in open-sky conditions is identical between a £1,000 system and an £80,000 system. The differences all come in the drift specs, or the heading, pitch, and roll specs that are being achieved, because the value really comes from the IMU being used at that point.

Is most of the quality difference between these devices due to better machining, smarter electronics, or improved post-processing?

Any one of them on their own will not get you a good navigation solution. Fundamentally, you can have a good real-time GNSS-only system that will work at a centimeter level if you just use, say, a u-blox receiver, which is less than $100. Adding a low-cost IMU can fill some gaps, but not particularly intelligently, and you’ll get jumps and drop-outs or unrecoverable navigation. That’s when the algorithms come into play in terms of intelligent filtering of bad data and when to fall back on one solution versus the other and when to blend the two.

I was asking specifically within INS. When you’re talking about a $1,000 INS versus an $80,000 INS, how much of the improvement in performance is due to manufacturing, how much of it is due to smart electronics, and how much of it is due to algorithms or post processing?

Most of it is probably down to the raw sensor quality and then the calibration of the sensors. An IMU calibration is important, in terms of compensating for bias and scale factor errors, but also for the misalignment angles of the sensors. So, you need to make sure that your accelerometers and your gyros are all mounted exactly orthogonal to each other. A $1,000 sensor is very unlikely to be calibrated to the same level as an $80,000 one. That’s probably because you’d get 10% more out of calibrating the $1,000 one but you might get three times the performance out of calibrating the $80,000 one. So, you have a lot more to get out of a high-end system in terms of unlocking the potential whereas the low-end sensors are probably already giving 80% to 90% of their potential out of the box, with no calibration at all.
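A minimal sketch of what such a calibration does per sample, under a simple assumed error model (bias, scale factor and axis misalignment; the numbers are illustrative, not any manufacturer’s calibration data):

import numpy as np

def apply_calibration(raw, bias, scale, misalignment):
    # raw: 3-axis accel or gyro sample; bias: 3-vector; scale: 3-vector of
    # scale-factor errors; misalignment: 3x3 matrix re-orthogonalizing the axes.
    corrected = (raw - bias) / (1.0 + scale)
    return misalignment @ corrected

accel_raw = np.array([0.02, -0.01, 9.83])
bias = np.array([0.015, -0.008, 0.02])
scale = np.array([1e-3, -5e-4, 2e-3])
misalignment = np.eye(3)   # identity = perfectly orthogonal mounting
print(apply_calibration(accel_raw, bias, scale, misalignment))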

You affect such things as warmup time. A well-calibrated system will already be modeled accurately almost as soon as you power it on. If you don’t calibrate the system, you can still have a Kalman filter or something running in real time that can model the errors live. But it will mean that you won’t be at spec level performance as soon as you power up. When does it matter to you that you get the best data? Is it the instant you power up because you’re navigating an autonomous vehicle out of the parking garage? Or do you have 10 minutes before you need to take the data and use it for anything, and therefore you can take those 10 minutes to model the sensors live?

You might save money on the electronics budget but spend it to pay the driver to do the warm-up procedure. You can reallocate where you spend your money. If you’re rolling out a fleet of 100 vehicles, though, you probably don’t want to have to have 100 drivers that are trained to do a warm-up procedure. So, you would spend the money on the electronics to have an INS that does not require a warm-up. That is an option that you can go with now. If you spend the extra you can get away from the warm-up procedure requirements, because things have been modeled during calibration instead of in real time.

Your website focuses on three areas: automotive, autonomy, and surveying and mapping. Why those and what might be next in terms of markets or end user applications?

Automotive is probably the bread-and-butter part of OxTS. For a long time, automotive users were looking for a test and validation device that could give them their ground truth data to validate onboard vehicle sensors. We were very much the golden truth sensor, making sure that the sensors they were putting into the production vehicles were fit for purpose and safe. So, if they claimed a vehicle had autonomous emergency braking, they used our sensor to say how far away it was from the target — for example, a pedestrian — when it made the vehicle stop. Did it brake with the appropriate distance between them? They had a unit in each vehicle and got centimeter accuracy between them. That was very easy to do with GNSS, because on a proving ground, automotive users always have RTK.

Now the automotive world is moving into the urban environments and doing more open-road testing. So, the need for complementary PNT is more on their mind than ever. They are looking for a technology from us and our competitors that allows them to keep doing those tests that they did on the proving ground, but in real-world scenarios. They may collect 1,000 hours of raw data and then only have an autonomous emergency braking (AEB) event kick in three times in those 1,000 hours. They will then look at the OxTS data at that time and say something like, “Did the dashboard light come on and then did the brake kick in at the required time to avoid the collision?”

So, they rely on the INS data to be accurate all the time. It cannot be that in 1,000 hours, if you get those three events, two of them do not meet the accuracy requirements to be your ground truth sensor. Because then they would basically say, well, we don’t know whether the AEB kicks in at the right time on the open road. They would have to fall back to the proving ground testing to have any confidence. So, that’s where the automotive world is looking to use an INS to reference its onboard sensors.

In autonomy and survey, on the other hand, the INS is used actively to feed another sensor to either georeference or, in the case of autonomy, actively navigate the vehicle. So, that data being accurate is critical because an autonomous vehicle without accurate navigation cannot move effectively and would have to revert to manual operation. There’s a lot to do with localization and perception and avoidance of obstructions and things like that.

Timing synchronization is critical. People haven’t solved a way to synchronize multiple vehicles without using GNSS and PPS. Some people are using PTP to synchronize, but they’ll often have a GNSS receiver at the heart of it with the nanosecond-accurate time to be the actual synchronization time. And then everything else is a slave PTP device that operates off of that. So, if we did not give accurate timing, position and orientation, there is basically nothing that that vehicle could do to navigate other than navigating relative to where it was when it last had accurate INS time.

Often, these vehicles will enter a kind of limp mode or stop completely and require user operation to get to the next stage. It’s where you see the street-drone-type small robots now, which will stop if a pedestrian walks in front of them, obviously, because it is a safety requirement. But also, if one doesn’t know where it is (like a Roomba operating inside that cannot localize with respect to landmarks in its map), it will just effectively try to re-localize off of random movements until it can orient itself. In that scenario, an INS or an IMU can help you reduce the number of times that you’re losing absolute localization. Where the autonomy side of things comes in for us is that if we can offer that navigation quality more of the time, to a high accuracy and at an acceptable cost, then the sensor is a viable one to be put into the autonomous vehicle.

In autonomy, our active and potential customers are looking to do everything for a very, very low cost base, because they know that they’re trying to reach consumers with these products rather than businesses. So, their value box is entirely within the algorithms that they’re selling. They’re trying to offer scalable solutions that could roll out to thousands or millions of vehicles around the world, with their algorithms at the center of them. That localization and perception stuff is where you see companies such as Nvidia getting involved, because they want to be at the heart of it. Then they say that they can support any sensor while not being tied to any one of them. However, their algorithm is always going to be there at the heart of it. They will have GNSS receivers they support, they will have IMUs, they will have cameras, lidar, and radar and all the other kinds of possible aiding sensors. But they will say that their algorithm will still function if you have any number of those being fed in at any time.

So, autonomy relates to automotive in a sense, because you have autonomous passenger vehicles, but you also have autonomous heavy industry and autonomous survey, where people are flying drones autonomously or operating Spot autonomous dog robots, things like that, which can still be a survey application where you don’t want to have a human in the loop but you still need to navigate precisely. Someone may be sending a Spot dog robot into a deactivated nuclear reactor where they don’t want to send a human, but they still need to get to a very specific point within that power station and report back. They need to avoid obstructions, they need to georeference data they collect, and then take a reading from a specific object or sensor that’s inside and come back out safely. So, accurate navigation throughout the whole process is very important.

I understand the role of OxTS in testing and development. However, are any of your systems going to be in any production vehicles?

Many of the companies that are working on autonomous passenger vehicles are realizing that they are still a long, long way away.

What about your presence in the auto market more broadly?

They are used, but as separate components. You will have GNSS, IMU, radar, cameras, and lidar but the localization and perception will all be done by the OEM or by a tier one supplier to the OEM. So, they don’t want a third-party solution that is giving them a guarantee of their position because it’s a black box. They need to have traceability and complete insight as to what each sensor is saying so that they can build in redundancy and bring the vehicle safely to a stop if one of those systems is reporting poor data. For production vehicles, we are very much used as a validation tool in the development stage, but in terms of producing the production vehicle, they need to have that visibility of the inner workings of the system. Most INSs will not give you that insight as to how they arrived at their navigation output, because that is proprietary information. As a result, many automotive customers are looking to do that themselves. However, as I said, they’re realizing that it’s very difficult, and they’re quite a long way from navigating anywhere.

Therefore, currently no OxTS products are in production vehicles.

Not for passenger autonomy. However, they are used in some of the other autonomous spaces, such as heavy industry, that take place in private, fixed spaces such as mines, quarries, and ports where there is little interaction with the public. That is not only because the vehicle price point is much higher for some of these mining vehicles and heavy industry vehicles, but also because you don’t have to have your algorithm and perception capability deal with vehicles that are not autonomous or are driven by drivers that are not trained on health and safety in the area.

In these private spaces, you can tune your systems to work with each other without having to worry about the pedestrians and the random vehicles for which you’ve not accounted in your perception algorithms. That’s where the divide comes at the moment. If there are untrained people in the area, then there’s a lot more to accommodate and that makes the proposition much more difficult.

Are you at liberty to discuss any recent end user success story with your products?

The Ordnance Survey in the UK has been using our INS to create 3D maps on which they can then use semantic segmentation to classify features within the environment and pull out all the relevant features within a survey of a city, for example. They’re blending the raw data from the OxTS INS and lidar with map data that they have to create high-accuracy 3D maps that can be used to add that third dimension to the high-accuracy 2D maps that have been their value proposition for the past few decades. They can say, “here are all the trees in the environment,” or all the traffic signs or buildings or that kind of thing that you’re going to see in Google Earth imagery. They start to reach the realms of high-accuracy map data. They’re looking to sell that map data to commercial entities to monetize it and use it on a nationwide level and then on a global level.

If you have that map data, there’s a lot that you can do with it, in terms of intelligent decision making about routing a vehicle, or many other things, such as monitoring the heat output of buildings. In the EU, there are many directives around such things as carbon emissions. If you’re being more efficient with the heat output of your buildings, you can effectively say that you’re hitting your CO2 emissions reduction goals, by running whatever initiative to insulate buildings better and things like that. It always starts with, “Where was I when I saw this object or this building?” Therefore, I can georeference that building, I can color it by thermal imaging and things like that.

They can start to produce 3D imagery that is colored by thermal output, and they can do it with any number of other sensors as well, which can give them metadata that allows them to sell the data to someone else. It makes what was previously a very big job very efficient. So, they can drive hundreds of kilometers in a day where previously it was a static survey that was done over the course of weeks on foot. It’s also changing the efficiency metric that they can deliver to their end users.

Thank you very much!


PNT by Other Means: Safran Federal Systems

An exclusive interview with Garrett Payne, Navigation Engineer, Safran Federal Systems (formerly Orolia Defense & Security). For more exclusive interviews from this cover story, click here.


What led to the Versa PNT?

Garrett Payne

It is an all-in-one PNT solution that provides positioning, navigation, and very accurate timing. We can take in GNSS signals, as well as other satellite signals, and integrate them with an IMU for a fused solution. I work on the navigation filter and software inside it. So, I’ve been able to get deep into developing and fine-tuning the filter inside for an assured and robust navigation solution. I’ve been able to integrate some other new kinds of PNT technology into that as well. So, I’ve been working on projects integrating odometry for speed and measurements from a vision-based sensor for position fixing. Those are all complementary PNT sources that help the Versa. You always have a good fused solution, even if you’re in a GNSS-degraded/denied environment.

It sounds like a sort of extreme sensor fusion, integrating every possible PNT source.

Correct. GNSS has global coverage, of course, while some positioning sources, such as UWB, are very local.

Can a Versa on a mobile platform transition seamlessly from one to the other?

It’s all very configurable. You can plug-and-play the sensors that you have. Then, you can check the integrity of each measurement source. For example, if you’re in a GNSS-degraded environment, the Versa has some software that can alert you to that and will automatically filter out those measurements, and then navigate based on the other sensors.

With UWB, if there’s nothing local and already mapped out, could you set up some transmitters very quickly, as needed?

Versa PNT. (Image: Safran Federal Systems (formerly Orolia Defense & Security))

Our goal with this project of integrating UWB technology is to identify the exact sensors that we would need. Then it would just be plug-and-play: you would take a Versa unit and plug in a UWB sensor, and it would be able to automatically detect that and talk to other Versa systems that have UWB transceivers. Once we get all the software figured out, it will be simple in GNSS-denied environments for these UWB transceivers to start talking to each other.

If you have units within a building that all have Versa PNTs with UWB, they can see each other’s relative position, but not their absolute position. However, if one of them is located at a known point, such as the entrance or a corner, that would serve as a reference for the other ones to know where they are within the building.

Right. The technology is proven. There are already sensors that do that in warehouses and other large buildings. We want to take that idea and expand it to other GNSS-denied/degraded locations. It would be the same concept: one Versa unit goes on the edge of an area and knows its location, then broadcasts it to other Versa units with UWB technology, enabling them to determine their absolute location as well.

If 50 meters is not enough to get outside the GNSS-denied/degraded area, you might set up a chain or a mesh of as many units as needed.

Correct.

What’s your rough timeline to go live?

Currently, we’re evaluating UWB technology from different vendors and integrating it in the software portion. We will probably begin performing full field tests in the first quarter of 2024.

Are there any non-defense applications, such as with first responders?

We also provide very accurate beaconing signals that are used for location purposes. So, this is an additional technology that can be used in GNSS-degraded locations — such as deep urban canyons, jungles, or inside buildings — as long as you’re within range of the UWB transceiver.

You could accurately survey a point inside a structure ahead of time. Then you could place your UWB transmitter in that surveyed spot and provide the coordinates to other units for use in positioning.

Right, right. If you’re thinking of a very large building in a city, on every floor you could have a beacon in a very accurately surveyed location. So, if you’re in a rush, you can automatically determine your range from different beacons and use that data to determine your position.
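The underlying math is ordinary range-based positioning. Here is a small sketch (beacon layout, noise level, and solver are assumptions for illustration, not the Versa’s implementation) of a least-squares fix from ranges to beacons at surveyed coordinates:

import numpy as np

def position_from_ranges(beacons, ranges, guess, iters=10):
    # Gauss-Newton solve: iterate until predicted ranges match measured ones.
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = x - beacons
        pred = np.linalg.norm(diffs, axis=1)      # predicted range to each beacon
        H = diffs / pred[:, None]                 # Jacobian rows = unit vectors
        dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
        x += dx
    return x

beacons = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [30, 30, 3]], dtype=float)
truth = np.array([12.0, 7.0, 1.5])
ranges = np.linalg.norm(truth - beacons, axis=1) + np.random.normal(0, 0.1, 4)
print(position_from_ranges(beacons, ranges, guess=[15, 15, 0]))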

How long has Versa PNT been available? Did it evolve from a previous solution you had?

Our company was founded on timing. We have VersaSync, which provides very accurate timing signals. We’ve extended that by adding a navigation solution. Many of our customers are using the timing portion of our platforms to generate very accurate frequency reference signals. The VersaPNT also provides an assured navigation solution by fusing GNSS and inertial data.

What markets and applications are you targeting?

Versa PNT. (Image: Safran Federal Systems (formerly Orolia Defense & Security))

We’re providing precise position, timing, and situational awareness for different applications. Our systems can be used for ground, air, and sea-based applications. At Orolia Defense & Security [now Safran Federal Systems], we specifically market to the U.S. government, defense organizations, and contractors. Our systems have applications beyond defense and security, as they can be used anywhere accurate position and/or timing is needed.

How does the Versa fit into the larger debate about developing complementary PNT capabilities to compensate for the vulnerabilities of GNSS?

It is an expensive, high-end solution that fits a few niches. Every type of sensor that you’re using for PNT has its strengths and weaknesses. That’s why we have a very accurate navigation filter solution that dynamically evaluates the sensor inputs. GNSS is great but not always accurate or available. Other sensors are also not always reliable. That’s why we try to make the unit and the software inside it as customizable and flexible as possible.

Can you give me a couple of use cases?

If a ground vehicle application is entering a GNSS-denied/degraded environment, the Versa PNT’s software will detect any kind of GNSS threat. So, it’s going to cut off the GNSS feed and continue to provide a PNT solution based on inputs from the other sensors — such as an IMU, a speedometer, an odometer, or a camera. They’re all providing you different position feeds, so that you can still have an assured position.
The VersaPNT also contains internal oscillators that can provide very accurate timing signals.

An IMU-derived position drifts, of course, so it needs to be periodically re-initialized.

That’s why it’s important to use a navigation filter that’s initialized with a good position from GNSS or other sources, so that you can estimate and dynamically correct the IMU drift using bias terms and offsets.
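A toy one-dimensional sketch of that idea (state layout, noise values and the 3-sigma gate are illustrative assumptions, not the Versa PNT’s filter): the state carries an accelerometer-bias term, so each accepted GNSS position update also refines the bias estimate, and a simple innovation gate rejects suspect GNSS measurements:

import numpy as np

dt = 0.1
F = np.array([[1, dt, -0.5 * dt**2],
              [0, 1, -dt],
              [0, 0, 1]])             # state: [position, velocity, accel bias]
B = np.array([0.5 * dt**2, dt, 0.0])  # control input: measured acceleration
H = np.array([[1.0, 0.0, 0.0]])       # GNSS measures position only
Q = np.diag([1e-4, 1e-3, 1e-6])       # process noise
R = np.array([[4.0]])                 # GNSS position variance (2 m sigma)
x, P = np.zeros(3), np.diag([10.0, 1.0, 0.1])

def predict(accel_meas):
    global x, P
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q

def update(gnss_pos):
    global x, P
    innov = gnss_pos - H @ x
    S = H @ P @ H.T + R
    if (innov**2 / S).item() > 9.0:   # 3-sigma gate: reject suspect GNSS
        return
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ innov).ravel()
    P = (np.eye(3) - K @ H) @ P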


Seen & Heard: Driving blind and keeping ballots valid

“Seen & Heard” is a monthly feature of GPS World magazine, traveling the world to capture interesting and unusual news stories involving the GNSS/PNT industry.


From paradise to panic… Or not

Tourists at the Honokohau Small Boat Harbor in Kailua-Kona, Hawaii, drove their car into the harbor after following directions on a mobile map application, and were surprised when the car filled with water, reported Insider and the Washington Post. A witness to the incident took a video showing two women in a Dodge Caravan driving “confidently” into the harbor. The witness also stated that the women were not panicked and were smiling as the car tipped forward into the water. The driver and passenger eventually climbed out of the car and were not injured in the incident. An information specialist for the Hawaii Department of Transportation stated that mobile mapping applications are inaccurate and tourists should always be aware of their surroundings.


Image: Lorado/E+/Getty Images

Apple tags to the rescue again

New York City will give out free Apple AirTags to residents in an effort to stem an increasing number of car thefts, reported the New York Post. A local nonprofit donated 500 AirTags to the city to be handed out to residents, especially those in New York Police Department’s (NYPD) 43rd Precinct in The Bronx. NYPD encourages drivers to purchase the device if they are not able to receive one from the city. An equitable distribution plan is being designed by the Crime Prevention Unit of NYPD’s Community Affairs Bureau. The city will also be fundraising to purchase more AirTags or similar devices.


Image: adamkaz/iStock/Getty Images Plus/Getty Images

Keeping ballots valid

The Ottawa County Clerk’s office in West Olive, Michigan, is using location data to track vital election data around the county in real time, reported KATV News Channel 7. Once the election machine scans the results of a ballot, the data is uploaded to a flash drive and sealed with a tabulator. Then, a bipartisan group of election workers places the flash drive in a sealed container with a GPS receiver and a radio transmitter that communicates the container’s location in real time to the county clerk’s office. Ottawa County Clerk, Justin Roebuck, believes the receivers add an extra layer of security and will instill faith in voters that nobody is tampering with their ballots.


Credit: vvectors/iStock/Getty Images Plus/Getty Images

Driving blind

GPS plays a quiet but integral role in Formula 1 (F1) racing. In a sport where split-second reactions are vital, GPS helps drivers and their teams improve race to race and navigate tracks safely. The importance of live location data was seen during FP1, the opening practice session of the 2023 Australian Grand Prix. A red flag was flown due to a loss of location data triggered by a glitch in the distribution of live tire information. This caused several near-misses on the track because drivers no longer received traffic advisory calls from their teams, reported Autosport. It took more than nine minutes to restore the real-time location data.


PNT by Other Means

Image: Safran Federal Systems

Due to the limited space available in print, I was able to use only a small portion of the interviews I conducted for our July cover story. For the full transcripts (totaling more than 12,000 words), see below:

  • Safran Federal Systems (formerly Orolia Defense & Security) makes the VersaPNT, which fuses every available PNT source — including GNSS, inertial, and vision-based sensors and odometry. I spoke with Garrett Payne, Navigation Engineer.
  • Xona Space Systems is developing a PNT constellation consisting of 300 low-Earth orbit (LEO) satellites. It expects its service, called PULSAR, to provide all the services that legacy GNSS provide and more. I spoke with Jaime Jaramillo, Director of Commercial Services.
  • Spirent Federal Systems and Spirent Communications are helping Xona develop its system by providing simulation and testing. I spoke with Paul Crampton, Senior Solutions Architect, Spirent Federal Systems as well as Jan Ackermann, Director, Product Line Management and Adam Price, Vice President – PNT Simulation at Spirent Communications.
  • Oxford Technical Solutions develops navigation using inertial systems. I spoke with Paris Austin, Head of Product – New Technology.
  • Satelles has developed Satellite Time and Location (STL), a PNT system that piggybacks on the Iridium low-Earth orbit (LEO) satellites. It can be used as a standalone solution where GNSS signals will not reach, such as indoors, or are otherwise unavailable. I spoke with Dr. Michael O’Connor, CEO.
  • Locata has developed an alternative PNT (A-PNT) system that is completely independent from GNSS and is based on a network of local ground‐based transmitters called LocataLites. I spoke with Nunzio Gambale, founder, chairman, and CEO.

NextNav launches Pinnacle testbed in Europe

Image: NextNav

NextNav has launched the first European commercial testbed for its high-accuracy Pinnacle vertical location technology. Operating in Paris, France, the testbed will demonstrate the benefits Pinnacle can bring to local emergency response agencies and its integration with applications and devices from existing NextNav partners.

Available across the United States in more than 4,400 cities and towns, and currently being deployed across Japan, Pinnacle technology provides z-axis data and has been demonstrated in independent testing to deliver 94% accuracy.

The announcement of a testbed in France comes after the release of a recent European Joint Research Centre (JRC) report, which highlighted NextNav’s accuracy in providing floor-level vertical location in addition to its ability to provide a resilient layer for traditional GPS services.

With a terrestrial-based system, NextNav aims to provide highly accurate 3D position, navigation, and timing (PNT) information — revolutionizing emergency services, logistics, telecommunications, and other sectors that rely on precise PNT and are otherwise vulnerable to GPS interference – an increasing concern across the region.


Southern Company granted FAA waiver for autonomous BVLOS operations

Southern Company — an energy provider — in partnership with Skydio, has been granted a Federal Aviation Administration (FAA) conditions-based waiver enabling remote-based, autonomous beyond visual line of sight (BVLOS) dock operations across its system.

The BVLOS waiver allows the Southern Company system to conduct remote-based infrastructure monitoring and inspection at plant sites, substations, and other fixed site locations, which enables more efficient inspections, mapping and monitoring.

The Southern Company system will conduct these BVLOS operations using Skydio X2 and Skydio Dock. Skydio’s artificial intelligence technology enables operators to safely inspect infrastructure in close proximity to structures and in complex environments.

The Southern Company system was previously granted a waiver in November 2022 that allowed for advanced BVLOS operations using UAVs to map and inspect stacks, transmission lines and basins at Plant Barry in Bucks, Alabama. This waiver granted the company the ability to conduct recurring inspections of its system’s critical infrastructure.


Northrop Grumman completes successful test flight of airborne navigation system

Image: Northrop Grumman

Northrop Grumman has conducted a successful flight test of its advanced airborne navigation solution, embedded GPS/INS modernization, known as EGI-M. It is the first time that EGI-M, equipped with an M-code capable receiver, has been tested in flight.

Testing took place in May aboard a testbed aircraft. Flight test data confirmed that Northrop Grumman’s prototype EGI-M solution, the M-code-capable LN-351, performed at standards equal to its current LN-251 INS/GPS system, featuring modern fiber optic gyro technology.

Critical design review for EGI-M was completed in 2020. Launch platforms for Northrop Grumman’s EGI-M include the E-2D Advanced Hawkeye and the F-22 Raptor. The fully operational EGI-M system will feature a modular platform interface, designed to integrate with current platform navigation systems — supporting advanced software and hardware technology upgrades.


UAVs and machine learning fight invasive species in WV/PA

Image: Donn Bartram

Researchers at the West Virginia University Davis College of Agriculture, Natural Resources and Design are using UAVs to develop tools to detect, map, treat and monitor invasive plant species with a $175,000 grant from the Richard King Mellon Foundation.

Multiflora rose is an invasive shrub that threatens native plants in more than 40 states, including West Virginia and Pennsylvania. This project aims to equip UAVs with sensors to collect environmental data in a designated area of southwestern Pennsylvania over multiple seasons. The research team will use that data, combined with machine learning technology, to develop software that can identify multiflora rose and, eventually, other invasive species. The software could then be used for targeted delivery of herbicides via UAVs.

WVU is collaborating with two partners to help facilitate the project, including CNX — a natural gas company headquartered in Canonsburg, Pennsylvania that is offering the use of reclaimed mine land — and Resource Environmental Solutions — an ecological restoration company that is providing technical assistance with herbicide selection and deployment.

This project builds upon ongoing UAV-based research conducted by the National Resource Analysis Center (NRAC) with the U.S. Office of Surface Mine Reclamation and Enforcement. The current study focuses on autumn olive, which is one of the most common invasive brush species in West Virginia.

Most of the data collection and analysis focused on multiflora rose will begin in the 2024 spring growing season, but NRAC’s team of researchers is already using autumn olive data to see what information can be gathered about multiflora rose.


Microchip cesium atomic clock provides autonomous precise time

Image: Microchip Technology

Microchip Technology has released the 5071B cesium atomic clock that can perform autonomous timekeeping for months in the event of GNSS denials.

The 5071B is the next-generation successor to the 5071A commercial cesium clock. The 5071B is available in a three-unit-height, 19-inch rackmount enclosure, making it a compact product for environments where it can be easily transported and secured.

The 5071B has upgraded electronic components to address possible obsolescence or non-RoHS circuitry. The clock provides 100 ns holdover for more than two months, maintaining system synchronization when GNSS signals, like GPS, are denied.

As a cesium beam tube product with no deterministic long-term frequency drift, the 5071B provides absolute frequency accuracy of 5E-13 or 500 quadrillionths over all specified environmental conditions for the life of the product. For military applications requiring rapid deployments for system radars, 5E-13 stability eliminates the need for the acquisition of external synchronization sources prior to radiating.
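As a back-of-envelope illustration (not a restatement of the datasheet), holdover time error accumulates roughly as the average fractional frequency offset times the elapsed interval, so holding 100 ns over roughly 60 days corresponds to an average fractional frequency offset near two parts in 10^14:

\[
\Delta t \approx \bar{y}\,\tau,\qquad
\bar{y} \approx \frac{100\ \mathrm{ns}}{5.2\times 10^{6}\ \mathrm{s}} \approx 2\times 10^{-14}.
\]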

The 5071B is now fully compliant with the Restriction of Hazardous Substances Directive, making this device available in regions where regulatory policies are in place.