
Lidar Innovations
Throughout its history, lidar has been one of the very few technologies in which the exponential growth in the hardware’s ability to collect data has outpaced software’s ability to process and visualize that data, largely because lidar produces unstructured point clouds rather than raster data, which consists of rows of pixels. Software is catching up, however, offering new ways to manage, edit, and visualize point-cloud data.

At the same time, traditional lidar is being challenged by two new and potentially alternative technologies that also generate point clouds: multi-ray photogrammetry and Flash Lidar. The former can boost the performance both of light, inexpensive consumer-grade cameras and of large, high-end aerial sensors. Both may soon make 3D sensors small and light enough to be deployed on small unmanned aerial vehicles (UAVs).

SOFTWARE CATCHING UP

Sensor manufacturers are not in the software business; the software they produce to support their devices therefore tends to be expensive, hard to use, and behind customers’ needs, argues Elmer Bol, Director of Reality Capture at Autodesk. As a result, the workflow to obtain useful data from lidar scanners is often complicated, and this, more than the cost of the sensors, limits their use.

AUTODESK

Bol and a couple of his friends recognized this problem and founded Alice Labs to create tools that dramatically simplify that workflow. “If we want to increase the number of scanners sold by a factor of ten, we need to make sure that the entire workflow is simplified for any user, not only for professional surveyors,” he says. Three years ago, Autodesk acquired Alice Labs. A couple of years earlier, it had acquired REALVIZ, which led to the development of Photofly, the module that powers its consumer-oriented 123D Catch software for creating 3D meshes from photos. Last year, Autodesk released Photo on ReCap 360, a professional-grade cloud service for creating high-resolution 3D meshes and photo-based point clouds. ReCap 360’s RealView enables users to share laser scans with other Autodesk 360 accounts for viewing in a Web browser.

With the acquisition of Allpoint Systems, the company added targetless scan-to-scan registration to its laser scanning workflows in the ReCap Pro desktop software. The recently released feature lets users automatically snap scans together using feature recognition instead of survey targets. “It dramatically simplifies the process of registering together laser scans to create a single model,” says Bol. “Our goal is to make that an in-field workflow. I can teach anyone to register laser scan data in minutes without the need for survey expertise.”
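Autodesk has not published the algorithm behind this feature, but the general idea of targetless registration can be sketched with a basic point-to-point ICP step: repeatedly match nearest neighbors between two overlapping scans and solve for the rigid transform that snaps one onto the other. The following is a minimal sketch under that assumption, using NumPy and SciPy; the function names and parameters are hypothetical, not ReCap’s API.

```python
# Minimal sketch of targetless scan-to-scan registration (point-to-point ICP).
# NOT Autodesk's algorithm; it only illustrates aligning two overlapping
# scans without survey targets.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that R @ src + t ~= dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def register_scans(moving, fixed, iterations=30):
    """Iteratively snap the 'moving' scan (N x 3) onto the 'fixed' scan (M x 3)."""
    tree = cKDTree(fixed)
    aligned = moving.copy()
    for _ in range(iterations):
        _, idx = tree.query(aligned)        # nearest-neighbor correspondences
        R, t = best_rigid_transform(aligned, fixed[idx])
        aligned = aligned @ R.T + t         # apply the incremental transform
    return aligned
```

Production tools layer robust feature recognition and outlier rejection on top of this core geometric step; the sketch only conveys why no survey targets are needed when the scans themselves overlap sufficiently.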

Point clouds can be pretty rough, he explains, and might need some cleaning or other editing. ReCap, a free data preparation tool that ships in all of Autodesk’s software suites, allows users to import any kind of scan data, visualize it, and edit and clean the data. See Figures 1-2.

FIGURE 1. Using Autodesk ReCap Pro, point clouds no longer look like abstract groups of points; they are fully functioning 3D models.

FIGURE 2. Autodesk ReCap Pro shows various height levels through color coding.


EUCLIDEON

Euclideon, founded in late 2009 as a gaming company, soon demonstrated its middleware 3D graphics engine, called Unlimited Detail. Unlike traditional polygonal game engines that run on a computer’s GPU (graphics processing unit), it renders 3D points called atoms using only a computer’s CPU, explains Derek van Tonder, the company’s Technical Business Development Coordinator. The following year, aided by a large government grant, Euclideon began to develop its first commercial geospatial software offering, Geoverse.

Merrick & Company, Inc., a U.S. engineering, architecture, planning, and geospatial services company, was an early adopter of Geoverse and soon became a distributor. “Euclideon came to us with a very simple question,” says Bill Emison, Senior Account Manager in Merrick’s Geospatial Solutions division. “Do you have more points than you know how to manage and visualize? We said yes, so they showed us their solution and it became quite a treat. The rest is history. Their background in the video-game industry has served them well because they know how to render very large datasets very efficiently.” According to Josh Beck, a software consultant in the same division, Merrick staff found the product to be a great resource for both quality assurance and marketing, by allowing them for the first time to efficiently visualize huge datasets, internally as well as for their clients and potential clients.

Competing technologies just parse the data and are limited in the number of points they can display at certain scales, Beck explains, because they load the data into a computer’s RAM. By contrast, “Geoverse allows us to visualize all the points, no matter how many, from just a typical workstation or even a five-year-old laptop.” Users can load “three terabytes of data in less than a second,” he claims, and then pan it in real time.

The key to Geoverse’s performance is its novel form of indexing. “It is like a Google or Yahoo search algorithm for 3D points,” says van Tonder. “We find exactly one 3D point for every pixel on the screen.” See Figure 3.

FIGURE 3. 3D view of electrical substation and transmission lines, courtesy of Merrick.
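Euclideon has not disclosed how Unlimited Detail works, but van Tonder’s description of finding “exactly one 3D point for every pixel” resembles the screen-driven level-of-detail traversal used by many massive-point-cloud renderers: points sit in a hierarchical index (an octree here, purely as an assumption), and a node is refined only while it still covers more than about one pixel on screen. The sketch below illustrates that generic idea; all names are hypothetical, and this is not Euclideon’s algorithm.

```python
# Generic sketch of screen-driven level-of-detail point selection.
# Assumption: points live in an octree whose nodes each keep one
# representative point; traversal stops once a node projects to ~1 pixel.
import math
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple          # (x, y, z) centre of the node's bounding cube
    size: float            # edge length of the bounding cube
    representative: tuple  # one stored point standing in for everything inside
    children: list = field(default_factory=list)

def projected_pixels(node, camera_pos, focal_px):
    """Approximate on-screen footprint of the node, in pixels (pinhole model)."""
    dist = math.dist(node.center, camera_pos)
    return node.size * focal_px / max(dist, 1e-6)

def select_points(node, camera_pos, focal_px, out):
    """Collect roughly one point per screen pixel by refining only large nodes."""
    if not node.children or projected_pixels(node, camera_pos, focal_px) <= 1.0:
        out.append(node.representative)   # pixel-sized node: its one point suffices
    else:
        for child in node.children:       # still larger than a pixel: descend
            select_points(child, camera_pos, focal_px, out)
```

Because the traversal touches only the nodes a given view needs, the working set scales with the number of screen pixels rather than with the size of the dataset, which is consistent with Beck’s description of panning terabyte-scale clouds from an ordinary workstation.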


At the International Lidar Mapping Forum (ILMF) in Denver in February, Euclideon’s CEO Bruce Dell will present a keynote on new technologies and the future of scanning, with Christoph Fröhlich, CEO of Zoller + Fröhlich.

ALTERNATIVE TO LIDAR: MULTI-RAY PHOTOGRAMMETRY

PIX4D

Traditionally, photogrammetrists reconstruct 3D information using two images and a stereo display. At ILMF, Dr. Christoph Strecha, CEO of Pix4D, will present a new approach to reconstructing 3D information with increased accuracy from several images, based on multi-ray photogrammetry. It is the basis of the company’s new release of Pix4Dmapper. This concept, he predicts, will be standard in a couple of years. “If you want to put a sensor on a very light-weight UAV or on mobile phones,” he points out, “you are always restricted in price, weight, and energy consumption. Given these restrictions of the sensor, multi-ray photogrammetry is the best approach we have.” See Figures 4-5.

FIGURE 4. Pix4D’s rayCloud editor combines the 3D points of a point cloud with the original input images, resulting in this image of a quarry.

FIGURE 5. Object annotations for a stockpile volume measurement in Pix4Dmapper.


One way to increase accuracy is to build better sensors, but that makes them too heavy for small UAVs, Strecha argues. That, for example, is the route that Microsoft took by buying Vexcel, an Austrian company that builds very good cameras, and using them to capture cities and build beautiful, highly accurate 3D models. The alternative is to use smart algorithms.

“There’s a growing market in extracting 3D information from consumer devices and that’s what we’re addressing,” he says. “We are focusing on integrating images that have been taken from the air looking downward with oblique ones. This is especially interesting for modeling cities, where you not only want to get the roofs but also very detailed information on the façades. It is very challenging to generate simplified models directly from captured data: this is a car, this is an entry, this is a window, and so on. Multi-ray photogrammetry will give a lot of added value because you are not just measuring point clouds, but you are able to automatically also integrate them into an existing database.”

MICROSOFT

While multi-ray photogrammetry is not new, it has been helped along by digital cameras, says Jerry Skaw, Marketing Manager for Microsoft’s Photogrammetry Division. It is based, he explains, on a dense-matching process that starts with a flight pattern that has 80 percent forward overlap and 60 percent side overlap, as opposed to traditional flight patterns that have an overlap of only about 60 percent forward and 20 percent on the side. “This provides enough redundancy so that you get the same points on the ground in up to twelve different images,” Skaw explains. “Consequently, you have a more robust and highly automated dense-matching process.” See Figure 6.

FIGURE 6. Very high-density point cloud from dense matching of cathedral in Graz, Austria, taken with UltraCam Xp at ground sampling of 6 cm, is exportable to LAS file format, courtesy of Microsoft.
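Skaw’s figure of up to twelve images per ground point follows directly from the stated overlaps: with 80 percent forward and 60 percent side overlap, a ground point is seen in roughly 1/(1 - 0.8) = 5 consecutive images along a strip and roughly 1/(1 - 0.6) = 2.5 adjacent strips, or about 12 images overall. The short calculation below reproduces that back-of-the-envelope estimate; the function name is hypothetical.

```python
# Back-of-the-envelope estimate of image redundancy from flight-line overlaps.
def images_per_ground_point(forward_overlap, side_overlap):
    along_track = 1.0 / (1.0 - forward_overlap)   # consecutive images in a strip
    across_track = 1.0 / (1.0 - side_overlap)     # adjacent strips covering the point
    return along_track * across_track

print(images_per_ground_point(0.80, 0.60))  # ~12.5 images (multi-ray flight plan)
print(images_per_ground_point(0.60, 0.20))  # ~3.1 images (traditional flight plan)
```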


Microsoft’s UltraMap software ingests imagery from the company’s UltraCam sensor and stitches together the sub-images created by its different CCDs. Next, it performs radiometric corrections, color balancing, and aero-triangulation. The software then uses the resulting precise exterior orientation data to generate per-pixel height values and output very dense point clouds.

“This process is very automated and these point clouds are much denser than lidar point clouds,” says Skaw. “Ours are on the order of or greater than 300 points per square meter.” From there, users can generate a digital surface model (DSM), whose vertical accuracy matches the ground-sample distance at which the images are taken, and export the DSM or the point cloud in LAS format. They can also generate a digital terrain model (DTM), but that is not exportable. The DSMs and DTMs are then used to generate orthophotos. See Figures 7-8.

FIGURE 7. The DSMs generated in the UltraMap Dense Matcher module inherit the accuracy of the dense underlying point clouds and can be exported in tiles as 32-bit floating-point GeoTIFF for use in downstream workflows.


FIGURE 8. From the DSMs and DTMs, the UltraMap OrthoPipeline module automatically generates final DSM- and DTM-based orthomosaics in optional TIFF & TFW and GeoTIFF file formats.
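The quoted density is consistent with per-pixel dense matching at the ground sample distance shown in Figure 6: at a 6 cm GSD, one matched point per pixel works out to about 1/0.06² ≈ 278 points per square meter, on the order of the 300 points per square meter Skaw cites. A one-line check of that arithmetic, with a hypothetical function name:

```python
# Point density implied by per-pixel dense matching at a given ground sample distance.
def points_per_square_meter(gsd_m):
    return 1.0 / (gsd_m ** 2)

print(points_per_square_meter(0.06))  # ~278 pts/m^2 at 6 cm GSD, i.e. on the order of 300
```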


“There are huge efficiency gains with flying one of our cameras and using our software to create point clouds,” Skaw argues. “You can fly faster, you can do much more overlap, and you end up with a much larger usable swath.” These advantages are particularly important for very large collection areas, such as for national mapping and mining. Also, by collecting more data than lidar, multi-ray photogrammetry produces better DSMs and, therefore, better orthophotos.

ALTERNATIVE TO LIDAR: FLASH LIDAR

ADVANCED SCIENTIFIC CONCEPTS

Advanced Scientific Concepts (ASC) developed 3D Flash Lidar cameras on the basis of much core research around the readout ICs (also known as ROICs), which are “the brain” of the focal plane array, explains Thomas Laux, ASC’s Vice President of Business Development and Sales. Flash Lidar cameras operate and appear very much like 2D digital cameras. Like the latter, they have rows and columns of pixels on their focal plane arrays, but with the additional capability of measuring the 3D “depth” and intensity.

A pulsed laser illuminates the objects in front of the camera and each pixel independently records the time the pulse takes to reach the objects and return to the sensor. With each flash (frame), ASC cameras capture 16,384 data points, allowing them to capture dynamic scenes at a high rate. Because they are solid state, they have no mechanical parts that wear out, do not require routine calibration, and are smaller, lighter, and more durable than laser scanners.
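ASC has not published its processing chain, but the per-pixel measurement described here is standard time-of-flight: each of the 16,384 pixels (for example, a 128 x 128 array) records the round-trip time of the laser pulse, and range is half that time multiplied by the speed of light. The sketch below converts one synthetic frame of round-trip times into a range image; the array size, data, and names are assumptions for illustration only.

```python
# Convert one flash-lidar frame of per-pixel round-trip times into ranges.
# Illustrative only: a 128 x 128 array gives the 16,384 samples per flash
# mentioned in the article; the data here is synthetic.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def ranges_from_round_trip(times_s):
    """Range = c * t / 2 for each pixel's round-trip time t (seconds)."""
    return SPEED_OF_LIGHT * times_s / 2.0

# Synthetic frame: round-trip times for targets between ~150 m and ~1 km away.
rng = np.random.default_rng(0)
frame_times = rng.uniform(1e-6, 6.7e-6, size=(128, 128))
range_image = ranges_from_round_trip(frame_times)
print(range_image.shape, range_image.min(), range_image.max())
```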

Because they measure distances directly, Flash Lidar cameras can provide absolute range data on the fly, as well as the speed at which they are approaching another object and its image. “This technology is now emerging into a wide range of applications,” says Laux. “It has been of keen interest to some application areas like NASA, for ranging or imaging rather long ranges and knowing the absolute range from a target—for example, for autonomous rendezvous and docking with the International Space Station—but also for landing on various planetary locations like the Moon, Mars, and asteroids.”

Flash Lidar cameras are also used for mapping and are being researched as a way to image into and through obscurants such as dust, fog, or smoke. These capabilities are critical for moving machines, especially autonomous ones, such as UAVs used for aerial mapping or to transport cargo. For imaging into and through water, ASC typically uses its raw or continuous sample mode (CSM), while for other applications it uses a range-and-intensity mode.

Ten or 15 years ago, Laux points out, one of these cameras plus the processing unit would probably have weighed 15 or 20 pounds. “Now, we’re talking about all-in with, let’s say, a 12 millijoule laser capable of imaging well over a kilometer and a TigerCub camera that weighs less than three pounds,” says Laux. See Figure 9. “We have a new one that weighs less than a pound.”

FIGURE 9. Advanced Scientific Concepts Inc.’s TigerCub 3D Flash Lidar camera with Zephyr laser weighs less than three pounds. Image courtesy of ASC.


“The same unit is being targeted for going into mines, either for autonomous operations on big trucks or underground. You put these cameras on vehicles or sensors and then you do an explosive blast keeping people safe. You can manage all this stuff remotely, without putting humans at risk.”

Over the next three to five years, Laux predicts, numerous companies are going to use this technology to create whole new application areas. “Scanning lidar systems have proven that time-of-flight is of value to humans. Now we’re poised to watch this emerging technology just ‘blow off all the doors.’ ”

BALL AEROSPACE AND TECHNOLOGIES CORP.

Ball Aerospace and Technologies Corp. has been studying ASC’s Flash Lidar device and uses it as a component in its Total Sight Flash Lidar system, which also contains visible and/or MWIR cameras to provide contextual imagery and a single board computer to process the data. “We fuse the color imagery with the data coming out of the ASC lidar camera in real time, on a frame-by-frame basis,” says Roy Nelson, Sr. Advanced Systems Manager at Ball. “We also fly an Applanix INS unit in our box and we georegister all of the lidar data in real time. So, we have built a lidar system, not just a camera, where lidar is now Laser Imaging, Ranging, and Detection.” See Figure 10.

FIGURE 10. Scan of Denver, Colorado, courtesy of Ball Aerospace Corp.


Ball has developed the signal processing and the software to perform this fusion in real time, compiling the metadata for each frame rather than for each pixel. “That reduces the amount of metadata and it also allows you to do significant processing on the frames,” Nelson explains. “Before the next frame of data is taken, we process all of the data from the previous frame. So, on the ground, you see a 3D full-motion video image and it is fully geo-registered. At the same time, we are creating a LAS file, in parallel. At any given time, the user can snip the LAS file and use it as a full Level 3 data product.”
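Ball has not disclosed the details of its real-time fusion, but the core of frame-level georegistration can be sketched simply: because the INS pose (position and attitude) is recorded once per frame rather than per pixel, every point in that frame is rotated by the frame’s attitude and translated by its position to land in world coordinates. The sketch below illustrates that single step with NumPy; the rotation convention, data, and names are assumptions, not Ball’s implementation.

```python
# Sketch of frame-level georegistration: one INS pose applied to a whole
# flash-lidar frame of points. Illustrative only; not Ball's implementation.
import numpy as np

def yaw_pitch_roll_matrix(yaw, pitch, roll):
    """Rotation from sensor frame to a local level frame (Z-Y-X convention, radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georegister_frame(points_sensor, ins_position, ins_attitude):
    """Transform an (N, 3) frame of sensor-frame points into world coordinates."""
    R = yaw_pitch_roll_matrix(*ins_attitude)
    return points_sensor @ R.T + np.asarray(ins_position)

# One synthetic frame (16,384 points) and one frame-level pose.
frame = np.random.default_rng(1).normal(size=(16384, 3)) * 50.0
world_points = georegister_frame(frame, ins_position=(500.0, 1200.0, 300.0),
                                 ins_attitude=(0.1, 0.02, 0.01))
print(world_points.shape)
```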

While the current scanning systems have been optimized over the years to provide very accurate data on static targets, Nelson says, the value of Ball’s system is in providing time-critical information in real time when the conditions on the ground are rapidly changing. “Flash Lidar rolls at 30 frames per second and it is an array-type sensor, so it is nothing more than a highly accurate 3D camera. It provides full motion video in 3D.” Ball is now looking at re-packaging its unit to fit within the space constraints of small UAVs.

SABRE LAND & SEA

None of the lidar scanners currently on the market are small and light enough to fit on the smaller and lighter UAVs that are exploding in popularity on the civilian market. “They have been doing laser scanning for years with unmanned helicopters capable of carrying more than 100 kilograms, but with those you are up near the costs of a manned helicopter,” says Stephen Ball, the founder and CEO of Sabre Land & Sea. “You might as well just fly a Robinson R22 two-seat helicopter with a pod on it, instead of using a $250,000 UAV to carry a $500,000 lidar scanner. We wanted to build a system small enough, light enough, and cost-effective enough for the UAV market.” He is targeting UAVs under 20 kilograms and integrating into aerial mobile mapping what he learned during more than 10 years in the terrestrial mobile mapping sector.

While the weight of the laser scanner itself is the biggest challenge, the required components also include the GPS/IMU navigation system, the processing hardware, and the power source. Sabre is developing a pod containing these components into which it can integrate any manufacturer’s sensor. It is starting with a FARO scanner, a high-precision laser scanner designed for static scanning on the ground.

“We have successfully integrated it for the airborne environment, but it is still too heavy,” says Ball. “We have now managed to reduce the weight of the SABRE pod in order for it to carry lasers from other manufacturers, such as Velodyne and Ibeo.” To stabilize the platform in high winds, SABRE developed its own electric-powered, multi-rotor aircraft. The payload can also be separately stabilized, using a double-gimbal and a gyroscope. See Figure 11.

FIGURE 11. Multi-rotor UAV prototype landing on auto-pilot, courtesy of Sabre Land & Sea.


Besides weight, a few additional hurdles still need to be overcome before lidar can be routinely deployed on small UAVs, Ball argues. The first is the combination of cost, reliability, and insurance. “Even if we make a laser scanning system that costs $80,000,” says Ball, “are they going to put it on a lightweight UAV that you can buy off the shelf for $20,000? For small UAVs, it’s an insurance and confidence issue.” The second hurdle is the lack of a small UAV that is reliable in a broad range of weather conditions. The third is regulatory: in the United States, UAVs may not be operated commercially except under a Certificate of Authorization (COA) from the Federal Aviation Administration, which is hard to obtain. The agency is required by law to issue regulations for commercial UAV use of the airspace by 2015. The final hurdle is that data acquisition and processing need to be easier.

CONCLUSIONS

Demand for 3D imagery continues to grow rapidly for consumer, business, and government applications, including creating 3D maps for navigation, modeling as-built construction, and planning emergency response. However, the way we collect, process, and visualize this data is changing rapidly, from both hardware and software perspectives. For many applications, large and heavy laser scanners may soon give way to small and light sensors that use emerging technologies.

Contributor / Pale Blue Dot, LLC, Portland, Ore. / www.palebluedotllc.com