The upsurge in the use of sensors installed on pilotless devices, known as drones, to acquire aerial images has given new life to photogrammetric applications. Not so long ago, these applications were reserved for specialists who had mastered the mathematical principles of camera modelling, image block aerotriangulation and geometric correction. Today, a whole range of tools is available, a good number of which focus on the automatic reconstruction of 3D objects and scenes from multiple photos. This field has grown in leaps and bounds, given the advances in artificial intelligence and shape recognition, as well as the profusion of digital images of the world in which we live. One has only to consult the Wikipedia page entitled “Comparison of photogrammetry software” to see the impressive number of software programs in existence, most of which offer functions for the automatic modelling of several images.
However, a closer look at this list reveals two notable gaps. First, the author of the Wikipedia page* did not include, among his comparison criteria, the option of viewing images in stereo (using an NVIDIA 3D-type system with shutter glasses), which allows photo-interpretation or the manual extraction of precise measurements. Also missing from the list are several professional photogrammetry software applications such as Socet GXP (formerly Socet Set), Imagine Photogrammetry (formerly LPS) and Summit Evolution (Dat/EM). These products are highly specialized and costly, and are sold as photogrammetric suites, often integrated into a GIS application. Among other things, they provide interactive stereoscopic viewing functions as well as a variety of automated operations.
In a way, this list reflects a trend in the field of photogrammetry: the processing of 2D videos or images for 3D reconstruction has sparked broad interest among users and developers. It is a dynamic and highly innovative field (e.g. https://grail.cs.washington.edu/rome) that draws on expertise from artificial intelligence and computer vision. It is therefore not surprising that the first applications to offer functions for processing drone images appeared in software dedicated to this area. Processing drone images may seem a natural extension of 3D reconstruction, especially given the quantity of images to be processed and the ability to handle variable acquisition conditions.
However, uncertainty about the results and the difficulty of obtaining precise measurements in a given mapping system could disappoint those for whom mapping accuracy is essential to geographic information, as well as those who consider drone image acquisition an extension of aerial photography or satellite imagery in a geomatics context.
There also exists software that could be described as “hybrid”, such as Correlator 3D and Pix4D, which have benefited from advances in computer vision, in particular for the efficient processing of large numbers of images through highly automated and intuitive approaches. These programs appear, on the one hand, more rigorous regarding geometric principles and the mapping accuracy of the products generated and, on the other hand, better integrated into a GIS processing chain. However, most of them do not offer stereo viewing.
As for more traditional solutions, most suppliers, facing a market with a growing number of niches but short on highly automated solutions, now offer a drone image processing module (Summit UAS, Imagine UAV) integrated into costlier GIS suites. They remain, however, virtually the only ones to offer interactive stereo viewing of drone images or their derivatives, allowing users to interpret them or extract precise information manually.
Stereo viewing is also one of the aspects that piqued our interest during our initial experiments with drone images. Although the ability to process these batches of images automatically and efficiently, and to obtain the resulting products quickly, is an essential requirement today, we should also consider the possibility of viewing the images in stereo to extract thematic information, of correcting and enhancing automatically generated products (for example, refining the cleaning of a surface model to obtain a terrain model), or simply of measuring control points in stereo to improve the absolute accuracy of a block's aerotriangulation.
We therefore tried to take advantage of the processing and triangulation capabilities of software such as Correlator 3D to obtain camera and exterior orientation parameters adjusted with sufficient precision. We then imported stereo pairs, using these same parameters, into a program that allows 3D viewing, such as Summit. We were fortunate to have a block of images with adequate overlap and enough flight lines, the essential conditions for successful photogrammetric modelling of an image block. In addition, since the camera was not calibrated, we could use a very practical function that exploits image redundancy to determine, via autocalibration, the internal parameters of the camera, for example the lens distortion coefficients, which are significant for certain types of cameras, such as the D800E used for our tests. Refining these parameters, often unknown at the outset, proved very useful, since it yields clear stereoscopic vision without vertical parallax over a larger part of the stereo overlap. Conversely, using approximate parameters provides adequate stereoscopic viewing only in a small radial zone at the centre of the model.
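To illustrate why approximate distortion coefficients only yield a clean stereo model near the centre, here is a minimal sketch of the radial (Brown-type) distortion model that self-calibrating bundle adjustments commonly estimate. The coefficient values and point coordinates are illustrative assumptions, not parameters from the camera or tests described here.

```python
# Sketch: radial (Brown-type) lens distortion, one of the interior
# orientation parameters that autocalibration can recover.
# All coefficient values below are illustrative, not from a real camera.

def apply_radial_distortion(x, y, k1, k2, k3=0.0):
    """Map ideal normalized image coordinates (x, y) to distorted ones.

    r2 is the squared radial distance from the principal point; the model
    scales each coordinate by (1 + k1*r^2 + k2*r^4 + k3*r^6).
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * scale, y * scale

# A point near the image edge is displaced far more than one near the
# centre, so errors in k1, k2 mostly corrupt the model's periphery,
# leaving only a small radial zone with negligible vertical parallax.
cx, cy = apply_radial_distortion(0.05, 0.05, k1=-0.1, k2=0.01)
ex, ey = apply_radial_distortion(0.80, 0.80, k1=-0.1, k2=0.01)
```

The displacement grows with the radial distance raised to odd powers, which is consistent with the observation that a model built from approximate parameters degrades toward the edges of the stereo overlap.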
The exercise was conclusive and interesting: the degree of detail offered by drone images in stereo is impressive. We could almost distinguish ¼” gravel from ¾” gravel, which is remarkable! However, solutions that produce this type of rigorous configuration remain few and specialized. The possibility of acquiring images by drone has not changed certain basic photogrammetric principles: we must still resolve, with sufficient precision, the interior orientation parameters (the camera) and the exterior orientation parameters (the position of the drone in space). To achieve this, among other things, the flight must be planned as a block of images with enough flight lines and adequate overlap between photos. Once the images are integrated into the stereoscopic viewing system, it becomes possible to perform photo-interpretation to extract information, or even to interactively enhance automatically generated products, such as a digital terrain model.
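The planning requirement above (enough flight lines, adequate overlap) comes down to simple arithmetic on the photo footprint. The sketch below shows that calculation; the sensor size, focal length, altitude and overlap percentages are assumptions chosen for illustration, not the actual flight parameters used in our tests.

```python
# Sketch of the flight-planning arithmetic behind "enough flight lines
# and adequate overlap". All numeric inputs are illustrative assumptions.

def ground_footprint(sensor_w_mm, sensor_h_mm, focal_mm, altitude_m):
    """Ground footprint (width, height) in metres of a nadir photo."""
    scale = altitude_m / focal_mm          # ground metres per sensor mm
    return sensor_w_mm * scale, sensor_h_mm * scale

def exposure_spacing(footprint_w, footprint_h, forward_overlap, side_overlap):
    """Distance between exposures along a line, and between flight lines."""
    base = footprint_h * (1.0 - forward_overlap)   # along-track spacing
    line = footprint_w * (1.0 - side_overlap)      # across-track spacing
    return base, line

# e.g. a 35.9 x 24 mm full-frame sensor (similar to a D800E) with a
# 35 mm lens at 120 m altitude, flown with 80 % forward overlap and
# 60 % side overlap between adjacent flight lines.
w, h = ground_footprint(35.9, 24.0, 35.0, 120.0)
base, line = exposure_spacing(w, h, 0.80, 0.60)
```

Higher overlap means closer exposures and tighter flight lines, hence more images; that redundancy is precisely what makes autocalibration and a reliable aerotriangulation possible.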
*Note from the author: At the time of writing this article, Socet Set, Summit Evolution, Summit UAS and Imagine LPS software were not part of the list. They have recently been added.
To learn more about drone-assisted imagery, contact Effigis, and obtain a tailored solution for your project.