Developer's Perspective: The new LiDAR tool for iOS

At Photogram, we are constantly looking for innovations in the field of digital surveying. Our goal is to make these technologies more accessible and user-friendly, and for some time now our platform has been turning any smartphone that can take good-quality photos and videos into a reliable surveying tool. The LiDAR sensors in the new PRO iPhones open up new possibilities to make surveying with smartphones even more efficient and accurate. This article describes how we use this new hardware to create point clouds.

At present, we mainly use photogrammetry in our software. With this technology, point clouds of an object or a target location can be created from photos and/or videos with impressive accuracy. Although this technology is very flexible and delivers good results, the conversion of video and photo data into point clouds takes a lot of time. In addition, it is only possible to judge afterwards whether the raw data is sufficient for a satisfactory result.

In contrast, LiDAR (Light Detection and Ranging) measures the distance to the target object directly with a laser, so depth information is available immediately during capture. One of the biggest advantages of this technology is that the distances to the objects are correctly scaled from the start; subsequent scaling to the correct size is unnecessary, which saves computing power and therefore time. The orientation of the point cloud is also unambiguous thanks to the IMU built into the device, so the point cloud is always correctly positioned in space.

Classic laser scanners consist of a laser and a receiver unit directed at a rotating mirror; the entire unit then rotates around the Z-axis so that the laser can scan the whole environment. These devices provide very accurate results but are unwieldy for occasional users. A newer development dispenses with large mechanical components and is therefore extremely compact: these laser scanners use MEMS (Micro-Electro-Mechanical Systems) technology to integrate microscopically small, movable mirrors on circuit boards, which makes it possible to steer the laser beam precisely in all directions without a large mechanical rotating mirror. Such scanners are known as solid-state LiDARs, as they no longer contain any conventional mechanical components.

Since 2020, Apple has integrated a solid-state LiDAR in all PRO models of its iPhones and iPads. This LiDAR is mainly used to improve the AR functions. With iOS 14 and ARKit 4, the depth data from this sensor was also made available to developers via the ARKit API. This has given us the opportunity to integrate the new hardware into our software and use it for measurements.
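
To give an impression of the API, here is roughly how the depth data is requested; the function name makeDepthSession is illustrative, but the configuration calls themselves are part of Apple's public ARKit API:

```swift
import ARKit

// A minimal sketch, assuming ARKit 4's public scene-depth API: the
// configuration below asks ARKit to deliver a LiDAR depth map (plus a
// per-pixel confidence map) with every ARFrame.
func makeDepthSession() -> ARSession? {
    // Scene depth is only supported on devices with a LiDAR scanner.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else {
        return nil
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics = .sceneDepth

    let session = ARSession()
    session.run(configuration)
    return session
}
```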

Our "LiDAR function" uses the depth information from the LiDAR scanners of the iPhone or iPad to generate point clouds. The exact position and rotation in space is provided by ARKit with the integrated IMU, allowing 3D coordinates to be calculated in relative space. In addition, ARKit provides a "confidence value" for each point, which indicates the reliability of the captured points. It indicates how certain ARKit is that it has correctly determined the position and orientation of the detected points. The raw data from the LiDAR sensor does not yet have any colour values. Therefore, while using our "LiDAR function", up to 30 photos are taken every second with the integrated camera to add colour information to each point in the point cloud. Based on all this information, our algorithm tries to calculate the best available points, which are then saved as a point cloud. This process is repeated up to 30 times per second.

Our app is based on web technology, which offers many advantages, such as developing the app only once and running it on different platforms (web, iOS and Android). However, there are also some disadvantages: native APIs such as ARKit are not directly accessible, which makes it difficult to reach the LiDAR. In addition, the computing power available to a web application is limited, and functions such as live 3D visualisation with real-time mapping are difficult to implement with web GPU APIs such as WebGL.

Our "LiDAR function" uses the depth information from the LiDAR scanners of the iPhone or iPad to generate point clouds. The exact position and rotation in space is provided by ARKit with the integrated IMU, allowing 3D coordinates to be calculated in relative space. In addition, ARKit provides a "confidence value" for each point, which indicates the reliability of the captured points. It indicates how certain ARKit is that it has correctly determined the position and orientation of the detected points. The raw data from the LiDAR sensor does not yet have any colour values. Therefore, while using our "LiDAR function", up to 30 photos are taken every second with the integrated camera to add colour information to each point in the point cloud. Based on all this information, our algorithm tries to calculate the best available points, which are then saved as a point cloud. This process is repeated up to 30 times per second.

Our app is based on web technology, which offers many advantages, e.g. the possibility to develop the app only once and use it on different platforms such as web, iOS and Android. Nevertheless, there are also some disadvantages, such as the lack of access to native APIs like ARKit, which makes it difficult to access the LiDAR. The computing power in a web application is limited, and functions such as live 3D visualisation with real-time mapping are difficult to implement with web GPU APIs such as WebGL.

If a LiDAR is available, the openLidarCapture function in useLidar.ts can be used to start the actual LiDAR plugin with its native user interface. For this purpose, the startLidar function in LidarIos.swift is called. There, a new native view controller is pushed over the existing web view controller to display the point cloud using the native Metal graphics interface. Unlike other functions, the openLidarCapture call is not completed after a single invocation. Instead, a bidirectional communication channel is kept open between web and native, which is used to tell the app what to do next. For example, the web app is informed when the point cloud is ready and at which path it has been saved. It needs this information later to start the upload to our server with the useLidarUpload function in useLidarUpload.ts. The actual upload, however, is carried out by the native plugin itself, as the bridge does not have enough bandwidth to transfer the point cloud from the native plugin to the web app. The upload code can be found in the NFSClient.swift file.
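
To make this flow more concrete, here is a strongly simplified sketch of the native side. The WebBridge protocol, the event name pointCloudReady and the controller names are assumptions for illustration; only startLidar and the general flow are taken from the description above.

```swift
import UIKit

// Hypothetical bridge abstraction: the real app uses a web/native plugin
// bridge whose exact API is not shown in this article.
protocol WebBridge: AnyObject {
    /// Sends an event with a JSON-like payload back to the web app.
    func notifyWebApp(event: String, payload: [String: Any])
    /// The view controller hosting the web view.
    var webViewController: UIViewController { get }
}

// Placeholder for the Metal-rendered scan UI; rendering is omitted.
final class LidarScanViewController: UIViewController {
    var onPointCloudSaved: ((URL) -> Void)?
}

final class LidarPlugin {
    private let bridge: WebBridge

    init(bridge: WebBridge) {
        self.bridge = bridge
    }

    // Invoked from the web app via openLidarCapture. Unlike a one-shot call,
    // the channel stays open so that events can flow back during the scan.
    func startLidar() {
        let bridge = self.bridge
        let scanController = LidarScanViewController()
        scanController.modalPresentationStyle = .fullScreen
        // Tell the web app when the point cloud has been written to disk,
        // so that it can later trigger the upload (useLidarUpload).
        scanController.onPointCloudSaved = { fileURL in
            bridge.notifyWebApp(event: "pointCloudReady",
                                payload: ["path": fileURL.path])
        }
        // Present the native scan view over the existing web view controller.
        bridge.webViewController.present(scanController, animated: true)
    }
}
```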

Among smartphones and tablets, LiDAR sensors are currently only found in Apple devices. This function is therefore only available on the PRO models of the iPhone 12 (or newer) and on the iPad Pro (2020 or newer).

Author:

Matthias Keim, dott.
Head of Development