In late 2020, DJI announced its first integrated lidar drone solution and a full-frame camera payload for aerial surveying: the Zenmuse L1 and the Zenmuse P1, respectively.

Integrating a Livox lidar module, a high-accuracy IMU, and a camera with a 1-inch CMOS sensor on a 3-axis stabilized gimbal, the L1 captures and generates point clouds in real time at 240,000 pts/s and features a detection range of 450 m (at 80% reflectivity). As an end-to-end solution, DJI's goal with the L1 is to generate precise 3D point cloud models with a focus on mapping.

Designed for photogrammetry flight missions, the P1 features a full-frame sensor with interchangeable fixed-focus lenses on a 3-axis stabilized gimbal, offering 3 cm horizontal and 5 cm vertical accuracy without ground control points (GCPs) and covering up to 3 km² in a single flight. Compared to DJI's current photogrammetry solution, the Phantom 4 RTK with its 20-megapixel sensor, the P1 and its 45-megapixel sensor deliver greater precision and accuracy.
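To put that sensor resolution in context, the standard photogrammetry relationship between flying height, pixel pitch, focal length, and ground sample distance (GSD) is easy to sketch. The numbers below are illustrative assumptions, not official P1 figures: a 100 m altitude, a 35 mm lens, and a ~4.4 µm pixel pitch (roughly what 45 megapixels across a 36 mm-wide full-frame sensor implies).

```python
# Illustrative ground-sample-distance (GSD) calculation; the parameter
# values are assumptions for a generic 45 MP full-frame setup, not P1 specs.
def gsd_m_per_px(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Ground footprint of one pixel: GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# ~4.4 um pixels, 35 mm lens, 100 m altitude
print(gsd_m_per_px(100.0, 4.4e-6, 35e-3))  # ~0.0126 m, i.e. ~1.3 cm per pixel
```

Smaller pixels at the same focal length and altitude mean a smaller GSD, which is where a higher-resolution sensor pays off.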

To understand what led DJI to develop these two new sensors, what their benefits are over other solutions, and which industry sectors or applications can take advantage of them, we connected with Arjun Menon, Engineering Manager at DJI.

Danielle Gagne: What were some of the identified challenges in the industry that led DJI to start working on building the Zenmuse L1 payload?

Arjun Menon: DJI’s initial interest in the L1 stemmed from robotics because, traditionally, lidar has been used quite extensively for localization and mapping in robotics. When we entered the lidar enterprise space, we started realizing that there was a lot of opportunity in lidar-based mapping solutions in the construction and surveying industries, because most of the available systems were quite expensive and quite fragmented in the way they were built. For example, the flight systems were different from the sensing systems, or the data management system was different from the communication between the ground station and the aircraft. So, we saw there was tremendous potential for us to simplify all this, and compared to the mapping solutions available today, a lot of potential for further innovation. With these learnings in mind, we set out to build a lidar-based mapping solution, applying some of the most advanced algorithms that we have built in robotics, and building a system with sensing, flight, communications, data management, and all the other subsystems vertically integrated. This was extremely critical for us, and we also wanted to make it available at a price point that was much more accessible than the competition. We are also looking to improve on some of the existing solutions out there in terms of features. For example, we introduced Live View as a standard feature, so you can now see a true-color point cloud as you fly the aircraft.

Can you talk me through some of the key distinguishing features of the Zenmuse L1 and Zenmuse P1 payload?

The L1 is a solution that captures 3D information inherently, because of the way the lidar sensor works: it uses lasers and time of flight to measure distance. Photogrammetry (the P1), by contrast, takes pictures, identifies common points between them, and triangulates those points to generate the depth information needed to create the 3D model. Because the L1 inherently captures this 3D information, it doesn't need the additional processing that images do to generate depth, so it is designed to be more efficient than the P1 in terms of processing time. Also, the L1 can map in low-light situations and in areas with heavy foliage, meaning that you can differentiate vegetation from the ground itself.
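The time-of-flight principle Menon mentions reduces to one line of arithmetic: range is half the laser pulse's round-trip time multiplied by the speed of light. A minimal sketch of that textbook relationship (not DJI's firmware):

```python
# Minimal sketch of the time-of-flight principle a lidar uses:
# range = (speed of light * round-trip time) / 2
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_tof(round_trip_seconds: float) -> float:
    """Distance to the target, from the laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~3 microseconds corresponds to ~450 m,
# roughly the L1's quoted maximum detection range.
print(range_from_tof(3e-6))  # ~449.7 m
```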

What are the benefits for end users of having an integrated payload option, especially in comparison to other methods on the market today?

The main advantage of an integrated payload solution is that you do not have to worry about managing multiple subsystems. What does this mean? It starts even with things like cable management. All the cable management between the sensors and the aircraft is completely taken care of by our latest Skyport 2.0, the connector that joins the sensing system to the aircraft. Likewise, things like communication and time synchronization between multiple sensors, such as the RTK module, the camera, and the lidar, are all managed within the payload. With Live View, 3D models generated and visualized in real time on the remote controller, flight management, and data capture all integrated within one system, you can focus primarily on capturing data.

DJI has mentioned this release being the democratization of lidar technology. Can you talk about what that means and how the Zenmuse L1 can enable access to users and markets in which lidar might not have been used as much before? How do you see this democratization of lidar impacting the drone industry?

Lidar technology was previously viewed as complex and quite hard to operate. It required experience to use these sensors, and the aircraft were traditionally large to make room for the bulky lidar systems on board. They were also quite expensive, so you rarely found more than one or two systems at most of the companies that used photogrammetry as their main way of mapping. The goal of the Zenmuse L1 was to make lidar technology accessible to surveyors and make it an important piece of equipment in the surveyor's toolbox. Our goal was to focus on simplicity, to reduce the complexity, to make it a convenient integrated solution, and to bring it in at a significantly more affordable price point. Once we paired this with our flagship model, the Matrice 300 RTK, which has its own set of powerful features, such as obstacle avoidance and autonomous flight missions, this package was aimed at end users who want to focus on the primary goal of mapping. When we say democratization of lidar technology, what we mean is that we want it to be widely accessible to folks who may not yet have explored lidar technology for their specific use cases.

Could this be a way for people who might have less knowledge about lidar to get access to the value that lidar can bring to their workflows? Or does it still require that surveying and mapping knowledge as a baseline?

Traditionally, you did need a bit of surveying and mapping knowledge, especially when it came to things like data management. Once you received the data, all the processing required very specific surveying and mapping tools that were quite common in the industry. It needed that experience of "how do we need to fly this to get the best output out of the system?" Then, data management required you to know the coordinate systems and what additional parameters you need to tune in these systems. With the Zenmuse L1, we are trying to keep the experience required minimal, or none at all, for you to explore this space. We try to keep it as standard as possible, so that while it is a new system, it's not something that requires a lot of prior experience.

More broadly, what industry sectors or applications did you envision the Zenmuse L1 and the Zenmuse P1 serving?

The Zenmuse L1 is being launched at a price point that is significantly more reasonable than the competition. With this combination of simplicity and ease of operation compared to previous products, we are hoping it will start to be used in new industries. For example, one big use case we see is in public safety, where accident reconstruction is one of those applications: if there's a need for a 3D model of the exact accident site, teams can map that area and get results as soon as possible. Another is disaster management, where the L1 can process large swaths of data much faster than photogrammetry solutions.

The Zenmuse L1 is also equipped with some powerful software and post-processing capabilities. Can you walk me through the workflow from start to finish?

Once the drone is set up and flying, you're able to monitor the 3D point clouds coming in through DJI's Pilot app. You can choose to have them colorized with the camera built into the payload and see the progress of the aircraft itself. Other information, such as the health of the aircraft, battery life, mission progress, and how much time is left, can also be checked in the DJI Pilot app. Once the aircraft has completed the mission, you can analyze it, see the full 3D model in the DJI Pilot app, and take coarse measurements of the area that was mapped. Let's say you want to see this data on a larger computer rather than a mobile app screen: you can export the data that was just captured in real time from the SD card and visualize it on a computer without any additional processing. With this data, you could run your own software on it and see what sort of coverage you have. The SD card that stores this live-processed data also includes the raw data collected by all the sensors during the flight, which is used for post-processing. You can then import this raw data into DJI Terra, which has all the algorithms and several other features to generate the 3D models in true-colorized form and export them to any industry-standard format.
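Once the cloud has been exported to an industry-standard format such as LAS, it can be handled with ordinary open-source tooling. Here is a minimal sketch using the laspy library; the file name is hypothetical and this is generic LAS handling, not DJI-specific software:

```python
# Minimal sketch: inspecting an exported LAS point cloud with laspy.
# "terra_export.las" is a hypothetical file name for a DJI Terra export.
import laspy
import numpy as np

las = laspy.read("terra_export.las")
print(f"{len(las.points)} points")

# Scaled, georeferenced coordinates
xyz = np.vstack([las.x, las.y, las.z]).T
print("extent (m):", xyz.max(axis=0) - xyz.min(axis=0))

# True-color channels are present when the cloud was colorized with the
# payload's RGB camera (LAS point formats 2, 3, 7, and 8 carry RGB).
if {"red", "green", "blue"} <= set(las.point_format.dimension_names):
    rgb = np.vstack([las.red, las.green, las.blue]).T
    print("mean color (16-bit):", rgb.mean(axis=0))
```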

Available for order in early 2021, the two sensors aim to bring new perspectives and greater efficiency at an affordable cost, without compromising the quality and accuracy of the data collected for precise aerial inspection and data collection missions. And while the U.S. Department of Commerce recently added DJI to its Entity List of companies that need a license for certain exports, that doesn't stop anyone from owning, buying, or using a DJI drone or its components.