Posts by: Stephen Mather

Better 3D all the time


Recent posts have detailed the forthcoming improvements to OpenDroneMap's digital elevation products, and we are really excited about them.

That code is now relatively mature and likely usable for your use case, with only a couple of caveats:

  • The first caveat is that the processing for point clouds takes 3-4 times as long. That said, the quality is so much better that I feel comfortable calling that a feature, not a bug.
  • The other caveat is that the code doesn’t yet respect the “--max-concurrency” parameter. It will use every processor you have, whether you want it to or not. We will be fixing that before we merge it into the master branch (a rough sketch of what respecting that limit might look like follows this list).
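As an illustration only (this is not the ODM code, just a minimal sketch of the behavior described above; the function names and the depthmap task are hypothetical), a worker pool that honors a max_concurrency setting instead of grabbing every core might look like this:

```python
# Hypothetical sketch: cap the number of worker processes at a user-supplied
# max_concurrency value instead of using every available core.
import multiprocessing


def compute_depthmap(image_path):
    # Placeholder for the real per-image depthmap computation.
    return image_path


def run_depthmaps(image_paths, max_concurrency=None):
    # Fall back to all cores only when no limit was requested.
    workers = max_concurrency or multiprocessing.cpu_count()
    with multiprocessing.Pool(processes=workers) as pool:
        return pool.map(compute_depthmap, image_paths)


if __name__ == "__main__":
    print(run_depthmaps(["img_0001.jpg", "img_0002.jpg"], max_concurrency=2))
```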

Those caveats aside, the quality of the resulting elevation model is almost incomparable, and the memory footprint of the improved dense point cloud step is about 60% of what it was, so processing larger datasets should become easier. Finally, while better and more detailed, the point clouds contain fewer points (about 1/3 to 1/4 as many), which may speed up subsequent steps, although I will confess this is an untested theory. Interestingly, even though there are fewer points, they are better distributed, so it looks like more points.

Digital Elevation Model Improvements, cont. with code!


Ok, so far what is available is broken code, but that will be fixed. You can check it out in the meantime in this pull request. It calculates the full depthmaps, but does not yet write the PLY file needed for subsequent steps.

The trick to improving DEMs was already in our build process: for elevation models, MVE’s dmrecon utility is a slower but otherwise better option than smvs, the utility we are currently using. It provides more detail, is less “melty” (as one person described smvs to me), and overall gives us much better results. A theoretical disadvantage is that smvs can find results where there aren’t features and thus does better gap filling. For drone mapping use cases, however, this remains a predominantly theoretical rather than practical limitation.
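For context only, here is a rough sketch of how the MVE tools mentioned above can be chained from Python. The scene directory, output paths, and scale flags are illustrative assumptions, not the actual OpenDroneMap integration:

```python
# Illustrative sketch of driving the MVE tools (makescene, sfmrecon, dmrecon,
# scene2pset) from Python. Paths and the -s/-F scale options are assumptions
# for demonstration; this is not OpenDroneMap's actual integration.
import subprocess


def mve_dense_reconstruction(image_dir, scene_dir, output_ply):
    # Build an MVE scene from the input images.
    subprocess.check_call(["makescene", "-i", image_dir, scene_dir])
    # Run structure from motion on the scene.
    subprocess.check_call(["sfmrecon", scene_dir])
    # Compute per-view depthmaps with dmrecon (the slower, more detailed option).
    subprocess.check_call(["dmrecon", "-s2", scene_dir])
    # Fuse the depthmaps into a dense point cloud PLY.
    subprocess.check_call(["scene2pset", "-F2", scene_dir, output_ply])


if __name__ == "__main__":
    mve_dense_reconstruction("images", "mve_scene", "dense_points.ply")
```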

Better Digital Elevation Models for everyone


We have some cool new-to-us approaches in the works for digital elevation models. Here’s a quick teaser and comparison old to new. This is a digital surface model over some buildings, fences, and trees in Dar es Salaam:

Blurry DSM image — Old approach


Nice DSM image — New approach

Water runs downhill


Animated GIF of nearshore water

Water running downhill is a challenge for elevation models derived from drone imagery. This is for a variety of reasons, some fixable, some unavoidable. The unavoidable ones include the challenges of deriving digital terrain models from photogrammetric point clouds, which do not penetrate vegetation the way LiDAR does. The fixable ones we seek to address in OpenDroneMap.

Animation of flow accumulation on a poorly merged terrain model, credit Petrasova et al., 2017


The fixable problems include poorly merged and misaligned elevation models. Dr. Anna Petrasova’s GRASS utility r.patch.smooth is one solution to this problem when merging synoptic, aerial-lidar-derived elevation models with patchy updates from UAVs.
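As a rough usage sketch only: the raster names below are hypothetical, and the parameter names follow my reading of the r.patch.smooth manual, so they should be checked against the module’s current documentation. Calling it from a GRASS Python session might look like this:

```python
# Rough sketch of invoking r.patch.smooth from a GRASS GIS Python session.
# Raster names are hypothetical; parameter names should be verified against
# the module's current manual page.
import grass.script as gs

# Fuse a UAV-derived patch into a broader lidar-derived elevation model,
# smoothing the transition along the overlap instead of leaving a hard seam.
gs.run_command(
    "r.patch.smooth",
    input_a="uav_dsm",        # higher-resolution patch (assumed name)
    input_b="lidar_dem",      # synoptic base elevation model (assumed name)
    output="fused_elevation",
    smooth_dist=10,           # blending distance; value is illustrative
)
```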

Another fix for problematic datasets is built into the nascent split-merge approach for OpenDroneMap. In short, that approach takes advantage of OpenSfM’s extremely accurate and efficient hybrid structure from motion (SfM) approach (a hybrid of incremental and global SfM) to find all the camera positions for the data, and then splits the data into chunks to process in sequence or in parallel (well, ok, the tooling isn’t there for the parallel part yet…). These chunks stay aligned through the process, which allows us to put them back together at the end.
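To make the idea concrete (this is a toy sketch, not the actual split-merge implementation; process_submodel is a hypothetical placeholder), splitting an already-aligned image set into chunks and processing them in sequence might look like:

```python
# Toy sketch of the split-then-process idea: the camera positions are solved
# once for the whole dataset, then the images are processed in smaller chunks.
# process_submodel is a hypothetical placeholder, not ODM's actual code.
def chunk(items, size):
    # Yield consecutive chunks of at most `size` items.
    for start in range(0, len(items), size):
        yield items[start:start + size]


def process_submodel(images):
    # Placeholder for dense matching, meshing, etc. on one chunk.
    return len(images)


if __name__ == "__main__":
    all_images = [f"img_{i:04d}.jpg" for i in range(1000)]
    # Because every chunk shares the same global camera solution, the
    # resulting products stay aligned and can be merged afterwards.
    results = [process_submodel(c) for c in chunk(all_images, 250)]
    print(results)
```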

For a while now, we’ve been in an awkward position with respect to the nickname split-merge: it lacks a lot of merge features. You can merge the orthophotos with it, but not the remaining datasets. While that will remain true for a while longer, we have been experimenting a bit with merging these split datasets (thanks to help from Dr. Petrasova). Here, for your viewing pleasure, is the first merged elevation model from the split-merge approach:

Digital surface model of Dar es Salaam, merged from 4 separately processed split-merge submodules.
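For readers who want to experiment with something similar, one simple way to mosaic already-aligned DSM tiles is GDAL from Python. The filenames are hypothetical, and this is plain mosaicking rather than the seam-smoothed blending of r.patch.smooth, so it is not necessarily how the image above was produced:

```python
# Simple mosaic of already-aligned DSM tiles with GDAL. Filenames are
# hypothetical; this is plain mosaicking, not seam-smoothed blending.
from osgeo import gdal

tiles = ["submodel_0_dsm.tif", "submodel_1_dsm.tif",
         "submodel_2_dsm.tif", "submodel_3_dsm.tif"]

# Warp all tiles into a single GeoTIFF; in overlapping areas, later tiles win.
gdal.Warp("merged_dsm.tif", tiles, format="GTiff")
```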

More to come!

Repost: Upcoming improvements to OpenDroneMap — Better everything.


Now that OpenDroneMap has a new website, we have a blog to go with it. For the first post, we will repost something from https://smathermather.com on the latest upcoming changes to OpenDroneMap:

Problem Overview:

Slide31

One of the greatest challenges with OpenDroneMap (ODM) is getting great results out of sparse data. I used to describe this as getting good data out of mediocre inputs, but this isn’t a fair descriptor, and here I’ll make a public apology: just because I have the time to fly with lots of overlap most of the time doesn’t mean it should be a requirement for everyone else. Ok. Now that that is off my chest, let’s talk about upcoming improvements which address this and more.

What does Better everything mean?

In the title, I say “Better everything”. The secret to better ODM data (and Piero Toffanin can take nearly all the credit for these improvements and for discovering how to achieve them) is to improve nearly every step in the pipeline. Let’s talk in general terms about what that means:

We can roughly break the pipeline into the following steps:

  1. Extract features
  2. Match features
  3. Structure from motion
  4. Dense matching / Multi-view stereo
  5. Meshing
  6. Texturing
  7. Final products
    1. Orthophoto
    2. Digital surface model
    3. etc.

There are other parts and pieces here — spatial referencing, undistorting of imagery, etc., but this is a good enough outline for discussion purposes.
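Purely as an illustration of the outline above (the stage names are placeholders for discussion, not ODM’s actual module names), the pipeline can be thought of as a fixed sequence of stages, each feeding the next:

```python
# Illustrative outline of the pipeline as a fixed sequence of stages.
# Stage names are placeholders for discussion, not ODM's actual modules.
def extract_features(data): return {**data, "features": True}
def match_features(data): return {**data, "matches": True}
def structure_from_motion(data): return {**data, "cameras": True}
def dense_matching(data): return {**data, "dense_points": True}
def meshing(data): return {**data, "mesh": True}
def texturing(data): return {**data, "textured_mesh": True}
def final_products(data): return {**data, "orthophoto": True, "dsm": True}

PIPELINE = [extract_features, match_features, structure_from_motion,
            dense_matching, meshing, texturing, final_products]


def run(images):
    # Each stage consumes the accumulated results of the previous ones.
    data = {"images": images}
    for stage in PIPELINE:
        data = stage(data)
    return data


if __name__ == "__main__":
    print(run(["img_0001.jpg", "img_0002.jpg"]))
```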

As it happens, our underlying process is pretty good for steps 1 through 3. OpenSfM is our library of choice, and it handles the SfM functions quite capably. At some point I would like to compare the positions of a really good RTK dataset to what we get from SfM, and maybe even do some testing with a synthetic dataset for a robust test of our SfM quality, but on the whole we don’t have notable deficiencies in this part of the tool.
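To sketch what such a comparison might look like (purely hypothetical data and a simple RMSE metric; this is not an existing ODM test, and real data would first need registration into a common coordinate system and camera order):

```python
# Hypothetical sketch: score SfM-estimated camera positions against
# RTK-surveyed positions with a simple root-mean-square error.
import numpy as np

# Assumed: both arrays are N x 3 (x, y, z) in the same coordinate system
# and the same camera order; the values below are made up for illustration.
rtk_positions = np.array([[0.0, 0.0, 100.0],
                          [10.0, 0.0, 100.0],
                          [20.0, 0.0, 100.0]])
sfm_positions = np.array([[0.1, -0.05, 100.2],
                          [10.05, 0.1, 99.9],
                          [19.9, 0.0, 100.1]])

errors = np.linalg.norm(sfm_positions - rtk_positions, axis=1)
rmse = float(np.sqrt(np.mean(errors ** 2)))
print(f"Per-camera errors (m): {errors}")
print(f"RMSE (m): {rmse:.3f}")
```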

Focusing on the parts to improve:

This just leaves us the other half… steps 4 through 7. The first step that is important here is dense matching. After a considerable amount of parameter testing, tuning, etc., we concluded that OpenSfM’s dense matching isn’t up to snuff yet. This isn’t necessarily a critique: it’s one of the younger parts of the toolchain, and more mature FOSS solutions are already out there.

Multi-view Stereo:

So, we look to improve our dense matching / multi-view stereo. We are getting some promising results by changing the underlying multi-view stereo library. These results compare favorably to Agisoft and Pix4D, two closed-source industry standards for UAV photogrammetry:

Slide27

Slide28

Meshing and Texturing:

Now, for steps 5 and 6, we need an improved meshing procedure and improved texturing in order to get great results in our final output. Here, we can evaluate with orthophotos as a final product:

Slide30

Slide31