I had the pleasure a few weeks ago of attending the sibling conferences Understanding Risk West and Central Africa and State of the Map Africa in Côte d’Ivoire (Ivory Coast). They included a nice mix of formal discussions of how to reduce risk in light of climate change, as well as discussions of all aspects of OpenStreetMap efforts.
The conferences were hosted in two locales: one in Abidjan and the other in Grand Bassam, with a day of overlap of the two conferences in Abidjan.
Cristiano Giovando of Humanitarian OpenStreetMap Team and World Bank Global Facility for Disaster Reduction and Recovery, Olaf Veerman of Development Seed, and I (Stephen Mather, Cleveland Metroparks) led a workshop on drone mapping for resilience, focused on a workflow that went from flying to processing, uploading to OpenAerialMap, and digitizing in OpenStreetMap. It went seamlessly, and some of my jokes even translated well into French for the Francophones in the workshop.
Plenty of dancing occurred at the end of the days of hard work, and throughout the rest of the conferences I led drone demos and discussed approaches to drone mapping at breaks.
(Reposted from https://smathermather.com/2019/12/02/self-calibration-of-cameras-from-drone-flights-part-3/)
I have been giving a lot of thought to sustainable ways to handle self calibration of cameras without undue additional time added to flights. For many projects, I have the luxury of spending a little more time to collect more data, but for larger projects, this isn’t a sustainable model. In a couple of previous posts (this one and this one), we started to address this question, pulling from the newly updated OpenDroneMap docs to highlight the recommendations there.
As I have been thinking about these recommendations, there are other more efficient ways to accomplish the same goal. Enter the calibration flight: the idea is that with some cadence, we have dedicated flights at the same height and flight speed as the larger flight in order to estimate lens distortion.
For this testing, I chose a relatively flat but slightly undulating area in Ohio in the USA: the Oak Openings region, a lake-bottom clay lens overlain with sand dunes from glacial lakes. It has enough topography to be interesting, but is flat enough to be sensitive to poor elevation models.
The test area flown is ~80 acres of residences, woodlots, and farmland.
The area was flown with a DJI Mavic Pro, which has an uncalibrated lens with movable focus. The first question I wanted to address is how much distortion we get in our resultant elevation models if we just allow for self calibration. It turns out, we get a lot:
We have seen this in other datasets, but this forms a good baseline for our subsequent work to remove this.
Next step, we fly a calibration pattern. In this case, I plotted an area large enough to capture two passes of data, plus an orbit around the exterior of the area with the camera angled at 45° for a total of 3 minutes and 20 seconds.
When we process this data in OpenDroneMap, we can extract the cameras.json file (either in the processing directory or we can download from WebODM) and use that in another model. We can do this using the cameras parameter on the command line or in WebODM through uploading the json file from our calibration dataset.
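As a sketch of that workflow on the command line (the paths and project names here are hypothetical; the cameras option is the one described in the ODM documentation, but check your version):

```shell
# Step 1: process the calibration flight; ODM computes the lens
# parameters and writes them to cameras.json in the project folder
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets calibration-flight

# Step 2: process the main flight with the calibration flight's
# camera model held fixed instead of self-calibrating
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets \
  --cameras /datasets/calibration-flight/cameras.json \
  main-flight
```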
But, before we do that, let’s review our calibration data — process it and take a look at what kind of output we get. First, we process it using defaults and evaluate the elevation model to look for artifacts that might indicate that the calibration pattern wasn’t successful.
Our reference terrain model, from the Ohio Statewide Imagery Program elevation data, looks like this for our calibration area:
Note that this is mostly a moderately flat farm field with a road and small ditches running North/South in the west of the image and a deep Northeast Ohio Classic ditch in the east.
How does our data from our calibration flight look?
It’s not bad. We can see the basic structure of the landscape — from the road in the west to the gentle drop in elevation in the east.
Our default camera model is a perspective camera. How does this look with the Brown–Conrady camera model that Mapillary recently introduced into OpenSfM?
With the Brown–Conrady camera model, we see additional definition of the road bed, ditches alongside the road, and even furrows that have been ploughed into the field. For this small area, it appears the Brown–Conrady camera model is really improving our overall rendering of the digital terrain model, likely as a result of an improved structure from motion product. We even see the small rise in the field at the southern central part of the study area, and as with the default (perspective) model, the slope down toward the ditch on the east of the study area.
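For reference, switching between the two models is a single option in ODM. A sketch (the dataset name is hypothetical; the camera-lens option and its perspective/brown values are per the ODM docs):

```shell
# Default: self-calibrated perspective model
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets --camera-lens perspective calibration-flight

# Brown–Conrady model, recently added to OpenSfM by Mapillary
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets --camera-lens brown calibration-flight
```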
Having run these with both perspective and Brown–Conrady cameras, we can apply those camera models as fixed parameters for our larger area and see what kind of results we get.
Our absolute values aren’t correct (which we expect), but the relative shape is — the dataset is now appropriately relatively flat with clear delineation of some of the sand features. This is the goal, and we have achieved it with some of the most challenging data.
How does our Brown–Conrady calibration model turn out? It did so well at the small scale; will we see similar results over the larger area?
In this case, no: the Brown–Conrady model overcompensates for our distortion parameters. More tests need to be done in order to understand why. For now, I recommend using the perspective model for corrections on large datasets, and the Brown–Conrady camera model on smaller datasets where the details matter but the distortion isn’t discernible.
Camera calibration is a special challenge with commodity cameras. Temperature changes, vibrations, focus, and other factors can affect the derived parameters with substantial effects on resulting data. Automatic or self calibration is possible and desirable with drone flights, but depending on the flight pattern, automatic calibration may not remove all distortion from the resulting products. James and Robson (2014) in their paper Mitigating systematic error in topographic models derived from UAV and ground‐based image networks address how to minimize the distortion from self-calibration.
Bowling effect on point cloud over 13,000+ image dataset collected by World Bank Tanzania over the flood prone Msimbasi Basin, Dar es Salaam, Tanzania.
To mitigate this effect, there are a few options but the simplest to flight plan are as follows: fly two patterns separated by 20°, and rather than having a nadir (straight down pointing) camera, use one that tilts forward by 5°.
As this approach to flying can take longer than typical flights, a pilot or team can fly a small area using the above approach. OpenDroneMap will generate a calibration file called cameras.json that can then be imported to calibrate another flight that is flown more efficiently but, from a self calibration perspective, less accurately.
Vertically separated flight lines with the above interleaved 20° flight pattern also improve accuracy, but less so than a camera that is forward facing by 5°.
American Red Cross International Services Division continues their development of Portable OpenStreetMap, or POSM, in the latest 0.9 release. The concept is simple: use the full ecosystem of OpenStreetMap tools and sibling projects to deliver better geospatial data for aid and recovery wherever Red Cross goes.
In the case of integration of OpenDroneMap, the challenge has been one of compute power: how does one, for example, travel to a response in the Philippines, collect all the aerial data, and deliver those data to partners before departure?
The new answer? A passel of POSM. Well, we won’t do the story justice, so we’ll link out to the post:
In a previous post, we discussed distributed split-merge using ClusterODM, an API proxy for NodeODM and NodeMICMAC. This was an exciting development, but we’ve taken it further with autoscaling. Autoscaling will allow you to deploy as many instances as you need to get the processing done on the provider of your choice. This currently works for n=1 providers (DigitalOcean). Want to help with others? We have issues open for AWS and Scaleway, or open an issue for another provider you want to see.
This means no more parallel shells to get your data processed: just configure, send a job, and watch the data process.
How does this work? We need to configure a few pieces; I’ll give you the overview:
We need 3 elements: ClusterODM, a local NodeODM instance, and a configuration file so that ClusterODM can orchestrate deployment of the secondary servers.
Grab a decent size DigitalOcean machine to install this on, say something with 16 or 32 GB RAM. ClusterODM setup is easy: just follow these directions for dependencies. I’ll give you the crib notes: it’s just docker, docker-machine, and NodeJS.
We’ll need to deploy a NodeODM instance locally. This does the local processing and also lets ClusterODM know what API it is proxying. We need to run it on something other than the default port 3000:
docker run -d -p 3001:3001 opendronemap/nodeodm:smimprov --port 3001
Now that we have a NodeODM instance, let’s proxy it with ClusterODM. Let’s create a configuration file for our ClusterODM:
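As an illustrative sketch only (every field name and value below is an assumption modeled on ClusterODM’s DigitalOcean autoscaling setup; consult the ClusterODM README for the current schema, and substitute your own token, region, and droplet slugs):

```json
{
  "provider": "digitalocean",
  "accessToken": "YOUR_DIGITALOCEAN_API_TOKEN",
  "region": "nyc3",
  "s3": {
    "accessKey": "YOUR_SPACES_KEY",
    "secretKey": "YOUR_SPACES_SECRET",
    "endpoint": "nyc3.digitaloceanspaces.com",
    "bucket": "odm-results"
  },
  "imageSizeMapping": [
    { "maxImages": 50, "slug": "s-4vcpu-8gb" },
    { "maxImages": 500, "slug": "s-16vcpu-64gb" }
  ]
}
```

With something like that saved as configuration.json, ClusterODM can be started pointing at it (the README documents an --asr flag for the autoscaler configuration) and the local NodeODM instance registered as its node; jobs larger than the local node can handle then trigger droplet deployment.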
Code/community sprints have a fascinating energy. Below, we can see a bunch of folks laboring away at laptops scattered through the room at OSGeo’s 2019 Community Sprint, an exercise in the dance of introversion and extroversion, of code development and community collaboration.
A portion of the OpenDroneMap team is here for a bit working away at some interesting opportunities. Tonight, I want to highlight an extension to work mentioned earlier on split-merge: distributed split-merge. Distributed split-merge leverages a lot of existing work, as well as some novel and substantial engineering solving the problem of distributing the processing of larger datasets among multiple machines.
Image of the code sprint.
This is, after all, the current promise of Free and Open Source Software: scalability. But, while the licenses for FOSS allow for this, a fair amount of engineering goes into making this potential a reality. (HT Piero Toffanin / Masserano Labs) This also requires a new / modified project: ClusterODM, a rename and extension of MasseranoLabs NodeODM-proxy. It requires several new bits of tech to properly distribute, collect, redistribute, then recollect and reassemble all the products.
Piero Toffanin with parallel shells to set up multiple NodeODM instances
“Good evening, Mr. Briggs.”
The mission: To process 12,000 images over Dar es Salaam, Tanzania in 48 hours. Not 17 days. 2 days. To do this, we need 11 large machines (a primary node with 32 cores and 64GB RAM and 10 secondary nodes with 16 cores and 32GB RAM), and a way to distribute the tasks, align the tasks, and put everything back together. Something like this:
… just with a few more nodes.
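A sketch of what kicking off such a job looks like from the primary node (the hostname, split size, and dataset name here are hypothetical; sm-cluster is the option that points ODM’s split-merge at a ClusterODM instance):

```shell
# Split the 12,000 images into submodels and farm them out to the
# secondary NodeODM nodes behind the ClusterODM proxy
./run.sh --project-path /datasets \
  --split 1200 \
  --sm-cluster http://clusterodm.example.com:3000 \
  dar-es-salaam
```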
Piero Toffanin and India Johnson working on testing ClusterODM
This is the second dataset to be tested on distributed split-merge, and the largest to be processed in OpenDroneMap to a fully merged state. Honestly, we don’t know what will happen: will the pieces process successfully and successfully stitch back together into a seamless dataset? Time will tell.
For the record, the parallel shells were merely for NodeODM machine setup.
While the last decade has been dominated by the growing hegemony of the global base map, mapping will swing now for a while towards the principle of mapping the world, one organic pixel at a time. 2014 is the beginning of artisanal satellite mapping, where we discover the value in 1-inch pixels from personally and professionally flown unmanned aerial systems (drones). There is, as all things military-industrial, the dark side of drones. But as with all of these technologies, we will be discovering the great democratizing power of the artisanal, as applied to ‘satellite’ views.
So many FOSS options…
After making the 2014 prediction post and then deciding to start OpenDroneMap, I had a doubt: what if someone comes along and creates something better? What if something better already exists? What if there is no point to the work? And then I remembered all the above and relaxed. Also, if someone comes along and creates something better, in the informal parlance: yay for the world!
The reality when I started the project was that there was an existing FOSS photogrammetry project for drones and other small cameras: MICMAC. It’s exquisite: great quality, and fully FOSS under the CeCILL-B license (I first thought of CeCILL-B as a French version of the GPL, then of the Lesser GPL; it’s actually equivalent to a BSD license, but IANAFL [I Am Not A French Lawyer] and not completely sure that’s correct). At the time, though, it was difficult to use for non-French speakers.
MICMAC has evolved a lot since then, with better docs and community posts in both French and English; however, using it can still be a challenge. Free and Open Source is hard. It can be hard for users, and it can be hard for maintainers. So, it is such a relief when FOSS becomes ever so much easier.
So, it is with some excitement that I turn your attention to NodeMICMAC. NodeMICMAC is a fork of NodeODM, and thus provides web API access to MICMAC, in the same way that NodeODM does for OpenDroneMap’s command line ODM application.
NodeMICMAC makes using MICMAC really easy, and should slot into the OpenDroneMap ecosystem pretty seamlessly, thanks to JP and the folks at DroneMapper.
When we spoke with JP, I was curious about his motives: this is such a cool move that changes the industry. Why? The answer: for the same reasons that we work on OpenDroneMap, to grow this really cool open photogrammetric ecosystem.
(Sidenote: check out DroneMapper’s geoBits: ArUco Ground Control Point (GCP) Targets and Detection for Aerial Imagery — more on that later — so cool!).
But, you may ask, how does it stack up? Frighteningly well. If you have been paying attention to the improvements in OpenDroneMap the last couple of years, you may have noted orders of magnitude improvements in every step in processing. Even with these, MICMAC shows us how a mature photogrammetry project should display its wares.
Point cloud comparisons
ODM vs. MICMAC’s point cloud over a house:
ODM vs. MICMAC’s point cloud over a drainage:
Orthophoto over truck, fence, road, vegetation:
Orthophoto over a duplex house:
MICMAC is, as its reputation indicates, a bit of a grail. It gives us some very nice results, now simply at the expense of spinning up a docker instance. Watch this space over the next few weeks — Masserano Labs will be working with DroneMapper on integrating it into WebODM and quickly graduating it to a first class citizen of the OpenDroneMap ecosystem.
So, will NodeMICMAC replace NodeODM and all the work in ODM? Not so fast! There’s still space for what we’ve built. Remember my intro above? Of course you do. With upcoming capacity to handle massive datasets, NodeODM, PyODM, ODM, and other projects will still get our love. But as they say, if you can’t beat them, have them join you. Right? I think that’s the phrase… Perhaps MICMAC isn’t joining OpenDroneMap, but we will be happy to fork their code and contribute back where we can. Thanks to JP and DroneMapper for making this possible!
For anyone using OpenDroneMap to process really large datasets, some good news came through early last year with improvements to how OpenSfM handles large datasets. This came in the form of an innovative, first-of-its-kind hybrid SfM method which combines the better attributes of global and incremental SfM approaches.
TLDR: This helps make processing large datasets orders of magnitude faster, and can be tuned to be even faster (at some accuracy expense), which is really exciting and wonderful work by the folks (especially Pau) over at Mapillary.
But this change was not enough for OpenDroneMap. After the SfM step, we have several more processing steps, all challenges with respect to data processing techniques, memory usage, and other issues which crop up when you adapt a bunch of libraries meant for one thing to another set of much larger things.
Enter: split-merge. Since early last year, we have also had some scripts to help split up large datasets into smaller pieces, keeping those smaller pieces aligned with each other, and helping with some of the merging of those data back into a cohesive whole at the end. This was a great work-around for processing those larger datasets, but for a variety of reasons (funding and time being the big two), never got completed.
Now a big phase of that work is wrapping up. You can find that work in Pull Request #979. Do you have a really large dataset that needs processing on the ODM command line? Try the following:
./run.sh --project-path /path/to/datasets --split <target number of pictures per submodel> mydataset
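For example, a concrete (hypothetical) invocation splitting a dataset into submodels of roughly 400 images each:

```shell
# Submodels are processed separately, kept aligned with each other,
# and merged back into a cohesive whole at the end
./run.sh --project-path /path/to/datasets --split 400 mydataset
```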
Most of our blog posts on OpenDroneMap are meant for interested users. Every now and then we have a gem for our contributors / power users who like to dive into the code a bit and enhance things.
For any of you who have done this, or have wanted to in the past few years, you’ve had to deal with the Ecto framework, a dynamically configurable directed acyclic graph (DAG) processing framework that is way overkill for the way OpenDroneMap is really used and makes contributing to OpenDroneMap much harder.
Until now! Or. Until soon. Well, after pull request 979 gets into master (with a likely requisite update to version 0.6 for OpenDroneMap), ecto goes away. As usual, we can thank the prodigious Piero Toffanin for this work.
If you’ve been stymied by contributing to ODM in the recent few years, come back. ODM: Now with less ectoplasm!