Drone-Collected
Stockpile Solutions

FAQ

  1. Dense Image Matching (DIM) using highly redundant images acquired from a low-flying small Unmanned Aerial System (sUAS). This is the method employed in the AV-900 MMK. The images are processed through an algorithm called Structure from Motion (SfM) that reconstructs a very dense three-dimensional (3D) point cloud as well as an orthophoto mosaic. The pros: personnel are out of harm’s way; the stockpiles are modeled very accurately since, unlike ground-based methods, no part of the pile is occluded from the data collector; the method is very fast; and the cost is generally lower than that of competing methods. The cons: some unusual surface types may not be amenable to the SfM algorithm (although we have not yet encountered this problem in our many tests).
  2. Traditional survey techniques, such as Real Time Kinematic (RTK) Global Navigation Satellite System (GNSS) surveying: There are no real pros except that the service is relatively easy to perform or contract. The cons: personnel are in harm’s way; the method is not very accurate, since only a small number of survey points are used to construct a 3D model; the tops of piles and interior sections are usually not well modeled because they are occluded; and the method is very labor intensive.
  3. Ground-based laser scanning: The pro is that it can be more accurate than traditional surveying because more sample points are included in the model. The cons are similar to those of traditional surveying: personnel are in harm’s way; the tops of piles and interior sections are usually not well modeled because they are occluded; and the method is very labor intensive (even more so than traditional surveying, since multiple scan locations must be occupied).
  4. Model reconstruction from ground-based images. In this method, some sort of scale is established (for example, placing two traffic cones a known distance apart) and a series of hand-collected photos (or video) is taken of each stockpile to be modeled. A popular cloud-based service can use the camera of an iPhone. The only real pro of this method is that the equipment needed for data collection is very inexpensive (although, in a cloud-based services model, the service itself can be rather expensive). The cons: The method provides only a relative model; that is, it is not tied to a real-world coordinate system. Thus, even if the volume computation were accurate, the location in the real world is unknown, which means that time series analysis cannot be performed and a site orthophoto cannot be generated. In addition, it has all of the cons of ground-based surveys: personnel are in harm’s way; the tops of piles and interior sections are usually not well modeled because they are occluded; the system’s dependence on cameras with low signal-to-noise ratios means that textures in shadow areas and on dark materials are not well modeled; and the method is very labor intensive.
  5. Aerial photogrammetry. This method uses manned aircraft to image the stockpiles for stereo model reconstruction. It has the advantages of keeping personnel out of harm’s way and of potentially being more accurate than ground-based methods (this depends on how many stereo points the service provider extracts). It has the disadvantage of high cost (as a service, and very high cost in an owner/operator model). It has the additional disadvantage of supplying only point and line features for elevation modeling, as opposed to a dense point cloud.
  6. Aerial laser scanning (LIDAR). This method uses a manned aircraft (or an sUAS) to collect a 3D point cloud of the stockpile area using an airborne laser scanner (LIDAR). The pros: personnel are out of harm’s way; if the data are densely collected, the results are quite accurate; and the data are absolutely positioned. The con is high cost (as a service, and very high cost in an owner/operator model).

A summary of the pros and cons of the various techniques is provided in the figure below:

  1. Volumetric Analysis (see the sketch following this list)
  2. A complete orthophoto mosaic of the site area
  3. Stockpile surface area
  4. Profiles and cross sections
  5. Contours
  6. Digital Elevation Models (DEM)
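
Conceptually, the volumetric analysis (item 1 above) is a surface-differencing operation: the dense point cloud is gridded into a surface model of the pile, a base ("toe") surface is constructed from the pile perimeter, and the volume is the sum of the per-cell height differences multiplied by the cell area. The following minimal sketch (Python/NumPy, with hypothetical file names and cell size) illustrates the arithmetic only; it is not the LP360 implementation.

    import numpy as np

    # Hypothetical inputs: two elevation grids covering the stockpile footprint,
    # sampled on the same grid with a 0.10 m cell spacing (assumed values).
    pile_surface = np.load("pile_dem.npy")   # stockpile surface elevations (m)
    base_surface = np.load("base_dem.npy")   # reconstructed base/toe elevations (m)
    cell_size = 0.10                         # grid spacing (m)

    # Height of material above the base in each cell; cells below the base are clipped to zero
    heights = np.clip(pile_surface - base_surface, 0.0, None)

    # Volume = sum over all cells of (cell area x height)
    volume_m3 = float(np.sum(heights) * cell_size ** 2)
    print(f"Stockpile volume: {volume_m3:.1f} cubic meters")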

In the workflow, we generate (or could generate) data in the following formats:

  • Point Cloud – LAS (could be converted to ASCII, of which PNEZD is a flavor; see the sketch after this list)
  • Control and Check Points – ASCII and 3D Shape (the shapefile is a very common format for point, line and polygon vector data that virtually all software can import)
  • Ortho Mosaic – TIFF, JPEG compressed TIFF (virtually all software can read these formats)
  • Volume Polygons – 3D Shape with volume results as attributes
  • Volume Data – DBF (this can be read directly into Excel and exported to any of the many formats Excel supports, such as CSV)
  • Cut/Fill analysis images – TIFF
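
As an illustration of the LAS-to-ASCII conversion mentioned for the point cloud above, the short sketch below uses the open-source laspy library to write a PNEZD (Point number, Northing, Easting, Z, Description) file. This is simply one way to perform the conversion outside of LP360; the file names and the fixed "GROUND" description code are assumptions.

    import laspy

    # Read a (hypothetical) point cloud exported from the workflow
    las = laspy.read("stockpile_site.las")

    # Write PNEZD records: Point number, Northing (Y), Easting (X), Z, Description
    with open("stockpile_site_pnezd.txt", "w") as out:
        for number, (x, y, z) in enumerate(zip(las.x, las.y, las.z), start=1):
            out.write(f"{number},{y:.3f},{x:.3f},{z:.3f},GROUND\n")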

Other data that LP360 sUAS can create and export:

  • Model Key Point (MKP) reduced surface model (point cloud thinned by an accuracy criterion) – LAS, ASCII
  • Gridded digital elevation model – ASCII, GeoTIFF float (nearly all software can read a GeoTIFF float file)
  • Various analysis images such as shaded relief, elevation aspect, elevation color maps and so forth - TIFF
  • Cross-sections – 3D Shape, DXF
  • General vector attributes – 3D Shape with attributes

If accurate ground control is used, the accuracy achievable with any of our MKs is as good as or better than that of controlled aerial photogrammetry. We routinely achieve accuracies on the order of 4 cm, both horizontally and vertically, even with difficult surface materials such as cold mix asphalt and kaolin.

We recommend a high-end laptop computer (so it can be used for field processing) containing an NVIDIA Graphics Processing Unit (GPU), which is used to accelerate the point cloud extraction processing. Our knowledge base includes a recommendation for a laptop that has performed very well for us; it costs about $2,750.

All drones supported by our mapping kits fly via an automated program for data collection. All systems include a set of automated reactions to critical events such as low battery, loss of telemetry link, exceeding a 3D geofence and so forth. Thus, no flying skills are needed for routine missions and most anomalies. The software process is quite straightforward. It is very helpful if the user has knowledge of basic mapping and is comfortable learning new computer software. Of course, a more sophisticated background helps when diagnosing problems. For example, an unskilled user could successfully process a routine project but would have difficulty with a root cause analysis for a problem such as vertical error exceeding a threshold. Many mines and quarries employ personnel with basic surveying skills; these persons are ideal for operating the MMKs and processing data.

The BYOD MK is a “bring your own drone” system. We offer web-based training (included with the BYOD MK) that covers flying the drone and using our supplied mission planning and control software.

The BYOD MK collection parameters depend, of course, on the drone you are using.

The total time for collection and processing, including mission planning, flying and data processing, ranges from 4 to 6 hours (excluding travel time) if a computer equivalent to our recommendation is employed.

The BYOD MK is a software only system. The first year of maintenance is included in the kit. The out-year maintenance is approximately 20% of the purchase price. It includes all software updates as well as access to monthly training webinars.

No problem. Check with us to see if your drone and camera are supported by our mission planning/control software. If so, the BYOD Mapping Kit is the right solution for you.

AirGon Reckon is a complete cloud-based ecosystem for hosting and delivering volumetric analysis and related site data to the end use customer.

The current version of Reckon (May 2015) provides storage for:

  • Orthomosaics of the site
  • Control Points
  • Volumetric polygon vectors
  • Redline markups (annotations)
  • General reports and documents (such as notes on a site and accuracy reports).

Reckon supports downloading volumetric report data as an Excel spreadsheet or as a printable PDF document. It also supports downloading of general files that have been stored with a site (accuracy reports, general notes, etc.).

All vector data (volumetric polygons, control points, etc.) and all files that have been stored in the “Reports” section of the site can be downloaded.

Yes, this is a very powerful feature of Reckon. Data can be designated as “temporal” when posted to Reckon. This allows you to view the changes of a site over time or go back in time to a point of interest to examine conditions at that time.

Yes. Reckon is hosted in Amazon Web Services (AWS). As such, it is outside the firewalls of all involved parties (the data publisher, the data consumer and AirGon). Sites of individual customers are completely isolated within AWS from one another. The system includes a self-service set of administration tools that allow end users to grant and revoke access to individual users. Reckon uses Amazon’s various storage systems for hosting data. These storage systems are maintained and backed up by Amazon. While loss of data is always a possibility, only the most sophisticated of IT organizations could match the integrity of the AWS infrastructure.

Absolutely. The two fundamental design principles behind Reckon are ease of use and scalability. Reckon is infinitely (well, within the constraints of AWS!) scalable.

Reckon includes tools for both the data publishers and the data consumers. These tools are very simple to use and are directly accessible via the web.

Reckon is a subscription service. The price is based on the number of sites that are being maintained by the customer and the total data stored by the customer.

The current pricing model is (this is subject to change):

  1. Level 1 – up to 25 GB of online data, up to 5 sites – $100/month
  2. Level 2 – up to 60 GB of online data, up to 25 sites – $200/month
  3. Level 3 – up to 100 GB of online data, up to 50 sites – $300/month
  4. Level 4 – up to 150 GB of online data, up to 75 sites – $400/month
  5. Level 5 – up to 200 GB of online data, unlimited sites – $500/month
  6. Level 5+ – each additional 50 GB above 200 GB adds $120/month

No. Reckon is hosted in Amazon Web Services and is available only as a subscription model.

The Reckon revenue sharing model is designed for service providers who are performing volumetric analysis (or site surveys) for end-use customers using the MMK and are authorized AirGon Service Providers. An example would be a surveyor who is performing volumetric analysis for a number of different mining/quarry companies. This service provider will receive a percentage of the monthly Reckon hosting fees being paid by the end-use customers. This is an excellent way for service providers to participate in annuity revenue without the overhead of managing their own storage and delivery systems.

Yes, a Service Provider can select from one of two billing methods for Reckon. If you wish to maintain all customer relationships, AirGon will bill you for Reckon and you, in turn, will bill your customers. This allows you to use Reckon in a variety of ways such as building the fees into an overall volumetrics service subscription. Your Reckon invoices will identify charges on a per customer basis.

If you do not want to become involved in the billing process, you can ask AirGon to directly bill your customers for Reckon services. You would still post data to the customer site and do other management actions on behalf of your customers but AirGon would directly bill the customer.

Absolutely. Reckon is a critical AirGon technology with a very active development team. We will be responsive to customer needs as Reckon continues to evolve. For example, we have already had requests to use Reckon to “close” the delivery cycle between the data collector and mine site operators (that is, to allow mine site operators to ‘order’ a volumetric analysis of specific areas of a site via Reckon).

Loki is a Global Navigation Satellite System (GNSS) Post-Processed Kinematic (PPK) direct geopositioning hardware and software solution for low-cost DJI drones as well as custom drones using digital single lens reflex (DSLR) cameras such as those from Nikon, Canon and Sony. It is aimed at the high accuracy drone mapping community.

Loki will greatly improve the accuracy of drone mapping projects. It allows a reduction in the number of ground control points needed to achieve a specified horizontal and vertical accuracy level. It can provide a reference to the geodetic network in circumstances where no ground control is possible.

The complete Loki kit is US $5,995. Until September 30th, 2017 it is available at an introductory price of US $4,995. Be aware, however, that demand for Loki is high. Orders will be processed based on the date we receive your firm order.

A limited number of pre-release systems will ship near the end of August, 2017. We expect to be shipping in limited quantities by the end of September, 2017.

Loki includes a one year return to factory warranty. This warranty does not cover damage to components caused by a crash nor does it cover cables (including the personality cable).

ASPSuite (the Loki post-processing software) is bundled with the Loki kit and includes 1 year of software updates. After the one year anniversary, you can purchase an extended service contract for Loki or a software update contract for the software only.

A full Loki support contract (the first year is included with the original Loki purchase) is US $1,200 per year. This covers the Loki controller (including the internal battery), the antenna and the ASPSuite software. Cables (including the personality cable) are not covered. The post-warranty support plan does not cover damage due to crashes.

The original Loki purchase includes all ASPSuite software updates for the first year. After that time, all updates are included if you purchase the Loki system extended warranty (US $1,200 per year). If you wish to cover the ASPSuite software only, the annual maintenance fee is US $600.

A Direct Geopositioning System (DGPS) computes, to centimeter-level accuracy, the location of a camera (or other sensor) at the exact time it acquires an image. The DGPS comprises a Global Navigation Satellite System (GNSS) receiver, a camera event marker input and a recording system (typically an SD memory card). Post-processing software is used to compute location information from the onboard DGPS log and encode the acquired images with the resultant X, Y and Z locations.
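
Conceptually, the post-processing step pairs each recorded exposure event time with a position interpolated from the processed GNSS trajectory. The sketch below (Python/NumPy, with invented numbers; not the ASPSuite implementation) shows the basic interpolation:

    import numpy as np

    # Hypothetical post-processed trajectory: GPS time (s) and position for each epoch
    traj_time = np.array([100.0, 100.2, 100.4, 100.6])                    # seconds
    traj_x = np.array([500000.10, 500000.85, 500001.60, 500002.35])       # easting (m)
    traj_y = np.array([3800000.00, 3800000.50, 3800001.00, 3800001.50])   # northing (m)
    traj_z = np.array([250.00, 250.02, 250.04, 250.06])                   # height (m)

    # Event marker times recorded for each exposure
    event_times = np.array([100.13, 100.47])

    # Interpolate the trajectory at each exposure time to get the camera positions
    cam_x = np.interp(event_times, traj_time, traj_x)
    cam_y = np.interp(event_times, traj_time, traj_y)
    cam_z = np.interp(event_times, traj_time, traj_z)

    for t, x, y, z in zip(event_times, cam_x, cam_y, cam_z):
        print(f"t = {t:.2f} s  X = {x:.3f}  Y = {y:.3f}  Z = {z:.3f}")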

High accuracy aerial mapping requires a tie to the geodetic network (the coordinate or spatial reference system). You are probably familiar with these as State Plane reference systems, Universal Transverse Mercator (UTM) and so forth. One of the common ways to tie an aerial mapping project to a spatial reference system (SRS) is through image-identifiable targets (so-called Ground Control Points, GCPs) whose locations are precisely determined using survey equipment. These ground control points are used in photogrammetric processing software to determine the location (and orientation) of each photo at the moment it was acquired. A Direct Geopositioning System (DGPS) augments this process by directly determining the camera position on board the drone at the time of image acquisition. Using a DGPS can reduce the amount of ground control needed for a given accuracy level. In some circumstances, a DGPS can eliminate the need for ground control entirely (although this requires great care).

The bottom line is that a DGPS improves accuracy and significantly reduces project field time (and hence, cost).

Global Navigation Satellite System (GNSS) Real Time Kinematic (RTK) positioning solutions use a technique called differential carrier phase GNSS to derive the location of a navigation “rover” to centimeter-level accuracy. The general principle is to place a survey grade GNSS “base” station at a precisely known location (there are straightforward techniques for determining the base station location). The base station determines what it thinks is its location using carrier phase GNSS and then computes the difference between its known location and this GNSS-computed location; this difference is the error. A similar GNSS receiver in close proximity to the base (say, 10 km or less) will experience essentially the same error as the base. By broadcasting this error from the base to the rover, the rover can apply the correction and compute its location to centimeter accuracy. Since the error “vector” is reported to the rover via a communication link, the moving (“kinematic”) rover can make the correction in “real time”, hence the name Real Time Kinematic GNSS.
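
As a highly simplified illustration of the correction arithmetic (a real RTK engine works with carrier-phase observables rather than final coordinates, so treat this purely as a conceptual sketch with invented numbers):

    # Conceptual sketch of the differential correction (illustrative numbers only)

    # Base station: surveyed (known) position vs. the position it computes from GNSS
    base_known = (500000.000, 3800000.000, 250.000)     # E, N, Z in meters
    base_computed = (500000.812, 3799999.410, 251.120)

    # Error observed at the base
    error = tuple(c - k for c, k in zip(base_computed, base_known))

    # A nearby rover experiences (approximately) the same error, so subtract it
    rover_computed = (500120.765, 3800045.322, 252.340)
    rover_corrected = tuple(r - e for r, e in zip(rover_computed, error))

    print("Error (E, N, Z):", tuple(round(e, 3) for e in error))
    print("Corrected rover position:", tuple(round(v, 3) for v in rover_corrected))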

Post-processed kinematic (PPK) GNSS positioning works exactly the same way as RTK (see “What is GNSS RTK?”) except that the error vector is recorded at the base station rather than transmitted to the rover. The rover’s position is corrected in a post-processing computation session rather than in real time.

It is really a question of the application. If you need to know the precise location of the rover while it is at a specific location, then RTK will be required. An example of this need would be the use of RTK for centimeter accuracy navigation. This could be a requirement for autonomous vehicle control such as autonomous machine control (AMC) of construction equipment. If discerning the location in the “back office” environment is sufficient, then PPK would be the correct choice. Figuring out the locations of photos for aerial mapping is a good example of an application where PPK is a good fit.

One of the inputs to an RTK or PPK solution is the exact position of the GNSS satellites at the time of the positioning operation (for example, the time a photo was taken). The positions of GNSS satellites are continuously updated from ground-based tracking stations. An RTK system can make use only of positioning data available at the time of data collection. A PPK system, on the other hand, makes use of data from before, during and after collection. These data are called “post-pass ephemeris” data. Since PPK is able to use these longer observations, it is, in general, more accurate than RTK.

A PPK system does not require a radio link from the base to the rover. This not only simplifies deployment but also improves reliability. A radio link typically requires line of sight from the base to the rover. If the drone moves such that a stockpile or building is between the base and the rover, the radio link may be lost, with the result that the rover’s location cannot be determined during the “blackout” period.

Some scheme for a base station is always required for differential GNSS computations such as DGPS. There are several options including:

  • A portable base station on a tripod mount. You should plan on this being in place for a minimum of 2 hours.
  • A permanent base station placed at the mapping site. This is a typical configuration for a mine site where a lot of surveying is performed or automatic machine control (AMC) is being used.
  • A Virtual Reference System (VRS). This is a subscription service that uses permanent base stations such as those of a Continuously Operating Reference System (CORS) network.
  • A remote base station such as a single CORS station. This configuration is typically the least accurate of the four choices.

Note that Loki currently requires a local base station – either portable or fixed. We are working to support VRS but this will be sometime in the future.

The position computed by the DGPS must be synchronized to each camera exposure if the camera location at the time of the photograph is to be precisely known. A camera specifically designed for airborne photogrammetric applications includes an output signal that sends a pulse when the camera shutter is half-way through snapping a picture. This signal is called a Mid Exposure Pulse (MEP). A DGPS has several “event marker” inputs that will record the precise time that a signal appears on one of the event markers. By routing a camera’s MEP output to a DGPS event marker input, the photos can be synchronized to the DGPS computed positions.
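
In the simplest case, the correlation is then just a matter of pairing the k-th recorded event with the k-th image in capture order. The short sketch below (Python, with invented file names and times; real workflows also sanity-check event times against the image EXIF timestamps) shows the idea:

    # Pair each recorded event with the corresponding image, assuming capture order
    event_times = [100.13, 100.47, 100.81]                      # seconds, from the DGPS event log
    images = ["DJI_0001.JPG", "DJI_0002.JPG", "DJI_0003.JPG"]   # sorted by capture order

    if len(event_times) != len(images):
        raise ValueError("Event count does not match image count; check for missed triggers")

    for name, t in zip(images, event_times):
        print(f"{name} exposed at GPS time {t:.2f} s")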

Strictly speaking, a digital single lens reflex (DSLR) camera is a camera with a mirror behind the lens that routes the image to a viewfinder. When the user presses the shutter release button, the mirror moves out of the way, exposing the digital sensor to the lens-formed image. It is the same as the (now old-fashioned) film-based single lens reflex camera, with the film replaced by a digital sensor. Newer prosumer digital cameras are “mirrorless” but are still often referred to as DSLRs. Unlike smaller consumer cameras and most DJI drone cameras, DSLR cameras have an attachment for a flash unit.

See the question “How does the DGPS know when the camera takes a picture?”. A Mid Exposure Pulse (MEP) is a signal sent from the camera to the DGPS each time the camera snaps a picture. It is called “Mid Exposure” because, ideally, the pulse occurs at the midpoint of the exposure. In reality, the exposure is usually sufficiently fast, and the drone speed sufficiently slow, that the point during the exposure at which we declare the event makes no material difference.
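
As a rough worked example with assumed but typical values: at a flight speed of 5 m/s and a shutter speed of 1/1000 s, the drone moves only 5 m/s × 0.001 s = 5 mm during the exposure, well below the centimeter-level accuracy of the positioning solution.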

A DSLR camera does not have a MEP output (see the question “How does the DGPS know when the camera takes a picture”). However, a DSLR does have the ability to add a flash strobe unit. We tap into the flash unit to detect when the camera has taken a picture by simply connecting to the camera “hot shoe” and switching on the flash of the camera.

Ah, that is the real secret sauce of Loki! The Phantom 4 Pro and X4s cameras do not have a Mid-Exposure Pulse (MEP) output, nor do they support any sort of external flash. To overcome this limitation, GeoCue/AirGon designed a hardware circuit that replaces the SD card of the drone. A custom computer on our SD card insert, called a Complex Programmable Logic Device (CPLD), “listens” to messages on the drone’s SD bus. From this message traffic, we are able to synthesize a MEP and send this signal to the Loki controller. GeoCue has a patent pending on this and a variety of other methods of synthesizing a MEP or correlating an effective MEP to other events.

Yes. For accurate aerial mapping, regardless of your use of direct geopositioning, the camera must be calibrated to achieve results of the highest possible accuracy.

There are generally two options for calibrating a mapping camera:

  • In Situ – this uses ground control points (GCP) that have been surveyed to known locations and the direct geopositioning information provided by Loki. Most Structure from Motion (SfM) processing packages such as PhotoScan Pro (which is a component of the AirGon Bring Your Own Drone Mapping Kit) and Pix4D include the capability of performing In Situ calibration.
  • Laboratory – A laboratory camera calibration consists of taking multiple images from different perspectives of a printed target (typically 1.2 m x 1.2 m). Calibration software is included with Agisoft PhotoScan and Pix4D Mapper.

No. Camera calibration instructions should be provided by the vendor you have selected for your Structure from Motion (SfM) software. If you selected the AirGon Bring Your Own Drone Mapping Kit (a very wise choice!) then the included 8 hours of BYOD Mapping Kit training does cover camera calibration.

The Loki kit includes everything you need to do direct geopositioning with a DJI Phantom 4 Pro, an Inspire 2 with X4s camera or a DSLR with a flash hot shoe. The components include:

  • The Loki System Controller (including the Septentrio AsteRx-m2 GNSS Engine)
  • Maxtena (M1227HCT-A2-SMA) L1/L2 GPS/GLONASS active GNSS antenna
  • A personality cable (you select DJI or DSLR, depending on your camera)
  • Controller to antenna cable
  • Charging/Data cable
  • Mounting kit for Phantom/Inspire (system is easily mounted to bespoke drones)
  • ASPSuite Post-Processing Software, Advanced Edition (1 Roaming license)
  • 2 hours of web-based ASPSuite training
  • 1st year software support

The Loki controller is self-contained, relying on an internal LiPo battery for power for both the controller and the (included) active antenna. This battery will power the Loki for approximately 4 hours. The battery is recharged when Loki is plugged into a computer for data transfer. On custom drones, Loki can be powered during flight via the USB-C connector.

On DJI drones using the SD card personality cable, Loki detects the power state of the drone and powers up/down accordingly. On DSLR configurations, Loki is powered up by a momentary switch on the Loki controller. It automatically powers down if 15 minutes have passed since the last MEP was sent to the controller.

There are two LED status lamps on the Loki controller:

  • Battery – Yellow when Loki is charging; green when Loki is plugged into USB power and fully charged.
  • Satellite Lock – Yellow when Loki is acquiring satellites; green when Loki has acquired sufficient satellites for differential carrier phase GNSS.

No. At this time, Loki requires a local base station. It does not currently function with a remote CORS station or with a Virtual Reference System (VRS). We do plan on offering VRS support at some point in the future.

The Loki ASPSuite post-processing software encodes the image EXIF data with the camera exterior orientation (the computed X, Y, Z position of the exposure station) and ancillary information for camera calibration (should you choose to supply this). The exterior orientation can also be output as a comma-separated values (CSV) file. The output of ASPSuite flows directly into Agisoft PhotoScan and Pix4D Mapper. We will soon have a seamless workflow to the DroneDeploy cloud-hosted processing system.
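
By way of illustration, an exterior orientation CSV for an SfM package typically looks something like the following (the column names, ordering and accuracy values shown here are hypothetical; consult the ASPSuite documentation for the exact layout it writes):

    image_name,easting_m,northing_m,elevation_m,horiz_acc_m,vert_acc_m
    DJI_0001.JPG,500120.765,3800045.322,252.340,0.020,0.030
    DJI_0002.JPG,500128.911,3800046.104,252.351,0.020,0.030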

The mass of the complete Loki system for a DJI kit, including antenna, mount, Loki controller and personality cable, is 220 grams.

Loki installs with mounting rods secured to the Phantom 4 Pro via removable rubber bands. The personality and antenna cables are secured in place with tape (included in the Loki kit). We recommend that the personality cable not be removed from the drone once it has been installed. No modifications to the Phantom 4 drone are required.

The Loki “personality cable” plugs into the SD card slot on the DJI Phantom 4 Pro and Inspire 2 drones. The personality cable has a built-in 32 GB storage system. Images are transferred from the drone to your processing computer using the computer-to-drone cable supplied with your DJI drone.

The Loki controller monitors the DJI drone via the personality cable. When the DJI is powered on, the Loki controller powers up. When the DJI is powered down, the Loki powers down.

Not nearly as dramatic as you might think. In a recent test, we flew a Phantom 4 Pro from fully charged to 15% remaining battery capacity in calm air and observed the following results:

  • Flight time without Loki = 24 minutes
  • Flight time with Loki = 20 minutes

This was an actual mapping mission, not a hover test. Thus we observed a 17% reduction in flight time (4 minutes lost on a 24-minute baseline) as a result of the added mass of the Loki. We have not yet performed the equivalent test on the Inspire 2 but expect the same or better performance.

In general, the Inspire 2 with X4s camera is a much better mapping platform than the Phantom 4 Pro, regardless of Loki. The Inspire 2 is much more wind resistant and can operate in colder weather. Due to its higher mass, it is a more stable platform and thus can remain on the planned flight lines in heavier wind conditions. That said, the Phantom 4 Pro works fine for mapping so long as you are willing to be limited to calmer air and warmer flying temperatures.

At the time of this first version of the FAQ (August 7, 2017), we have not tested this configuration. However, we did receive a new m200 with X4s camera last week and will soon be testing this configuration. Barring unforeseen issues, we think the m200 with X4s camera should work fine. However, if you are planning to use Loki on this configuration, you need to contact us so we can keep you up to date with our testing.

The Loki controller box includes mounting slots around all four edges of the base. You can fabricate a bracket or simply wire tie the controller onto a drone body mounting plate. The antenna includes a mounting bracket, mast and ground plane. The mast can be mounted in a standard GNSS antenna mast mount.

This version of Loki does not trigger the camera. Thus you will have to continue to use your current camera triggering mechanism.

Yes. Loki includes a “stackable” hot shoe connection. Our MEP signal is fed out of the stack by a PC cable of the same type used on a slave flash unit. This cable is included with the DSLR personality cable option.

The Loki DGPS workflow accepts raw GNSS data from a base station, GNSS positioning information from the Loki controller and raw images from the drone. These data, along with ephemeris data downloaded from the web, are used to refine the positions of the images to high accuracy DGPS values. This flow is accomplished within the AirGon Sensor Package Software Suite (ASPSuite). Thus for refined image coordinate computations, all required software is included with the Loki system.

Loki will support any image-to-point-cloud (Structure from Motion, SfM) software that can accept image locations encoded into the photo EXIF data or supplied as an ancillary Comma Separated Values (CSV) file. These include PhotoScan Pro, Pix4D and cloud platforms such as DroneDeploy.

The components of the flow are shown in the diagram below. These components are included in the combination of the Loki ASPSuite and the AirGon Bring Your Own Drone (BYOD) Mapping Kit. They include:

  • DJI Ground Station Pro – Flight planning and mission control
  • ASPSuite – Loki DGPS post-processing
  • Agisoft PhotoScan – Point cloud and image generation (as well as In Situ camera calibration)
  • LP360 – Accuracy assessment, data reprojection, data cleaning, data analysis, product creation