Monday 22 June 2020

Plotting for Scientific and Engineering Applications Using Python



When learning anything to do with programming, it helps to have a motivating application that justifies the time and effort invested.

I have long found Python to be a valuable tool for spectrum analysis and image processing when developing remote sensing technology based on modified drone cameras equipped with customised filters.

Acquiring information-rich imagery, such as near-infrared imagery, involves developing and testing specialised optical filters for use in modified cameras.

The two filters developed for this purpose are essentially dual-bandpass filters, made of high-quality glass, with different transmission characteristics in the near-UV to blue band and the near-infrared band.

They are circular filters embedded in a custom housing for use with the Mavic 2 Pro Hasselblad Camera.

Filter #1



Filter #2



Measuring the spectra of the optical filters is critical for defining their application in the field.

We can measure the characteristic spectra using a visible-wavelength optical spectrum analyser (OSA) and read the data into a Python plot showing the dual-bandpass nature of the filter in two critical wavelength regions.
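As a rough sketch of how such a plot might be produced (the two-column data layout is an assumption about the OSA export, and synthetic data stands in for a real measurement file):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, saves the figure to a file
import matplotlib.pyplot as plt

# Synthetic stand-in for an OSA export: wavelength (nm) vs transmission (%).
# In practice one would load the real file with something like
# np.loadtxt("filter1_spectrum.csv", delimiter=",") (hypothetical filename).
wavelength = np.linspace(350, 1000, 651)
transmission = (80 * np.exp(-((wavelength - 420) / 30) ** 2)
                + 85 * np.exp(-((wavelength - 850) / 60) ** 2))
data = np.column_stack([wavelength, transmission])

fig, ax = plt.subplots()
ax.plot(data[:, 0], data[:, 1], color="navy")
ax.set_xlabel("Wavelength (nm)")
ax.set_ylabel("Transmission (%)")
ax.set_title("Dual-bandpass filter spectrum")
fig.savefig("filter_spectrum.png", dpi=150)
```

The two Gaussian bumps mimic the near-UV/blue and NIR passbands described above.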





We can overlay the two datasets from the two filters and compare them in the same wavelength regions.
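A minimal overlay sketch, again with synthetic spectra standing in for the two real OSA exports:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Synthetic spectra for the two filters; replace with np.loadtxt on the
# real OSA export files (filenames would be specific to your setup).
wavelength = np.linspace(350, 1000, 651)
filter1 = (80 * np.exp(-((wavelength - 420) / 30) ** 2)
           + 85 * np.exp(-((wavelength - 850) / 60) ** 2))
filter2 = (70 * np.exp(-((wavelength - 450) / 40) ** 2)
           + 90 * np.exp(-((wavelength - 880) / 50) ** 2))

fig, ax = plt.subplots()
ax.plot(wavelength, filter1, label="Filter #1")
ax.plot(wavelength, filter2, label="Filter #2")
ax.set_xlabel("Wavelength (nm)")
ax.set_ylabel("Transmission (%)")
ax.legend()
fig.savefig("filter_overlay.png", dpi=150)
```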



Both types of filter are for sale here:

https://www.ebay.ie/itm/155859816325

https://www.ebay.ie/itm/155859834544



We can also study the colour histogram of an image by comparing an image taken without a filter and an image taken using the different filter types.
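A per-channel histogram can be computed along these lines (a synthetic image stands in for a real drone photo, which one would load with e.g. plt.imread):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Synthetic 8-bit RGB image in place of a real photograph.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)

fig, ax = plt.subplots()
for channel, colour in enumerate(("red", "green", "blue")):
    # Count pixel values per channel over the full 0-255 range.
    counts, bin_edges = np.histogram(image[:, :, channel],
                                     bins=256, range=(0, 256))
    ax.plot(bin_edges[:-1], counts, color=colour, label=colour)
ax.set_xlabel("Pixel value")
ax.set_ylabel("Count")
ax.legend()
fig.savefig("colour_histogram.png", dpi=150)
```

Comparing these curves for filtered and unfiltered shots makes the channel shifts discussed below easy to see.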

With No Filter:





With Filter #2:



This information is useful for constructing radiation indices that identify quantifiable objects in an image, which is key to quantity surveying using remote sensing techniques.

As we can see, with the filter in place the green channel is greatly diminished, and we can leverage the amount of red detected (which is NIR with our filter) against the blue using our formula to obtain the modified NDVI, giving a way to quantify vegetation density in a photographed and/or mapped region.
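The blue-leveraged NDVI described here reduces to simple NumPy band arithmetic; a sketch with synthetic channels standing in for a real image's bands (the small epsilon is my own guard against division by zero, not part of the original formula):

```python
import numpy as np

# With the NIR-converted camera the Red channel records NIR, so the
# modified NDVI is (NIR - Blue) / (NIR + Blue).
rng = np.random.default_rng(1)
nir = rng.random((100, 100))   # Red channel = NIR
blue = rng.random((100, 100))  # Blue channel = visible

# Epsilon avoids division by zero on pixels where both channels are dark.
ndvi = (nir - blue) / (nir + blue + 1e-9)
```

Values near +1 indicate strong NIR reflectance relative to blue (dense vegetation); values near -1 indicate the opposite.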




We can also plot the red channel (our NIR) against the blue channel (our visible leverage) to get a characteristic soil and wet edge profile, a kind of correlogram, which is very useful as a remote classification tool.
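A minimal version of that scatter plot, with synthetic channel values in place of a real image (the loose correlation between the channels is fabricated purely for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Flattened per-pixel channel values; a real image would supply these.
rng = np.random.default_rng(2)
nir = rng.random(5000)
blue = 0.5 * nir + 0.2 * rng.random(5000)

fig, ax = plt.subplots()
ax.scatter(blue, nir, s=2, alpha=0.3)
ax.set_xlabel("Blue (visible)")
ax.set_ylabel("Red (NIR)")
fig.savefig("nir_vs_blue_scatter.png", dpi=150)
```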



The data in the scatter plot has a particular pattern, but it is too dense to interpret properly. One solution is to create a hexagonal bin plot.
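Matplotlib has hexagonal binning built in via hexbin; a sketch on the same kind of synthetic channel data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
nir = rng.random(5000)
blue = 0.5 * nir + 0.2 * rng.random(5000)

fig, ax = plt.subplots()
# Each hexagon's colour encodes how many pixels fall inside it.
hb = ax.hexbin(blue, nir, gridsize=40, cmap="viridis")
fig.colorbar(hb, ax=ax, label="Pixel count")
ax.set_xlabel("Blue (visible)")
ax.set_ylabel("Red (NIR)")
fig.savefig("nir_vs_blue_hexbin.png", dpi=150)
```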



We can tidy up our plotting further by using a Gaussian kernel density function. This also helps prepare the plot for further study, as we shall see.

To begin, we need to flatten the data from a 2D pixel array to a 1D array with one element per pixel (warning: this can take some time). The fastest way to do this is NumPy's ravel function, which reshapes the array and returns a view of it. ravel will often be many times faster than the similar function flatten, since no memory needs to be copied and the original array is not affected.*
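The ravel-then-shade step can be sketched with SciPy's gaussian_kde (synthetic 2D channel arrays stand in for the real image bands):

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# 2D pixel arrays standing in for the Red (NIR) and Blue channels.
rng = np.random.default_rng(3)
nir_2d = rng.random((50, 50))
blue_2d = rng.random((50, 50))

# ravel() returns a flattened view where possible, so unlike flatten()
# it usually avoids copying memory.
nir = nir_2d.ravel()
blue = blue_2d.ravel()

# Fit a Gaussian kernel density estimate to the 2D point cloud and
# evaluate it at every point, then use the density to shade the scatter.
points = np.vstack([blue, nir])
density = gaussian_kde(points)(points)

fig, ax = plt.subplots()
sc = ax.scatter(blue, nir, c=density, s=4, cmap="viridis")
fig.colorbar(sc, ax=ax, label="Density")
ax.set_xlabel("Blue (visible)")
ax.set_ylabel("Red (NIR)")
fig.savefig("kde_shaded_scatter.png", dpi=150)
```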

With the added shading we get a graph that is easier to view and work with overall:




With the ability to place a contour around certain regions of the plot, we can link the spectral content with the spatial content in our image analysis.

First we can define a threshold for our NDVI and plot the image with the NDVI levels above this threshold marked, if we wish.
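One way to mark the above-threshold pixels (the 0.3 cut-off is a hypothetical value, and synthetic channels stand in for a real image):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
nir = rng.random((80, 80))
blue = rng.random((80, 80))
ndvi = (nir - blue) / (nir + blue + 1e-9)

threshold = 0.3  # hypothetical cut-off for "vegetation"
mask = ndvi > threshold

fig, ax = plt.subplots()
im = ax.imshow(ndvi, cmap="RdYlGn", vmin=-1, vmax=1)
# Outline the above-threshold pixels on top of the NDVI map.
ax.contour(mask, levels=[0.5], colors="black", linewidths=0.5)
fig.colorbar(im, ax=ax, label="NDVI")
fig.savefig("ndvi_threshold.png", dpi=150)
```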



Next we can place a contour over the region of the NIR vs Blue spectral graph that the thresholded pixels correspond to.
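One hedged way to do this is to select the thresholded pixels, histogram them in NIR-Blue space, and contour that histogram over the scatter (synthetic data and a hypothetical 0.3 threshold again):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
nir = rng.random(4000)
blue = rng.random(4000)
ndvi = (nir - blue) / (nir + blue + 1e-9)
mask = ndvi > 0.3  # same hypothetical NDVI threshold as before

fig, ax = plt.subplots()
ax.scatter(blue[~mask], nir[~mask], s=2, color="grey", alpha=0.3)
ax.scatter(blue[mask], nir[mask], s=2, color="green", alpha=0.5,
           label="NDVI > 0.3")

# 2D histogram of the above-threshold pixels, contoured to outline the
# spectral region they occupy.
counts, xedges, yedges = np.histogram2d(blue[mask], nir[mask], bins=30)
xc = 0.5 * (xedges[:-1] + xedges[1:])
yc = 0.5 * (yedges[:-1] + yedges[1:])
ax.contour(xc, yc, counts.T, levels=3, colors="black", linewidths=0.7)
ax.set_xlabel("Blue (visible)")
ax.set_ylabel("Red (NIR)")
ax.legend()
fig.savefig("spectral_contour.png", dpi=150)
```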



We can interpret this ourselves and classify the spectral analysis using established knowledge of its features. This is the beginning of developing a kind of classification system based on this linked information.

We can also classify by other means, such as plotting the pixels located between the dry and wet edges to indicate where mixed vegetation could be.

Going further down this path leads to exploring other tools such as clustering algorithms, support vector machines, neural networks and the like, as we approach the cusp of machine learning with our spectral information, which is a valuable technology in and of itself. This deserves its own series of articles on combining drone-based remote sensing with machine vision tools, particularly some of the features in the QGIS toolboxes, such as Orfeo.

More interesting still, indices beyond the NDVI can be developed and studied using these scientific plotting techniques, with new filter designs used to explore new potential environmental markers. An ultraviolet-based environmental index using UV remote sensing cameras is already under development for land and marine applications, taking advantage of the ability of UV light to penetrate the air-water barrier and return useful reflectance image information; we will be exploring this in the near future.



As always the code is available on the GitHub page and is open for customisation and tinkering for each person's own needs: https://github.com/MuonRay/PythonScientificPlotting



Notes:

* See pages 42-43 of Numerical Python: A Practical Techniques Approach for Industry by Robert Johansson.

Saturday 13 June 2020

Drone NDVI Mapping with QGIS and Python Analysis Code




Code Link: https://github.com/MuonRay/QGISPython/blob/master/NDVIQGISWithMapLegend.py

In this video I showcase the creation and use of near-infrared (NIR) TIF orthomosaic datasets made using UAV (drone) photography and photogrammetry, which can be analysed with Normalised Difference Vegetation Index (NDVI) processing in the Python console in QGIS. This is comparable to the analysis done using satellite imagery such as Sentinel-2; however, using a 4K NIR-converted camera on a drone flying at a maximum height of 70 metres means we can get up to 1 cm pixel resolution on the ground, allowing for very accurate remote sensing of areas of interest.

QGIS itself is a free geomatics software package with a lot of functionality for creating custom code recipes to analyse datasets acquired using Earth-monitoring satellites and/or drones. There are many interesting add-ons for QGIS, ranging from simple scripts that allow cursory editing of a dataset to improve contrast, to more complex applications such as the Orfeo toolbox for machine learning. QGIS being open source means there is a large online community of professionals who use its features in research and industry alike and regularly update the many applications of this impressive piece of software.

Scripts that run in QGIS are written in Python with a syntax native to QGIS that calls its image processing libraries, which handle TIFs with greater ease than standalone Python scripts and avoid the lossy conversions experienced with standalone Python libraries when processing TIF data. There are also options to save the processed images with a defined dpi to preserve image quality when exporting completed maps to PNG or JPEG files for viewing outside the program.

I would highly recommend using QGIS in conjunction with drone imaging, and I am eager to explore some of its more in-depth applications further, in particular the classification potential of the Orfeo machine learning toolbox.

Tuesday 9 June 2020

Fusing 3D Modelling with NDVI in Python + VisualSFM + Meshlab



Here is an exercise in 3D image processing I performed using Near-infrared images processed into colormap NDVI, allowing me to create a 3D model of plants for use in 3D plant monitoring/health classification.

Near-infrared (NIR) reflectance images, as described before, can carry information about the health of plants, with healthy plant tissue reflecting more strongly in NIR as well as in green.



The NDVI formula leverages the NIR channel against the visibly reflected light; in an NIR-converted camera the red channel becomes NIR while blue and green remain visible. The dual-bandpass filter chosen separates the different colour channels; in the case of the filter I use, the separation is between the blue and NIR regions of the spectrum. Thus, in my Python code used to generate the NDVI, the blue channel is leveraged against NIR for more precise close-range NDVI.
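The colormap-graded NDVI images used as input to the 3D reconstruction can be sketched along these lines (synthetic data in place of the real DNG-derived bands, and the RdYlGn colormap is my own choice of "temperature scale", not necessarily the one used in the linked code):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Synthetic NIR-converted image: Red channel = NIR, Blue = visible.
rng = np.random.default_rng(6)
image = rng.random((60, 60, 3))
nir, blue = image[:, :, 0], image[:, :, 2]

# Blue-leveraged NDVI, with an epsilon guard against division by zero.
ndvi = (nir - blue) / (nir + blue + 1e-9)

fig, ax = plt.subplots()
im = ax.imshow(ndvi, cmap="RdYlGn", vmin=-1, vmax=1)
fig.colorbar(im, ax=ax, label="NDVI")
fig.savefig("ndvi_graded.png", dpi=150)
```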


This was created using (1) custom Python code to process the NIR reflectance images into graded NDVI temperature-scale images, (2) VisualSFM for point cloud and polygon generation, and (3) Meshlab to tidy up and display the polygon file.


An RGB reconstruction was also performed on a collection of standard images (captured using a phone camera) of the collection of plants scanned for comparison.

The NDVI 3D model was not a perfect reconstruction; however, it was cleaner in general than the RGB model and considerably faster to process in VisualSFM after the Python code had processed the input RAW (DNG format) images. It is relatively easy to see the distinction in the NDVI 3D model between the healthy vegetation and the background environment (the wooden decking the plants were placed upon). This has led me to think that this technique could be further developed for machine vision of plants in 3D, especially if the 3D model can be converted into a movie using a program like CMPMVS, which could then be plugged into a platform such as PyTorch or TensorFlow for use in plant health classification in 3D.

In any case, this was an interesting way to demonstrate the use of NDVI and NIR imaging in novel applications in the field of 3D photogrammetry and modelling, with the intent of creating datasets for future explorations in machine vision research.

Code available here: https://github.com/MuonRay/PythonNDVI/blob/master/ndvibatch.py