Sunday 22 December 2019

NIR Drone Scanning of Ancient Megalithic Cairn Tombs at Loughcrew County...





This video shows test footage of the megalithic passage cairn tombs at Loughcrew in County Meath, Republic of Ireland, taken this Winter Solstice with an NIR-converted Hasselblad camera onboard the DJI Mavic Pro 2 drone.

Clusters of megalithic cairns are dotted around the Slieve na Caillaigh hills at Loughcrew. The main concentrations are on Carnbane East, where Cairn T is the centrepiece, and Carnbane West, where Cairn L is located.
The illumination of the passage and chamber at the Winter solstice sunrise in Newgrange is world famous. Less well known is the Equinox illumination at sunrise in Cairn T at Loughcrew. The backstone of the chamber is illuminated by a beam of light at sunrise on the Spring and Autumnal Equinoxes.

The sunlight is shaped by the stones of the entrance and passage and descends the backstone while moving from left to right, illuminating the solar symbols.

With drone-based NIR imaging combined with the Soil-Adjusted Vegetation Index (SAVI), it is possible to map certain artificial features in the topography of the land with respect to the vegetation distribution. This makes it possible to examine the scale of manmade features such as farmland, burial sites and perhaps settlements that have been buried for centuries and can be difficult to detect using visual maps alone.
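(For reference, the SAVI used here takes the form SAVI = ((NIR - VIS) * (1 + L)) / (NIR + VIS + L), where L is a soil-brightness correction factor; the index and our InfraBlue implementation of it are described in detail in the monitoring-techniques post below.)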

The SAVI index is one of many indices used by archaeologists in such research, and drone-based imaging offers a unique opportunity to test such techniques affordably and quickly, without the use of more expensive aircraft or satellite imaging. Moreover, the ease of deployment and the high quality of the imaging systems available for drones today make them superior to other, often more expensive forms of aerial surveying, a trend that will only strengthen as drone imaging and mapping advances.



Friday 15 November 2019

Herschel and the Physics of the Invisible Universe


Most of the universe is invisible to humankind. Moreover, most of it is invisible even to our modern equipment. As advanced and fine-tuned as they are at present, all of our detectors in the enterprise of physics can, even in theory, only ever see about 4% of the universe. The remaining 96% is essentially an invisible section of reality, made up of Dark Matter and Dark Energy. Even this distribution is merely based on what we now know about the structure of galaxies and their expansion; there is every reason to think there is even more hidden physics, perhaps forever beyond our reach.

We sometimes take it for granted that a lot of physics is invisible to the eye and to everyday experience, particularly when we consider that our modern technological world depends on the existence of countless invisible wireless signals.

However, the existence of invisible packets of energy acting at a distance would have been a completely occult notion at the time of Newton and Huygens, with their respective theories of light as quantifiable corpuscles and as waves. Both expanded on the earlier treatment of light as a system of rays, as described by Fermat, into a composite structure that could describe such phenomena as reflection, refraction and, of course, the visible spectrum produced by the interaction of white light with a prism.



Newton's corpuscular (or particle) theory of light, based on the view that light is made up of equally elastic particles, held sway for much of the 18th century, as it could explain refraction and reflection relatively well, up until the phenomenon of interference proved impossible to explain using particles alone.

The development of classical optics as described mathematically by Huygens, and then used by Fresnel in the Huygens-Fresnel wave theory of light, is necessary, among other things, to explain the observation that if a beam of light shines on a small round object, a bright spot appears at the center of its circular shadow, by means of constructive interference of the waves travelling around the object. Thomas Young's double-slit interference experiments further confirmed that a wave theory of light was necessary.



In any event, the notion of "action at a distance" when considering forces acting on particles of light or matter was deeply troubling to Newton, and he consciously developed his theories to eliminate what he considered to be non-existent forces. Newton's laws are at their core an empirical approximation of physics*(#1) and in a sense continue the work laid out by Galileo before him. Newton, using the calculus he invented, ran mathematical studies on how things move over time, and found a small set of rules which could be used to predict what would happen to, say, frictionless balls pushed from a state of rest, and how inertial masses create equal and opposite motion. The laws "work" in effect because they are effective at predicting the visible universe, which was assumed by scientists of the 17th century to be analogous to a clock with a perfectly visible and quantifiable clockwork.

Newton and other contemporaries such as Huygens did not examine physical laws which were in a sense "invisible". Newton himself could not explain the origin of gravity in his theory, and in a way his physics was a bookkeeping device that replaced the explanation of a thing with its function, making predictions of the thing without having to explain the thing itself. Forces and objects are treated simply as labels, F12 acting on M1 and F21 on M2, so that F12 = -F21, for example:
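A minimal sketch of this bookkeeping, using the notation above:

F12 = -F21, and hence M1*a1 = -M2*a2

For gravity, Newton supplied only the functional form F = G*M1*M2/r^2, which is enough to predict the motion of the two masses while saying nothing about what gravity itself is.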



Huygens likewise did not explain the origin of the waves in his wave theory of light. That needed much more work in the development of electromagnetism, and it was not until nearly the end of the 19th century that the invisible forces of electricity and magnetism, acting in unison, were shown to give rise to the waves of light itself, as described by James Clerk Maxwell.

Nevertheless, Newton's theories were, and still are, very successful in applications; however, it must be said that they lack explanatory power regarding the nature of the forces themselves. The forces are still, in effect, "invisible" in this arrangement.*(#2)

Modern science's investigation into the physics of the truly "invisible" really began with the famous 18th to 19th century astronomer William Herschel.

Sir William Herschel
bust by John Charles Lochée
1787

William Herschel was born on November 15th, 1738 in Hanover, Germany. Herschel performed his extensive scientific work in Britain, to which he migrated when he was 19 to avoid military service during the infamous Seven Years' War. Living in Britain, the young Herschel was able to earn a living as a skilled musician and composer for 15 years, during which his sister Caroline Herschel joined him in Britain and became a skilled musician, mathematician and astronomer in her own right. Both siblings' work in astronomy was remarkably thorough, dedicated and careful. However, we shall see that William was not so careful as to avoid the spark of true discovery that can often lie hidden just orthogonal to established knowledge.

Beginning his work in astronomy, William Herschel constructed his primary Newtonian telescope in the back garden of his home in Bath. He built it with the specific mission of studying binary star systems, observing the proper motion of stars over many years with his sister Caroline, and this singular focus led to a plethora of discoveries along the way during his initial years of astronomical study, between 1779 and 1792. He soon discovered many more binary and multiple stars than expected, and compiled them, with careful measurements of their relative positions, in two catalogs presented regularly to the Royal Society in London.

Artist's rendition of a binary star system





In 1797, Herschel measured many of the systems again, and discovered changes in their relative positions that could not be attributed to the parallax caused by the Earth's orbit.

He waited until 1802 to announce the hypothesis that the two stars might be "binary sidereal systems" orbiting under mutual gravitational attraction around a barycenter, essentially a point in empty space, a hypothesis he confirmed in 1803.

Such a discovery was remarkable at the time. Herschel himself had been influenced by the writings of his fellow astronomer John Michell, who in 1783 proposed the concept of "Dark Stars", essentially invisible stars so massive that light cannot escape them. Michell was also supportive of the theory of binary systems of stars orbiting around mutual gravitational centers, and perhaps influenced a connection between the two ideas.




In between this work, while looking for binary star systems, Herschel systematically reexamined objects discovered previously by Charles Messier and cataloged (but largely unclassified) in his famous catalog. Messier was focused on searching for comets and so, understandably, did not spend much time on the objects he had first classified in a system of numbers we still use today for most objects seen in the Northern skies; M31, for example, is the Andromeda Galaxy.

Herschel discovered that many objects in the Messier catalog, seen initially as stars, were in fact clusters of stars or nebulae. But one object he found, which had not been recorded by Messier, turned out to be the planet Uranus. This was a significant discovery, as it was the first planet discovered since ancient times, all planets up to Saturn having been known since at least the Ancient Greeks and most probably earlier. It also helps put to rest the notion that there is some hidden geometric mysticism to the planetary arrangements: after all, what astrologer or psychic ever had anything to say before about the 7th planet!


This discovery also set Herschel up with the opportunity to pursue astronomy, and by extension physics in general, as a full-time profession. Herschel, somewhat obsequiously, called the new planet the "Georgian star" (Georgium sidus) after King George III, which predictably brought him favor with the King; the name did not stick, particularly internationally. In any event, the King offered William a constant stipend of 200 pounds a year, less than he could have earned as a full-time musician according to records, but a position that Herschel took immediately! Caroline was eventually granted 50 pounds per year as William's assistant, making her the first woman in recorded history to receive a salary as an astronomer.

William and Caroline, now both in the employ of King George III, were able to conduct astronomy on a scale that saw the largest telescopes of their time being constructed, such as Herschel's famous 20-foot telescope with its 18.7-inch aperture.

Herschel's 20 Foot Telescope

It could be argued this began the age of large-scale telescope construction. Progress in ground-based astronomy has largely been a matter of building larger individual telescopes (or baselines of multiple telescopes) in better locations, a pattern that holds right up to the present day.

During his career, Herschel was as renowned as a maker of telescopes as he was as an astronomer, and he oversaw the construction of several hundred instruments, which were sold in England, Ireland and on the European continent. Using even larger telescopes, such as the 40-foot telescope built in 1789, William was able to catalog many more deep-sky objects, creating the first surveys of the Milky Way and discovering that many of the deep-sky objects were so-called "spiral nebulae", really galaxies, although the notion of a galaxy was unknown to astronomers of that time.



Caroline was also given a telescope, constructed by William, similar to the one with which he had discovered Uranus. With this telescope Caroline discovered 8 comets and over a dozen “nebulae” between 1783 and 1797.

These included the spiral galaxy NGC 253 and the elliptical galaxy M110, one of the satellite galaxies of Andromeda. Caroline was among the first women to be recognized for such work at the Royal Society in London.

During the rest of his life, William Herschel produced lists of thousands of nebulae and star clusters, and he was the first to distinguish between distant clusters and dusty nebulae.



For all the large telescopes he designed, William Herschel also worked with smaller telescopes, in this case to examine the spectrum of stars and of the Sun itself. Herschel at first used color filters to separate the different bands of visible light from astronomical objects, a technique still used in amateur astronomy today to highlight certain features.

A Color Filter that Exposes the Near-Infrared and Near-Ultraviolet from Sunlight

In his observations of the Sun, in February 1800, Herschel described how the various colored filters through which he observed the Sun allowed different levels of heat to pass. The energy output of a 5700-5800K star such as our Sun is greater in the visible wavelengths than in the infrared, and most of the visible light should pass roughly equally through the atmosphere. How, then, could the redder wavelengths feel hotter?

He performed a simple experiment to study the 'heating powers of colored rays'. Like Newton over a century before, he split the sunlight with a glass prism into its constituent colours, but this time he made the next important step: using a thermometer as a detector, he measured the temperature of each colour.



Herschel observed an increase in temperature as he moved the thermometer from the violet to the red part of the spectral 'rainbow'.


The result is due to the fact that the blue wavelengths in the classic Herschel experiment are more strongly scattered, hence more "spread out", than the redder wavelengths, and thus more energy reaches the thermometer measuring the red end of the spectrum.

This experiment, given the limits of technology at the time, could not measure the discrete energy in each color individually but at the very least Herschel provided a kind of reference frame between different regions of the spectrum, with respect to their indices of refraction by the prism, and their temperatures.

Today, we can measure the discrete energies of wavelengths of light, independent of the effects of light scattering on the intensity. Thinking in modern terms, blue light is in fact more energetic than red light in terms of the excitation energy of electrons, even though it registered a lower "temperature" in Herschel's measurement.
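To put numbers on it: a photon's energy is E = hc/lambda, so a blue photon at roughly 450nm carries about 650/450, i.e. roughly 1.4 times, the energy of a red photon at roughly 650nm, regardless of how much total power happens to land on a thermometer at either end of the spectrum.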



Herschel also measured temperatures in the region just beyond the red color, where no light was visible, as a control. He expected the thermometer merely to measure the ambient temperature of the room and, to his surprise, it recorded an even higher temperature there, in apparent "shade". He deduced the presence of invisible 'calorific' rays, now called 'infrared' radiation.

An underlying understanding of the physics involved makes a big difference in how to interpret the results. The concept of heat had been known since prehistory, as had the fact that it travels through the air from a flame or heated object. What Herschel discovered was subtler than the existence of an invisible heat radiation: he found the first solid evidence that light and infrared are the same quantity, what we know today to be electromagnetic radiation, a critical example of scientific reductionism.

Through a series of simple experiments, Herschel found the first piece of one of the great puzzles of physics, one that would take another century and the formulation of quantum theory to solve: objects that emit light through heating must emit energy in discrete packets, quanta, in order for the visible world to make sense, as in Max Planck's theory of radiation, which resolved the absurdities of the so-called "ultraviolet catastrophe".

The Planck Black Body Radiation Curve
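For reference, Planck's law for the spectral radiance of a black body at temperature T is

B(lambda, T) = (2hc^2 / lambda^5) * 1 / (exp(hc / (lambda * kB * T)) - 1)

and it is the quantization hypothesis E = h*nu inside the exponential that tames the short-wavelength divergence of the classical Rayleigh-Jeans prediction.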


But perhaps the greatest lesson to be learned from the discovery of the infrared part of the spectrum is how it was discovered. Let us remember: it was not by complete accident. Herschel was thinking about the problem and searching for it the right way, based on an exploitation of the known principles of light, some of which he discovered (i.e. light filtered red being hotter than blue). However, the discovery was made by a detector system that was not finely tuned and was exploring part of the answer in what would appear, at first glance, to be not a local optimum location, but which turned out to be the global optimum of the search!

What does it really mean, then, when we can detect something that is a genuine discovery using a detector system that is not set up properly? Does it mean that although we may set out to find something to prove (or should I say disprove!) a theory, even if we set up our equipment 100% correctly (with that careful precision physicists are most famous for), we may not be guaranteed to find anything, because what we are looking for is, in a sense, just a little nudge to the right or to the left?

In the history of science, Herschel's discovery of IR radiation is in great company, with the discovery of the CMB as a similar example. I cannot say it holds for all cases, but it is sometimes the case that an unfocused or just downright wrong setup can give us real jewels of scientific knowledge.









Extra Points:



*#1
This is particularly evident with Newton's third law. Newton in his time never could explain the origins of the forces that governed the laws he discovered, nor could Huygens. Without specifying the nature or origin of the forces on the two masses, Newton's 3rd law assumed they arise from the two masses themselves and that they must be equal in magnitude but opposite in direction, so that no net force arises from purely internal forces. Physics is then reduced to a kind of balancing act between the two opposing masses.

Newton's Third Law is, in reality, a meta-law (that is, a higher-level law invoking the existence of a more fundamental law) and we find, through Noether's Theorem, that it arises as a direct consequence of conservation of momentum, which essentially says that in an isolated system with no net force on it, momentum doesn't change. This means that if you change the momentum of one object, then another object's momentum must change in the opposite direction to preserve the total. Forces cause changes in momentum, so every force must have an opposite reaction force.

But now we may ask:
Why is momentum conserved?
Conservation of momentum comes from an idea called Noether's Theorem, which states that conservation laws in general are a direct result of the symmetries of physical systems. In particular, conservation of momentum comes from translation symmetry. This means that the behavior of a system doesn't change if you select a new origin for your coordinates (put differently, the system's behavior depends only on the positions of its components relative to each other). This symmetry is found in all isolated systems with no net force on them because it is effectively a symmetry of space itself. Translation symmetry of a system is a consequence of homogeneity of space, which means that space is "the same everywhere" - the length of a rod doesn't change if you put it in a different place.
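A minimal sketch of the argument, in Lagrangian terms (assuming the standard Noether machinery): if the Lagrangian L(q1, ..., qn) and the velocities are unchanged when every coordinate is shifted by the same small amount (qi -> qi + e), then the partial derivatives dL/dqi sum to zero, and the Euler-Lagrange equations turn that into d/dt (p1 + p2 + ... + pn) = 0: the total momentum is conserved. For two interacting particles this is exactly dp1/dt = -dp2/dt, which is F12 = -F21 again.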
But now we may ask:
Why is space homogeneous?
In classical mechanics, this is one of the basic assumptions that allows us to do anything. It basically says that the laws of physics, as expressed in terms of Cartesian coordinates, are the same no matter where you put the origin (the coordinate system and its origin being an arbitrary, artificial invention). Noether's theorem then mathematically implies that momentum must be conserved under this condition.
In reality, according to General Relativity, space actually isn't homogeneous - it curves in the presence of massive objects. But usually it's close enough to homogeneous that classical mechanics works well (gravity is pretty spectacularly weak, after all), and so the assumption of homogeneity holds.


*#2 
In Newton's contemporary world, the forces of magnetism and static electricity (through rubbing fur on amber and so forth) were known, albeit in nascent form, to the ancient Greeks and the Chinese too. This knowledge was pure empiricism with very little theory behind it, and it could have been interpreted as an example of "action at a distance" in which some force overcomes the balance of Newton's third law with regard to the quantity of mass alone. Newton had no concept of charge or magnetic field strength, and nor would anyone until well into the 19th century.

It seems that Newton did not pay much attention to incorporating the concepts of magnetism or charge into his description of the physical world. He explicitly mentions electricity in his Opticks, and he seems to have used magnets in experimental studies. That he didn't go very far with this is not particularly surprising: forces between permanent magnets require a calculus-based treatment of electric and magnetic fields before Newton's laws can usefully be applied, and the concept of opposing magnetic poles and electrical charges was not well quantified in Newton's time, so it would have appeared somewhat occult to him.

Nevertheless, in the famous final paragraph of the Scholium Generale that Newton added to the second edition of the Principia, published in 1713, he wrote of "a certain most subtle spirit which pervades and lies hid in all gross bodies." It was this active spirit that gave rise, he supposed, to the electrical attractions and repulsions that manifested themselves at sensible distances from most bodies after they had been rubbed, as well as to the cohesion of particles when contiguous. In addition, he surmised, it was the agency responsible for the emission, reflection, refraction, inflection and heating effects of light; and by its vibrations in "the solid filaments of the nerves," it carried sensations to the brain, and commands of the will from the brain to the muscles in order to bring about bodily motion.

Friday 1 November 2019

Standard Environmental Drone Monitoring Techniques: NDVI, ENDVI and SAVI




With a standard filter we have demonstrated the use of a drone, in a single deployment, to create Near-Infrared images.

The next step is to create indices which are important for specific applications in environmental monitoring. Of particular interest is the monitoring of regions which are vulnerable to natural and human-induced disasters such as forest fires.




We are also interested in comparing regions based on the amount of soil exposed due to factors such as grazing, human activity, etc.

To do this we must consider that there are many environmental indices to choose from in the field of NIR monitoring (and even more with other spectral bands).

Using experimental flights taken with an NIR camera using a bandpass filter with the following spectrum:


we have identified 3 important classification indices to consider when using standard NIR filters on converted IR cameras deployed on a drone for environmental monitoring.

NDVI - Normalised Differential Vegetation Index

We have discussed NDVI before on this blog and will not go through it in detail again; however, it is important to point out the differences between the satellite use of NDVI and the drone/aerial monitoring implementation of NDVI, which use different but essentially equivalent formulas.

With satellite imagery we use an NDVI formula which gives the best possible leveraging between a single band in the visible spectrum and the NIR band. In an RGB image of the Earth's surface taken by a satellite, the blue portion of the visible spectrum (roughly 400nm-450nm) is diminished the most significantly by the atmosphere of the Earth, which scatters blue light more strongly than green or red. Hence, water aside*, features of the surface which reflect blue are relatively diminished in intensity, and most satellite cameras used in Earth observation of vegetation ignore the blue region altogether and focus on Red, Green and several key IR bands.

Therefore, for satellite imagery, we use what has come to be known as the standard NDVI formula:
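NDVI = (NIR - Red) / (NIR + Red)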


With the drone imaging we perform, we can use a different but equivalent formula which gives a far better leveraging between NIR and the Blue channel for several reasons.


  • There is less atmosphere between the surface feature being examined and the drone's CCD sensor, so blue-light scattering is not a significant factor
  • Some vegetation is in fact red when healthy, due to pigments. Blue, however, is not reflected by healthy plants, as chlorophyll A and B both have prominent absorption in the blue region of the visible spectrum.
  • New CCD sensors used in drones (such as the Hasselblad) are generally designed to have equal sensitivity between the Red and Blue bands 
  • The Red band can be used as the NIR band in a converted NIR camera, making for easier data acquisition with existing cameras

So for drone imaging we use the following NDVI formula, sometimes called the InfraBlue NDVI formula:
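NDVI = (NIR - Blue) / (NIR + Blue)

(Here the NIR signal is recorded in the converted camera's red channel, so in the code this appears as (R - B)./(R + B).)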



Using our code in Matlab (see Appendix) from the following Near-Infrared DNG image



We process the following NDVI image



We can also use similar code in Python that does essentially the same processing, just with a different colormap scale:



At this point it is largely a matter of personal preference which tool to use, Matlab or Python. Python tends to be better for preserving bit depth for use in further applications, which we will discuss in a future article on 3D reconstruction from NDVI-processed images. I also prefer the colormaps that I have been able to create in Python myself, as they provide a much sharper definition of the different NDVI scales in an image overall.

*(Water can reflect blue light at higher intensity because it is, in effect, reflecting the color of the sky, which is itself blue because of blue light being scattered in the first place.)

ENDVI - Enhanced Normalised Differential Vegetation Index



Extended Normalized Difference Vegetation Index (ENDVI) data gathered for vegetative health monitoring can be used to provide similar, but spectrally different, information as compared to traditional NDVI data.

Soil background, differing atmospheric conditions and various types of vegetation can all influence the reflection of visible light somewhat differently.

ENDVI uses Blue and Green visible light instead of the Red-only method of the standard NDVI algorithm. This allows for better isolation of plant health indicators, as in the formula below.
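In the notation used in the code in the Appendix:

ENDVI = ((NIR + Green) - (2 * Blue)) / ((NIR + Green) + (2 * Blue))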



A normal healthy plant will reflect both visible Green and NIR light while absorbing Blue visible light; hence plants appear green in the visible spectrum and glow brightly in the Near-Infrared.

In the ENDVI measurement we are calculating the relationship between high absorption of this Blue light and high reflectance of Green and NIR radiation as an indicator of plant health.

Hence ENDVI analysis may, at times, be able to impart more accurate or reliable information regarding plant or crop health by additional reference of information in the green portion of the spectrum with respect to NIR.

It also has an added advantage in distinguishing the differences between algae, lichens and plants on the ground and, more importantly, in water, which is critical for the monitoring of algal blooms and other contaminants that reflect with different intensity in the Near-Infrared than vascular plants.

As discussed in a past article, vascular plants reflect NIR more strongly when healthy due to the presence of spongy mesophyll tissue, which algae and lichens lack regardless of their individual health. Thus ENDVI gives more ability to classify plants and objects that appear bright in the NIR overall, with particular respect to their position in the ecosystem.



Again, using our code in Matlab on the same NIR DNG image we process the image from NIR to ENDVI



And we can do the same with generating ENDVI with our Python code:


SAVI - Soil-adjusted vegetation index



The SAVI is structured similarly to the NDVI but with the addition of a soil reflectance correction factor called L, which leverages the overall NIR brightness of soils so as not to wash out any potential features in the soil.

Just as in the case of NDVI, the satellite imaging version of the SAVI formula uses the Red band for leveraging against the NIR:
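SAVI = ((NIR - Red) * (1 + L)) / (NIR + Red + L)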


For our drone version, as before with NDVI, we use the blue channel for leveraging against the NIR, which has been swapped into the camera's red channel using our InfraBlue filter. The formula is hence amended to:
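SAVI = ((NIR - Blue) * (1 + L)) / (NIR + Blue + L)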


L is a constant and is related to the slope of the soil-line in a feature-space plot.



Hence the value of L varies with the amount or cover of green vegetation: in very high vegetation regions, i.e. thick forest, L=0; in areas with no green vegetation, i.e. deserts, steep slopes, canyons, beaches etc., L=1. Generally, L=0.5 works well in most situations (i.e. mixed vegetation cover, the red region in the NIR vs VIS(Blue) plot).



So L=0.5 (half) is usually the default value used.
Note: When L=0, then SAVI = NDVI.
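As a quick sanity check with hypothetical pixel values: for NIR = 0.6, Blue = 0.2 and L = 0.5, SAVI = ((0.6 - 0.2) * 1.5) / (0.6 + 0.2 + 0.5) = 0.6 / 1.3, or about 0.46, whereas the plain InfraBlue NDVI for the same pixel would be 0.4 / 0.8 = 0.5.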


Again, using our code in Matlab on the same NIR DNG image we process the image from NIR to SAVI


And we can do the same with generating SAVI with our Python code:



SAVI can be very useful in observing very dry or grazed regions where soil and rock is exposed either by climate, animal or human activity.

SAVI is of particular interest not only in helping to distinguish low-level vegetation from soil and bare soil from rock; it is also particularly attractive in archaeology for distinguishing markings and artificial arrangements in the soil and rock surfaces of a landscape that would otherwise be depicted as largely homogeneous in colour and thus featureless.

This is particularly helpful when mapping regions which may have been used for farming or habitation long ago but have been largely abandoned with rock walls and areas of human habitation receding into the landscape.

In the area of archaeology it is of course very important to map regions where ancient settlements once stood, and to discover new artifacts.



Also, in the field of rewilding, old stone walls that divided farmlands in the past can be an obstacle to migrating wildlife. They are an important feature with regard to the distribution of species and the bottlenecks in their range in an area, and may shape their feeding behavior as well as predator-prey relations and human interactions.






Matlab Scripts:


All Matlab Files that load DNG need the following loadDNG function file:


function [rawData, tinfo]= loadDNG(dngFilename)
    %LOADDNG Read the raw Bayer sensor data (and TIFF metadata) from a DNG file.
    if(exist(dngFilename,'file'))
        tinfo = imfinfo(dngFilename);    % file metadata
        t = Tiff(dngFilename,'r');       % DNG is TIFF-based, so the Tiff class can read it
        offsets = getTag(t,'SubIFD');    % the raw image data lives in a SubIFD
        setSubDirectory(t,offsets(1));
        rawData = t.read();              % unprocessed Bayer mosaic
        t.close();
    else
        if(nargin<1 || isempty(dngFilename))
            dngFilename = 'File';
        end
        fprintf(1,'%s could not be found\n',dngFilename);
        rawData = [];
    end
end


Full Poster Script:

%Experimental Vegetation Index Mapping program using DJI Mavic Pro DNG 16-bit images taken using InfraBlue Filter 
%Works with DNG Images only - needs loadDNG script in same directory
%(c)-J. Campbell MuonRay Enterprises Drone-based Vegetation Index Project 2017-2019 

rawData = loadDNG('DJI_0592.DNG'); % load it "functionally" from the command line

%DNG is just a fancy form of TIFF. So we can read DNG with TIFF class in MATLAB

warning off MATLAB:tifflib:TIFFReadDirectory:libraryWarning


%Transpose Matrices for image data because DNG image format is row-major, and Matlab matrix format is column-major.

%rawData = permute(rawData, [2 1 3]);

%Assume Bayer mosaic sensor alignment.
%Separate into mosaic components.
J1 = rawData(1:2:end, 1:2:end);
J2 = rawData(1:2:end, 2:2:end);
J3 = rawData(2:2:end, 1:2:end);
J4 = rawData(2:2:end, 2:2:end);



figure(1);


J1 = imadjust(J1, [0.09, 0.91], [], 0.45); %Adjust image intensity, and use gamma value of 0.45

J1(:,:,1) = J1(:,:,1)*0.80; %Reduce the overall colour temperature by factor of 0.80

B = im2single(J4(:,:,1)); %VIS Channel Blue - used as a magnitude scale in NDVI
R = im2single(J1(:,:,1)); %NIR Channel (recorded in Red) - the IR reflectance of healthy vegetation
G = im2single(J2(:,:,1)); %VIS Channel Green - used as a magnitude scale in ENDVI

L = 0.7; % Soil Reflectance Correction Factor for SAVI Index

%InfraBlue NDVI
InfraBlue = (R - B)./(R + B);
InfraBlue = double(InfraBlue);


%% Stretch NDVI from [-1, 1] to [0, 65535] and convert to 16-bit unsigned integer
InfraBlue = floor((InfraBlue + 1) * 32767); % [-1, 1] -> [0, 65535] for 16-bit display (use *128 for 8-bit)
InfraBlue(InfraBlue < 0) = 0;             % may not really be necessary, just in case & for symmetry
InfraBlue(InfraBlue > 65535) = 65535;         % in case the original value was exactly 1
InfraBlue = uint16(round(InfraBlue));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

NDVImag = double(InfraBlue);



%ENDVI - Green Leveraged

ENDVI = ((R+G) - (2*B))./((R+G) + (2*B));
ENDVI = double(ENDVI);

%% Stretch ENDVI from [-1, 1] to [0, 65535] and convert to 16-bit unsigned integer
ENDVI = floor((ENDVI + 1) * 32767); % [-1, 1] -> [0, 65535] for 16-bit display (use *128 for 8-bit)
ENDVI(ENDVI < 0) = 0;             % may not really be necessary, just in case & for symmetry
ENDVI(ENDVI > 65535) = 65535;         % in case the original value was exactly 1
ENDVI = uint16(round(ENDVI));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

ENDVImag = double(ENDVI);


%SAVI - Soil Reflectance Leveraged 
%The SAVI is structured similar to the NDVI but with the addition of a
%“soil reflectance correction factor” L.
%L is a constant (related to the slope of the soil-line in a feature-space plot)
%Hence the value of L varies by the amount or cover of green vegetation: in very high vegetation regions, 
%L=0; and in areas with no green vegetation, L=1. Generally, an L=0.5 works
%well in most situations (i.e. mixed vegetation cover)
%So 0.5 (half) is the default value used. When L=0, then SAVI = NDVI.

SAVI = (((R-B)*(1+L))./(R+B+L));
SAVI = double(SAVI);

%% Stretch SAVI from [-1, 1] to [0, 65535] and convert to 16-bit unsigned integer
SAVI = floor((SAVI + 1) * 32767); % [-1, 1] -> [0, 65535] for 16-bit display (use *128 for 8-bit)
SAVI(SAVI < 0) = 0;             % may not really be necessary, just in case & for symmetry
SAVI(SAVI > 65535) = 65535;         % in case the original value was exactly 1
SAVI = uint16(round(SAVI));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

SAVImag = double(SAVI);


% Display them all.
subplot(3, 3, 2);
imshow(rawData);
fontSize = 20;
title('Captured Image', 'FontSize', fontSize)
subplot(3, 3, 4);
imshow(R);
title('Red Channel', 'FontSize', fontSize)
subplot(3, 3, 5);
imshow(G)
title('Green Channel', 'FontSize', fontSize)
subplot(3, 3, 6);
imshow(B);
title('Blue Channel', 'FontSize', fontSize)
subplot(3, 3, 7);
myColorMap = jet(65535); % Whatever you want.
rgbImage = ind2rgb(NDVImag, myColorMap);
imagesc(rgbImage,[0 1])
%imshow(recombinedRGBImage);
title('NDVI Image', 'FontSize', fontSize)
subplot(3, 3, 8);
myColorMap = jet(65535); % Whatever you want.
rgbImage2 = ind2rgb(ENDVImag, myColorMap);
imagesc(rgbImage2,[0 1])
%imshow(recombinedRGBImage);
title('ENDVI Image', 'FontSize', fontSize)
subplot(3, 3, 9);
myColorMap = jet(65535); % Whatever you want.
rgbImage3 = ind2rgb(SAVImag, myColorMap);
imagesc(rgbImage3,[0 1])
%imshow(recombinedRGBImage);
title('SAVI Image', 'FontSize', fontSize)
% Set up figure properties:
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0, 0, 1, 1]);
% Get rid of tool bar and pulldown menus that are along top of figure.
% set(gcf, 'Toolbar', 'none', 'Menu', 'none');
% Give a name to the title bar.
set(gcf, 'Name', 'Demo NDVI Poster', 'NumberTitle', 'Off')

%Save NDVI, ENDVI and SAVI images to PNG format file
imwrite(rgbImage, 'NDVI.png');
imwrite(rgbImage2, 'ENDVI.png');
imwrite(rgbImage3, 'SAVI.png');




Python Script:




"""
Experimental Vegetation Index Mapping program using DJI Mavic 2 Pro
JPEG 16-bit combo images taken using InfraBlue Filter

This script uses rawpy

rawpy is an easy-to-use Python wrapper for the LibRaw library.
rawpy works natively with numpy arrays and supports a lot of options,
including direct access to the unprocessed Bayer data
It also contains some extra functionality for finding and repairing hot/dead pixels.
import rawpy.enhance for this

First, install the LibRaw library on your system.

pip install libraw.py

then install rawpy

pip install rawpy

%(c)-J. Campbell MuonRay Enterprises 2019
This Python script was created using the Spyder Editor

"""

import warnings
warnings.filterwarnings('ignore')
import numpy as np
from matplotlib import pyplot as plt  # For image viewing
import rawpy


from matplotlib import colors
from matplotlib import ticker
from matplotlib.colors import LinearSegmentedColormap

#dng reading requires libraw to work


#a nice selection of grayscale colour palettes
cols1 = ['blue', 'green', 'yellow', 'red']
cols2 =  ['gray', 'gray', 'red', 'yellow', 'green']
cols3 = ['gray', 'blue', 'green', 'yellow', 'red']

cols4 = ['black', 'gray', 'blue', 'green', 'yellow', 'red']

def create_colormap(args):
    return LinearSegmentedColormap.from_list(name='custom1', colors=cols3)

#colour bar to match grayscale units
def create_colorbar(fig, image):
        position = fig.add_axes([0.125, 0.19, 0.2, 0.05])
        norm = colors.Normalize(vmin=-1., vmax=1.)
        cbar = plt.colorbar(image,
                            cax=position,
                            orientation='horizontal',
                            norm=norm)
        cbar.ax.tick_params(labelsize=6)
        tick_locator = ticker.MaxNLocator(nbins=3)
        cbar.locator = tick_locator
        cbar.update_ticks()
        cbar.set_label("NDVI", fontsize=10, x=0.5, y=0.5, labelpad=-25)


# Open an image
infile = 'DJI_0592.DNG'

raw = rawpy.imread(infile)

rgb = raw.postprocess(gamma=(1,1), no_auto_bright=True, output_bps=16)

R = rgb[:,:,0]
G = rgb[:,:,1]
B = rgb[:,:,2]
# Get the red band from the rgb image, and open it as a numpy matrix
#NIR = image[:, :, 0]
         
ir = np.asarray(R, float)



# Get one of the IR image bands (all bands should be same)
#blue = image[:, :, 2]

r = np.asarray(B, float)



# Create a numpy matrix of zeros to hold the calculated NDVI values for each pixel
ndvi = np.zeros(r.shape)  # The NDVI image will be the same shape as the input bands (overwritten below)

# Calculate NDVI
ndvi = np.true_divide(np.subtract(ir, r), np.add(ir, r))

'''
Extended Normalized Difference Vegetation Index (ENDVI) data gathered for vegetative health
monitoring can be used to provide similar, but spectrally different information as compared
to traditional NDVI data.

Soil background, differing atmospheric conditions and various types
of vegetation can all influence the reflection of visible light somewhat differently.

ENDVI uses Blue and Green visible light instead of the Red only method of the standard NDVI
algorithm. This allows for better isolation of plant health indicators.

A normal healthy plant will reflect both visible Green and NIR light while
absorbing Blue visible light, hence plants appear Green in the Visible Spectrum and
Will Glow Brightly in the Near-Infrared.

In the ENDVI measurement we are calculating the relationship between
high absorption of this Blue light and high reflectance of Green and NIR waves
as an indicator of plant health.

Hence ENDVI analysis may, at times, be able to impart more accurate or reliable information
regarding plant or crop health by additional reference of information in
the green portion of the spectrum with respect to NIR.

'''
#ENDVI = [(NIR + Green) – (2 * Blue)] / [(NIR + Green) + (2 * Blue)]
# Get one of the green image bands (all bands should be same)


g = np.asarray(G, float)

#(NIR + Green)
irg = np.add(ir, g)

# Calculate ENDVI
#ndvi = np.true_divide(np.subtract(irg, 2*r), np.add(irg, 2*r))

#%SAVI - Soil Reflectance Leveraged

#The SAVI is structured similar to the NDVI but with the addition of a
#“soil reflectance correction factor” L.
#L is a constant (related to the slope of the soil-line in a feature-space plot)
#Hence the value of L varies by the amount or cover of green vegetation: in very high vegetation regions,
#L=0; and in areas with no green vegetation, L=1. Generally, an L=0.5 works
#well in most situations (i.e. mixed vegetation cover)
#So 0.5 (half) is the default value used. When L=0, then SAVI = NDVI.



#SAVI = [((R-B)*(1+L))./(R+B+L)];

L = 0.5   # soil reflectance correction factor (see notes above)
one = 1   # used by the commented-out SAVI calculation below

#rplusb = np.add(ir, r)
#rminusb = np.subtract(ir, r)
#oneplusL = np.add(one, L)

#ndvi = np.true_divide(np.multiply(rminusb, oneplusL), np.add(rplusb, L))


# Display the results
output_name = 'InfraBlueNDVI3.jpg'

fig, ax = plt.subplots()
image = ax.imshow(ndvi, cmap=create_colormap(colors))
plt.axis('off')
     
create_colorbar(fig, image)
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
     
fig.savefig(output_name, dpi=600, transparent=True, bbox_inches=extent, pad_inches=0)
        # plt.show()

Monday 14 October 2019

Fun with Near-Infrared Drone Imaging

Experimenting with drone imaging using NIR-converted cameras and specialist filters has led me to develop image processing techniques to create NDVI images, but it has also allowed me to create visually interesting (borderline psychedelic!) videos and images.



Travelling with my drone to the beaches of Porto, Portugal gave me great photo opportunities along the very visually dynamic Atlantic coastline.



Choosing different filters also makes sure that you never see the same scene twice, no matter where you are. Even a relatively overcast sky can be very vibrant and beautiful in the Near-Infrared.



Of course the vibrancy of vegetation is the easiest thing to notice in contrast with the relative darkness of other features in an image, such as Indy the cat here!



It is also interesting to notice that the fake grass on the lawn does not reflect strongly in the NIR, unlike the weeds! NIR reflectance is not merely to do with the pigment of vegetation; it is in all likelihood due to the complex structure of the spongy mesophyll layers and the water content in the xylem vessels that run through healthy vegetation, particularly leaves. This is why, as discussed previously, NIR and NDVI are very valuable indicators for vegetation classification and plant health.

In any event, whether the application be scientific or artistic, there is always a lot of fun to be had in opening up a new visual perspective on the everyday features of the world.




Notes:


For standard RGB batch processing of DNG (RAW) image files obtained from my DJI Mavic Pro 2, I use the following Python script, which uses the rawpy wrapper for the LibRaw library for convenience. Here it is:


# -*- coding: utf-8 -*-
"""
Created on Sat Oct 12 03:38:28 2019

@author: cosmi

Working Batch Processer for DNG to JPG

rawpy is an easy-to-use Python wrapper for the LibRaw library.
rawpy works natively with numpy arrays and supports a lot of options,
including direct access to the unprocessed Bayer data
It also contains some extra functionality for finding and repairing hot/dead pixels.
import rawpy.enhance for this


"""

import os
import rawpy
import imageio


for infile in os.listdir("./"):
    print("file : " + infile)
    if infile[-3:] == "tif" or infile[-3:] == "DNG":
        # convert TIFF/DNG (RAW) to JPEG
        outfile = infile[:-3] + "jpg"
        raw = rawpy.imread(infile)
        print("new filename : " + outfile)
        rgb = raw.postprocess()
        imageio.imsave(outfile, rgb)

Wednesday 2 October 2019

Specialist NDVI Filter Developments for Mavic Pro 2

Both Types of Filters for sale here:

https://www.ebay.ie/itm/155859816325

https://www.ebay.ie/itm/155859834544




I have been developing a brand new series of NDVI specialised filters for use with both standard and IR converted drone cameras. Using these prototypes I am hoping to make some new developments in the use of single-camera near-infrared imaging and NDVI analysis for use in affordable environmental sensing and monitoring worldwide.

The 2 filters are essentially dual bandpass filters, made of high quality glass, with different transmission characteristics in the near-UV to blue band and Near-infrared band.

They are circular filters embedded in a custom housing for use with the Mavic 2 Pro Hasselblad Camera.

Filter#1



Filter#2



With its large sensor area and excellent image vibrancy, the Mavic 2 Pro's Hasselblad sensor can detect the near-infrared band easily, distinguishing plants from other objects in a landscape image.

The following graphs show transmission of the 2 filters.

Filter #1 has the largest transmission in the near-infrared band and has a nice plateau which captures much of the characteristic NIR reflectance pattern from healthy plants.



With IR-converted cameras (hot mirror filter removed), the large resulting transmission with this kind of filter can sometimes overexpose the sensor in high brightness, i.e. during a very sunny day. So in practice, although it provides a nice profile, the high transmission does not necessarily provide the best way to distinguish one section of healthy vegetation from another, which is the key factor if we want to move beyond NIR imaging as just a nice trick for hobbyist photographers and use this technique as a reliable classification tool.

With standard unconverted cameras (no hot mirror filter removed), the depleted transmission of the NIR with this kind of filter causes the opposite problem: very little IR is recorded by the sensor, which means classification is limited. However, with camera settings adjustments, ISO and exposure set to an optimum level for example, we can overcome this and still create NIR profiling. So NIR filters can indeed be used with non-converted cameras within some limits, i.e. the images must be taken during a very bright clear day at noon to minimize shadowing.


Filter #2 is again a dual bandpass filter but with much narrower and defined transmission lines.



We see the NIR band is far more diminished in transmission, which does eliminate some of the over-saturation problem with IR-converted cameras. The UV-Blue band is more focused around the near-UV than in Filter #1, which means that this filter can pick up more UV information and use it to balance against the NIR information, which could be useful for more novel image analysis beyond the scope of conventional NDVI. These are topics for another article.*(see notes)

Image testing is ongoing with these filter designs, with both IR converted and unconverted drone cameras.

In practice I found images taken with Filter #2 to be sharper than images taken with Filter #1. This has advantages in more overcast drone imaging conditions; however, in bright daylight conditions Filter #1 tends to be better, as shadows become less pronounced by the same criterion.


The following video shows a NIR test flight of a reservoir in Ireland called Blessington Lake, officially Pollaphuca. It is a bird conservation site and source of water for local use. This reservoir was created artificially by the construction of Pollaphuca Dam for use in Hydroelectricity production.



I find this is a very good site to test general environmental boundaries, such as that between a forested area, a shoreline and the water itself.

This test also shows that shadows are diminished if the drone imaging is performed during overcast weather, while the overall NIR reflectance information remains very clear. This is an important detail, as NDVI-generating satellites cannot penetrate cloud-covered areas, so drones add an advantage in this regard.

Moreover, the resolution on the ground using a drone at a height of 100m can be as low as 2cm per pixel, a resolution not possible using satellites, and this level of detail is needed for close environmental monitoring and species classification, particularly in wild areas.
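As a rough sanity check on that 2cm figure, the ground sampling distance (GSD) of a nadir-pointing camera follows from similar triangles: GSD = (sensor width x altitude) / (focal length x image width in pixels). A minimal sketch in Python, assuming the published nominal specifications of the Mavic 2 Pro's 1-inch Hasselblad sensor (13.2mm sensor width, 10.26mm focal length, 5472 pixels across; treat these numbers as assumptions rather than calibration):

def gsd_cm_per_px(altitude_m, sensor_width_mm=13.2,
                  focal_length_mm=10.26, image_width_px=5472):
    """Ground sampling distance (cm/pixel) for a nadir-pointing camera."""
    # Similar triangles: ground footprint width = sensor_width * altitude / focal_length
    ground_width_cm = (sensor_width_mm / focal_length_mm) * altitude_m * 100.0
    return ground_width_cm / image_width_px

print(round(gsd_cm_per_px(100.0), 2))  # ~2.35 cm/pixel at 100 m, consistent with the figure above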

Starting off with an image, we can begin an example NDVI plot.


JPG NIR Image Taken with Mavic 2 Pro, using Filter#1

Either the JPG or DNG (i.e RAW) image format can be converted to NDVI

DNG(RAW) NIR Image Taken with Mavic 2 Pro, using Filter#1


Using the Following Infrablue NDVI Formula:
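NDVI = (NIR - Blue) / (NIR + Blue)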

 

The red and blue colour channels can be used to compute an NDVI colourmap using Matlab (see Appendix for code):
JPG NDVI Image (processed in Matlab)

DNG files contain more information, as the colour channels are recorded separately in a 1x3 matrix, so no channel splitting is necessary. The resulting image can be processed as a PNG or even a TIF image if we like. In any event, we see the shadows, where NDVI should be zero, much more clearly defined, as they would have been recorded.

DNG NDVI Image (processed in Matlab)

In the DNG Image the trees are far more defined relative to the grass, the shoreline is also far better defined relative to the water.

The same image can be similarly processed using a standalone Python script (with a slight amendment in colourmap labeling to show that the water should indeed be absorbing IR, thus having a negative NDVI):




Updates will be posted here along with a showcasing of videos, images, NDVI batch processing code for use in 3D NDVI Plotting and spectral information on the filters used for ease of method replication.

It is hoped that with new filter developments, Enhanced NDVI conversion can be achieved with a single DJI Mavic Pro 2 drone camera, using a filter that also uses the green colour channel for leverage, adding to the ability to distinguish different plants, soils, water, algae, rocks and other objects.

I am also hoping that analysis of captured video and making careful notes of exposure settings used in the field in a recent expedition to Portugal will allow me to engineer more refined versions of these filters in the coming weeks.


Appendix:

  • Notes:


Some standalone UV imaging applications, which could be leveraged against simultaneously captured NIR information, include imaging fluorescent objects (certain coral, algae, flowering plants and animal species) and recording sulfur aerosols in the air, as sulfur aerosols transmit visible and NIR light but block UV. Another possible imaging application is seeing through certain flames: UV and blue light travel through the orange portion of flames, and this has been used by the steel industry in welding for decades as a proven technology to image an object that is shrouded by embers or sparks. Some of these imaging techniques, within reason, could be indexed in a similar way to NDVI.


  • Matlab Code#1 (JPG Input Only):

%Experimental Vegetation Index Mapping program using DJI Mavic Pro DNG 16-bit images taken using InfraBlue Filter 
%Works with JPG Images only - DNG Files need a separate script
%(c)-J. Campbell MuonRay Enterprises Drone-based Vegetation Index Project 2017-2019 
% Read original image.
rgbImage = imread('DJI_0182.JPG');
% Extract colour channels.
redChannel = rgbImage(:,:,1); % Red channel
greenChannel = rgbImage(:,:,2); % Green channel
blueChannel = rgbImage(:,:,3); % Blue channel

figure(1);

subplot(2,2,1)
imshow(redChannel)
title('Red Channel')

subplot(2,2,2)
imshow(greenChannel)
title('Green Channel')


subplot(2,2,3)
imshow(blueChannel)
title('Blue Channel')

B = im2single(blueChannel); %VIS Channel Blue - used as a magnitude scale
R = im2single(redChannel); %NIR Channel (recorded in Red) - the IR reflectance of healthy vegetation


InfraBlue = (R - B)./(R + B);
InfraBlue = double(InfraBlue);



%% Stretch NDVI from [-1, 1] to [0, 65535] and convert to 16-bit unsigned integer
InfraBlue = floor((InfraBlue + 1) * 32767); % [-1, 1] -> [0, 65535] for 16-bit display (use *128 for 8-bit)
InfraBlue(InfraBlue < 0) = 0;             % not really necessary, just in case & for symmetry
InfraBlue(InfraBlue > 65535) = 65535;         % in case the original value was exactly 1
InfraBlue = uint16(round(InfraBlue));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

NDVImag = double(InfraBlue);


figure(3);

myColorMap = jet(65535); % Whatever you want.
rgbImage = ind2rgb(NDVImag, myColorMap);

imagesc(rgbImage,[0 1]), colorbar

%imshow(rgbImage), colorbar
%c = colorbar % Add a color bar to indicate the scaling of color


imwrite(rgbImage, 'InfraBlueJPGInput.png');


  • Matlab Code#2 (DNG Input Only):
  • Main File:


%Experimental Vegetation Index Mapping program using DJI Mavic Pro DNG 16-bit images taken using InfraBlue Filter 
%Works with DNG Images only - needs loadDNG script in same directory
%(c)-J. Campbell MuonRay Enterprises Drone-based Vegetation Index Project 2017-2019 

rawData = loadDNG('DJI_0182.DNG'); % load it "functionally" from the command line

%DNG is just a fancy form of TIFF. So we can read DNG with TIFF class in MATLAB

warning off MATLAB:tifflib:TIFFReadDirectory:libraryWarning


%Transpose Matrices for image data because DNG image format is row-major, and Matlab matrix format is column-major.

%rawData = permute(rawData, [2 1 3]);

%Assume Bayer mosaic sensor alignment.
%Separate into mosaic components.
J1 = rawData(1:2:end, 1:2:end);
J2 = rawData(1:2:end, 2:2:end);
J3 = rawData(2:2:end, 1:2:end);
J4 = rawData(2:2:end, 2:2:end);


figure(1);

subplot(2,2,1)
imshow(J1)
title('Red Channel')

subplot(2,2,2)
imshow(J2)
title('Green Channel 1')

subplot(2,2,3)
imshow(J3)
title('Green Channel 2')

subplot(2,2,4)
imshow(J4)
title('Blue Channel')


J1 = imadjust(J1, [0.09, 0.91], [], 0.45); %Adjust image intensity, and use gamma value of 0.45

J1(:,:,1) = J1(:,:,1)*0.80; %Reduce the overall colour temperature by factor of 0.80

B = im2single(J4(:,:,1)); %VIS Channel Blue - used as a magnitude scale
R = im2single(J1(:,:,1)); %NIR Channel (recorded in Red) - the IR reflectance of healthy vegetation


InfraBlue = (R - B)./(R + B);
InfraBlue = double(InfraBlue);


%% Stretch NDVI from [-1, 1] to [0, 65535] and convert to 16-bit unsigned integer
InfraBlue = floor((InfraBlue + 1) * 32767); % [-1, 1] -> [0, 65535] for 16-bit display (use *128 for 8-bit)
InfraBlue(InfraBlue < 0) = 0;             % not really necessary, just in case & for symmetry
InfraBlue(InfraBlue > 65535) = 65535;         % in case the original value was exactly 1
InfraBlue = uint16(round(InfraBlue));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

NDVImag = double(InfraBlue);


figure(3);

myColorMap = jet(65535); % Whatever you want.
rgbImage = ind2rgb(NDVImag, myColorMap);

imagesc(rgbImage,[0 1]), colorbar

%imshow(rgbImage), colorbar
%c = colorbar % Add a color bar to indicate the scaling of color


imwrite(rgbImage, 'InfraBlueFin.png');

  • loadDNG Header File:

function [rawData, tinfo]= loadDNG(dngFilename)   
    if(exist(dngFilename,'file'))
        tinfo = imfinfo(dngFilename);
        t = Tiff(dngFilename,'r');
        offsets = getTag(t,'SubIFD');
        setSubDirectory(t,offsets(1));
        rawData = t.read();
        t.close();
    else
        if(nargin<1 || isempty(dngFilename))
            dngFilename = 'File';
        end
        fprintf(1,'%s could not be found\n',dngFilename);
        rawData = [];
    end
end



  • Python Code (JPG Input Only)


"""
Experimental Vegetation Index Mapping program using DJI Mavic 2 Pro
JPEG 16-bit combo images taken using InfraBlue Filter

%(c)-J. Campbell MuonRay Enterprises 2019
This Python script was created using the Spyder Editor

"""

import warnings
warnings.filterwarnings('ignore')
import imageio  # used for image reading (scipy.misc.imread is deprecated)
import numpy as np
from matplotlib import pyplot as plt  # For image viewing

from matplotlib import colors
from matplotlib import ticker
from matplotlib.colors import LinearSegmentedColormap

#dng reading requires libraw to work

# Open an image
image = imageio.imread('DJI_0182.JPG')

# Get the red band from the rgb image, and open it as a numpy matrix
#NIR = image[:, :, 0]
         
#ir = np.asarray(NIR, float)


ir = (image[:,:,0]).astype('float')


# Get one of the IR image bands (all bands should be same)
#blue = image[:, :, 2]

#r = np.asarray(blue, float)

r = (image[:,:,2]).astype('float')


# Create a numpy matrix of zeros to hold the calculated NDVI values for each pixel
ndvi = np.zeros(r.shape)  # The NDVI image will be the same shape as the input bands (overwritten below)

# Calculate NDVI
ndvi = np.true_divide(np.subtract(ir, r), np.add(ir, r))


# Display the results
output_name = 'InfraBlueNDVI3.jpg'

#a nice selection of grayscale colour palettes
cols1 = ['blue', 'green', 'yellow', 'red']
cols2 =  ['gray', 'gray', 'red', 'yellow', 'green']
cols3 = ['gray', 'blue', 'green', 'yellow', 'red']

cols4 = ['black', 'gray', 'blue', 'green', 'yellow', 'red']

def create_colormap(args):
    return LinearSegmentedColormap.from_list(name='custom1', colors=cols3)

#colour bar to match grayscale units
def create_colorbar(fig, image):
        position = fig.add_axes([0.125, 0.19, 0.2, 0.05])
        norm = colors.Normalize(vmin=-1., vmax=1.)
        cbar = plt.colorbar(image,
                            cax=position,
                            orientation='horizontal',
                            norm=norm)
        cbar.ax.tick_params(labelsize=6)
        tick_locator = ticker.MaxNLocator(nbins=3)
        cbar.locator = tick_locator
        cbar.update_ticks()
        cbar.set_label("NDVI", fontsize=10, x=0.5, y=0.5, labelpad=-25)

fig, ax = plt.subplots()
image = ax.imshow(ndvi, cmap=create_colormap(colors))
plt.axis('off')

create_colorbar(fig, image)

extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
fig.savefig(output_name, dpi=600, transparent=True, bbox_inches=extent, pad_inches=0)
        # plt.show()