Thursday, 26 March 2020

Coronavirus Emergency In Ireland - Why The Government Must Mandate a 2 Week Total Shutdown



This is an urgent message, so let's not waste time with introductions.

The Government of Ireland must mandate a Complete Shutdown of all non-essential and non-medical business immediately. 

This means closing all non-essential manufacturing, factories, construction sites, call centres and non-essential warehouse work (excepting medical devices, food services for humans and animals, pharmaceuticals and other essentials). Private and multinational firms that are still continuing business as usual in Ireland MUST BE CLOSED - they must not be allowed to continue working as they see fit until they choose to close. WE ARE IN A NATIONAL CRISIS AND THEY MUST BE MANDATED TO CLOSE.

I RECOMMEND THAT AS MANY PEOPLE AS POSSIBLE USE THIS POST AS A PLATFORM TO NAME AND SHAME PRIVATE COMPANIES IN THE REPUBLIC OF IRELAND THAT ARE REFUSING TO CLOSE AND PUTTING THE LIVES OF OUR CITIZENRY AT RISK AND ARE MORTGAGING THE FUTURE OF LIFE AS WE KNOW IT FOR SHORT-TERM PROFIT.

The number of people out on the streets, in parks, on beaches and in other public environs is also alarming. In the UK it is now illegal to be out during the nationwide lockdown, while in Ireland we have still not taken measures to ENFORCE SOCIAL DISTANCING THAT COULD SAVE HUNDREDS IF NOT THOUSANDS OF LIVES.




Whatever the euphemisms, the mitigating measures the government has taken have been half-measures at best and will do little to FLATTEN THE INFECTION CURVE. WE MUST MOVE PAST HALF-ASSED MITIGATION AND HAVE A FULL LOCKDOWN - APPLY "THE HAMMER". Only then can we move on to "THE DANCE" AND CAREFULLY RESTART OUR COUNTRY AGAIN.

The key phrase here is MITIGATING MEASURES. The government refuses to take any hardline measures that would, in effect, flatten the infection curve from an exponential rate to a linear one, eventually reaching a level that will not overwhelm our already beleaguered healthcare system.


We have healthcare workers who can only be considered superheroes at this point, and who are quickly becoming so overwhelmed that they are, in effect, samurai in nurses' and doctors' clothing. They are risking their own lives and taking on extra burden and risk because, again, PRIVATE BUSINESSES REFUSE TO CLOSE ON THEIR OWN! THEY MUST BE FORCED RIGHT NOW!

TIME IS KEY IN ALL OF THIS. WE MUST GET OUR COUNTRY FROM AN EXPONENTIAL STAGE OF INFECTION RATE TO A LINEAR STAGE AND FLATTEN THE CURVE. WE CAN ONLY DO THIS WITH A FULL LOCKDOWN FOR AT LEAST 2 WEEKS!

To citizens, I recommend you STAY INDOORS AS LONG AS POSSIBLE. We can only fight this virus and stop it becoming a permanent part of the microbial ecosystem in this country if we band together on this and, most importantly, HOLD BUSINESS AND GOVERNMENT TO ACCOUNT! DON'T FORGET THAT THEY SHOULD HAVE HAD A WUHAN- OR RUSSIAN-STYLE LOCKDOWN AT LEAST A WEEK AGO, IF NOT TWO!

THE DATA DOES NOT LIE - The Johns Hopkins website has been providing more up-to-date information than any of the highly funded and subsidised quangos and institutions in IRELAND. The most important detail is to read the shape of the graph, not merely quote numbers.

This is Our Status (March 26th 2020):




WE ARE HEADING INTO EXPONENTIAL GROWTH. WE MUST ACT NOW!

Compare this to Italy, the Worst Country Hit in Europe:




THE EXPONENTIAL INFECTION RATE HAS CAUSED THE VAST NUMBER OF DEATHS WITHIN ITALY.

SEE the Johns Hopkins COVID-19 updates live here - reading the curve is far more essential than the individual numbers: https://coronavirus.jhu.edu/map.html

The Ideal is to APPLY THE HAMMER, initiate a 2 week controlled lockdown and flatten the infection rate, as CHINA and most East Asian Countries have done:




WE HAVE BEEN TOLD CONSTANTLY BY THE IRISH GOVERNMENT AND THE MAINSTREAM MEDIA (RTE are most culpable) THAT THE WORD "LOCKDOWN" IS A SCARE TACTIC. WELL, IN TIMES LIKE THIS, FEAR WORKS TO GET THE JOB DONE. WE DON'T HAVE TO BE AFRAID FOREVER, BUT THIS CRISIS REQUIRES US TO REALLY BE AFRAID FOR OUR LOVED ONES AND FOR OURSELVES.

Moreover, it has not been the word "lockdown" that the citizenry have had a hard time understanding: the term "social distancing" has apparently been interpreted by the public with a mixture of confusion and reckless abandon, maybe because it was written by the same second-rate advertising gurus who write the scripts our politicians read.

As the comedian George Carlin once said, "Euphemisms are bullshit, people need direct honest language" - so, my fellows, please STAY INDOORS FOR AS LONG AS YOU HAVE FOOD AND MEDICINE!




Again, to the Government, INITIATE A CLOSURE OF NON-ESSENTIAL PRIVATE BUSINESSES TO ENSURE THAT WE FLATTEN THE INFECTION RATE! YOU WILL BE HELD TO ACCOUNT FOR THIS NEGLIGENCE!

We can hopefully look back on this moment and perhaps laugh and have a kind of "I was there" moment with the kids in the future. Perhaps. But if we do not act now we are going to plunge this country into loss and sorrow that will take generations to heal. Please do what is in the common good.

IMPORTANT NOTE:

As I write this, America has overtaken every other country as the most heavily infected. This was not unexpected, given the delayed response there; however, it is due to get much worse because of the high level of exponential growth, placing great strain on its healthcare system:





Saturday, 14 March 2020

Framing The Power of Art Using Luminescent Solar Concentrators and UV Lighting



This project involves creating vibrant etching artwork using ultraviolet reactive perspex in a Luminescent Solar Concentrator (LSC) configuration with silicon thin film solar panels on the side to harvest light as energy, boosted using an energy harvester circuit designed for indoor solar energy harvesting.

This technology was developed over a few years and has been tested using many different designs. The concept of using solar concentrators in buildings (in windows, roofing, glasshouses etc) has been explored in many research scenarios however the development of indoor based light-energy harvesting for display use is something that is novel in the context of use in displaying artwork.

UV artwork is very interesting and the vibrancy in certain UV reactive materials adds a contrast on images within the environment they are placed that is not found in other display media. The UV light source does not need to be super-bright and often the display is more effective in an environment where lighting is kept at a minimum to bring out the sharper colors from the fluorescence.  Many museums are using this technique to create interesting and eye catching displays for unique visual experiences. In some situations it resembles Neon lighting however it is far more energy efficient and safer to install.



Introduction:



The installation of conventional photovoltaic (PV) solar cells has many advantages. The ease of installation and the ability to generate energy locally with minimal investment and supervision make them ideal to install on buildings and integrate within devices. On rooftops, in most portable solar chargers and in other applications, they directly absorb sunlight and convert it into electricity.

The major drawback is that the amount of power, in watts, conventional solar cells can harvest depends on the area the cells cover. So in places where surface area is at a premium, PV solar has clear limits in its energy harvesting capacity. High-rise cities, for example, do not have the space for all but the most superficial of solar energy installations. This is a fundamental problem for genuinely running an energy-hungry city on energy harvested directly from the environment.
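
As a rough back-of-the-envelope illustration of this area dependence (the irradiance and efficiency figures below are assumed, illustrative values, not measurements):

# Rough illustration: the power harvested by a conventional PV panel scales with its area.
# The irradiance and efficiency values here are illustrative assumptions only.
def pv_power_watts(area_m2, irradiance_w_m2=1000.0, efficiency=0.20):
    """Incident solar power on the panel multiplied by its conversion efficiency."""
    return area_m2 * irradiance_w_m2 * efficiency

print(pv_power_watts(1.6))   # a typical rooftop panel area -> about 320 W
print(pv_power_watts(0.01))  # a small device-sized cell -> about 2 W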

Electricity is generally consumed first closest to where it is generated. So energy-producing installations positioned far from a cityscape will have their transmitted power drawn off by other consumers first, such as small towns and villages, which ultimately creates a situation where the prioritisation of power distribution is backwards - effectively powering the small municipalities before the larger ones.

Accounting for losses in transmission over large distances also means it makes sense to position a power plant as close to a city as possible. This is not so much a problem with stored fuels, such as fossil fuels, which can be converted to electricity in power stations that require less space and can therefore be installed near large cities. Generic renewable energy, however, requires large amounts of space, be it wind farms or solar arrays, so it is not really practical to position it near a city for the reason described, and certainly not within it except in a superficial sense such as on rooftops, as city high-rises inevitably lead to shading of the surface.

Conventional PV solar cells made of high-quality materials, from silicon to CIGS, require refinement, purification, cleaning, doping and deposition, and this naturally increases their expense as the power they are expected to generate, and thus their size, increases in demand. So even with conventional photovoltaic solar cells we are trapped in a process in which the more power we want to harvest, the more money we have to invest, making the transition to solar energy an expensive one.

This is without even mentioning that the most efficient photovoltaic absorber materials known, such as gallium arsenide (which has the benefit of being a radiation-hard material) and perovskite photovoltaic materials, are extremely expensive.


Concentrating and Harvesting Solar Energy


Methods of concentrating solar energy, such as using reflective mirrors and/or lenses, offer an obvious solution to the problem. However, concentrating solar energy using conventional methods often means that we must have more supervision and infrastructure available in an area. For example, concentrating solar energy with mirrors and/or lenses is often done by heating water within power plant infrastructure, which may not always be available in a local area and will require yet more investment.

Moreover, different technologies are required to harvest the concentrated solar energy - Stirling engines, for example - thus creating technology barriers along with the economic cost of transitioning to solar energy.

Therefore it would be nice if we were able to combine the ability to harvest solar energy using conventional solar cells with new approaches to concentrating solar energy. One such approach is to concentrate the solar energy by "pumping" a fluorescent medium with sunlight, akin to how a laser crystal can be pumped with a flashlamp: by clever design of the fluorescent medium and optics, the photon population in the fluorescent medium can be concentrated to a large degree and then delivered to the PV cell.

This is the idea behind the operation of a Luminescent Solar Concentrator, or LSC. An LSC absorbs light on a plate embedded with highly efficient light-emitters called “lumophores” that then re-emit the absorbed light at longer wavelengths, a process known as the Stokes shift - for example, absorbing UV radiation to produce visible light. This re-emitted light is directed by total internal reflection (just like in an optical fibre) to a micro-solar cell for conversion to electricity. Because the plate is much larger than the micro-solar cell, the solar energy hitting the cell is highly concentrated.
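
To make the concentration idea concrete, here is a minimal sketch of the geometric gain of an LSC plate; the plate dimensions and the optical efficiency figure are assumed for illustration only:

# Geometric gain of a luminescent solar concentrator (LSC): light collected over the
# large plate face is funnelled by total internal reflection to the much smaller edge
# where the PV cell sits. Dimensions and optical efficiency are assumed values.
def lsc_concentration(plate_w_m, plate_h_m, plate_t_m, optical_efficiency=0.5):
    face_area = plate_w_m * plate_h_m           # light-collecting face of the plate
    edge_area = plate_h_m * plate_t_m           # edge area covered by the PV cell
    geometric_gain = face_area / edge_area
    return geometric_gain * optical_efficiency  # effective flux concentration at the cell

# e.g. a 30 cm x 30 cm plate, 5 mm thick, with one edge instrumented:
print(lsc_concentration(0.30, 0.30, 0.005))     # geometric gain of 60, effective ~30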



With a sufficient concentration factor, only small amounts of expensive photovoltaic materials are needed to collect light from a potentially inexpensive luminescent waveguide. The waveguides could in principle be made to integrate into double-glazed glass windows, thus providing a possible solution to the integration of energy harvesting within high-rise cities and across all available dimensions of a building.

Even within a building, in locations where sunlight reaches the walls, the concentration of light can generate moderate amounts of energy, at the very least for lighting.

Now why would this be important to consider? Well, LSCs have the potential not only to provide a means to Stokes-shift shorter wavelengths, such as UV, into longer ones such as blue and red, but also to provide a means to upconvert longer wavelengths of light, such as infrared, which is not converted to electricity by the PV material, into shorter wavelengths such as red.

A large portion of the radiation emitted by the Sun is in the infrared and only generates waste heat in a PV cell, which undermines its overall performance. So developing materials that demonstrate upconversion of infrared light, from IR lasers, is a reasonable step towards developing devices that will work outside of the lab, perhaps using sunlight, or alternatively as a way to provide wireless power distribution by means of lasers. Energy distribution by means of lasers, such as the common GaN UV lasers, may also be possible to demonstrate with such a system.



This project, its images and design were developed by MuonRay Enterprises Ireland for educational purposes.



Artworks Depicted


In this project I created several etchings as a demonstration of how etching UV perspex on different layers with contrasting colours can create very vivid and sharply coloured images, not to mention the inclusion of a mirror, which gives the images a sort of "floating pseudo-hologram" effect. I will share the images depicted in the video and explain what they depict. All are cultural and historical symbols that I find interesting.




A mythical dragon, somewhat inspired by oriental dragons, though I was thinking more along the lines of Lewis Carroll's Jabberwock when I was designing it - "with eyes of flame!"

The Winged Disk - The winged sun is a symbol associated with divinity, royalty and power in the Ancient Near East, in particular the Old Kingdom of Ancient Egypt, where it was used to represent both the gods Ra and Horus.



A Celtic Knot Moon in conjunction with a Celtic Spiral Sun. The Celtic Knot is sometimes a depiction of a universe forever in flux but without clear beginning or end, either in scale or direction. The Spiral Trigram is commonly associated with the connection between the changing seasons, death, life and rebirth. All are common motifs in Ancient Irish culture.


The 4 Treasures of the Tuatha Dé Danann, a supernatural magical race in Irish mythology who bore these magical artifacts, crafted in faraway islands beyond Ireland. They are thought to be a pantheon of demigods or precursors to the main deities of pre-Christian Gaelic Ireland. In my depiction of the 4 Treasures, I symbolise each with one of the 4 classical elements of antiquity.

Claíomh Solais (The Sword of Light) - A magical sword from which, once swung, no person could escape. This sword is often associated with Nuada, the King of the Tuatha Dé Danann, who is sometimes given the epithet Airgeadlámh, meaning "silver hand/arm". The element I have associated it with is fire.


Cauldron of Dagda - The Dagda was one of the Tuatha Dé Danann and later the Irish All-Father, associated with fertility, agriculture and strength, as well as magic, druidry and wisdom. He is strongly associated with the culture at Brú na Bóinne (Newgrange). From his cauldron, it is said, flows nourishment from which nobody is ever left hungry or weak. The element I have associated it with is water.

Lia Fáil (Stone of Destiny) - This is the ancient coronation stone upon which the High Kings and Queens of Ireland would be deemed fit to rule the country by the stone itself vibrating. The ancient stone still sits on the Hill of Tara to this day, although whether or not it is the true one remains unknown. The element I have depicted with this treasure is earth, with plant-like roots boring into the ground from the stone to add to the effect, and the name of the stone itself etched in Ogham.

The Spear of Lugh - Lugh is a god of many talents in Irish mythology. He was a master craftsman but also a warrior, and in some depictions both a messenger and a trickster. Lugh is linked with the harvest festival of Lughnasadh, which bears his name. He was also associated with storms, and so I have depicted his weapon of choice, a spear, with lightning and attributed it to the element of air. When Lugh uses his spear, it is said to bring a decisive end to all conflict.



This symbol is the Hamsa Hand, also known as the Hand of Miriam (sister of Moses) in Jewish tradition, as the Hand of Fatima in Islam and, less commonly, as the Hand of Mary in Christianity. There is also a history of its use much earlier in Ancient India. In general it is used as a protective talisman - a symbol that people believe can protect them from harm from the evil eye and bring them goodness, abundance, fertility, luck and good health. In my travels I have seen it quite commonly used on ships in the Middle East as a charm to ward off storms.

With this one I used 3 layers of UV reactive perspex to reasonable effect, creating a vibrant display of color.


This project has been a fun way to share the concept of using perspex to create efficient illuminated artwork that can power itself and remain functional without external power sources. Developing the technique has been a learning experience for me, and I hope to create more artwork using it in the future.











Thursday, 20 February 2020

3D NIR Drone Scanning: Python + VisualSFM VS PIX4D




In this project I showcase my developments in 3D drone scanning, where I can generate detailed 3D models with NDVI indices by modifying the DJI Mavic 2 Pro Hasselblad camera with a Near-Infrared (NIR) conversion.

Using a custom-made dual bandpass filter I can obtain clear NIR, green and blue leveraged colour images, which can be processed in Python + VisualSFM for free (using code I developed in Python) and viewed in Meshlab. The advantage of this technique is affordability.

Alternatively, using commercial software such as PIX4D, I can use the reflectance maps to generate high-fidelity and clear NDVI, ENDVI and SAVI maps with customisable colormaps, which look professional and can be used in the PIX4D BIM suite for accurate volume and area measurements - a very useful tool for systematic analysis of a region. In my view PIX4D offers unequalled scope in what one can do with drone mapping.



Index-based classification is itself a forerunner to more sophisticated techniques in the field of machine learning. Machine learning is a diverse tool that uses convolutional neural networks, trained under either supervised or unsupervised conditions with a selection of classified data, to classify known objects in a general landscape. This can allow, among other things, the user to directly compare individual sections of a chosen landscape, e.g. which areas have been depleted of vegetation with respect to others and what consistently known features are associated with this difference, such as the presence or absence of significant plant undergrowth/understory/underbrush.
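
As a toy illustration of what index-based classification means in practice, the sketch below bins an NDVI array into coarse cover classes; the threshold values are arbitrary placeholders, not calibrated ones:

import numpy as np

# Toy index-based classification: bin an NDVI raster into coarse cover classes.
# The thresholds are arbitrary illustrative values, not calibrated ones.
ndvi = np.array([[-0.2, 0.05, 0.30],
                 [ 0.5, 0.70, 0.15]])

thresholds = [0.0, 0.2, 0.5]             # water/soil | sparse | moderate | dense
classes = np.digitize(ndvi, thresholds)  # class label 0..3 for each pixel
print(classes)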



With 2D data alone, however, such classifications are deprived of significant environmental features, the most important missing one being the overall topography of the landscape. We can simply only see and classify so much from a single 2D point of view, even with tools such as NDVI and others. No matter how many detectors may be on board an imaging system, be it drone or satellite, without recognising this fact we are limiting our scope of classification.



Topographic data for a landscape is itself critical for numerous reasons: it is the source information for depth-based classifications, the basis for performing dynamic partial-differential flow-based simulations (in fire physics, air flow, etc.) and even for charting the individual biomes in a region, with elevation being a key factor affecting the distribution of species in an area - cold-thriving species existing upland and temperate species existing closer to sea level, for example.



3D scanning is the obvious solution and can be accomplished just as easily with NIR cameras as with the regular imaging systems on board a drone. The only difference is the need to process the large datasets into meaningful maps whose features can be classified by the conventional tools of NDVI, ENDVI, SAVI and so on.





Using apps such as DroneHarmony, or the DJI mapping software with the Phantom 4 RTK for example, it is possible to plan a plot sweep of a region of defined size.



The conventional RGB data is itself very useful and can be classified with meaningful results based on height, shape and colour. This all depends on the 3D reconstruction software being used. PIX4D Mapper is probably the best proprietary software solution on the market today for this work. The ease of processing and the countless features of this software really make it the industry standard in construction, mining and, of course, environmental monitoring.







The output is an orthomosaic TIFF file which can be classified using NDVI, ENDVI or SAVI, as has been done before.












Creating full 3D NDVI maps using freeware is a bit trickier but can be done in practice as well. One must be careful not to delete the EXIF data in post-processing, or if one does, it is advised that the data be transferred back over to allow for consistency between maps when comparing RGB and NDVI profiles.

This can be done in Matlab or Python, depending on preference.
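
One way to carry the EXIF tags (including GPS information) over in Python is with the piexif package; the following is a minimal sketch with placeholder filenames:

import piexif

# Copy the EXIF block (including the GPS tags) from the original drone image into the
# post-processed one so mapping software can still geolocate it.
# The filenames here are hypothetical placeholders.
exif_dict = piexif.load("DJI_0001_original.JPG")   # read the tags from the source image
exif_bytes = piexif.dump(exif_dict)                # serialise them back to bytes
piexif.insert(exif_bytes, "DJI_0001_ndvi.JPG")     # write them into the processed image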


This project is attempted with existing freeware for ease of reproduction and replication. Python, Meshlab, VisualSFM and CMVS are all free and open source. This is of utmost importance for NGOs and charities working in this area.

Another useful program is QGIS, which is open source, works with Python in its interface and can run scripts from a built-in console. At the very least it is useful for converting orthomosaic TIFF files into JPEG images which retain geoinformation, which is useful for doing post-processing in Python.
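
For example, a minimal sketch of that conversion using the GDAL Python bindings (the same raster library QGIS itself relies on); the filenames are placeholders:

from osgeo import gdal

# Convert an orthomosaic GeoTIFF to JPEG; the WORLDFILE creation option asks the JPEG
# driver to write a sidecar .wld file so the georeferencing is not lost.
# The filenames here are hypothetical placeholders.
gdal.Translate("orthomosaic.jpg", "orthomosaic.tif",
               format="JPEG", creationOptions=["WORLDFILE=YES"])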

The full, unabridged use of QGIS to its full potential in NDVI-style classification is a subject for another project, however, and I hope to come back to share my experiences of using it in the near future.

Sunday, 22 December 2019

NIR Drone Scanning of Ancient Megalithic Cairn Tombs at Loughcrew County...





This video shows some test footage of the megalithic passage cairn tombs at Loughcrew in County Meath, Republic of Ireland, taken this Winter Solstice with an NIR-converted Hasselblad camera on board the DJI Mavic 2 Pro drone.

Clusters of megalithic cairns are dotted around the Slieve na Caillaigh hills at Loughcrew; the main concentrations are on Carnbane East, where Cairn T is the centrepiece, and Carnbane West, where Cairn L is located.
The illumination of the passage and chamber at the Winter Solstice sunrise at Newgrange is world famous. Less well known is the equinox illumination at sunrise in Cairn T at Loughcrew. The backstone of the chamber is illuminated by a beam of light at sunrise on the Spring and Autumn Equinoxes.

The sunlight is shaped by the stones of the entrance and passage and descends the backstone while moving from left to right, illuminating the solar symbols.

With drone-based NIR imaging, combined with the Soil-Adjusted Vegetation Index (SAVI), it is possible to map certain artificial features in the topography of the land with respect to the vegetation distribution. This opens up the possibility of examining the scale of man-made features such as farmland, burial sites and perhaps settlements that have been buried for centuries and can be difficult to detect using visual maps alone.

The SAVI index is one of many indices used by archaeologists in such research and drone-based imaging offers a unique opportunity to test such techniques affordably and quickly, without the use of more expensive aircraft or satellite imaging techniques. Moreover, the ease of deployment and the high quality of the imaging systems available for drones today makes them superior to other forms of often more expensive aerial surveillance, something which will become a major trend in the future as drone imaging and mapping advances.



Friday, 15 November 2019

Herschel and the Physics of the Invisible Universe


Most of the universe is invisible to humankind. Moreover, most of it is currently invisible to our modern equipment. As advanced and fine-tuned as they are at present, all of our detectors in the enterprise of physics could only ever see about 4% of the universe, theoretically. The rest, 96%, is essentially an invisible section of reality, made up of Dark Matter and Dark Energy. These percentages are also merely based on what we know now about the structure of galaxies and their expansion; there is every reason to think that there is even more hidden physics, perhaps going on forever.

We sometimes take it for granted that a lot of physics is invisible to our eye and everyday experience, particularly when considering our modern technological world depends on the existence of countless invisible wireless signals.

However, the existence of invisible packets of energy acting at a distance would have been a completely occult notion at the time of Newton and Huygens, with their respective theories of light being made up of quantifiable corpuscles and waves. Both expanded upon the earlier treatment of light as a system of rays, as described by Fermat, into a composite structure capable of describing such phenomena as reflection, refraction and, of course, the visible spectrum produced by the interaction of white light with a prism.



Newton's corpuscular (or particle) theory of light, based on the view of light being made up of equally elastic particles, held sway for much of the 18th century, as it could explain refraction and reflection relatively well - up until the phenomenon of interference could not be explained using particles alone.

The development of classical optics as described mathematically by Huygens, and then used by Fresnel in the Huygens-Fresnel wave theory of light, is necessary, among other things, to explain the observation that if a beam of light shines on a small round object, a bright spot appears at the centre of its circular shadow by means of constructive interference of the waves travelling around the object. Further developments, of course, in Thomas Young's double-slit interference experiments further confirmed that the wave theory of light was necessary.



In any event, the notion of "action at a distance" when considering forces acting on particles of light or matter was deeply troubling to Newton, and he consciously developed his theories to eliminate what he considered to be non-existent forces. Newton's laws are at their core an empirical approximation of physics*(#1) and in a sense continue the work laid out by Galileo before him. Newton, using the calculus he invented, ran mathematical studies on how things moved over time, and found a small set of rules which could be used to predict what would happen to, say, frictionless balls being pushed from a state of rest, and how inertial masses create equal and opposite motion. The laws "work", in effect, because they are effective at predicting the visible universe, which was assumed by scientists of the 17th century to be analogous to a clock with perfectly visible and quantifiable clockwork.

Newton and other contemporaries such as Huygens did not examine physical laws which were in a sense "invisible". Newton himself could not explain the origin of gravity in his theory, and in a way his physics was a bookkeeping device that replaced the explanation of a thing with its function, making predictions about forces and objects without having to explain the things themselves, beyond labels such as F12 and M1, F21 and M2, so that F12 = -F21, for example:



Huygens also did not explain the origin of the waves in his wave theory of light; that needed much more work in the development of electromagnetism, and it was not until nearly the end of the 19th century that the invisible forces of electricity and magnetism, acting in unison, were shown to give rise to the waves of light itself, as described by James Clerk Maxwell.

Nevertheless, Newton's theories were, and still are, very successful in applications; however, it must be said that they lack explanatory power regarding the nature of the forces themselves. The forces are still, in effect, "invisible" in this arrangement.*(#2)

The project of modern science to look into the physics of the truly "invisible" really began with the famous 18th-to-19th-century astronomer William Herschel.

Sir William Herschel
bust by John Charles Lochée
1787

William Herschel was born on November 15th, 1738 in Hanover, Germany. Herschel performed his extensive scientific work in Britain, to which he migrated when he was 19 to avoid military service during the infamous Seven Years' War. Living in Britain, the young Herschel was able to earn a living as a skilled musician and composer for 15 years, during which time his sister Caroline Herschel joined him in Britain and became a skilled musician, mathematician and astronomer in her own right. Both the Herschel siblings' work in astronomy was remarkably thorough, dedicated and careful. However, we shall see that he was not so careful as to avoid the spark of true discovery that can often lie hidden just orthogonal to established knowledge.

Beginning his work in astronomy, William Herschel constructed his primary Newtonian telescope in the back garden of his home in Bath. He had constructed his telescope with the specific mission of studying binary star systems in order to observe, over many years with his sister Caroline, the proper motion of stars, and this singular focus led to a plethora of discoveries along the way during his initial years of astronomical study between 1779 and 1792. He soon discovered many more binary and multiple stars than expected, and compiled them, with careful measurements of their relative positions, in two catalogs presented regularly to the Royal Society in London.

Artist's rendition of a binary star system





In 1797, Herschel measured many of the systems again, and discovered changes in their relative positions that could not be attributed to the parallax caused by the Earth's orbit.

He waited until 1802 to announce the hypothesis that the two stars might be "binary sidereal systems" orbiting under mutual gravitational attraction around a barycentre, essentially around empty space - a hypothesis he confirmed in 1803.

Such a discovery was phenomenal at the time, as Herschel himself had been influenced by the writings of his fellow astronomer John Michell, who in 1783 proposed the concept of "dark stars", essentially invisible stars so massive that light cannot escape them. Michell was also supportive of the theory of binary systems of stars orbiting around mutual gravitational centres, and perhaps this influenced a connection between the two ideas.




In between this work, while looking for binary star systems, Herschel systematically re-examined objects discovered previously by Charles Messier and cataloged (but largely unclassified) in his famous catalog. Messier was focused on searching for comets and so, understandably, did not spend much time on the objects he had first classified in a system of numbers we still use today for most objects seen in the Northern skies - M31, for example, the Andromeda Galaxy.

Herschel discovered that many objects in the Messier catalog, seen initially as stars, were in fact clusters of stars or nebulae*, but one object he found, which had not been found by Messier before, turned out to be the planet Uranus. This was a significant discovery, as it was the first planet discovered since ancient times, all planets up to Saturn having been known since at least the Ancient Greeks and most probably earlier. It also helps put to rest the notion that there is some hidden geometric mysticism to the planetary arrangements: after all, what astrologer or psychic ever had anything to say before about the 7th planet!


This discovery also set Herschel up with the opportunity to do astronomy, and by extension physics in general, as a full-time profession. Herschel, somewhat obsequiously, called the new planet the "Georgian star" (Georgium sidus) after King George III, which predictably brought him favour with the King; the name did not stick, particularly internationally. In any event, the King offered William a constant stipend of 200 pounds a year - less than he could have earned as a full-time musician, according to records, but a job that Herschel took immediately! Caroline herself eventually received 50 pounds a year as William's assistant, making her the first woman in recorded history to receive a salary as an astronomer.

William and Caroline, now both in the employ of King George III, were able to conduct astronomy on a scale that saw the largest telescopes of their time being constructed, such as Herschel's famous 20-foot, 18.7-inch telescope.

Herschel's 20 Foot Telescope

It could be argued this began the age of large-scale telescope construction. Progress in ground-based astronomy really is largely a matter of building larger individual telescopes (or baselines of multiple telescopes) in better locations, and this remains true right up to the present day.

During his career, Herschel was as renowned as a maker of telescopes as he was as an astronomer, and he oversaw the construction of several hundred instruments, which were sold in England, Ireland and on the European continent. Using even larger telescopes, such as the 40-foot telescope built in 1789, William was able to catalog many more deep-sky objects, created the first surveys of the Milky Way and discovered that many of the deep-sky objects were so-called "spiral nebulae" - really galaxies, although the notion of a galaxy was unknown to astronomers of that time.



Caroline was also given a telescope, constructed by William, similar to the one with which he had discovered Uranus. With this telescope Caroline discovered 8 comets and over a dozen “nebulae” between 1783 and 1797.

These included the spiral galaxy NGC 253 and the elliptical galaxy M110, one of the satellite galaxies of Andromeda. Caroline was one of the first women to be recognized for such work at the Royal Society in London.

During the rest of his life, William Herschel produced lists of thousands of nebulae and star clusters, and he was the first to distinguish between distant clusters and dusty nebulae.



Besides some of the large telescopes he designed, William Herschel also worked with smaller telescopes, this time to examine the spectrum of stars and of the Sun itself. Herschel at first used colour filters to separate the different bands of visible light from astronomical objects, a technique still used in amateur astronomy today to highlight certain features.

A Color Filter that Exposes the Near-Infrared and Near-Ultraviolet from Sunlight

In his observations of the Sun, in February 1800, Herschel described how the various coloured filters through which he observed the Sun allowed different levels of heat to pass. The energy output of a 5700-5800 K star, our Sun, is greater in the visible wavelengths than in the infrared, and most of the visible light should be passing equally through the atmosphere. How could the redder wavelengths then feel hotter?

He performed a simple experiment to study the 'heating powers of coloured rays': like Newton over a century before, he split sunlight with a glass prism into its constituent colours, and this time made the next important step - using a thermometer as a detector, he measured the temperature of each colour.



Herschel observed an increase in temperature as he moved the thermometer from the violet to the red part of the spectral 'rainbow'.


The result is due to the fact that the blue wavelengths in the classic Herschel experiment are more strongly scattered, hence more "spread out", than the redder wavelengths and thus more energy reaches the thermometers measuring the red end of the spectrum.

This experiment, given the limits of technology at the time, could not measure the discrete energy of each colour individually, but at the very least Herschel provided a kind of reference frame between different regions of the spectrum, relating their indices of refraction in the prism to their temperatures.

Today, we can measure the discrete energies of wavelengths of light, independent of the effects of light scattering affecting the intensity. As a rule of thumb, thinking in modern terms, blue light is in fact more energetic than red light in terms of the excitation energy of electrons, even though it appeared cooler in terms of the "temperature" measured by Herschel.
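
A quick numerical check of that statement, using standard physical constants and representative wavelengths:

# Photon energy E = h*c/lambda: a blue photon carries more energy than a red one,
# even though Herschel's thermometer registered more heating towards the red end.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

for name, wavelength_nm in [("blue", 450), ("red", 650), ("near-infrared", 1000)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:14s} {wavelength_nm:5d} nm -> {energy_eV:.2f} eV")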



Herschel also measured temperatures in the region just beyond the red colour, where no light was visible, as a control. He expected it merely to record the ambient temperature of the room and, to his surprise, it recorded an even higher temperature there in apparent "shade". He deduced the presence of invisible 'calorific' rays, now called 'infrared' radiation.

An underlying understanding of the physics involved makes a big difference in how to interpret the results. The concept of heat had been known since prehistory, as had the fact that it travels through the air from a flame or heated object. What Herschel discovered was subtler than the existence of an invisible heat radiation: he found the first solid evidence that light and infrared are the same quantity, which we know today to be electromagnetic radiation - a critical example of scientific reductionism.

Through a series of simple experiments, Herschel found the first piece of one of the great puzzles of physics, one that would take another century and the formulation of quantum theory to solve - particularly when one considers that objects that emit light through heating must emit energy in discrete packets, quanta, in order for the visible world to make sense, as in Max Planck's theory of radiation resolving the absurdities of the so-called "ultraviolet catastrophe".

The Planck Black Body Radiation Curve


But perhaps the greatest lesson to be learned from the discovery of the infrared part of the spectrum is how it was discovered. Let us remember: it was not by complete accident. Herschel was thinking about the problem and searching for it the right way, based on an exploitation of the known principles of light, some of which he discovered himself (i.e. light filtered red feeling hotter than blue). Yet the discovery was made by a detector system that was not finely tuned and was exploring part of the answer in what would appear, at first glance, not to be a local optimum location - but which turned out to be the global optimum of the search for a discovery!

What does it really mean, then, when we can detect something that is a genuine discovery using a detector system that is not set up properly? Does that mean that although we may set out to find something to prove (or should I say disprove!) a theory, even if we set up our equipment 100% correctly (with that careful precision physicists are most famous for), we may not be guaranteed to find anything, because what we are looking for is, in a sense, just a little nudge to the right or to the left?

In the history of science, Herschel's discovery of IR radiation is in great company, with the discovery of the CMB being another similar example. I cannot say it is so for all cases, but sometimes the unfocused or just downright wrong setup can give us real jewels of scientific knowledge.









Extra Points:



*#1
This is particularly evident with Newton's third law. Newton in his time could never explain the origins of the forces that governed the laws he discovered, nor could Huygens. Without specifying the nature or origin of the forces on the two masses, Newton's third law assumed they arise from the two masses themselves and that they must be equal in magnitude but opposite in direction, so that no net force arises from purely internal forces. Physics is then reduced to a kind of balancing act between the two opposing masses.

Newton's Third Law is in reality, a meta-law (that is a higher level law invoking the existence of a more fundamental law) and we find, through Noether's Theorem, that it arises as a direct consequence of conservation of momentum, which essentially says that in an isolated system with no net force on it, momentum doesn't change. This means that if you change the momentum of one object, then another object's momentum must change in the opposite direction to preserve the total momentum. Forces cause changes in momentum, so every force must have an opposite reaction force.

But now we may ask:
Why is momentum conserved?
Conservation of momentum comes from an idea called Noether's Theorem, which states that conservation laws in general are a direct result of the symmetries of physical systems. In particular, conservation of momentum comes from translation symmetry. This means that the behavior of a system doesn't change if you select a new origin for your coordinates (put differently, the system's behavior depends only on the positions of its components relative to each other). This symmetry is found in all isolated systems with no net force on them because it is effectively a symmetry of space itself. Translation symmetry of a system is a consequence of homogeneity of space, which means that space is "the same everywhere" - the length of a rod doesn't change if you put it in a different place.
But now we may ask:
Why is space homogeneous?
In classical mechanics, this is one of the basic assumptions that allows us to do anything. In this case it basically says that because the laws of physics, as expressed in terms of Cartesian coordinates, are the same no matter where you put the origin (the coordinate system and its origin being an arbitrary, artificial invention), Noether's theorem mathematically implies that momentum must be conserved under such a condition.
In reality, according to General Relativity, space actually isn't homogeneous - it curves in the presence of massive objects. But usually it's close enough to homogeneous that classical mechanics works well (gravity is pretty spectacularly weak, after all), and so the assumption of homogeneity holds.
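
For the mathematically inclined, a compact sketch of that argument in Lagrangian form (standard textbook reasoning, stated here without rigour): if the Lagrangian L(q, q̇) is unchanged by the translation q → q + ε, then ∂L/∂q = 0, and the Euler-Lagrange equation d/dt(∂L/∂q̇) = ∂L/∂q then gives d/dt(∂L/∂q̇) = 0; that is, the momentum p = ∂L/∂q̇ is conserved.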


*#2 
In the contemporary world of Newton, the forces of magnetism and static electricity (through rubbing fur on amber and so forth) were known about, albeit in a nascent way, having been noted by the ancient Greeks and the Chinese too. This knowledge was pure empiricism with very little theory behind it, and it could have been interpreted as an example of "action at a distance" in which some force overcomes the balance of Newton's third law with regard to the quantity of mass alone. Newton did not have a knowledge of charge or magnetic field strength, nor would anyone until well into the 19th century.

It seems that Newton did not pay much attention to incorporating the concepts of magnetism or charge into his description of the physical world. He explicitly mentions electricity in the Opticks, and he seems to have used magnets in experimental studies. That he didn't go very far with this is not particularly surprising: describing the forces between permanent magnets requires the calculus of electric and magnetic fields, with Newton's laws generally following afterward. The concept of opposing magnetic poles and electrical charges was not well quantified in Newton's time and so would have appeared somewhat occult to him.

Nevertheless, in the famous final paragraph of the Scholium Generale that Newton added to the second edition of the Principia, published in 1713, he wrote of “a certain most subtle spirit which pervades and lies hid in all gross bodies.” It was this active spirit that gave rise, he supposed, to the electrical attractions and repulsions that manifested themselves at sensible distances from most bodies after they had been rubbed, as well as to the cohesion of particles when contiguous. In addition, he surmised, it was the agency responsible for the emission, reflection, refraction, inflection and heating effects of light; and by its vibrations in “the solid filaments of the nerves,” it carried sensations to the brain, and commands of the will from the brain to the muscles in order to bring about bodily motion.

Friday, 1 November 2019

Standard Environmental Drone Monitoring Techniques: NDVI, ENDVI and SAVI




With a standard filter we have demonstrated the use of a drone under single deployment to create Near-Infrared images.

The next step is to create indices which are important for the specific application of environmental monitoring. Of particular interest is the monitoring of regions which are vulnerable to natural and human-induced disasters such as forest fires.




We are also interested in comparing regions based on the amount of soil exposed due to factors such as grazing, human activity, etc.

To do this we must consider that there are many environmental indices to choose from in the field of NIR monitoring (and even more with other spectral bands).

Using experimental flights taken with an NIR camera using a bandpass filter with the following spectrum:


We have developed 3 important classification indices to consider when using standard NIR filters on converted IR cameras deployed on a drone for environmental monitoring.

NDVI - Normalised Differential Vegetation Index

We have discussed NDVI before on this blog and will not go through it in detail again; however, it is important to point out the differences between the satellite use of NDVI and the drone/aerial monitoring implementation of NDVI, with regard to the different but essentially equivalent formulae.

With satellite imagery we use an NDVI formula which has the best possible leveraging between a single band in the visible spectrum and the NIR band. In an RGB image of the Earth's surface taken by a satellite, the blue portion of the visible spectrum (roughly 400nm-450nm) is diminished the most significantly by the atmosphere of the Earth, which scatters blue light more strongly than green or red. Hence, water aside*, features of the surface which reflect blue are relatively diminished in intensity. Hence, most satellite cameras used in Earth observation of vegetation ignore the blue region altogether and focus on Red, Green and several key IR bands.

Therefore, for satellite imagery, we use what has come to be known as the standard NDVI formula:
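
NDVI = (NIR − Red) / (NIR + Red)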


With the drone imaging we perform, we can use a different but equivalent formula which gives a far better leveraging between NIR and the Blue channel for several reasons.


  • There is less atmosphere between the surface feature being examined and the drone CCD sensor, so blue light scattering is not a significant issue
  • Some vegetation is in fact Red when healthy, due to pigments. Blue, however, is not reflected by healthy plants, as chlorophyll A and B both have prominent absorption in the blue region of the visible spectrum.
  • New CCD sensors used in drones (such as the Hasselblad) are generally designed to have equal sensitivity between the Red and Blue bands 
  • The Red band can be used as the NIR band in a converted NIR camera, making for easier data acquisition with existing cameras

So for drone imaging we use the following NDVI formula, sometimes called the InfraBlue NDVI formula:
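
InfraBlue NDVI = (NIR − Blue) / (NIR + Blue), where the NIR signal is recorded in the converted camera's red channel.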



Using our code in Matlab (see Appendix) from the following Near-Infrared DNG image



We process the following NDVI image



We can also use a similar code in Python that can do essentially the same processing, just with a different colormap scale:



At this point it all depends on personal preference which format to use, Matlab or Python. Python tends to be better for preserving bit depth for use in further applications, which we will discuss in a future article on 3D reconstruction from NDVI-processed images. I also prefer the colormaps that I have been able to create myself in Python, as they provide a much sharper definition of the different NDVI scales in an image overall.

*(Water can reflect blue light at higher intensity because it is, in effect, reflecting the colour of the sky, which is itself blue because of blue light being scattered in the first place.)

ENDVI - Enhanced Normalised Differential Vegetation Index



Enhanced Normalized Difference Vegetation Index (ENDVI) data gathered for vegetative health monitoring can be used to provide similar, but spectrally different, information as compared to traditional NDVI data.

Soil background, differing atmospheric conditions and various types of vegetation can all influence the reflection of visible light somewhat differently.

ENDVI uses both Blue and Green visible light instead of the Red-only method of the standard NDVI algorithm. This allows for better isolation of plant health indicators.
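
ENDVI = ((NIR + Green) − 2·Blue) / ((NIR + Green) + 2·Blue)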



A normal healthy plant will reflect both visible Green and NIR light while absorbing Blue visible light; hence plants appear green in the visible spectrum and glow brightly in the near-infrared.

In the ENDVI measurement we are calculating the relationship between high absorption of this Blue light and high reflectance of Green and NIR radiation as an indicator of plant health.

Hence ENDVI analysis may, at times, be able to impart more accurate or reliable information regarding plant or crop health by additionally referencing information in the green portion of the spectrum with respect to NIR.

It also has an added advantage in distinguishing the differences between algae, lichens and plants on the ground and, more importantly, in water, which is critical for the monitoring of algal blooms and other contaminants that reflect at different intensities in the Near-Infrared than vascular plants.

As discussed in a past article, vascular plants reflect NIR more strongly when healthy due to the presence of spongy mesophyll tissue, which algae and lichens lack regardless of their individual health. Thus ENDVI gives a greater ability to classify plants and objects that appear bright in the NIR overall, and with particular respect to their position in the ecosystem.



Again, using our code in Matlab on the same NIR DNG image we process the image from NIR to ENDVI



And we can do the same with generating ENDVI with our Python code:


SAVI - Soil-adjusted vegetation index



The SAVI is structured similarly to the NDVI but with the addition of a soil reflectance correction factor called L, which leverages the overall NIR brightness of soils so as not to wash out any potential features in the soil.

Just as in the case of NDVI, the satellite imaging version of the SAVI formula contains the Red band for leveraging against the NIR:
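
SAVI = ((NIR − Red) × (1 + L)) / (NIR + Red + L)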


For our drone version, as before with NDVI, we use the blue channel for leveraging against the NIR, which has been swapped with the camera's red channel using our InfraBlue filter. The formula is hence amended to:
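
SAVI (InfraBlue) = ((NIR − Blue) × (1 + L)) / (NIR + Blue + L)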


L is a constant and is related to the slope of the soil-line in a feature-space plot.



Hence the value of L varies by the amount or cover of green vegetation: in very high vegetation regions, i.e. thick forest, L=0; and in areas with no green vegetation, i.e. deserts, steep slopes, canyons, beaches etc., L=1. Generally, L=0.5 works well in most situations (i.e. mixed vegetation cover, the red region in the NIR vs VIS(Blue) plot).



So L=0.5 (half) is usually the default value used.
Note: When L=0, then SAVI = NDVI.


Again, using our code in Matlab on the same NIR DNG image we process the image from NIR to SAVI


And we can do the same with generating SAVI with our Python code:



SAVI can be very useful in observing very dry or grazed regions where soil and rock is exposed either by climate, animal or human activity.

SAVI is of particular interest not only in helping to distinguish low-level vegetation from soil and bare soil from rock, but it is also particularly attractive in archaeology for distinguishing markings and artificial arrangements in the soil and rock surfaces of a landscape that would otherwise be depicted as largely homogeneous in colour and thus featureless.

This is particularly helpful when mapping regions which may have been used for farming or habitation long ago but have been largely abandoned with rock walls and areas of human habitation receding into the landscape.

In the area of archaeology it is of course very important to map regions where ancient settlements once stood, and to discover new artifacts.



Also, in the field of rewilding, old stone walls that divided farmlands in the past can be an obstacle to migrating wildlife; they are an important feature with regard to the distribution of species and bottlenecks in their range within an area, and may shape their feeding behaviour as well as predator-prey relations and human interactions.






Matlab Scripts:


All Matlab Files that load DNG need the following loadDNG function file:


function [rawData, tinfo]= loadDNG(dngFilename)   
    if(exist(dngFilename,'file'))
        tinfo = imfinfo(dngFilename);
        t = Tiff(dngFilename,'r');
        offsets = getTag(t,'SubIFD');
        setSubDirectory(t,offsets(1));
        rawData = t.read();
        t.close();
    else
        if(nargin<1 || isempty(dngFilename))
            dngFilename = 'File';
        end
        fprintf(1,'%s could not be found\n',dngFilename);
        rawData = [];
    end
end


Full Poster Script:

%Experimental Vegetation Index Mapping program using DJI Mavic Pro DNG 16-bit images taken using InfraBlue Filter 
%Works with DNG Images only - needs loadDNG script in same directory
%(c)-J. Campbell MuonRay Enterprises Drone-based Vegetation Index Project 2017-2019 

rawData = loadDNG('DJI_0592.DNG'); % load it "functionally" from the command line

%DNG is just a fancy form of TIFF. So we can read DNG with TIFF class in MATLAB

warning off MATLAB:tifflib:TIFFReadDirectory:libraryWarning


%Transpose Matrices for image data because DNG image format is row-major, and Matlab matrix format is column-major.

%rawData = permute(rawData, [2 1 3]);

%Assume Bayer mosaic sensor alignment.
%Seperate to mosaic components.
J1 = rawData(1:2:end, 1:2:end);
J2 = rawData(1:2:end, 2:2:end);
J3 = rawData(2:2:end, 1:2:end);
J4 = rawData(2:2:end, 2:2:end);



figure(1);


J1 = imadjust(J1, [0.09, 0.91], [], 0.45); %Adjust image intensity, and use gamma value of 0.45

J1(:,:,1) = J1(:,:,1)*0.80; %Reduce the overall colour temperature by factor of 0.80

B = im2single(J4(:,:,1)); %VIS Channel Blue - which is used as a magnitude scale in NDVI
R = im2single(J1(:,:,1)); %NIR Channel (RED) which is the IR reflectance of healthy vegetation
G = im2single(J2(:,:,1)); %VIS Channel Green - which is used as a magnitude scale in ENDVI

L = 0.7; % Soil Reflectance Correction Factor for SAVI Index

%InfraBlue NDVI
InfraBlue = (R - B)./(R + B);
InfraBlue = double(InfraBlue);


%% Stretch NDVI to 0-65535 and convert to 16-bit unsigned integer
InfraBlue = floor((InfraBlue + 1) * 32767); % [-1 1] -> [0, 65535] for a 16-bit display range (use *128 instead for an 8-bit range)
InfraBlue(InfraBlue < 0) = 0;             % may not really be necessary, just in case & for symmetry
InfraBlue(InfraBlue > 65535) = 65535;         % in case the original value was exactly 1
InfraBlue = uint16(round(InfraBlue));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

NDVImag = double(InfraBlue);



%ENDVI - Green Leveraged

ENDVI = ((R+G) - (2*B))./((R+G) + (2*B));
ENDVI = double(ENDVI);

%% Stretch ENDVI to 0-65535 and convert to 16-bit unsigned integer
ENDVI = floor((ENDVI + 1) * 32767); % [-1 1] -> [0, 65535] for a 16-bit display range (use *128 instead for an 8-bit range)
ENDVI(ENDVI < 0) = 0;             % may not really be necessary, just in case & for symmetry
ENDVI(ENDVI > 65535) = 65535;         % in case the original value was exactly 1
ENDVI = uint16(round(ENDVI));             % change data type from double to uint16
% InfraBlue = uint8(InfraBlue);             % change data type from double to uint8

ENDVImag = double(ENDVI);


%SAVI - Soil Reflectance Leveraged
%The SAVI is structured similarly to the NDVI but with the addition of a
%"soil reflectance correction factor" L.
%L is a constant (related to the slope of the soil line in a feature-space plot).
%The appropriate value of L varies with the amount of green vegetation cover: in very high vegetation regions,
%L = 0; and in areas with no green vegetation, L = 1. Generally, L = 0.5 works
%well in most situations (i.e. mixed vegetation cover), so 0.5 is the usual default
%(this script uses L = 0.7, set above). When L = 0, SAVI reduces to NDVI.

SAVI = (((R-B)*(1+L))./(R+B+L));
SAVI = double(SAVI);

%% Stretch SAVI from [-1, 1] to [0, 65534] and convert to 16-bit unsigned integer
SAVI = floor((SAVI + 1) * 32767); % [-1, 1] -> [0, 65534] (scale by 127.5 instead for an 8-bit display range)
SAVI(SAVI < 0) = 0;               % clamp the low end, just in case & for symmetry
SAVI(SAVI > 65535) = 65535;       % clamp the high end, in case the original value was exactly 1
SAVI = uint16(round(SAVI));       % change data type from double to uint16
% SAVI = uint8(SAVI);             % alternative: change data type to uint8 for 8-bit output

SAVImag = double(SAVI);


% Display them all.
subplot(3, 3, 2);
imshow(rawData);
fontSize = 20;
title('Captured Image', 'FontSize', fontSize)
subplot(3, 3, 4);
imshow(R);
title('Red Channel', 'FontSize', fontSize)
subplot(3, 3, 5);
imshow(G)
title('Green Channel', 'FontSize', fontSize)
subplot(3, 3, 6);
imshow(B);
title('Blue Channel', 'FontSize', fontSize)
subplot(3, 3, 7);
myColorMap = jet(65535); % 16-bit jet colormap for mapping index values to colours
rgbImage = ind2rgb(NDVImag, myColorMap);
imagesc(rgbImage,[0 1])
%imshow(recombinedRGBImage);
title('NDVI Image', 'FontSize', fontSize)
subplot(3, 3, 8);
myColorMap = jet(65535); % 16-bit jet colormap for mapping index values to colours
rgbImage2 = ind2rgb(ENDVImag, myColorMap);
imagesc(rgbImage2,[0 1])
%imshow(recombinedRGBImage);
title('ENDVI Image', 'FontSize', fontSize)
subplot(3, 3, 9);
myColorMap = jet(65535); % 16-bit jet colormap for mapping index values to colours
rgbImage3 = ind2rgb(SAVImag, myColorMap);
imagesc(rgbImage3,[0 1])
%imshow(recombinedRGBImage);
title('SAVI Image', 'FontSize', fontSize)
% Set up figure properties:
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0, 0, 1, 1]);
% Get rid of tool bar and pulldown menus that are along top of figure.
% set(gcf, 'Toolbar', 'none', 'Menu', 'none');
% Give a name to the title bar.
set(gcf, 'Name', 'Demo NDVI Poster', 'NumberTitle', 'Off')

%Save NDVI, ENDVI and SAVI images to PNG format file
imwrite(rgbImage, 'NDVI.png');
imwrite(rgbImage2, 'ENDVI.png');
imwrite(rgbImage3, 'SAVI.png');




Python Script:




"""
Experimental Vegetation Index Mapping program using DJI Mavic 2 Pro
JPEG 16-bit combo images taken using InfraBlue Filter

This script uses rawpy

rawpy is an easy-to-use Python wrapper for the LibRaw library.
rawpy works natively with numpy arrays and supports a lot of options,
including direct access to the unprocessed Bayer data
It also contains some extra functionality for finding and repairing hot/dead pixels.
import rawpy.enhance for this

First, install the LibRaw library on your system.

pip install libraw.py

then install rawpy

pip install rawpy

%(c)-J. Campbell MuonRay Enterprises 2019
This Python script was created using the Spyder Editor

"""

import warnings
warnings.filterwarnings('ignore')
import numpy as np
from matplotlib import pyplot as plt  # For image viewing
import rawpy

from matplotlib import colors
from matplotlib import ticker
from matplotlib.colors import LinearSegmentedColormap

#dng reading requires libraw to work


#a selection of colour palettes for the index maps (cols3 is used below)
cols1 = ['blue', 'green', 'yellow', 'red']
cols2 =  ['gray', 'gray', 'red', 'yellow', 'green']
cols3 = ['gray', 'blue', 'green', 'yellow', 'red']

cols4 = ['black', 'gray', 'blue', 'green', 'yellow', 'red']

def create_colormap(args):  # args is unused; the colormap is fixed to cols3
    return LinearSegmentedColormap.from_list(name='custom1', colors=cols3)

#colour bar to match grayscale units
def create_colorbar(fig, image):
        position = fig.add_axes([0.125, 0.19, 0.2, 0.05])
        norm = colors.Normalize(vmin=-1., vmax=1.)
        cbar = plt.colorbar(image,
                            cax=position,
                            orientation='horizontal',
                            norm=norm)
        cbar.ax.tick_params(labelsize=6)
        tick_locator = ticker.MaxNLocator(nbins=3)
        cbar.locator = tick_locator
        cbar.update_ticks()
        cbar.set_label("NDVI", fontsize=10, x=0.5, y=0.5, labelpad=-25)


# Open an image
infile = 'DJI_0592.DNG'

raw = rawpy.imread(infile)
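
# Optional sketch (not part of the original script): the rawpy.enhance module mentioned
# in the header can be used to find and repair hot/dead pixels before postprocessing.
# Left commented out here as an illustration only:
#
#   import rawpy.enhance
#   bad_pixels = rawpy.enhance.find_bad_pixels([infile])
#   rawpy.enhance.repair_bad_pixels(raw, bad_pixels, method='median')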

rgb = raw.postprocess(gamma=(1,1), no_auto_bright=True, output_bps=16)

R = rgb[:,:,0]
G = rgb[:,:,1]
B = rgb[:,:,2]
# Get the red band from the rgb image as a numpy array;
# with the InfraBlue filter this channel carries the NIR signal
ir = np.asarray(R, float)



# Get the blue band, which carries the visible-light signal
# (the "red" term in the InfraBlue NDVI calculation)
r = np.asarray(B, float)



# Preallocate an array to hold the calculated NDVI value for each pixel
ndvi = np.zeros(r.shape)  # The NDVI image will be the same shape as the input band (overwritten below)

# Calculate NDVI
ndvi = np.true_divide(np.subtract(ir, r), np.add(ir, r))

'''
Extended Normalized Difference Vegetation Index (ENDVI) data gathered for vegetation health
monitoring can provide similar, but spectrally different, information compared
with traditional NDVI data.

Soil background, differing atmospheric conditions and various types
of vegetation can all influence the reflection of visible light somewhat differently.

ENDVI uses Blue and Green visible light instead of the Red-only method of the standard NDVI
algorithm. This allows for better isolation of plant health indicators.

A normal healthy plant will reflect both visible Green and NIR light while
absorbing Blue visible light; hence plants appear green in the visible spectrum and
glow brightly in the near-infrared.

In the ENDVI measurement we are calculating the relationship between
high absorption of this Blue light and high reflectance of Green and NIR waves
as an indicator of plant health.

Hence ENDVI analysis may, at times, impart more accurate or reliable information
regarding plant or crop health by additionally referencing information in
the green portion of the spectrum alongside the NIR.

'''
#ENDVI = [(NIR + Green) - (2 * Blue)] / [(NIR + Green) + (2 * Blue)]
# Get the green band from the rgb image


g = np.asarray(G, float)

#(NIR + Green)
irg = np.add(ir, g)

# Calculate ENDVI
#ndvi = np.true_divide(np.subtract(irg, 2*r), np.add(irg, 2*r))
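
# Sketch (an assumption, not in the original script): compute ENDVI into its own
# array instead of overwriting ndvi, so the NDVI result above is preserved.
endvi = np.true_divide(np.subtract(irg, 2 * r), np.add(irg, 2 * r))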

#SAVI - Soil Reflectance Leveraged

#The SAVI is structured similarly to the NDVI but with the addition of a
#"soil reflectance correction factor" L.
#L is a constant (related to the slope of the soil line in a feature-space plot).
#The appropriate value of L varies with the amount of green vegetation cover: in very high vegetation regions,
#L = 0; and in areas with no green vegetation, L = 1. Generally, L = 0.5 works
#well in most situations (i.e. mixed vegetation cover).
#So 0.5 (half) is the default value used below. When L = 0, SAVI reduces to NDVI.



#SAVI = ((NIR - Blue) * (1 + L)) / (NIR + Blue + L)

L = 0.5
one = 1

#rplusb = np.add(ir, r)
#rminusb = np.subtract(ir, r)
#oneplusL = np.add(one, L)

#ndvi = np.true_divide(np.multiply(rminusb, oneplusL), np.add(rplusb, L))
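
# Sketch (an assumption, not in the original script): compute SAVI into its own array,
# leaving the ndvi result untouched; with L = 0 this reduces to the NDVI above.
savi = np.true_divide(np.multiply(np.subtract(ir, r), 1 + L), np.add(np.add(ir, r), L))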


# Display the results
output_name = 'InfraBlueNDVI3.jpg'

fig, ax = plt.subplots()
image = ax.imshow(ndvi, cmap=create_colormap(colors), vmin=-1, vmax=1)  # pin the display to the NDVI range [-1, 1] to match the colorbar
plt.axis('off')
     
create_colorbar(fig, image)
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
     
fig.savefig(output_name, dpi=600, transparent=True, bbox_inches=extent, pad_inches=0)
# plt.show()