Friday, 22 April 2016

Integrating the Concept of Meta-heuristics to Neural Networks

Exploring the topic of metaheuristics (see article here) further, we can structure ways to implement them within a neural network structure, which is useful for solving certain problems.

When using metaheuristic algorithms, we define the search space the algorithm works within based on constraints, i.e. we frame the individual signalling nodes within a configuration space.

We define a constrained metaheuristic in terms of a network structure. The constraints themselves are really imposed by the weights of the inputs, $W_{i,j}$.


For a given input vector:

$$\mathbf{x} = (x_1, x_2, \ldots, x_n)$$

In a neural network structure we weight the inputs and sum them through the $k$ hidden layers in the following summation (for each node $j$ of a given layer):

$$s_j = \sum_{i=1}^{n} W_{i,j}\, x_i$$

Our activation function, A(x), which passes the weighted values through the $k$ layers, is then of the same form as the signalling basis function of a metaheuristic algorithm, which is based on the Gaussian form of the signalling intensity and is monotonically decreasing.


The activation then takes the form:

$$A(x) = \frac{1}{1 + e^{-x}}$$

which is a sigmoidal activation function.




This is exactly the same form of equation as our metaheuristic activation function, as discussed in the previous article of this study.

Hence the “hidden” layers of the neural network sum over all of the weighted paths the inputs take through the sigmoidal activation functions of the network structure.

The output is then a finite, discrete integer response, which could be represented on a digital number line, characteristic of the initial input vector that has been effectively broadened by the k-layered neural network:



Metaheuristic signalling processes are then reduced simply to the weights the input vector acts on in the neural network picture:





I represent this construct in the diagram below, in which I have created a Radial Basis Signal Function that is passed through a neural network in MATLAB. The network receives a signal and creates an output response characteristic of this input, broadened by the $k$ paths of sigmoidal activation function signalling within the network:





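As a rough illustration of this construct, here is a minimal MATLAB sketch (not the original program; the weights, layer count and signal shape are placeholder assumptions chosen purely for illustration):

    % Illustrative sketch only: a Gaussian radial basis input signal passed
    % through k layers of sigmoidal units. Weights are random placeholders,
    % not trained values.
    r      = linspace(-5, 5, 200);       % distance axis
    signal = exp(-r.^2);                 % radial basis (Gaussian) input signal

    k = 3;                               % assumed number of hidden layers
    h = signal;
    for layer = 1:k
        W = 0.5 + 0.1*randn(size(h));    % placeholder weights W_(i,j)
        h = 1 ./ (1 + exp(-W .* h));     % sigmoidal activation A(x)
    end

    plot(r, signal, r, h);               % compare input with broadened response
    legend('RBF input', 'k-layer network output');
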
Non-stochastic processes can be simulated quite well by neural networks in this fashion. Moreover, by using metaheuristics to design specific power-law-based signalling activation functions, we can structure simulations using neural networks in real-world design applications.

It also turns out that one can expand this mathematical reductionist method on the theories of metaheuristics and neural networks to join them together even further, using a path integral interpretation of machine signalling. This can in principle provide a somewhat more fundamental view of how self-organizing synchronization can occur in certain networks that have not been explicitly programmed.

Sunday, 17 April 2016

Meta-Heuristics and Universal Power Law-Based Signalling Algorithm

Introduction - Defining the criteria of heuristic and meta-heuristic behaviour:


Heuristic algorithms typically intend to find a good solution to an optimization problem by ‘trial-and-error’ in a reasonable amount of computing time. Here ‘heuristic’ means to ‘find’ or ‘search’ by trials and tallying the hits and misses. There is no guarantee to find the best or optimal solution, though it might be a better or improved solution than an educated guess.

Any reasonably good solution, often suboptimal or near optimal, would be good enough for such problems. Broadly speaking, local search methods are heuristic methods because their parameter search is focused on the local variations, and the optimal or best solution can be well outside this local region. However, a high-quality feasible solution in the local region of interest is usually accepted as a good solution in many optimization problems in practice if time is the major constraint.

Metaheuristic algorithms are higher-level heuristic algorithms. Here, ‘meta-’ means ‘higher-level’ or ‘beyond’, so metaheuristic means literally to find the solution based on an "endowment", often a power law, that creates the appearance of learned behaviour. Moreover, like heuristic algorithms, metaheuristic algorithms also include randomized, trial-and-error process elements as well. Together, the endowment and the randomization create behaviour in the metaheuristic which is greater than the sum of the individual parts, making for non-programmed emergent behaviours.

Broadly speaking, metaheuristics are considered higher-level techniques or strategies which combine lower-level techniques and tactics for exploration and exploitation of the huge parameter search space.

This kind of search can be pictured as follows: imagine a particle which travels along a search-space landscape function f(x) which contains a shallow hole and a deep hole.



The deep hole represents the global optimum of the search space (i.e. the best possible solution for f(x)), and the shallow hole represents an approximation of the solution, i.e. a local optimum of f(x).

The objective of the search procedure across this space should therefore be to get the particle to the bottom of the global optimum and, furthermore, keep it there.

If you were simply to exploit an instruction to move downhill in a particular direction, depending on where the exploitative procedure assumes the best solution lies, then the particle will likely fall into a local optimum rather than the global optimum and stay there, since a procedure designed to find the global optimum by travelling downhill would not be able to go uphill.

However, if you introduce randomization, which could be noise or heat (i.e. "simulated annealing"), so that a random motion is superimposed on the exploitative procedure searching for a hole to go down, then with the "right" amount of randomness you can effectively ensure the particle escapes the shallow local optimum but cannot escape the deep global optimum.

The "right" amount of randomness introduced to the exploitative procedure is determined by some kind of predetermined or learned tuning, so that this global optimization can happen if the search is allowed to run over a long enough period of time. Running the metaheuristic procedure for long enough will then allow a global optimum eventually to be found. This, in essence, is what a metaheuristic algorithm is all about.
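
As a minimal MATLAB sketch of this idea (the landscape, step size and cooling schedule are assumed values chosen only for illustration), a simulated-annealing search over a landscape with one shallow and one deep hole might look like:

    % Illustrative sketch: simulated annealing on a 1-D landscape with a
    % shallow hole (local optimum) and a deep hole (global optimum).
    f = @(x) -exp(-(x - 3).^2) - 0.4*exp(-(x + 3).^2);  % deep hole at x = 3, shallow at x = -3

    x = -3;                % start the particle in the shallow hole
    T = 1.0;               % initial "temperature", i.e. amount of randomness
    for step = 1:10000
        xNew = x + 0.5*randn;              % random exploration move
        dE   = f(xNew) - f(x);
        if dE < 0 || rand < exp(-dE/T)     % always accept downhill; sometimes accept uphill
            x = xNew;
        end
        T = 0.999*T;                       % anneal: slowly reduce the randomness
    end
    fprintf('Final position: %.2f (the deep hole is near x = 3)\n', x);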

Meta-Heuristics and Universal Power Law-Based Signalling Algorithm:

The most basic meta-heuristic signalling algorithm we can construct is one based on some sort of power law for signalling.

Light signalling can of course be represented by a power law. Light signal transmission, like light in general, obeys the inverse-square law in its propagation through space. That is to say, the intensity of the light, I, is inversely proportional to the square of the distance, r, from the light source:

$$I \propto \frac{1}{r^2}$$

Therefore the light signal gets weaker and weaker as the distance increases.
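
For example, doubling the distance from the source cuts the received intensity to a quarter: $I(2r)/I(r) = r^2/(2r)^2 = 1/4$.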

The metaheuristic algorithm must therefore contain a function which monotonically decreases its signalling power between each pair of transceiver nodes with distance, r, under a discrete light absorption coefficient, $\gamma$, for the physical medium.

Under a given physical medium, the Gaussian form of the light intensity is then determined by:

$$I(r) = I_0\, e^{-\gamma r^2}$$
The universal power-law signalling function is then written so as to be monotonically decreasing:

$$\beta(r) = \beta_0\, e^{-\gamma r^m}$$

where $r$ is the distance between the different nodes,

$m$ is the power-law parameter for the signal; for light signals $m = 2$ (for signals that use audio, chemical, or other power laws it will be different),

and $\beta_0$ is the emitted signal at $r = 0$.

The distance between any two nodes, $i$ and $j$, at $X_i$ and $X_j$ respectively, is the Cartesian distance:

$$r_{i,j} = \lVert X_i - X_j \rVert = \sqrt{\sum_{k=1}^{d} \left(X_{i,k} - X_{j,k}\right)^2}$$

where $X_{i,k}$ is the $k$th component of the spatial coordinate $X_i$ of the $i$th node,

and $d$ is the number of dimensions.

The time evolution of each individual node, $i$, over its signalling cycle is then governed by the fact that each typical node, $i$, is coupled to the most intense, i.e. brightest, node it sees, $j$, by the following equation:

$$X_i^{t+1} = X_i^{t} + \beta_0\, e^{-\gamma r_{i,j}^m} \left(X_j^{t} - X_i^{t}\right) + \alpha\, \epsilon_i^{t}$$

where $\epsilon_i^{t}$ is a vector of random numbers.
The second term is the signalling term; in most implementations $\beta_0 = 1$. This is the physical endowment, the power-law exploitation process in the metaheuristic.

The third term defines the randomization of the possible propagation paths taken, using a randomization parameter, $\alpha$, which in most implementations is distributed in the domain $[0, 1]$. This is the heuristic, "trial and error", exploration process in the metaheuristic.

Interestingly, we can write this equation symmetrically as:

$$X_i^{t+1} = (1 - \beta)\, X_i^{t} + \beta\, X_j^{t} + \alpha\, \epsilon_i^{t}$$

which, when presented in terms of node distance, reads:

$$X_i^{t+1} = \left(1 - \beta_0\, e^{-\gamma r_{i,j}^m}\right) X_i^{t} + \beta_0\, e^{-\gamma r_{i,j}^m}\, X_j^{t} + \alpha\, \epsilon_i^{t}$$

We can then define the signalling activation function, $A$:

$$A(r_{i,j}) = \beta_0\, e^{-\gamma r_{i,j}^m}$$

which reduces our equation to a more fundamental signal field representation:

$$X_i^{t+1} = \left(1 - A(r_{i,j})\right) X_i^{t} + A(r_{i,j})\, X_j^{t} + \alpha\, \epsilon_i^{t}$$

Based on these formulae, we have the corresponding rules:

·         Each node in the field will emit and absorb discrete light signals equally.

·         The signals are proportional to the intensity of the light emitted, and both decrease with increasing distance between the nodes.

·         For any two flashing nodes in the field, the signal absorbed with the highest intensity will induce the strongest coupling.

·         If a node detects no signal of higher intensity than its own, it will designate itself as isolated and signal randomly.

·         The light intensity of any signalling node is determined by the landscape of the field, itself determined by the nature of the activation function.
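
These rules amount to a firefly-style signalling algorithm, and a minimal MATLAB sketch of them is given below (the landscape, node count and parameter values are assumptions for illustration; nodes here move towards any brighter node rather than only the brightest):

    % Illustrative sketch: power-law signalling nodes searching a 2-D landscape.
    f = @(X) sum(X.^2, 2);            % example landscape; brightest point at the origin

    nNodes = 25; d = 2;               % number of signalling nodes and dimensions
    X = 4*rand(nNodes, d) - 2;        % random initial node positions
    beta0 = 1; gamma = 1; alpha = 0.2; m = 2;   % parameters as defined above

    for t = 1:200
        I = -f(X);                    % signal intensity: higher is better here
        for i = 1:nNodes
            for j = 1:nNodes
                if I(j) > I(i)        % couple node i to a brighter node j
                    r = norm(X(i,:) - X(j,:));        % Cartesian distance r_ij
                    beta = beta0*exp(-gamma*r^m);     % power-law signalling term
                    X(i,:) = X(i,:) + beta*(X(j,:) - X(i,:)) ...
                           + alpha*(rand(1,d) - 0.5); % randomized exploration term
                end
            end
        end
    end
    % After enough signalling cycles the nodes should cluster near the optimum at (0,0).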

Observing Metaheuristic Exploration vs Exploitation in a Search Space:

From the rules derived from the physical parameters used to construct the pre-programmed signalling mechanism, we can see that the metaheuristic nature of the signalling algorithm is based on the behaviours of exploration and exploitation.

Exploration in the metaheuristic algorithm is achieved, in the case of the above signalling algorithm, by the use of randomization, which enables the algorithm to jump out of any local state of coupling between two or more signalling nodes and explore the search for more couplings in the network over a potentially larger area.

Randomization is also used for local search around the current state if the signals are limited to a local region. When the signalling frequency is low, randomization then allows for exploration of the search space on a larger scale. Fine-tuning the right amount of randomness and balancing local search and global search are crucially important in controlling the performance of the synchronizing metaheuristic algorithm.

Exploitation is the use of local knowledge of the search and of the solutions found so far, so that new search moves can concentrate on the local regions or neighbourhood where the optimum may be close; however, this local optimum may not be the global optimum. Exploitation tends to use strong local information such as gradients, the shape of the mode (such as convexity), and the history of the search process.

Observations, using simulations, of the convergence behaviour of common optimisation algorithms suggest that exploitation tends to increase the speed of convergence, while exploration tends to decrease the convergence rate of the algorithm. On the other hand, more exploration increases the probability of finding the global optimum, while strong exploitation tends to make the algorithm become trapped in a local optimum.

The relationship between exploration and exploitation can both be seen with a simple implementation of the metaheuristic derived above, in the context of sensory exploration and contrastive-learning exploitation based on the activation signalling function.

The sample code is implemented as:

    % Setup (assumed values; the original post does not list them):
    N = 10000;                 % number of signalling iterations
    n = 20;                    % length of the regression (memory) vector
    x = ones(N,1);             % underlying external signal to be sensed
    sigmaz2 = 0.5*ones(N,1);   % internal signalling noise scale per step
    Y = zeros(n,1);            % regression vector (contrastive memory)
    z = zeros(N,1); y = zeros(N,1); xhat = zeros(N,1); % preallocate outputs

    for k=1:N  % Generate sensed signals, given that the initial values are instincts to exploit
        z(k,1)=sigmaz2(k,1)*(0.5-randn); % Generate internal signalling based on activation function A(r)
        y(k,1)=x(k,1)+z(k,1); % Generate input, i.e. an external signal that is sensed - exploration
        Y=[y(k,1); Y(1:(n-1),1)]; % Learn by contrast: shift regression vector and load in new value - exploitation (or endowment)
        xhat(k,1)=mean(Y); % Mean of the contrastive learning procedure
    end

We plot the exploration and exploitation convergence plots for 100, 1000 and 10,000 iterations:

100 iterations:


1,000 iterations:



10,000 iterations:



As seen and discussed, there is a fine balance between the right amount of exploration and the right degree of exploitation. Despite its importance, there has never been any known practical guideline for this balance, apart from generic tuning, as regards learning behaviour.

It does not seem practical, in any sense, to find such a guideline by simply reading convergence plots alone. The plots are simply data, and data that is at least partially recalcitrant, since the overall behaviour of the system following the algorithm has not itself been pre-programmed, nor is the system based entirely on probability.

Therefore, it makes more sense to think about the behaviour and data we empirically observe using an abstract model of a physical system, one that reduces the behaviour, otherwise taken for granted, to a simple signalling action principle based on the environmental parameters of the system. We will be exploring the consequences of this further and searching for different ways of understanding metaheuristics in the realm of information theory, in particular in the behaviour of systems which do not arise from prior programming.

Next Part:  Integrating the Concept of Meta-heuristics to Neural Networks


Friday, 8 April 2016

Conductive Plastic with Carbon Nanotube-Polyurethane Colloidal Gel


In this demonstration I am going to show how I have created a conductive carbon nanotube polymer film using a gel made by mixing multi-walled carbon nanotubes with a water-based polyurethane gloss.




The nanotubes used are multi-walled carbon nanotubes (CNTs) produced by chemical vapor deposition (CVD). Each CNT is typically 10-30 nm in diameter and 5-20 microns in length.









First, it is key to do this in a well-ventilated area while wearing a gas mask and protective gloves; I have found military-grade face and hand protection is optimal. Free carbon nanotubes can be dangerous and potentially carcinogenic if inhaled. This is one of the reasons why we want to trap the carbon nanotubes in a polymer gel structure before applying them. Nevertheless, even when bonded to a material, it is important to be responsible and to treat carbon nanotubes as you would any nanofibrous material, such as asbestos.






We are using a water-based polyurethane gel, and since CNTs do not dissolve in water we have an effective colloidal gel.
The gel is made by pouring a small volume of polyurethane into a glass mixing jar.
The carbon nanotubes are then poured into the mixing jar.
This is then mixed until we get a relatively even CNT colloidal gel.




We can then paint the gel on a suitable substrate, either plastic or paper. The substrate should be fibrous in order to properly bind the CNT+Polyurethane gel complex into the material.

Cellulose acetate, used in laser printers and overhead projector transparencies, is an ideal fibrous material to apply the CNT colloidal gel to. Cellulose acetate fibers are noted for their absorption, particularly of water-based solvents, for not shrinking easily under absorption (unlike cellulose paper), and for dyeing easily, i.e. absorbing colorant particles, which is one of the reasons the material is used in laser printing in the first place.



The gel is then allowed to anneal on the substrate at a low temperature, in this case in the heat of the sun.



When the conductive layer is dry, we can then test the conductivity using a coin cell battery and a voltmeter.



As seen in the video, there is some loss across the conductive gel layer, but this does not affect the function of an LED that uses the material to complete a circuit.

More importantly we have trapped the CNTs in a polymer gel and bonded that gel onto a fibrous polymer surface, namely the cellulose acetate substrate.

In short we have a flexible, durable and safe conductive material.


We can expand this idea further to create conductive CNT polymer frames for use in UAVs and other aerospace applications.



These frames are strong, resistant to electrical discharge, are highly conductive and absorb heat from the sun easily (important for maintaining a stable temperature in cold, perhaps icy, flying conditions).

More interesting still is the concept that it may be possible to design the frame so that the power source for the UAV is contained in the frame itself, by using the conductive polymer frame as an electrode in a battery or supercapacitor.

Due to the potential danger of CNTs, non-experimental CNT frames for aerospace applications would most likely be further coated with a lacquer polymer to prevent any splintering of CNTs away from the frame under wear and tear. 

Another idea to keep in mind is the fact that, in a water-CNT colloidal solution, we can use a magnet to orient the carbon nanotubes.



Hence in a water-based polyurethane colloidal solution of CNTs, we can in principle orient the annealing position of the nanotubes by placing the substrate over an array of strong permanent magnets. This would also trap the nanotubes further into the structure and mitigate their ability to escape. 

This may also be used as a method to develop high density arrays of oriented carbon nanotube polymer films for use in energy storage technology.

As always, everything we have done here is an experiment and requires much more development to see the different outcomes of this technology.



Friday, 1 April 2016

Applied Nanotechnology for Cleaning Optics - Silica Nanoparticle Microfiber Cloths.



Microfiber cloths are now fairly ubiquitous for glass/plastic optics cleaning applications in many fields and industries.

Cleaning with microfiber products is fast, easy and environmentally friendly. Microfiber cloths are attractive in their ability to lift away even difficult forms of dirt without the use of chemical solvents.

Microfiber cloths are also soft and do not scratch the surface they are cleaning, while remaining effective across a wide range of cleaning applications.

We can examine the differences between microfiber cloths and natural fibers by looking at their micro-structure (fig-1) and comparing their effects.

Fig-1 - Microstructure of an Artificial Micro-Fiber and a Natural Fiber (i.e. Cotton)



Natural single fibers, such as cotton, more or less simply move or push dirt and dust from one place to another.

Microfibers, owing to their microscopic structure, actually ‘scrape’ the dirt or stain from the surface, and then store the dirt particles in the fabric until it is washed.

Microfibers thus trap dirt and dust inside the cloth and do not spread it around. The user can later wash the cloths with either water or a solvent such as acetone or ethanol.

The scheme of cleaning dirt by natural fibers and microfibers is shown below (fig-2).

Fig-2 - Cleaning Scheme of a Natural Fiber vs a MicroFiber. The Natural Fiber merely sweeps a dirt layer whereas a MicroFiber sweeps and scoops up the fine nano-sized dust/dirt particles



By treating microfiber cloths with SiO2 (Silica) nanoparticles in solution we can "trap" a concentration of nanoparticles inside the structure of each microfiber (Fig-3) which is then replicated throughout the superstructure of the cloth.

Fig-3: Nano-Treatment of MicroFiber with SiO2 (Silica) Nanoparticles


This allows us to create, among other things, superhydrophobic ("water-fearing") and oleophobic ("oil-fearing") cloths, often demonstrated by the so-called "Lotus Effect", i.e. suspending a condensed droplet on the surface of the material (Fig-4).

Fig-4: "Lotus Effect" of a water droplet on a superhydrophobic microfiber cloth


Owing to their fine, compact structure, microfiber textiles also offer excellent air filtration. Hence, although we have hydrophobicity and oleophobicity, air molecules can still pass through the cloth unencumbered.

The trapped silica nanoparticles within the microfiber cloth can then be used to buff conventional silica glass or polymer plastic surfaces, removing fine dirt and creating high optical transparency without causing erosion.

Moreover, by buffing with the silica-nanoparticle-saturated cloth we can, in effect, deposit a fine layer of silica nanoparticles on top of lightweight polymer plastic optics (Fig-5).

Fig-5: Lightweight and Cheap to Produce Polymer Plastic Optics have a wide range of uses today


In practice this layer may not be even; however, it may help fill in some grooves in the uneven surface, or perhaps create a few droplet-seeding points on the surface of the polymer optics where a water drop can form and roll off much more easily, instead of water vapor fogging up the optics and greatly reducing visibility.

In any case, this can in principle give otherwise ordinary polymer optics the effect of having a thin layer of what is effectively nanoparticulate glass, which can reduce fogging and increase durability, since polymer optics fog and corrode much more easily than glass.

The process can be accomplished by spraying an ordinary microfiber cloth with silica nanoparticles in a water suspension until the cloth is completely saturated and dripping, and then leaving the cloth to dry in air.

We can then, by simply sweeping the saturated cloth across the optic surface, buff a layer of nanoparticulate silica onto a polymer optic's surface (Fig-6). The buffing procedure is represented in the diagram below.

Fig-6: Silica Nanoparticle Treatment of Conventional Optic Surface


The saturation treatment is then repeated before each buffing pass to ensure that the cloth is laden with as many silica nanoparticles as possible, so that the greatest amount is deposited on the surface of the optics, giving the most even distribution of the layer possible.

This can give us, in principle, a new and easy way to optimize polymer-based optics, which are relatively cheap and lightweight, for applications where the weight and expense of conventional glass optics is a problem for our goals.

Video demonstration:




Video and Project Designed and Developed by MuonRay Enterprises Ireland.