How can the structure of our brains help to predict Californian wildfires?

Neuron-inspired neural networks could help predict Californian wildfires

How can neurons and brain architecture inspire us in building better machine learning models? Image credit: Marcus Kauffman, via Unsplash.

This article is a runner-up for the Science Communication Competition organised by The Oxford Scientist, and supported by the Biochemical Society.


Our brains are often regarded as the most remarkable organ in the body: they house our personality, our consciousness and, importantly, our decision-making skills. Inside, the brain operates complex networks encompassing an orchestra of cell types, including immune cells, structural cells and, predominantly, neurons. Neurons are brain cells that communicate with one another using chemical and electrical signals, forming intricate networks that map across the brain, waiting to be activated. The way our roughly 86 billion neurons connect is inextricably linked with our ability to develop skills over time and with the decisions we make.

This complex network is a sophisticated mechanism for taking in information from our surroundings, processing it, and generating appropriate responses to external stimuli. These problem-solving capabilities make the brain a powerful predictor of many kinds of information, and therefore a great framework for predictive protocols. This exact mechanism has been exploited by artificial neural networks: deep learning algorithms that can generate predictions from multiple types of input data.

When your brain decides to generate a response, multiple signals from neurons are fired at specific junctions. Think back to this morning: you woke up and likely couldn't wait for the taste of your morning coffee. You boiled the kettle, poured the milk (oat or otherwise) and are now sipping on the cup, savouring the flavours. This triggers a cascade of signals that transmit information to your brain, culminating in that energy rush you get after your first cup. But how does this information get from your cup to your caffeine high? The key is the transmission of data through action potentials.

The generation of action potentials involves small charged particles, known as ions, that can enter or leave the cell depending on their type. This movement is regulated by the availability of channels and transporters in the membrane, which may or may not require energy to operate, akin to satellite navigation routes that can include or avoid toll roads. The net difference in ion concentrations inside and outside of the neuron changes the electrical charge of the cell. If this change pushes the cell's voltage past a "threshold", the neuron is activated: the ion imbalance grows rapidly, allowing the action potential to travel down the neuron like a train down a track. After this process has concluded in one region of the neuron, the ion balance is restored.
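To make the threshold idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard simplified model rather than anything described in this article; the voltages, leak rate and input currents are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire neuron; all parameter values are illustrative.
resting_v = -70.0    # resting membrane potential (mV)
threshold_v = -55.0  # firing threshold (mV)
leak = 0.1           # fraction of the deviation from rest that decays each step

v = resting_v
for t, input_current in enumerate([0.0, 2.0, 4.0, 6.0, 6.0, 6.0, 0.0]):
    v += input_current            # incoming ions nudge the voltage up
    v -= leak * (v - resting_v)   # pumps pull the voltage back toward rest
    if v >= threshold_v:
        print(f"step {t}: threshold reached, action potential fired!")
        v = resting_v             # ion balance is restored after the spike
```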

Once action potentials reach the end of the neuron, the electrical currents trigger the release of chemical messengers. These messengers, known as neurotransmitters, cross the physical gap between neurons, scientifically termed the synapse. Multiple neurons, known as pre-synaptic neurons, can feed into the same synapse, providing several types of input to the post-synaptic neuron depending on the neurotransmitter released. Once bound to the surface of the post-synaptic cell, the neurotransmitters trigger further ion changes in that downstream neuron, and the cycle repeats.

Whether the downstream neuron reacts depends on which signal is stronger, excitatory or inhibitory. If it receives an excitatory input, which increases the cellular charge, the neuron continues the propagation. In contrast, an influx of negatively charged ions can lower the charge, halting the proverbial train in its tracks. The response therefore favours whichever signal was strongest, activation or inhibition, allowing a given pattern of events to produce an output that is proportional to the input.
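In computational terms, this tug-of-war is often modelled as a simple signed sum. The sketch below, with made-up values, shows excitatory and inhibitory inputs competing to decide whether a model neuron fires.

```python
# Excitatory inputs are positive, inhibitory inputs are negative (toy values).
inputs = [+1.2, +0.8, -1.5, +0.6]  # three excitatory signals, one inhibitory
threshold = 0.5

net_charge = sum(inputs)           # the tug-of-war between signal types
fires = net_charge >= threshold    # only a strong enough net excitation fires

print(f"net input = {net_charge:.1f}, neuron fires: {fires}")
# net input = 1.1, neuron fires: True
```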

Connections between neurons are not static and can change in strength over time depending on usage. For example, when learning a new language, practising more frequently can strengthen the corresponding connections, improving your ability to remember the material. Conversely, if a pathway is underused and not reinforced over time, the connections can weaken, causing memory lapses and resulting in you ordering "cat" at the French bakery instead of "coffee". This is known as synaptic plasticity, an evolutionarily fundamental function that allows species to learn from their mistakes and develop skills over their lifetimes.
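A classic toy model of this "use it or lose it" behaviour is Hebbian-style reinforcement and decay. The sketch below uses assumed rates and is a caricature, not a biologically precise model.

```python
# Toy plasticity model: connections strengthen with use and fade without it.
weight = 1.0          # strength of the synaptic connection (arbitrary units)
learning_rate = 0.2   # how much each practice session reinforces the link
decay = 0.05          # fraction of strength lost per step without practice

for step in range(10):
    practised = step < 5              # practise for 5 steps, then stop
    if practised:
        weight += learning_rate       # repeated use strengthens the synapse
    else:
        weight -= decay * weight      # disuse lets the connection fade
    print(f"step {step}: weight = {weight:.2f}")
```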

Now that the concept of biological synaptic pathways has been explored, we can ask how it has been applied in the world of artificial intelligence, specifically in the so-called neural networks of predictive software, which are designed around the synaptic relationships in our own brains. Neural networks are made up of layers of unoriginally named "neurons" which function like biological neurons. This is a form of machine learning in which information is fed into the network in much the same way as we process external stimuli.

Customisable networks are composed of multiple layers, which can be divided into an input layer, hidden layers and an output layer. When information is given to the network, it is first broken down by the input layer. The layers are connected by channels, each of which carries a mathematical value known as a "weight". When the network is first built, these weights are set randomly. Each neuron then computes a weighted sum of its inputs, and an activation function determines whether that sum is high enough for the neuron to fire. This activation function is akin to the biological notion of action potential generation: if the threshold is reached, the neuron is activated and propagates its signal. In the hidden layers this process happens many times over, condensing the signals and refining the probabilities. This process is termed forward propagation.
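As a concrete illustration, here is a minimal sketch of forward propagation through one tiny network; the layer sizes, random starting weights and sigmoid activation are all illustrative choices, not a description of any particular published model.

```python
import math, random

def sigmoid(x):
    # Activation function: squashes the weighted sum into the range (0, 1),
    # playing the role of the biological firing threshold.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron takes a weighted sum of all inputs, then activates.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
# A tiny 3-input -> 2-hidden -> 1-output network with random starting weights.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b_hidden = [0.0, 0.0]
w_output = [[random.uniform(-1, 1) for _ in range(2)]]
b_output = [0.0]

x = [0.5, 0.1, 0.9]                      # the broken-down input data
hidden = layer(x, w_hidden, b_hidden)    # hidden layer fires (or not)
output = layer(hidden, w_output, b_output)
print(f"network prediction: {output[0]:.3f}")
```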

In the output layer, the possible outcomes are compared and the one with the highest value becomes the network's output. If a picture of a cat were fed in, each pixel would be broken down into an input for the network, and the output, if correct, would identify the picture as a cute cat. However, much like human learners, the network can make mistakes, and without training it can produce unexpected responses. For example, if you happened to feed the network only pictures of cats wearing yellow collars, it could assume that a yellow collar always equals a cat. This could end in some awkward situations: if a duck turned up wearing a yellow collar, the network could incorrectly label it as a cat. Networks must therefore go through rigorous training programmes, being fed many types of input data and then working backwards to correct their mistakes.
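Picking the outcome with the highest value is simply an "argmax" over the output layer's scores; the labels and numbers below are made up for illustration.

```python
# Hypothetical output-layer scores for three candidate labels.
scores = {"cat": 0.81, "duck": 0.12, "dog": 0.07}

# The label with the highest score becomes the network's answer.
prediction = max(scores, key=scores.get)
print(f"network says: {prediction}")   # network says: cat
```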

To facilitate this, neural networks undergo training in which they are also given the correct output, so they can compare their generated output against the real one. Just as a child learns the names of animals by having their mistakes corrected, the neural network undergoes a mechanism called back propagation. The network measures how far its prediction was from the correct answer, then goes back and adjusts the weight associated with each channel. This process is repeated many times, the predictions edging closer to the correct output, until the desired accuracy is reached. In this way, over time, neural networks become more accurate at predicting outputs from designated inputs.
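In practice, "adjusting the weights" means nudging each weight against the gradient of the error. The single-neuron sketch below, with a made-up data point and a hand-derived gradient, shows the idea at the smallest possible scale.

```python
# Back propagation on one neuron: y_pred = w * x, error = (y_pred - y)^2.
# The data point and learning rate are illustrative assumptions.
x, y = 2.0, 6.0          # input and the correct ("true") output
w = 0.5                  # randomly chosen starting weight
learning_rate = 0.1

for step in range(20):
    y_pred = w * x                    # forward propagation
    error = (y_pred - y) ** 2         # how wrong was the prediction?
    gradient = 2 * (y_pred - y) * x   # derivative of the error w.r.t. w
    w -= learning_rate * gradient     # nudge the weight to reduce the error

print(f"learned weight: {w:.3f} (ideal: {y / x})")  # approaches 3.0
```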

The way the network repeats its training to improve its accuracy can be compared to the biological concept of synaptic plasticity discussed earlier. Once the network has assigned weights such that it generates the correct output most of the time, training can end and the network can be applied to inputs with unknown outcomes. While this is an oversimplification of the mechanisms, many more complex types of neural network have been created for different applications.

Neural networks can be fed many types of information, not just images. One application has been using a customised network to make predictions about forest fire severity. In this case, the network is fed multiple types of data, such as how combustible the environment within a given area is and what ignition sources it contains. The network can then use this data, according to its training protocol, to estimate the probability of future wildfire events, producing "percentage area burned" maps: the amount of land predicted to burn in a region given the dataset provided. Looking back at the network's biological counterpart, this processing is comparable to how a post-synaptic cell receives various signals that culminate in a final output. The tool can be applied to a multitude of landscapes, from equatorial Africa to the Hollywood Hills, although some locations may require their own area-specific networks.
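To sketch how such a model might be set up, here is a toy example using scikit-learn's MLPRegressor on tabular data. The feature columns, values and burn percentages are entirely hypothetical; a real model would be trained on historical fire records for the region of interest.

```python
# Hypothetical sketch: predicting percentage area burned from landscape data.
from sklearn.neural_network import MLPRegressor

# Each row: [vegetation dryness, temperature (C), wind speed, ignition sources]
X_train = [
    [0.9, 38.0, 25.0, 4],
    [0.2, 18.0,  5.0, 1],
    [0.7, 33.0, 15.0, 3],
    [0.4, 24.0, 10.0, 2],
]
y_train = [62.0, 2.0, 35.0, 11.0]  # percentage of the area that burned

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Predict burn severity for a hot, dry, windy area with many ignition sources.
print(model.predict([[0.8, 36.0, 20.0, 4]]))
```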

Information from these networks can shape how landscapes are adapted to reduce the percentage of burned landmass, according to the risk factors identified from the network's inputs. While this is only one example of neural network utility, networks can be custom built for almost any situation, from how to bake bread to the perfect way to spend a weekend in Rome, highlighting their likely increasing role in our lives.

In summary, while our physical brains can be compared to artificial predictive programs, we shouldn't mistake neural networks for systems that truly understand the information they process. What makes humans unique is the ability to genuinely comprehend the external environment in multiple contexts and make organic decisions; neural networks merely present this illusion through rigorous training. If the networks don't understand the input they receive beyond their assigned weights, could there be scope for this in the future? Currently, the ceiling of this technology is unknown but, like our brains, it may hold limitless potential.
