Everything in this Universe is quantum: stars, galaxies, planets, Earth, and its Nature. Reality is generated inside our brains by neurons processing micro-voltages, through the probabilistic collapse of many wave functions at once.
Ingenuity of Nature
Analog computation is a direct consequence of wave function collapse, and that is why Nature prefers to work this way; other possible benefits include decentralization, “low-power computing,” and “computing placement.” The sum of these benefits is clear robustness and adaptability. A statistical approach makes no sense here, because Nature cannot predict when the next asteroid will hit Earth.
Still, it can survive such cataclysmic events thanks to one significant inherent advantage: it can evolve. It is precisely this evolutionary characteristic that makes it so resilient. The less adaptable organisms disappear over time. There is no statistical data within Nature; Nature only tries, through evolution.
Evolution and adaptability require, among other things, the capacity for local learning without prior knowledge of the environment, which is why we have, as one example among many, thousands of different species of ants. There is also a need for a type of intelligence that is not very power-hungry, because Nature does not necessarily favor large biological organisms; there are species such as elephants and whales, of course, but insects far outnumber all other animals.
If you need to eat a lot, you are restricted to certain areas of this planet, those that provide enough food to be transformed into energy. There are exceptions, of course. The most successful organisms on this planet seem to be small; they have some kind of swarm or group intelligence, and they are quite adaptable to various environments.
We know ants are 160 million years old, highly adaptable, and endowed with swarm intelligence; they survived at least one extinction-level asteroid impact and are highly versatile. Nature evolves. Evolving means, broadly, adapting to a given environment and/or context. Nature also avoids spending too much energy on evolution, because that would decrease its overall robustness and make it prone to extinction: the energy on this planet is limited, and it seems more reliable to have almost 9 million species than a power-hungry few thousand. Diversity is a direct consequence of adaptability.
So, Nature doesn’t rely on statistics; quite the contrary, it benefits greatly from randomness, the very same randomness we perceive in the quantum world. Nature somehow inherits quantum properties and benefits from them through random effects, closing the circle: the quantum world, which is to say the whole universe, is random. Nature uses wave function collapse and thereby evolves, benefiting from random events such as asteroid impacts, solar flares, or high levels of oxygen in the atmosphere.
We can affirm that Nature evolves because it doesn’t need a lot of energy, it is decentralized, and it benefits from random events. Biological neurons work in a similar way, naturally: no two brains are alike, no two have the same neuroplasticity or exactly the same number of neurons, and every single neuron within a brain has a different state (weight).
The general “working” principle is the same: human brains have roughly the same structure, regions, volume, weight, “power consumption,” and “requirements.” Everything evolved, or was designed, to work towards adaptability, to enable and benefit from evolution, and to be as robust as possible. Biological brains can work for at least 100 years, can keep learning new things, and their power consumption stays the same while they do thousands of different things at once. They can adapt to new environments and contexts.
Neuromorphic engineering
Also known as neuromorphic computing, it takes an approach similar to that of biological brains, for two main reasons: it can achieve low power consumption, and it can solve the local-processing problem.
For that to happen, a new type of electronic component had to be researched: the memristor. In simple terms, memristors are resistors with memory, whose resistance we can control in real time. This is important because dynamically controllable resistance values amount to different weights.
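As a rough illustration of “resistance with memory,” the well-known linear ion-drift memristor model describes a resistance that drifts with the current that has passed through the device. The sketch below is only that, a sketch; every parameter value is an assumption for illustration, not data from any real device.

```python
# Minimal sketch of the linear ion-drift memristor model.
# All parameter values are illustrative assumptions, not device data.
R_ON, R_OFF = 100.0, 16000.0       # fully doped / undoped resistance (ohms)
MU_OVER_D2 = 1e-14 / (10e-9) ** 2  # ion mobility / film-thickness^2 (assumed)

def apply_pulses(voltages, x=0.5, dt=1e-6):
    """Euler-integrate the doped-region state x and return the memristance."""
    for v in voltages:
        m = R_ON * x + R_OFF * (1.0 - x)  # current memristance (the "weight")
        i = v / m                          # Ohm's law
        x += MU_OVER_D2 * R_ON * i * dt    # state drift driven by current
        x = min(max(x, 0.0), 1.0)          # clamp to physical bounds
    return R_ON * x + R_OFF * (1.0 - x)

# Positive pulses lower the resistance (a strengthened "synapse");
# negative pulses raise it: the resistance acts as a stored, tunable weight.
strengthened = apply_pulses([1.0] * 1000)
weakened = apply_pulses([-1.0] * 1000)
```

The point of the model is that the device remembers: stop applying voltage and the resistance, i.e. the weight, stays where it was, which is exactly the behavior a hardware synapse needs.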
In turn, this means memristors can be turned into something very similar to a hardware biological synapse and neuron, which enables a different approach to what true AGI can be. It is both interesting and notable that some of these memristors also work with micro-voltages, in a completely analog way.
So: analog, low power, and the ability to learn on-site, at the hardware level. Just as in a biological brain, the values differ slightly and can “evolve” within given parameters, meaning that two such small neuromorphic brains, placed on two different chips, can recognize the same patterns while being inherently different. There are multiple implications, but two are worth mentioning.
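A toy software analogy, assumed for illustration and not tied to any particular chip, makes the “same patterns, different internals” point concrete: two perceptrons starting from different random weights both learn to recognize the same pattern (here a logical AND), yet their final weights remain different.

```python
# Toy analogy: two "brains" with different random initial weights both
# learn the same pattern (AND), ending up behaviorally identical but
# internally different. Illustrative only; not a neuromorphic simulator.
import random

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(seed, epochs=500, lr=0.1):
    """Train a perceptron from a seed-specific random initialization."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # nonzero only on a mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

brain_a = train(seed=1)
brain_b = train(seed=2)
```

After training, `brain_a` and `brain_b` hold different weights yet classify every input identically, which is the software shadow of two physically distinct neuromorphic chips recognizing the same patterns.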
First implication: what is a product? Small brains built on chips, capable of recognizing tens of thousands of patterns while consuming little power, will be everywhere. If you have, say, a device for cleaning your room with such a brain-on-a-chip, capable of recognizing tens of thousands of patterns, and therefore of doing tens of thousands of different things, are we still talking about a vacuum cleaner, or about something else entirely?
It won’t be just a smart vacuum cleaner capable of cleaning more difficult carpets; it will be a vacuum cleaner capable of driving your car while teaching you about pottery and art at the same time. Chip capabilities will far exceed product functionalities, so what will a product be? Whatever its owner decides.
Second implication: hints of true AGI? Neuromorphic engineering, at least as we are building it, works in a way very similar to natural intelligence; the only artificial parts are the materials used to build our brain-on-a-chip. Electrical power laws, and that’s it. A small form factor, low power consumption, and tens of thousands of recognizable patterns on a single chip mean something more: capabilities are transcending the technological foundations. In a way, the expression “brain-on-a-chip” shows that context-aware, adaptive machines are not on the horizon; they are already here.