
Self Mutating AI


Artificial Neural Networks

The problem with Artificial Neural Networks (ANNs) is that they are unbelievably simple when you isolate each “neuron”. The complexity comes from the abundance of them. Individually, each one has a single feature attached to it, a weight, that influences its interaction with the following layer of neurons until a decision is made. But this doesn’t make sense. When we think of artificial intelligence we tend to think about an intelligence beyond our own. And when we think about our current brain structure, there are multiple parts of the brain responsible for particular things that intertwine and assist each other. E.g. the occipital lobe processes vision, the parietal lobe determines where the body is spatially, and the frontal lobe makes the decision not to hit a pole face first by moving to the side or something. Each section of the brain is firing to assist the others, either simultaneously or so quickly that it might as well be simultaneous. When we think of trying to bitch-slap a mosquito out of the air, how is this tiny fucker able to zig-zag past our hand and dodge it almost every time?! There’s some real magic going on with that biology. Or a huntsman spider moving all 8 legs and jumping to a piece of bark by pumping blood to the tips of its legs so they act spring-like. It’s mind-boggling.

Let’s think for a second though. The first step is to get intelligence to a level where 2 designated parts can work together. Let’s take the context of Mario: when do we perform an action? We visualise the environment and we execute the action in relation to it. When we see a Goomba running over to kick our ass, we jump on its head and uno-reverse it. So there’s an interconnection between the two sections of our “brain”. But to take intelligence one step further than our own, we cannot be confined to the constraints of our own anatomy. This requires out-of-the-box creativity that breaks out of our pre-existing knowledge of brains and how they work. However, we 100% can use everything we know as a reference to invent something new. E.g. a combination of our brain and how bacteria make decisions.

You may be thinking, wtf? bacteria don’t make decisions…

But they do. They use sensory receptors that are altered by the environment to influence their movement. They receive information about the environment and come to a decision. Not as complex as the human mind, but you get my drift.

So we spoke about the macro level of the human brain (the lobes), but when we dive into the cells within the brain, the neurons, what really are these?


Why Not AGI?

There is a spectrum of true AGI, but at the very core it depends on how capable one is of adapting.

To begin, AGI is the ability to reason about what decision to take given an environment. When the agent is placed in an environment it’s unfamiliar with, it will call upon past experience to estimate the best action, and if there isn’t any, it will have to trial-and-error by guessing the best action to take. And so it seems like Q-learning with the ability to reason about new environments is the way forward. I personally align with Aristotle’s philosophy of learning from empirical investigation.
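A minimal sketch of that “use past experience, otherwise trial-and-error” loop, assuming a tabular store of past experience (the `q_table`, `actions`, and state strings here are all made up for illustration, not a claim about how it should actually be represented):

```python
import random

# Hypothetical table of past experience: (state, action) -> estimated value.
q_table = {}
actions = ["jump", "run", "duck"]

def choose_action(state, epsilon=0.1):
    """Lean on past experience when the state is familiar, otherwise guess."""
    known = [(a, q_table[(state, a)]) for a in actions if (state, a) in q_table]
    if not known or random.random() < epsilon:
        return random.choice(actions)                 # trial-and-error in unfamiliar territory
    return max(known, key=lambda pair: pair[1])[0]    # best action seen so far

print(choose_action("goomba_ahead"))  # nothing known about this state yet -> random guess
```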

A very interesting case for choosing an action is the social one. When we’re in an argument we may have to substitute logical reasoning for emotional reasoning, as the consequences of the logical answer may actually ruin the outcome despite being objectively correct. The subjective, emotional reasoning may be better for solving the situation and thus achieving a satisfactory result that defuses it - a great example is a president catering what they say to the masses to gain their votes, only to do what they truly want after being elected - saying what people want to hear (which really is manipulation).

There are a few more things to cover to truly achieve AGI though.

Self Mutation

The system needs to be able to continuously update itself dynamically without forgetting everything, e.g. adding an input parameter to an existing NN, or creating an entirely new NN and deciding how it will interact with the current infrastructure. I believe current models require a reset and need to retrain after something is fine-tuned instead of adjusting current beliefs.
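As a rough numpy sketch of the “add an input parameter to an existing NN” half of this, assuming the first layer is just a weight matrix: pad it with a near-zero column so the already-learned weights (and therefore existing behaviour) survive, and the new input only gains influence as it gets trained. The shapes and initialisation are assumptions for illustration.

```python
import numpy as np

# An already-trained first layer: 3 inputs -> 4 hidden units.
W1 = np.random.randn(4, 3)

def add_input(W, scale=1e-3):
    """Append a column of near-zero weights for a brand-new input feature.

    The old columns are untouched, so behaviour on the original inputs is
    unchanged; the new input starts with almost no influence.
    """
    new_col = scale * np.random.randn(W.shape[0], 1)
    return np.hstack([W, new_col])

W1 = add_input(W1)  # now 4 x 4: the existing knowledge survives the mutation
```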

Unlearning

The hardest part about self-mutating AI is unlearning what has been learned.

My initial thought on this was “override existing knowledge”. However, overriding would only work if the roads (i.e. the pathways) were still the same, but they already changed after the first training. The human brain doesn’t literally override; rather, neurons die if unused, when no electrical stimuli route through them. Just like the saying with muscles, “use it or lose it” holds true for the strength of synaptic connections in the brain.

In turn, it doesn’t make sense to override knowledge, since every interconnected neural pathway would need to be updated. This would be far too complex: if a single neuron is overridden, everything that connects to or depends on it needs to update as well.

And so what happens when the brain adapts?

We grow new neurons that then connect to existing ones when we learn and associate knowledge together. This is the essence of neural plasticity: the ability to reorganise one’s topology by forming new neural connections throughout life.
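As a toy analogue of “use it or lose it”, here is a hedged sketch that zeroes out connections whose recent activity falls below a threshold and occasionally sprouts a weak new one; the activity statistic, thresholds, and probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))        # connection strengths between 5 neurons
activity = rng.random((5, 5))      # how often each connection recently carried a signal

def plasticity_step(W, activity, prune_below=0.1, grow_prob=0.05):
    """Kill connections that are rarely used, occasionally sprout new weak ones."""
    W = np.where(activity < prune_below, 0.0, W)   # unused synapses die off
    sprout = rng.random(W.shape) < grow_prob
    return np.where((W == 0) & sprout, rng.normal(scale=0.01, size=W.shape), W)

W = plasticity_step(W, activity)
```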

To understand the bare-metal algorithm of unlearning we must study the brain and attempt to translate it into an algorithm, while maintaining all other knowledge.

Transfer Learning

Meta Learning

Meta-learning: the ability to learn how to learn. To be able to identify inputs, discover goals, form relationships between things that don’t necessarily correlate with each other via abstraction, and then self-mutate in order to create NNs based on those discoveries and update them when needed.

Modularised Interconnection

Connecting the lobes of the brain [temporal, parietal, occipital, frontal] to limbs of the body [arms, legs, head], and then to more specialised modules of those limbs, e.g. arm -> hand -> fingers, enables precise movement and dynamics that wouldn’t be available otherwise. The agent’s entire existence doesn’t depend on the arm, but it brings features and makes the agent better off than it would be without it.

NeuroEvolution of Augmenting Topologies (NEAT)

You may initially think of NeuroEvolution of Augmenting Topologies (NEAT), which is a terrific innovation! However, it only scratches the surface of modifying the intelligence. We need to be broader and enable a few things (sketched below):

  • Generation of new NNs
  • Interconnecting NNs: to name a few aspects [how they react and respond to each other’s outputs, where to intercept signal paths, when to amplify/reduce signals and by how much]
  • Developing new lobe modules that process a certain type of information: e.g. occipital, temporal, frontal, parietal lobes.
  • How the lobes work together, similar to the interconnection of the NNs
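To make the middle two bullets slightly more concrete, here is a hedged sketch of a mutation operator that grows a whole new module (a stand-in for a lobe) and wires it into an existing one with a weak gain; the `Module` class and the gain-based links are inventions for illustration, not a proposal for the real mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

class Module:
    """A tiny stand-in for one NN / lobe: a single weight matrix."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))

    def forward(self, x):
        return np.tanh(self.W @ x)

# Existing "brain": a vision-ish module feeding a decision-ish module.
modules = {"vision": Module(8, 4), "decision": Module(4, 2)}
links = {("vision", "decision"): 1.0}      # signal gains between modules

def mutate_add_module(name, n_in, n_out, source, gain=0.1):
    """Grow a brand-new module and connect it weakly to an existing one."""
    modules[name] = Module(n_in, n_out)
    links[(source, name)] = gain           # barely influences anything at first

mutate_add_module("memory", 4, 4, source="vision")
```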

How do humans relearn/override knowledge? When we are taught something in school and later discover it’s incorrect, what is the process for overriding the pre-existing information?

Goal Setting + Discovery

To extend, it’ll need to autonomously set goals. When set in an environment, what is the process for discovering a goal? For a human this might be eating healthy over a period of time to feel better than if you ate horribly.

Abstract Thinking

When we think of generalisation we think of adapting to uncorrelated situations, with or without using past experiences as some kind of reference point, e.g. when lifting a couch you remember the technique from deadlifting at the gym. The transferability of knowledge between vastly different domains is where we excel.

How do we correlate these two environmental actions though?

In addition, being able to think in abstraction enhances the whole simulation-and-understanding part, as it isn’t required to go through each complete execution to learn. It can derive understanding and form better ways to approach problems, and thus become more efficient. When thinking about the infinite complexities of something, we are able to strip away all the details and reason about the problem in a simpler sense, breaking it down into bite-size chunks that form the basis of all the idiosyncrasies of the underlying system.

Memory Recollection

Finally, a critical characteristic of humans is being able to recall moments from cues, or from plain mind-wandering, from a point in the past while simultaneously remembering the [array of emotional associations, significance of the event (e.g. tragedy, exciting time), how it changed you]. I imagine this would be ridiculously hard to implement, because: how are humans able to remember all these attributes, like a comp-sci mapping, while also remembering video/image segments of that space and time, all from a single cue?
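Purely to pin down what that “comp-sci mapping” might look like, a hedged sketch where a single cue pulls back an episode together with its associated attributes; the fields and cue matching are made up.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    cues: set               # smells, songs, places that can trigger recall
    emotions: list          # the array of emotional associations
    significance: str       # e.g. "tragedy", "exciting time"
    how_it_changed_you: str

memory = [
    Episode({"rain", "that song"}, ["nostalgia", "grief"], "tragedy", "more cautious"),
]

def recall(cue):
    """Return every episode a single cue can pull back up."""
    return [ep for ep in memory if cue in ep.cues]

print(recall("rain"))
```

The hard part the paragraph above points at is not this lookup, but retrieving the video/image segments alongside it from the same cue.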

I’m not yet sure how important memory recollection of an event is to updating the decision-making process - maybe there is something there?


Markov Property

“… if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.” - Wikipedia

If we depend only on the current state, then how can we make delayed-gratification decisions? E.g. not doom-scrolling on the phone, which rewards instant small amounts of dopamine, vs doing hard tasks for little dopamine now but gaining a lot more later on and feeling better about yourself.
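The standard answer in RL is not to abandon the Markov property but to discount future rewards, so a delayed payoff can still outweigh a stream of instant small ones. A toy comparison, with the reward numbers obviously made up:

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of rewards, each weighted by gamma^t (later rewards count for less)."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

doom_scroll = [2, 2, 2, 2, 0]    # instant small dopamine hits, nothing afterwards
hard_task   = [0, 0, 0, 0, 20]   # little now, a big payoff later

print(discounted_return(doom_scroll))  # ~6.9
print(discounted_return(hard_task))    # ~13.1 -> delayed gratification still wins
```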


Sequential vs Spatial


Philosophy

Thinking about “what makes a human?” is quite a fascinating topic. I’ve been binging Steven West’s podcast, Philosophize This!, which has shed light on very interesting points, particularly in the realm of consciousness. And so I’ve been pondering some points: what are [creativity, thought, consciousness] and how are they brought into existence?


Perceiving Information

We humans receive stimuli from many sources: [touch, sight, smell, taste, sense of space, emotion, ...]. The most useful one is sight. Although lifeforms like bats have adapted with echolocation, on the internet there is no sound for numbers, etc., unless we’re able to perceive binary code spoken as a language. It’s merely frontend and backend: data that is perceived emotionally or not. If there were no emotion, it would just be data. Advertisements would be useless. Seeing images or videos of family and friends wouldn’t remind you of anything.

And so, when put into an environment, we need to be able to identify parts of that environment. Human intuition is being able to abstractly identify objects in the environment, to instantly think of the fastest path through a maze, for example, or not go down the dark alleyway with hooded people holding guns. And so, how would an intelligence abstractly gain intuition without visualising the data? We already know computers can handle millions of dimensions of data, but what if they could also see it somehow? They might be able to order data in a way that helps them discover computationally faster algorithms and discover relationships between things we wouldn’t have been able to find without visualising it (at least in our lifetime).
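One mundane version of “letting it see the data” is projecting it down to a couple of dimensions that can actually be plotted or inspected. A quick numpy PCA sketch, with random data standing in for whatever the agent is actually looking at:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))     # 200 samples living in 50 dimensions

X_centred = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_centred, full_matrices=False)
X_2d = X_centred @ Vt[:2].T        # project onto the top 2 principal components

print(X_2d.shape)                  # (200, 2) -> something a human (or the agent) can look at
```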

The visualisation brings me to my next point. To have fully-autonomous agents that are able to self-mutate and generate new neurons, they need to be able to find what should be a parameter after having a goal. For example, take the NN below:

[Figure: a simple NN with a handful of human-chosen inputs for predicting a house price]

The human gave the NN the inputs but didn’t take into consideration everything important (the state and future of the country (sentiment analysis), distance to essential infrastructure, current market conditions, etc.). This would only be fixed if the human accounted for everything, or if the AI were able to think of what would impact the price of the house. The best way to learn about all of this would be to identify relationships abstractly between the attributes, e.g. the state and future of the country and housing prices. Maybe the two topics don’t have any direct links of reference, and so it would need to contextualise the correlation through abstract thought rather than logical referencing. This idea seems like the core factor behind generalisation: establishing relationships between things that don’t directly refer, or even relate, to each other at the fundamental level.

Now this begs the question: how do you know whether a piece of information should be an input or disregarded? And when met with an environment that has limitless possibilities and combinations, such as Earth, what is the factor that allows us to filter out the relevant things? Something to think about. I guess for specialised programs that have a limited number of actions, there are your inputs, but when things become state-dependent that question reveals itself again: how do you filter relevant information in a limitless, state-dependent environment, especially when unrelated things can assist in understanding in an abstract sense?

What if we didn’t know the maximum reward possible for a goal? How can we then apply gradient descent (cost = 0.5 * (predicted - actual)^2) if we don’t have the actual value? It doesn’t make sense to know what would happen when inside a black box. Sure, we can guess the actual outcome, but on what grounds? What reasoning is there to get to that conclusion? Or what if the goal was a boolean and gradient descent isn’t applicable? E.g. is a chair conscious or not - or maybe everything is measurable?
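For reference, here is the gradient step that cost function implies, and the exact spot where it breaks when there is no “actual” to compare against; the numbers are invented:

```python
def cost(predicted, actual):
    return 0.5 * (predicted - actual) ** 2

def gradient_step(predicted, actual, lr=0.1):
    grad = predicted - actual        # d(cost)/d(predicted)
    return predicted - lr * grad

predicted, actual = 3.0, 10.0
for _ in range(5):
    predicted = gradient_step(predicted, actual)
print(predicted)                     # creeps towards 10.0

# If `actual` is unknown (a black box, or a yes/no question like the chair),
# `grad` can't be computed at all -- which is exactly the problem raised above.
```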


Removal of hyperparameters

Hardcoded hyperparameters, e.g. [learning rate, gradient step size, etc], seem to be the “bias” humans apply. However, why don’t we allow the autonomous agent to discover the correct hyperparams? It would check if the default of 1 is enough or not and adjust based on that.
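A crude sketch of “start at a default of 1 and adjust”: shrink the learning rate whenever a step makes the loss worse, grow it a little when it helps. The update rule and constants are arbitrary choices for illustration, not a recommendation.

```python
def loss(w):
    return (w - 4.0) ** 2          # toy objective with its minimum at w = 4

w, lr = 0.0, 1.0                   # learning rate starts at the default of 1
for _ in range(50):
    grad = 2 * (w - 4.0)
    candidate = w - lr * grad
    if loss(candidate) < loss(w):
        w, lr = candidate, lr * 1.1    # the step helped: keep it and get bolder
    else:
        lr *= 0.5                      # the step hurt: be more cautious

print(w, lr)                       # w ends up at 4.0 without a hand-tuned learning rate
```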


Activation Functions

Activation functions seem primitive too. Every neuron is unique, and each one would have its own unique activation function. Sure, we can use one layer with a sigmoid and another with a ReLU, but when we think of neurons, each one would have something similar to the ReLU (where it only activates beyond 0). So what would change with the ReLU? Instead of going linearly diagonal it would have small changes in the line - maybe linear, exponential, curving slightly at the start, etc. The AI’s job would be to determine what this activation fn should be and mutate them. Neurons die out when unused and change over time, e.g. when a neuron connects to another while learning something new, so it makes sense that the activation would change.
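One way to make “every neuron has its own activation” concrete is to give each neuron a couple of mutable parameters, say a slope and a small curvature term on top of the ReLU’s fire-above-zero behaviour. This parameterisation is just an illustration; whether it buys anything is exactly the open question.

```python
import numpy as np

rng = np.random.default_rng(3)

class PerNeuronActivation:
    """Each neuron gets its own slope and curvature, and both can mutate."""
    def __init__(self, n_neurons):
        self.slope = np.ones(n_neurons)         # starts out ReLU-like
        self.curve = np.zeros(n_neurons)

    def __call__(self, z):
        shaped = self.slope * z + self.curve * z**2
        return np.where(z > 0, shaped, 0.0)     # still only activates beyond 0

    def mutate(self, scale=0.01):
        self.slope += scale * rng.normal(size=self.slope.shape)
        self.curve += scale * rng.normal(size=self.curve.shape)

act = PerNeuronActivation(4)
act.mutate()
print(act(np.array([-1.0, 0.5, 1.0, 2.0])))
```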


Reasoning

Context-dependent decision-making comes down to adaptability given a unique environment. When walking into an alleyway at night vs during broad daylight, there’s an obvious reasoning behind executing the decision to go down that path or not. Even when it’s your first time, the sense of night-time, people in an area with 2 exit points, easily surroundable with no escape, just makes it intuitive not to go down.

Decision making comes with 2 ways of considering the matter:

  • objective: e.g. is a cookie or a salad better for my health?
  • subjective: considering factors such as what mood you’re in, the occasion, whether it’s a reward for further motivation, etc., alters the decision.

In the world of exploit development:

  • objective: make the most money, in the shortest path
  • subjective: delaying a fn call over another one produces a better result down the line due to context, sacrificing money now to make more later

Q-learning is very interesting. A model-free reinforcement learning algo that learns the quality/value of an action in a particular state.
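For concreteness, the tabular update Q-learning revolves around; the states, actions, and reward here are placeholders:

```python
from collections import defaultdict

Q = defaultdict(float)               # (state, action) -> learned quality/value
alpha, gamma = 0.1, 0.9              # learning rate and discount factor

def q_update(state, action, reward, next_state, actions):
    """Nudge Q(s, a) towards reward + discounted value of the best next action."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

q_update("ledge", "jump", reward=1.0, next_state="platform", actions=["jump", "run"])
```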

Can this be applied to use past learning of unique environments to newer ones?

Well, no. Inherently it would be perfect for Pokémon, with pre-established actions that are available to take in a constant environment. But when the env changes each time, it would classify a previous action as higher quality when in actuality it’s not.

Adaptability

Current ML models lack the ability to adapt by self-mutating. The restructuring of NN inputs, layers, and models is not up to par with the continuous adaptation of the human brain.

Why is this?

When we want to modify something / add an input we need to retrain the model, right?

We need to be able to modify:

  • NN neurons + input params
  • how one NN influences another:
    • e.g. visually processing all protocols, their functions, and how we can order them based on their stateful causations
    • when NN1 can interrupt NN2 mid-computation: how can we restart that electrical signal or trigger a bi-directional playback?
  • lobe architecture
    • add [temporal] to [occipital + frontal]
    • creating self-labeled patterns and a classification of sequences that are similar to those patterns and identifying what deviates.

When restarting after modifying, the context isn’t persistent. How can we insert new information to intertwine with existing NNs without a hard reset?

Artificial Dendrites

When thinking about an artificial neuron, or rather a soma, we don’t take into consideration the dendrite part of the neuron: the branches that interconnect with other somas, via synapses, to enable communication between them. The dendritic arm can reach out to neighbouring somas and beyond to establish new connections and ultimately form new understanding. And it makes sense that a dendrite could mutate and be the determining factor of signal amplification.

Decision Making In Other Lifeforms

Fungi

Mycorrhizal networks - fungi transfer nutrients and info between plants. In response to environmental cues, they can make decisions about resource allocation and distribution within the network.

Fungal hyphal growth and nutrient foraging: filamentous fungi, such as the model organism Neurospora crassa, exhibit complex decision-making during hyphal growth and nutrient foraging. Fungi must navigate through the substrate to efficiently access nutrients. This involves sensing the environment, responding to gradients of nutrients, and adjusting growth patterns accordingly. We can use this for getting to a goal.
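A toy version of “sense the gradient, adjust growth towards it”: an agent on a grid that always steps to whichever neighbouring cell holds more nutrient, loosely like a hypha following a concentration gradient. The nutrient field and the greedy step rule are invented for illustration (and, like real greedy search, it can get stuck on a local peak).

```python
import numpy as np

rng = np.random.default_rng(7)
nutrients = rng.random((10, 10))     # made-up nutrient concentration field
pos = (0, 0)

def step_towards_nutrients(pos):
    """Move to the richest neighbouring cell, like a hypha following a gradient."""
    r, c = pos
    neighbours = [(r + dr, c + dc)
                  for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                  if 0 <= r + dr < 10 and 0 <= c + dc < 10]
    return max(neighbours, key=lambda p: nutrients[p])

for _ in range(5):
    pos = step_towards_nutrients(pos)
print(pos)
```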

Notes

The goal is to be able to create an action given a situation.

  • decision making hierarchy: prioritisation of decisions based on immediate needs for survival.
  • imagination: we can simulate the environment - I’m not sure how much computation that is

When we think about mimicking the brain, the first thing we think about is modularisation of different models working together to come to an output. This is referencing ensemble methods. Great thinking! The brain however is much more complex than simply modularising and combining, yet this is definitely a step further into the realm of the brain.

The brain is more complex since it

There have got to be NNs that have some of the exact same inputs as other NNs, just with a few other params, since no neuron is exactly the same. They must work together to get to a point, and so it would be more like macro layers (lobes, etc), and within each lobe there are layers of slightly modified neurons or dendrites that process all the information: e.g. body temp, what objects are in front of you -> what specific objects, where, how far roughly, dangerous, etc.


  • infinite search space
  • unique environment every time:
    • how do you learn from decisions in completely unique environments and apply them to another unique environment?
      • how do humans do it? Technically everything is a completely unique environment. We “fill in the gaps” or derive actions from “similar” states. E.g. feel sick -> go to the doctor. It may not be the same sickness, but there’s a strategy there of initially going to the doctor.
  • stateful environment

Q* refers to the optimal action-value function. Finding Q* involves training an agent to take actions that maximise its cumulative reward given its environment.
