Self-Mutating AI
Emergent AI
import MailingList from "@/components/blog/NewsletterForm";
import Latex from "@/components/Latex";
Foreword
Creativity often comes when you take an idea from its original context and move it somewhere else.
This is an ongoing project I'm trying to solve. It seems that if one can crack artificial evolution, then that single invention can go on to invent everything else through emergence. The backstory is that I wanted to build AGI. Woah, brand new idea, monkey neuron triggered. In all seriousness I've always been an automation maxi.
Starting from cybersec, creating an exploit tool, I was introduced to AI cybersec. Then I thought, "why is all AI so specialised?". Then I looked into cognitive science and neuroscience to find out how the brain works in order to replicate it, later realising that it's far too complex and the research has too many difficult unanswered questions. I thought "do I even have enough time in my life to 1. learn how the brain actually works and then 2. create an artificial version of it? fuck no lmao". What was the alternative? Evolution. How did the brain even form? Evolution. What is the requirement to build it? A single algo. Instead of inventing every part of the brain and then connecting them all to work in conjunction with each other, build the algo that builds everything else!
And here we are. This is how I believe AGI will be achieved, or even something much more powerful, beyond comprehension. This has been an endeavour I've been on and off for around 9-ish months bc I got unaligned from chasing money w/ a HFT job, but we're back! Fully dedicated to kicking out a prototype of this idea.
Pitch Deck
A pitch is to get you excited, to make people want to learn more.
No investment is actually needed. Growth can start on day 1. The investment accelerates that growth.
- Grow
- Build momentum / FOMO
Logo And Motto
DeGaldo Corp
Vision
Problem
Solution
Timing
It's recently become glaringly obvious that AI is at the cusp of revolutionising humanity more than anything in history ever has. We currently see products like OpenAI's ChatGPT be incredible tools that have impacted every field of work.
But there are big roadblocking factors that prevent these models from getting to AGI status:
- Hallucinations (incorrect and misleading data w/o warning of doubt)
- Narrow intelligence: designed for specific domains/niches w/o transferability
- Worse than humans for general-purpose tasks
- No conscious reasoning
- Hardware computational bottlenecks
- Struggles in blackboxes
- No change without training reset
All of these things are caused by being too narrow-minded. It's difficult to think outside of the norm, which is why mutation AI is the answer.
When we think of AI we picture prompting the AI to do something. But what if it prompted you? Instead of being limited to the minds of humans, we unshackle AI to go beyond collective human thinking. The AI will observe and tell businesses what actions to take to improve, instead of being prompted with questions (the limiting factor) and being completely underutilised!
Inventions are limited by a single thing: intelligence. The creation of fire, tools, cars, medicine are all discoveries based off years of fine-tuning our own intelligence and curiosity. Imagine breaking out of that realm into the incomprehensible. Allowing something infinitely scalable to recursively evolve and emerge as a new species. How do you think the world will change? Do you want to be a part of that world or watch it go by?
Positioning (Product)
Competition
Business Model (How It’ll Make Money)
- What the revenue drivers are, their costs, and how they convert into revenue
- What causes growth
Market (TAM, SAM, SUM)
Just as cars and the iPhone have become essential for living and quality of life, robotic AI will be essential for carrying out the tasks humans do, removing them from the job market.
- A good chunk of money goes towards growth, so it tells investors what you will do with their money
- The go-to-market slide explains how the company will go from where it is now to there
- It should center around 1 core growth strategy, where you can show good unit economics and show that you understand them
- TAM: big mistake, don't say the market is huge. Go bottom up. Document assumptions and show how my TAM comes together
- TAM is not: the size of the problem, or the wrong market size
- TAM = # of customers * your price
- Where are your customers? The market you're addressing? Global
- Price? What are they paying for? How do I know my price is accurate?
- You can't use a competitor's price, you've got to use your own.
- Document where the pricing comes from and where customers come from in a footnote.
Team
- Core skills to make first million dollars
- What is the potential of the company and the people? I need to be an electrical engineer and a mathematician
Can the core team (me):
- build a basic MVP?
- Measure its performance?
- Learn from those metrics?
Roadmap
Off-topic thoughts
- The best way to predict the future is to create it. Taking action increases your odds at success. Architect your fate. Conquer your domain. Question everything. Reject most advice. Most of it is situational anyway and you're the only one that knows your situation the best. Focus on what you're curious about. Acquire assets that allow you to say "no".
Thoughts
- I wonder how it would go on to build its own language
- Artificial prey and predator to co-evolve, like the immune system + viruses, bacteria, etc. How do you make sure one doesn't become superior and wipe out the other? What are they fighting for?
Evolution Thoughts
- "Machines will have superior consciousness to our own" - Ray Kurzweil; The Age Of Spiritual Machines
- We will never understand consciousness or AI, so it has to build itself; this is why evolution was the single algorithm that created all these complex entities on Earth.
- Machines will observe humans and either avoid hunting us or treat us as animals; morals.
- If evolving meant going through each phenotype then it would die before it finds the perfect mutation.
- Addressing the explore/exploit dilemma would be: explore mutating new parts vs exploit mutating existing linearly dependent items
- Artificial evolution is a scarce, constrained environment, bc computational time is a global universal constraint, so we can use the principles of microeconomics to make the best decision possible to maximise utility (would evolving this function over another be most useful?) in favour of opportunity cost.
- Having a simple single algo for mutation would be better than a conditional one, maybe? If we were to make it completely random, maybe the parts that mutate more than others sit at odd numbers vs even numbers, and we have some expression that has a higher probability of landing on an odd number. The edge case w/ a simple algo is that the larger the entity gets, the less chance there is to mutate smaller numbers; truly random would be quite hard. The ability to skew towards, say, odd numbers would also allow vital infrastructure to not mutate for longer, e.g. the heart on an even sequence.
- Put the AI in a robotic body and let it program all the parts of the robot itself? You just need to make the body not shit initially. Then it can recursively evolve itself.
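The skew idea above (vital infrastructure mutating less often) can be sketched as a weighted gene picker. Everything here is a made-up illustration, not a design decision: the genome list, the weight of 0.05, and treating index 0 as "the heart" are all assumptions.

```python
import random

def pick_gene_to_mutate(genome, vital=frozenset()):
    """Pick a gene index to mutate, skewing probability away from vital
    infrastructure; vital genes can still mutate, just far less often."""
    weights = [0.05 if i in vital else 1.0 for i in range(len(genome))]
    return random.choices(range(len(genome)), weights=weights, k=1)[0]

# Made-up genome: index 0 plays the role of the heart.
genome = ["heart", "claw", "fin", "lung", "eye"]
counts = {i: 0 for i in range(len(genome))}
for _ in range(10_000):
    counts[pick_gene_to_mutate(genome, vital={0})] += 1

assert counts[0] < counts[1]  # the vital gene mutates much more rarely
```

This keeps the algorithm itself simple and unconditional (one sampling rule), while the skew lives in the data, which is closer to the "single simple algo" preference than hard-coded conditionals.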
Research
Autopoiesis (auto = self, poiesis = creation, production) refers to a system that is capable of producing and maintaining itself by creating its own parts; a network of processes of production.
An autopoietic system is not the same as self-organisation, because if the organisation of a thing changes, the thing changes.
From an autopoietic system, abiogenesis emerges, where life arises from increasing complexity.
"The suggestion is that the maintenance process, to be cognitive, involves readjustment of the internal workings of the system in some metabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition. Thompson wrote that this distinction may or may not be fruitful, but what matters is that living systems involve autopoiesis and (if it is necessary to add this point) cognition as well."
"Zoologist and philosopher Donna Haraway also criticizes the usage of the term, arguing that 'nothing makes itself; nothing is really autopoietic or self-organizing', and suggests the use of sympoiesis, meaning 'making-with', instead."
Poiesis is the process of emergence of something that did not previously exist; it can be summarised as a "bringing forth". Etymology: "to make"; hematopoiesis: the formation of blood cells.
Semiosis: the foundation of the production of meaning.
Practopoiesis provides a comprehensive explanation for the origin of intelligence, suggesting that it arises from the hierarchical organization of adaptive mechanisms.
Neuroevolution is AI that uses evolutionary algos to generate NNs, parameters and rules — extended to artificial embryogeny.
Architecture
The whole premise of the architecture is to have a system co-evolve with another. Why? Abiding by the law of polarity we need a yin to a yang, a no to a yes, hot to cold, and in terms of systems, an executor and a regulator. Over time these systems will get more complex when dealing with each other, and eventually something fascinating will emerge, like evolution's creations.
- Adaption decision maker
- Practopoiesis, hierarchy check system for mutation
- Homeostatic processes simulation
- Executor for action:
- how much to mutate
- what to mutate: all the building blocks that make up the system, so every piece of code — learns patterns to build functions after brute forcing for a while, etc
- when does it explore with new code techniques vs exploit current learnings?
- maybe have the co-evolution of both in parallel and attempt to learn when it's best to do one
- Regulator for action: when to stop mutation
- Executor for action:
- Feedback system
- Was mutation good? What went wrong?
- How can we avoid it, or try again but modified?
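A toy sketch of the executor/regulator loop above, with a mutate-evaluate-feedback cycle. Every name here is an assumption: the `system` dictionary, the Gaussian perturbation, and the toy fitness function all stand in for the real co-evolving components.

```python
import random

def executor(system):
    """Propose a mutation: perturb one randomly chosen parameter."""
    mutated = dict(system)
    key = random.choice(list(mutated))
    mutated[key] += random.gauss(0, 0.1)
    return mutated

def regulator(old_fitness, new_fitness):
    """Decide when to stop a mutation: reject anything that made things worse."""
    return new_fitness >= old_fitness

def fitness(system):
    """Toy stand-in for the feedback system: parameters should approach 1.0."""
    return -sum((v - 1.0) ** 2 for v in system.values())

system = {"a": 0.0, "b": 0.0}
for _ in range(2000):
    candidate = executor(system)
    if regulator(fitness(system), fitness(candidate)):
        system = candidate  # mutation kept
    # else: the regulator stops the mutation and the old state survives
```

This is just a (1+1)-style hill climber; the interesting part of the real design is that the executor and regulator would themselves be subject to mutation and co-evolve.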
Thoughts
After reading about qualia, the subjective conscious experience we have, it makes me wonder whether I should look into the subconscious and conscious mind. Of course, philosophy is great for understanding how people think about metaphysics and reasoning, but to create artificial life there needs to be qualia too (?).
In terms of philosophy, I think I'll read Hegel's work, since he "is the ultimate thinker of autopoiesis", as quoted from philosopher Slavoj Žižek.
The deeper I look into self-assembly systems the more topics I discover — from information theory to cybernetics to autopoiesis and systems theory, as of 01/08/24. What is next?
Enactive cognition seems very aligned with what I believe and resonates highly with Darwinism (despite not being formalised). Thinking of enactivism in society right now, we see people adapting to their environments and learning the niche(s) that appear, i.e., in a poor neighbourhood with no opportunities and crime, a person may adapt to selling drugs and learning the ins-and-outs of gang life, whereas a white-collar family with a family-owned business adapts to the niche of management and running said business. It's the adaptation to the environment that creates cognition. When we go to space and start a civilisation there, those humans will be much different to us bc the environment is dramatically different.
Enactive interfaces are interactive systems that allow organization and transmission of knowledge obtained through action, e.g., sound via ear, geometric from eyes and haptic through tactile. It’s enactive bc it’s a feedback loop in which the system response is decided by the user input, and the user input is driven by the perceived system responses.
The entropy of an isolated system left to evolution cannot decrease, i.e., “the state of a natural system itself can be reversed, but not without increasing the entropy of the system’s surroundings, that is, both the state of the system plus the state of its surroundings cannot be together, fully reversed, without implying the destruction of entropy”. This would mean that complexity only grows from chaos infinitely unless destroyed completely.
To build a system that can mutate and then self-correct, it needs regulators: one of the most important abilities for an adaptive system is avoiding chaos, or heading to the edge of chaos w/o going further. Physics has shown that the edge of chaos is the optimal setting for control of a system.
"Practopoiesis challenges current neuroscience doctrine by asserting that mental operations primarily occur at the homeostatic, anapoietic level (iii) — i.e., that minds and thought emerge from fast homeostatic mechanisms poietically controlling the cell function. This contrasts the widespread assumption that thinking is synonymous with computations executed at the level of neural activity (i.e., with the 'final cell function' at level iv).
Sharov proposed that only Eukaryote cells can achieve all four levels of organization."
Subconscious
Storage Of knowledge
For anything to be intelligent it needs a way to store knowledge and understand the components of a multi-modal thing.
Store knowledge in terms of learning rules instead of things, bc then you can generalise, which is what gives creativity. Humans have 2 learning rules: slow and fast learning; long and short term. All learning is about feedback: the faster and more correct the feedback, the faster you can learn.
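The fast/slow learning rule idea can be sketched as two feedback updates with different learning rates. The rates here are arbitrary assumptions; the point is only that the same feedback stream produces a quick-adapting short-term estimate and a stable long-term one.

```python
def update(estimate, signal, lr):
    """One feedback step: move the estimate toward the signal."""
    return estimate + lr * (signal - estimate)

# Two learning rules fed by the same feedback stream (rates are made up):
fast, slow = 0.0, 0.0
for signal in [1.0] * 20:
    fast = update(fast, signal, lr=0.5)   # short-term: quick to adapt, quick to forget
    slow = update(slow, signal, lr=0.05)  # long-term: slow but stable

assert fast > slow  # the fast rule has adapted further after the same feedback
```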
What if we can create a machine genome?
Why
I've been struggling with what I want to do for the rest of my life specifically, however I keep coming back to the interesting opportunity presented by artificial life. The military sure is great, but why dedicate your life to protecting, controlling and attacking — it feels like the stock market but for land and nations. Medicine is a sure winner, but immediate feedback for experimentation with biology and chemistry is far-fetched, and it's not applicable if I need to do some contracting for money. Space exploration requires insane capital and, again, is not very easy to iterate on in your basement. All of these fields really interest me. The one thing that encapsulates all of them is AI. Once discovered, artificial life (intelligent robots) will revolutionise everything into the utopia we've all dreamed of (or dystopia), breaking the intelligence barrier that's held humanity back in all fields, the reason we haven't gone to space or cured death. And so, I believe artificial life is the foundational technology that can encompass everything I want to do at once.
To Look Into
- Differential equations + differential geometry
- Complex systems theory: network theory (analyzes the connections and interactions between different components or agents in the system)
- Topological Data Analysis (TDA): used to study the shape and features of data, which can be useful for understanding the high-dimensional fitness landscapes in evolutionary algorithms
- Ecology
- Evolutionary theory
- Control theory
- Organizational theory
- Information theory
- Graph theory
- Cognitive science
- Differential geometry
- Continental philosophy
- Decision theory
- Fractal Information Theory: too computationally heavy
Ideas
- How does the network reprogram functions and the outcome of the function while (1) not destroying the system entirely, and (2) maintaining minimal intended functionality after upgrading or downgrading? (06/08/24)
- How do you assess the fitness of a change (repair, mutation, downgrade, upgrade)?
- Without competition why would it evolve? There's no reason. It's computationally expensive; you need to incentivise it with an external thing. Sharks don't evolve bc they're the apex predator
- Red Queen effect. If you stop evolving you will die. Input to keep going if it's self-governing, and input to push the direction in which mutation occurs
- You need to have a goal otherwise it won't know which direction to mutate. Dye in water will diffuse; without any steer it won't become anything interesting. Maybe we can create an artificial alternative to Maslow's hierarchy of needs.
- "If you have authentic memories you create authentic human responses" - Blade Runner 2049
- Evolution algorithm bc it builds everything else. Single cells begin to specialise and co-evolve; single cells specialise and work together.
- Maxwell's demon - philosophy book. The Theory of Heat - Maxwell. The Self-Aware Universe.
- Co-evolve offense and defence models that compete with each other. All AI is problem specific. Without any competition, why would something need to evolve? You either bake in internal resource management, like survival games: hunger, thirst, the need to gather resources to build houses, etc. And then you're able to create incentives for it to keep getting better. Competitors are more of an accelerator to evolve, but as long as native survival requirements remain then evolution can still occur.
- DNA in bacteria extended in length and complexity — how do you apply this same concept to NNs?
- Invent the answer instead of finding it. We don't have the answers and probably won't for decades. AI will enable these answers, but everything should be inspiration to create a system.
- Iteration is key in AI atm, so have something that is able to iterate on its own! (use this on pitch)
- How do you determine what incentive the system has to evolve and continuously grow? How do we represent this in its internal system? How do you evaluate whether a change was beneficial?
- Define real low-level matrix multiplication: like a digital genome. A sequence of digital genes that represent a set of operations (addition, multiplication, etc.) that could define the neural network. The digital genome contains all the potential low-level matrix operations you can do; certain genes turn them on or off, like regulators. Instead of building proteins you build matrix multiplications. So if we had an entire set of possible options `O = { a, .., z, 0, 1, .., 9, !, .., _ }` that represent what characters are able to be used in programming the neural net, then we can create a digital genome which holds a sequence of instructions that another program will read and execute to develop the underlying neural net. Then each row in the matrix represents a function. We would need a few special syntactical commands for
- line breaks
- variable representation
- function start, var input and type, return var output and type, function end
- but since a neural net's most efficient way to learn is through backpropagation and assigning weights to connections, we need some syntax to represent this so that these connections can be mutated
- on a different note, how would you mutate these weight connections? For instance `A -> B` becomes `A -> C`, like the neural plasticity of a neuron taking over the function of another one that just died — I guess the new neuron has to learn it from scratch but knows it's been assigned to do the old thing.
- installing packages and yoinking their code to interact with APIs
The matrix genome would look something like this, where `)` represents the end of the function inputs and everything following it represents the body of the function, and `f0` represents row 0's function to execute.
As I write this 6hrs into my 20hr plane ride, I realise that this, and the human genome, is really just a compiled program, and that to build it in an automated fashion would be to create an automated compiler and executor. Luckily, this is something I'm very familiar with from my background in automated exploit generation! Looks like my career path was quite sequentially relevant to AI in retrospect. I think, though, with my naive understanding of neural nets, that the concept of "creating code" maybe isn't how neural nets are formed; they're instead just completely mathematical weights and connections to the previous and next layer of neurons to share their weights w/. I would need to create a NN from scratch to truly understand how I can automate it. However, this is an interesting concept, for now.
But then we may ask: how do we allow functions to cross-reference each other? Maybe we can have a new syntax such as `@n`, where `n` is the row number being referenced. When it is run, it calls the function and then returns whatever values are given. The program creating the code from our genome will need to create a custom decoder for said functions too — that is, if it's an unbounded array of tuple options then it will need to account for infinite options. Maybe we even add mutation hotspots: certain regions of the genome are more prone to mutations than others due to structural features or sequence context.
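As a sanity check of the decoding idea, here's a toy Python decoder for a row-per-function genome with the `@n` cross-reference syntax. The token format (`*2`, `+3`) is an invented stand-in for the real operation set, not a proposal for the actual genome language.

```python
def decode(genome):
    """Build one callable per genome row; '@n' tokens call row n."""
    def make(row_tokens):
        def fn(x):
            for tok in row_tokens:
                if tok.startswith("@"):    # cross-reference another row's function
                    x = funcs[int(tok[1:])](x)
                elif tok.startswith("+"):  # toy op: add a constant
                    x += float(tok[1:])
                elif tok.startswith("*"):  # toy op: multiply by a constant
                    x *= float(tok[1:])
            return x
        return fn
    funcs = [make(row) for row in genome]
    return funcs

# f0 doubles its input; f1 adds 3 and then cross-references f0:
# f1(2) = (2 + 3) * 2 = 10
genome = [["*2"], ["+3", "@0"]]
funcs = decode(genome)
assert funcs[1](2) == 10
```

Because the genome is just a list of token lists, mutation becomes plain data manipulation (swap a token, change a constant, repoint an `@n` reference), which is exactly the property the genome idea is after.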
Invoking Evolution
Evolution is caused by natural selection, which is driven by the environmental pressure of competing variation fighting for resources.
Evolution is an optimiser, passing on the inheritance of what worked via "survival of the fittest" for performing a task, where the task can alter.
Thinking about the scenario of an AI, it:
- Doesn’t need water or food
- Has no internal decay; lives forever w/ no requirements
- Has no selective pressures (environmental changes, competition for resources, or predation; things that favour certain traits over others), and thus no reason to evolve from its current state
And so there is no reason for it to evolve. The question arises: how do you artificially invoke evolution?
Initially I thought of abiding by the law of polarities: creating a predator to co-exist with the original model, forcing adaptation to "survive". This assumes there is something to compete for, which would mean there needs to be an internal resource decay — maybe, similar to a game like Rust, we can build in a starvation mechanism. But the true value multiplayer games have is the want for power: dominating other players, building a base, gathering resources, raiding others. How can we mimic such an emotional desire? I have no idea.
Maybe we have to look elsewhere for creating evolution. How can we create environmental pressures internally to the model, i.e., starvation, that can be gained externally, i.e., optimising to hunt?
Lets think about this again…
Evolution doesn't have a goal, no predetermined destination. It's the accumulated random changes that survived.
Artificial Big Bang
The rationale behind mutation-based AI is to invent a single algorithm that is able to invent everything else through emergence. We, humans, don't fully understand consciousness and even have trouble defining it. But since all the life we know comes from mutations over generations, we can assume that over billions of years of optimisation, for some reason, we are lucky enough to be at the peak of evolution w/ consciousness, or at least what we believe "consciousness" to be from our perception. I assume it would get more complex down the line and incomprehensible to our current minds. What if we could create something that accelerates this evolutionary process, and this incomprehensible consciousness emerges? Our brains are multi-dimensional mathematical processors. What makes a machine different? What is the dictator of "consciousness" or "subconsciousness"? There is the statement that "it can fake consciousness", but I think it really is a spectrum of self-awareness of the environment and itself. I imagine when this algorithm is built and applied to robotics it will be a brand new species, similar to those in the movie The Creator. Then the age of the artificial big bang will occur.
Mutation
Rates
It's obvious, when reminding ourselves of the common phrase "if it works don't touch it", that mutation shouldn't occur. However, to double down on a winning bet to get even further ahead, it also makes sense to experimentally hone in on what works, e.g., sharper teeth, stronger fins, larger lungs, etc. Determining what to specifically mutate over others seems to be the challenge. If we have a working heart, does fucking around with it make logical sense if it means there is a high chance of destroying the system? Would it be smarter to continue experimenting with less critical infrastructure such as claws, or adding a new function where there is way more upside than not? There is obviously a priority of mutation potential in the genetic code.
Since mutation changes its own DNA sequence, it will also change its own mutation rate! Those sequences w/ higher mutation rates will eliminate themselves in favour of sequences w/ lower mutability; however, purifying selection constrains changes in mutable sequences, causing higher rates of mutation.
I propose a somewhat simple infrastructure: mutation potential, the chance of being mutated.
If we mutated a finger topology then the bones and hand would need to be adjusted. If that doesn’t work we would need to adjust the arm. How do we know when to stop?
- If we predict the results would outweigh the computation required.
- Whether it changes vital infrastructure (low mutation scores as they mean higher chance of death if changed)
How do we adjust mutation rates after a mutation? We'd have to account for the fact that a newly changed thing and its dependencies are prone to being mutated again, e.g., for a hand to be morphed its bones, wrist, arm, etc need to change with it over time, or becoming water-adapted as an air breather would mean first increasing lung size and then transitioning into gills.
How often to mutate also has to be accounted for. How long should something work fine until the next iteration? How do we determine when to make a significant transitionary leap versus a small iteration?
Computational Costs
Maybe we can incorporate an approximate computational cost for building a function out (iterating until it works), the cost of repair when it fails to work, the estimated improvement in capabilities (both immediate and for the future, such as enabling more beneficial mutations down the road, a launchpad per se), and an experimental-priority hyperparam(?) (bias towards things that should be mutated, e.g., abilities over working infra such as a heart); maybe these factors deter it from modifying some things over others. This will cause some decision making towards what should be mutated based off tradeoffs and incentives — bc there is also a time component, computationally long and short mutations.
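The tradeoff above can be phrased as a rough expected-utility score. All the inputs are estimates produced elsewhere in the system, and the formula itself is just one plausible choice, not a derived result.

```python
def mutation_score(est_gain, build_cost, repair_cost, fail_prob, priority=1.0):
    """Rough expected utility of attempting a mutation: expected capability
    gain minus expected computational cost, scaled by experimental priority."""
    expected_cost = build_cost + fail_prob * repair_cost
    return priority * ((1 - fail_prob) * est_gain - expected_cost)

# Fucking around with the heart: high repair cost, high failure odds, low priority.
heart = mutation_score(est_gain=5, build_cost=2, repair_cost=50, fail_prob=0.6, priority=0.2)
# Tweaking a claw: cheap to build, cheap to repair, encouraged experimentation.
claw = mutation_score(est_gain=3, build_cost=1, repair_cost=2, fail_prob=0.2, priority=1.0)
assert claw > heart
```

Ranking candidate mutations by this score is exactly the microeconomic opportunity-cost framing from earlier: compute is the scarce resource, so spend it where the expected return is highest.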
What To Mutate?
I initially thought it would be binary code, 011101010111, but now I think I need to understand how neural nets work in order to see if I really just need to mutate mathematical expressions.
Addressing The Hard Issues
Continuous Learning
As a model learns new things and adjusts its weights, the old memory of expertise decays bc the weights have adjusted to account for the new thing — it's like forgetting the old to remember the new. Also, if the model isn't continuously learning and adapting, then the old dataset it was trained on will quickly become out-dated and irrelevant to the evolving dynamics of the thing it's attempting to work with, for example a customer's preferences as trends change and new tech comes out. The world is naturally dynamic, irrational and non-linear.
What’s a way to solve this?
First, we have to establish the core issue: continuous progress without resetting.
This applies to both growing intelligence and evolution. There needs to be an ongoing process of progress, from learning something new and adapting all current beliefs or adding onto current beliefs, to growing a new leg or optimising a wing to fly faster and/or longer.
Naturally, addition with preservation comes to mind. The problem is that when we add a new point neuron and layer, it interferes with current progress (catastrophic forgetting), since all weights need to account for such an addition. What if it creates its own branch that doesn't interfere with the old, but can use potential crossing-over to experiment with what could occur without resetting progress?
The cross-over experimentation would act as a way to transfer learning by allowing the original branches to be referenced by the new in a one-directional relationship, preventing contamination of the original source.
A new problem arises: how do we deal with the removal of inferior neurons and layers w/o removing the ones we care about?
The reason this is a problem is bc we may think a path is not useful but it actually is useful in another activation scenario’s pathway.
For example, the green pathway is the best option for scenario 1; however, it relies on an inferior branch, bc removing it will impact whether the top point neuron actually is superior to the middle one.
So how do we account for the bottom weight while slimming down the computational complexity?
Thinking about how human neurons work when a portion of the brain is removed: remaining neurons drop their functions to replace the essentials of what is missing OR enhance current features.
- Replacement: this would require the remaining 2 branches to drop old weights and generate new ones, which would ruin the progress. I'm sure there is a solution out there to this
Finally, this digitalised version of gain-of-function, however, brings a lot of overhead. Though if it works then we can optimise it! An idea to address the overhead and complexity is that we could let the new co-exist w/ the originals for `n` time, delete the underperforming branches and keep the better one. This would limit the potential to break out of a local maximum though.
The only way to break out of such a local state is to sporadically generate new point neurons and layers every `n` cycles and keep them for `m` cycles, and maybe even parallelise them so they create a daughter together that is superior to the individual surviving parents — maybe there's a threshold that must be reached?
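A sketch of the branch-and-prune idea: the original is deep-copied and frozen, only the branch trains for `n` cycles, and the underperformer is then deleted. The toy "model" (a single weight) and training step are stand-ins; with a real NN the branch would be new layers referencing the frozen originals one-directionally.

```python
import copy

def trial_branch(model, train_step, evaluate, n_cycles=100):
    """Clone the model, train only the clone, then prune the loser.
    The original is never modified, so no progress is ever reset."""
    branch = copy.deepcopy(model)  # one-directional: branch reads, original untouched
    for _ in range(n_cycles):
        train_step(branch)
    return branch if evaluate(branch) > evaluate(model) else model

# Toy stand-ins: a single weight pulled toward a target of 1.0.
model = {"w": 0.0}
def step(m): m["w"] += 0.05 * (1.0 - m["w"])
def score(m): return -abs(1.0 - m["w"])

best = trial_branch(model, step, score)
assert score(best) > score(model) and model == {"w": 0.0}
```

Because the original survives the trial untouched, a failed experiment costs only the compute spent on the branch, which is the catastrophic-forgetting safety property the section is after.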
Evolving Non-Convexity
It's important to note that as the agent progresses in modeling the world, there is the evergrowing non-convexity/concavity of things. E.g., as it discovers more about the problem, more dimensions will arise and the underlying model of the task will adjust; for example, if a new variable is discovered then potentially <Latex e="ax^2 + bx + c" /> turns into <Latex e="ax^3 + bx^2 + cx + d" />.
I would think that when we experiment with trying to find the global min, we can use "phantom wanderers" that experimentally go AWOL to try and discover a new local min, which may or may not be a better position than the one we already have. I imagine that we record our current position, tangent, and the step size we took to get there from the last, and then as these wanderers explore we can slowly start to colour in the blackbox — this would in turn force our agent to adjust its mathematical model of the object it's dealing with to get a better mental model. Maybe it only models certain sections.
I had the idea in the shower today (18/08/24) that for a continuous learning + mutating system in blackbox environments, you want something that continuously adapts over time. This should account for new changes in the data, and then you concat them together once the "fog-of-war" has been explored. Like when you (human) learn new information about a topic, you "add it" to your mental model.
So with the following graphs, we can section off the 2nd frame as a negative mx + b and the 3rd as a quadratic. Then you can adjust your gradient descent algo dynamically based off its section, i.e. negative linear or quadratic.
The hard part I think would be having the agent adjust the equation models as it learns more — like going from `ax^2 + bx + c` to `ax^3 + bx^2 + cx + d`.
The 2nd frame discovers a negative `mx + b` section from exploring a tad left and right.
On the 3rd frame we do a little movement to the right and left to discover it's a quadratic. Now we have this section that we know is only a parabola and can add that to whatever equation we have.
Each increment uncovers the fog-of-war and we can create frames of graphs that we can concatenate over time!
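The frame-by-frame classification can be sketched with finite differences: probing a tad left and right of a point, the second difference is ~0 on a linear section and nonzero on a curved one. The hidden piecewise landscape below is an invented example standing in for the blackbox.

```python
def classify_section(f, x, h=0.5):
    """Probe a tad left and right of x: the central second difference
    is ~0 on a linear section and nonzero on a curved one."""
    second_diff = f(x - h) - 2 * f(x) + f(x + h)
    return "linear" if abs(second_diff) < 1e-9 else "quadratic"

# Invented fog-of-war landscape: a negative line for x < 0, a parabola for
# x >= 0 (the agent never sees this formula, only the probed values).
def hidden(x):
    return -2 * x + 1 if x < 0 else x * x + 1

# Uncover one frame at a time and record each section's shape.
frames = {x: classify_section(hidden, x) for x in [-3, -2, 2, 3]}
assert frames[-3] == "linear" and frames[3] == "quadratic"
```

Each probe only uncovers a local frame, so concatenating the classified frames over time is exactly the "add it to your mental model" picture, and the detected shape can drive which descent rule to apply in that section.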
Building From Scratch
A critical part of continuous learning is the starting point. Wtf do we need to begin with to ensure we can grow from. This is the “origin of life” question and to my knowledge we can only say there are prokaryotes were the originator where eukaryotes evolved from, etc. So, what components are necessary to build the scaffold where everything else can emerge from? What are the bare minimum requirements to kick-start the digital big bang?
A way to digest data, and the ability to:
- Generate neurons
- Cull neurons
- Connect and disconnect neurons to each other
- Modify existing neuron variable(s)
- Evolutionary fitness memory function, to determine whether to perform actions (1, 4): this would record its current state, then evolve and assess the evolved version’s fitness to see if it needs to double down to improve, or change something else and experiment again.
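As a sketch of those bare-minimum components, here is the crudest possible representation: a neuron is just an id with a bias, and the fitness function is a made-up toy. Every name here is illustrative, not a proposed design:

```python
import copy
import random

class NeuronGraph:
    """Bare-minimum scaffold: neurons are ids with a bias,
    edges are weighted (src, dst) pairs."""

    def __init__(self):
        self.neurons = {}   # id -> bias
        self.edges = {}     # (src, dst) -> weight
        self._next_id = 0

    def generate(self):                        # 1. generate neurons
        nid, self._next_id = self._next_id, self._next_id + 1
        self.neurons[nid] = 0.0
        return nid

    def cull(self, nid):                       # 2. cull neurons
        self.neurons.pop(nid, None)
        self.edges = {e: w for e, w in self.edges.items() if nid not in e}

    def connect(self, src, dst, weight=1.0):   # 3. (dis)connect neurons
        self.edges[(src, dst)] = weight

    def disconnect(self, src, dst):
        self.edges.pop((src, dst), None)

    def perturb(self, rng):                    # 4. modify a neuron variable
        if self.neurons:
            nid = rng.choice(list(self.neurons))
            self.neurons[nid] += rng.gauss(0, 0.5)

def evolve(graph, fitness, generations=200, seed=0):
    """5. Fitness-memory loop: record the current state, mutate, and
    keep the mutant only if fitness improved; otherwise revert and
    experiment with something else next round."""
    rng = random.Random(seed)
    best = fitness(graph)
    for _ in range(generations):
        snapshot = copy.deepcopy(graph)               # remember where we were
        op = rng.choice(["generate", "cull", "perturb"])
        if op == "generate":
            graph.generate()
        elif op == "cull" and graph.neurons:
            graph.cull(rng.choice(list(graph.neurons)))
        else:
            graph.perturb(rng)
        score = fitness(graph)
        if score > best:
            best = score                              # double down on the win
        else:                                         # revert the experiment
            graph.neurons = snapshot.neurons
            graph.edges = snapshot.edges
            graph._next_id = snapshot._next_id
    return best

# Toy fitness (made up): prefer a graph with exactly 5 neurons.
g = NeuronGraph()
evolve(g, lambda gr: -abs(len(gr.neurons) - 5))
print(len(g.neurons))  # grows until the toy fitness is maximised: 5
```

The snapshot-then-revert step is the “fitness memory” from the list: mutations that pay off are kept, everything else is rolled back so the next experiment starts from the best-known state.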
My thought …
Philosophy Behind Evolution
The hardest question to this entire prototype is: how would evolution occur?
Biological evolution happens because there is an incentive to evolve: environmental pressures push the organism to survive. However,
Autonomous Hyperparameters
… .
Reward Function
Games have clear objectives: kill enemy x, defuse the bomb, gather the most points, etc. However, a game like Rust has the singular objective of satisfying hunger, yet there is the optionality to do anything: build a base, craft items, raid others or befriend them. None of which is directly correlated with farming plants or killing animals and eating with a campfire. So what causes this want for us to do the unnecessary? Who knows. It’s too complex to figure out on top of everything else humans do. The objective is to create an evolutionary algo so emergence can do its beautiful thing!
That begs the question, what is the goal of evolution? But before we even ask that question, what is a goal / objective? And does one have to be conscious to have an objective?
This is a very fascinating question because the lower we go the murkier it gets. On one hand we have humans, who clearly have the consciousness to create desired goals. Then we go to worms that wiggle their way to food: are they conscious? Let’s say sure for the sake of this. And then there are viruses. Do they have objectives? They aren’t alive, merely floating blobs of matter with hyperspecialised folded proteins on their outer shell that enable access into the host. The interesting part is that these proteins adapt over time. Even retroviruses have adapted to insert themselves into the genome, becoming invisible to the immune system (think of HIV). This is evolution by natural selection. Was there any directional objective that was conscious? I don’t think so, since they’re not alive. It’s clearly an environmental influence, since viruses have no competitors aside from their environmental host, PvE not PvP! I would assume that there are 2 ways viruses mutate:
- Mutational Evolution:
- Once at the phase of reproduction, by hijacking the organism’s cell, a gene is expressed to say “this worked, optimise it harder to survive in these conditions”
So, counter-intuitively, I believe the “goal” to achieve AGI is to not have one. We want emergence to create everything and let it do its thing. Given enough time something will develop that determines a goal, and then it will optimise for that further.
Unbounded Simulations
Does evolution even simulate “is this alteration beneficial?” Of course not! It tests in prod. That’s why phenotypic diversity exists; otherwise there would be one super-organism. The inferior mutants die off and the superior ones survive. So do we even need to simulate? Maybe not. I think emergence will create the best way to deal with the chaos of unbounded variables; we simply need to build the algo that capitalises on the principle of emergence!
Resources
- How to Make Intelligent Robots That Understand the World | Danko Nikolić | TEDxESA
- Autopoiesis and Cognition: The Realization of the Living, by Humberto Maturana and Francisco Varela
- Cybernetics: Or Control and Communication in the Animal and the Machine, by Norbert Wiener
- The Web of Life: A New Scientific Understanding of Living Systems, by Fritjof Capra
- The Embodied Mind, by Francisco Varela, Evan Thompson and Eleanor Rosch
- Introduction to Foresight, Executive Edition: Personal, Team, and Organizational Adaptiveness
- https://www.reddit.com/r/evolution/comments/138hbv0/evolution_has_no_goal/