AI Researcher

Working on evolutionary ML with a focus on adaptive multi-objective functions and preserving weights while adding, removing, or modifying feature parameters. Curious about Riemannian manifolds, chaotic blackbox uncertainty, signal processing, and stochastic calculus.

Articles

Read more →

Career Updates

14/06 - Parallel "Understanding Deep Learning"

  • Spent the past 2 days relaxing, but I'm back.
  • Learned what shallow and deep neural network architectures actually look like (hyperparameters, nested polynomial notation, matrix notation). I learned how the first hidden layer uses x as its input and then each next layer uses the previous h_{k-1} as its input in place of x (I've written the notation out after this list). Notation-wise, I'm understanding deep learning! But it is quite hard to get my head around. All these symbols are making me quite tired, so I decided to write this update, lol.
    • I'm able to formalise my ideas a lot more clearly now! This decision to learn RL and DL was actually a good call. I have the math knowledge to be able to read and understand, so it's not as bad as I thought, although it is challenging to conceptually understand the intuition behind it all. Like, why would you only have K = 2, D_K = 3 versus some other arbitrary numbers?
  • I watched a video on how diffusion works: gradient ascent from noise to iteratively denoise it into an image region (there's a toy sketch after this list). Quite cool!
  • Think I'll read some more Sutton after I finish Deep Learning chapter 4 so I can get a curiosity-boost reset. I'm still curious about DL, it's just that I've been spamming it and need to do a quick gun swap. Other than that, not much has happened apart from Iran firing a volley of missiles at Israel. Sure does feel good to be in Australia right now, away from the next WW3.
  • edit: 00:52 the next day:
    • Finished the day off by getting to the Loss Functions chapter in Prince's DL -- I think it's page 50 or 60.
    • And then learned the discounted estimated return value function and how you can turn it into the Bellman equation (written out after this list). Initially I came up with the Bellman form myself, since it was a clear optimisation, only to read that it was the thing everyone talks about. And I was like, "oh, that was anti-climactic". I thought to myself that RL so far has just been some really shitty algorithms, optimisation-wise, assuming too much that will never really happen! Complete opposite of my mental model. Maybe everyone is just larping and not actually creative people??? Anyway, I finished the chapter in Sutton's book and am now on Dynamic Programming on page 70. Making pretty good progress for the 2 days I've been reading them!
    • My ideas are getting a lot closer to formalisation now, along with my sense of where all the failures in AI really are. I'm quite excited because as I learn the names of things, it fills in the hole that was missing in my low-level understanding. I think I'll definitely be making my own custom topological and combinatorial algos in the future. It seems so obvious to me... free real estate. Let's hope WW3 doesn't happen before I can have fun.
    • Walked 3 miles today too, trying to heal my soleus because I injured it overdoing the running. sadface
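
For my own notes, here's the deep-network notation I was describing, roughly as Prince writes it (the sizes K = 2 hidden layers of width D_K = 3 are just the toy example numbers I mentioned):

$$
\begin{aligned}
h_1 &= a[\beta_0 + \Omega_0 x] \\
h_k &= a[\beta_{k-1} + \Omega_{k-1} h_{k-1}], \quad k = 2, \dots, K \\
y &= \beta_K + \Omega_K h_K
\end{aligned}
$$

Each hidden layer just swaps x for the previous layer's activations, which is the whole "replace x with h_{k-1}" trick.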
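And while the diffusion video is fresh in my mind, a minimal toy sketch of the sampling idea as I understood it: start from pure noise and take repeated small gradient-ascent steps on the log-density, which gradually "denoises" the point into the high-probability image region. This is just a Langevin-dynamics toy with a hand-written score function standing in for a learned denoiser, not anything actually from the video:

    import numpy as np

    def score(x):
        # Toy score function: gradient of log N(x; target, I).
        # In a real diffusion model, a trained network estimates this.
        target = np.array([3.0, -1.0])   # stand-in for the "image region"
        return target - x

    def sample(steps=1000, step_size=0.01, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(size=2)           # start from pure noise
        for _ in range(steps):
            noise = rng.normal(size=2)
            # Langevin update: gradient ascent on log-density, plus noise
            x = x + step_size * score(x) + np.sqrt(2 * step_size) * noise
        return x

    print(sample())  # lands near [3, -1], the high-density region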
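Finally, the discounted return and Bellman equation from Sutton, written out so I remember them (standard textbook form, reproduced from memory):

$$
\begin{aligned}
G_t &= R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \\
v_\pi(s) &= \mathbb{E}_\pi\left[ G_t \mid S_t = s \right] \\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \big[ r + \gamma\, v_\pi(s') \big]
\end{aligned}
$$

The "turn it into the Bellman equation" step is just expanding G_t one reward ahead and taking the expectation recursively.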
Read more →

Podcast

I host a technical podcast about math, science, crypto, HFT/MEV, and infosec. I have hands-on experience in these fields, which lets me ask deeper questions than podcast hosts with only surface-level knowledge.

Explore →

Talks

Listen to my first-ever interview, from when I was just starting out in my career! I talked about MEV and reverse engineering :)

Explore →