AI Olympics (multi-agent reinforcement learning)

2,606,331
0
Published 2023-10-22
AI Competes in a 100m Dash!

In this video, 5 AI Warehouse agents compete to learn how to run 100m the fastest. The AIs were trained using Deep Reinforcement Learning, a method of Machine Learning that involves rewarding the agent for doing something correctly and punishing it for doing anything incorrectly. Each agent's actions are controlled by a Neural Network that's updated after each attempt in order to give the agents more rewards and fewer punishments over time. Check the pinned comment for more information on how the AI was trained!
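For readers who want to see what "rewarding and punishing a neural network" can look like in practice, here is a minimal policy-gradient sketch in Python/PyTorch. It is purely illustrative: the environment, layer sizes, and reward values are placeholder assumptions, not the code used for these agents (which run in a game engine and train with PPO, as the pinned comment explains).

```python
# Minimal sketch: reward good behaviour, punish bad behaviour, nudge the network
# after each attempt. Purely illustrative; "env" is any gymnasium-style environment.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 4                      # placeholder observation/action sizes
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def run_attempt(env):
    """One attempt: act until done, then push the network toward higher total reward."""
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)               # positive = reward, negative = punishment
    score = sum(rewards)                     # how 'good' the whole attempt was
    loss = -torch.stack(log_probs).sum() * score
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return score
```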

Current Subscribers: 264,870

All Comments (21)
  • @aiwarehouse
    It took me months to make this video, and it took my computer over 3 days straight to train/record the agents, I hope you enjoy it :D

    After teaching Albert to walk in the previous video, I read a lot of comments asking what would happen if I used a more human way of punishing and rewarding Albert, so that's what this video is about! Each agent starts off the same, the only difference being the design of their body. They're each rewarded for moving forward and punished based on the efficiency of their movements (via a muscle fatigue system), so by the end of the video each of them should discover a movement that works efficiently for the body they were given. NOTE: Don't worry, Albert is coming back in the next video, he's hard at work right now improving his walk :)

    If you're interested in training your own AI like Albert but don't know how, there's now a really easy way to do it! Luda, an AI lab, recently built a web app that lets you create and train your own AI using deep reinforcement learning (just like Albert) completely for free in your browser! You build your own character (called a Mel) with lego-like building blocks, then watch it train in real time on their website in just a few minutes (really). It's an awesome project, and just like my videos, it makes deep reinforcement learning so much more accessible, which is why I love it so much. This section of the comment is sponsored by Luda, but these words are entirely my own; it's an amazing project that I would have been obsessed with had they released it before I built Albert. I've genuinely been looking for a sandbox/game exactly like this since I was a kid. They're still early, but they're giving my audience first access to their closed, pre-alpha build. Make sure you check out their site and create an AI agent for yourself! :D prealpha.mels.ai/

    Now, back to our agents. If you want to learn more about how the agents actually work, you can read the rest of this very long comment explaining exactly how I trained them! (And please let the video play in the background while reading so YouTube will show the project to more people.)

    THE BASICS
    Although it seems like there are only 5 agents training here, there are actually 40 copies of the scene being simulated simultaneously behind the camera to speed up the training, so although the video makes it seem as though there are 1638 attempts, there are actually around 65k. Each agent is controlled entirely by an artificial brain called a neural network. Their brains have 5 layers: the first layer consists of the inputs (the information they're given before taking action, like their limb positions and velocities), the last layer tells them what actions to take, and the middle 3 layers, called hidden layers, are where the calculations are performed to convert the inputs into actions. Each agent is given quite a lot of information about its body; they're given everything that Albert was given in the last video (which I explain in great depth in the pinned comment here: https://youtu.be/L_4BPjLBF4E?si=HHv3vrmgIxUGo54f). Just like the last videos, the agents are trained using reinforcement learning. For each attempt an agent has, we calculate a score for how 'good' the attempt was, and the training algorithm we used (PPO) makes small, calculated adjustments to that agent's brain to encourage the behaviors that led to a higher score and avoid those that led to a lower score. (A small, simplified code sketch of this network and its update is included after this comment.)
    REWARD FUNCTION
    For this video there are 6 different ways each agent is rewarded/punished, and I tried to make these reflect our normal movements as much as possible. (A simplified code sketch of this reward function is included after this comment.)

    Movement: Each time the agent takes an action we check how much closer the agent is to the target and reward it proportionally to that distance. If it moves a lot closer to the target, it's rewarded a lot; if it moves away from the target, it's punished.

    Limb Fatigue: This is the heart of the reward function for this video. Every time an agent takes an action on a limb, we punish it proportionally to the strength of the movement and the current fatigue of the limb (so if the agent moves a limb that's already really fatigued, the agent is punished severely), then we increase the fatigue level of the limb based on how strong the movement was, and with each frame we slightly lower the fatigue of each limb to simulate the limbs resting. This reward is meant to simulate muscle soreness and encourage the agents to find the movements that are most efficient for their body design, but it also makes for more interesting gaits, since without this punishment the agents would all likely opt for a safe shuffle and avoid taking large steps.

    If you're still reading this, you're probably really smart and want to learn more about Albert, so make sure to join my discord server I just made where we can talk more about the details of Albert's AI! discord.gg/jM2WkNuBnG :)

    Limb Hit: I wanted to punish the agents for falling over, so any time a limb that isn't a foot hits something it's not supposed to hit (the ground, other agents, etc.), we slightly punish the agent, and we also slightly increase the fatigue on that limb.

    Abrupt Movement: Each time the agent takes an action we calculate the average velocity of its body and compare it to the average velocity of its body when it last took an action; the greater the difference between these two values, the more we punish the agent, since a large difference implies an abrupt movement was made, something that is generally bad for our bodies. For anyone looking to make something similar to this, this reward is really important for smoothing out the final gait!

    Chest Up: We give the agents a small reward whenever their chest/head is in the upright position. This helps the learning converge more easily; without this reward the agents might never learn to stand up and instead just learn to crawl to the target.

    OTHER
    I only allowed the agents to make a decision every 5 game ticks, which made the movement look a bit more jagged than if I allowed them to make a decision every tick. I found that if I allow them to make a decision every game tick it's too difficult for them to commit to any proper movements; they end up just making very small movements, like slightly shuffling forward, instead of taking a full step. The 5 game tick decision time forces them to commit to their decision for at least 5 game ticks, so they end up being able to take the less safe (but cooler to watch) large steps.

    Though you only see one version of these agents, there were actually 40 copies (so 200 agents) training simultaneously behind the camera in order to speed up the training process. Despite this, it still took my computer (Threadripper 3960X, RTX 4090, 128GB RAM) over 3 days to train/record!

    Thank you so much for watching! These short videos take literally hundreds of hours to make; if you want to help allow us to make them faster, please consider becoming a channel member!
    By becoming a member, your name can appear in future videos, you can see behind-the-scenes things that don't fit in the regular videos, and you can use stickers of Albert, Kai and some other characters our team made in comments :_Albert::_Kai::_Tyler::_Jonas::_Catt: (more coming) :D Thank you so much for watching, and please, if you enjoyed the video or learned something, share it with someone you think will also enjoy it! :)
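Following up on the "THE BASICS" section of the pinned comment, below is a small sketch of what a 5-layer network like the one described (an input layer of limb positions/velocities, 3 hidden layers, and an output layer of actions) and PPO's "small, calculated adjustments" can look like in code. The layer widths, the 0.2 clip range, and the tensor shapes are illustrative assumptions, not the channel's actual setup.

```python
# Sketch of a 5-layer policy: input layer (observations), 3 hidden layers, output layer (actions).
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 12                      # placeholder sizes
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.Tanh(),        # hidden layer 1 (the input layer is the observation vector)
    nn.Linear(128, 128), nn.Tanh(),            # hidden layer 2
    nn.Linear(128, 128), nn.Tanh(),            # hidden layer 3
    nn.Linear(128, act_dim),                   # output layer: one value per joint/muscle
)

def ppo_clip_loss(new_logp, old_logp, advantage, clip=0.2):
    """PPO's clipped objective: encourage higher-scoring behaviour, but only in small, bounded steps."""
    ratio = torch.exp(new_logp - old_logp)      # how far the updated policy has drifted from the old one
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - clip, 1 + clip) * advantage
    return -torch.min(unclipped, clipped).mean()
```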
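And here is the reward-function sketch referenced in the "REWARD FUNCTION" section above: forward progress minus fatigue, limb-hit, and abrupt-movement penalties, plus a small upright bonus, with actions held for 5 game ticks. Every coefficient, field name, and the fatigue recovery rate is a made-up placeholder to show the shape of the idea, not the values used in the video.

```python
from dataclasses import dataclass, field
from typing import List

DECISION_INTERVAL = 5       # the agents commit to each chosen action for 5 game ticks
FATIGUE_RECOVERY = 0.995    # each frame, every limb's fatigue decays slightly (rest)

@dataclass
class Limb:
    fatigue: float = 0.0
    is_foot: bool = False
    in_contact: bool = False    # touching something it shouldn't (ground, other agents, ...)

@dataclass
class BodyState:
    limbs: List[Limb] = field(default_factory=list)
    dist_to_target: float = 100.0
    prev_dist_to_target: float = 100.0
    avg_velocity: float = 0.0
    prev_avg_velocity: float = 0.0
    chest_upright: bool = True

def step_reward(state: BodyState, efforts: List[float]) -> float:
    """One decision step's reward, combining the components described in the pinned comment."""
    r = 0.0
    # Movement: reward proportional to how much closer the agent got to the target.
    r += 1.0 * (state.prev_dist_to_target - state.dist_to_target)
    for limb, effort in zip(state.limbs, efforts):
        # Limb fatigue: punish strong actions on already-tired limbs, then add fatigue.
        r -= 0.1 * effort * limb.fatigue
        limb.fatigue += 0.05 * effort
        # Limb hit: small punishment (and extra fatigue) when a non-foot limb touches something.
        if limb.in_contact and not limb.is_foot:
            r -= 0.2
            limb.fatigue += 0.02
        # Rest: fatigue recovers a little every frame.
        limb.fatigue *= FATIGUE_RECOVERY
    # Abrupt movement: punish big jumps in average body velocity between decisions.
    r -= 0.5 * abs(state.avg_velocity - state.prev_avg_velocity)
    # Chest up: small bonus for keeping the chest/head upright.
    if state.chest_upright:
        r += 0.05
    return r
```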
  • @Ax2u
    Purple was robbed! Constantly getting tackled by others, and starting on the disadvantageous side track with less space to manoeuvre... Where is the competitive integrity?! Red deserves a DQ for that awful, childish behaviour on run 912.
  • red throwing a tantrum in the middle of the track so now no one else can pass either was hilarious😭
  • @soularzensei1754
    Purple's idea to piggy back on yellow was genius, I wish they kept developing that pattern.
  • @kirkrowe2901
    This is amazing. I love Red's enthusiasm. "Yeah I love the high jump." "This is a race." "HIGH JUMP!"
  • i'm glad to see albert's knowledge of walking is being used to help teach balbert, gralbert, ralbert, yalbert, and palbert how to walk too!
  • @mf6610
    If these were developed into characters:
    Purple: Wild Card, Optimistic, Sometimes Lazy
    Yellow: Straightforward, Patient
    Red: Entitled, Whiney and Immature, Show-Off, But Very Determined
    Green: Quirky, Meek
    Blue: Clumsy, Curious
    - Purple often gets screwed over by others but stays determined
    - Red is slow to learn, behaves badly, and suffers from karma a lot
  • @thanemullen8023
    I have never felt so disappointed to see a video end. I hadn’t been watching the time stamp, and I was so invested in seeing them (especially purple) reach the point that they were truly racing.
  • @B-S-S-Iris
    Again, it was very interesting. I felt that when Red fell, it dragged everyone down and negatively affected the learning of those around him, so if the focus was on running, I felt it would be preferable to have him run the race alone and then composite everyone's movements together in editing afterwards. I was rooting for Purple's run because it was so careful and beautiful... I still wonder whether a longer stride is more advantageous? From a Japanese fan, translated by DeepL
  • I love how he colors some of the words red if the AI does something bad , Yellow if its okay And green if its excellent
  • @BeanicusYt
    I can't wait for all of these AIs to get their own characters and lore. I can just imagine a cinematic universe for this channel. Edit: how the HELL did this comment blow up
  • @coolokayyeah
    It’s amazing how well the AI learns, even if it takes a while
  • @Vd_124
    This made my day, thanks 😂 I've been searching a while for something that could cheer me up
  • @doctorchess801
    Red was the definition of character development. Also, purple being the Pixar lamp was funny though
  • @MalarkySparky
    Do you think it would've run differently if they were encouraged more to stay in their own lanes?
  • @mevtine
    Idk how your videos can make me giggle and even laugh so much, which I haven't done in quite a while. Loving your work, keep it up my dude