Coming soon
# Action Mutation
The walker will get a reward of about 0, which basically means it learns not to fall on its head. The more actions the walker can use, the worse the reward.
This is because the walker tries to generate movement by trembling with its legs, and the distance covered doesn't make up for the penalty for taking actions, so after 1600 moves the walker ends up with a reward of around -60.
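As a rough, hypothetical illustration (not code from this repository), the script below replays a sequence of random "trembling" actions in `BipedalWalker-v3` and prints the summed reward. The environment id and the classic `gym` step signature are assumptions about the setup.

```python
# Minimal sketch: replay random actions and sum the reward.
# Assumes the older gym API where step() returns 4 values;
# newer gym/gymnasium versions return 5.
import gym
import numpy as np

env = gym.make("BipedalWalker-v3")
env.reset()

total_reward = 0.0
for _ in range(1600):                      # the episode limit mentioned above
    action = np.random.uniform(-1, 1, 4)   # random "trembling" of the 4 joints
    _, reward, done, _ = env.step(action)
    total_reward += reward                 # applying torque costs a little reward every step
    if done:                               # falling ends the episode early
        break

env.close()
print(total_reward)                        # usually clearly negative
```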

## How it works
1. Generate a population with a starting number of randomized actions (we don't need enough actions to solve the problem right now)
...
4. Mutate all children and increment their number of actions (a rough sketch of this loop follows the list)
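The sketch below fills in the loop described by this list. It is not the repository's implementation: the population size, the mutation strength, and the simple keep-the-best-half selection used for the elided middle steps are all assumptions.

```python
# Hypothetical sketch of the action-mutation loop, assuming the classic gym API.
import gym
import numpy as np

POP_SIZE = 50          # assumption: the README does not state the population size
START_ACTIONS = 10     # step 1: starting number of randomized actions
MUTATION_STD = 0.1     # assumption: how strongly each action is jittered

def random_action():
    # BipedalWalker takes a 4-dimensional continuous action in [-1, 1]
    return np.random.uniform(-1.0, 1.0, size=4)

def evaluate(actions):
    # Replay a fixed action sequence and return the total reward
    env = gym.make("BipedalWalker-v3")
    env.reset()
    total = 0.0
    for action in actions:
        _, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    env.close()
    return total

def mutate_and_grow(actions):
    # Step 4: mutate every action and append one more, so sequences slowly grow
    mutated = [np.clip(a + np.random.normal(0.0, MUTATION_STD, size=4), -1.0, 1.0)
               for a in actions]
    mutated.append(random_action())
    return mutated

def evolve(generations=50):
    # Step 1: a population of short random action sequences
    population = [[random_action() for _ in range(START_ACTIONS)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Assumed middle steps: score every individual and keep the best half
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: POP_SIZE // 2]
        # Step 4: children are mutated copies that are one action longer
        population = parents + [mutate_and_grow(p) for p in parents]
    return max(population, key=evaluate)
```

Growing each child by one action per generation matches the note in step 1 that the initial sequences do not need to be long enough to solve the task yet.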