diff --git a/README.md b/README.md
index 42341f6400cf1c268f38b400322d997364dd4ac4..aff562df56c69428c7b1258f68a958c89021cf6d 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ Coming soon
 
 # Action Mutation
-Will get 0 reward, which is basically learning to prevent falling on it's head. The more actions the walker can use, the worse the reward.
+The walker gets a reward of about 0, which basically means it learns not to fall on its head. The more actions the walker can use, the worse the reward.
-This is because the walker tries to generate movement by trembling with it's legs. The covered distance doesn't cover the punishment for doing actions. So after 1600 moves the walker will get a reward arounf -60
+This is because the walker tries to generate movement by trembling with its legs. The distance covered doesn't make up for the penalty for taking actions, so after 1600 moves the walker ends up with a reward of around -60.
 ![Reward](./MutateActions/5_50_50_0.2.png)
 
 ## How it works
@@ -61,6 +61,6 @@ We use Windows, Anaconda and Python 3.7 \
 
 # Important Sources
 Environment: https://github.com/openai/gym/wiki/BipedalWalker-v2 \
-Table of all Environments: https://github.com/openai/gym/wiki/Table-of-environments
+Table of all Environments: https://github.com/openai/gym/wiki/Table-of-environments \
 OpenAI Website: https://gym.openai.com/envs/BipedalWalker-v2/ \
 More on evolution strategies: https://openai.com/blog/evolution-strategies/
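The trade-off the README describes (distance gained vs. a per-action penalty, with trembling yielding a net negative reward) can be sketched with a toy stand-in. This is not the BipedalWalker physics or the repository's actual code; `evaluate`, `mutate`, `ACTION_COST`, and the episode length are hypothetical choices that only mirror the described reward structure:

```python
import random

ACTION_COST = 0.04   # hypothetical per-unit penalty for taking actions
EPISODE_LEN = 1600   # matches the 1600 moves mentioned in the README

def evaluate(actions):
    # Toy reward: distance covered minus an action penalty. Zero-mean
    # "trembling" actions cancel out in the distance term, so the penalty
    # dominates and the total reward goes negative.
    distance = 0.01 * sum(actions)
    penalty = ACTION_COST * sum(abs(a) for a in actions)
    return distance - penalty

def mutate(actions, rate=0.2):
    # Action mutation: perturb a fraction of the action sequence with
    # Gaussian noise, leaving the rest unchanged.
    return [a + random.gauss(0, 0.5) if random.random() < rate else a
            for a in actions]

random.seed(0)
parent = [random.uniform(-1, 1) for _ in range(EPISODE_LEN)]
best = evaluate(parent)
for _ in range(50):
    child = mutate(parent)
    score = evaluate(child)
    if score > best:  # keep the mutant only if it improves the reward
        parent, best = child, score
```

Under this toy reward, even the selected survivors stay negative as long as the actions are noisy, which is the failure mode the README describes: mutation alone favors smaller actions over coordinated walking.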