Multiple Enhancements #27

Open
NadimGhaznavi opened this issue Jan 10, 2025 · 0 comments
NadimGhaznavi commented Jan 10, 2025

Thank you SO much for your video and code. I was really struggling with getting started learning ML (going through courses, tutorials, etc) and your video helped me break through that barrier. Now I'm actively coding ML.

Keep up the great work! People like you are awesome!

I have a modified version of your AI Snake code on GitHub. I'm using it as a concrete project as I learn more about ML and PyTorch.

I thought I'd share what I've done:

  • Pause / Unpause game
  • Elapsed game time in seconds next to the score
  • AI_VERSION constant
    • This allows me to run multiple concurrent versions of the code without clashes with respect to the model.pth file (i.e. a per-version model file)
    • I embed this AI_VERSION in the terminal output, the Matplotlib window title and the PyGame window title, so I can easily differentiate between multiple instances running concurrently
    • It also allows me to branch the code in-place easily, e.g.

```python
AI_VERSION = 6

if AI_VERSION > 1:
    LR = 0.001
    HIDDEN_NODES = 64

if AI_VERSION > 4:
    HIDDEN_LAYERS = 2
```
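A minimal sketch of how AI_VERSION could key the saved model file so that concurrent runs don't clash over a single model.pth (the `model_file_name` helper and its naming scheme are hypothetical, not from the repo):

```python
# Hypothetical helper (illustrative, not from the original repo): derive a
# per-version checkpoint path from AI_VERSION so each concurrently running
# version reads and writes its own file instead of a shared model.pth.
AI_VERSION = 6

def model_file_name(version: int, directory: str = "./model") -> str:
    """Return a per-version model path, e.g. ./model/model_v6.pth."""
    return f"{directory}/model_v{version}.pth"

# model_file_name(AI_VERSION) would then be passed to torch.save()/torch.load()
# in place of the hard-coded "model.pth".
```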
  • HIDDEN_NODES constant to set the number of nodes in the hidden layers
  • HIDDEN_LAYERS constant. So if you set it to 3 (for example), it will add three pairs of nn.Linear and nn.ReLU layers to the Linear_QNet model
  • I broke the SnakeGameAI:_move() function into move_helper() and move_helper2()
  • move_helper(self, action) returns a direction
  • move_helper2(self, x, y, direction) returns a Point(x,y)
  • I'm calling these functions in the Agent:get_action() function, within the epsilon code block, so that the Agent looks one move into the future; if that move is a collision, it tries to generate another random move. I found that a LOT of random moves were instant death, which wasn't very helpful in training. This change made a HUGE difference to the speed of the AI's training progress.
  • I added a number of new elements to the state[] of the game:
    • The previous move
    • Splitting the is_collision into is_wall_collision and is_self_collision (I also split the SnakeGameAI:is_collision() into SnakeGameAI:is_wall_collision() and SnakeGameAI:is_self_collision())
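The one-move look-ahead described above can be sketched framework-free. Here `Point`, `next_head`, and `safe_random_direction` are illustrative stand-ins for the move_helper()/move_helper2() split and the epsilon-block retry loop, assuming a 20-pixel grid; the real code works with the game's direction enum and collision checks instead:

```python
# Illustrative sketch (assumed names, not the repo's actual API): reject
# random moves that would be an instant collision and re-roll, instead of
# letting exploration kill the snake immediately.
import random
from collections import namedtuple

Point = namedtuple("Point", "x y")
BLOCK_SIZE = 20  # assumed grid step, matching the tutorial's 20px blocks

# Direction -> (dx, dy) step on the grid
DELTAS = {
    "RIGHT": (BLOCK_SIZE, 0),
    "LEFT": (-BLOCK_SIZE, 0),
    "UP": (0, -BLOCK_SIZE),
    "DOWN": (0, BLOCK_SIZE),
}

def next_head(head: Point, direction: str) -> Point:
    """Roughly what move_helper2() computes: the head after one step."""
    dx, dy = DELTAS[direction]
    return Point(head.x + dx, head.y + dy)

def safe_random_direction(head: Point, is_collision, max_tries: int = 10) -> str:
    """Pick a random direction whose next head position is not a collision.

    Gives up after max_tries and returns the last candidate, since the
    snake may be boxed in with no safe move at all.
    """
    direction = random.choice(list(DELTAS))
    for _ in range(max_tries):
        direction = random.choice(list(DELTAS))
        if not is_collision(next_head(head, direction)):
            break
    return direction
```

Capping the retries matters: without it, a boxed-in snake would make the exploration branch loop forever.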

I'm going to continue to develop the AI with the goal of writing a super-intelligent Snake-playing AI.

I find it really helpful to have a concrete project to work with when I'm learning new tech.

Today Snake, Tomorrow the World!!! ;)
