Tutorial
Installing the Arcade Learning Environment with Python 3 on Mac OS X 10.11
In this tutorial, we install the Python interface for the Arcade Learning Environment (ALE), which, in short, allows us to create AI agents for Atari 2600 games.
Prerequisites
To follow this tutorial, you will need:
- Mac OS X 10.11 (El Capitan)
- a non-root user with sudo privileges
Note that this may also work on older operating systems, but that is not guaranteed.
Step 1 - Installing Dependencies
Install Homebrew if you have not already.
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Use Homebrew to install several libraries required for low-level access to media and controls.
- sdl: designed to provide low-level access to audio, keyboard, mouse, joystick, and graphics hardware
- sdl_image: an image file loading library
- sdl_mixer: a sample multi-channel audio mixer library
- sdl_ttf: lets you use TrueType fonts in your SDL applications
- portmidi: real-time MIDI input/output
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
If you do not have Python 3 installed, you have two options (the first is recommended):
- Install Anaconda3 from its official installer.
- Install via Homebrew with brew install python3.
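Whichever option you choose, you can confirm that Python 3 is the interpreter on your path by printing its version from inside Python (a quick sanity check, not a required step):

import sys
print(sys.version)  # should report a 3.x version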
To ensure that these packages installed successfully, list all Homebrew packages and check that the five libraries above are included.
brew list
We now install our Python dependency, PyGame, directly from its Mercurial repository (pip needs the Mercurial client, hg, on your system to fetch it).
pip install hg+http://bitbucket.org/pygame/pygame
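To check that PyGame installed against the right interpreter, you can import it and print its version string (another optional sanity check):

import pygame
print(pygame.ver)  # the installed PyGame version string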
Step 2 - Installing Arcade Learning Environment
We will work in our user directory.
cd ~
Clone the source code from GitHub.
git clone https://github.com/mgbellemare/Arcade-Learning-Environment.git
Navigate into the newly-created directory.
cd ~/Arcade-Learning-Environment
Create a directory to house our build, and navigate into it.
mkdir build && cd build
Use cmake to generate the Makefile.
cmake -DUSE_SDL=ON -DUSE_RLGLUE=OFF -DBUILD_EXAMPLES=ON ..
Finally, launch the build.
make -j 4
Step 3 - Installing the Python Interface
We can install the Python module locally. Navigate to the repository root.
cd ~/Arcade-Learning-Environment
Install the Python module using the provided setup.py.
pip install .
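To confirm the bindings installed, you can construct an ALEInterface object from a Python shell; if the module did not build correctly, the import below will fail:

from ale_python_interface import ALEInterface

ale = ALEInterface()  # should construct without errors once the module is installed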
Step 4 - Running an Agent in Python
Before proceeding, download ROM files for the Atari games you want to play. A ROM file contains a copy of the data from a read-only memory chip; in this case, that chip comes from an Atari 2600 game cartridge.
Optional: To see a list of supported games, run ls src/games/supported from the repository root.
Navigate to your user home directory.
cd ~
Create a demo Python file, demo.py, using nano or your favorite text editor.
nano demo.py
Copy and paste the following inside your file.
"""
Sample script for the Arcade Learning Environment's Python interface
Usage:
demo.py <rom_file> [options]
Options:
--iters=N Number of iterations to run [default: 5]
--display Display the game being played. Uses SDL.
@author: Alvin Wan
@site: alvinwan.com
"""
import docopt
import random
import pygame
import sys
from ale_python_interface import ALEInterface
def main():
arguments = docopt.docopt(__doc__, version='ALE Demo Version 1.0')
pygame.init()
ale = ALEInterface()
ale.setInt(b'random_seed', 123)
ale.setBool(b'display_screen', True)
ale.loadROM(str.encode(arguments['<rom_file>']))
legal_actions = ale.getLegalActionSet()
rewards, num_episodes = [], int(arguments['--iters'] or 5)
for episode in range(num_episodes):
total_reward = 0
while not ale.game_over():
total_reward += ale.act(random.choice(legal_actions))
print('Episode %d reward %d.' % (episode, total_reward))
rewards.append(total_reward)
ale.reset_game()
average = sum(rewards)/len(rewards)
print('Average for %d episodes: %d' % (num_episodes, average))
if __name__ == '__main__':
main()
Copy the configuration file to the directory that contains your Python file.
cp ~/Arcade-Learning-Environment/ale.cfg .
Finally, run the program, using the path to your downloaded ROM file.
Note: As of this writing (Dec. 2016), the ROM filename must contain only lowercase letters; otherwise, ALE will hang. (A renaming sketch follows the command below.)
python demo.py <rom_file>
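If your ROM's filename contains uppercase letters, rename it to lowercase first. A minimal sketch, assuming a hypothetical file named Breakout.bin in the current directory:

import os

rom = 'Breakout.bin'  # hypothetical filename; replace with your own ROM
os.rename(rom, rom.lower())  # ALE expects an all-lowercase filename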
Optionally, watch the game being played by adding the --display flag, and/or change the number of iterations with the --iters=N flag. For example, to turn on the display and play 10 games, use the following:
python demo.py <rom_file> --display --iters=10
The Arcade Learning Environment is now ready to use.
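As a possible next step beyond random play, the same interface exposes the raw screen, which a learning agent can use as its observation. A minimal sketch, assuming a lowercase ROM file such as breakout.bin in the current directory:

from ale_python_interface import ALEInterface

ale = ALEInterface()
ale.loadROM(b'breakout.bin')  # hypothetical ROM filename; substitute your own

width, height = ale.getScreenDims()
frame = ale.getScreenRGB()  # the current game screen as a numpy array
print(width, height, frame.shape)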