There comes a time in every young developer’s life when they realize that they need to start programming like a grown-up. This is when they learn how to use virtual environments.
A virtual environment is basically a small clubhouse in your computer that’s a little bit isolated from everything else. Files in that clubhouse only really know about other files in that clubhouse; they couldn’t care less about what’s outside, and what’s outside doesn’t know much about them. It’s a kind of isolation that is frowned upon when it occurs on university campuses, but is quite essential when it comes to writing production software.
Why? Because the world of code is a fickle, perpetually shifting place, with updates and changes happening constantly. You update things all the time; you’re never quite sure what’s going to change, and when. Version 1.2 might work a little bit differently from 1.2.2, and you can never be quite sure if something you’ve written for one will work for the other. Further, you’re using 1.2, but maybe your colleague is still stuck on 1.0 (sad, I know). If she pulls your code and tries to run it, it’ll throw approximately eight thousand errors.
Does that sound relaxing to you?
Enter virtual environments. With a virtual environment, you pick exactly what version of what packages go inside, and it’s easy for someone else to re-create that environment on their own machine.
I’m in the process of moving the ParagonMeasure backend into a virtual environment, in preparation for building out the Django application which will bring it to life. The rest of this post will follow the process of setting up the environment.
Installation & Creation
First, install virtualenv, the virtual environment library:
sudo pip install virtualenv
Easy enough. Now, create a directory for all your environments to live in. This can be separate from the files meant to be run in the environment. I created a top-level directory in my code directory called environments, where I plan on keeping all of the virtual environments I create for any of my projects:

    code/
        personal/
        projects/
        work/
        environments/
Now, let’s actually create an environment.
virtualenv -p python --no-site-packages env_one
You should see virtualenv print some output as it creates the environment and installs setuptools and pip.
You may be wondering about those options I passed. The -p flag specifies which version of Python to use to create the environment. Odds are you’ll be fine without it – I did a somewhat wonky install of the Enthought IPython Distribution that made itself some sort of default, so virtualenv kept crashing until I explicitly said to use regular old python.

By default, virtualenv will look in the active virtual environment for a package, but will fall back on any global installs if it isn’t found there. The --no-site-packages option tells virtualenv to use only the packages you install in that environment, and never to fall back on the global folders. I like this option because it means that I have to be explicit about every package that I’m using; I want to avoid a deployment where I realize that I’ve actually been relying on some obscure package I installed two years ago and completely forgot about.
Alright! We now have our very own virtual environment! Let’s take a look at what we’ve made:
    code/
        environments/
            env_one/
                bin/
                include/
                lib/
                .Python
The /bin directory contains the binaries, like python, which you’ll actually be running when you use this environment. It also contains an executable file called activate, which we’ll talk about in a second.

The /include directory contains a directory called python2.7/, which contains a bunch of header files (.h) which I don’t understand.

The /lib directory contains a directory called /python2.7, which is where all the goodies live. When you install new packages into this virtual environment, they’ll end up in /lib/python2.7/site-packages. That’ll be where your applications running in this environment will look first.
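A quick way to see this lookup in action is to ask a module where it was imported from. The sketch below uses the stdlib json module (so it runs anywhere), but the mechanism is identical for packages installed into site-packages:

```python
import json

# Every non-builtin module records the file it was loaded from.
# For a package installed into this virtualenv, the path would sit
# under env_one/lib/python2.7/site-packages/.
print(json.__file__)
```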
Here’s what’s inside site-packages to start:

    site-packages/
        _markerlib/
        pip/
        pip-1.5.6.dist-info/
        setuptools/
        setuptools-2.6.dist-info/
        easy_install.py
        pkg_resources.py
Pretty bare-bones, huh?
Now that we have our environment, how do we use it?
In general, we use our virtual environment by calling the binaries installed inside of it – these binaries know to look in the virtual environment before the global environment, and so anything called with those binaries will be called inside of the virtual environment.
There are two ways of doing this. One is the boring way, the other is the cool way.
The boring way is to explicitly state the path to the virtual env binary. Say you had somescript.py in the environments/ directory, and you wanted to run it inside of your new virtual environment. You could run it by typing env_one/bin/python somescript.py. This tells the shell to look in env_one/bin for a python executable and use it to run somescript.py, instead of whatever executable it would’ve found if you had typed python somescript.py and it had gone romping around the $PATH looking for one.
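That $PATH romp can be reproduced from Python itself. This is just a sketch (assuming Python 3, for shutil.which) showing which executable a bare command name resolves to:

```python
import os
import shutil

# The directories the shell searches for a bare command name, in order.
search_dirs = os.environ.get("PATH", "").split(os.pathsep)
for d in search_dirs[:3]:
    print(d)

# shutil.which performs the same left-to-right scan the shell does.
# With a virtualenv activated, its bin/ directory sits first and wins.
print(shutil.which("python3"))
```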
So that’s the boring way.
The AWESOME way is to use the built-in activate script, by running source env_one/bin/activate.
This will do some magic and change your $PATH variable to point to the virtual environment before anything else. Further, you can cd around your hard drive and call files from anywhere without “leaving” the virtual environment. You also get a cool prompt, prefixed with the environment’s name, like (env_one) ƒ, which I think is the best part (the ƒ is my own flavor).
You can “leave” the environment by entering the deactivate command, which will restore the $PATH variable and put you back in your global environment.
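Under the hood, activate’s $PATH trick is tiny. The sketch below (env_one/bin here is a stand-in path, not a real environment) does the same prepend by hand, which is why the environment’s binaries start winning every lookup:

```python
import os

env_bin = "env_one/bin"  # stand-in for the environment's bin/ directory

# This is essentially what `activate` does: prepend the env's bin/
# directory to $PATH, so its python and pip are found before global ones.
os.environ["PATH"] = env_bin + os.pathsep + os.environ.get("PATH", "")

# And what `deactivate` undoes: the env's bin/ is now the first stop.
print(os.environ["PATH"].split(os.pathsep)[0])  # → env_one/bin
```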
Now, let’s figure out how to install new packages. But first, let’s zoom out for a sec and look at how the $PATH is specifically changing when we change environments.
Compare sys.path in my global environment with the same list inside my virtual environment: all the entries pointing to /Library/ are totally gone, and the first entries all point towards the directory containing my virtual env. That’s how the magic happens.
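If you want to check this on your own machine, a few lines of Python will print the same lists; sys.prefix is a handy bonus, since it points at the environment’s directory whenever one is active:

```python
import sys

# The root of the active Python installation; inside an activated
# virtualenv, this points at the environment's own directory.
print(sys.prefix)

# The import search list: with an environment active, its
# site-packages directory appears before any global entries.
for entry in sys.path:
    print(entry)
```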
Ok, back to installing packages.
We’re using pandas and numpy pretty heavily at ParagonMeasure, so we need to get those bad boys installed ASAP. Let’s try the obvious: pip install numpy pandas. Now let’s take a look in our env_one/lib/python2.7/site-packages folder and see if anything looks different.

WELL HOW ABOUT THAT.
Another way of looking at your packages is through the pip list command.
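pip list is reading package metadata you can also enumerate from Python; here’s a rough equivalent, assuming Python 3.8+ for importlib.metadata:

```python
from importlib import metadata

# Enumerate every distribution visible on sys.path -- roughly what
# `pip list` prints: one name/version pair per installed package.
for dist in metadata.distributions():
    print(dist.metadata["Name"], dist.version)
```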
Duplicating your environment
So, you’ve gotten your virtual environment set up just the way you like it. All the versions are right, your tests are passing, the birds are chirping. How do you create this environment somewhere else?
Via a requirements file. You can create one by running pip freeze > requirements.txt.
This will write a text file containing all of the packages and versions installed in that environment, in a special format that pip can re-interpret later.
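It’s just one pinned package per line. The entries below are only an illustration (pandas pulls in a few dependencies, and pip freeze pins those too; the version numbers here are made up):

```
numpy==1.8.1
pandas==0.14.0
python-dateutil==2.2
pytz==2014.4
```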
You can recreate that environment elsewhere by running pip install -r requirements.txt (assuming you’ve activated that other environment).
You should see pip go ahead and start installing any missing packages. You can edit requirements.txt directly to add packages or change version numbers. Just know that if you delete a package from requirements.txt, pip won’t uninstall it when you run pip install -r requirements.txt.
There you basically have it. Virtual environments are kind of like the magical adulthood of programming, where you can exert basically total control over your world (and make other people’s lives easier to boot).
See the full documentation over at the official site.
Shout out to Jamie Matthews’ excellent post covering much of the same ground (which helped me quite a bit).
The ultimate test. I’ve created an environment called pm_app and have installed only the packages that the ParagonMeasure backend should require. I’m about to cd back into the main repository and run the test suite…