

How I Use Python Code Analysis in my Workflow to Make Happy Apps

Posted by nolan on Jan. 19, 2016, 8:19 a.m.

Ever since first learning the basics of building Python applications, and subsequently gaining familiarity with a large framework like Django, I've sought out tools to nudge me in the right direction and correct my mistakes as I go.

Over the years, I've invested time in the pursuit of saving time by using static analysis tools to catch errors in my Python code during authoring. As a bonus, these same tools help me develop consistency in my code style. It's a constant work-in-progress; I get more familiar with the language and my preferences change, but I always find it helpful to have my code checked by an outside agent as I am writing it.

Here's a look at the current state of how I write nice apps with the help of Python static analysis tools.

Analyzing Python

I start by installing what will be the only Python requirement I'll need:

pip install prospector

Prospector is a command-line tool whose purpose is to combine the results of multiple Python static analysis tools into a single, aggregate output. Take a look at the full list of tools Prospector can use and uses by default. The most important of these to me is pylint, the go-to for Python analysis.

Additionally, Prospector detects imports in your modules from Celery, Flask, and Django and dynamically includes extensions to pylint as it checks your code. If any piece of Django's library is imported into the code you are checking, Prospector will enlist pylint-django to take extra measures, such as suppressing pylint warnings that noisily complain about standard Django idioms and making pylint aware of the attributes on Django Models and Fields.
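To see why those extra measures matter, here's a runnable stand-in for the problem pylint-django solves. Django's metaclass injects attributes like `objects` onto model classes at runtime, so a purely static reading of the class body never sees them. The metaclass below is illustrative only, not Django's real implementation:

```python
class AddManager(type):
    """Mimics how Django's model metaclass injects attributes."""
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        cls.objects = []  # attribute added at class-creation time
        return cls

class Article(metaclass=AddManager):
    title = "hello"

# This runs fine, but vanilla pylint reports no-member (E1101) here,
# because `objects` never appears in the class body. pylint-django
# exists to teach pylint about exactly this pattern in Django models.
print(Article.objects)  # → []
```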

Prospector is configurable via a system of yaml config files called profiles that set aspects of how your code is checked. Profiles can enable or disable certain types of messages or PEP rules, or set options dictating how the analysis tool runs. Its killer feature, in my opinion, is its system of profile inheritance, which allows you to create profiles that build off of other profiles in a cascading nature.

Here's what I use as my base profile. All Python projects I write and analyze with Prospector build off of this as a base:

# ~/.prospector/base.yaml
inherits:
    - default
    - strictness_high
    - full_pep8

My base.yaml simply inherits from three of Prospector's built-in profiles, each profile building off of and overriding aspects of the previous with increasing specificity.

For a given Python project, such as a Django site, I create a yaml file in the project root that inherits my base:

# /path/to/my/project/.prospector.yaml
inherits:
    - base

Now, in this profile, I'm free to fine-tune what aspects of my code are checked, contingent on the special requirements and constraints of this project alone.
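For instance, a project profile might grow into something like this. The specific tweaks are hypothetical, but the keys (`inherits`, `ignore-paths`, per-tool options) come from Prospector's profile format:

```yaml
# /path/to/my/project/.prospector.yaml
inherits:
    - base

# Skip Django's generated migration modules entirely.
ignore-paths:
    - migrations

# This project tolerates longer lines than full_pep8's default.
pep8:
    options:
        max-line-length: 99
```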

Let's run Prospector to my specifications:

cd /path/to/my/project
prospector module/to/check.py --profile-path ~/.prospector --profile .prospector.yaml

The first flag, --profile-path, informs Prospector of the directory containing my profiles (base.yaml). This is needed to make the inherits: [base] line in the project's .prospector.yaml correctly resolve my base profile.

The second flag explicitly declares the file Prospector must use as the profile for this invocation.

Bringing it into Vim

I've done the work of setting up my analysis tools for use on the command line, but it's impractical to think I'd be obsessive or patient enough to run a command every time I'd like to check what I've written.

This is where Syntastic comes in. Syntastic is a plugin for Vim that runs external analysis tools and integrates their output into Vim. Check out the steps on installing it.

Once Syntastic is installed, I include the following inside my .vimrc:

" ~/.vimrc
fun! ProspectorProfile()
    if filereadable(getcwd() . '/.prospector.yaml')
        return '.prospector.yaml'
    endif
    return 'base'
endfun

let g:syntastic_python_checkers = ['prospector']
let g:syntastic_python_prospector_args = '--profile-path $HOME/.prospector --profile ' . ProspectorProfile()

These lines instruct Syntastic to run prospector on Python files using the same command-line arguments as above. The vimscript function ProspectorProfile() detects whether a readable file named .prospector.yaml exists in the current directory and uses it if so. If it doesn't find the profile, it falls back to using base.yaml in its stead.

Every time a Python buffer is manually checked or a Python file is written in Vim, Syntastic will run Prospector to our specifications and drop its output into a location list below the buffer for us to see.

Ignoring Python Packages

I now have Vim checking my work every time I write to a Python module. Very conveniently, too! This does introduce one sticky point, though.

My virtualenv contains a whole lot of third-party libraries in modules I often need to peruse in order to discover what is happening behind the scenes. Commonly, for example, I will dive into some implementation-detail function, place a pdb statement, save, and rerun the Django test server.

With my current Vim setup, Syntastic will listen for that save and check the third-party module in accordance with my personal Prospector profile! This is code I did not write, and needless to say, it is not going to conform to my rules.

I can overcome this in Vim by changing Syntastic's mode from "active" to "passive" for Python files inside the prescribed places pip installs packages in my virtualenv:

" site-packages
:autocmd BufRead,BufEnter /path/to/my/virtualenv/lib/pythonX.X/site-packages/* let b:syntastic_mode = "passive"
" src (packages installed with `pip install -e ...`)
:autocmd BufRead,BufEnter /path/to/my/virtualenv/src/* let b:syntastic_mode = "passive"

Running these two commands tells Syntastic that if we open a buffer containing a Python file that pip/setuptools has installed, it will not automatically analyze the code when the file is opened or saved. We can, however, still manually run Syntastic on the code ourselves with :SyntasticCheck.

Same as before, I can't see myself running these commands every time I open Vim to get to work on a project. I'm going to make things easier for myself by taking advantage of Vim's ability to accept commands on its command line (via -c) when it is invoked from the terminal. Below is a function I've written and placed in my .zshrc (I imagine the .bashrc equivalent is very similar).

# ~/.zshrc
work () {
    # By default, open vim in the current directory.
    dir="$PWD"
    cmds=()

    if [[ -n "$VIRTUAL_ENV" ]]; then
        # Add virtualenv-specific vim commands to be run at launch.
        if [[ -r "$VIRTUAL_ENV/.project" ]]; then
            # If virtualenvwrapper has associated this virtualenv with
            # a project directory, open vim in that directory.
            dir="$(cat "$VIRTUAL_ENV/.project")"
        fi

        # The places pip installs packages in this virtualenv.
        pkg=$(ls -d "$VIRTUAL_ENV"/lib/python*/site-packages | head -n 1)
        src="$VIRTUAL_ENV/src"

        cmds+=(":autocmd BufRead,BufEnter $src/* let b:syntastic_mode = \"passive\"")
        cmds+=(":autocmd BufRead,BufEnter $pkg/* let b:syntastic_mode = \"passive\"")
    fi

    # Make vim's working directory match the chosen directory.
    cmds+=(":cd $dir")

    args=$(for c in $cmds; do echo -n "-c '$c' "; done)

    eval "vim $args $dir $@"
}
This function, work, detects whether or not it is being invoked in an environment that has the variable $VIRTUAL_ENV set. If so, it makes the assumption it has been called inside a shell that has had virtualenv's bin/activate script run (or workon from virtualenvwrapper).

It works out the site-packages and src directories specific to this virtualenv and passes the corresponding autocommands as -c parameters to Vim.

Now, finally, all I have to do to get working on my supercharged, statically analyzed Python project is cd into my project directory, activate the virtualenv via workon myproject, type work, and hit Enter.