# Working between Python 2 and Python 3

I am so done with programming for python 2.

This is the convergence of a spatial parameter in a spatial model I’m working with, from a very long run that took a few hours to complete. Tests of this on my (Python 3) test server went fine and tuned correctly, but never showed this dramatic convergence.

Once the sampling finished, I looked at the acceptance rate and, sure enough, the AR on the Metropolis step of this Gibbs sampler was like… .98.

In the test runs on my test server, it always tuned to between .2 and .3 under any reasonable tuning procedure. So I was perplexed…

Doing more tuning on the compute server, I noticed that, even when the acceptance rate was very high, my tuned Metropolis class kept reducing the jump size. No matter what, the proposal scale was decreasing.

In fact, it decreased exponentially, which means the tuner never once saw the AR as above the target AR. Yet the AR rapidly climbed to 1.
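To see why a stuck-at-zero acceptance rate produces exactly that exponential decay, here is a minimal sketch of an adaptive tuner in the same spirit as mine (the names `tune_scale`, `target`, and `factor` are hypothetical, not my actual class):

```python
def tune_scale(scale, acceptance_rate, target=0.25, factor=0.9):
    """Shrink the proposal scale when accepting too rarely, grow it otherwise."""
    if acceptance_rate < target:
        return scale * factor  # too few acceptances: propose smaller jumps
    else:
        return scale / factor  # too many acceptances: propose larger jumps

scale = 1.0
for _ in range(50):
    # If the computed acceptance rate is always 0, the first branch fires
    # every time and the scale decays geometrically: 0.9 ** 50 ~ 5e-3.
    scale = tune_scale(scale, acceptance_rate=0.0)
```

A geometric shrink per adaptation step is exponential decay over the run, which is exactly the behavior I was watching.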

Cue:

```python
acceptance_rate = sampler.n_accepted / sampler._total_iterations
```

Whereas this works on my test server, on the Python 2 compute server it always yields zero: `n_accepted` and `_total_iterations` are both ints, and in Python 2 `/` between two ints is floor division, so any count smaller than the total truncates to 0. The tuner therefore takes the acceptance rate to be zero, even though the true acceptance rate was likely never zero.
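The two behaviors can be emulated side by side in Python 3 via the `operator` module, with made-up numbers standing in for the sampler's counters:

```python
import operator

n_accepted, total = 980, 1000  # hypothetical counts from a high-AR chain

py2_style = operator.floordiv(n_accepted, total)  # what Python 2's `/` does on ints
py3_style = operator.truediv(n_accepted, total)   # what Python 3's `/` does

print(py2_style)  # 0    -- the "acceptance rate" collapses to zero
print(py3_style)  # 0.98 -- the rate the tuner should have seen
```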

After my summer-of-code experience, I’ve come to think this division behavior is the scariest design choice in Python 2, and it makes the jump from 2 to 3 and back so fraught. I totally forget about `__future__` because in 90% of my environments I don’t need it. And when I have to maintain code in both 2 and 3, bugs like this can’t be autoconverted, and often show up only by making the program act strangely rather than fail.
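For the record, either of these two defenses makes the division behave identically under Python 2 and 3 (again with hypothetical counter values):

```python
from __future__ import division  # makes `/` true division in Python 2 too; a no-op in Python 3

n_accepted, total = 980, 1000

rate_a = n_accepted / total         # safe once the __future__ import is in effect
rate_b = float(n_accepted) / total  # explicit cast: works with or without the import
```

The explicit `float(...)` cast is the more defensive of the two, since it doesn’t depend on the import being present at the top of every module that does the division.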

imported from: yetanothergeographer