My news this week: I have been working with a great team for the last couple of years to study fair redistricting in Colorado, and we just posted a new paper on the arXiv!
The basic idea of ensemble analysis is to first use a computer to create millions of random valid congressional districting plans for a state. These plans are created using only the legal criteria for valid plans, with no partisan information included, so they are free from partisan bias by construction. We then use real voting data, such as the votes in the Secretary of State or Governor’s race, to model elections in these possible districts. We build a histogram of how many seats a given party wins under each of these districting plans, which gives a baseline of what we would expect from plans drawn without partisan bias, given the human geography and real voting patterns of the state. If we have a proposed districting plan for the state, we can then see how many seats the party would win under that plan with the same voting data. If the proposed map produces an extreme outcome, giving the party many more or fewer seats than the baseline predicts, we can identify that map as unlikely to have been created without partisan bias. This can be interpreted as evidence, or a quantification, of gerrymandering, or at the very least a sign that the proposed map represents the state's politics badly! This simple but powerful idea has shown up in many recent court cases and has shaped public policy outside the courts.
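To make the pipeline concrete, here is a toy sketch of the logic in Python. Every number in it is invented for illustration (a hypothetical 8-district state, Gaussian vote shares around 52%); a real analysis samples actual legally valid plans, for example with an MCMC sampler on the state's precinct graph, and scores them with real election returns.

```python
import random
from collections import Counter

random.seed(0)

NUM_DISTRICTS = 8  # hypothetical state size, for illustration only

def seats_won(district_vote_shares):
    """Count the districts where party A's vote share exceeds 50%."""
    return sum(1 for share in district_vote_shares if share > 0.5)

def random_plan():
    # Stand-in for one random valid plan: draw each district's party-A
    # vote share around a made-up statewide mean of 52%.
    return [random.gauss(0.52, 0.08) for _ in range(NUM_DISTRICTS)]

# Baseline: histogram of seat counts over a large ensemble of plans.
ensemble = [seats_won(random_plan()) for _ in range(100_000)]
histogram = Counter(ensemble)

# Score a proposed map with the same voting data. This one packs party A
# into two overwhelming districts and narrowly cracks it everywhere else.
proposed_shares = [0.70, 0.70, 0.49, 0.48, 0.47, 0.46, 0.45, 0.44]
proposed_seats = seats_won(proposed_shares)

# Where does the proposed map fall in the baseline distribution?
tail_fraction = sum(1 for s in ensemble if s <= proposed_seats) / len(ensemble)
print(f"proposed map: {proposed_seats} seats; "
      f"fraction of ensemble at or below it: {tail_fraction:.3f}")
```

A tail fraction near zero says that almost no plan drawn without partisan information performs as badly for party A as the proposed map does, which is exactly the outlier signal described above.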
Some cool features of this paper: we study the interaction of the partisan baseline with two of the fairness criteria in Colorado law, minimizing the splitting of counties and maximizing the number of competitive districts. We also apply statistical techniques to determine the sample size needed to achieve a desired level of empirical mixing. We are hoping to spread the word about ensemble analysis within the redistricting community and among the general public here in the state, so give me a shout if you’d like to talk about this :).
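As a rough illustration of why sample size matters (this is a generic Monte Carlo calculation, not the paper's actual technique), one standard way to size an ensemble is to require that the standard error of an estimated histogram-bin probability fall below a target:

```python
import math

def required_samples(p, target_se):
    """Independent samples needed so the standard error of the estimate
    of a bin probability p is at most target_se (se = sqrt(p(1-p)/n))."""
    return math.ceil(p * (1 - p) / target_se**2)

# e.g. to pin down a seat-count outcome with probability around 5%
# to within a standard error of 0.5%:
n = required_samples(0.05, 0.005)
print(n)  # 1900
```

The wrinkle, and part of what makes the mixing question interesting, is that MCMC-style ensemble samples are correlated, so the effective sample size is smaller than the raw count and the required run length is correspondingly larger.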