# Reverse-PCA for making sense of the typical structure in multivariate models

I don’t really have a good sense of what many places in the UK are like, nor of the joint structure of some of this data. So, while my model fits quite well and yields some interesting results, I’m a bit limited: I don’t know what a place like Barrow-in-Furness is like without looking it up.

In general, it’s difficult to work out what the model is telling me from the conditional estimates alone, because I lack the joint picture: I don’t intuit how the attributes covary across places, as I might for US counties or states.

So, I found myself wanting a kind of “joint” marginal effect, something I could use to work out how my model predictions vary from “places like A” to “places like B” but define those generically, in terms of typical combinations of attributes in my sample.

I started by shifting things linearly along my data’s midranges, but this ignores the fact that some attributes in my design matrix are negatively correlated with others: it may well be more typical in my data for Xj to increase as Xk decreases, on average. Whatever I’m after isn’t just a linear shift using each conditional effect… it’s something else.
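A minimal sketch of the problem, using made-up data (the feature names and correlation strength are assumptions, not my actual design matrix): when two attributes covary negatively, nudging both of them upward at once is an atypical move in the data.

```python
import numpy as np

# Hypothetical illustration: two negatively correlated attributes.
rng = np.random.default_rng(0)
xj = rng.normal(size=500)
xk = -0.8 * xj + rng.normal(scale=0.6, size=500)  # Xk tends to fall as Xj rises
X = np.column_stack([xj, xk])

# The naive "shift along the midranges" move nudges both columns in
# fixed directions, independently of one another...
naive_shift = (X.max(axis=0) + X.min(axis=0)) / 2 - X.mean(axis=0)

# ...but the sample correlation says that a joint upward move in both
# attributes is atypical here:
corr = np.corrcoef(X, rowvar=False)[0, 1]
print(round(corr, 2))  # strongly negative
```

The point is just that a realistic “places like A to places like B” shift has to respect this covariance, which is what the PCA approach below does.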

So, eigenspaces. I strung some code together to:

1. Grab the `sklearn.decomposition.PCA` of my model design matrix.
2. Extract the most relevant dimension.
3. Sort my data by this dimension and grab the names of observations.
4. Plot the predicted Brexit % against these names.
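The steps above can be sketched roughly as follows; the design matrix, predictions, and area names here are placeholders, and the actual plotting call is left as a comment since the details depend on the figure you want:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-ins: a small design matrix, fitted predictions, and names.
rng = np.random.default_rng(1)
names = ["Area %d" % i for i in range(8)]
X = rng.normal(size=(8, 4))               # model design matrix
predicted = rng.uniform(30, 70, size=8)   # predicted Brexit % (placeholder)

# 1. PCA of the design matrix (sklearn centres the columns internally).
pca = PCA(n_components=2).fit(X)

# 2. The most relevant dimension: each observation's score on component 1.
scores = pca.transform(X)[:, 0]

# 3. Sort observations by this dimension and grab their names.
order = np.argsort(scores)
sorted_names = [names[i] for i in order]

# 4. Plot predicted % against these names, e.g. with matplotlib:
#    plt.plot(predicted[order], range(len(order)))
#    plt.yticks(range(len(order)), sorted_names)
print(sorted_names[0], "->", sorted_names[-1])
```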

Above is the plot of my data’s main dimension, the one that explains the most variance in my design matrix. The lines are the predicted % Brexit, observed % Brexit, and “breakeven point,” along with the names of places sorted by this dimension on the vertical axis.

Now, I can get a sense of how these types of places (sort of like area profiles) relate to one another in my data. This gives me an idea of what happens when I change from “places like Kensington and Chelsea” to “places like Cornwall,” without having to specify the precise covariance structure of my attribute data.

I can slice one dimension off the PCA decomposition, check how varying it changes my model, and see what covariates are related to that dimension.
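Checking which covariates are related to a dimension comes down to inspecting that component’s loadings. A sketch, with invented feature names standing in for the real covariates:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical covariate names for a design matrix; the real ones would
# come from the model specification.
rng = np.random.default_rng(2)
features = ["degree_share", "median_age", "pop_density", "unemployment"]
X = rng.normal(size=(100, 4))

pca = PCA().fit(X)

# components_[0] holds the first dimension's loadings, one per covariate;
# large absolute values mark the attributes that define that "area profile".
loadings = dict(zip(features, pca.components_[0]))
for name, w in sorted(loadings.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {w:+.2f}")
```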

In a way, this gives me the “joint” marginal effect I wanted: what happens when you move the mean response along many different features at once, but in a way that reflects how those features covary in your source data.

imported from: yetanothergeographer