Which Independent Variables Belong in a Regression Equation? We Don’t All Agree, But Here’s What I Do.

Little table of X-Y pairs with a regression diagram and least squares line

During my failed attempt to get a PhD from the University of South Florida, my doctor friend asked me one day to build a linear regression model using a small dataset he had collected from a lab. He had measurements of these “new” chemical messengers called cytokines – so that definitely dates this story! I was supposed to use characteristics of the patients as predictors so we could see if any particular patient characteristics were associated with levels of any particular cytokine.

I remember making a model, and wondering if it was right. When I put all the covariates in, most of them were not significant. If I removed the not-significant ones, the other ones got more significant – very suspicious! Then I added interactions. Then I called my professor and asked what to do.

I’ll cut to the chase: she didn’t know what to do, either.

But she did tell me what everyone is essentially supposed to do. First, we are supposed to set up the statistical test before gathering the data. Yeah, good luck with that when dealing with real-life doctors and real-life labs. Next, before we start modeling, we create a set of rules about our model that need to be met for us to accept and select the final model. These are called “model specifications”.

Challenges with Model Specification for Regression Modeling

Jim Frost writes quite comprehensively about specifying models on his blog, “Statistics by Jim”. He goes over different ways of making up these rules. For example, for linear regression models, you can pick the model with the best r-squared, or you can throw out all the variables whose slopes have non-significant p-values.

But what’s especially awesome about Statistics by Jim’s post is that after he talks about how to make the specifications, he puts in a section titled, “Real World Complications in the Model Specification Process”. I’ll paraphrase what he says:

Sometimes the data do not behave, and other times, they behave downright badly!

…so the best-laid plans can often go wrong in practice. You might feel tension between different specifications. What if two very different models have the exact same r-squared? Also, Jim points out that there are different kinds of r-squared, so which one do you choose? What if there are two “sibling” variables that are both significant when together in a model, but ruin the rest of the model by making everything else non-significant? And you can’t just put in one or the other, because neither sibling is significant alone. This is an example of collinearity.
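One quick way to put a number on the “sibling variable” problem is the variance inflation factor (VIF): regress one predictor on the other and compute 1 / (1 − r-squared). A common rule of thumb flags VIFs above 5 or 10 as troublesome. Here is a minimal sketch in Python with made-up data – the values and cutoff are illustrative assumptions, not from any dataset mentioned in this post:

```python
# Variance inflation factor for two "sibling" predictors -- a toy sketch.
# With a single other predictor, r-squared is just the squared correlation.

def r2_simple(x, y):
    """Squared Pearson correlation = r-squared of a one-predictor OLS fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical lab values that track each other closely
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [1.1, 2.0, 2.9, 4.2, 5.0, 5.8, 7.1, 8.0]

vif = 1.0 / (1.0 - r2_simple(x1, x2))
print(round(vif, 1))  # far above the usual 5-10 cutoff, so collinearity
```

A VIF this large means the model cannot cleanly separate the two siblings’ effects, which is exactly why their p-values misbehave together.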

In other words, in real life, regression modeling can be hairy.

What to Do? Stepwise, Stepwise, Stepwise!

Logistic distribution with different mus in different colors

Over time, we’ve come to a consensus that “stepwise” model specification is a good idea. This means setting up our model specification in steps: first we do this, then we do that, then we do this other thing. Stepwise specification gives us more decision points for choosing which covariates to keep: we filter one batch of candidates, then filter the survivors into the next batch. Much more organized, and easier to document. So that’s the good news.
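The filter-a-batch idea can be sketched in code. Here is a toy forward-selection pass in Python that greedily adds whichever candidate variable most improves r-squared, stopping when the gain falls below a threshold. This is only one flavor of stepwise – the gain cutoff, variable names, and data are illustrative assumptions, and real stepwise procedures typically use p-values and include a removal step:

```python
# Forward selection by r-squared gain -- a toy sketch of one stepwise flavor.

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def r_squared(columns, y):
    """r-squared of an OLS fit (with intercept) via the normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in columns] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    ybar = sum(y) / n
    sse = sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2
              for i in range(n))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst

def forward_select(candidates, y, min_gain=0.01):
    """Greedily add the covariate that raises r-squared most, until the best
    remaining gain falls below min_gain."""
    chosen, best = [], 0.0
    remaining = dict(candidates)
    while remaining:
        name, r2 = max(
            ((nm, r_squared([candidates[c] for c in chosen] + [col], y))
             for nm, col in remaining.items()),
            key=lambda t: t[1],
        )
        if r2 - best < min_gain:
            break
        chosen.append(name)
        best = r2
        del remaining[name]
    return chosen, best

# Toy data: y is built from x1 and x2; x3 is irrelevant noise.
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [0, 1, 0, 1, 0, 1, 0, 1]
x3 = [5, 3, 8, 1, 9, 2, 7, 4]
y = [2 * a + 3 * b for a, b in zip(x1, x2)]

chosen, best = forward_select({"x1": x1, "x2": x2, "x3": x3}, y)
print(chosen)  # x1 enters first, then x2; x3 never clears the gain threshold
```

The point of the sketch is the batch-by-batch filtering: each pass narrows the candidate pool before the next decision point, which is what makes the process easy to document.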

The bad news is that there are different kinds of stepwise modeling, and we can’t agree on what the best one is – or even what to call each of them. I used to say what I did was “forward stepwise” modeling – that’s what I was told it was in college. In fact, that’s what I called it when demonstrating it in my LinkedIn Learning R course on regression.

Then, later, when I was writing this book on descriptive and regression analysis in R and SAS, one of the really helpful peer-reviewers alerted me to this awesome article that actually tries three different logistic regression modeling approaches on the same dataset and compares the answers.

But the problem is that the authors use the term “forward stepwise” to mean something different from what I meant in my R course. This worried me, so I asked around and found that some people do call what I did in the R course “forward stepwise”, and some call it “ambi-directional stepwise” – which, sorry, is just too weird for me.

Another popular approach is called “backward stepwise” (also, unfortunately, called “backward elimination”), and the authors tested that one in their article, along with their version of “forward stepwise” and the approach I actually use – which I guess I will just call “stepwise selection”. That’s the approach I’ve been using since being coached by my professor on that project, and it’s the one I teach to my customers and in my courses. So that’s what I call it now.

Okay Great, Stepwise Selection is Best. How Do I Learn it?

If you are an R person, you can take my R course on LinkedIn Learning, and if you are a SAS person, you can take my SAS course. But if you are more of a manager type – or you just want to learn how to do model specification without actually doing any programming – take my big data study design course on LinkedIn Learning. I teach practical model specification that you can apply with real world data…

…even when data behave badly!

Monika Wahi is LinkedIn Learning author of data science courses in both SAS and R, and President of DethWench Professional Services. Not sure what variables to include in your regression model? Contact Monika at [email protected] or on LinkedIn, and she’ll give you a data consultation!

Rainbow logistic regression plot by Krishnavedala.

Last updated August 24, 2019

 
