Joins in Base R: Alternative to SQL-like dplyr

In base R, you can execute SQL-like joins, as long as you use the correct code syntax.

Joins in base R will feel more natural if you are not regularly programming in SQL. If you are used to SQL, then you will likely prefer not to do your dataframe joins in base R. Instead, you will more likely groove with the R package dplyr, which allows you to make joins between dataframes using more SQL-like code.

But for a lot of programmers like me in public health (and other domains that use SAS), base R is more natural than dplyr. Therefore, we are more likely to gravitate towards doing our joins in base R. Also, dplyr favors “tibbles” over regular dataframes, and I prefer dataframes, so I usually stick with base R for data management.

But I like joins! In SAS, we do joins using the old “sort, sort, merge” routine. If we want to do left or right joins, or inner or outer joins, we have to adjust our merge code. In base R, joins can be approached in a similar way as in SAS – but luckily, you don’t have to do the “sort, sort” first!
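To see that base R really does skip the “sort, sort” step, here is a minimal sketch on made-up toy dataframes (not the NHANES data): merge() matches rows on the key even when neither dataframe is sorted.

```r
# Two tiny toy dataframes with the key column deliberately out of order
roster <- data.frame(id = c(3, 1, 2), grp = c("c", "a", "b"))
scores <- data.frame(id = c(2, 3, 1), val = c(20, 30, 10))

# merge() matches rows on "id" with no prior sorting required
both <- merge(roster, scores, by = "id")
both
```

Note that merge() does sort its output by the key column by default, so the result comes back ordered by id even though the inputs were not.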

Joins in Base R: NHANES Use Case

Right here, we are going to start with some code that is a repeat from this blog post about joining NHANES data. You can download the code from GitHub if you are interested. The blog post explains the back story on the NHANES data, and directs you online where you can download the datasets used in this example.

Setting Up the Data

The NHANES datasets are in SAS XPT format, but if you use the foreign package in R, you can easily unpack them into the R environment, which we will do. As described in the blog post, the NHANES data are federated. Therefore, they have to be joined together to make an analytic dataset. The way the business rules go, the demographic dataset (P_DEMO.XPT) is essentially the “denominator file”. It has one record for every respondent in the data. The primary key is SEQN, and SEQN is a foreign key in all the other datasets. Therefore, to build out a dataset for analysis, it is necessary to do a series of left joins onto the demographic dataset.

As you can see below, I import the demographic data into a dataset named demo_a. I also import a dataset with oral health examination data, P_OHXDEN.XPT, into R, and I name that dataset dent_a.

library(foreign)  # provides read.xport() for SAS XPT transport files

demo_a <- read.xport("P_DEMO.XPT")    # demographics
dent_a <- read.xport("P_OHXDEN.XPT")  # oral health examination

The preceding code imports all the variables in each of the source datasets, but we don’t want all the variables. In fact, as described in the blog post, we only want a few variables: SEQN (the primary key) from both datasets, DMDYRUSZ (a variable indicating length of time living in the United States) from the demographic dataset, and OHXIMP (a variable indicating whether the respondent has a dental implant) from the oral health dataset. Here is the code I used:

keep_demo <- c("SEQN", "DMDYRUSZ")
keep_dent <- c("SEQN", "OHXIMP")

demo_b <- demo_a[keep_demo]

dent_b <- dent_a[keep_dent]

As you can see, to trim off the columns I do not want, I create a vector containing the names of the variables I want to keep. I name each vector after the dataset to which it refers (e.g., keep_dent). Then, I use the vector name in brackets to trim off the unwanted columns and assign the result to a new dataset name. Using the keep_dent vector against the original dataset I imported and named dent_a, I create dent_b. I created demo_b the same way – by keeping just the variables I wanted from the demographic dataset.
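On a toy dataframe (made up for illustration, not the NHANES data), the same bracket-with-a-character-vector pattern looks like this:

```r
# Toy dataframe with one column we want to drop
toy <- data.frame(SEQN = 1:3,
                  KEEPME = c("x", "y", "z"),
                  DROPME = c(9, 8, 7))

keep_toy <- c("SEQN", "KEEPME")

# Brackets with a character vector keep only the named columns, in that order
toy_b <- toy[keep_toy]
names(toy_b)   # "SEQN" "KEEPME"
```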

Joins in Base R: Correct Left Join

Now we will start by correctly joining dent_b onto demo_b using base R to execute a left join. If you left join dent_b onto demo_b, you are telling R to keep records for all the SEQNs in demo_b, and to include the data from any records in dent_b that match. If a SEQN in demo_b has no matching record in dent_b, then the dent_b values in the resulting merged dataset will be NA (empty). But at least the data from demo_b will be in the merged data, and you will be able to tell which records had no matches in dent_b because those variables will be blank.
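Here is a minimal sketch of that NA-filling behavior on made-up toy dataframes (the variable names mimic NHANES, but the values are invented):

```r
# Toy "denominator" dataframe: one record per respondent
demo_toy <- data.frame(SEQN = c(1, 2, 3, 4), DMDYRUSZ = c(10, 20, 30, 40))

# Toy oral health dataframe: only some respondents were examined
dent_toy <- data.frame(SEQN = c(1, 3), OHXIMP = c(1, 2))

# Left join: keep every row of demo_toy, fill NA where dent_toy has no match
merged_toy <- merge(demo_toy, dent_toy, by = "SEQN", all.x = TRUE)
merged_toy
#   SEQN DMDYRUSZ OHXIMP
# 1    1       10      1
# 2    2       20     NA
# 3    3       30      2
# 4    4       40     NA
```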

The reason why we call it a “left join” is that we list the datasets in this order in the merge command: demo_b, then dent_b. See how demo_b is on the left of dent_b in that order? The merge command in R assumes that by putting demo_b first, we are designating it as the “x” dataframe. The “y” dataframe is dent_b, because it comes next.

So, in the next code, we first check the number of rows in demo_b with nrow. Given the governance policies of how the NHANES datasets are related, the business rule is our merged dataframe should have the same number of records as demo_b. If it doesn’t, we did something wrong, so we should check that number first – before the merge.

Next, we create the new merged dataframe as merged_a. In the merge command, to execute a left join onto demo_b, we state demo_b first and dent_b second. We specify the primary key, which is SEQN. Then, to make sure we get a left join, we add the option all.x = TRUE. This tells R to keep all the records in the x dataframe (demo_b), and only the matching records from the y dataframe (dent_b).

merged_a <- merge(demo_b, dent_b, by = c("SEQN"), all.x=TRUE)

So we use merge and all.x to left join dent_b onto demo_b and create merged_a. You will notice I checked the number of rows to ensure the left join worked, and it looks like it did.

> nrow(demo_b)
[1] 15560
> merged_a <- merge(demo_b, dent_b, by = c("SEQN"), all.x=TRUE)
> nrow(merged_a)
[1] 15560

Notice that the demographic table says that the total universe of possible respondents in this dataset is 15,560. But even though all of those records joined (because we forced them), we don’t know if the variables we wanted from dent_b are available for all the records. In fact, we know many will be missing. So we have to run a frequency table on the variable OHXIMP from dent_b that includes NA values.

table(merged_a$OHXIMP, useNA = "always")

   1    2   NA 
 358 9390 5812 

As you can see, the records coded with 1 and 2 represent data joined on from OHXIMP. The 5,812 that didn’t join represent missing data in OHXIMP.
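If you just want the count of non-matching records rather than the full frequency table, summing the NAs works too. Here is a hedged sketch on a small made-up example (not the NHANES data):

```r
# Toy left join: 5 respondents, only 2 with oral health records
merged_toy <- merge(
  data.frame(SEQN = 1:5),
  data.frame(SEQN = c(2, 4), OHXIMP = c(1, 2)),
  by = "SEQN", all.x = TRUE
)

# Count how many left-hand records found no match in the right-hand data
n_missing <- sum(is.na(merged_toy$OHXIMP))
n_missing   # 3
```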

The issue with NHANES (and a lot of other datasets) is that the demographic dataset is the only one that has a full list of respondents. Only a subset of those respondents are also in the oral health dataset. A different subset would be in the smoking questionnaire dataset. So if we don’t always join all these datasets with demographics on the left – with a properly-executed left join – we will lose records.

Now, I will go over a few common mistakes.

Mistake #1: Accidentally Doing a Right Join

Let’s do this join again, but this time, we will change part of the code. We will keep the datasets in the merge command in the same order (demo_b first, followed by dent_b), but this time, instead of setting all.x = TRUE, we will set all.y = TRUE.

merged_a <- merge(demo_b, dent_b, by = c("SEQN"), all.y=TRUE)

By doing this, we are telling R to keep all the SEQNs in dent_b, and only keep the ones from demo_b that matched dent_b. This limits the universe of the retrieved records to only the respondents who underwent the oral health exam, which are in dent_b.

This mistake is apparent when looking at the output on the console:

> nrow(demo_b)
[1] 15560
> merged_a <- merge(demo_b, dent_b, by = c("SEQN"), all.y=TRUE)
> nrow(merged_a)
[1] 13772

Clearly, the dataset has been reduced from 15,560 records to 13,772 – not good. We are losing records – all because we said all.y instead of all.x.

Mistake #2: Putting the Wrong Dataframe First

You can make exactly the same mistake as the one we just did by keeping all.x = TRUE the way it is, and just putting the dataframes in the merge command in the wrong order, with dent_b first and demo_b second. Here is the code with the number of rows from the console.

> nrow(demo_b)
[1] 15560
> merged_a <- merge(dent_b, demo_b, by = c("SEQN"), all.x=TRUE)
> nrow(merged_a)
[1] 13772

As a convention, because the demographic dataset represents the entire universe of SEQNs, it should always be stated first, and serve as the left side of a left join through each merge step. As more and more dataframes are patched on, we should not see the number of rows ever change from that 15,560 number. That’s because NHANES is a cross-sectional dataset, so there should be only a one-to-one relationship between the demographic dataset and the others.
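A duplicated key on either side would silently multiply rows in merge(), so one cheap sanity check before merging is to confirm the key is unique in both dataframes. A minimal sketch on toy data (not the NHANES data):

```r
demo_toy <- data.frame(SEQN = c(1, 2, 3))
dent_toy <- data.frame(SEQN = c(1, 1, 2))   # duplicated key: would fan out rows

# anyDuplicated() returns 0 when all key values are unique
anyDuplicated(demo_toy$SEQN) == 0   # TRUE: safe for a one-to-one join
anyDuplicated(dent_toy$SEQN) == 0   # FALSE: merge would multiply rows
```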

Mistake #3: Accidentally Doing an Inner Join

When you do an inner join, you are telling R to only keep the records in the resulting dataset that are in both datasets (demo_b and dent_b). In our case, that would essentially give us the same results as right joining dent_b onto demo_b. However, that is not always the case – it depends on the business rules of the underlying data.
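To see when an inner join and a right join actually differ, here is a toy sketch (made-up data) where the y dataframe contains a key that the x dataframe lacks: the inner join drops that record, while all.y = TRUE keeps it with NA filled in.

```r
x <- data.frame(SEQN = c(1, 2), a = c("p", "q"))
y <- data.frame(SEQN = c(2, 9), b = c("r", "s"))   # SEQN 9 is not in x

inner_m <- merge(x, y, by = "SEQN")                 # keeps only SEQN 2
right_m <- merge(x, y, by = "SEQN", all.y = TRUE)   # keeps SEQN 2 and 9

nrow(inner_m)  # 1
nrow(right_m)  # 2 (SEQN 9 carries NA in column a)
```

In the NHANES case the two happen to match only because every SEQN in dent_b also appears in demo_b.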

This is why we need to add the all.x = TRUE option – without it, R does an inner join, not a left join. Here is what happens if we leave that option out.

> nrow(demo_b)
[1] 15560
> merged_a <- merge(demo_b, dent_b, by = c("SEQN"))
> nrow(merged_a)
[1] 13772

Again, we see that we lose records, as merged_a only has 13,772 records, not the 15,560 we expect.

Joins in Base R: Be Careful!

Just like when merging datasets in any statistical language, you have to be careful if you want to use base R to join datasets. First, make sure you understand the underlying business rules before you start joining the datasets. Next, make sure you regularly look at the number of rows in your datasets before and after merges to make sure you are not losing data (or having any other join issues). And, if you don’t know R and just want to learn it, try my LinkedIn Learning course, “Descriptive Healthcare Analytics in R”.
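One way to bake the before-and-after row check into your workflow is a small helper function. This is my own hedged sketch (the function name left_join_checked is invented for illustration, not from the original post): it stops with an error if a left join changes the row count.

```r
# Left join y onto x by `key`, failing loudly if rows were gained or lost
left_join_checked <- function(x, y, key) {
  out <- merge(x, y, by = key, all.x = TRUE)
  stopifnot(nrow(out) == nrow(x))   # business rule: row count must not change
  out
}

# Toy usage (made-up data): 4 rows in, 4 rows out
demo_toy <- data.frame(SEQN = 1:4)
dent_toy <- data.frame(SEQN = c(1, 3), OHXIMP = c(1, 2))
merged_toy <- left_join_checked(demo_toy, dent_toy, "SEQN")
```

If dent_toy contained duplicate SEQNs, the merge would fan out extra rows and the stopifnot() would catch it immediately.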


Added video December 4, 2023.
