Benchmarking Runtime is Different in SAS Compared to Other Programs

How do you measure how long it takes for code to run in different programs? And why would you want to measure something like that? The main reason to benchmark runtime is to figure out where you can optimize your code.

Benchmarking runtime in SAS is often done with respect to data import and editing, and it is relatively easy because of a few of SAS’s features. First, SAS keeps an ongoing log that automatically outputs a lot of information about each run, including runtime. Second, where you place a “run” statement tells SAS where each step starts and ends, so the log produces metrics consistent with the placement of your run statements. Third, SAS as a language does not efficiently support objects other than datasets – using other SAS objects, such as arrays and variables, is a somewhat costly and involved process. Therefore, benchmarking runtime in SAS consists mostly of harvesting metrics from the log.

Benchmarking Runtime in Most Other Programs

Benchmarking runtime in other data software (such as R and SQL) requires a different approach, because there is no continuously running log like the one SAS keeps. Instead, benchmarking runtime in other programs generally goes like this (a minimal R sketch follows the list):

  1. Develop the code you want to time, and make sure it runs properly.
  2. Immediately before that code, insert a line that captures the system time in a variable before the code runs.
  3. Immediately after the code you want to time, insert a similar line that captures the system time in a second variable.
  4. Subtract the before time from the after time, and you get the total runtime.
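Here is a minimal sketch of that pattern in R, where my_analysis() is a hypothetical placeholder standing in for whatever code you want to time:

start_time <- Sys.time()     # step 2: capture the system time before
result <- my_analysis()      # step 1: the code you want to time (placeholder)
end_time <- Sys.time()       # step 3: capture the system time after

total_run_time <- end_time - start_time  # step 4: subtract before from after
total_run_time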

Again, in many instances, SAS will literally do this for you, and write the results to the log file for you to harvest.

Benchmarking Runtime for Importing Data

A typical use-case for benchmarking runtime is importing large datasets. To demonstrate this, I took a large dataset – a BRFSS dataset – and converted it to two data formats: R’s native *.rds format (“BRFSS_a.rds”) and *.csv (“BRFSS_a.csv”). Then I imported both of them into the R GUI and benchmarked the runtime of each import. We would expect the *.rds to import much faster than the *.csv, given that *.rds is R’s native format – but we will see for ourselves by calculating the runtime in each situation.

First, I looked at the runtime for importing the *.rds version of the file with this code:

start_read_rds <- Sys.time()
Large_RDS_df <- readRDS(file = "BRFSS_a.rds")
end_read_rds <- Sys.time()

RDSTime <- end_read_rds - start_read_rds
RDSTime

In this code, the variable start_read_rds is set to the system time (Sys.time()) before reading in the dataset, and the variable end_read_rds is set to the system time after reading it in. Then the variable RDSTime is calculated by subtracting the start time from the end time, and is printed to the console. I ran this code three times on my laptop, and each time I got a value of a little over 2 seconds.
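One wrinkle worth knowing: subtracting two Sys.time() values produces a difftime object, and R picks the display units (seconds, minutes, and so on) automatically based on the size of the gap. If you want to be sure you are always comparing seconds to seconds across runs, you can convert explicitly:

# Convert the difftime object to plain seconds for consistent comparisons
as.numeric(RDSTime, units = "secs")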

Now, let’s look at the code where I read in the same dataset in *.csv format on my laptop using R GUI. We expect this to take longer – but how much longer? Here is the code I used.

start_read_csv <- Sys.time()
Large_CSV_df <- read.csv(file = "BRFSS_a.csv")
end_read_csv <- Sys.time()

CSVTime <- end_read_csv - start_read_csv
CSVTime

As you can see in the code, I used the same approach with the system time, setting start_read_csv to the system time before reading in the dataset, and setting end_read_csv to the system time at the end. I used the read.csv command to read in the dataset BRFSS_a.csv and create the dataframe Large_CSV_df. Then I did the subtraction and created CSVTime as the final runtime.

If you want to do this yourself, check out the GitHub code here. I didn't include the big BRFSS datasets because they were too large, but the code should work on any input dataset you have – just edit the filenames in the code.

Note: The readRDS() command does not have a period in its name, and the read.csv() command does – just to drive you crazy.

Visualizing Benchmarking Results

When I ran the *.csv code three times, it took at least 28 seconds each time. As you can tell just from my writing, this is a lot slower than reading in the *.rds. But how do we compare the two?
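One quick numeric comparison – assuming RDSTime and CSVTime from the code above are still in your workspace – is the ratio of the two times:

# How many times slower was the CSV import than the RDS import?
as.numeric(CSVTime, units = "secs") / as.numeric(RDSTime, units = "secs")

With the times I reported above – a little over 2 seconds for the *.rds and at least 28 seconds for the *.csv – that ratio comes out to roughly 14.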

Another way to compare is to make a simple bar chart. For the graphic in this post, I took three runtime measurements right next to each other (same environment, same time) for each format, averaged them together, and made a bar chart comparing the averages.
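If you want to build a chart like that yourself, here is a minimal sketch in base R – the six timing values below are hypothetical placeholders, not my actual measurements:

# Three back-to-back runtime measurements (seconds) per format - placeholders
rds_times <- c(2.1, 2.3, 2.2)
csv_times <- c(28.4, 29.1, 28.8)

# Average each set of runs, then compare the averages in a bar chart
avg_times <- c(RDS = mean(rds_times), CSV = mean(csv_times))
barplot(avg_times,
        ylab = "Average import runtime (seconds)",
        main = "Average Import Runtime by File Format")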

But this is a very simple case. At the US Army, I actually did a test like this (between MS SQL and PC SAS) for different-sized datasets: one with 1 million rows, one with 2 million, one with 4 million, and so on. At each of these levels, I had average runtimes for SQL vs. SAS.

This structure of data lent itself more to a time-series-style display (line graph), with the number of rows in the dataset along the x-axis and the average runtime along the y-axis. In that case, MS SQL clearly outperformed PC SAS, and at the larger dataset sizes, we could see we were wasting on the order of 30 minutes per every few million rows by using PC SAS instead of MS SQL.
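A sketch of that kind of display in base R might look like this – the row counts and runtimes below are made-up illustrative numbers, not the actual results from the Army study:

# Hypothetical average runtimes (minutes) by dataset size - illustrative only
rows_millions <- c(1, 2, 4, 8)
sql_minutes <- c(2, 4, 9, 18)
sas_minutes <- c(5, 12, 30, 65)

# Line graph: rows on the x-axis, average runtime on the y-axis
plot(rows_millions, sas_minutes, type = "b", col = "red",
     xlab = "Rows in dataset (millions)",
     ylab = "Average runtime (minutes)",
     main = "Hypothetical Runtimes: PC SAS vs. MS SQL")
lines(rows_millions, sql_minutes, type = "b", col = "blue")
legend("topleft", legend = c("PC SAS", "MS SQL"),
       col = c("red", "blue"), lty = 1)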

The reason I was doing this study was to advocate for switching from PC SAS to MS SQL. The arguments against switching were 1) my boss was friends with SAS, and 2) SAS was giving us free licenses. The problem was that the licenses were “costing” us on the other end, in consulting fees. My $95/hour SAS consultant was frustrated playing solitaire all day while we waited for millions of rows to import into PC SAS.

So when I did this study and saw the results, I thought my boss would be happy to switch to MS SQL for storage of our data lake, because we could then use our SAS consultants more efficiently. The SAS consultant who was helping me with the benchmark study was happy to use SQL, although she admitted she needed to get good at it, and there would be a learning curve. No problem! That would have been much cheaper than what we were doing.

But alas, the boss was anti-woman. He did not like the idea of women (me and the consultant) making recommendations. He instead ended the data lake project. I later learned this is not unusual when women gain power in tech – that they are not supported professionally, and have to overdefend their plans and requests for resources.

Environment is Important When Benchmarking Runtime

When I worked at the US Army, I came in early one morning and benchmarked the runtime of loading a dataset into my data lake using the desktop in my office. Later in the day, I ran the same code, and it was very bogged down. The time of day really made a difference. I went around asking people at the location why they thought it bogged down during the day. More people were on the network in the middle of the day – but we were the US Army; shouldn’t we have had so much computing power that that didn’t matter?

I talked to some people at SAS and at the Army, and they pointed out that it also mattered which desktop I was using to launch the code (even though the code was executing on our network). If I launched code from the desktop in our conference room, I might get different results than from the desktop in my office. So they strongly recommended that I minimize the variables in the environment when running the code multiple times to measure runtime.

This led me to ask: What is the “correct” environment? Theoretically, the “correct” environment would entail launching the code from the desktops of our data analysts during the day while they are working. I didn’t want to bother them, so I just used my own desktop, but chose to launch the code within their working hours.
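If you want to keep your own benchmarking runs comparable across environments, one approach (my suggestion, not something from the original study) is to log some session metadata alongside each timing, so you can tell apart runs made from different desktops or at different times of day:

# Log which machine, user, and time of day produced each measurement
run_log <- data.frame(
  machine   = Sys.info()[["nodename"]],
  user      = Sys.info()[["user"]],
  timestamp = format(Sys.time()),
  seconds   = as.numeric(CSVTime, units = "secs")
)
run_log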

 

Updated December 13, 2022. Revised banners June 18, 2023.
