Benchmarking allows an organization to gauge how it is performing relative to similar organizations as it pursues operational excellence. However, benchmarking can be more complex than some might think, because the term refers to a customized set of activities that together form a methodology.

Methodologies range from basic to highly advanced, depending on the goals of the benchmarking program. Furthermore, an organization’s chosen methodology determines the types of questions the collected data can answer, as well as the level of confidence one can have in the validity of those answers. A methodology is simply a plan that concretely outlines what, how and when things are measured, and that describes how the measured information will be analyzed and interpreted. The higher the methodology level, the more complex the questions your benchmarking program can answer.

The first task in creating or revising a benchmarking program is to start with the end in mind: first determine what questions you want your collected data to answer. Once you know this, you can determine the level (e.g., basic, moderate or advanced) of benchmarking methodology (i.e., the plan) you need. This column describes three benchmarking methodology levels and explains the types of questions each level allows your organization to answer.

If this first section seems as clear to you as mud, don’t be alarmed. The rest of this column walks through the three levels of benchmarking complexity. The lowest level (basic) methodology refers to the initial process of gathering data from at least some proportion of a larger population on at least one outcome of interest (e.g., collecting the energy bills, in dollars spent, for a portion of all the hospitals in your region). The analyst then takes those data (e.g., enters each hospital’s dollars spent into an Excel spreadsheet, where each row represents one hospital in the area) and compares all the data points.

Basic benchmarking provides analysts with the average and median energy costs for all members of the sample and also allows them to determine how spread out or skewed the costs are. A measure of skew can point to potential differences within the overall sample but cannot definitively describe how, or why, hospitals might differ.
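As a minimal sketch of this basic level, the snippet below computes the average, median, spread and skew for a set of energy costs. The dollar figures are invented for illustration, not real hospital data:

```python
from statistics import mean, median, pstdev

# Hypothetical annual energy costs (dollars) for a sample of regional
# hospitals -- one value per hospital, i.e., one spreadsheet row each.
energy_costs = [310_000, 275_000, 298_000, 450_000, 262_000, 288_000, 930_000]

avg = mean(energy_costs)
mid = median(energy_costs)
spread = pstdev(energy_costs)  # population standard deviation

# Fisher-Pearson coefficient of skewness: a positive value indicates a
# long right tail (a few hospitals spending far more than the rest).
skew = mean([((x - avg) / spread) ** 3 for x in energy_costs])

print(f"mean:   ${avg:,.0f}")
print(f"median: ${mid:,.0f}")
print(f"skew:   {skew:.2f}")
```

In this invented sample the mean sits well above the median and the skew is positive, which signals that something distinguishes the highest spenders — but basic benchmarking alone cannot say what.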

Results gathered from basic benchmarking are not guaranteed to generalize to all hospitals, particularly if your sample contains far fewer data points than the overall population or if the variables measured do not accurately capture the concept you are comparing.

A more sophisticated form of benchmarking (moderate level) adds a second step, in which those data points (again, the rows of data representing each hospital’s energy cost in dollars) are sliced into smaller chunks that can be compared against one another based on some grouping factor already known to have its own systematic impact on the outcome of interest.

For example, an analyst may create a column in that same Excel spreadsheet that categorizes each hospital (i.e., row) by some categorical indicator of hospital size (e.g., number of beds). The analyst can then carve the data into several smaller analyses, allowing apples-to-apples comparisons among similar organizations only (e.g., comparing dollars spent on energy costs only among hospitals with 25 or fewer beds). However, this type of benchmarking only describes where a specific organization lies relative to the other project participants within that group; the results cannot be used to judge success or failure based on where a particular hospital scores on a key performance indicator (KPI).
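The grouping step above can be sketched in a few lines. The hospital names, bed counts and costs are invented, and the 25-bed cut point simply mirrors the example in the text:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows: (hospital, bed count, annual energy cost in dollars).
rows = [
    ("Hospital A", 20, 262_000),
    ("Hospital B", 24, 288_000),
    ("Hospital C", 25, 275_000),
    ("Hospital D", 120, 930_000),
    ("Hospital E", 180, 1_450_000),
]

def size_group(beds):
    """Categorical indicator of hospital size (assumed cut point)."""
    return "25 or fewer beds" if beds <= 25 else "more than 25 beds"

# Slice the data into smaller groups for apples-to-apples comparison.
groups = defaultdict(list)
for name, beds, cost in rows:
    groups[size_group(beds)].append(cost)

for label, costs in groups.items():
    print(f"{label}: n={len(costs)}, average cost=${mean(costs):,.0f}")
```

Each group can now be summarized on its own, so a 20-bed hospital is compared only against peers of similar size rather than against large medical centers.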

An even more sophisticated form of benchmarking (advanced) involves the creation of thresholds or rules of thumb. Advanced benchmarking requires more sophisticated methodologies that capture large amounts of data over time and include all relevant KPIs and their covariates (i.e., factors known to influence the KPIs, such as bed count or facility condition). Advanced benchmarking includes a final, repeating step, in which experts in the field and other interested stakeholders work together to interpret the findings and use consensus-driven expertise to determine benchmarks — concrete definitions of success or failure for a given KPI based on safety or cost parameters. This kind of benchmarking is considered the “brass ring” of benchmarking.
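Once consensus benchmarks exist, applying them is straightforward: each measured KPI value is translated into a rating against the agreed thresholds. The sketch below assumes a hypothetical KPI (energy cost per square foot) with invented cut points; these are illustration-only figures, not ASHE benchmarks:

```python
# Hypothetical consensus-derived thresholds for a KPI such as energy
# cost per square foot. The dollar cut points are assumptions chosen
# for illustration only.
BENCHMARKS = {
    "success": 3.00,  # at or below $3.00/sq ft
    "watch": 4.50,    # above "success" but at or below $4.50/sq ft
}                     # anything above "watch" needs improvement

def rate_kpi(cost_per_sq_ft):
    """Translate a measured KPI value into a consensus-defined rating."""
    if cost_per_sq_ft <= BENCHMARKS["success"]:
        return "success"
    if cost_per_sq_ft <= BENCHMARKS["watch"]:
        return "watch"
    return "needs improvement"

for hospital, kpi in [("Hospital A", 2.80), ("Hospital D", 5.10)]:
    print(hospital, "->", rate_kpi(kpi))
```

The logic is trivial; the hard, repeating work at the advanced level is the expert consensus that sets and revisits the cut points as new data accumulate.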

Many health care facilities managers think of benchmarking as the creation of benchmarks or rules of thumb, which we have shown is benchmarking at its most advanced level. If this is desired, organizations can take steps to evolve their own benchmarking programs by implementing advanced methodologies. 

The American Society for Health Care Engineering’s (ASHE’s) research group has been tackling benchmarking using an advanced methodology. This column provides ASHE members and others in the health care facilities field with background on what that means.

About this column

“Data Driven Insights” provides a primer on research basics and shares recent innovations and concrete answers by showcasing the collaborative applied research efforts of American Society for Health Care Engineering members, academics, scientists and other professionals working within complementary fields.

Lisa Walt, Ph.D., senior researcher and methodologist, American Society for Health Care Engineering.