Methods
Study Design
We conducted a retrospective cohort study using hospital discharge records from Massachusetts and four comparison states (New York, North Carolina, Nebraska, and Washington) for patients admitted from January 2003 to November 2009. Massachusetts differs from other states in several respects. It is densely populated, with several urban areas as well as rural areas. Compared with the national average, its population is more highly educated, has a higher median household income, and includes fewer racial and ethnic minorities (Tables S1 and S2, Supplemental Digital Content 1, http://links.lww.com/CCM/A768). With this in mind, we selected comparison states that overlap Massachusetts in population demographics while also being broadly representative of the United States as a whole. These states also provide the discharge records necessary to identify ICU admissions through the State Inpatient Databases (SID), a data clearinghouse maintained as part of the Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project. The SID contain uniformly reported hospital discharge data that include demographics, diagnosis, procedure, and revenue codes, among other variables, for virtually all hospital discharges from participating states. We linked the SID datasets to population estimates for each three-digit zip code, the smallest geographic level available in the Massachusetts SID, based on U.S. Census data (Geolytics, Somerville, NJ). We also linked discharges to hospital characteristics from the American Hospital Association Annual Survey and the Healthcare Cost Information System, including provider type (e.g., government vs private), service type (e.g., acute vs long-term care), total hospital beds, and total critical care beds.
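For illustration only, these linkage steps might be sketched in Stata roughly as follows; the file and variable names (sid_discharges, census_zip3, aha_survey, zip3, ahaid) are hypothetical placeholders rather than the actual data structures.

    * Hypothetical sketch of the dataset linkage; file and variable names are placeholders
    use sid_discharges, clear
    merge m:1 zip3 year using census_zip3, keep(master match) nogenerate
    merge m:1 ahaid year using aha_survey, keep(master match) nogenerate
    save linked_discharges, replace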
Patient Selection
To capture the patient populations potentially eligible under the insurance reform policy in Massachusetts, we limited the analysis to nonpregnant, nonelderly adults (age 18–64 yr) admitted to nonfederal acute care hospitals in one of the five states of interest. We excluded patients who were not residents of the respective state at the time of admission.
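A minimal sketch of these restrictions in Stata, assuming hypothetical indicator variables for pregnancy, federal hospital status, and state of residence, is:

    * Hypothetical cohort restrictions; variable names are placeholders
    keep if inrange(age, 18, 64)
    drop if pregnant == 1
    drop if federal_hospital == 1
    keep if resident_state == hospital_state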
Variable Definitions
The initial health insurance reform legislation in Massachusetts went into effect in July 2006; however, residents were not mandated to purchase health insurance until July 1, 2007. We therefore defined the prereform period as January 2003 through June 2007 and the postreform period as July 2007 through November 2009. Our study period thus includes 54 months before and 29 months after the Massachusetts individual mandate.
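Assuming admission month is stored as a Stata monthly date in a hypothetical variable admit_month, the period indicator could be defined as:

    * Prereform: January 2003-June 2007 (54 mo); postreform: July 2007-November 2009 (29 mo)
    generate byte post = (admit_month >= tm(2007m7))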
The primary outcomes of interest were population-based and hospital-based ICU use. We calculated population-based hospital and ICU admission rates, defined as the number of respective admissions per 10,000 population in each three-digit zip code per month. We defined the hospital-based ICU admission rate as the proportion of hospitalizations that included an ICU admission. ICU admission was identified from billing claims as previously described. Secondary outcomes were in-hospital mortality among ICU patients and, for survivors, discharge disposition, categorized as home with or without home health services, skilled nursing or intermediate care, rehabilitation, hospice, or other.
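As an illustrative sketch with hypothetical variable and file names, the population-based rates could be constructed in Stata as:

    * Count admissions by zip3 and month, then express per 10,000 residents
    collapse (sum) hosp_adm icu_adm, by(zip3 admit_month)
    merge m:1 zip3 using zip3_population, keep(match) nogenerate
    generate hosp_rate = hosp_adm / pop * 10000
    generate icu_rate  = icu_adm  / pop * 10000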
Statistical Analysis
We aimed to isolate the effect of Massachusetts's health insurance reform on each of our outcomes. In comparing population and patient demographic and clinical characteristics, we used chi-square tests for proportions, t tests for means, and analyses of variance for continuous variables. To examine unadjusted and adjusted results for our primary and secondary outcomes over time in Massachusetts versus the control states, we fit a series of regression models using a difference-in-differences approach, which attempts to adjust for secular trends that can bias estimated effects (Table S3, Supplemental Digital Content 1, http://links.lww.com/CCM/A768). All models employed generalized estimating equations (GEE) with robust variance estimates to account for clustering, by three-digit zip code for the population-based outcomes and by hospital for the hospital-based outcomes of ICU admission, in-hospital mortality, and discharge disposition among survivors.
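As a sketch of the difference-in-differences setup, with hypothetical variable names (mass indicating Massachusetts, post indicating the postreform period, and icu_rate as one population-based outcome) and a placeholder family, link, and covariate list rather than the exact specification, the GEE model clustered on three-digit zip code could take the form:

    * Difference-in-differences with GEE; the mass#post coefficient is the estimate of interest
    xtset zip3
    xtgee icu_rate i.mass##i.post pct_nonwhite pct_male mean_age median_income pct_homeowner, ///
        family(gaussian) link(identity) corr(exchangeable) vce(robust)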
In models examining population-based admission rates, we adjusted for demographic characteristics at the three-digit zip code level, including race, gender, age, median income, and home ownership. In models examining the hospital-based ICU admission rate, in-hospital mortality, and discharge disposition, we adjusted for multiple patient-level demographic and clinical covariates that might act as confounders. To display adjusted trends, we calculated predictions for each year from a separate model that included state, year, and their interaction as categorical variables and plotted these predictions over time.
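For the adjusted yearly predictions, a hypothetical sketch for a hospital-based outcome, using a state-by-year interaction with the margins and marginsplot commands and placeholder patient-level covariates, is:

    * Hospital-based outcome clustered by hospital; adjusted predictions by state and year
    xtset hospital_id
    xtgee icu_admit i.state##i.year age i.female i.race i.payer, ///
        family(binomial) link(logit) corr(exchangeable) vce(robust)
    margins state#year
    marginsplot, xdimension(year)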
Sensitivity Analyses
We performed several sensitivity analyses to determine the robustness of our findings to different cohort and model specifications. To examine whether hospital closures over the study period might have accounted for our results, we repeated our analyses after limiting the cohort to hospitals open during the entire study period. Because we hypothesized that safety-net hospitals would experience the greatest increase in patients gaining health insurance after reform, we also repeated our analyses after limiting the cohort to hospitals in the top quartile of prereform Medicaid admissions. In 2001, New York State expanded Medicaid eligibility to childless adults with incomes up to 100% of the federal poverty level and to parents with incomes up to 150% of the federal poverty level. Given recent evidence of decreased population-level mortality following this expansion, we repeated our analyses excluding New York from the comparison states, hypothesizing that its Medicaid expansion, although occurring before the Massachusetts reform, might attenuate differences in our outcomes of interest. Finally, we varied the modeling approach for in-hospital mortality by refitting the models as a random-effects (RE) linear probability model and as logistic regression, separately accounting for clustering within hospitals using either GEE or RE.
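These alternative mortality specifications could be sketched in Stata as follows; the covariate list and variable names are hypothetical placeholders:

    * Alternative in-hospital mortality models (covariates are placeholders)
    local covars "age i.female i.race i.payer elix_count"
    xtset hospital_id
    xtreg   died i.mass##i.post `covars', re vce(robust)       // RE linear probability
    xtlogit died i.mass##i.post `covars', re                   // RE logistic
    xtgee   died i.mass##i.post `covars', family(binomial) link(logit) corr(exchangeable) vce(robust)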
Statistical analyses were performed with SAS 9.2 (SAS Institute, Cary, NC) and Stata 12.0 (StataCorp, College Station, TX). All tests were two-tailed, and a p value of less than 0.05 was considered significant. This project used de-identified data and was exempted from human subjects review by the Institutional Review Boards of the University of Pennsylvania and the University of Michigan.