Opacity in Data Reporting: A Look at Cardus (2011, 2012)

This post summarizes our research review, which provides a critical analysis of Pennings et al. (2011, 2012). Click HERE to read a more in-depth version of the arguments presented.

In 2011 and 2012, the Canadian Christian think tank Cardus published reports on their study of adult graduates of Christian private schools in North America. (The 2011 publication focused on schools in the United States and the 2012 publication focused on Canada.) Though the authors of the reports, Pennings and his team of researchers, did not set out to analyze homeschooling, sound research practice required them to collect some incidental data on homeschool graduates as well.

The Cardus publications relied on random samples of homeschool graduates whose survey responses were weighted based on the number of respondents and then weighted again for a variety of demographic factors, so that the sample's makeup matched that of the broader population. As such, the Cardus survey is one of the few studies of a representative sample of homeschool graduates, and one of the few whose results can be applied to the larger population of all homeschoolers.
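For readers unfamiliar with this technique, the sketch below shows how demographic weighting works in general. It is a minimal illustration, not Cardus's actual procedure: the demographic category (region), the population shares, and all of the numbers are invented for demonstration.

```python
import pandas as pd

# Illustrative sketch of demographic (post-stratification) weighting.
# Each respondent's weight is the ratio of their group's share in the
# target population to its share in the sample. All numbers are made up.
sample = pd.DataFrame({
    "respondent": range(1, 9),
    "region":     ["urban"] * 6 + ["rural"] * 2,  # sample: 75% urban, 25% rural
    "children":   [2, 1, 3, 2, 0, 1, 4, 3],
})

# Suppose the target population is 60% urban and 40% rural.
population_share = {"urban": 0.60, "rural": 0.40}
sample_share = sample["region"].value_counts(normalize=True)

# Weight = population share / sample share for each respondent's group.
sample["weight"] = sample["region"].map(
    lambda r: population_share[r] / sample_share[r]
)

# The weighted mean now reflects the population's demographic mix.
weighted_mean = (sample["children"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"Unweighted mean children: {sample['children'].mean():.2f}")
print(f"Weighted mean children:   {weighted_mean:.2f}")
```

In a survey like Cardus's, the same idea is applied across several demographic factors at once: overrepresented groups count for less, underrepresented groups count for more, and the weighted results approximate what a fully representative sample would have shown.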

The major findings of the study relate only to religious homeschoolers (defined in the study as homeschoolers whose mothers frequently attended religious services) in the US and Canada. The researchers found that homeschool graduates were less academically prepared for college and completed less higher education than public school graduates; that they had a strict and legalistic moral outlook; and that they reported more feelings of helplessness and a lack of clarity about their life goals. In addition, American religious homeschool graduates reported more divorces and fewer children than public school graduates, as well as a lack of interest in politics and charitable giving; however, these characteristics were not shared by Canadian respondents.

Though the study had no significant methodological flaws, its lack of focus on homeschooling limits the conclusions it allows us to draw. Because homeschooling was not the focus, for example, the homeschool sample was small, which limited the number of statistically significant results. The researchers did not define ‘homeschooling’ or distinguish between different types (for example, umbrella schools, correspondence programs, etc.); nor did they account for differences in the number of years children spent being homeschooled. The study was limited to religious homeschoolers and defined them by their mothers’ attendance at religious services, which may not be the most precise definition. Finally, the homeschool graduates who were surveyed were mostly in their late 20s, which may not have provided a complete picture of their lifetime outcomes.

If the study itself was more or less sound, the write-up was less so. For some reason, Pennings et al. chose not to report their statistical data in a meaningful way. The dozens of graphs they include in their publications lack both units and scales on their y-axes. This makes it impossible to translate data from a graph into a statement like “Group X had 3.4 more children than Group Y.” At best, we can only say that “Group X had more children than Group Y.” Furthermore, though Pennings et al. describe performing significance testing and state that the p-values are represented on the graphs, this does not appear to be the case.
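To make concrete what is missing, here is a minimal sketch of the kind of chart that would support quantitative statements: a y-axis labeled in units with an explicit scale, and the significance test's p-value displayed alongside the comparison. Every number in it is hypothetical, precisely because the Cardus graphs do not let us recover the real ones.

```python
import matplotlib.pyplot as plt

# A hypothetical example of transparent reporting: units, scale, and
# p-value are all stated on the chart. None of these numbers come from Cardus.
groups = ["Homeschool graduates", "Public school graduates"]
mean_children = [1.8, 2.1]   # hypothetical group means
p_value = 0.03               # hypothetical significance test result

fig, ax = plt.subplots()
ax.bar(groups, mean_children)
ax.set_ylabel("Mean number of children")   # y-axis labeled in units
ax.set_ylim(0, 3)                          # explicit scale
ax.set_title(f"Hypothetical comparison (p = {p_value:.2f})")

plt.tight_layout()
plt.show()
```

With a chart like this, a reader could say “Group Y had 0.3 more children on average than Group X (p = 0.03)”; with an unlabeled axis, only the direction of the difference survives.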

For these reasons it is difficult to draw any direct conclusions from this study. The soundness of the methodology makes some of its findings suggestive of larger trends, but the study’s lack of focus on homeschoolers and its opaque data reporting hinder its explanatory power. The authors apparently plan to follow up with a study that more directly targets homeschoolers; hopefully that future study will illuminate some of the murkier aspects of Cardus (2011, 2012).
