Data Literacy for the 21st Century: Evaluating Competencies and Measurement Strategies for Accurate Data Interpretation

How do students in Swiss higher education self-assess their data literacy competencies, and what methodologies can be used to objectively measure them?

AI-generated illustration (created with ChatGPT): university students walking while looking at their phones and analyzing visual data, reflecting one aspect of data literacy in higher education.

Topic

This thesis investigates how students in Swiss higher education assess their own data literacy skills and whether these self-assessments correspond with objectively measured task performance. By applying the Data Literacy Self-Efficacy Scale (DLSES; Kim et al., 2024), the Cognitive Reflection Test (CRT; Frederick, 2005), and the Bullshit Receptivity Scale (BSR; Pennycook et al., 2015), it explores the relationships between metacognitive self-perception, analytical reasoning, and susceptibility to misleading data representations. The study seeks to validate assessment methodologies that are both reliable and contextually relevant for evaluating data literacy in higher education.
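To make the measurement approach concrete, the sketch below shows how responses to the three instruments are typically aggregated into composite scores. It is a minimal illustration assuming hypothetical column names (dlses_*, crt_*, bsr_*) and standard scoring conventions (Likert-item means for the DLSES and BSR, number of correct answers for the CRT); it is not the scoring code used in the thesis.

```python
# Minimal scoring sketch; column names and file name are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the online survey

dlses_items = [c for c in df.columns if c.startswith("dlses_")]
bsr_items = [c for c in df.columns if c.startswith("bsr_")]
crt_key = {"crt_1": 5, "crt_2": 5, "crt_3": 47}  # canonical CRT answers (cents, minutes, days)

df["dlses_score"] = df[dlses_items].mean(axis=1)  # perceived data literacy (self-efficacy)
df["bsr_score"] = df[bsr_items].mean(axis=1)      # receptivity to pseudo-profound statements
df["crt_score"] = sum((df[c] == k).astype(int) for c, k in crt_key.items())  # 0-3 correct
```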

Relevance

In a data-driven society, the ability to interpret data accurately is foundational not only for academic success but also for informed decision-making in professional and civic contexts (Gummer & Mandinach, 2015; OECD, 2021). However, existing research highlights persistent gaps between perceived and actual competence, often attributable to cognitive biases such as the Dunning–Kruger effect (Dunning, 2011; Atir et al., 2015). This thesis contributes to current debates by empirically testing these assumptions and offering a measurement strategy that balances self-efficacy with objective performance, a critical issue for educators and digital skill developers (Mandinach & Gummer, 2013; Kim et al., 2024).

Results

The findings show that self-assessed data literacy was a positive but statistically non-significant predictor of actual task performance (B = 0.058, p = .062, R² = .131), suggesting that students can realistically evaluate their own competence when assessments are well aligned with the tested content. Surprisingly, higher CRT scores were associated with lower performance (B = –0.042, p = .008), challenging assumptions about the role of reflective reasoning in visual data tasks. Neither bullshit receptivity (B = 0.011, p = .637) nor prior data experience (interaction B = 0.019, p = .587) had significant predictive value. These results emphasize the importance of contextual alignment in assessment design (Kim et al., 2024).
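A minimal sketch of how the reported model could be specified follows, assuming hypothetical variable names (performance, dlses_score, crt_score, bsr_score, experience) and that the interaction pairs self-assessment with prior data experience; the thesis's actual analysis may differ in detail.

```python
# Illustrative re-specification of the reported regression (hypothetical column names).
import statsmodels.formula.api as smf

model = smf.ols(
    "performance ~ dlses_score + crt_score + bsr_score + dlses_score:experience",
    data=df,
).fit()
print(model.summary())  # coefficients (B), p-values, and R² analogous to those reported above
```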

Implications for Practitioners

  • Reliable Self-Assessment Tool: The adapted self-assessment scale showed high internal consistency and partially valid predictions of actual data literacy performance, indicating its usefulness for evaluating perceived competence (see the reliability sketch after this list).
  • Effective Task Design: The graph-based performance tasks helped align perceived and actual competence by reducing ambiguity, supporting accurate self-evaluation.
  • Limitations in Experience Measurement: The current measure of prior data experience lacked depth and failed to capture meaningful engagement, limiting its role in explaining cognitive accuracy.
  • Need for Broader Assessment Scope: The tool’s focus on visual data interpretation limits generalizability; future versions should include diverse data formats (e.g., tables, raw data) and integrate ethical reasoning to better reflect real-world data literacy.
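The internal-consistency claim in the first point refers to a reliability coefficient such as Cronbach's alpha. The sketch below computes it from its standard formula for the hypothetical dlses_* items introduced earlier; it illustrates the general technique rather than the thesis's actual analysis.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(cronbach_alpha(df[dlses_items]))  # values around .80 or higher are usually read as high
```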

Methods

The study used a quantitative, cross-sectional design (Creswell, 2009), surveying 190 participants from Swiss higher education institutions (valid sample after data cleaning: N = 123). Data were collected through a joint online survey with shared infrastructure and a common participant pool. Perceived data literacy was measured with the Data Literacy Self-Efficacy Scale (Kim et al., 2024), while actual performance was assessed through four graph-based interpretation tasks. Additional measures included the Cognitive Reflection Test (Frederick, 2005) and the Bullshit Receptivity Scale (Pennycook et al., 2015). Regression models tested the relationship between perceived and actual competence while accounting for education, gender, work experience, and online behavior.
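The covariate-adjusted model described above could be sketched as follows, again with hypothetical column names; education and gender enter as categorical controls, work experience and online behavior as continuous ones. This is an assumption-laden illustration of the design, not the thesis's analysis script.

```python
# Sketch of the full model with demographic and behavioral controls (hypothetical names).
import statsmodels.formula.api as smf

full_model = smf.ols(
    "performance ~ dlses_score + crt_score + bsr_score + dlses_score:experience"
    " + C(education) + C(gender) + work_experience + online_behavior",
    data=df,
).fit()
print(full_model.rsquared, full_model.pvalues)
```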

References

Atir, S., Rosenzweig, E., & Dunning, D. (2015). When knowledge knows no bounds: Self-perceived expertise predicts claims of impossible knowledge. Psychological Science, 26(8), 1295–1303. https://doi.org/10.1177/0956797615588195

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. SAGE Publications, Inc. https://www.ucg.ac.me/skladiste/blog_609332/objava_105202/fajlovi/Creswell.pdf

Dunning, D. (2011). The Dunning–Kruger effect: On being ignorant of one's own ignorance. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 44, pp. 247–296). Academic Press. https://doi.org/10.1016/B978-0-12-385522-0.00005-6

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. https://doi.org/10.1257/089533005775196732

Gummer, E. S., & Mandinach, E. B. (2015). Building a conceptual framework for data literacy. Teachers College Record, 117(4), 1–22. https://doi.org/10.1177/016146811511700401

Kim, J., Hong, L., & Evans, S. (2024). Toward measuring data literacy for higher education: Developing and validating a data literacy self‐efficacy scale. Journal of the Association for Information Science and Technology, 75(8), 916–931. https://doi.org/10.1002/asi.24934

Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42(1), 30–37. https://doi.org/10.3102/0013189x12459803

OECD. (2021). 21st-century readers: Developing literacy skills in a digital world. PISA, OECD Publishing. https://doi.org/10.1787/a83d84cb-en

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563. https://doi.org/10.1017/s1930297500006999