Models to Measure Students’ Learning in Computer Science

As computer science becomes integrated into K-12 education systems worldwide, educators and researchers continue to search for effective methods to measure and understand students’ learning in this field. The challenge lies in developing reliable, comprehensive assessment models that accurately gauge student learning. Teachers must assess learning to better support students’ educational needs, and students and parents expect schools to document students’ proficiency in computing and their ability to apply it in practice. Yet unlike in conventional subjects such as math and science, few relevant assessments are available for K-12 CS education. This article explores specific models used to measure knowledge in various CS contexts and then examines several examples of student learning indicators in computer science.

Randomized Controlled Trials and Measurement Techniques

One innovative approach to measuring student performance in computer science education comes from research on teaching parallel programming. Daleiden et al. (2020) focus on assessing how well students understand and apply parallel programming concepts.

The Token Accuracy Map (TAM) technique supplements traditional empirical analysis methods, such as timings, error counting, or compiler errors, which often lack the depth to explain the cause of errors or to pinpoint the specific problem areas students encounter. The study applied TAM to examine student performance across two parallel programming paradigms, threads and process-oriented programming based on Communicating Sequential Processes (CSP), measuring programming accuracy through an automated process.

The TAM approach analyzes the accuracy of student-submitted code by comparing it against a reference solution using a token-based comparison. Each element of the code, or “token,” is compared to determine its correctness, and the results are aggregated into an overall accuracy score ranging from 0% to 100%. This score reflects the percentage of correct tokens, allowing a detailed examination of which elements of a programming paradigm students grasp intuitively and which they are more likely to implement incorrectly.
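
To make the mechanics concrete, here is a minimal sketch of token-based accuracy scoring in the spirit of TAM. The regex tokenizer and the use of difflib’s longest-matching-block alignment are simplifying assumptions for illustration; the published technique’s actual tokenization and alignment rules are more involved.

```python
# Sketch of token-based accuracy scoring in the spirit of TAM.
# Assumptions: a naive regex tokenizer and difflib alignment stand in
# for the technique's real tokenization and matching rules.
import re
from difflib import SequenceMatcher

def tokenize(source: str) -> list[str]:
    """Split source code into identifiers, numbers, and symbol characters."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)

def token_accuracy(student: str, reference: str) -> float:
    """Percentage of reference tokens matched, in order, by the student's code."""
    ref_tokens = tokenize(reference)
    stu_tokens = tokenize(student)
    matcher = SequenceMatcher(None, ref_tokens, stu_tokens)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * matched / len(ref_tokens)

reference = "for i in range(n): total += values[i]"
student   = "for i in range(n): total = values[i]"   # '+=' mistyped as '='
print(f"accuracy: {token_accuracy(student, reference):.1f}%")
```

Because the score is computed per token rather than per program, a single slip (like the mistyped operator above) lowers accuracy only slightly instead of marking the whole submission wrong.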

This approach goes beyond simple error counts, offering insight into students’ mistakes at a granular level. Such detail enables researchers and educators to identify specific programming concepts that require further clarification or alternative teaching approaches. TAM can also highlight the strengths and weaknesses of different programming paradigms from a learning perspective, thereby guiding curriculum development and instructional design.

Competence Structure Models in Informatics

Magenheim et al. (2015) set out to develop a competence structure model for informatics with a focus on system comprehension and object-oriented modelling. The model, part of the MoKoM project (Modeling and Measurement of Competences in Computer Science Education), is intended to be both theoretically sound and empirically validated. The project’s goals include identifying essential competencies in the field, organizing them into a coherent framework, and devising assessments to measure them accurately. The study employed Item Response Theory (IRT) to construct the test instrument and analyze survey data.
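
As a rough illustration of the IRT machinery involved, the sketch below fits a Rasch (one-parameter logistic) model, the simplest IRT model, to a toy response matrix via joint maximum likelihood. The data and the estimation shortcut are assumptions made for illustration, not the MoKoM instrument or its analysis pipeline.

```python
# Sketch of Rasch (1PL) IRT estimation on invented data.
# Each student has an ability theta, each item a difficulty b;
# P(correct) = logistic(theta - b).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, responses):
    """Joint NLL: abilities (theta) for rows, difficulties (b) for columns."""
    n_students, n_items = responses.shape
    theta, b = params[:n_students], params[n_students:]
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # keep the logs finite
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Toy data: 4 students x 3 items, 1 = answered correctly.
responses = np.array([[1, 1, 0],
                      [1, 0, 0],
                      [0, 1, 1],
                      [1, 0, 1]], dtype=float)
x0 = np.zeros(responses.shape[0] + responses.shape[1])
fit = minimize(neg_log_likelihood, x0, args=(responses,))
b_hat = fit.x[responses.shape[0]:]
# The scale is identified only up to a shift, so center the difficulties.
print("item difficulties:", b_hat - b_hat.mean())
```

The payoff of an IRT fit is that item difficulties and student abilities land on a common scale, which is what lets a project like MoKoM claim its competence levels are empirically comparable across test items.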

The competence model was initially grounded in theoretical concepts from international syllabi and curricula, such as the ACM’s “Model Curriculum for K-12 Computer Science,” and in expert papers on software development. The framework encompasses cognitive and non-cognitive skills pertinent to computer science, with particular emphasis on system comprehension and object-oriented modelling.

The study also included expert interviews conducted with the Critical Incident Technique to validate the model’s applicability to real-world scenarios and its empirical accuracy. This method was instrumental in pinpointing and defining the critical competencies needed to apply and understand informatics systems, and it provided detailed insight into student learning in informatics, identifying specific strengths and areas for improvement.

Limitations

The limitation of these approaches is their specificity, which may hinder scaling to broader contexts or different courses. Nonetheless, the findings indicate that detailed, granular measurements can offer valuable insight into the nature and types of students’ errors and uncover learning gaps. The approaches described next take a more general strategy for assessing learning in computer science.

Evidence-Centered Design for High School Introductory CS Courses

Another method for evaluating student learning in computer science is Evidence-Centered Design (ECD). Newton et al. (2021) demonstrate the application of ECD to develop assessments aligned with the curriculum of introductory high school computer science courses. ECD begins with a clear definition of the knowledge, skills, and abilities students are expected to gain from their coursework, then creates assessments that directly evaluate those outcomes.

The approach entails specifying the domain-specific tasks students should be able to perform, identifying the evidence that would indicate proficiency, and designing assessment tasks that generate such evidence. The model further includes an analysis of assessment items for each instructional unit, considering their difficulty, discrimination index, and item type (e.g., multiple-choice or open-ended). This analysis helps refine the assessments so they gauge student competencies and understanding more accurately.
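
The sketch below computes the two classical item statistics mentioned above, difficulty as the proportion of students answering correctly and discrimination as the corrected item-total correlation, on an invented response matrix. It illustrates the kind of item analysis described, not the study’s actual procedure.

```python
# Sketch of classical item analysis on invented data.
# difficulty  = proportion of students answering the item correctly
# discrimination = correlation between the item and the rest of the test
import numpy as np

responses = np.array([  # rows = students, columns = items; 1 = correct
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
], dtype=float)

difficulty = responses.mean(axis=0)  # higher value = easier item

totals = responses.sum(axis=1)
discrimination = np.array([
    # correlate each item with the total score *excluding* that item
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (d, r) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1}: difficulty={d:.2f}, discrimination={r:.2f}")
```

An item that nearly everyone answers correctly (difficulty near 1.0) or that correlates weakly with the rest of the test (discrimination near 0) tells the designers little about student competence, which is exactly what this analysis is meant to flag.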

This model offers a more precise measurement of student learning by ensuring that assessments are closely linked to curriculum objectives and learning outcomes.

Other General Student Indicators

The Exploring Computer Science website, a widely used resource for research on indicators of student learning in computer science, identifies several key metrics for understanding concepts within the field:

  • Student-Reported Increase in Knowledge of CS Concepts: Students are asked to self-assess their knowledge in problem-solving techniques, design, programming, data analysis, and robotics, rating their understanding before and after instruction.
  • Persistent Motivation in Computer Problem Solving: This self-reported measure uses a 5-point Likert scale to evaluate students’ determination to tackle computer science problems. Questions include, “Once I start working on a computer science problem or assignment, I find it hard to stop,” and “When a computer science problem arises that I can’t solve immediately, I stick with it until I find a solution.”
  • Student Engagement: This metric again relies on self-reporting to gauge a student’s interest in further pursuing computer science in their studies. It assesses enthusiasm and inclination towards the subject.
  • Use of CS Vocabulary: Through pre- and post-course surveys, students respond to the prompt: “What might it mean to think like a Computer Scientist?” Responses are analyzed for the use of computer science-related keywords such as “analyze,” “problem-solving,” and “programming.” A positive correlation was found between CS vocabulary use and self-reported CS knowledge levels. A sketch of this kind of keyword tagging follows the list.
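
As a rough sketch of such keyword tagging, the snippet below counts CS-related terms in free-text survey responses. The keyword list and the sample responses are invented for illustration and are not the Exploring Computer Science rubric.

```python
# Sketch of tagging CS vocabulary in survey responses (invented rubric).
import re
from collections import Counter

CS_KEYWORDS = {"analyze", "problem-solving", "programming",
               "algorithm", "data", "debug", "logic"}

def cs_vocabulary_count(response: str) -> Counter:
    """Count occurrences of CS-related keywords in one response."""
    words = re.findall(r"[a-z]+(?:-[a-z]+)?", response.lower())
    return Counter(w for w in words if w in CS_KEYWORDS)

pre = "Maybe they use computers a lot."
post = ("Thinking like a computer scientist means using problem-solving "
        "and logic to analyze a task before programming a solution.")
print("pre-course keywords:", sum(cs_vocabulary_count(pre).values()))
print("post-course keywords:", sum(cs_vocabulary_count(post).values()))
```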

Comparing the Models

Each model discussed provides distinct benefits, but all converge on a shared objective: to gauge students’ understanding of computer science precisely. The Evidence-Centered Design (ECD) model is notable for its methodical alignment of assessments with educational objectives, guaranteeing that evaluations reflect the intended learning outcomes. The randomized controlled trial, paired with the TAM measurement technique, offers a solid approach for empirically assessing the impact of instructional strategies on student learning. Finally, the competence structure model offers an exhaustive framework for identifying and evaluating specific competencies within a particular field, such as informatics, ensuring a thorough understanding of student abilities. As the field continues to evolve, so will our methods for measuring student success.

References

Daleiden, P., Stefik, A., Uesbeck, P. M., & Pedersen, J. (2020). Analysis of a randomized controlled trial of student performance in parallel programming using a new measurement technique. ACM Transactions on Computing Education, 20(3), 1–28. https://doi.org/10.1145/3401892

Magenheim, J., Schubert, S., & Schaper, N. (2015). Modelling and measurement of competencies in computer science education. KEYCIT 2014: Key competencies in informatics and ICT, 7(1), 33–57.

Newton, S., Alemdar, M., Rutstein, D., Edwards, D., Helms, M., Hernandez, D., & Usselman, M. (2021). Utilizing evidence-centered design to develop assessments: A high school introductory computer science course. Frontiers in Education, 6. https://doi.org/10.3389/feduc.2021.695376
