Measuring Student Contribution in a Software Engineering Team

Introduction

In software engineering, there is little consensus on how to measure an individual developer’s contribution. Although many measures have been proposed, their usefulness in industry lacks validation, particularly from the perspective of team leaders and managers (Lima et al., 2015). The absence of agreed-upon measures also poses a challenge for educators (Gardner et al., 2003). This post examines student developer contributions within the context of a software engineering project.

ISTE Standard 4.6 calls for ed tech coaches to be data-driven decision-makers who use qualitative and quantitative data to inform their decisions. Standard 4.6b states, “Support educators to interpret qualitative and quantitative data to inform their decisions and support individual student learning.” The techniques discussed in this post could be used to measure student engagement and fulfillment in a team project and give insight into where instruction in a software engineering course might be adjusted.

I will begin by examining the use of chat platforms like Discord to track individual student contributions. Next, I’ll discuss the role of peer evaluations in assessing team member input. Lastly, I’ll introduce repository mining techniques to quantify these contributions.

Live chat Activity

We’ll start with what I consider the least effective of the three approaches. In recent years, many development teams have adopted Discord as a tool for real-time communication and collaboration on software engineering projects. Fundamentally, Discord channels serve as dedicated spaces for text, voice, and video communication. In educational contexts, these channels can be structured to reflect the various teams within a software project, facilitating organized, topic-specific discussions. Such channels can host various activities, from casual interactions and planning sessions to problem-solving discussions and code reviews, closely mirroring a real-world software development environment. Furthermore, Discord captures all of these interactions, creating a comprehensive, searchable archive of every conversation and exchange.

Moreover, thanks to its bot integrations, Discord is increasingly seen as a useful tool for gauging student contributions in team-based projects. Analytical bots like Statbot offer detailed statistics on individual activity on the platform, enabling the assessment of each student’s engagement. Chat histories also preserve the substance of technical discussions, offering evidence of the quality of contributions in software engineering team projects.
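To make this concrete, here is a minimal sketch of the kind of tally a custom bot could produce, written with the discord.py library. The token and channel ID are placeholders, and a hosted bot such as Statbot reports comparable counts without any custom code.

```python
# Minimal discord.py sketch: tally messages per member in one project channel.
# The token and channel ID are placeholders; a hosted bot such as Statbot
# reports comparable statistics without any custom code.
from collections import Counter

import discord

TOKEN = "YOUR_BOT_TOKEN"            # placeholder
PROJECT_CHANNEL_ID = 123456789012   # placeholder

client = discord.Client(intents=discord.Intents.default())

@client.event
async def on_ready():
    channel = await client.fetch_channel(PROJECT_CHANNEL_ID)
    counts = Counter()
    # Walk the full channel history and count messages by author.
    async for message in channel.history(limit=None):
        if not message.author.bot:
            counts[message.author.display_name] += 1
    for name, total in counts.most_common():
        print(f"{name}: {total} messages")
    await client.close()

client.run(TOKEN)
```

Raw message counts like these say nothing about substance, which is exactly why the qualitative checks discussed next still matter.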

However, while bots offer valuable quantitative and analytical insights, it’s important to complement this data with qualitative evaluations. Direct observations, feedback sessions, and individual discussions remain indispensable for grasping the subtleties of each student’s input. It’s also vital to address privacy concerns and uphold ethical standards in monitoring, ensuring clear guidelines and transparency from the instructor’s side.

Peer Evaluations

Gardner et al. (2003) conducted a study exploring the use of group member ratings to gauge relative contributions among students in a software engineering team project course. At the end of the project, students rated each team member’s contributions across four criteria on a five-point scale:

  • Attendance at team meetings.
  • Volunteering for and carrying out tasks.
  • Quality of work performed.
  • Effectiveness in communicating ideas.

The findings suggest that these anonymous peer ratings are a reliable way to rank team members on their contributions. While students often rate themselves higher than their teammates do, the relative ranking of contributions remains consistent, which aligns with previous research (West, 2018, Ch. 16).
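To illustrate how such ratings might be turned into a relative ranking, here is a small sketch. The names, scores, and aggregation method (a simple mean of received ratings with self-ratings dropped) are my own illustration, not Gardner et al.’s procedure.

```python
# Hypothetical peer-rating aggregation: every rater scores every teammate on the
# four criteria using a 1-5 scale, and self-ratings are dropped before averaging.
from statistics import mean

# ratings[rater][ratee] = [attendance, volunteering, quality, communication]
ratings = {
    "Ana": {"Ana": [5, 5, 5, 5], "Ben": [4, 3, 4, 4], "Cai": [5, 5, 4, 5]},
    "Ben": {"Ana": [4, 4, 5, 4], "Ben": [5, 5, 5, 5], "Cai": [5, 4, 4, 5]},
    "Cai": {"Ana": [4, 4, 4, 4], "Ben": [3, 3, 4, 3], "Cai": [5, 5, 5, 4]},
}

def peer_scores(ratings):
    """Mean rating each student received from teammates, self-ratings excluded."""
    return {
        ratee: round(mean(mean(given[ratee])
                          for rater, given in ratings.items()
                          if rater != ratee), 2)
        for ratee in ratings
    }

# Rank teammates by their peers' perception of relative contribution.
for name, score in sorted(peer_scores(ratings).items(), key=lambda kv: -kv[1]):
    print(name, score)
```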

This approach quantifies peer perceptions of engagement and effort. It motivates students to interact and collaborate and allows teams to self-manage contributions. However, limitations exist. Students may not accurately judge true contributions. Dominant personalities could influence ratings. Moreover, if grades hinge directly on these ratings, it might encourage score inflation.

Despite its limitations, peer ratings offer a systematic method to encourage and gauge participation in team projects. They represent the firsthand insights of teammates into individual efforts and team dynamics. Instructors should triangulate peer evaluations with other performance indicators to mitigate potential biases. When applied thoughtfully, group member ratings can be a scalable tool to enhance accountability and ensure equitable effort distribution within student engineering teams.

Using Git Repositories

While subjective peer evaluations are commonly used, analyzing data from git repositories provides an objective lens on individual contributions, revealing insights into aspects like collaboration patterns, subsystem ownership, and consistency of participation (Lima et al., 2015). Instructors can combine these repository-based metrics with subjective evaluations to better assess student effort and engagement.

A fundamental metric, which Lima et al. (2015) call code contribution, is each student’s number of commits over time. This helps reveal whether students contribute regularly throughout the project or make concentrated commits right before deadlines. A student with relatively few commits thinly spread across the weeks likely contributed minimally, while a student with a steady stream of commits each week demonstrates consistent engagement (Glassy, 2006).
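As a rough sketch of this metric, the snippet below reads the repository’s history with the git command line and buckets each author’s commits by ISO week; the weekly bucketing is my own choice for illustrating consistency over time.

```python
# Sketch: weekly commit counts per author, read from the project's git history.
# Run inside the repository; requires only the git command-line tool.
import subprocess
from collections import defaultdict
from datetime import date

log = subprocess.run(
    ["git", "log", "--date=short", "--pretty=format:%an|%ad"],  # author | YYYY-MM-DD
    capture_output=True, text=True, check=True,
).stdout

weekly = defaultdict(lambda: defaultdict(int))
for line in log.splitlines():
    author, day = line.rsplit("|", 1)
    year, week, _ = date.fromisoformat(day).isocalendar()
    weekly[author][(year, week)] += 1

for author, weeks in sorted(weekly.items()):
    total = sum(weeks.values())
    print(f"{author}: {total} commits across {len(weeks)} distinct weeks")
```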

Examining the content of commits also provides insight into contribution quality. Code complexity is widely accepted as a measure of contribution because it reflects the complexity and difficulty of the sub-problem being solved. Complexity measures of this kind were proposed by McCabe (1976) and are still widely used today when examining git repositories: they compare the complexity of the code before and after a team member has altered it. Consistently low commit complexity suggests weaker contributions to the team’s software development effort.
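One lightweight way to approximate this idea is sketched below: it compares a file’s total cyclomatic complexity before and after a commit, using the third-party radon package as a stand-in McCabe-style analyzer. The commit hash and file path are placeholders.

```python
# Sketch: compare a file's total McCabe (cyclomatic) complexity before and after
# a commit, using the third-party radon package (pip install radon). The commit
# hash and file path below are placeholders.
import subprocess
from radon.complexity import cc_visit

def total_complexity(commit, path):
    """Sum the cyclomatic complexity of every function/class in `path` at `commit`."""
    source = subprocess.run(
        ["git", "show", f"{commit}:{path}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(block.complexity for block in cc_visit(source))

before = total_complexity("abc1234~1", "src/scheduler.py")  # parent of the commit
after = total_complexity("abc1234", "src/scheduler.py")     # the student's commit
print(f"complexity before: {before}, after: {after}, delta: {after - before}")
```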

A related set of metrics is the bug-related measures, which capture each developer’s contribution to introducing and fixing bugs. This measure has limitations, however, because some bug fixes do not require writing code, so the developer’s effort is not reflected in the repository (Lima et al., 2015). Beyond individual metrics, more advanced repository analysis can reveal collaboration patterns within student teams. Poncin et al. (2011) introduced FRASR and ProM, which extract event logs from student repository data (using FRASR) and then analyze the development process (with ProM). This approach also incorporates developer roles and adherence to particular development models.
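As a very rough approximation of a bug-fixing measure (and nothing like FRASR/ProM’s process mining), one could classify commits by keywords in their messages. The keyword list below is my own, and a real bug-related metric would link commits to an issue tracker instead.

```python
# Sketch: rough bug-fix counts per author, inferred from commit message keywords.
# This is a crude stand-in for linking commits to an issue tracker.
import re
import subprocess
from collections import Counter

BUG_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|hotfix)\b", re.IGNORECASE)

log = subprocess.run(
    ["git", "log", "--pretty=format:%an|%s"],  # author name | commit subject line
    capture_output=True, text=True, check=True,
).stdout

bug_fixes = Counter()
for line in log.splitlines():
    author, subject = line.split("|", 1)
    if BUG_PATTERN.search(subject):
        bug_fixes[author] += 1

for author, count in bug_fixes.most_common():
    print(f"{author}: {count} bug-fix commits")
```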

Of course, relying solely on git metrics has limitations. First, commits mainly represent coding contributions, overlooking other forms of participation such as verbal collaboration and project leadership (Lima et al., 2015). Second, students can artificially inflate their repository activity (for example, through many trivial commits) if they know how the metrics are computed. Despite these drawbacks, analyzing git data provides valuable insight into individual participation on student software teams. Instructors should interpret repository metrics not as absolute measures of contribution but as starting points for further investigation.

Conclusion

By balancing quantitative git data with qualitative peer evaluations, product assessments, and student interviews, instructors can arrive at a more equitable evaluation of individuals. Nonetheless, there is a strong correlation between subjective and objective measures of contribution to a project (Hundhausen et al., 2022). Software engineering courses require team projects, but assessing individual accountability remains vital. Combining subjective reviews with objective repository analysis helps reveal a more accurate picture of each student’s contributions and commitment.
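As a closing sketch of such triangulation, the snippet below checks how well peer ratings and commit counts agree for one hypothetical team; a weak correlation would be a prompt for follow-up conversations rather than an automatic grade adjustment.

```python
# Sketch: agreement between peer ratings and commit counts for one (made-up) team.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

peer_rating = {"Ana": 4.6, "Ben": 3.4, "Cai": 4.1, "Dee": 2.9}   # hypothetical
commit_count = {"Ana": 58, "Ben": 21, "Cai": 44, "Dee": 12}      # hypothetical

students = sorted(peer_rating)
r = correlation([peer_rating[s] for s in students],
                [commit_count[s] for s in students])
print(f"Pearson r between peer ratings and commit counts: {r:.2f}")
# A weak correlation is a prompt for follow-up interviews, not an automatic
# penalty for any individual student.
```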

References

Lima, J., Treude, C., Figueira Filho, F., & Kulesza, U. (2015). Assessing developer contribution with repository mining-based metrics. 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME). https://doi.org/10.1109/icsm.2015.7332509

Gardner, W. (2003). Assessing individual contributions to group software projects. In 8th Western Canadian Conference on Computing Education (WCCCE’03) (pp. 33–50).

Hundhausen, C. D., Conrad, P. T., Carter, A. S., & Adesope, O. (2022). Assessing individual contributions to software engineering projects: A replication study. Computer Science Education, 32(3), 335–354. https://doi.org/10.1080/08993408.2022.2071543

West, R. E. (2018). Foundations of Learning and Instructional Design Technology. https://doi.org/10.59668/3

Glassy, L. (2006). Using version control to observe student software development processes. Journal of Computing Sciences in Colleges, 21(3), 99–106.

McCabe, T. J. (1976). A complexity measure. IEEE Transactions on Software Engineering, SE-2(4), 308–320. https://doi.org/10.1109/tse.1976.233837

Poncin, W., Serebrenik, A., & van den Brand, M. (2011). Mining student capstone projects with FRASR and ProM. https://doi.org/10.1145/2048147.2048181
