Time: 16:15
Location: Room Ofek, Polo scientifico e tecnologico "Fabio Ferrari", Building Povo 1, Via Sommarive 5, Povo (Trento)
- Dr. Michael Mogessie Ashenafi
Abstract of Dissertation
Student performance is commonly measured using summative assessment methods such as midterm and final exams, as well as high-stakes testing. Although less common, other methods of gauging student performance exist. Formative assessment is a continuous, student-oriented form of assessment that focuses on helping students improve their performance through ongoing engagement and regular measurement of progress.
One assessment practice that has been used in this manner for decades is peer-assessment, in which students evaluate the work of their peers. Peer-assessment is practiced at various levels of education; the research discussed here was conducted in a higher education setting.
Despite its cross-domain adoption and longevity, peer-assessment has been difficult to apply in courses with large numbers of students. This stems directly from its traditional classroom use, where assessment is usually carried out with pen and paper. In courses with hundreds of students, such manual forms of peer-assessment would require a significant amount of time to complete and would add considerably to both student and instructor workload.
Automated peer-assessment, on the other hand, can reduce, if not eliminate, many of these issues of efficiency and effectiveness. Moreover, its potential to scale easily makes it a promising platform for conducting large-scale experiments or replicating existing ones.
The goal of this thesis is to examine how the potential of automated peer-assessment may be exploited to improve student engagement and to demonstrate how a well-designed peer-assessment methodology may help teachers identify at-risk students in a timely manner.
A methodology is developed to demonstrate how online peer-assessment may elicit continuous student engagement. Data collected from a web-based implementation of this methodology are then used to construct several models that predict student performance and monitor progress, highlighting the role of peer-assessment as a tool for early intervention.
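The abstract does not specify which predictive models were used, so the following is only a minimal sketch of the general idea: fitting a logistic-regression classifier on peer-assessment activity features to flag potentially at-risk students. The feature names and the synthetic data are assumptions for illustration, not the thesis's actual feature set or results.

```python
import numpy as np

# Hypothetical per-student features (illustrative only): average peer
# grade received, number of peer reviews completed, average rating of
# the student's submitted questions.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
# Synthetic "at-risk" label, loosely tied to low peer-assessment scores.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) < 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)  # mean log-loss gradient step
    return w

w = fit_logistic(X, y)
Xb = np.hstack([np.ones((len(X), 1)), X])
pred = sigmoid(Xb @ w) > 0.5          # flag students as at-risk or not
accuracy = (pred == y.astype(bool)).mean()
```

In a real deployment the labels would come from actual course outcomes, and the model would be retrained as new peer-assessment rounds accumulate, which is what makes the approach usable for timely intervention.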
The construction of open datasets from online peer-assessment data gathered from five undergraduate computer science courses is discussed.
Finally, the promising role of online peer-assessment in measuring student proficiency and test item difficulty is demonstrated by applying a generic Item Response Theory model to the peer-assessment data.
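The abstract does not name the specific IRT model used, so as a sketch of the idea, the example below fits the Rasch model, the simplest IRT model: the probability that student i answers item j correctly is sigmoid(theta_i - b_j), where theta is proficiency and b is item difficulty. The synthetic response matrix and the joint maximum-likelihood fit are illustrative assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, n_items = 100, 20
theta_true = rng.normal(size=n_students)   # latent proficiencies
b_true = rng.normal(size=n_items)          # latent item difficulties

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulate a binary response matrix (1 = student answered item correctly).
p = sigmoid(theta_true[:, None] - b_true[None, :])
responses = (rng.random((n_students, n_items)) < p).astype(float)

def fit_rasch(R, lr=0.05, steps=3000):
    """Joint maximum-likelihood Rasch fit by gradient ascent."""
    theta = np.zeros(R.shape[0])
    b = np.zeros(R.shape[1])
    for _ in range(steps):
        resid = R - sigmoid(theta[:, None] - b[None, :])
        theta += lr * resid.sum(axis=1) / R.shape[1]
        b -= lr * resid.sum(axis=0) / R.shape[0]
        # The model is invariant under a joint shift of theta and b;
        # anchor the scale by centering difficulties at zero.
        c = b.mean()
        b -= c
        theta -= c
    return theta, b

theta_hat, b_hat = fit_rasch(responses)
# Estimated difficulties should track the true ones on synthetic data.
corr = np.corrcoef(b_hat, b_true)[0, 1]
```

In the peer-assessment setting, the "items" would be peer-graded questions and the responses would be derived from peer grades rather than simulated, letting the same machinery estimate both student proficiency and question difficulty.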