Results from a Survey on Post-Primary Teachers’ Experiences with Calculated Grading during COVID-19

In May 2020, as a result of COVID-19, the high-stakes assessment at the end of post-primary education in Ireland (the Leaving Certificate Examination – LCE) was cancelled and replaced by a system of calculated grades. In documentation sent to schools, the Department of Education and Skills (DES) made it clear that a calculated grade would result from the combination of two data sets:

  • an overall percentage mark and ranking in each subject awarded to each student by their teacher (the school-based estimation process)
  • data on past performance of students in each school and nationally (the standardisation process)

Following the issuing of results to students and the completion of the appeals process, an online questionnaire survey was conducted in the final months of 2020 by researchers at the Institute of Education, Dublin City University, with the aim of investigating how teachers engaged with the calculated grades process in their schools. Data from a total of 713 respondents were used in a report published by the Centre for Assessment Research, Policy and Practice in Education (CARPE) on April 15th 2021. This report is now available to download from www.dcu.ie/carpe. The following are some highlights from this report.


Assessment Evidence Used

Teachers considered many different types of formative and summative assessments when estimating marks and ranks for their students. Particularly important were final-year exams prior to lockdown (98%) and final-year continuous assessments (92%). Four out of every five teachers indicated that knowledge of how previous students had performed in the LCE influenced their decision-making. Significantly, 88% said that formative assessments were also important. One respondent noted:

Personally, I feel very competent in assigning the predicted grades to my LC students in 2020 since I had assessed their performance in detail over a 2-year period…. Each exam/ portfolio/homework was assigned a weighting and a record of their performance updated to our Schoology platform. Students could readily assess their own progress over this period and all this data enabled a solid predicted grade for each student.


Teachers’ Reflections on the School-Based Estimation Process

At least 90% of teachers indicated that they were able to apply the DES calculated grades guidelines strictly when estimating marks and ranks for the majority of their students. However, some reported experiencing difficulties in adjudicating marks at grade boundaries. For example, 61% said that, for 5% or more of their students, they gave the benefit of the doubt and awarded a mark that moved them above a grade boundary, with 21% saying that they should have awarded a failing mark but did not. One-third of respondents said that they awarded a higher mark to 5% or more of their students because they thought the national standardisation process might bring the student's grade down. While 73% said that the moderation process to align grades within their schools worked well, 26% reported raising a mark and 17% lowering a mark following engagement in the process. Significantly, the vast majority of teachers (92%) felt that the marks they awarded were fair.


Other Reflections

One in three respondents added commentary at the end of the questionnaire, with many focusing on the stress brought about by the fact that they lived in the same small communities as the students they were grading. Many identified parents, school management, media and politicians as sources of the pressure they felt.  One teacher expressed it thus:

I believe that while it would be ok for more teacher involvement in urban centres, the nature of rural and small town Ireland made the entire process very uncomfortable and I am sure that teachers will feel the rippling exponential impact of this for some time.

A number of events that transpired following the submission of school data to the DES were also highlighted as problematic.  The fact that the DES provided students with their rank order data came as a surprise to teachers and caused great disquiet. The removal, in late August, of school historical data from the standardisation process, following controversy about its use for calculated grades in the UK, was a source of great annoyance, especially among those working in high achieving schools. That said, some teachers noted that calculated grades had been an acceptable option in the context of a pandemic and that many students benefited from the fact that the grades awarded in 2020 were the highest ever.


Conclusion

The implementation of calculated grades in Ireland was a historic event as, for the first time since the introduction of the LCE in 1924, post-primary teachers engaged in the assessment of their own students for certification purposes. While difficulties arose, all those involved worked diligently to ensure that the class of 2020 could progress in their education and/or careers.  In 2021, Irish teachers will be asked to engage in a similar process while at the same time they will be preparing their students to take the traditional LC examinations.  The plan is that the two assessment systems will run side-by-side, and students will be given the option of choosing their best result in each subject.  Our hope is that findings from this survey will be useful to all those responsible for overseeing and implementing this challenging task.

References and Further Reading

Doyle, A., Lysaght, Z., & O’Leary, M. 2021. Preliminary Findings from a Survey of Post-Primary Teachers Involved in the Leaving Certificate 2020 Calculated Grades Process in Ireland. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University. Accessed April 15, 2021. https://www.dcu.ie/sites/default/files/inline-files/calculated_grades_2020_preliminary_findings_v2_2.pdf

Doyle, A., Lysaght, Z., & O’Leary, M. 2021. High stakes assessment policy implementation in the time of COVID-19: The case of calculated grades in Ireland. Irish Educational Studies, 40. DOI: 10.1080/03323315.2021.1916565 https://www.tandfonline.com/doi/full/10.1080/03323315.2021.1916565 

Prof. Michael O'Leary

Prometric Chair in Assessment, School of Policy and Practice, Institute of Education, Dublin City University

Michael O’Leary holds the Prometric Chair in Assessment at Dublin City University where he also directs the Centre for Assessment Research, Policy and Practice in Education (CARPE). He leads a programme of research at CARPE focused on assessment across all levels of education and in the workplace.

Dr. Audrey Doyle

Assistant Professor, School of Policy and Practice, Institute of Education, Dublin City University

Audrey Doyle is an assistant professor in the School of Policy and Practice in DCU. A former second-level principal of a large all-girls post-primary school in Dublin, she completed her Ph.D. at Maynooth University in 2019. She now lectures on curriculum and assessment across a range of modules in DCU, contributing to the Masters in Leadership and the Doctorate in Education.

Dr. Zita Lysaght

Assistant Professor, School of Policy and Practice, Institute of Education, Dublin City University

Zita Lysaght is a member of the School of Policy and Practice and a Research Associate and member of the Advisory Board and Advisory Panel of CARPE at DCU. She coordinates and teaches classroom assessment and research methodology modules on undergraduate, masters and doctoral programmes and directs and supervises a range of research and doctoral projects.

Artificial Intelligence in Student Assessment: What is our Trajectory?


Bengi Birgili is a Research Assistant in the Mathematics Education Department at MEF University in Istanbul. Here she shares her research and insights into the development of Artificial Intelligence applications in the field of education and explains the current trajectory of AI in the Turkish education system.

As a mathematics teacher and doctoral candidate in educational sciences, I closely follow the latest developments in Artificial Intelligence (AI) applications in the field of education. Innovations in AI become outdated within a few months because of rapid advances in image processing, speech recognition, natural language processing, robotics, expert systems, machine learning, and reasoning. With Google, Facebook, and IBM releasing much of their AI research as open source, these companies help speed up developments.

If we think of education as a chair, the legs are the four essential parts that keep it standing: the student, the teacher, the teaching process, and measurement-evaluation. Key areas of AI for education are determining the right strategies, making functional decisions, and coming up with the most appropriate designs for the education and training process. I believe there are many areas in which teachers can work in cooperation with Artificial Intelligence systems in the future.

Human behavior modelling

The main focus of AI studies worldwide is human behavior modelling. How humans model thinking, and therefore how we can accurately measure and evaluate students, is still a subject of exploration. Essentially, the question is: how do humans learn, and how can we teach this to AI expert systems?

Presently, AI expert systems learn in three ways:

  • supervised learning
  • unsupervised learning
  • reinforcement learning

As an educator, whenever I hear these categories, I think of the conditional learning and reward-punishment methods we learn about in educational sciences. These methods, which operate at the most fundamental level of individual teaching and learning, are central to the design of today's AI systems, which build on the behaviorist approach in learning theories.

Just as, in the classroom, we reinforce a student's behavior with a reward, praise, or acknowledgment in line with the behaviorist approach while teaching knowledge or skills, strengthening the frequency of that behavior and increasing the likelihood that the response will recur, so an agent or machine under development learns from the consequences of its actions.
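
The parallel can be made concrete with a toy example. The sketch below is purely illustrative (the actions, rewards, learning rate, and episode count are invented, not taken from any real tutoring system): an agent nudges its value estimates toward the rewards its actions produce, much as a rewarded student behavior becomes more frequent.

```python
# Toy illustration of reward-based (reinforcement) learning.
# The agent repeatedly tries two actions and updates its value
# estimate for each from the reward it receives.

def train(rewards, episodes=100, alpha=0.1):
    """Learn a value estimate for each action from its reward signal."""
    values = [0.0 for _ in rewards]
    for _ in range(episodes):
        for action, reward in enumerate(rewards):
            # Incremental update: move the estimate toward the observed reward.
            values[action] += alpha * (reward - values[action])
    return values

# Action 1 is rewarded (1.0), action 0 is not (0.0).
values = train(rewards=[0.0, 1.0])
best_action = max(range(len(values)), key=values.__getitem__)
```

After enough repetitions the estimate for the rewarded action approaches 1.0, so the agent comes to prefer it: a computational analogue of reinforcement.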

AI in the Measurement-Evaluation Process

One area for the use of natural language processing in the measurement-evaluation process is the evaluation of open-ended examinations. In Turkey, large-scale assessment consists mostly of multiple-choice examinations, chosen for their broad scope, objective scoring, high reliability, and ease of evaluation. Open-ended examinations, on the other hand, are more challenging to evaluate, but they measure students' higher-level thinking skills in much more detail than multiple-choice, fill-in-the-blanks, true-false, and short-answer questions.

Education systems in other countries make more use of open-ended items because they allow students to thoroughly use their reading comprehension skills. Also, students are able to demonstrate their knowledge in their own words and use multiple solution strategies, which is a better test of their content knowledge. But these open-ended items do not just measure students' knowledge of a topic; they also draw on higher-level thinking skills such as cognitive strategies and self-discipline. This is an area in which AI studies have begun to appear in the educational literature.
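
To illustrate the general idea, and only the idea, an automated scorer might compare a student's response to a reference answer. The sketch below uses simple word overlap; the reference answer and responses are invented, and operational scoring engines use far richer language models than this.

```python
# Deliberately naive sketch of automated open-ended scoring:
# compare a student response to a reference answer by word overlap.

import re

def tokens(text):
    """Lowercase word set (the character class also admits Turkish letters)."""
    return set(re.findall(r"[a-zçğıöşü]+", text.lower()))

def overlap_score(response, reference):
    """Jaccard similarity between response and reference vocabularies."""
    r, ref = tokens(response), tokens(reference)
    return len(r & ref) / len(r | ref) if (r | ref) else 0.0

reference = "photosynthesis converts light energy into chemical energy"
good = overlap_score(
    "plants use photosynthesis to turn light energy into chemical energy",
    reference)
weak = overlap_score("plants are green", reference)
```

A response that restates the key ideas scores higher than an off-topic one, which is the behaviour any scorer, however sophisticated, must reproduce.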

Countries using open-ended items in new-generation assessment systems include France, the Netherlands, Australia, and, in particular, the United States and the UK. These systems provide teachers, parents, and policymakers with the opportunity to monitor student progress based on student performance as well as student success. The development of Cognitive Diagnostic Models (CDM) and Computerized Adaptive Tests (CAT) has changed testing paradigms. These models classify students' responses in a test into a series of characteristics related to different hierarchically defined mastery levels. Another development is immersive virtual environments such as EcoMUVE, which can make stealth (invisible) assessments, evaluating students' written responses and automatically creating follow-up questions.
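
The adaptive principle behind CAT can be sketched in a few lines. The item difficulties, fixed-step ability update, and nearest-difficulty selection rule below are invented simplifications (operational CATs use item response theory), but they show how each answer steers the choice of the next item.

```python
# Bare-bones sketch of a Computerized Adaptive Test: the ability
# estimate moves after each response, and the next item is chosen
# to match the current estimate.

def next_item(ability, items, used):
    """Pick the unused item whose difficulty is closest to the ability estimate."""
    candidates = [i for i in range(len(items)) if i not in used]
    return min(candidates, key=lambda i: abs(items[i] - ability))

def run_cat(items, answer_correctly, n_questions=3, step=0.5):
    ability, used = 0.0, set()
    for _ in range(n_questions):
        item = next_item(ability, items, used)
        used.add(item)
        correct = answer_correctly(items[item])
        # Fixed-step update: harder next item if correct, easier if not.
        ability += step if correct else -step
    return ability

# A student who answers every item of difficulty <= 1.0 correctly.
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
final = run_cat(difficulties, lambda d: d <= 1.0)
```

Each correct answer raises the estimate, so the test climbs toward items near the student's true level instead of presenting every student with the same fixed set.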

AI in Student Assessment in Turkey

What we call "artificial intelligence [AI] in education" is a very broad concept. To simplify it, we can define it as a kind of expert system that sometimes takes the place of teachers (i.e., the intelligent tutor) by making pedagogical decisions about the student in the teaching or measurement-evaluation process. Sometimes the system assists by analyzing the student in depth, enabling them to interact with the system better; its aim is to guide and support students. To make more computational, precise, and rigorous decisions in the education process, the fields of AI and the Learning Sciences collaborate, contributing to the development of adaptive learning environments and of more customized, inclusive, flexible, and effective tools by analyzing how learning occurs along with its external variables.

Turkey is a country of tests and testing. Its education system relies on selection and placement examinations. However, developments in educational assessment worldwide include individual student follow-up, formative assessments, alternative assessments, stealth assessments, and learning analytics, and Turkey has yet to find its own trajectory for introducing AI in student assessment.

One difficulty is that the particular structure of the Turkish language makes it harder than in other countries to design, model, develop, and test AI systems, which explains the limited number of studies being carried out. The development of such systems depends on big data, so it is necessary to collect a lot of qualified student data in order to pilot deep learning systems. Yet the Monitoring and Assessment of Academic Skills report of 2015-2018 noted that 66% of Turkish students do not understand cause-and-effect relationships in reading.

In AI testing, students are first expected to grasp what they read and then to express what they know in answering questions, to express themselves, to come up with solutions, and to be able to use metacognitive skills. The limited number of students who can clearly demonstrate these skills in Turkey limits the amount of qualified data to which studies have access. There is a long way to go to train AI systems with qualified data and to adapt them to the complexities of the Turkish language. In short, Turkey is not yet on a trajectory for introducing AI into educational measurement and evaluation; we are still working to set ourselves on an appropriate one, still drifting in search of our orbit. However, there are signs that the future in this area will be shaped more quickly, addressing the questions I have raised.

The Outlook for AI in Student Assessment

While designing and developing such systems, it should be remembered that students and teachers also need to adapt to the system. Their readiness to do so will help us measure the quality of education in general as well as the level of students' knowledge and skills in particular. Authentic in-class examinations and national and international large-scale assessments should serve the same purpose. In the future, we will need AI systems to play a greater role in generating and categorizing questions and evaluating student responses. And they need to do this in a system whose main goal is to provide a learning process that positively supports the curiosity and ability of all our students.

Bengi Birgili

Research Assistant in the Mathematics Education Department at MEF University, Istanbul.

Bengi Birgili is a research assistant in the Mathematics Education Department at MEF University, Istanbul. She gained research experience at the University of Vienna. She is currently a PhD candidate in the Department of Educational Sciences, Curriculum and Instruction Program, at Middle East Technical University (METU), Ankara. Her research interests focus on curriculum development and evaluation, instructional design, and in-class assessment. She received the Emerging Researchers Bursary Winners award at ECER 2017 for her paper titled "A Metacognitive Perspective to Open-Ended Questions vs. Multiple-Choice."

In 2020, a co-authored study became one of the four accepted studies among Early-Career Scholars awarded by the International Test Commission (ITC) Young Scholar Committee in the UK [postponed to the 2021 Colloquium due to COVID-19].

In January 2020, she completed the Elements of AI certification offered by the University of Helsinki.

ResearchGate: https://www.researchgate.net/profile/Bengi-Birgili-2

Twitter: @bengibirgili

LinkedIn: https://www.linkedin.com/in/bengibirgili/

ORCID: https://orcid.org/0000-0002-2990-6717

Medium: https://bengibirgili.medium.com