Artificial Intelligence in Student Assessment: What is our Trajectory?

Bengi Birgili is a Research Assistant in the Mathematics Education Department at MEF University in Istanbul. Here she shares her research and insights into the development of Artificial Intelligence applications in the field of education and explains the current trajectory of AI in the Turkish education system.

As a mathematics teacher and doctoral candidate in educational sciences, I closely follow the latest developments in Artificial Intelligence (AI) applications in the field of education. Innovations in AI become outdated within a few months because of rapidly expanding research on image processing, speech recognition, natural language processing, robotics, expert systems, machine learning, and reasoning. Because Google, Facebook, and IBM release much of their AI work as open source, these companies help speed up developments.

If we think of education as a chair, its four legs are what keep it standing: the student, the teacher, the teaching process, and measurement-evaluation. These are the four basic elements of education. Key areas of AI for education are determining the right strategies, making functional decisions, and coming up with the most appropriate designs for the education and training process. I believe there are many areas in which teachers will be able to work in cooperation with Artificial Intelligence systems in the future.

Human behaviour modelling

The main focus of AI studies worldwide is human behavior modelling. The relationship between how humans model thinking and how we can, therefore, accurately measure and evaluate students is still a subject of exploration. Essentially, the question is: how do humans learn, and how can we teach this to AI expert systems?

Presently, AI expert systems learn in three ways:

  • supervised learning
  • unsupervised learning
  • reinforcement learning

As an educator, whenever I hear these categories, I think of the conditional learning and reward-punishment methods we learn about in educational sciences. These methods, which operate at the most fundamental level of individual teaching and learning, are central to the design of the AI systems being developed today, which are built on the behaviorist approach in learning theories.

In the classroom, in line with the behaviorist approach, we reinforce a student's behavior with a reward, praise, or acknowledgment while teaching knowledge or skills; this strengthens the behavior and increases the likelihood that the response will recur. In a similar vein, an agent or machine under development learns from the consequences of its actions.
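To make the analogy concrete, here is a minimal sketch of a reinforcement-learning agent strengthening the actions that are followed by a reward, much as praise strengthens a desired behavior in the classroom. The "environment" (question types, solution strategies, and which strategy earns the reward) is entirely hypothetical and is not taken from any particular AI-in-education system.

```python
import random
from collections import defaultdict

# A toy "practice exercise" environment: the agent (learner) chooses a
# strategy for each question and receives a reward of 1 when the chosen
# strategy works for that question type, and 0 otherwise.
QUESTION_TYPES = ["fractions", "geometry"]
STRATEGIES = ["draw_a_diagram", "write_an_equation"]
CORRECT = {"fractions": "write_an_equation", "geometry": "draw_a_diagram"}  # hypothetical

alpha, epsilon, episodes = 0.1, 0.2, 2000
q = defaultdict(float)  # learned value of each (question_type, strategy) pair

for _ in range(episodes):
    state = random.choice(QUESTION_TYPES)
    # Explore occasionally; otherwise exploit the strategy valued highest so far
    if random.random() < epsilon:
        action = random.choice(STRATEGIES)
    else:
        action = max(STRATEGIES, key=lambda a: q[(state, a)])
    reward = 1.0 if action == CORRECT[state] else 0.0  # the "praise" signal
    # Reinforcement: nudge the value of the taken action toward the reward received
    q[(state, action)] += alpha * (reward - q[(state, action)])

for state in QUESTION_TYPES:
    best = max(STRATEGIES, key=lambda a: q[(state, a)])
    print(f"For {state} questions the agent has learned to prefer: {best}")
```

After enough episodes, the action values converge so that the rewarded strategy is preferred for each question type, which is the machine analogue of a behavior becoming more frequent after repeated reinforcement.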

AI in the Measurement-Evaluation Process

One area for the use of natural language processing in the measurement-evaluation process is the evaluation of open-ended examinations. In Turkey, large-scale assessment consists mostly of multiple-choice examinations, chosen for their broad scope, objective scoring, high reliability, and ease of evaluation. On the other hand, open-ended examinations are more challenging because they measure students’ higher-level thinking skills in much more detail than multiple-choice, fill-in-the-blanks, true-false, and short-answer questions.
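As a simple illustration of what sits at the core of such systems, the sketch below compares a student's open-ended answer with a teacher's model answer using bag-of-words cosine similarity. Real automated-scoring engines rely on far richer natural language processing; the model answer, the student response, and the scoring threshold here are all hypothetical.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lower-case the text and count word occurrences (a bag-of-words vector)."""
    return Counter(re.findall(r"[a-zçğıöşü]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two bag-of-words vectors (0 = no overlap, 1 = identical)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical model answer and student response for an open-ended science item
model_answer = "Evaporation increases when temperature rises because water molecules gain energy"
student_answer = "When the temperature rises, molecules of water gain more energy, so evaporation increases"

score = cosine_similarity(tokens(model_answer), tokens(student_answer))
print(f"Similarity score: {score:.2f}")
print("Provisional mark:", "full credit" if score > 0.6 else "route to a human rater")
```

Even this crude measure shows why open-ended scoring is harder than marking multiple-choice items: the system must decide how much credit a paraphrase deserves rather than checking a single keyed option.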

Education systems in other countries make more use of open-ended items because they allow students to thoroughly use their reading comprehension skills. Also, students are able to demonstrate their knowledge in their own words and use multiple solution strategies, which is a better test of their content knowledge. But these open-ended items do not just measure students’ knowledge of a topic; at the same time, they draw on higher-level thinking skills such as cognitive strategies and self-discipline. This is an area in which AI studies have begun to appear in the educational literature.

Countries using open-ended items in new generation assessment systems are France, the Netherlands, Australia, and, in particular, the United States and the UK. These systems provide teachers, parents, and policymakers with the opportunity to monitor student progress based on student performance as well as student success. The development of Cognitive Diagnostic Models (CDM) and Computerized Adaptive Tests (CAT) changed testing paradigms. These models classify students’ response patterns on a test into a series of characteristics related to hierarchically defined mastery levels. Another development is immersive virtual environments such as EcoMUVE, which can carry out stealth (invisible) assessments, evaluating students’ written responses and automatically generating follow-up questions.
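The adaptive logic behind a CAT can be illustrated very simply: after each response, the system updates its estimate of the student's ability and then administers the unused item whose difficulty is closest to that estimate. The item bank, step size, and update rule below are hypothetical simplifications of the item response theory machinery a real CAT would use.

```python
# A minimal sketch of computerized adaptive test (CAT) item selection.
# Difficulties are on an arbitrary logit-like scale; the simple step-based
# ability update stands in for the statistical estimation a real CAT performs.
item_bank = {"Q1": -2.0, "Q2": -1.0, "Q3": 0.0, "Q4": 1.0, "Q5": 2.0}  # hypothetical difficulties

def run_cat(answer_fn, n_items=3, start_ability=0.0, step=0.8):
    ability = start_ability
    remaining = dict(item_bank)
    for _ in range(n_items):
        # Select the remaining item whose difficulty is closest to the current ability estimate
        item = min(remaining, key=lambda q: abs(remaining[q] - ability))
        difficulty = remaining.pop(item)
        correct = answer_fn(item, difficulty)
        # Move the ability estimate up after a correct answer, down after an incorrect one
        ability += step if correct else -step
        print(f"{item} (difficulty {difficulty:+.1f}): {'correct' if correct else 'incorrect'} "
              f"-> ability estimate {ability:+.1f}")
    return ability

# Simulate a student of true ability +0.5 who answers items at or below that level correctly
final = run_cat(lambda item, difficulty: difficulty <= 0.5)
print(f"Final ability estimate: {final:+.1f}")
```

Because each item is chosen in response to the previous answer, two students can sit the "same" test yet see different questions, which is exactly what makes adaptive testing more informative per item than a fixed form.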

AI in Student Assessment in Turkey

What we call “artificial intelligence (AI) in education” is a very broad concept. To simplify it, we can define it as a kind of expert system that sometimes takes the place of teachers (i.e., intelligent tutors) by making pedagogical decisions about the student in the teaching or measurement-evaluation process, and sometimes assists by analyzing the student in depth so that they can interact with the system better. It aims to guide and support students. To make more computational, precise, and rigorous decisions in the education process, the fields of AI and the Learning Sciences collaborate, contributing to the development of adaptive learning environments and more customized, inclusive, flexible, and effective tools by analyzing how learning occurs along with its external variables.

Turkey is a country of tests and testing. Its education system relies on selection and placement examinations. However, educational assessment worldwide is moving toward individual student follow-up, formative assessment, alternative assessment, stealth assessment, and learning analytics, and Turkey has yet to find its own trajectory for introducing AI in student assessment.

Moreover, the particular structure of the Turkish language makes it more difficult than in many other countries to design, model, develop, and test AI systems, which explains the limited number of studies being carried out. The development of such systems depends on big data, so a great deal of high-quality student data needs to be collected in order to pilot deep learning systems. Yet the Monitoring and Assessment of Academic Skills report for 2015-2018 noted that 66% of Turkish students do not understand cause-and-effect relationships in what they read.

In AI testing, students are first expected to grasp what they read and then to express what they know when answering questions, to articulate their own ideas, to come up with solutions, and to use metacognitive skills. The limited number of students who can clearly demonstrate these skills in Turkey limits the amount of qualified data to which studies have access. There is a long way to go in order to train AI systems with qualified data and to adapt to the complexities of the Turkish language. In short, Turkey is not yet on a trajectory for introducing AI for education measurement and evaluation; we are still working to get ourselves onto an appropriate trajectory, still oscillating through the universe. However, there are signs that the future in this area will be designed faster, addressing the questions I have raised.

The Outlook for AI in Student Assessment

While designing and developing such systems, it should be remembered that students and teachers also need to adapt to the system. Their readiness to do so will help us measure the quality of education in general as well as the level of students’ knowledge and skills in particular. Authentic in-class examinations and national and international large-scale assessments should serve the same purpose. In the future, we will need AI systems to play a greater role in generating and categorizing questions and evaluating student responses. And they need to do this within a system whose main goal must be to provide a learning process that positively supports the curiosity and ability of all our students.

Bengi Birgili

Research Assistant in the Mathematics Education Department at MEF University, Istanbul.

Bengi Birgili is a research assistant in the Mathematics Education Department at MEF University, Istanbul. She has research experience at the University of Vienna. She is currently a PhD candidate in the Curriculum and Instruction Program of the Department of Educational Sciences at Middle East Technical University (METU), Ankara. Her research interests focus on curriculum development and evaluation, instructional design, and in-class assessment. She received the Emerging Researchers Bursary Winners award at ECER 2017 for her paper titled “A Metacognitive Perspective to Open-Ended Questions vs. Multiple-Choice.”

In 2020, a co-authored study became one of the four studies by early-career scholars accepted for an award by the International Test Commission (ITC) Young Scholar Committee in the UK [postponed to the 2021 colloquium due to COVID-19].

In Jan 2020, she completed the Elements of AI certification offered by the University of Helsinki.

ResearchGate: https://www.researchgate.net/profile/Bengi-Birgili-2

Twitter: @bengibirgili

LinkedIn: https://www.linkedin.com/in/bengibirgili/

ORCID: https://orcid.org/0000-0002-2990-6717

Medium: https://bengibirgili.medium.com