Peer review in the era of generative AI models: An ethical call

The emergence of generative AI models, such as ChatGPT, is significantly impacting various facets of society, including research and academia. Given their ability to generate human-like text based on input data or prompts, generative AI models have profound implications for the academic community. These implications entail ethical and societal challenges within the peer review process, raising questions about the potential role of such models (Schintler, McNeely and Witte, 2023).

As a reviewer, I believe that it is my responsibility to highlight and discuss these critical and timely ethical concerns. Thus, while acknowledging the potential benefits of using such models for academic purposes, this blog post examines the caveats of using them in peer review and some of their potential pitfalls. This, in turn, emphasises the fundamental need for clearly shared and regularly updated ethical guidelines that ensure the healthy use of such models in academia.

Can generative AI assist or replace expert reviewers?

How much AI is too much? This raises an ethical dilemma: should generative AI assist or replace expert reviewers? To address such fundamental questions, we should first acknowledge that AI, like any technology, is designed to increase the productivity of professionals, not necessarily to replace them.

To begin with, generative AI (GAI) can assist reviewers, particularly those for whom English is not their first language, in producing clear and concise reports in less time. For instance, generative AI tools such as editGPT have the potential to save time on assessing text readability – a task that is less intellectually demanding (Checco et al., 2021). I therefore believe it is acceptable for expert reviewers to use generative AI to streamline the review process, while emphasising the necessity of checking the accuracy of the resulting report.

The pitfalls of AI in the peer review process

Generative AI models are typically unable to offer recommendations based on the latest research findings in the dynamic and complex field of education. The expertise of human peer reviewers is generally beyond the capabilities of generative AI models, which lack the required domain knowledge and intellectual capacity, at least in the foreseeable future. These limitations may have serious implications: GAI models typically provide general comments that lack critical content about the manuscript concerned (Donker, 2023), which means a lack of proper recommendations for improvement and the possibility of the manuscript being unjustly dismissed.

Generative AI models may produce reviews that contain ethical concerns and biases. AI algorithms risk replicating, and possibly amplifying, human biases (Schintler, McNeely and Witte, 2023). One ethical consideration, which even human reviewers may not fully adhere to, is the importance of respecting the authors’ perspective and not converting their manuscript into that of the reviewers. AI might make recommendations that do not sufficiently respect or consider the authors’ perspective. This could potentially result in humans being responsible not just to fellow humans but also to machines (Schintler, McNeely and Witte, 2023). Reviewers equally need to be aware of caveats such as breaching the confidentiality of the manuscript under review, as generative AI models may use or share the manuscript’s original ideas as part of their machine-learning processes (Mollaki, 2024). These ethical issues clearly highlight a call for the wise use of unfolding AI technology.

Effective GAI Implementation

We acknowledge that GAI technology is developing at a fast pace (Checco et al., 2021), so it is not easy to predict its exact capabilities. Accordingly, we might witness the emergence of generative AI models that address some of the above ethical concerns. Therefore, it is the collective responsibility of all involved in knowledge production (authors, reviewers, editors, academic supervisors) to continually review and update the scientific community, as well as society members in general, on best practices for using generative AI in academia.

Peer reviewers have an ethical duty to uphold their full responsibility and resist the temptation to simply delegate the job to generative AI models. While researchers rightly advocate for policies that govern the use of AI in peer review (Mollaki, 2024), I do believe that it is, first and foremost, an ethical responsibility that should be central to the review process, with clear repercussions for those disregarding these fundamental ethics.

Thus, this blog post, following Facer and Selwyn (2021), advocates for a ‘non-stupid’ optimism that acknowledges the limitations of using digital technologies in academia. This necessitates that the dialogue be positioned within continuous academic discussions and research on reliable and ethical AI-powered peer review. All those involved in advancing educational research need to ensure that ethics is at the heart of the knowledge production process; otherwise, the integrity of the entire process would be compromised.

Key Messages

  • Generative AI models like ChatGPT have critical implications for academic peer review.
  • Expert reviewers play a crucial role in maintaining the quality and integrity of the peer review process.
  • Generative AI should complement, not replace, human judgment and expertise in academia.
  • Continuous review and dialogue are necessary to ensure ethical and effective use of AI in peer review.
Dr Ayman Hefnawi

Mathematics Instructor, ADVETI, UAE

Ayman Hefnawi holds a Doctor of Education from the University of Bath, United Kingdom, and a master’s degree in educational leadership and management from the University of Warwick, United Kingdom. Additionally, Ayman serves as a reviewer for educational journals and maintains memberships in various academic associations.

https://www.researchgate.net/profile/Ayman-Hefnawi 
https://twitter.com/aymanhefnawi 
https://orcid.org/my-orcid?orcid=0000-0002-7744-6997 
https://www.scopus.com/authid/detail.uri?authorId=57315619100 


References and Further Reading

Checco, A., Bracciale, L., Loreti, P., Pinfield, S. and Bianchi, G., 2021. AI-assisted peer review. Humanities and Social Sciences Communications, 8(1), pp.1-11. https://www.nature.com/articles/s41599-020-00703-8

Donker, T., 2023. The dangers of using large language models for peer review. The Lancet Infectious Diseases, 23(7), p.781. https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(23)00290-6/fulltext?rss=yes

Facer, K. and Selwyn, N., 2021. Digital technology and the futures of education: Towards ‘Non-Stupid’ optimism. Futures of Education initiative, UNESCO.

Mollaki, V., 2024. Death of a reviewer or death of peer review integrity? The challenges of using AI tools in peer reviewing and the need to go beyond publishing policies. Research Ethics, p.17470161231224552. https://journals.sagepub.com/doi/10.1177/17470161231224552

Schintler, L.A., McNeely, C.L. and Witte, J., 2023. A Critical Examination of the Ethics of AI-Mediated Peer Review. arXiv preprint arXiv:2309.12356. https://arxiv.org/abs/2309.12356

The ERIKA Project to explore Ukrainian scholars’ digital mastery

In the ever-evolving landscape of academia, the intersection of the digital realm with traditional research methodologies underscores the pressing need for scholars to embrace evolving competencies. This blog post navigates the digital horizon, emphasizing the imperative integration of open science principles, data literacy, and research competencies for researchers to thrive in the dynamic academic ecosystem. Focusing on the insights gleaned from the 2022-2023 Enhancing Empirical Academic Research in Ukraine (ERIKA) project, this exploration sheds light on a critical digital skills gap among Ukrainian academics.

The need for digital competency among researchers

As society transitions into an era where data shapes an intangible reality, the transformative role of science and innovation becomes evident (Lagoudakis et al., 2022; Tavares et al., 2022). Acknowledging that the very entities driving digital transformations must undergo significant changes (Ayris et al., 2018; European Commission, 2020a, 2020b), the expert community underscores the core competencies of a modern “Digital Scholar,” including Open Science practices, adherence to FAIR principles, and proficiency in data management (Van Petegem et al., 2021; Weller, 2018).

International studies further reveal a global lack of digital competencies among researchers, emphasizing the need for enhanced professional training programs and continuous education to meet the demands of the digital age (Cabero-Almenara et al., 2021; Dias-Trindade & Albuquerque, 2022; EU DGRI, 2017, 2020; Suyo-Vega et al., 2022; EU DGRI & EOSC EB, 2021).

Regrettably, the situation in Ukraine mirrors this global trend. Compared with many other countries, Ukraine has notably few comprehensive studies assessing the digital competencies of its researchers (Hladchenko et al., 2018; Hladchenko, 2022). The absence of such research poses a significant challenge, as it inhibits our ability to identify specific areas of weakness and design targeted interventions to uplift the digital skills of Ukrainian academics.

The ERIKA project

The ERIKA project, conducted in 2023, provides a noteworthy snapshot of the digital competencies among Ukrainian scholars. This initiative, supported by the Ukrainian Educational Research Association and the European Educational Research Association, aimed to enhance empirical academic research capabilities in Ukraine.

The ERIKA project engaged over 50 participants from 13 Ukrainian regions, representing 29 higher education institutions. The participants, ranging from senior lecturers to professors, covered diverse disciplines and career levels, providing a comprehensive view of the academic landscape.

 The training sessions, held in March and October 2023, facilitated in-depth discussions and insights into the digital competencies of Ukrainian academics. The multifaceted and nuanced responses of participants shed light on the challenges and opportunities within the country’s academic community.

The competency gap revealed by the ERIKA survey

The survey found that 80% of respondents demonstrated only superficial awareness of open science principles. A further 40% were unaware of the FAIR guidelines, underlining the urgent need for initiatives that promote open science literacy.

In a world dominated by vast datasets and interconnected information, only 19% of Ukrainian academics exhibited proficiency in working with common data file formats like CSV and JSON. The lack of understanding of metadata, reported by 71% of respondents, raises concerns about the effectiveness of data-sharing practices.

A significant gap exists in the awareness of available national open data resources, hindering the potential for impactful research studies that utilise real-world data. This knowledge gap is detrimental to the nation’s academic community, limiting their ability to engage in cutting-edge research.

The survey uncovered deficiencies in basic research skills, such as crafting focused search queries, applying Boolean operators, and locating relevant datasets. These fundamental skills are the building blocks of impactful, evidence-based research.

While most respondents demonstrated awareness of the risks associated with predatory publishing, only 15% could independently assign Digital Object Identifiers (DOIs) to enhance the discoverability of their research outputs. This finding is concerning, especially considering the increasing requirement for DOIs in academic publishing after 2022 (due to National regulations).

The survey also revealed low uptake of reference management tools, with 63% of respondents admitting to never having used platforms like Zotero, Mendeley, and EndNote. Additionally, 71% had minimal legal and ethical understanding regarding the use of personal data in research studies.

Encouragingly, despite these challenges, nearly half of the surveyed researchers with no prior grant application experience expressed eagerness to build expertise in this area. This enthusiasm bodes well for the future, as acquiring grant-writing skills opens avenues for accessing national and international funding opportunities.

The development of a training program on data skills for academics

The ERIKA project highlights the pressing need for tailored programs to build digital competencies among Ukrainian academics. Bridging the identified gaps in open science, data handling, and research fundamentals is crucial to unlocking the nation’s scientific potential and integrating it into the global and European research ecosystem. In this context, the importance and urgency of creating well-designed professional training programs at educational institutions is evident, and a more centralized approach under the leadership of the Ministry of Education and Science could be the best choice.

The “ERIKA” case is a successful example of a professional training program. The final assessment results demonstrated that the topics proposed in the “ERIKA” course (Koblianska & Kostetska, 2023), their content, and the training organization help address the outlined problems and improve researchers’ competencies.

 The project entails developing and teaching an integrated educational course with two modules:

1) “Data Collection for Academic Research” (3 ECTS)

2) “Academic Research in EU Countries: Institutional, Organizational and Motivational Dimensions” (3 ECTS)

The first module covers open science principles and their role in modern research; data collection, analysis, and management in academic research, including legal and ethical components; skills for finding scientific information and formulating search queries; and practical aspects of working with data.

The second module examines organizing academic research in EU countries, particularly the scientific personnel training system, graduate and doctoral program structures, establishing academic communication, publishing research results, review procedures, and motivational factors affecting researcher productivity. The “ERIKA” experience could be scalable.

Conclusion

As we navigate the increasingly digital and data-intensive academic world, sustained monitoring and responsive training programs are pivotal for ensuring researchers’ success in Ukraine. The collaboration between the Ukrainian Educational Research Association and the European Educational Research Association exemplifies the importance of such initiatives in enhancing research capabilities and quality. By addressing the digital skills gap, Ukraine can position itself as a formidable force in the global academic arena, contributing substantively to advancing knowledge and innovation. Bridging the digital divide is not just a necessity; it is a pathway to unlocking the full potential of Ukrainian scholarship on the world stage.

The post image was generated via Dream Studio AI

Key Messages

  • Digital Competency Urgency. In the evolving landscape of academia, there is a pressing need for scholars to embrace evolving digital competencies, emphasizing the integral role of skills like open science principles and data literacy.
  • Global Digital Competency Trends. International studies reveal a pervasive lack of digital competencies among researchers globally, sparking discussions on the necessity of enhanced professional training programs and continuous education.
  • Ukraine’s Digital Skills Gap. We shed light on the concerning trend in Ukraine, mirroring the global situation, where comprehensive studies assessing the digital competencies of researchers are notably limited, posing a significant challenge.
  • ERIKA Project Results. The ERIKA project in 2023 provides a snapshot of digital competencies among Ukrainian scholars, engaging participants from diverse disciplines and career levels, offering insights into challenges and opportunities within the academic community.
  • Survey Insights and Concerns. Surprising survey findings indicate gaps in open science principles awareness, proficiency in working with data file formats, and deficiencies in basic research skills, emphasizing the need for tailored programs.
  • Call to Action. The need for tailored programs to bridge identified gaps in open science, data handling, and research fundamentals is urgent; addressing these gaps can position Ukraine as a formidable force in the global academic arena.
Dr Inna Koblianska

Associate Professor of the Department of Economics, Entrepreneurship, and Business Administration of Sumy State University

Dr Koblianska is an Associate Professor of the Department of Economics, Entrepreneurship, and Business Administration of Sumy State University. Her scientific interests include sustainable development, regional development, spatial economy, and logistics management. She is the laureate of the award of the President of Ukraine for young scientists (2019).

She is ERIKA Project Executor, responsible for the development and teaching of the Data Collection for Academic Research module.

Internship: Sustainable Farming Assessment (2017, Bern University of Applied Sciences); School of Agricultural Economics (IAMO, Halle (Saale), 2019); Educational training session on data collection (Statistics Germany and University of Applied Sciences Weihenstephan-Triesdorf, 2019); DAAD projects (University of Applied Sciences Weihenstephan-Triesdorf, 2018-2022); Applied econometric analysis using R (German-Ukrainian Agricultural Policy Dialogue and IAMO, 2021).

She is the author of numerous scientific works.

https://orcid.org/0000-0002-7844-9786
https://econ.biem.sumdu.edu.ua/en/inna-koblianska

Dr Iryna Kostetska

Senior Lecturer, Department of Economic Theory, Management and Marketing, National University of Ostroh Academy

Dr Kostetska is a Senior Lecturer at the Department of Economic Theory, Management and Marketing, National University of Ostroh Academy. Her scientific interests include business planning in agricultural enterprises and strategic planning of the development of rural areas.

She is ERIKA’s project manager, responsible for the development and teaching of the educational module Academic research in EU: institutional, organizational and motivational dimensions.

Internship under the programs of the French Agricultural Institute (SevrEurope de Bressuire, l’IREO de Bressuire, France 2009, 2010), the Polish-American Freedom Foundation (Lane Kirkland Program, Poland 2018-2019), the Polish UNESCO Committee (Poland 2019), the Visegrad Foundation (2021-2022). She worked on economic and regional development projects with the support of USAID, the British Council in Ukraine, and the Czech Republic.

She is the author of numerous scientific works.

https://orcid.org/0000-0001-5340-0145


References and Further Reading

Ayris, P., Lopez de San Roman, A., Maes, K., & Labastida, I. (2018). Open Science and its role in universities: A roadmap for cultural change. LERU. https://www.leru.org/publications/open-science-and-its-role-in-universities-a-roadmap-for-cultural-change (access date: 10.09.2023)

Cabero-Almenara, J., Guillén-Gámez, F. D., Ruiz-Palmero, J., & Palacios-Rodríguez, A. (2021). Digital competence of higher education professor according to DigCompEdu. Statistical research methods with ANOVA between fields of knowledge in different age ranges. Education and Information Technologies, 26(4), 4691–4708. https://doi.org/10.1007/s10639-021-10476-5

Dias-Trindade, S., & Albuquerque, C. (2022). University Teachers’ Digital Competence: A Case Study from Portugal. Social Sciences, 11(10), 481. https://doi.org/10.3390/socsci11100481

EU DGRI & EOSC EB. (2021). Digital skills for FAIR and Open Science: Report from the EOSC Executive Board Skills and Training Working Group. European Commission. Directorate General for Research and Innovation. EOSC Executive Board. Publications Office. https://data.europa.eu/doi/10.2777/59065

EU DGRI. (2020). Country sheets analysis: Report from the EOSC Executive Board Working Group (WG) Landscape. Directorate-General for Research and Innovation (European Commission). Publications Office of the European Union. https://data.europa.eu/doi/10.2777/568900

EU DGRI. (2017). Providing researchers with the skills and competencies they need to practise Open Science. European Commission. Directorate General for Research and Innovation. Publications Office. https://data.europa.eu/doi/10.2777/121253

European Commission. (2020a). Digital Education Action Plan (2021-2027). European Education Area. https://education.ec.europa.eu/node/1518 (access date: 10.09.2023)

European Commission. (2020b). Research and innovation strategy 2020-2024. https://research-and-innovation.ec.europa.eu/strategy/strategy-2020-2024_en (access date: 10.09.2023)

Hladchenko, M. (2022). Implications of Publication Requirements for the Research Output of Ukrainian Academics in Scopus in 1999–2019. Journal of Data and Information Science, 7(3), 71–93. https://doi.org/10.2478/jdis-2022-0016

Hladchenko, M., Dobbins, M., & Jungblut, J. (2018). Exploring Change and Stability in Ukrainian Higher Education and Research: A Historical Analysis Through Multiple Critical Junctures. Higher Education Policy, 33. https://doi.org/10.1057/s41307-018-0105-9

Lagoudakis, M. G., Gkizeli, M., Fotiou, A., Fragkedaki, D., & Kollnig, S. (2022). Teaching and Research in the Digital World. BHM Berg- und Hüttenmännische Monatshefte, 167(10), 489–494. https://doi.org/10.1007/s00501-022-01283-7

Koblianska, I., & Kostetska, I. (2023). Enhancing Empirical Academic Research in Ukraine: Training materials. Zenodo. 124 p. https://doi.org/10.5281/zenodo.7817137

Suyo-Vega, J. A., Meneses-La-Riva, M. E., Fernández-Bedoya, V. H., Ocupa-Cabrera, H. G., Alvarado-Suyo, S. A., da Costa Polonia, A., Miotto, A. I., & Gago-Chávez, J. de J. S. (2022). University teachers’ self-perception of digital research competencies. A qualitative study conducted in Peru. Frontiers in Education, 7. https://www.frontiersin.org/articles/10.3389/feduc.2022.1004967 (access date: 10.09.2023)

Tavares, M. C., Azevedo, G., & Marques, R. P. (2022). The Challenges and Opportunities of Era 5.0 for a More Humanistic and Sustainable Society—A Literature Review. Societies, 12(6), Article 6. https://doi.org/10.3390/soc12060149

Van Petegem, W., Bosman, J., De Klerk, M., & Strydom, S. (2021). Evolving as a Digital Scholar: Teaching and Researching in a Digital World. Leuven University Press. https://doi.org/10.2307/j.ctv20zbkk0

Weller, M. (2018). The Digital Scholar Revisited: The Digital Scholar: Philosopher’s Lab, 1(2), 52–71. https://doi.org/10.5840/dspl20181218

Resisting the marginalisation of children’s right to play

Why have we, as educators, accepted that play now occupies the margins of early childhood education and care? Whilst a long tradition of international research positions play as essential to early learning (Wood, 2015), tensions remain around how far play is foregrounded in classroom life. But can – and should – educators subvert the marginalisation of play in early childhood education and care (ECEC)? It is a question that has provoked recent scholarship on the resistance practices of educators. Dr Jo Albin-Clark and Dr Nathan Archer share their research and thoughts on the marginalisation of play in education.

Play in the current context

Over time, as researchers in ECEC, we have found that play seems to have slipped down the agenda in the push for formalised learning in countries such as England, as accountability bodies frame teaching within standards agendas that can sideline child-initiated play (Wood, 2019). Play seems to occupy a contested curriculum space (Fairchild and Kay, 2021, p. 1). Yet play is not only being eroded in school life; the pull of structured time and the chase for high achievement reaches into family life (Sahlberg and Doyle, 2019). The result is the withholding of play from children (Murray et al., 2019).

But play is much more than educational experiences. It is deeply associated with childhood itself. The entitlement to play is set out in Article 31 of the United Nations Convention on the Rights of the Child (UNCRC) (Office for the High Commissioner for Human Rights OHCHR, 1989). Significantly, the right to play is an innovative component that acts as a gateway to other rights related to health and broader development (Davey and Lundy, 2011). Even though play is strongly associated with many domains of learning and development, it is not always taken seriously, and because of that the status of play has suffered (Brooker and Woodhead, 2013).

Resistance practices

Play is a matter of social justice (Souto-Manning, 2017) and, for that reason, needs policymakers and educators to protect children’s entitlement to play, including through resistance to its marginalisation. As such, a growing body of literature in early childhood education (Moss, 2019; Archer and Albin-Clark, 2022) focuses on the multiple manifestations of these resistances by educators. Much of this resistance scholarship takes an explicit social justice position, with reconceptualist writers increasingly calling for greater advocacy and social activism in terms of both policy and practice (e.g., Bloch et al., 2018). Research reveals how the scope and scale of this resistance and activism vary from micro resistances to collective action. Nonetheless, both small and large-scale actions can produce sites for hopeful and flourishing pedagogies that can shift from marginalisation to more active politicised resistance.

Resistance stories

Building on this prior work, we came together as researchers with two cases from separate studies (Albin-Clark, 2018; 2022; Archer, 2020; 2021). What is common to both case studies is a shared interest in how ECEC educators make sense of their experiences and enact forms of resistance. Through the stories of two early childhood educators working in England, we identified their commitment to ‘being the right thing’ and ‘doing the right thing’, foregrounding play in their practice as a matter of social justice. As such, both educators resisted and subverted pressures, scrutiny, and colleague expectations to make play happen, demonstrating how play is implicated in concerns of justice (Nicholson and Wisneski, 2017).

Call to arms

In conclusion, we need to further problematise the implications and risks of mobilising play (Shimpi and Nicholson, 2014). Making play happen requires a critical awareness of the relationship between rights and play agendas and the tensions involved in navigating the value of play in the complexity of ECEC (Wong, 2013). Saying ‘no’ to play’s marginalisation brings teachers into a professionalism founded on resistance (Fenech et al., 2010).

Now is the time to acknowledge and amplify resistances that promote the right to play. But for educators there are risks of being labelled a ‘disobedient’ professional (Leafgren, 2018). Promoting play can mean thinking carefully about how curriculum content is framed (Wood and Hedges, 2016). Moreover, children’s access to and entitlement to play are positioned as a moral imperative by both educators in our studies, which suggests how seriously the right to play is taken (Nicholson and Wisneski, 2017; Wood, 2007). Social justice needs serious play.

 

Key Messages

  • Play has an essential role in children’s educational lives and matters to their childhood.
  • Play and educational justice are related concepts.
  • There are both implications and risks in marginalising children’s right to play.
  • The increasing formalisation of education for our youngest children needs scrutiny.
  • Making play happen in educational practice might need forms of resistance.
Dr Jo Albin-Clark

Senior Lecturer in Early Education

Dr. Jo Albin-Clark is a senior lecturer in early education at Edge Hill University. Following a teaching career in nursery and primary schools, Jo has undertaken a number of roles in teaching, advising and research in early childhood education. She completed a doctorate at the University of Sheffield in 2019 exploring documentation practices through posthuman and feminist materialist theories in early childhood education. Her research interests include observation and documentation practices, methodological collaboration, and research creation through posthuman lenses. Throughout her work, teachers’ embodied experiences of resistances to dominant discourses have been a central thread.

https://orcid.org/0000-0002-6247-8363

https://research.edgehill.ac.uk/en/persons/joanne-albin-clark 

Dr Nathan Archer

Researcher at Leeds Beckett University

Dr Nathan Archer is a researcher at Leeds Beckett University. Originally qualified as a Montessori teacher, Nathan has worked in practice, policy and research in early childhood education for twenty-five years. He gained a PhD from the University of Sheffield in 2020 and has undertaken policy analysis with the Sutton Trust, the Nuffield Foundation and the University of Leeds. He continues to research early childhood workforce policy, and the resistance and activism of early childhood educators. Nathan is Associate Editor of the Journal of Early Childhood Research.

 


References and Further Reading

Albin-Clark, J. (2018). ‘I felt uncomfortable because I know what it can be’: The emotional geographies and implicit activisms of reflexive practices for early childhood teachers. Contemporary Issues in Early Childhood, 21(1), 20-32. https://doi.org/10.1177/1463949118805126

 Albin-Clark, J.  (2022). The right to play: Are young children free to determine their own actions? https://blogs.edgehill.ac.uk/isr/the-right-to-play-are-young-children-free-to-determine-their-own-actions/

 Albin-Clark, J. & Archer, N. 2023, “Playing social justice: How do early childhood teachers enact the right to play through resistance and subversion? ” Prism: Casting new light on learning, practice and theory, 5 (2), 1-22. https://doi.org/10.24377/prism.article714

 Archer, N. (2020). Borderland narratives: Agency and activism of early childhood educators [Doctoral dissertation, University of Sheffield]. https://etheses.whiterose.ac.uk/27993/

 Archer, N. (2021). ‘I have this subversive curriculum underneath’: Narratives of micro resistance in early childhood education. Journal of Early Childhood Research. https://doi.org/10.1177/1476718X211059907

 Archer, N. & Albin-Clark, J. (2022, July 20). Telling stories that need telling: A dialogue on resistance in early childhood education. FORUM for Promoting 3-19 Comprehensive Education, 64 (2) https://journals.lwbooks.co.uk/forum/vol-64-issue-2/abstract-9564/

 Bloch, M. N., Swadener, B. B., & Cannella, G. S. (Eds.). (2018). Reconceptualizing Early Childhood Education and Care-a Reader: Critical Questions, New Imaginaries & Social Activism. Oxford: Peter Lang.

 Brooker, L., & Woodhead, M. (2013). The right to play. Early Childhood in Focus, 9. The Open University with the support of Bernard van Leer Foundation.

 Davey, C., & Lundy, L. (2011). Towards greater recognition of the right to play: An analysis of Article 31 of the UNCRC. Children & Society, 25(1), 3-14. https://doi.org/10.1111/j.1099-0860.2009.00256.x

 Fairchild, N., & Kay, L. (2021, November 26). The early years foundation stage: Challenges and opportunities. BERA blog. https://www.bera.ac.uk/blog/the-early-years-foundation-stage-2021-challenges-and-opportunities

 Fenech, M., Sumsion, J., & Shepherd, W. (2010). Promoting early childhood teacher professionalism in the Australian context: The place of resistance. Contemporary Issues in Early Childhood, 11(1), 89-105. https://doi.org/10.2304/ciec.2010.11.1.89

 Leafgren, S. (2018). The disobedient professional: Applying a nomadic imagination toward radical non-compliance. Contemporary Issues in Early Childhood, 19(2), 187-198. https://doi.org/10.1177/1463949118779217

 Moss, P. (2019). Alternative narratives in early childhood. Abingdon: Routledge

 Murray, J., Smith, K., & Swadener, B. (2019). The Routledge international handbook of young children’s rights. Abingdon: Routledge. https://doi.org/10.4324/9780367142025

 Nicholson, J., & Wisneski, D. (2017). Introduction. Early Child Development and Care, 187(5-6), 788-797. https://doi.org/10.1080/03004430.2016.1268534

 Office for the High Commissioner for Human Rights (OHCHR). (1989, November 20). United Nations Convention on the Rights of the Child (UNCRC). https://www.ohchr.org/en

 Sahlberg, P., & Doyle, W. (2019). Let the children play. Oxford: Oxford University Press.

 Shimpi, P., & Nicholson, J. (2014). Using cross-cultural, intergenerational play narratives to explore issues of social justice and equity in discourse on children’s play. Early Child Development and Care, 184(5), 719-732. https://doi.org/10.1080/03004430.2013.813847

 Souto-Manning, M. (2017). Is play a privilege or a right? And what’s our responsibility? On the role of play for equity in early childhood education. Early Child Development and Care, 187(5-6), 785-787. https://doi.org/10.1080/03004430.2016.1266588

 Wong, S. (2013). A ‘Humanitarian Idea’: Using a historical lens to reflect on social justice in early childhood education and care. Contemporary Issues in Early Childhood, 14(4), 311-323. https://doi.org/10.2304/ciec.2013.14.4.311

 Wood, E. (2007). New directions in play: Consensus or collision? Education 3-13, 35(4), 309-320. https://doi.org/10.1080/03004270701602426

 Wood, E. (2015). The capture of play within policy discourses: A critical analysis of the UK frameworks for early childhood education. In J.L. Roopnarine, M. Patte, J.E. Johnson & D. Kuschner (Eds.), International perspectives on children’s play (pp. 187-198). Buckingham: Open University Press.

 Wood, E., & Hedges, H. (2016). Curriculum in early childhood education: Critical questions about content, coherence, and control. The Curriculum Journal, 27(3), 387-405.

“Building back better” with Inclusive Learning Assessments

School closures during the COVID-19 pandemic have threatened inclusion and exacerbated existing inequalities in education. Across the globe, children with disabilities are more likely to suffer from learning losses (OECD, 2020).

During this crisis, it was reported in Europe that limited guidance on inclusion was available from international organisations, that immediate measures were sometimes inadequate, that digital education challenged inclusion, and that only limited support could be provided to vulnerable children and their families.

Internationally, the term ‘building back better’ is increasingly being used in the global call for economic and social recovery in the post-COVID world. Our research shows that, in this context, education systems need to consider the role of reliable and rigorous learning assessment data in the education of children with disabilities. Education stakeholders will have a true picture of learning only when children with disabilities are included in all forms of learning assessment. On the ground, data will help teachers to target teaching appropriately so that every child progresses in their learning.

The Programme for International Student Assessment (PISA) countries, many of which are in Europe, have started including children with disabilities in the assessment programme. Despite an increase in the participation of children with special needs in PISA over the administration cycles, they still represent less than 3% of the total number of participants (LeRoy et al. 2019).

Equitable Learning Assessment in the Asia-Pacific Region

Let us move beyond Europe and look at the Asia-Pacific Region, where countries have diverse policies, barriers, preparedness, and progress when it comes to inclusive education and inclusive learning assessments in particular. In the Asia-Pacific region, there has been a sluggish transition to inclusive education (Wu-Tien, Ashman, & Yung-Wook, 2008; Forlin, 2010). Most countries have a dual system of schooling where children with moderate disabilities study in general schools and those with severe difficulties in special schools.      

Our review Equitable Learning Assessments for Students with Disabilities found that learning assessment practices vary across countries in the Asia-Pacific. Countries with a history of participating in national and international assessments try to make their assessments inclusive through accommodations. The review reported the use of testing accommodations in Australia, Hong Kong (SAR China), India, the Philippines, and Singapore in the Asia-Pacific region. However, children with severe disabilities or children who cannot be accommodated are left out. Some countries assess students with disabilities through formative methods (Chakraborty et al. 2019). It has to be kept in mind that inclusive learning assessments are an outcome of developments in inclusive education and advances in learning assessment.

Ideally, a single assessment should measure the learning of all students without the need for accommodations (Douglas et al., 2016). But in most education systems, accommodations are used to make assessments accessible to children with disabilities. However, the use of accommodations needs to be normalised in every level of testing – classroom, national, and international assessments (Chakraborty et al., 2019).

National-level policies on inclusive education and assessment practices determine to what extent children with disabilities are included in assessments. For example, in Hong Kong (SAR China), the SAME (Systematic Approach matching Mainstream Education) system provides access to children with disabilities to the central curriculum (Forlin, 2010). Similarly, specific country level mandates on assessment will support the inclusion of children with disabilities in classroom, national, or international assessments.

Across the world, teachers continue to face challenges in assessing students with disabilities (Hussu & Strle, 2010; Brookhart & Lazarus, 2017). In the Asia-Pacific countries, not many teaching staff have been trained in inclusive education (Sin, Tsang, Poon, & Lai, 2010). This is true even for Singapore, which has a reputation for high scores in international assessments: the Singaporean education system offers relatively short and less rigorous training for special educators (Walker & Musti-Rao, 2016).

Teachers as Agents of Change

Our review suggests that teachers are powerful change agents in making inclusive assessment a reality, especially in middle- and low-income economies. To enable this change, development partners should prioritise the professional development of teachers in the complex topics of inclusive education and learning assessments.

This training should include all teachers from pre-service (student teachers), in-service (part-time/full-time school teachers), to special educators. Along with this, professional development courses should be designed to eliminate stigma and prejudices about disabilities. Moreover, school leaders should be trained regularly as they are responsible for setting the culture for assessment and inclusion in schools (Chakraborty et al., 2019).

Investments in research and projects on inclusive education, professional learning, and learning assessments are critical for making advancements in the field of inclusive learning assessments. As education systems are being reshaped to close learning gaps in the wake of the COVID-19 pandemic, strong partnerships between development partners, governments, and non-government organisations can contribute immensely to this area of inclusive assessment.


Anannya Chakraborty,

Senior Communications Officer, ACER India

Anannya Chakraborty started working in the international development sector after completing her Post Graduate degree in Social Development from the University of Sussex, United Kingdom. In the last eight years, she has worked on various challenging social sector projects in the areas of research, knowledge management, and communications.
 
As a Senior Corporate Communications Officer at the Australian Council for Educational Research (India), Anannya works in global and Indian communications assignments along with commissioned research projects.
 
Before joining ACER, she also worked on ethnographic qualitative research and social and behaviour change communication projects for international development.
 
Anannya has presented at international conferences and forums organised by the European Educational Research Association and UNESCO Bangkok.
Amit Kaushik

CEO, ACER India

Amit Kaushik has been CEO of ACER India and a member of the Board of the Australian Council for Educational Research (India) since 2017. He specialises in consulting, policy planning, programme design, implementation, project management, monitoring and evaluation. His research interests include school management, quality improvement in education, skill development, non-formal education, inclusive education, and girls’​ education. From 2001-2006, Amit was Director, Elementary Education, in the Ministry of HRD, Government of India, where he was associated with the development and implementation of various policies related to Sarva Shiksha Abhiyan, as well as India’s international commitments on Education For All (EFA). Among other things, he worked closely on the 2005 draft of the Right to Education Bill, based on which The Right of Children to Free and Compulsory Education Act was passed in 2009. He has been a consultant to UNESCO Paris, Nigeria, Iraq and Lebanon, as well as to UNICEF Iraq and Yemen, working with them from time to time on assignments related to literacy, planning for Education for All, non-formal education, accelerated learning and the Global Partnership for Education.

ACER India is an independent, not-for-profit research organisation providing world class research, educational products and services to India and the South Asia region.


References and Further Reading

‘Building back better’ may seem like a noble idea. But caution is needed https://theconversation.com/building-back-better-may-seem-like-a-noble-idea-but-caution-is-needed-154587

Building back better – a sustainable and resilient recovery after COVID-19 https://www.oecd.org/coronavirus/policy-responses/building-back-better-a-sustainable-resilient-recovery-after-covid-19-52b869f5/

Building Back Better – achieving resilience through stronger, faster, and more inclusive post-disaster reconstruction https://documents1.worldbank.org/curated/en/420321528985115831/pdf/127215-REVISED-BuildingBackBetter-Web-July18Update.pdf

The Impact of COVID-19 on Inclusive Education at the European Level  https://www.european-agency.org/sites/default/files/COVID-19-Impact-Literature-Review.pdf

Students with special educational needs within PISA (LeRoy et al. 2019) https://www.tandfonline.com/doi/abs/10.1080/0969594X.2017.1421523

 

Equitable learning assessments for students with disabilities https://research.acer.edu.au/cgi/viewcontent.cgi?article=1037&context=ar_misc

Developing and implementing quality inclusive education in Hong Kong: implications for teacher education (Forlin, 2010) https://nasenjournals.onlinelibrary.wiley.com/doi/abs/10.1111/j.1471-3802.2010.01162.x

Including Pupils with Special Educational Needs and Disability in National Assessment: Comparison of Three Country Case Studies through an Inclusive Assessment Framework (Douglas et al., 2016) https://www.tandfonline.com/doi/abs/10.1080/1034912X.2015.1111306

The assessment of children with special needs (Hussu & Strle, 2010) https://cyberleninka.org/article/n/1191085

Formative assessments for children with disabilities (Brookhart & Lazarus, 2017) https://ccsso.org/sites/default/files/2017-12/Formative_Assessment_for_Students_with_Disabilities.pdf

Upskilling all mainstream teachers – what is viable? (Sin, Tsang, & Poon, 2010) https://www.taylorfrancis.com/chapters/edit/10.4324/9780203850879-37/upskilling-mainstream-teachers-viable-kuen-fung-sin-kok-wai-tsang-chung-yee-poon

Inclusion in High-Achieving Singapore: Challenges of Building an Inclusive Society in Policy and Practice (Walker & Musti-Rao, 2016) https://files.eric.ed.gov/fulltext/EJ1114835.pdf

 

 

This thematic review Equitable Learning Assessments for Students with Disabilities has been funded by the Australian Council for Educational Research (India). The authors are grateful to Network on Education Quality Monitoring in the Asia-Pacific (NEQMAP), UNESCO Bangkok for publishing the review.  

Full report: Chakraborty, A., Kaushik, A., & UNESCO Office Bangkok and Regional Bureau for Education in Asia and the Pacific. (2019). Equitable learning assessments for students with disabilities (NEQMAP thematic review). UNESCO Office Bangkok. https://unesdoc.unesco.org/ark:/48223/pf0000372301?posInSet=2%26queryId=e5a90c3e-c567-4f6a-9eeb-d2f712203481

Read about ACER’s ongoing review of professional development programmes on inclusive teaching and learning: https://www.acer.org/au/discover/article/reviewing-professional-development-programs-on-inclusive-teaching-and-learning

Results from a Survey on Post-Primary Teachers’ Experiences with Calculated Grading during COVID-19

In May 2020, as a result of Covid-19, the high-stakes assessment at the end of post-primary education in Ireland (the Leaving Certificate Examination – LCE) was cancelled and replaced by a system of calculated grades. In documentation sent to schools, the Department of Education and Skills (DES) made it clear that a calculated grade would result from the combination of two data sets:

  • an overall percentage mark and ranking in each subject awarded to each student by their teacher (the school-based estimation process)
  • data on past performance of students in each school and nationally (the standardisation process)

Following the issuing of results to students and the completion of the appeals process, an online questionnaire survey was conducted in the final months of 2020 by researchers at the Institute of Education, Dublin City University, with the aim of investigating how teachers engaged with the calculated grades process in their schools. Data from a total of 713 respondents were used in a report published by the Centre for Assessment Research, Policy and Practice in Education (CARPE) on April 15th 2021. This report is now available to download from www.dcu.ie/carpe. The following are some highlights from this report.

 

Assessment Evidence Used

Teachers considered many different types of formative and summative assessments when estimating marks and ranks for their students. Particularly important were final year exams prior to lockdown (98%) and final year continuous assessments (92%). Four out of every five teachers indicated that knowledge of how previous students had performed in the LC influenced their decision-making. Significantly, 88% said that formative assessments were also important. One respondent noted:

Personally, I feel very competent in assigning the predicted grades to my LC students in 2020 since I had assessed their performance in detail over a 2-year period…. Each exam/ portfolio/homework was assigned a weighting and a record of their performance updated to our Schoology platform. Students could readily assess their own progress over this period and all this data enabled a solid predicted grade for each student.

 

Teachers’ Reflections on the School-Based Estimation Process

At least 90% of teachers indicated that they were able to apply the DES calculated grades guidelines strictly when estimating marks and ranks for the majority of their students. However, some reported experiencing difficulties in adjudicating marks at grade boundaries.  For example, 61% said that they gave 5% or more of their students the benefit of the doubt and gave them a mark that moved them above a grade boundary, with 21% saying that they should have awarded a failing mark but didn’t.  One-third of respondents said that they awarded a higher mark for 5% or more of their students because they thought the national standardisation process might bring the student’s grade down.  While 73% said that the moderation process to align grades within their schools worked well, 26% reported raising a mark and 17% lowering a mark following engagement in the process. Significantly, the vast majority of teachers (92%) felt that the marks they awarded were fair.

 

Other Reflections

One in three respondents added commentary at the end of the questionnaire, with many focusing on the stress brought about by the fact that they lived in the same small communities as the students they were grading. Many identified parents, school management, media and politicians as sources of the pressure they felt.  One teacher expressed it thus:

I believe that while it would be ok for more teacher involvement in urban centres, the nature of rural and small town Ireland made the entire process very uncomfortable and I am sure that teachers will feel the rippling exponential impact of this for some time.

A number of events that transpired following the submission of school data to the DES were also highlighted as problematic.  The fact that the DES provided students with their rank order data came as a surprise to teachers and caused great disquiet. The removal, in late August, of school historical data from the standardisation process, following controversy about its use for calculated grades in the UK, was a source of great annoyance, especially among those working in high achieving schools. That said, some teachers noted that calculated grades had been an acceptable option in the context of a pandemic and that many students benefited from the fact that the grades awarded in 2020 were the highest ever.

 

Conclusion

The implementation of calculated grades in Ireland was a historic event as, for the first time since the introduction of the LCE in 1924, post-primary teachers engaged in the assessment of their own students for certification purposes. While difficulties arose, all those involved worked diligently to ensure that the class of 2020 could progress in their education and/or careers.  In 2021, Irish teachers will be asked to engage in a similar process while at the same time they will be preparing their students to take the traditional LC examinations.  The plan is that the two assessment systems will run side-by-side, and students will be given the option of choosing their best result in each subject.  Our hope is that findings from this survey will be useful to all those responsible for overseeing and implementing this challenging task.

References and Further Reading

Doyle, A., Z. Lysaght and M. O’Leary. 2021. Preliminary Findings from a Survey of Post-Primary Teachers Involved in the Leaving Certificate 2020 Calculated Grades Process in Ireland. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University. Accessed April 15, 2021. https://www.dcu.ie/sites/default/files/inline-files/calculated_grades_2020_preliminary_findings_v2_2.pdf

Doyle, A., Lysaght, Z., & O’Leary, M. 2021. High stakes assessment policy implementation in the time of COVID-19: The case of calculated grades in Ireland. Irish Educational Studies, 40. DOI: 10.1080/03323315.2021.1916565 https://www.tandfonline.com/doi/full/10.1080/03323315.2021.1916565 

Prof. Michael O'Leary,

Prometric Chair in Assessment, School of Policy and Practice, Institute of Education, Dublin City University

Michael O’Leary holds the Prometric Chair in Assessment at Dublin City University where he also directs the Centre for Assessment Research, Policy and Practice in Education (CARPE). He leads a programme of research at CARPE focused on assessment across all levels of education and in the workplace.

Dr. Audrey Doyle

Assistant Professor, School of Policy and Practice, Institute of Education, Dublin City University

Audrey Doyle is an assistant professor in the School of Policy and Practice in DCU. A former second-level principal of a large all-girls post-primary school in Dublin, she achieved her Ph.D. at Maynooth University in 2019. She now lectures on curriculum and assessment across a diversity of modules in DCU, contributing to the Masters in Leadership and the Doctorate in Education.

Dr. Zita Lysaght

Assistant Professor, School of Policy and Practice, Institute of Education, Dublin City University

Zita Lysaght is a member of the School of Policy and Practice and a Research Associate and member of the Advisory Board and Advisory Panel of CARPE at DCU. She coordinates and teaches classroom assessment and research methodology modules on undergraduate, masters and doctoral programmes and directs and supervises a range of research and doctoral projects.

Artificial Intelligence in Student Assessment: What is our Trajectory?

Bengi Birgili is a Research Assistant in the Mathematics Education Department at MEF University in Istanbul. Here she shares her research and insights into the development of Artificial Intelligence applications in the field of education and explains the current trajectory of AI in the Turkish education system.

As a mathematics teacher and doctoral candidate in educational sciences, I closely follow the latest developments in Artificial Intelligence (AI) applications in the field of education. Innovations in AI become outdated within a few months because of rapidly expanding research on image processing, speech recognition, natural language processing, robotics, expert systems, machine learning, and reasoning. By making much of their AI research open source, companies such as Google, Facebook, and IBM help speed up these developments.

If we think of education as a chair, the legs are the four essential parts that keep it standing: that is, the student, the teacher, the teaching process, and measurement-evaluation – the four basic elements of education. Key areas of AI for education are determining the right strategies, making functional decisions, and coming up with the most appropriate designs for the education and training process. I believe there are many areas in which teachers can work in cooperation with Artificial Intelligence systems in the future.

Human behaviour modelling

The main focus of AI studies worldwide is human behavior modelling. The relationship between how humans model thinking and how we can, therefore, accurately measure and evaluate students is still a subject of exploration. Essentially, the question is: how do humans learn, and how can we teach this to AI expert systems?

Presently, AI expert systems learn in three ways:

  • supervised learning
  • unsupervised learning
  • reinforcement learning

As an educator, whenever I hear these categories, I think of the conditional learning and reward-punishment methods we learn about in educational sciences. These methods, which are prevalent at the most fundamental level in the individual teaching and learning process, are central to the design of the AI systems being developed today, which build on the behaviorist approach in learning theories.

Just as in the classroom, where we reinforce a student’s behavior with a reward, praise, or acknowledgment in line with the behaviorist approach while teaching knowledge or skills – strengthening the frequency of the behavior and increasing the likelihood that the response will recur – an agent or machine under development learns from the consequences of its actions.
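
To make the analogy concrete, here is a minimal, purely illustrative sketch (not drawn from any system mentioned in this post) of an agent learning from the consequences of its actions. The two actions, their reward probabilities, and the update rule are hypothetical and deliberately simplified; they only show the reward-driven loop at the heart of reinforcement learning.

```python
import random

# Hypothetical environment: each action succeeds (reward 1) with some unknown probability.
reward_probability = {"action_a": 0.8, "action_b": 0.2}

value_estimate = {action: 0.0 for action in reward_probability}  # learned value of each action
counts = {action: 0 for action in reward_probability}
epsilon = 0.1  # how often the agent explores instead of exploiting

for step in range(1000):
    # Occasionally explore at random; otherwise pick the action with the best current estimate.
    if random.random() < epsilon:
        action = random.choice(list(value_estimate))
    else:
        action = max(value_estimate, key=value_estimate.get)

    # The consequence of the action: a reward of 1 or 0.
    reward = 1 if random.random() < reward_probability[action] else 0

    # Incrementally update the estimated value of the chosen action toward the observed reward.
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)  # the estimates converge toward the true reward probabilities
```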

AI in the Measurement-Evaluation Process

One area for the use of natural language processing in the measurement-evaluation process is the evaluation of open-ended examinations. In Turkey, large-scale assessment consists mostly of multiple-choice examinations, chosen for their broad scope, objective scoring, high reliability, and ease of evaluation. On the other hand, open-ended examinations are more challenging because they measure students’ higher-level thinking skills in much more detail than multiple-choice, fill-in-the-blanks, true-false, and short-answer questions.
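
To give a rough sense of what automated support for scoring open-ended answers can look like, the sketch below (purely illustrative, not a description of any operational system) compares a student’s answer with a reference answer using cosine similarity of word counts. Real NLP-based scoring engines use far richer linguistic features and trained models; the answers shown here are invented.

```python
import math
from collections import Counter

def word_vector(text):
    """Bag-of-words count vector for a short answer."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors (0 = no overlap, 1 = identical)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical reference answer and student answer.
reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student = "Plants turn light energy into chemical energy that is kept as glucose."

score = cosine_similarity(word_vector(reference), word_vector(student))
print(f"Similarity-based score: {score:.2f}")  # higher means closer to the reference answer
```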

Education systems in other countries make more use of open-ended items because they allow students to thoroughly use their reading comprehension skills. Also, students are able to demonstrate their knowledge in their own words and use multiple solution strategies, which is a better test of their content knowledge. But these open-ended items do not just measure students’ knowledge of a topic; they also draw on higher-level thinking skills such as cognitive strategies and self-discipline. This is an area in which AI studies have begun to appear in the educational literature.

Countries using open-ended items in new generation assessment systems include France, the Netherlands, Australia, and, in particular, the United States and the UK. These systems provide teachers, parents, and policymakers with the opportunity to monitor student progress based on student performance as well as student success. The development of Cognitive Diagnostic Models (CDM) and Computerized Adaptive Tests (CAT) has changed testing paradigms. These models classify students’ response patterns in a test into a series of characteristics related to different, hierarchically defined mastery levels. Another development is immersive virtual environments such as EcoMUVE, which can make stealth/invisible assessments, evaluating students’ written responses and automatically creating follow-up questions.
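
As a rough illustration of the classification idea behind cognitive diagnostic models, the toy sketch below (entirely hypothetical, not drawn from any of the systems cited here) matches a student’s pattern of right and wrong answers against skill-mastery profiles using a small Q-matrix and a deterministic “every required skill must be mastered” rule. Real CDMs are probabilistic and considerably more sophisticated.

```python
import itertools

# Hypothetical Q-matrix: which of two skills (A, B) each test item requires.
Q_MATRIX = [
    (1, 0),  # item 1 needs skill A
    (0, 1),  # item 2 needs skill B
    (1, 1),  # item 3 needs both skills
]

def expected_answers(profile):
    """Deterministic rule: an item is answered correctly only if every skill it requires is mastered."""
    return [int(all(mastered for mastered, needed in zip(profile, item) if needed)) for item in Q_MATRIX]

def classify(observed):
    """Pick the mastery profile whose expected answer pattern agrees with the observed answers on the most items."""
    profiles = itertools.product([0, 1], repeat=2)
    return max(profiles, key=lambda p: sum(e == o for e, o in zip(expected_answers(p), observed)))

# A student who got item 1 right but items 2 and 3 wrong is classified as
# having mastered skill A but not skill B.
print(classify([1, 0, 0]))  # -> (1, 0)
```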

AI in Student Assessment in Turkey

What we call “artificial intelligence (AI) in education” is a very broad concept. To simplify it, we can define it as a kind of expert system that sometimes takes the place of teachers (i.e., intelligent tutors) by making pedagogical decisions about the student in the teaching or measurement-evaluation process. Sometimes the system assists by analyzing the student in depth during the process, enabling them to interact with the system better. It aims to guide and support students. To make more computational, precise, and rigorous decisions in the education process, the fields of AI and the Learning Sciences collaborate, contributing to the development of adaptive learning environments and of more customized, inclusive, flexible, and effective tools by analyzing how learning occurs together with its external variables.

Turkey is a country of tests and testing. Its education system relies on selection and placement examinations. However, developments in educational assessment worldwide include individual student follow-up, formative assessments, alternative assessments, stealth assessments, and learning analytics, and Turkey has yet to find its own trajectory for introducing AI in student assessment.

However, the particular structure of the Turkish language makes it more difficult than in other countries to design, model, develop, and test AI systems – which explains the limited number of studies being carried out. The development of such systems depends on big data, so it is necessary to collect a lot of qualified student data in order to pilot deep learning systems. Yet the Monitoring and Assessment of Academic Skills report of 2015-2018 noted that 66% of Turkish students do not understand cause and effect relationships in reading.

In AI testing, students are first expected to grasp what they read and then to express what they know in answering questions, to express themselves, to come up with solutions, and to be able to use metacognitive skills. The limited number of students who can clearly demonstrate these skills in Turkey limits the amount of qualified data to which studies have access. There is a long way to go in order to train AI systems with qualified data and to adapt to the complexities of the Turkish language. In short, Turkey is not yet on a trajectory for introducing AI for education measurement and evaluation – we are still working to get ourselves on an appropriate trajectory. We are still oscillating through the universe. However, there are signs that the future in this area will be designed faster, addressing the questions I have raised.

The Outlook for AI in Student Assessment

While designing and developing such systems, it should be remembered that students and teachers also need to adapt to the system. Their readiness to do so will help us measure the quality of education in general as well as the level of students’ knowledge and skills in particular. Authentic in-class examinations and national and international large-scale assessments should serve the same purpose. In the future, we will need AI systems to play a greater role in generating and categorizing questions and evaluating student responses. And they need to do this in a system whose main goal must be to provide a learning process that positively supports the curiosity and ability of all our students.
Bengi Birgili

Research Assistant in the Mathematics Education Department at MEF University, Istanbul.

Bengi Birgili is a research assistant in the Mathematics Education Department at MEF University, Istanbul. She has research experience from the University of Vienna. She is currently a PhD candidate in the Department of Educational Sciences, Curriculum and Instruction Program, at Middle East Technical University (METU), Ankara. Her research interests focus on curriculum development and evaluation, instructional design, and in-class assessment. She received the Emerging Researchers Bursary Winners award at ECER 2017 for her paper titled “A Metacognitive Perspective to Open-Ended Questions vs. Multiple-Choice.”

In 2020, a co-authored study became one of the four accepted Early-Career Scholar studies awarded by the International Testing Commission (ITC) Young Scholar Committee in the UK [postponed to the 2021 Colloquium due to COVID-19].

In Jan 2020, she completed the Elements of AI certification offered by the University of Helsinki.

Researchgate: https://www.researchgate.net/profile/Bengi-Birgili-2

Twitter: @bengibirgili

Linkedin: https://www.linkedin.com/in/bengibirgili/

ORCID: https://orcid.org/0000-0002-2990-6717

Medium: https://bengibirgili.medium.com