Assessment and Development Matters

Edited by: Dermot O’Halloran
  • Online ISSN: 2752-8111
  • Print ISSN: 2040-4069

Assessment & Development Matters (ADM) brings the latest news and developments on tests and testing-related matters to certified test users, and is the only specialist magazine in its market. It is circulated to over 12,000 qualified educational, forensic, and occupational test users in the UK and overseas.

  • Article
    What competencies will be needed to manage Artificial Intelligence in the workplace? (A human perspective)

    Artificial Intelligence is evolving at a breathtaking pace. It offers huge opportunities, yet creates significant challenges for virtually every organisation. Even the leaders of the companies at the forefront of unleashing its capabilities seem unsure of its power, and governmental authorities are unsure how – or even whether – Artificial Intelligence should be controlled. Against this uncertain backdrop, all organisations urgently need to define – or revise – their competencies, so that these opportunities can be maximised in practice and the threats managed. This article explores what competencies might be relevant for all organisations facing a new world of AI.

  • Article
    Climbing the uncanny valley

    Much has been said of AI’s risks in the hands of criminals, particularly recent advances in generative AI. But should we be concerned? This article offers a perspective that departs from the prevailing opinion amongst experts, arguing that we have less to be concerned about than is speculated. Fraud has been around since the dawn of the human condition. Many of our modern advances have extended its reach and impact, but it is doubtful that AI will be one of them.

  • Article
    Artificially disinformed and radicalised: How AI produced disinformation could encourage radicalisation

    Rapid advancements in artificial intelligence technologies have enhanced the ability of individuals to generate substantial amounts of seemingly genuine discussions, images and/or videos tailored to promote specific narratives. Unfortunately, this advancement has also provided a valuable tool for actors who seek to promote potentially harmful ideologies and spread disinformation to large online audiences. By leveraging AI, these individuals can significantly enhance their recruitment efforts and bolster their perceived credibility by producing seemingly legitimate but artificially fabricated evidence that supports their proposed narrative. This pressing issue is discussed in terms of its potential to encourage radicalisation in users exposed to such artificially produced disinformation. Not only does it pose a risk to the integrity of people’s perception of truth, it also has the potential to exacerbate the likelihood of radicalisation occurring.

  • Article
    The role of Artificial Intelligence in digital forensics: Case studies and future directions

    The increase in digital evidence, especially in cases involving Indecent Images of Children (IIOC), presents a pressing challenge for law enforcement agencies. In this article, we discuss two of the most prominent types of Artificial Intelligence (AI) and how they can be used in digital forensic processes, providing examples and highlighting challenges likely to be experienced in developing and adopting AI. The two main types are Data-Driven Model (DDM) age classification and Model-Based Reasoning (MBR); examples of both are provided and discussed in the context of IIOC investigations.

  • Article
    Large-scale testing in the face of AI

    This article examines the expansive growth of ChatGPT and its implications for large-scale test design. The authors contend that the impressive test simulation results achieved by ChatGPT underscore ongoing construct validity concerns with student testing. To address these challenges, a set of strategies is proposed that emphasises authentic assessment, the importance of human elements in traditional paper-and-pencil questions, and the controversial issue of the stakes attached to test results. Collectively, these approaches are meant to help test developers more carefully consider the existing limitations of traditional standardised and large-scale assessment programmes. Ultimately, test design reforms that enhance validity are increasingly needed to address the challenges posed by AI applications.

  • Article
    The dark side of Artificial Intelligence – Risks arising in dating applications

    Online dating applications provide a playground of opportunity for fraudsters and scammers hiding behind a smartphone screen. With easy access to artificial intelligence, the technological capabilities of nefarious individuals are growing quickly, from sophisticated chatbots designed to engage in conversation and extract personal data to deepfake technology used to create convincing false personas. This article summarises the current and emerging risks which artificial intelligence poses to dating application and social media users. Deepfake technology is a key risk: the world is seeing greater use of attractive deepfake images to draw dating app users into romance scams, face-swaps used to target and blackmail social media users with their intimate images, and the instant generation of child sexual abuse material. Other risks include stalkers tracking their victims with greater ease, and individuals downloading nefarious dating applications which use chatbots to gather information for profit. Gaps in empirical research are identified and discussed.
