Edinburgh Research Archive

    Algorithms for assessing the quality and difficulty of multiple choice exam questions

    Download: Luger2016.pdf (3.554 MB)
    Date: 2016-06-27
    Author: Luger, Sarah Kaitlin Kelly
    Abstract
    Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests, as well as to support measuring performance in emerging massive open online courses (MOOCs). Research that explores what makes a question difficult, or which questions distinguish higher-performing students from lower-performing students, can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysing question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open-source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer and assess the difficulty of multiple choice inverse definition questions in the domain of introductory biology. The results of this work suggest that human-authored question banks provide useful data for building gold-standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the quality of the exam takers.
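    To make the answering approach concrete, the following is a minimal illustrative sketch (not the thesis's actual implementation) of answering an inverse definition MCQ: retrieve a definition for each answer alternative and pick the alternative whose definition best matches the question's definition text. The lookup callable and the token-overlap similarity used here are assumptions made for this sketch; the thesis describes its own retrieval and scoring methods.

    from typing import Callable

    def token_overlap(a: str, b: str) -> float:
        """Crude similarity: Jaccard overlap of lowercase word tokens (illustrative only)."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    def answer_inverse_definition_mcq(
        stem_definition: str,
        alternatives: list[str],
        lookup: Callable[[str], str],
    ) -> str:
        """Return the alternative whose retrieved definition best matches the stem.

        `lookup` stands in for the web-query step described in the abstract
        (fetching a definition for a scientific term); any callable will do.
        """
        scores = {alt: token_overlap(stem_definition, lookup(alt)) for alt in alternatives}
        return max(scores, key=scores.get)

    # Toy usage with hand-written definitions (hypothetical data, for illustration only):
    if __name__ == "__main__":
        toy_definitions = {
            "mitochondrion": "organelle that produces ATP through cellular respiration",
            "ribosome": "molecular machine that synthesizes proteins from mRNA",
            "chloroplast": "organelle where photosynthesis takes place in plant cells",
        }
        stem = "the organelle that produces ATP through cellular respiration"
        print(answer_inverse_definition_mcq(stem, list(toy_definitions), toy_definitions.get))
        # -> mitochondrion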
    URI: http://hdl.handle.net/1842/20986
    Collections
    • Informatics thesis and dissertation collection

    Related items

    Showing items related by title, author, creator and subject.

    • Automated question answering for clinical comparison questions 

      Leonhard, Annette Christa (The University of Edinburgh, 2012-06-25)
      This thesis describes the development and evaluation of new automated Question Answering (QA) methods tailored to clinical comparison questions that give clinicians a rank-ordered list of MEDLINE® abstracts targeted to ...
    • Information fusion for automated question answering 

      Dalmas, Tiphaine (The University of Edinburgh, 2007)
      Until recently, research efforts in automated Question Answering (QA) have mainly focused on getting a good understanding of questions to retrieve correct answers. This includes deep parsing, lookups in ontologies, ...
    • Ethnography of the status question and everyday politics in Puerto Rico 

      Ellis, Christopher David (The University of Edinburgh, 2015-06-30)
      This thesis is about the power of political elites to establish the framework of political discourse, and to thereby control political power, in Puerto Rico. The Puerto Rican 'status question' - the debate about the ...
