A need for improved admissions screening to predict potential student academic success was identified in a strategic planning session of online academic institutions. There is increasing concern that student retention problems are related to the admissions process at these universities. Faculty are concerned that the critical writing skills of admitted students are not adequately evaluated until after students are financially committed to the enrolling institution. The scope of this paper is to research, analyze, and conduct a feasibility study of Latent Semantic Analysis (LSA) software that could be used to automate essay grading.
Introduction
Student retention is a source of concern for all postsecondary schools. Much research has been conducted on the subject to gain a clearer understanding of the underlying causes of e-learning dropout rates. Retention rates have been found to be correlated with admissions standards.
With dropout rates of up to 35% in some online academic institutions (Jones, Packham, Miller, & Jones, 2004), it is imperative that the following areas of concern are addressed:
· learner readiness for online learning
· identification of the learner’s academic strengths and weaknesses
· learner academic, technical and administrative support.
Two sets of factors directly affect student retention rates: extrinsic factors (personal) and intrinsic factors (institutional). The extrinsic factors fall into the categories of finances, family, time commitment, professional obligations, subject matter interest, and academic preparation. The intrinsic factors that directly impact retention rates are the quality and availability of study materials, technical support, and academic support.
For a student to be successful in online education, the learner must exhibit competence in the following areas (Colby, 1986):
· self-directed learning (able to manage their own learning)
· metacognitive development (interact with the content)
· collaborative learning (interact with facilitators and classmates virtually).
These competencies are discussed extensively at most institutions during the intake interview conducted by enrollment and admissions counselors. Prospective students are informed about the time commitment associated with their program of study, the financial commitment, and the personal impact of attending an online university. However, these universities cannot evaluate the extrinsic factors affecting the potential success of students, with the exception of academic preparation. It is imperative, therefore, that each university recognize that student admissions standards are a fundamental element in predicting college success.
Intrinsically, a university has the ability to stem the flow of exiting students by implementing stricter admissions guidelines. Instituting a two-step process to evaluate students could serve as a predictive measure of academic success. The use of both cognitive and non-cognitive measures would create a more complete picture of the applicant, allow additional support to be directed to students, and foster academic success. It would also inform the instructional strategies that are most effective for individual learners achieving learning success at a distance. Non-cognitive admission indicators are useful in predicting academic success (Colby, 1986), and there is a high correlation between critical writing skills and academic success. The purpose of this study is to investigate the efficacy of a fully automated pre-entrance assessment (objective and summative) for predicting the potential academic success of adult learners in an e-learning environment.
Current Practices
Currently, a screening assessment is distributed to applicants at most online institutions. The assessment evaluates four areas of student performance:
· Critical Comprehension
· Literal Comprehension
· Composition Skills
· Computation Skills
According to faculty, the driving factor behind success in an online environment is the written communication of ideas, a skill critical to success in all academic programs. At most online institutions, however, the first demonstration of a student's critical writing skills does not occur until the student enters the first course. The student enters the university with a false sense of security about potential academic performance in the chosen program. At that point it is far too late to assess writing skills and identify the student's “fit” with the university. The student is now financially obligated and has struggled, often floundering, through the first course. Advisors are then left with the task of recommending courses that they hope will meet the student's needs. There is great potential for students to enter with less than minimal writing skills that will hinder them for the rest of their time with an institution. From the first writing samples, faculty immediately recognize which students will struggle to make academic progress. Faculty strongly urge that a writing component be added to the screening assessment during the application phase.
The purpose of the essay component of the screening assessment would be to measure specific writing aptitudes. Essays accurately portray a student's current knowledge base and present a snapshot of writing and cognitive organization skills. Essay assessments require students to create their own unique answers, rather than choosing from a list of provided response options, while also demonstrating quality of writing. Essays assess non-cognitive qualities and are useful tools for identifying deficiencies in writing skills. Critical writing skills are a predictive measure of online success.
The potential benefit of a two-part screening assessment to each university is far-reaching. The proposed screening tool, using in-place automation, would give admissions counselors immediate feedback for selection purposes. The proposed new instrument would provide a consistent, objective, and unbiased evaluation of student performance in five areas instead of the current four. Specific feedback with a more focused skill analysis would be a valuable tool for gauging a potential student's overall writing ability, giving academic advisors and enrollment counselors an early indication of a student's strengths and weaknesses. This demonstration of the student's writing skills would assist the advisor in recommending appropriate placement in remedial or basic college composition courses, or an immediate recommendation for a language and communications competency exam.
The implementation of an outcomes-based admissions assessment would help align admission standards with each university's mission: to ensure that every minimally competent applicant admitted receives an opportunity for success. The ultimate effectiveness of this assessment would be measured by improved retention rates among online postsecondary students.
Latent Semantic Analysis
Implementing an essay assessment during the admissions application process has the potential to be a labor-intensive and costly proposition, yet an assessment component that identifies and screens for critical writing skills is a crucial part of predicting an applicant's potential success. Several software products currently automate essay scoring. These products use algorithms designed specifically for analyzing statistical data and content information drawn from pre-programmed domains of knowledge or from a "gold standard" essay (Page, 1994). The underlying algorithm is Latent Semantic Analysis (LSA). LSA analyzes an essay for the following components (a minimal code sketch of the core computation follows the list):
· Syntactic Variety – using parser technology, LSA identifies specific syntactic structures
o Subjunctive auxiliary verbs
o Clausal structures – complements, infinitives, and subordinate clauses
o Ambiguity
· Discourse Analysis – identifies a conceptual framework of conjunctive relationships cued by specific language constructions
o Discourse markers – words or phrases that indicate direction
o Conjunctions (and, or, but, nor, etc.)
o Pragmatic particles
· Content Vector Analysis – words weighted in proportion to their usage frequency
· Lexical Complexity Features – identifies the frequency of a number of word forms that may exist for use in different syntactic roles.
o Range
o Frequency
o Morphological vocabulary complexity (prefixes, free stem words, bound root words, form and meaning, how the forms combine)
· Grammar, Usage, and Mechanics – identifies errors in subject-verb agreement, verb form, punctuation, and typos.
· Confusable Words – homophones
· Undesirable Style – passive voice, repetition, etc.
· Discourse Elements – introduction, thesis statement, main idea, supporting details, conclusion.
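To make the preceding list concrete, the sketch below shows the core LSA computation in Python: build a term-document matrix from a small set of reference essays, reduce it with a truncated singular value decomposition, and compare a new essay with the "gold standard" essay by cosine similarity in the reduced space. The toy corpus, the choice of two dimensions, and the simple tokenizer are illustrative assumptions; commercial systems train on much larger, professionally scored corpora and add the syntactic, discourse, and grammar analyses listed above.

# A minimal sketch of the core LSA computation, assuming plain-text training
# essays and a single "gold standard" essay are available as Python strings.
import re
from collections import Counter
import numpy as np

def tokenize(text):
    """Lowercase word tokens; real systems add stemming and stop-word lists."""
    return re.findall(r"[a-z']+", text.lower())

def build_lsa_space(documents, k=2):
    """Build a term-document matrix and reduce it with a truncated SVD."""
    vocab = sorted({w for doc in documents for w in tokenize(doc)})
    index = {w: i for i, w in enumerate(vocab)}
    # Raw term frequencies; content vector analysis would weight these
    # (e.g., log-entropy or tf-idf) before the decomposition.
    A = np.zeros((len(vocab), len(documents)))
    for j, doc in enumerate(documents):
        for w, c in Counter(tokenize(doc)).items():
            A[index[w], j] = c
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return index, U[:, :k], s[:k]

def project(text, index, U_k, s_k):
    """Fold a new essay into the reduced semantic space."""
    v = np.zeros(len(index))
    for w, c in Counter(tokenize(text)).items():
        if w in index:
            v[index[w]] = c
    return v @ U_k / s_k

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy corpus standing in for pre-scored training essays plus a gold standard.
training = [
    "online learners must manage their own study time and stay motivated",
    "clear writing with a thesis and supporting details shows strong skills",
    "technical support and study materials help students persist in courses",
]
gold_standard = "strong writing skills and self directed study predict success online"
applicant = "students who manage their study time and write clearly succeed"

index, U_k, s_k = build_lsa_space(training + [gold_standard], k=2)
score = cosine(project(applicant, index, U_k, s_k),
               project(gold_standard, index, U_k, s_k))
print(f"semantic similarity to gold standard: {score:.2f}")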
LSA scores for information content and weighs it against factors in the quality of the writing. It looks for strong relationships between the semantic content of a text and the quality of the writing, using a component scoring system. LSA is an effective tool for scoring and commenting on essays, providing accurate judgments of the internal consistency of a text relative to the actual quality of the writing. This computational model provides evaluation on a secure server, produces scores that are an accurate measure of essay quality, and scores as precisely as a human rater. The scores can be delivered in two ways.
Either method of scoring provides a highly consistent and objective assessment of critical writing skills. Feedback of results is fully automated and is specifically articulated in a scoring guide. The scoring guides are linked to established writing standards and give an overall view of the student's writing skills.
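The following sketch illustrates how component-level results of the kind listed earlier might be rolled up into a single holistic score and a scoring-guide band. The component names mirror the list above, but the weights and cut-off values are assumptions made for illustration only; they are not the published rubric of any particular LSA product.

# Illustrative only: rolling component-level results up into scoring-guide
# feedback. The weights and band cut-offs below are assumptions.
from dataclasses import dataclass

@dataclass
class ComponentScores:
    syntactic_variety: float      # each component normalized to 0.0-1.0
    discourse_analysis: float
    content_similarity: float     # cosine similarity to the gold standard
    lexical_complexity: float
    grammar_mechanics: float

WEIGHTS = {
    "syntactic_variety": 0.15,
    "discourse_analysis": 0.20,
    "content_similarity": 0.35,
    "lexical_complexity": 0.15,
    "grammar_mechanics": 0.15,
}

BANDS = [(0.85, "Exceeds standard"), (0.70, "Meets standard"),
         (0.50, "Approaching standard"), (0.0, "Remediation recommended")]

def scoring_guide(scores: ComponentScores) -> tuple[float, str]:
    """Weighted holistic score plus the scoring-guide band it falls into."""
    total = sum(WEIGHTS[name] * getattr(scores, name) for name in WEIGHTS)
    band = next(label for cutoff, label in BANDS if total >= cutoff)
    return round(total, 2), band

print(scoring_guide(ComponentScores(0.7, 0.8, 0.65, 0.6, 0.9)))
# -> (0.72, 'Meets standard')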
Many postsecondary institutions have already implemented automatic scoring using LSA software to evaluate student writing. ETS uses e-rater and c-rater to assess the large volumes of essay responses administered in the GMAT, GRE, and TOEFL exams. ETS uses authentic topics developed by in-house assessment development experts that meet stringent assessment specification guidelines, and it has successfully scored over two million assessments (Washington Post, 2004). The RAND Corporation's Institute for Education and Training uses e-rater to measure analytical reasoning in its program. Other colleges and universities using LSA technology for automated essay grading include Azusa Pacific University, Baylor College of Medicine, The Citadel, the University of Maryland, the University of Oklahoma, and the University of Illinois, to list a few.
Besides ETS’s e-rater and c-rater (Criterion) products, there are many other LSA assessment products in use around the globe. The Intelligent Essay Assessor (IEA), developed by Thomas Landauer (University of Colorado, Boulder, one of the originators of LSA) and Peter Foltz (New Mexico State University), is distributed by Pearson Knowledge Technologies. The University of Colorado School of Technologies uses IEA to assess student essays in its Physical Sciences/Engineering/Information department (http://www.knowledge-technologies.com/).
Project Essay Grader, distributed by the Vantage Learning Corporation, is used by Indiana University–Purdue University Indianapolis (IUPUI) to assess prospective students in a one-hour admissions/placement essay exam.
QuestionMark's Perception assessment product line includes an essay grader that fully integrates with the Perception automated database. The U.S. Air Force's Air Education and Training Command currently uses QuestionMark's essay grader to assess some certification tests (http://www.questionmark.com/us/casestudies/index.htm).
Application of LSA
QuestionMark uses an online platform for the delivery of all objective assessments. The delivery system is fully automated on a secure server. An assessment is delivered to the student, scored, and recorded, and a snapshot of information (assessment results and individual component results) is disseminated to the assessment administrator within roughly 30 seconds. QuestionMark's essay grader component can provide the same immediate feedback, tailored to individual university assessment needs. It is a fully automated, touchless system that reports scores not only to the university but can also direct the feedback to the student via an email response.
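As a rough illustration of that workflow, the sketch below scores a submission, records the result, and drafts an email response in one automated pass. It is not QuestionMark's actual API; the function names, the in-memory results store, and the placeholder scorer are assumptions made for the example.

# A generic sketch of the score-and-notify workflow described above; all names
# and storage details are illustrative assumptions, not a vendor API.
import datetime
from email.message import EmailMessage

RESULTS_DB = []  # stand-in for the assessment platform's results store

def score_submission(essay_text: str) -> dict:
    """Placeholder scorer; a real deployment would call the LSA engine here."""
    return {"holistic": 0.72, "band": "Meets standard"}

def record_and_notify(applicant_email: str, counselor_email: str, essay: str) -> EmailMessage:
    """Score an essay, record the result, and draft feedback for the applicant."""
    result = score_submission(essay)
    RESULTS_DB.append({
        "applicant": applicant_email,
        "result": result,
        "scored_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    msg = EmailMessage()
    msg["Subject"] = "Admissions writing assessment results"
    msg["From"] = counselor_email
    msg["To"] = applicant_email
    msg.set_content(
        f"Your essay scored {result['holistic']:.2f} ({result['band']}). "
        "An enrollment counselor will follow up with placement recommendations."
    )
    return msg  # handing msg to an SMTP client would complete the email step

message = record_and_notify("applicant@example.com", "admissions@example.edu",
                            "Sample applicant essay text ...")
print(message["To"], "-", message.get_content().strip())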
Statistical analysis of composition has been conducted for over thirty years. LSA has been shown to grade with 85% rater reliability, compared with 80% reliability between two human judges, and the computer completes the task in significantly less time (an average elapsed rating time of 20-25 seconds). Human raters are influenced by many external factors, such as the time available to grade and reader bias. The greater burden of using human graders is the added expense, which is eventually passed on to the student in the form of tuition and fees. By instituting a fully automated essay assessment in the admissions process, enrollment counselors, in conjunction with the appropriate academic assessment development team, could better identify potentially successful students for writing-intensive programs. The cost factors involved would be minimal because the assessments would be developed in-house within QuestionMark, and the essay grader does not incur extra fees for its use. The current version of QuestionMark requires only minor reconfiguration to accommodate the essay grader component. An automated component scoring system would provide an accurate, unbiased judgment of writing quality and would be an effective tool for scoring and commenting on the essays of prospective students.
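The reliability figures cited above are agreement rates between raters. The short example below shows how exact-agreement rates of this kind are computed; the scores are invented toy data on a 1-6 scale, not results from any published study.

# Toy illustration of the rater-agreement comparison; the score lists below
# are made up and chosen only to show the computation.
def exact_agreement(rater_a, rater_b):
    """Fraction of essays on which two raters award exactly the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

human_1 = [4, 5, 3, 6, 2, 4, 5, 3, 4, 5]   # first human judge
human_2 = [4, 4, 3, 6, 2, 4, 5, 2, 4, 5]   # second human judge
machine = [4, 5, 3, 6, 2, 4, 5, 3, 4, 4]   # LSA-based scorer

print(f"human vs. human agreement:   {exact_agreement(human_1, human_2):.0%}")  # 80%
print(f"machine vs. human agreement: {exact_agreement(machine, human_1):.0%}")  # 90%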
The student admission experience is an essential factor in college success. The direct implications of incorporating a battery of admissions evaluations (an intake interview, an objective assessment, and an essay demonstrating critical writing skills) are extensive. With a more complete picture of each applicant, universities would have more information with which to correlate admission scores with potential academic success. Additionally, academic advisors could immediately identify those students who require some form of writing remediation and recommend a course of action for academic support. Implementing these new admission standards would help ensure that every minimally competent student admitted to the university has an equal opportunity to succeed in the e-learning environment.
References
Birch, P. (2002). E-learner competencies. Learning Circuits, American Society for Training and Development. Retrieved February 28, 2005, from http://www.learningcircuits.org/2002/jul2002/birch.html
Colby, A. Y. (1986). Writing instruction in the two-year college [Digest]. Los Angeles: ERIC Clearinghouse for Junior Colleges.
DeLoughry, T.J. (1995, October 20). Duke professor pushes concept of grading essays by computer. Chronicle of Higher Education, 42(8), A24.
Educational Testing Service. Integrating Criterion into your assessment and instructional activities. ETS Technologies. Retrieved March 1, 2005, from http://etstechnologies.com/html/integratingcriterion.htm
Foltz, P., Laham, D., & Landauer, T. (1999). Automated essay scoring: Applications to educational technology. EdMedia. Retrieved March 1, 2005, from http://www-psych.nmsu.edu/~pfoltz/reprints/Edmedia99.html
Foltz, P., Gilliam, S., & Kendall, S. (2000). Supporting content-based feedback in online writing evaluation with LSA. Interactive Learning Environments, 8(2), 111-129.
Hofmann, J. (2003). Building success for e-learners. Learning Circuits, American Society for Training and Development. Retrieved February 28, 2005, from http://www.learningcircuits.org/2003/jul2003/hofmann.htm
Hughes, J. (2004). Supporting the online learner [Digest]. Retrieved February 28, 2005, from http://www.athabascau.ca/main/studserv.htm
Jones, P., Packham, G., Miller, C., & Jones, A. (2004, December). An initial evaluation of student withdrawals within an e-learning environment: The case of e-College Wales. Electronic Journal of e-Learning, 2(1). Retrieved February 28, 2005, from http://www.ejel.org/volume-2/vol2-issue1/issue1-art13.htm
Murray, B. (1998, August). The latest techno tool: essay-grading computers. APA Monitor, 29(8).
Page, E.B. (1994). New computer grading of student prose, using modern concepts and software. Journal of Experimental Education, 62(2), 127-142.