The World’s Most Advanced Scoring Engines
Knowledge Analysis Technologies™ (KAT)
The technology underlying Summary Street, Intelligent Essay Assessor (IEA), and WriteToLearn is based on the KAT engine, including Pearson’s unique implementation of Latent Semantic Analysis (LSA), an approach that learns to measure the semantic similarity of words and passages by analyzing large bodies of relevant text. Once trained, LSA closely approximates the degree of similarity in meaning between two texts as judged by human readers.
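Pearson’s implementation is proprietary, but the core idea of comparing passages as vectors can be sketched. In this simplified stand-in, passages become term-count vectors compared with cosine similarity; a full LSA pipeline would first apply a truncated singular value decomposition to the term-passage matrix so that texts sharing no words but similar topics still score as related. The sample passages are invented for illustration.

```python
import math
from collections import Counter

def term_vector(text):
    """Bag-of-words term counts for one passage (lowercased tokens)."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# An on-topic student response should sit closer to the model answer
# than an off-topic one does.
essay = term_vector("the water cycle moves water between oceans and clouds")
model = term_vector("water evaporates from oceans and returns as rain from clouds")
off_topic = term_vector("the stock market closed higher on strong earnings")

print(cosine_similarity(essay, model) > cosine_similarity(essay, off_topic))  # True
```

The SVD step that this sketch omits is what lets LSA generalize beyond exact word overlap, which is why the raw cosine here is only the starting point of the technique.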
Intelligent Essay Assessor (IEA)
An Internet-based tool for automatically scoring the quality of electronically submitted essays, IEA uses Pearson’s state-of-the-art Knowledge Analysis Technologies™ (KAT) engine, which automatically evaluates the meaning of text as well as grammar, style, and mechanics. IEA can also evaluate short constructed responses; in tests with thousands of such responses, it has proven as reliable as professional human scorers.
Reading Maturity Metric (RMM)
In the past, the only way to automatically evaluate the reading level of a text was to measure the length of its words and sentences and the difficulty of the words used. These methods rely on features that merely correlate with difficulty rather than cause it, and they ignore the shifts in vocabulary and syntax that occur as readers advance from elementary school through university-level reading. Pearson’s RMM more accurately models the way words are learned through reading. It does a better job of identifying the relationships in meaning between words in a text, providing a much more accurate measure of complexity for texts of all sizes and types. All of these calculations are completed within seconds, allowing you to quickly choose the most appropriate level of text for your middle school classroom, your university seminar, or your children at home.
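The traditional length-based approach the passage criticizes can be made concrete. The sketch below implements the classic Flesch-Kincaid grade-level formula (0.39 × words per sentence + 11.8 × syllables per word, minus 15.59), with a rough vowel-group syllable counter; the sample texts are invented, and RMM itself works quite differently from this.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level, a classic length-based readability formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

simple = "The cat sat on the mat. The dog ran."
dense = ("Quantitative readability estimation traditionally emphasizes "
         "superficial lexical and syntactic characteristics.")
print(flesch_kincaid_grade(simple) < flesch_kincaid_grade(dense))  # True
```

Because the formula sees only word and sentence lengths, a short sentence full of rare words can score as “easy,” which is exactly the kind of failure a semantics-aware metric is meant to address.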
Versant™
The Versant testing system, based on the patented Versant technology, uses a speech-processing system specifically designed to analyze speech in a way that distinguishes between native and non-native speakers. In addition to recognizing words, the system locates and evaluates relevant segments, syllables, and phrases in speech. Statistical modeling techniques are then applied to assess the spoken performance. With an average correlation of 0.97 across Versant tests, the computer-generated Versant scores are virtually indistinguishable from human scoring.
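The 0.97 figure refers to the Pearson product-moment correlation between machine-generated and human scores. The helper below is a standard textbook implementation of that statistic, not Versant’s internal code, and the score lists are invented solely to illustrate how such a validity check is computed.

```python
import math

def pearson_correlation(xs, ys):
    """Standard Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical machine and human scores for the same eight test takers.
machine = [42, 55, 61, 48, 70, 35, 66, 58]
human   = [44, 53, 63, 47, 71, 33, 68, 57]

# Prints a value close to 1.0 when the two raters are highly consistent.
print(round(pearson_correlation(machine, human), 2))
```

A correlation near 0.97 means the machine ranks test takers almost exactly as human raters do, which is the sense in which the scores are “virtually indistinguishable.”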
An effort by Pearson, the giant education company, to recruit people to score essays on state standardized tests through ads on Craigslist has sparked controversy in Texas.
Here’s what’s going on in Texas, where there has been a growing revolt among parents, school boards and educators against obsessive standardized testing:
Pearson ran an ad on Craigslist last Nov. 29 looking for people to be hired and then trained to score the written portion of the Texas Assessment of Knowledge and Skills. Pearson has a contract with the state to write, administer and score the tests.
According to a press release issued by leaders of a coalition of people in Texas seeking to reform standardized testing in the state, the ad says college graduates would be paid $12 an hour for doing the job. According to the Craigslist ad, “Bachelor degree required – any field welcome.”
Three leaders of the coalition — Tom Pauken, a commissioner of the Texas Workforce Commission; Thomas Ratliff, a member of the State Board of Education; and Dineen Majcher, a founder of the grassroots group Texans Advocating for Meaningful Student Assessment — issued a release that said:
Prior to seeing this ad, we have heard concerns from across the state about the state’s standardized testing system, the rigidity of the state’s accountability system, and the quality of the people grading the written portion of the state’s test and the consistency of the results. Now we have a better idea why.
To be fair, Pearson and the Texas Education Agency have developed “rubrics” to help train these people to grade our student’s tests. These rubrics can be found at http://www.tea.state.tx.us/student.assessment/staar/writing/. So, in addition to the concern that teachers are “teaching to the test,” now our test graders are being “taught how to grade the test”.
This highlights what we think is another weak link in the accountability chain. This type of training results in grading based on static formulas by people who aren’t truly qualified to grade the quality of a student’s writing skills. This results in teachers teaching students to “write for the test”, not write well. We can do better.
The three want the Texas Legislature and the Texas Education Agency to find a way to make sure that English teachers grade the written portions of the standardized tests. They wrote:
We view this as a win/win/win for Texas Public Education. The parents and taxpayers win because they have more confidence in the grading process. TEA and Pearson win because they get a higher quality grader that provides better data to evaluate our student’s performance. The teachers win because they have more confidence in the grading system and they have a potential to earn a little extra money.
Discussions are now under way in Texas about how the Legislature in the upcoming session will change the standardized testing system, and sources say that one idea being considered is stipulating who can grade the exams.
Pearson responded to the release with a statement (you can read the whole thing here) called “Just the Facts: How Pearson Hires Test Scorers” that says in part:
Pearson conducts an exhaustive search for the very best people to score student tests. Pearson works with the same employment resources used by school systems across the state such as the Austin, Houston and San Antonio independent school districts to promote career opportunities.
Pearson advertises broadly for qualified test scorers. The extensive search for test scorers in Texas includes advertisements with 21 different organizations and publications including the National Council of Teachers of English, the Austin American-Statesman newspaper and the Texas Workforce Commission….
….All test scorers hired by Pearson must have at least a four-year degree and undergo very rigorous, state-approved training before they are allowed to begin work. As part of this rigorous training, applicants must complete and pass practice sets before being eligible to work. The rigorous training program for scorers was developed with the Texas Education Agency, and TEA must approve all final training materials.
Pearson said it has more than 1,800 full-time and 2,425 part-time employees, many of them former educators, working at Pearson’s Texas offices in Austin, Dallas and San Antonio.
Read more here about the standardized test revolt in Texas and other states.