
Accused of cheating by an algorithm, and a professor she’d never met

    Dr. Orridge did not respond to requests for comment for this article. A Broward College spokeswoman said she could not discuss the case because of student privacy laws. In an email, she said that faculty were “using their common sense” about what they see in Honorlock reports. She said an initial warning for dishonesty would appear on a student’s record but would not carry more serious consequences, such as preventing the student from graduating or transferring credits to another institution.

    Honorlock has not previously disclosed exactly how its artificial intelligence works, but a company spokeswoman said the company performs face detection using Rekognition, an image analysis tool that Amazon has sold since 2016. The Rekognition software looks for facial landmarks – nose, eyes, eyebrows, mouth – and returns a confidence score that what is on screen is a face. It can also infer emotional state, gender and angle of view.
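
    For readers curious what such a call looks like in practice, here is a minimal sketch using Amazon’s public boto3 SDK. It is not Honorlock’s actual code; the file name and setup are placeholders, and it simply shows the kinds of output the article describes – a face-confidence score, landmarks, gender, emotion and head angle.

```python
# A sketch of the Amazon Rekognition DetectFaces call described above, using boto3.
# This is not Honorlock's code; it only illustrates the fields the article mentions.
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are already configured

with open("webcam_frame.jpg", "rb") as f:  # hypothetical webcam snapshot
    frame = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": frame},
    Attributes=["ALL"],  # request emotions, gender and pose, not just the default landmarks
)

for face in response["FaceDetails"]:
    print("Confidence this is a face:", face["Confidence"])
    print("Landmark types (nose, eyes, mouth, ...):", [lm["Type"] for lm in face["Landmarks"]][:5])
    print("Inferred gender:", face["Gender"]["Value"])
    print("Top emotion:", max(face["Emotions"], key=lambda e: e["Confidence"])["Type"])
    print("Head pose (yaw, pitch, roll):", face["Pose"])
```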

    Honorlock will mark a test taker as suspicious if it detects multiple faces in the room, or if the test taker’s face disappears, which can happen when people cover their faces with their hands in frustration, said Brandon Smith, Honorlock’s president and chief operating officer.
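
    As an illustration only, the rule Mr. Smith describes could be reproduced on top of the DetectFaces response from the sketch above. The confidence threshold and the decision logic here are assumptions; Honorlock’s actual system is proprietary.

```python
def flag_frame(face_details, min_confidence=90):
    """Illustrative version of the rule Mr. Smith describes: no visible face,
    or more than one face, marks the frame as suspicious. The threshold and
    structure here are assumptions, not Honorlock's implementation."""
    faces = [f for f in face_details if f["Confidence"] >= min_confidence]
    if not faces:
        return "flag: test taker's face not visible"
    if len(faces) > 1:
        return "flag: multiple faces detected in the room"
    return None  # nothing suspicious in this frame

# Example, reusing the response from the previous sketch:
# print(flag_frame(response["FaceDetails"]))
```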

    Honorlock also sometimes uses human employees to monitor test takers; “live proctors” will pop in by chat when an exam accumulates a high number of flags to find out what is going on. Recently, these proctors have found that Rekognition was mistakenly registering faces in photos or posters as additional people in the room.

    When something like this happens, Honorlock tells Amazon’s engineers. “They’re taking our real data and using it to improve their AI,” said Mr. Smith.

    Rekognition was supposed to be a step up from what Honorlock had used before. A previous face detection tool from Google was worse at detecting the faces of people with darker skin tones, Mr. Smith said.

    But Rekognition has also been accused of bias. In a series of studies, Joy Buolamwini, a computer researcher and executive director of the Algorithmic Justice League, found that gender classification software, including Rekognition, worked least well on dark-skinned women.