Rite Aid’s ‘reckless’ use of facial recognition got it banned from using the technology in stores for five years



New York — 

Rite Aid has agreed to a five-year ban on using facial recognition technology after the Federal Trade Commission found that the chain falsely accused customers of crimes and unfairly targeted people of color.

The FTC and Rite Aid reached a settlement Tuesday after a complaint accused the chain of using artificial intelligence-based software in hundreds of stores to identify people Rite Aid “deemed likely to engage in shoplifting or other criminal behavior” and kick them out of stores – or stop them from coming inside.

But the imperfect technology led employees to act on false-positive alerts that wrongly identified customers as criminals. In some instances, the FTC said, Rite Aid employees publicly accused people of criminal activity in front of friends, family and strangers. Some customers were wrongly detained and subjected to searches, the FTC said.

Rite Aid said in a statement that it’s “pleased to reach an agreement” with the FTC but added that “we fundamentally disagree with the facial recognition allegations in the agency’s complaint.” The technology was a pilot program used only in a limited number of stores, the company said, and the test ended more than three years before the FTC’s investigation began.

The FTC’s legal filing, which contains customer complaints spanning from 2012 to 2020, said that some customers were “erroneously accused by employees of wrongdoing” because Rite Aid’s technology “falsely flagged the consumers as matching someone who had previously been identified as a shoplifter or other troublemaker.” The facial recognition software was mostly deployed in neighborhoods with large Black, Latino and Asian communities, the FTC said.

A facial recognition camera pointed at the entrance of a Rite Aid store in downtown Los Angeles, California, October 16, 2019.

“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a release.

The proposed order means that Rite Aid must “implement comprehensive safeguards” to prevent harm to its customers when deploying AI-based technology in its locations. The order also bars Rite Aid from using the technology if it “cannot control potential risks to consumers.”

“The safety of our associates and customers is paramount,” Rite Aid said. “As part of the agreement with the FTC, we will continue to enhance and formalize the practices and policies of our comprehensive information security program.”

The pilot program involved building a database of thousands of low-quality images of customers’ faces, taken from store cameras and employees’ phones, which were labeled “persons of interest” because Rite Aid believed the customers had engaged in criminal activity in its stores. The FTC is requiring Rite Aid to delete those images and notify customers that they are in the database.

Since Rite Aid is in bankruptcy proceedings, the FTC said its order would go into effect after approval from the courts.
