Lemonade: This $5 billion insurance company likes to talk up its AI. Now it’s in a mess over it

Yet less than a year after its public market debut, the company, now valued at $5 billion, finds itself at the center of a PR controversy linked to the technology that underpins its service.

On Twitter and in a blog post on Wednesday, Lemonade explained why it deleted what it called an “awful thread” of tweets it had posted on Monday. Those now-deleted tweets had said, among other things, that the company’s AI analyzes the videos that users submit when they file insurance claims for signs of fraud, picking up “non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process.”
The deleted tweets, which can still be viewed via the Internet Archive’s Wayback Machine, triggered an uproar on Twitter. Some Twitter users were alarmed at what they saw as a “dystopian” use of technology, as the company’s posts suggested its customers’ insurance claims could be vetted by AI based on unexplained factors picked up from their video recordings. Others dismissed the company’s tweets as “nonsense.”
“As an educator who collects examples of AI snake oil to alert students to all the harmful tech that’s out there, I thank you for your service,” Arvind Narayanan, an associate professor of computer science at Princeton University, tweeted on Tuesday in response to Lemonade’s tweet about “non-verbal cues.”

Confusion about how the company processes insurance claims, caused by its choice of words, “led to a spread of falsehoods and incorrect assumptions, so we’re writing this to clarify and unequivocally confirm that our users aren’t treated differently based on their appearance, behavior, or any personal/physical characteristic,” Lemonade wrote in its blog post Wednesday.

Lemonade’s initially muddled messaging, and the public reaction to it, serves as a cautionary tale for the growing number of companies marketing themselves with AI buzzwords. It also highlights the challenges posed by the technology: While AI can act as a selling point, such as by speeding up a typically fusty process like buying insurance or filing a claim, it is also a black box. It’s not always clear why or how it does what it does, or even when it’s being used to make a decision.

In its blog post, Lemonade wrote that the phrase “non-verbal cues” in its now-deleted tweets was a “terrible choice of words.” Rather, it said it meant to refer to its use of facial-recognition technology, which it relies on to flag insurance claims that one person submits under more than one identity; claims that are flagged go on to human reviewers, the company noted.

The explanation is similar to the process the company described in a blog post in January 2020, in which Lemonade shed some light on how its claims chatbot, AI Jim, flagged efforts by a man using different accounts and disguises in what appeared to be attempts to file fraudulent claims. Though the company did not state in that post whether it used facial recognition technology in those cases, Lemonade spokeswoman Yael Wissner-Levy confirmed to CNN Business this week that the technology was used then to detect fraud.
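For illustration only, here is a minimal sketch of how a claims system could use face embeddings to flag the same person filing under different identities, in the spirit of what Lemonade describes. It is not Lemonade’s actual code: the face-embedding input, the cosine-similarity test, and the 0.6 threshold are all assumptions.

```python
# Hypothetical sketch: flag claims whose video appears to show the same face
# as a claim filed under a different identity. Not Lemonade's actual system;
# the embedding input, threshold, and all names are illustrative assumptions.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Claim:
    claim_id: str
    user_id: str
    face_vector: np.ndarray  # embedding extracted from the claim video


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_duplicate_identities(new_claim: Claim, prior_claims: List[Claim],
                              threshold: float = 0.6) -> List[str]:
    """Return IDs of prior claims filed under other accounts whose face
    embedding closely matches the new claim's video."""
    flagged = []
    for prior in prior_claims:
        if prior.user_id == new_claim.user_id:
            continue  # same account is expected; we look for different identities
        if cosine_similarity(new_claim.face_vector, prior.face_vector) >= threshold:
            flagged.append(prior.claim_id)
    return flagged
```

In a pipeline like the one Lemonade describes, a non-empty result would not decide the claim; it would simply route it to a human reviewer.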
Though increasingly common, facial-recognition technology is controversial. The technology has been shown to be less accurate when identifying people of color. Multiple Black men, at least, have been wrongfully arrested after false facial recognition matches.
Lemonade tweeted on Wednesday that it does not use and isn’t trying to build AI “that uses physical or personal features to deny claims (phrenology/physiognomy),” and that it doesn’t consider factors such as a person’s background, gender, or physical characteristics in evaluating claims. Lemonade also said it never lets AI automatically decline claims.
But in Lemonade’s IPO paperwork, filed with the Securities and Exchange Commission last June, the company wrote that AI Jim “handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention.”

Wissner-Levy told CNN Business that AI Jim is a “branded term” the company uses to talk about its claims automation, and that not everything AI Jim does uses AI. While AI Jim uses the technology for some actions, such as detecting fraud with facial recognition software, it uses “simple automation” (essentially, preset rules) for other tasks, such as determining whether a customer has an active insurance policy or whether the amount of their claim is below their insurance deductible.
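As a rough illustration of the distinction Wissner-Levy draws, a rules-only check of that kind might look like the sketch below. The field names, the active-policy test, and the deductible comparison are assumptions for illustration, not Lemonade’s actual automation.

```python
# Hypothetical sketch of "simple automation": fixed, human-authored rules, no machine learning.
# The field names and the specific checks are illustrative assumptions, not Lemonade's rules.
from dataclasses import dataclass


@dataclass
class Policy:
    active: bool       # does the customer hold an active policy?
    deductible: float  # amount the customer pays before coverage applies


@dataclass
class ClaimRequest:
    amount: float      # dollar amount being claimed


def preset_rule_checks(policy: Policy, claim: ClaimRequest) -> str:
    """Apply fixed rules; anything the rules cannot decide is escalated."""
    if not policy.active:
        return "decline: no active policy"
    if claim.amount <= policy.deductible:
        return "decline: claim does not exceed the deductible"
    return "escalate: route to further review"


# Example: a $300 claim against an active policy with a $500 deductible is declined by rule.
print(preset_rule_checks(Policy(active=True, deductible=500.0), ClaimRequest(amount=300.0)))
```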

“It’s no secret that we automate claim handling. But the decline and approve actions are not done by AI, as stated in the blog post,” she said.

When asked how customers are supposed to understand the difference between AI and simple automation if both are done under a product that has AI in its name, Wissner-Levy said that while AI Jim is the chatbot’s name, the company will “never let AI, in terms of our artificial intelligence, determine whether to auto reject a claim.”

“We will let AI Jim, the chatbot you’re speaking with, reject that based on rules,” she added.

Asked if the branding of AI Jim is confusing, Wissner-Levy said, “In this context I guess it was.” She said this week is the first time the company has heard of the name confusing or bothering customers.