Could Investors Sour on Israeli AI?
Sloppy practices by insurtech Lemonade highlight what local AI startups should do
There are nearly 400 artificial intelligence (AI) startups in Israel. The country has more than 10% of the world’s AI startups, according to data from Statista. Investors have poured more than $4.7 billion into funding those firms in the last five years.
Is it possible that recent behavior by just one Tel Aviv firm could stoke skepticism over all that is AI here?
Unlikely, but never say “never.”
The Tel Aviv firm in question is insurance-tech company Lemonade (NYSE:LMND), less than six months into its life as a publicly traded company and already possibly running afoul of advice from the U.S. Federal Trade Commission.
Chatbots Say the Darndest Things
The investment world got its first introduction to Lemonade’s chatbots back in January when the company filed for its IPO. Far from bit players in the automation that underpins the firm’s business model, chatbots AI Maya and AI Jim each make a dozen appearances in the prospectus narrative.
Chatbot AI Jim “handles” claims from Lemonade’s insured customers who make a selfie video explaining what happened. “AI Jim is our claims bot, and, as of September 30, 2020, 96% of the time, it is AI Jim that will take the first notice of loss from a customer making a claim,” reads the S-1. “Claims are commonly paid and declined” by the chatbot.
But in explaining how that works, Lemonade last week tweeted (and later deleted):
“Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process.”
That raised the ire of AI researchers, who questioned whether such capabilities even exist yet and, if they did, whether using them would be legal.
The blowback, from customers and AI researchers, was fast and fierce. Lemonade quickly backtracked and wrote something of a “mea culpa” on its blog, positing that its tweets “led to a spread of falsehoods and incorrect assumptions, so we’re writing this to clarify and unequivocally confirm that our users aren’t treated differently based on their appearance, behavior, or any personal/physical characteristic.”
“The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities,” reads the post from Team Lemonade.
The Face Rings a Bell (and Sets Off Alarms)
“We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims,” Team Lemonade wrote.
And right there, the company stepped on AI’s third rail… and possibly caught the attention of the FTC.
That’s because just weeks ago, the consumer watchdog described the behaviors that would likely draw its scrutiny to the AI industry. “Hold yourself accountable – or be ready for the FTC to do it for you,” warned the agency in an April 19 business blog post.
“The FTC, in this case, is telling companies to not exaggerate what their algorithms can do and to be on alert for discriminatory outcomes,” Marketplace Tech’s Molly Wood reported on April 23.
Following last week’s Lemonade furor, she revisited the FTC’s warning, observing in conversation with Ryan Calo, a law professor at the University of Washington, that the agency made it clear “don’t do A, don’t do B” and “here’s Lemonade bragging about doing those things” just weeks later.
“News cycles are fast. Consent decrees last for 20 years. For the long run, it’s important that federal bodies pursue these kinds of things,” Calo said.
Why This Matters for Israeli AI
It would be a shame if those nearly 400 Israeli AI startups were painted with the same brush. And while Israeli companies in general can be myopic about the regulatory and social-impact landscape, local startups could get a jump on things by learning from this experience.
They could embrace the FTC’s advice: tell the truth about how they use data, and don’t exaggerate what their super-secret algorithms can do or whether they can deliver fair and unbiased results.
But perhaps most importantly, “watch out for discriminatory outcomes.” That third rail of AI is electrified with fears, likely more real than imagined, that the technology is biased against Black, Indigenous, and other people of color.
One way to do that is to ensure that the company hires and is led by a group of diverse executives and board members. As of May 30, Lemonade’s website displayed information about the five members of its all-male executive team, four of whom are White. The eight-person board of directors includes one male person of color and one White woman, both external directors.
As Carey Ann Nadeau, co-founder and co-CEO of another insurance tech startup, Loop, tweeted in one researcher’s thread commenting on Lemonade’s behavior last week:
On the date of publication, Robert Lakin did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer.