iPeople – Can They Be Held Liable?

By Annie Dike, Esq.

Fear of jobs being replaced by machines dates back at least to the invention of the cotton gin in 1793, and perhaps earlier. But as new machines, computers, and robots have been developed and jobs have been lost, we have learned that new jobs are created: programmers, code-writers, and so on. Society adapts. However, as technology continues to advance, the artificial intelligence capabilities of some of these new "machines" make them seem more human than ever. As lawyers, we have to wonder: can these new iPeople be held liable?

Think about the bank teller and the fear he must have felt when the invention of the ATM was announced, or the cashier when self-checkout was introduced. We all remember when the self-checkout register looked foreign and felt like something out of science fiction. Now they are everywhere. Who is willing to admit, though, to opting out of self-checkout because the machines so often malfunction? One point for the humans.

Evia, a new auto insurance "agent," was recently launched in Cambridge, Massachusetts by Insurify, a computer-based insurance placement provider founded by a 39-year-old native of Bulgaria. Think of it as Travelocity or Priceline for your auto insurance. You snap a photo of your license plate and text it to Evia. She asks you a few questions via text, then scours the plans of eighty-two different insurance providers and reveals the best one for the money, in an instant. Evia is not the equivalent of your old insurance agent; she is smarter, faster, and cheaper than a human. Most smart machines are these days. The amount of data they can process "in an instant," the algorithms they can create to predict patterns, and their ability to teach themselves are mind-blowing. Soon we will likely see many more "iPeople" in our lives, perhaps as postmen, drivers, waiters, and flight attendants.

The questions abound. It is counterintuitive to think about the "mind" of these iPeople. Will they be expected to behave as humans must in order to avoid liability? Can they be found negligent or reckless? Can they breach a fiduciary duty to their clients? What about other actions they may take that might give rise to liability?

In a recent BullsEye article, we discussed the Ninth Circuit's finding that a statement made by a machine is not hearsay. This may change over time as the capabilities of artificial intelligence continue to advance. It is also interesting in light of the debate between Apple and the FBI, which has now escalated to the question of whether computer code qualifies as "speech" protected under the First Amendment. "Smart" machines, through their development, advancement, and increasing integration into our everyday lives, are now starting to influence legal decisions and legislation, and likely our findings of liability in litigation as well.

What kind of impact do you see this having on litigation? Perhaps a new standard for proper machine performance, a breach of which could create liability. Or, like the court's handling of a statement made by Google Earth, will the machines be immune?

Annie Dike, Esq.

As a former trial and litigation attorney, Annie Dike has a keen eye for expert evidentiary issues and a clear voice for practical solutions. Annie is a published author of fiction, non-fiction, and a comprehensive legal practitioner's guide to hourly billing published by LexisNexis. Annie graduated from the University of Alabama School of Law cum laude. While in law school, she served as Vice President of both the Bench and Bar Legal Honor Society and the Farrah Law Society, and was a member of the Alabama Trial Advocacy Competition Team as well as Lead Articles Editor of The Journal of the Legal Profession. Ms. Dike has published articles in The Alabama Lawyer and DRI MedLaw Update and has spoken on numerous legal issues at conferences nationwide.