While practicalities like ensuring a proper ROI and increasing the efficiency, effectiveness and safety of UAV operations are a priority in the present, it’s just as important to think about what developments in automation and artificial intelligence (AI) will mean for the future of the technology. Considering the legal ramifications of that future is exactly what host Steve Hogan did during the Drones & A.I.: Flying Robots, Artificial Intelligence, and the Law! episode of his Drone Law Today podcast series.

Steve discusses how AI could fundamentally change what operators will be able to do with a drone, and considers how that will impact legal matters like responsibility. I wanted to dig into the concepts he talked through a bit more, so I asked him what it will mean for operators to utilize “smart” drone technology, how this topic relates to driverless cars, and a few other things.

Find out what he has to say about all of that and more via the interview below, before or after you listen to the episode on iTunes or Stitcher, or even in your browser.

 

Jeremiah Karpowicz: Do you think conversations about the potential development of AI for drones will change what a person or company expects from a UAV? Is that a good thing or a bad thing?

Steve Hogan: Expectations will absolutely change. This is a function of increasing capabilities of drone systems. The things that drone companies are working on are going to require AI in order to function. For example, Tim Ferriss recently interviewed venture capitalist (and tech icon) Marc Andreessen about emerging tech. It came out in the interview that Andreessen’s VC fund is backing a drone startup that will help police track suspects in real time – that capability is going to require an AI loadout advanced enough that the drone can make decisions on its own without waiting for human input. (Link is here – the drone discussion starts at about 45 minutes in: http://fourhourworkweek.com/2016/05/29/marc-andreessen/).

I tend to think that greater expectations of technology are a good thing. When tech works well, we don’t think about it – it fades into the background and we think about the next thing. That’s what humans do. I don’t see a reason why drones and AI wouldn’t be subject to the same dynamic.

 

The Newsweek article, “Once Drones Get Artificial Intelligence, They’ll Rule the World,” served as your jumping-off point for this episode, and it was somewhat jarring to hear the author refer to drone technology as still being very “dumb.” In your view, what kind of capabilities do we need to be dealing with in order to talk about “smart” drone technology?

Futurist and tech icon Kevin Kelly talks a great deal about AI. He’s said, if I remember correctly, that we keep “defining away” what we mean by AI. We used to say that it would take “AI” to play and win a game of chess. When that happened, we immediately said, “well, that’s just a mechanical programming of game rules – not ‘intelligence,’ per se.” The truth is that AI keeps getting better, all the time. It will continue to do so in ways that we may keep discounting out of an instinctive human bias.

It doesn’t have to be oppositional, though. In this interview with James Altucher, Kevin Kelly talks about his upcoming book, “The Inevitable,” and how AI and humans will end up working together to make a whole that is greater than the sum of their parts.

Thinking about drone/human collaboration is a healthier way to approach the issues, I believe.

 

As you were talking through how AI will impact issues like responsibility, I couldn’t help but think of the developments we’re seeing in autonomous vehicles. How much of a correlation do you see in these areas?

Oh, it’s huge. Autonomous/driverless technology is all of a piece – the air, ground, and sea domains are all using similar technologies to solve their specific problems. The AI and robotic responsibility issues that the three domains face will differ because of what can go wrong in each one. For example, a driverless car can kill a person much more easily than a fully autonomous DJI Phantom. One weighs multiple tons, the other less than 55 pounds. The difference is obvious.

With that said, it’s telling that companies like Ford have hired ethicists to work with them on their driverless car initiatives. (Example: http://money.cnn.com/video/news/2015/01/15/ford-driverless-car-ethics.cnnmoney/) I’ve heard of other companies doing the same. This means that the companies pushing the tech are serious about programming their AI in ways that are ethical, whatever that means in a practical context. Interesting times we live in.

 

News about driverless cars is all over the place, but shouldn’t we see even faster and more powerful changes with AI for drones, since driverless cars will always have to deal with people in a way autonomous drones will not?

I think it’s impossible to predict which will go “faster” – the AI is needed to do different things. The danger profile is different in the two domains as well. The advances in each will inform each other.

 

If a person or company is going to be liable for the choices a drone makes, why would any company ever put themselves in that situation? That’s too big of a risk for a company that might otherwise be interested in creating a fleet of drones for something like inspections, isn’t it?

That’s like asking why a farmer would ever keep a cow if it could break out and eat the neighbor’s corn. This is no different from any other technology. Think about how many people are killed or mangled each year by cars driven by humans – that hasn’t hurt the car industry much.

 

That’s true, although I don’t think anyone would ever think of cars as being “responsible” in the way we’d consider a drone that makes decisions for itself to be responsible. But let’s run with that and say we get to a point where a drone is ultimately responsible for whatever choices it makes. Couldn’t an unscrupulous operator change the parameters so that the choices a drone makes will always benefit them, even if they cause harm to someone else?

Oh yes. Look at the questions raised by hackable cars! That’s a fun lawsuit waiting to happen. I’d love to litigate it.

 

It was interesting to hear how future litigation in this area could look to the precedents established with animal ownership. The owner of a dog that bites someone is ultimately responsible for that dog’s actions, even if they had nothing to do with it. AI could be interpreted in the same way, and while the comparison makes a lot of sense on paper, aren’t people likely to think of the two situations as being distinct?

Yes, but analogies are what we live on in the law. Each case is different but they can inform each other. That’s what lawyers do for a living!

 

Why does the commercial operator who’s just concerned with flying safely and legally today need to think about how AI might influence the future of the technology?

I think it’s about being an informed citizen of the world. Beyond that, it could be your company that ends up as the test case. The more you know, the more you can avoid being the test case. Those are great for history books but they will take over your life. You’re better off making money than making precedent.

 

As these issues become more and more prevalent, how much will answers to questions like, “what is consciousness?” influence legal precedent?

Stay tuned to Drone Law Today! It’s going to be a fun issue to track.