Should AI Programmers Take A Hippocratic Oath?
While the potential exists for AI systems to learn from their own past decisions, someone first has to build these systems: programmers.
But who programs the programmers? How might the machine learning systems they build be flawed?
Pedro Bizarro, chief science officer at Feedzai, a data science company that uses machine learning to detect fraud, sees three issues with machine learning models, especially as they are used in the financial industry:
- “Control-ability”
- “Explain-ability”
- Bias
AI systems can be used in many ways. For Bizarro, it’s important that those who build the systems maintain control of their models, bake in a way for the AI to explain the decisions it makes, and ensure systems are not inherently biased by race, gender, or religion.
How? Bizarro posits that programmers may need to take a Hippocratic Oath: “Always be fair and don’t use users’ data against them,” he says. For example, imagine if Uber’s dynamic pricing took phone battery life into account. That’s not a good user experience.
This oath is needed because today AI is nearly invisible. Consumers are using online and mobile services built on machine learning, whether it be Facebook, Twitter, Yelp, or Uber.
“[Machine learning] is basically invisible,” Bizarro says. “I think the general user is not aware. It makes it more important that we developers have an oath to protect the people, because they don’t know what’s going on.”
Can Robots Pick Up Garbage?
“It’s tough to make predictions, especially about the future.”
That’s according to renowned futurist and professor of theoretical physics Dr. Michio Kaku, quoting New York Yankees catcher and amateur philosopher Yogi Berra.
In his speech, Dr. Kaku walked attendees through his view of the future of wealth and the economy. He knows who the next generation of billionaires will be, as well as those who will be unemployed, both the result of widespread adoption of AI systems.
In his mind, the US has undergone three waves of wealth building: the first was built on steam power; the second, on electricity; the third, on computers and transistors. The fourth wave? It will be built upon artificial intelligence, biotech, and nanotech.
Today, society is rapidly digitizing. And as the Internet of Things grows in scope, information will become easily accessible and cheap. When that happens, certain jobs will be disrupted, specifically those based on repetitive work or research. And the snowball effect will be great: automobiles are going digital, and so is the physical human body. Soon enough, he says, so will thoughts, emotions, and memories. Forget the heart emoji: humans will be able to send one another the feeling of a first kiss or heartbreak.
The more information becomes accessible, the more perfect capitalism becomes. Today, there are inefficiencies in markets: consumers who make purchases don’t know how much a product really costs to produce. Imagine walking into a Starbucks and immediately knowing how much it cost to grow, harvest, and process the beans that make one’s coffee. Once a consumer knows that it costs, say, $0.60 to produce 16 ounces of their favorite breakfast blend, why would they continue to pay $2.00?
“In the future you can tell who is cheating you, because you’ll know what things really cost,” Dr. Kaku says.
It’s not all bad news for the workforce, though. The winners of this shift to AI will be intellectual capitalists: doctors, lawyers, stockbrokers, workers who can combine easily accessible information with their own experience, know-how, analysis, innovation, and creativity to create a service. AI is superior to humans in the way it can process information. But that’s a repetitive process. AI is not well equipped for variety, empathy, or physicality.
“That is the currency of the future,” Dr. Kaku says. “Robots can’t pick up garbage.”