Responsible AI, Where Are You?
Artificial intelligence (AI) technology seems to advance at breakneck speed as people expect it to solve many problems at once. A shortage of developers? Let AI pick up the slack. Traffic accidents? AI-driven cars will hopefully sort that out. Investment decisions? AI will show us the way to maximize ROI.
Time will tell to what extent these wishes will be realized, but a complex technology like AI also creates unintended consequences for people to deal with. Here are three problem points that need immediate attention:
Putting biases into action
AI is not immune to acting in a biased way, because an algorithm internalizes whatever bias is inherent in the data it trains on. In a sense, what goes in predetermines what will come out. As a result, problems like racism, sexism, and ageism carry over from social life into the data, and from the data into AI systems.
That’s why AI will exacerbate social problems rather than cure them unless we do something about it. For example, a widely used U.S. healthcare algorithm rated African Americans as less in need of care than equally sick white patients, because it used past health spending, which is historically lower for African Americans, as a proxy for medical need. Similarly, lending algorithms have favored men over women with identical credit scores.
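To make the mechanism concrete, here is a minimal sketch in Python. The data, feature names, and bias penalty are all synthetic assumptions for illustration: a model trained on historically biased lending decisions reproduces the gap between applicants with identical credit scores, even though it never sees gender directly.

```python
# Minimal sketch with synthetic data: bias in historical decisions leaks into
# a model through a correlated proxy feature, even when the protected
# attribute itself is excluded from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)    # 0 or 1; illustrative protected attribute
score = rng.normal(650, 50, n)   # credit score, same distribution for both groups

# Historical approvals depended on score PLUS a discriminatory penalty for group 1.
approved = (score - 40 * group + rng.normal(0, 20, n)) > 630

# A proxy correlated with group (say, an occupation code); the model never
# sees `group` directly, yet the bias leaks in through the proxy.
proxy = group + rng.normal(0, 0.3, n)

X = np.column_stack([score, proxy])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two applicants with identical credit scores, differing only in the proxy.
applicants = np.array([[650.0, 0.0],   # resembles group 0
                       [650.0, 1.0]])  # resembles group 1
print(model.predict_proba(applicants)[:, 1])  # group-1-like applicant scores lower
```

The discriminatory penalty baked into the historical labels resurfaces as a lower approval probability, with no explicit reference to the protected attribute anywhere in the model.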
The problem is that AI not only repeats the existing biases in society but also gives them a veneer of objectivity. People tend to assume that a machine with no thoughts or emotions cannot err, unaware of the bias fed into it.
Compounding this, AI often operates like a black box, giving its operators little visibility into why it does what it does. This algorithmic opacity makes bias hard to diagnose, let alone fix. For this reason, AI should be rigorously tested under real-life conditions before wide release. Additionally, organizations should define KPIs that encourage responsible AI practices and empower people to raise their voices when they detect bias.
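What might such a test look like in practice? Here is a minimal sketch of a release-gate check; the metric (demographic parity) and the 5% tolerance are assumptions chosen purely for illustration, and a real audit would use metrics appropriate to its domain.

```python
# Minimal sketch of a pre-release fairness gate: block the release if
# positive-outcome rates diverge across groups beyond a tolerance.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

def release_gate(predictions, group, tolerance=0.05):
    gap = demographic_parity_gap(np.asarray(predictions), np.asarray(group))
    if gap > tolerance:
        raise AssertionError(f"Fairness gate failed: parity gap {gap:.2f} > {tolerance}")
    return gap

# Audit a batch of model decisions (1 = approved) before wide release.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
grp   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
try:
    release_gate(preds, grp)
except AssertionError as e:
    print(e)  # here the gap is 0.50, so the gate blocks the release
```

Wiring a check like this into a CI pipeline turns “test before wide release” from a slogan into a measurable KPI that anyone on the team can point to.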
Power asymmetry
AI is an interesting technology in that it supercharges the scalability of business operations and crosses industry boundaries with ease. That’s because it runs on data, and it can be applied to any domain that benefits from data mining and analytics. There used to be different rules for different games in different industries. That is no longer the case. Now there is a single game in town, and you don’t get to win if you can’t bring together, analyze, and interpret your data. Nailing data processing, analytics, and algorithms opens the door to success in a wide range of industries, as Amazon’s success in retail and cloud illustrates.
However, mastering data and turning AI into the backbone of your business are expensive endeavors that demand a skilled workforce, advanced tools, and money. Therefore, AI, in its current form, is no great equalizer. On the contrary, it makes the rich richer and the poor poorer. Unless AI is democratized, companies that can afford to invest in it will take off and grow to dominate adjacent markets, while those lacking the resources will struggle to stay afloat.
Problem of agency
Machines gaining a level of autonomy is an exciting thought, but this development might cause more problems than it solves. If AI-powered machines are to assume a bigger role in our lives, a complete overhaul of legislation regulating traffic, intellectual property and copyright, and military engagement is necessary.
Who bears the liability when two autonomous vehicles collide? Who gets the credit when AI produces artwork by training on the works of other artists? What about the code generated by tools like OpenAI Codex? Who does that code belong to?
These are legitimate concerns. Military drones, one of today’s most visible examples, seem to get a pass for now because they have limited use cases and work within a human-in-the-loop framework, where an operator interacts with the drone to make decisions during an operation. Putting everything in the hands of AI in an industry like automotive, however, seems far-fetched: the stakes are too high, and millions of people stand to be affected. “The car is not an iPhone on wheels,” as Oliver Zipse, the CEO of BMW, puts it. He is clearly aware that we are a long way from self-driving cars:
“A Level 3 system, whether at 60, 80 or 120 kilometers per hour, which constantly switches off in the tunnel, switches off when it rains, switches off in the dark, switches off in fog – what’s that supposed to mean? No customer buys it. No one wants to be in the shoes of a manufacturer who misinterprets a traffic situation during the liability phase, for example, when control is handed back to the driver. We don’t take the risk.”
It’s private companies that push the frontiers in AI. That explains both the speed of development and the lack of rules and regulations in the field. While the former is desirable, the latter poses risks for the industry and society. The AI sphere desperately needs a law like the General Data Protection Regulation (GDPR), which the EU ratified to check Big Tech companies that hoard data with no concern for privacy.
The GDPR brought tech giants into line by establishing a timeline, demanding certain actions, and backing those demands with hefty fines in the event of a violation. It signaled that things had to change as privacy violations had been out of control, and they would change whether the tech giants agreed or not. The AI field could use that kind of attitude from the U.S. government or the EU. Waiting for the industry to self-police and develop a regulatory framework on its own is just not realistic.
Solution: No-code and open-source
AI technology needs to be democratized. The costs involved in implementing and maintaining AI projects are prohibitively high for most companies. Resource-rich companies can invest in AI and reap the benefits, pulling away from the competition that cannot afford that kind of investment. The end result of this trend would be power and capital accumulating in the hands of a select few at the expense of competition.
That’s why integrating AI with low-code and no-code platforms is key. Low-code/no-code technology will make AI accessible to organizations that cannot afford to employ teams of engineers and data scientists. Leveling the playing field for smaller players will encourage innovation, give customers more options, and even mitigate some ethical problems simply by preventing a handful of players from gaining outsized influence.
There is a role for open-source to play on the compliance front. Regulations may come into force one day, but that will not be the be-all and end-all. To secure buy-in from companies of all sizes, hurdles to compliance will have to be removed. Governments and big tech corporations would be well advised to support open-source projects, ensure that they are maintained, and help develop open-source AI compliance tools. Supporting open-source has worked out well for companies like Microsoft, Intel, and IBM in the past, and they stand to gain a lot from a uniform, regulated, and easy-to-navigate AI sphere in which they don’t have to adjust to different rules in different jurisdictions.
Conclusion
Technology does not develop in a vacuum. It is influenced by society and, in turn, impacts it. Thus, any technological breakthrough comes with a slew of problems and side effects that authorities need to take into account. AI is no exception, and given its capabilities, its problems will be more challenging than most unless we take the measures needed to ensure it is used responsibly. If AI is to be a force for good, it first needs to be brought under control. The time is now.