
AI creates new risks, even as it brings benefits to businesses - Allianz

28 Mar 2018

The widespread implementation of Artificial Intelligence (AI) applications brings many advantages for businesses, such as increased efficiencies, fewer repetitive tasks and better customer experiences. However, in the wrong hands, the potential threats could easily counterbalance the huge benefits, as AI is vulnerable to risks, especially cyber risks, said a new report from Allianz last week.

In "The Rise of Artificial Intelligence: Future Outlook and Emerging Risks", Allianz Global Corporate & Specialty (AGCS) identifies both the benefits and the emerging risk concerns around the growing implementation of AI. Also referred to as machine learning, AI is essentially software that is able to think and learn like a human.

The report noted that AI applications today are basic or "weak", exhibiting abilities in specific tasks such as driving a car or solving a puzzle. However, in future, "strong" AI applications will be capable of resolving difficult problems and executing more complex transactions. "Its introduction will most likely be unprecedentedly disruptive to current business models," said the report.

AI is beginning to find uses in almost every industry, from chatbots that offer financial advice to tools that help doctors diagnose cancer. The technology is used to power driverless cars, better predict the weather, process financial transfers, or monitor and operate industrial machines. AI could double the annual economic growth rate in 12 developed economies by 2035, an Accenture report estimated.

Risks, especially cyber

But with these potential benefits come risks, especially cyber risks. AI-powered software could help to reduce cyber risk for companies by better detecting attacks, but could also increase it if malicious hackers are able to take control of systems, machines or vehicles. AI could enable more serious and more targeted cyber incidents by lowering the cost of devising attacks. The same hacker attack, or programming error, could be replicated on numerous machines.

Vulnerability to malicious cyber-attacks or technical failure will increase, as will the potential for larger-scale disruptions and extraordinary financial losses as societies and economies become increasingly interconnected. Companies will also face new liability scenarios as responsibility for decision-making shifts from human to machine and manufacturer.

Five areas of concern

The AGCS report highlighted five areas of concern where risk management needs to be considered, so that AI development can continue while its hazards are reduced:

  • Software accessibility

AI's key component is software. Whether AI code should be closed or open to the public, and in particular to the software development community, has both pros and cons. Open sourcing potentially accelerates strong AI development and enables industry outsiders to help control its risks; closing access, on the other hand, may prevent appropriation and misuse by those with harmful intentions.

  • Safety

AI safety is concerned with ensuring that an AI system is tested in an environment similar to the real world, so that its goals and behaviours are appropriately specified and the system can be safely introduced into society. A misalignment between the developer's objective and the goal as interpreted by the AI agent can cause unexpected accidents that only become apparent once the system is introduced into the real world, especially as the race to bring AI to market may cause developers to underestimate the need for verification and validation processes.

  • Accountability

This refers to the ability of an agent to make transparent and auditable decisions. With the proliferation of AI agents programmed to make decisions, regulators face the increasingly significant question of how to ensure that not only data input but also the process leading to AI-made decisions can be reviewed and audited, for example, by appropriate oversight bodies including lawyers, AI experts and final users.

Input data used to train AI algorithms is usually human-generated and carries prejudice and bias, and AI agents tend to amplify their effect, resulting in partial and unfair decisions. Transparency of the decision-making process and the underlying training data would help ensure that outcomes are unbiased.

  • Liability

While AI agents could take over many decisions from humans, they cannot legally be held liable. Generally speaking, the manufacturer of a product is liable for defects that cause damage to users. The same applies to a producer of AI agents in the case of damage due to defects in design or manufacturing. However, AI decisions that are not directly related to design or manufacturing, but are taken by an AI agent because of its interpretation of reality, would have no explicitly liable party under current law. Leaving the decision to courts may be expensive and inefficient if the number of AI-generated damages starts increasing.

The report proposes that a solution to the lack of legal liability would be to establish an expert agency with the purpose of ensuring AI safety and alignment with human interests. The agency would have certification powers and would establish a liability system under which designers, manufacturers and sellers of certified AI-based products would be subject to limited tort liability, while uncertified programs offered for commercial sale or use would be subject to strict joint and several liability.

  • Ethics

While decisions taken by AI agents are in many cases faster and more accurate, in some situations there is no objective view of what the optimal decision should be, as it is subjective and depends on ethics. Depending on its design or the information on which it is trained, an AI agent may act against human interests. The challenge when developing AI agents is to instill in the agent a distinction between good and bad. One way is to let the agent observe human behavior in different situations and act accordingly. The longer it observes humans, the more virtuous, by human standards, an AI agent becomes; yet humans themselves also have biases.

What insurers need to know

The report noted that insurers will have a crucial role to play in helping to minimise, manage and transfer emerging risks from AI applications. Traditional coverages will need to be adapted to protect consumers and businesses alike. Insurance will need to better address certain exposures for businesses, such as cyber-attacks, business interruption, product recall and reputational damage. New liability insurance models will likely be adopted, in areas such as autonomous driving for example, increasing the pressure on manufacturers and software vendors and decreasing the strict liability of consumers.

The insurance industry has been an early adopter of machine learning as it deals with large volumes of data and repetitive processes. "There is huge potential for AI to improve the insurance value chain. Initially, it will help automate insurance processes to enable better delivery to our customers. Policies can be issued, and claims processed, faster and more efficiently," said Mr Michael Bruch, Head of Emerging Trends at AGCS.

By boosting data analytics, AI will also give insurers and their customers a much better understanding of their risks, so that these can be reduced more effectively, while new insurance solutions could also be developed. For example, AI-powered analytics could help companies better understand cyber risks and improve security. At the same time, the technology could assist insurers in identifying accumulations of cyber exposure. Last but not least, AI will change the way insurers interact with their customers, enabling 24/7 service.
