Artificial intelligence (AI) is revolutionising business opportunities and is expected to have an even greater transformative effect on the world than the steam engine once did. Socially responsible and ethical AI, and the related application development, could also represent a major opportunity and bring a competitive advantage.
AI applications are being developed for all areas of life, and the related profits are expected to rise to tens of billions of euros in the near future. At the same time, a critical debate on the application of artificial intelligence has begun. So far, the overall impacts of applying artificial intelligence have not been taken into account prior to its introduction. Certain cautionary tales have served as a wake-up call, prompting a more comprehensive examination of the development and application of artificial intelligence.
Artificial intelligence’s complexity makes it very different from many previous technologies. The MIT researchers Erik Brynjolfsson and Andrew McAfee have summarised the technological causes of the risks posed by AI as follows. First, artificial intelligence is only as “intelligent” as its programming and the data it uses: programming errors and challenges in interpreting data can cause major problems and faults. Second, if, due to the complexity of a system, we no longer understand how the program arrived at a particular decision or action, it may be very difficult or even impossible to correct a faulty function. These issues, combined with the enormous potential for applying the technology, can lead to various ethical and societal challenges.
Does artificial intelligence have the right to make ethical and moral choices?
Recent examples that have generated a great deal of publicity include self-driving cars and the fatal accidents they have caused. Reinhard Stolle, Vice President of Artificial Intelligence and Machine Learning at BMW, has stated (HS, 15 September 2017) that industry standards, practices and legislation should be clarified to avoid unclear liability issues for both manufacturers and consumers. How is responsibility shared between the consumer, the manufacturer and the subcontractor if applied AI causes an accident? Does artificial intelligence have the right to make choices in the ethical and moral sphere?
Ethical and legal issues are also associated with the data mass that intelligent systems will collect and share with each other in the future. For example, one vision involves cars exchanging data on their movements and traffic flows via the internet, and such data can be used to control the entire system. This data opens an entirely new window on personal privacy. How should we feel about its collection and sharing? In what cases should its use be permitted?
In the area of security-related applications, facial recognition programs are an example of solutions that raise legal and ethical issues. Facial recognition can be used in countless ways, such as detecting and catching criminals. But what if we go one step further and begin combining data and monitoring people, say for preventative healthcare and cost-allocation purposes? In Finland, too, as the number of older people grows, the pressure to emphasise home care is increasing, and new monitoring technologies are being considered. To what extent is monitoring people in their homes permissible? Or in the workplace? Or in public spaces? What conclusions can be drawn from such data? Although the idea may seem far-fetched, China has already presented a plan for a Social Credit System based on reputation ratings (YLE, 28 March 2018).
The list of application areas and their potential challenges would be very long. Examining the impact of individual applications and application areas is important, but the same question arises in every case: what kind of society do we want, and in what direction do we want to develop it?
Responsibility and ethics as the basic elements of innovation
Socially responsible and ethical AI and the related application development could also represent a major opportunity and bring a competitive advantage. Responsible development strengthens a company’s brand and relationship with its users, as well as increasing confidence in products and their acceptability. In general, responsible development reduces business risks and undesirable consequences. The Volkswagen emissions scandal, which resulted in a fine of EUR 16 billion for the company and the collapse of its share price, serves as an example of the consequences of unethical conduct.
Responsible Research and Innovation (RRI) is one means of building ethical conduct into a competitive advantage. A systematic approach to RRI has been developed in recent years, in particular by the European Commission, and attempts have been made to integrate it horizontally into the ongoing Horizon 2020 (H2020) framework programme. The background to this development lies in the growing social risks generated by science and technology and in discussions about the social impact and ethical nature of research results. The aim is innovation activity with greater social impact and acceptance, achieved through broad-based dialogue and forecasting involving stakeholders and citizens and grounded in concrete research and development projects.
VTT’s researchers have been involved in several European projects developing approaches and agendas for the assessment of responsible research and ethics. Examples include the completed Responsible Industry and SATORI projects and the ongoing NewHoRRIzon project.
For example, the Responsible Industry project, which focused on innovation in technological solutions for the ageing population, has provided tangible evidence of how companies can gain a major competitive edge, while contributing to the development of society, by complying with the principles of responsible innovation. The social and health sector is heavily regulated and involves ethical issues to which companies must be able to respond when developing solutions. For this reason, the cornerstones of the responsible development of gerontechnology are the adaptation of innovations to user needs, stakeholder engagement, equality and transparency, and the anticipation of technology impacts. By following these principles, a company can produce meaningful, functional and ethically and socially acceptable solutions for real needs – for example, in support of high-quality living at home for the ageing population – and thrive in the market.
How do we integrate ethical and responsible research and innovation dimensions into everyday R&D activities? The starting point could be Corporate Social Responsibility (CSR), which RRI broadens towards an anticipatory approach that examines ethical issues from the outset of innovation activity. Another interesting question is whether artificial intelligence itself could be harnessed in support of responsible and ethically sustainable activities. Can artificial intelligence help us understand, on a broader basis, the complex social impacts of technology and of our related decisions?
Responsible innovation and ethical anticipation can help us promote the positive social and economic impacts of deploying artificial intelligence. We can anticipate and prevent potential negative impacts and find new approaches and perspectives for achieving good social impacts. At the same time, we can ensure the competitiveness of our businesses and national economy as ethically responsible players.