Human fears surrounding the future of AI are a hot topic of late. Are you fearful of these kinds of advancements? Or are you a futurephile like me, excited for the future and the exhilarating technologies it will present? If you sit on the other side of the fence as a futurephobe, you're not alone. In fact, many of us are fearful of AI, and even of the future as a whole.
Recent research commissioned by Thinque* showed that one such fear surrounds ethics and the role ethics play in robotic programming and behaviour. The research revealed that 79% of Australians believe morals should be programmed into robots. When asked why they felt this was necessary, respondents said that if robots have human consciousness, they must be equipped with morals too (50%), and that if morals are not programmed into robots, humans will be affected negatively (50%).
When probed further and asked who should be responsible for programming morals into robots, 59% said the original creator/software programmer, followed by the government (20%), the manufacturer (12%) and the company that owns the robots (9%).
As AI advances and its capabilities become more sophisticated, concerns about how we will manage these developments continue to grow. Building an ethical code into robots therefore becomes necessary if they are to take on more key roles in our lives, such as performing complex medical procedures, driving cars and machinery, or undertaking teaching duties.
To programme ethics into AI effectively, humans will first need a collective set of ethical rules that is universally agreed upon (a far cry from the current state of the human world). For example, humans worldwide would need to determine whether it is ethically right to pull a lever to divert a runaway trolley away from five people it would otherwise kill, knowing that it will instead kill one person on its newly diverted path.
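To make the scale of that challenge concrete, here is a deliberately naive sketch (in Python; the names, such as `Outcome` and `choose_action`, are hypothetical and for illustration only, not a real ethics engine) of what "programming a moral" could look like for the trolley dilemma above. It encodes a purely utilitarian rule, and the choice of that rule is itself an ethical commitment rather than a neutral one:

```python
# Illustrative sketch only: a utilitarian rule for the trolley dilemma.
# All names here are hypothetical; this is not an established approach
# to machine ethics, just a way of showing what one looks like as code.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # what the system does, e.g. "pull_lever"
    expected_deaths: int  # predicted harm if this action is taken

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action whose predicted outcome minimises deaths.

    This encodes one specific moral theory (utilitarianism); a society
    that rejects that theory would need a different rule entirely, which
    is exactly why universal agreement matters.
    """
    return min(outcomes, key=lambda o: o.expected_deaths)

# The trolley dilemma described above, expressed as two candidate actions:
dilemma = [
    Outcome(action="do_nothing", expected_deaths=5),
    Outcome(action="pull_lever", expected_deaths=1),
]

print(choose_action(dilemma).action)  # prints "pull_lever"
```

A different moral theory, such as a deontological rule forbidding the machine from actively causing any death, would return the opposite answer from exactly the same inputs. That is the crux of the disagreement a universal ethical code would have to resolve.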
Another dilemma raised by AI advances is that once robots can adequately mimic human intelligence, awareness and emotions, we will need to decide whether they should also be granted human-equivalent rights, freedoms and protections.
With AI being utilised across a multitude of industries, it's only a matter of time before your staff and business are impacted (if they're not already!).
But have you thought about what you'd do if the use of AI in your workplace led to an adverse outcome? And do you have a plan for that kind of event?
If not, it's time to start putting one together.
*Research was conducted and analysed by The Digital Edge Research Company for Thinque. The data is based on analysis of responses from over 1,000 Australians in January 2019. Respondents were aged 18 to 70+ and included a mix of male and female participants from across the country.