After years of banning the use of its artificial intelligence in harmful technologies, Google
reversed course last month – worrying one PNW technologist.
“I am opposed to it,” said Tae-Hoon Kim, a professor of Computer Information Technology who teaches Data Communication and AI. “It’s going to be used in a battlefield for killing people. … We shouldn’t get involved in anything like that with AI.”
“It should not be used for killing people in any field,” he said. “We develop AI to help make
human beings live better.”
Google, a leader in AI technology development, last month eliminated a list of banned uses for
its technology and wrote that it would implement “appropriate human oversight, due diligence
and feedback mechanisms to align with user goals, social responsibility, and widely accepted
principles of international law and human rights.”
The new company principles would permit its technology to be used in weapons and
surveillance – a reversal of the anti-harm position it adopted in 2018.
“In the Army they use AI simulation,” said Kim. “The AI needs to strike specific areas to win the battle, but there is a lock on the AI that prevents it from attacking … the area where civilians live.”
“Disconnecting this can cause problems because AI is not a human and doesn’t have emotions, which can lead to the harming of civilians,” he said.
Kim called Google’s change a betrayal of its earlier position on AI.
Tahanna Tucker is a student who is learning about AI. She’s worried.
“I think AI can be used as a secondary intellectual device that can go into places that humans cannot, so if we use surveillance drones, it is very helpful when you’re in a situation that a human cannot get into to gain the advantage,” she said. “I do think that it can be a great thing, but it’s also creepy.”
Kim said the real issue is that the world needs to develop guidelines governing the use of AI.
“We need to solve AI ethics, and we need to talk about it more and make more guidelines,” he said. “The AI community needs to talk more about this. We should not be driven by the government, it should be more driven by the AI community. So people in the AI community need to get together and talk more about the issue.”