Geoffrey Hinton, an artificial intelligence (A.I.) pioneer, is warning about the dangers of A.I. development. In 2012, Dr. Hinton and his students at the University of Toronto developed the technology that became the backbone of the A.I. systems that tech behemoths are now racing to build.
But Dr. Hinton has recently joined a rising chorus of critics who warn that companies are building products based on generative A.I. without fully understanding the hazards. Generative A.I., which powers popular chatbots like ChatGPT, can already be used to spread misinformation and could, in the long run, pose a threat to civilization.
After OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, arguing that A.I. technologies pose “profound risks to society and humanity.”
Dr. Hinton, dubbed “the Godfather of A.I.”, did not sign that letter but has since left his position at Google, where he worked for more than a decade, in order to speak freely about the perils of A.I. Dr. Hinton has been an academic his entire life, and his career has been guided by his personal convictions about the development and use of artificial intelligence.
Neural networks, mathematical systems that learn skills by analysing data, became Dr. Hinton’s life’s work. In 2012, he and his students built a neural network that could analyse thousands of photographs and teach itself to recognise common objects such as flowers, dogs, and cars. Google paid $44 million for the company they founded, and their work led to the development of significant technologies such as ChatGPT and Google Bard.
Dr. Hinton is vehemently opposed to the use of artificial intelligence, in the form of “robot soldiers”, on the battlefield. He believes that generative A.I., if not developed responsibly, could have serious negative repercussions. “It’s difficult to see how you can prevent bad actors from using it for bad things,” he argues.
While business executives believe that A.I. systems will lead to breakthroughs in fields ranging from medical research to education, sceptics worry that companies could release something hazardous into the wild. With the A.I. industry possibly at a tipping point that has been decades in the making, the debate over its hazards and benefits is bound to continue.
After dedicating his life to the development of innovative technologies, he now wants to engage in “more philosophical work.”
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he explains. “I can’t do that as long as I’m paid by Google,” he told MIT Technology Review.