A consortium of notable figures in the technology industry, including tech founders such as Elon Musk and Steve Wozniak, has expressed concerns regarding the ongoing research and development of advanced Artificial Intelligence (AI).
These individuals maintain that AI has the potential to pose a significant risk to society and humanity, particularly as it increasingly competes with humans in general tasks. As such, they argue that powerful AI systems should only be developed once it has been established that their effects will be positive, and any risks they present can be effectively managed.
In light of these concerns, efforts to curtail research into AI more powerful than OpenAI’s latest model, GPT-4, have been gaining momentum, with Elon Musk and over 1,300 other stakeholders in the global tech industry having signed a petition to that effect. The petition, titled “Pause Giant AI Experiments: An Open Letter,” has been endorsed by a diverse group of industry leaders, including tech CEOs, AI labs, and members of academia.
The grounds for the petition are rooted in the potential risks that AI systems with human-competitive intelligence could pose to society and humanity, as acknowledged by leading AI labs and extensive research. The petition emphasizes the need to plan for and manage advanced AI systems with the utmost care and resources, as such systems could represent a profound change in the history of life on Earth. The petition also highlights the fact that recent months have seen AI labs racing to develop and deploy increasingly powerful digital minds that may be difficult, if not impossible, to control or understand.
The petitioners raise several questions regarding the implications of AI becoming human-competitive at general tasks, such as whether machines should be allowed to flood information channels with propaganda and untruth, whether all jobs should be automated away, and whether non-human minds might eventually outnumber, outsmart, obsolete, and replace humans. They contend that such decisions should not be left to unelected tech leaders and that powerful AI systems should only be developed once their positive effects are well-justified, and their risks are manageable.
The petition therefore calls on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, with the pause being public, verifiable, and inclusive of all key actors. Should such a pause not be quickly enacted, governments are urged to institute a moratorium. During this pause, AI labs and independent experts are encouraged to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
In conclusion, while Artificial Intelligence has the potential to revolutionize our world and bring about unprecedented advancements, it is essential to recognize the significant risks it presents to society and humanity. Therefore, the petitioners advocate for a cautious approach that prioritizes safety and risk management over speed and development.