
๐˜›๐˜ฉ๐˜ฆ ๐˜ข๐˜ถ๐˜ต๐˜ฉ๐˜ฐ๐˜ณ๐˜ด ๐˜ฐ๐˜ง ๐˜ต๐˜ฉ๐˜ฆ ๐˜ญ๐˜ฆ๐˜ต๐˜ต๐˜ฆ๐˜ณ ๐˜ด๐˜ข๐˜บ ๐˜ต๐˜ฉ๐˜ข๐˜ต ๐˜ข๐˜ฅ๐˜ท๐˜ข๐˜ฏ๐˜ค๐˜ฆ๐˜ฅ ๐˜ข๐˜ณ๐˜ต๐˜ช๐˜ง๐˜ช๐˜ค๐˜ช๐˜ข๐˜ญ ๐˜ช๐˜ฏ๐˜ต๐˜ฆ๐˜ญ๐˜ญ๐˜ช๐˜จ๐˜ฆ๐˜ฏ๐˜ค๐˜ฆ ๐˜ค๐˜ฐ๐˜ถ๐˜ญ๐˜ฅ ๐˜ค๐˜ข๐˜ถ๐˜ด๐˜ฆ ๐˜ข ๐˜ฑ๐˜ณ๐˜ฐ๐˜ง๐˜ฐ๐˜ถ๐˜ฏ๐˜ฅ ๐˜ค๐˜ฉ๐˜ข๐˜ฏ๐˜จ๐˜ฆ ๐˜ช๐˜ฏ ๐˜ต๐˜ฉ๐˜ฆ ๐˜ฉ๐˜ช๐˜ด๐˜ต๐˜ฐ๐˜ณ๐˜บ ๐˜ฐ๐˜ง ๐˜ญ๐˜ช๐˜ง๐˜ฆ ๐˜ฐ๐˜ฏ ๐˜Œ๐˜ข๐˜ณ๐˜ต๐˜ฉ, ๐˜ง๐˜ฐ๐˜ณ ๐˜ฃ๐˜ฆ๐˜ต๐˜ต๐˜ฆ๐˜ณ ๐˜ฐ๐˜ณ ๐˜ธ๐˜ฐ๐˜ณ๐˜ด๐˜ฆ.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether machines would “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI co-founder Sam Altman that an independent review should be required before training future AI systems.

Altman, in his Feb. 24 blog post, highlighted the need to prepare for artificial general intelligence (AGI) and, eventually, artificial superintelligence (ASI).

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, of which there have been few meaningful developments to date.

Instead, he said research and development should be slowed down for genuinely dangerous technologies such as bioweapons and nuclear arms.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked over the amount of regulatory attention that has been given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that, should an AI development pause not be enacted quickly, governments should step in with a moratorium:

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.
