Musk's Open Letter (Full Text)
Pause Giant AI Experiments: An Open Letter
Published by: Future of Life Institute
Date: March 29, 2023
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Extensive research has shown, and top AI labs have acknowledged, that AI systems with human-competitive intelligence can pose profound risks to society and humanity. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves:
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?
Should we risk loss of control of our civilization?
Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence notes that "at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects; we can do so here as well. Let's enjoy a long AI summer, not rush unprepared into a fall.
Original text of the open letter
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research, and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.