
OpenAI, DeepMind insiders demand AI whistleblower protections

Individuals with past and present roles at OpenAI and Google DeepMind called for the protection of critics and whistleblowers on June 4.

Authors of an open letter urged AI companies not to enter agreements that block criticism or to retaliate against critics by withholding economic benefits. Furthermore, they stated that companies should create a culture of “open criticism” while still protecting trade secrets and intellectual property.

The authors asked companies to create protections for current and former employees where existing risk reporting processes have failed. They wrote:

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

Finally, the authors said that AI firms should create procedures for employees to raise risk-related concerns anonymously. Such procedures should allow individuals to raise their concerns to company boards and to external regulators and organizations alike.

The letter’s thirteen authors described themselves as current and former employees at “frontier AI companies.” The group includes 11 past and present members of OpenAI, plus one past member of Google DeepMind and one present DeepMind member who was formerly at Anthropic.

They described personal concerns, stating:

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

The authors highlighted various AI risks, such as inequality, manipulation, misinformation, loss of control of autonomous AI, and potential human extinction.

They said that AI companies, along with governments and experts, have acknowledged these risks.
Unfortunately, companies have “strong financial incentives” to avoid oversight and little obligation to voluntarily share private information about their systems’ capabilities.

The authors otherwise asserted their belief in the benefits of AI.

The request follows an April 2023 open letter titled “Pause Giant AI Experiments,” which similarly highlighted risks around AI. The earlier letter gained signatures from industry leaders such as Tesla CEO and X chairman Elon Musk and Apple co-founder Steve Wozniak.

The 2023 letter urged companies to pause AI experiments for six months so that policymakers could create legal, safety, and other frameworks.