Space Cowboy Posted June 4 Oh whatever it is, bring it. I'm so ready for the AI revolution.
shyboi (Author) Posted June 4 2 minutes ago, Space Cowboy said: Oh whatever it is, bring it. I'm so ready for the AI revolution. same
Illuminati Posted June 4 (edited) No one cares about the names. What's the actual tea, any summary? Edited June 4 by Illuminati
Bubble Tea Posted June 4 @shyboi what is this lazy ass thread put some actual content in the OP
shyboi (Author) Posted June 4 13 minutes ago, Tropical said: @shyboi what is this lazy ass thread put some actual content in the OP full letter in the replies
Gorjesspazze9 Posted June 4 I read 50 million jobs will be replaced by A.I. this year. And the reports from last year came true
Archetype Posted June 4 For anyone curious, this is what they are asking for, which seems pretty reasonable (aside from maybe #4, depends on context) Quote We therefore call upon advanced AI companies to commit to these principles: 1. That the company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit; 2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise; 3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected; 4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
Keter Posted June 4 56 minutes ago, Space Cowboy said: Oh whatever it is, bring it. I'm so ready for the AI revolution. I for one welcome our machine overlords. We're closer and closer to a sci-fi future. It's great.
Anthinos Posted June 4 I really wonder what's going on. There are constant warnings. The AI they use must be really powerful. I'm curious. When I look at some politicians, I sometimes wonder whether it wouldn't be better to replace them with an intelligent and empathetic AI.
Gesamtkunstwerk Posted June 4 Quote We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.
ScorpiosGroove Posted June 4 29 minutes ago, Gesamtkunstwerk said: god damn has the future ever been bleaker?
Capris Groove Posted June 4 1 hour ago, Joyride said: lol one of my coworkers got caught using ChatGPT and got fired! What do you do for work?
Capris Groove Posted June 4 37 minutes ago, Gesamtkunstwerk said: AI is like this train that we're all on that no one agreed to board.
Joyride Posted June 4 43 minutes ago, Capris Groove said: What do you do for work? Business Analyst
May Posted June 4 2 hours ago, Joyride said: lol one of my coworkers got caught using ChatGPT and got fired! meanwhile at my old company I introduced my director to it and he became obsessed with it, and at my current company whenever anyone's stuck we get told 'ask ChatGPT'
Stimulus Posted June 4 (edited) 2 hours ago, Archetype said: We therefore call upon advanced AI companies to commit to these principles: 1. That the company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit; Context: Leaked OpenAI documents reveal aggressive tactics toward former employees Vox / Kelsey Piper / May 22, 2024 Has Sam Altman told the truth about OpenAI's NDA scandal? Quote On Friday, Vox reported that employees at tech giant OpenAI who wanted to leave the company were confronted with expansive and highly restrictive exit documents. If they refused to sign in relatively short order, they were reportedly threatened with the loss of their vested equity in the company — a severe provision that's fairly uncommon in Silicon Valley. The policy had the effect of forcing ex-employees to choose between giving up what could be millions of dollars they had already earned or agreeing not to criticize the company, with no end date. ----- 2 hours ago, Archetype said: 2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise; Context: OpenAI's Long-Term AI Risk Team Has Disbanded Wired / Will Knight / May 17, 2024 The entire OpenAI team focused on the existential dangers of AI has either resigned or been absorbed into other research groups, WIRED has confirmed. Quote In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team.
OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts. ----- 2 hours ago, Archetype said: 3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected; 4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public. Context: Before Altman's Ouster, OpenAI's Board Was Divided and Feuding The New York Times / Cade Metz, Tripp Mickle and Mike Isaac / Nov. 21, 2023 Sam Altman confronted a member over a research paper that discussed the company, while directors disagreed for months about who should fill board vacancies. Quote A few weeks before Mr. Altman's firing, he met with Ms. Toner to discuss a paper she had co-written for the Georgetown center. Mr. 
Altman complained that the research paper seemed to criticize OpenAI's efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI's biggest rival, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times. In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology. Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed. "I did not feel we're on the same page on the damage of all this," he wrote in the email. "Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said. But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board's deliberations. The statement he read to Mr. Altman said that Mr. Altman was being fired because he wasn't "consistently candid in his communications with the board." Former OpenAI board member explains why they fired Sam Altman The Verge / Richard Lawler / May 29, 2024 Helen Toner explains last year's OpenAI coup as the result of 'outright lying' that made it impossible to trust the CEO. Quote In her telling, once the board decided they needed to bring in a new CEO, they felt the only way to make it happen was to go behind his back. 
"It was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him," Toner said. Toner says that one reason the board stopped trusting Altman was his failure to tell the board that he owned the OpenAI Startup Fund; another was how he gave inaccurate info about the company's safety processes "on multiple occasions." Additionally, Toner says she was personally targeted by the CEO after she published a research paper that angered him. "Sam started lying to other board members in order to try and push me off the board," she says. After two executives spoke directly to the board about their own experiences with Altman, describing a toxic atmosphere at OpenAI, accusing him of "psychological abuse," and providing evidence of Altman "lying and being manipulative in different situations," the board finally made its move. ----- OpenAI is a dumpster fire and I'm not paying that company one cent for ChatGPT. Use HuggingChat for free instead. Edited June 4 by Stimulus
Rotunda Posted June 4 (edited) 31 minutes ago, Joyride said: Business Analyst Were they like putting company information into the prompts? My job has a separate nerfed version of ChatGPT we're supposed to use. Edited June 4 by Rotunda
Joyride Posted June 4 23 minutes ago, Rotunda said: Were they like putting company information into the prompts? My job has a separate nerfed version of ChatGPT we're supposed to use. they were indeed using confidential information.