
Before Altman Was Fired, OpenAI Researchers Alerted the Board to a “Major” AI Breakthrough

Two individuals familiar with the situation told Reuters that a number of staff researchers warned the board of directors about a potentially dangerous artificial intelligence breakthrough in a letter written before OpenAI CEO Sam Altman’s four-day ouster.

The two individuals said the previously undisclosed letter and the AI algorithm were key developments before the board fired Altman, the poster child of generative AI. Before his triumphant return on Tuesday night, nearly seven hundred employees had threatened to resign in support of their ousted leader and join Microsoft.

The sources identified the letter as one factor in a longer list of board grievances that led to Altman’s firing, including concerns about commercializing advances before understanding their consequences. Reuters was unable to review a copy of the letter, and the researchers who wrote it did not respond to requests for comment.

One of the people said that OpenAI, which declined to comment when contacted by Reuters, acknowledged a project called Q* in an internal message to employees and in a letter to the board before the weekend’s events. An OpenAI representative said the message, written by longtime executive Mira Murati, alerted staff to certain media reports without addressing their accuracy.

According to someone who spoke with Reuters, some at OpenAI think Q*—pronounced Q-Star—might represent a significant advancement in the company’s quest for artificial general intelligence (AGI). AGI is defined by OpenAI as autonomous systems that outperform humans in the majority of economically significant activities.

The person, who spoke on condition of anonymity because they were not authorized to speak on behalf of the company, said that given vast computing resources, the new model was able to solve certain mathematical problems. The source said that although Q*’s arithmetic skills are still rudimentary, the fact that it aced such tests made researchers very optimistic about its future prospects.

The researchers’ claims about Q*’s capabilities could not be independently confirmed by Reuters.

“VEIL OF IGNORANCE”

Researchers see math as a frontier for the development of generative AI. Generative AI is currently good at writing and language translation, which it does by statistically predicting the next word, and its answers to the same question can vary widely. Mastering math, however, where there is only one correct answer, would suggest AI has greater reasoning ability, closer to human thinking. AI researchers believe this could be applied, for example, to novel scientific research.

Unlike a calculator, which can perform only a limited set of operations, AGI can learn, comprehend, and generalize.

The sources said that while the researchers did not specify the exact safety concerns raised in the letter to the board, they did highlight AI’s potential for harm. Computer scientists have long debated the threat posed by artificial intelligence, including the possibility that such machines might conclude that it would be in their best interests to wipe out humankind.

The researchers also flagged the work of a team of “AI scientists,” whose existence multiple sources confirmed. According to one of the people, the group, formed by merging the earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and, eventually, to carry out scientific work.

Altman oversaw the development of ChatGPT into one of the fastest-growing software applications ever and convinced Microsoft to provide the funding and computing power needed to move the company closer to AGI.

Apart from unveiling an array of new tools at a demonstration earlier this month, Altman hinted last week at a gathering of world leaders in San Francisco that he believed significant breakthroughs were imminent.

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

Altman was let go by the board a day later.

(Reporting by Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; editing by Kenneth Li and Lisa Shumaker)
