OpenAI's GPT-5 may launch in the coming months: Report

OpenAI plans to release an upgraded model for ChatGPT in the next few months. The new version, known as GPT-5, may arrive by summer, according to a Business Insider report. Two individuals close to the Sam Altman-led AI company told Business Insider that some companies have already received demonstrations of the enhanced model.

One CEO who tried GPT-5 praised the model, saying, “It is really good, like materially better.” According to the CEO, OpenAI demonstrated the new model using use cases and data specific to his company.

GPT-5 is still being trained at OpenAI. Once OpenAI’s internal team finishes the new multimodal large language model, it will undergo red teaming, a security auditing process in which a group of outsiders probes the software for bugs or vulnerabilities that may have eluded its creators.

According to one insider who spoke with Business Insider, there is no set deadline for completing safety testing, so GPT-5’s release could be delayed, especially if the red team discovers flaws in the system.

Businesses that pay OpenAI for improved or customized versions of ChatGPT are the chatbot’s primary source of income, and the OpenAI team hopes GPT-5 will impress both the general public and prospective clients. ChatGPT debuted on November 30, 2022. Since then, it has expanded in several ways and influenced a wide range of industries, including customer service and education.

OpenAI has not released any official information about GPT-5, but Sam Altman did unveil Sora, an AI-powered tool that generates videos from text prompts. These videos can run up to one minute. On February 15, Altman asked his X (formerly Twitter) followers for video suggestions for Sora and then posted the resulting Sora-generated videos on his account.

Despite Sora’s impressive results, the OpenAI team acknowledges that the model still has significant flaws. The research team said in a blog post that Sora “may not understand specific instances of cause and effect, and may struggle with accurately simulating the physics of a complex scene.”

It also noted that the model may confuse “spatial details of a prompt, for example, mixing up left and right.” Additionally, OpenAI emphasized that it is ensuring Sora rejects requests that violate its usage guidelines, such as those involving sexual content, hate speech, or intellectual property theft.
