ChatGPT’s successor, GPT-4, has been announced by OpenAI

GPT-4, the latest version of OpenAI’s immensely popular AI chatbot ChatGPT, has been released.

The new model can respond to images as well as text: in addition to writing captions and descriptions, it can suggest recipes based on a photo of the ingredients.

It can also handle prompts of up to 25,000 words, roughly eight times ChatGPT’s limit.

Since its release in November 2022, ChatGPT has been used by millions of people worldwide.

Common requests include help with songwriting, poetry, advertising copy, programming, and homework, despite educators’ warnings against using the chatbot for schoolwork.

ChatGPT responds to questions in natural, human-sounding language, drawing on its knowledge of the internet as it was in 2021. It can also imitate the writing styles of musicians, authors, and other professionals.

Many people worry that it will eventually replace them in the workforce.

OpenAI said it spent six months building GPT-4’s safety features and trained the model with human feedback. It cautioned, however, that the system may still produce false information.

GPT-4

At launch, GPT-4 is accessible only to ChatGPT Plus subscribers, who pay $20 a month for premium features.

Microsoft’s Bing search engine is already built on GPT-4; Microsoft has invested $10 billion in OpenAI.

During a live demonstration, it answered a complex tax question, but this result could not be independently verified.

Like ChatGPT, GPT-4 is a form of generative AI: software that uses algorithms and predictive language modelling to generate new content in response to a prompt.
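To make that concrete, here is a minimal sketch of how such a model is queried programmatically. It assumes the `openai` Python SDK (v1 or later) and an API key in the environment; the model name and prompt are illustrative, not taken from the article.

```python
# Minimal sketch of prompting a generative model via the OpenAI API.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user", "content": "Suggest a recipe using eggs, spinach, and feta."}
    ],
)

# The generated text lives in the first choice's message content.
print(response.choices[0].message.content)
```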

OpenAI developed GPT-4 as part of its ongoing effort to scale up deep learning and bring it to a wider audience. While GPT-4 is still less capable than humans in many practical applications, it demonstrates human-level performance on a number of industry-standard and scholarly benchmarks, making it one of the most promising multimodal models available.


Capabilities

In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference emerges once the complexity of a task reaches a sufficient threshold: GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5.

To understand the difference between the two models, OpenAI tested them on a variety of benchmarks, including simulated exams originally designed for humans, proceeding with the most recent publicly available tests (in the case of the Olympiads and AP free-response questions) or with purchased 2022–2023 editions of practice exams.

Visual inputs

GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images. Over a range of domains—including documents with text and photographs, diagrams, or screenshots—GPT-4 exhibits similar capabilities as it does on text-only inputs. Furthermore, it can be augmented with test-time techniques that were developed for text-only language models, including few-shot and chain-of-thought prompting. Image inputs are still a research preview and not publicly available.

[Attached image: a three-panel meme showing a “Lightning Cable” adapter. Asked to explain the joke, GPT-4 responded:]
The image shows a package for a “Lightning Cable” adapter with three panels.

Panel 1: A smartphone with a VGA connector (a large, blue, 15-pin connector typically used for computer monitors) plugged into its charging port.

Panel 2: The package for the “Lightning Cable” adapter with a picture of a VGA connector on it.

Panel 3: A close-up of the VGA connector with a small Lightning connector (used for charging iPhones and other Apple devices) at the end.

The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.
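Image inputs were a research preview at the time, so no public endpoint accepted prompts like the one above. As a hedged sketch of what an interspersed text-and-image request looks like in the format OpenAI later released (the model name and image URL below are placeholders):

```python
# Sketch of a mixed text-and-image prompt, assuming the `openai` Python SDK (v1+)
# and a vision-capable model; the model name and URL below are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable GPT-4-class model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What is funny about this image? Describe it panel by panel."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/lightning-cable-meme.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```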

Limitations

Despite its capabilities, GPT-4 has limitations similar to those of earlier GPT models. Most importantly, it is still not fully reliable: it “hallucinates” facts and makes reasoning errors. Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matched to the needs of the specific use case.
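One of those protocols, grounding with additional context, can be as simple as restricting the model to a trusted source and instructing it to abstain otherwise. A minimal illustrative sketch (the helper name and prompt wording are assumptions, not an OpenAI recipe):

```python
def grounded_prompt(question: str, context: str) -> str:
    """Build a prompt that restricts the model to the supplied context.

    A common hallucination mitigation: the model is told to answer only
    from `context` and to abstain when the answer is not present.
    """
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply \"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


# Example: ground the model in a trusted snippet before asking.
prompt = grounded_prompt(
    question="How many words can GPT-4 accept as input?",
    context="GPT-4 can handle prompts of up to about 25,000 words.",
)
print(prompt)
```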

While hallucination remains a real issue, GPT-4 produces significantly fewer hallucinations than previous models (which have themselves improved with each iteration): it scores 40% higher than OpenAI’s latest GPT-3.5 model on OpenAI’s internal adversarial factuality evaluations.
