OpenAI has recently unveiled its latest AI model, GPT-4, which has demonstrated the ability to perform at a level comparable to humans across a range of professional and academic benchmarks.
More Creative and Collaborative
OpenAI has officially announced the launch of GPT-4, its latest AI language model, which the company says is more creative and collaborative than its predecessors and more accurate at solving complex problems.
However, like its predecessors, the model has some limitations, including the tendency to “hallucinate” or fabricate information.
Additionally, GPT-4’s knowledge is limited to events that occurred before September 2021, according to OpenAI.
Still Flawed But Interprets Complex Inputs
When announcing the launch of GPT-4, OpenAI CEO Sam Altman admitted on Twitter that the latest AI language model is not without its flaws and limitations.
However, he also noted that GPT-4, a large multimodal model that accepts image and text inputs and emits text outputs, performs at a human level on various professional and academic benchmarks.
Despite its limitations, several companies, such as Duolingo, Stripe, and Khan Academy, have already partnered with OpenAI to integrate GPT-4 into their products.
For now, GPT-4 is available to consumers only through ChatGPT Plus, OpenAI’s $20-per-month subscription service, and it is also being used to power Microsoft’s Bing chatbot.
Furthermore, GPT-4 will be available as an API for developers to build on.
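For developers curious what building on GPT-4 looks like, here is a minimal sketch of a request to OpenAI's Chat Completions API using the openai Python package (the pre-1.0 interface current around GPT-4's launch). The prompt text and temperature value are illustrative choices, not details from OpenAI's announcement:

```python
import os
import openai

# Authenticate with an API key stored in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a chat-style prompt to GPT-4 via the Chat Completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's improvements over GPT-3.5."},
    ],
    temperature=0.7,  # illustrative value; controls randomness of the reply
)

# The model's reply is the text of the first returned choice.
print(response["choices"][0]["message"]["content"])
```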
According to OpenAI, GPT-4’s advancements over its predecessor, GPT-3.5, may not be immediately evident in everyday conversation.
However, the company asserts that the model’s performance improvements are evident in its results on various tests and benchmarks, such as the Uniform Bar Exam, the LSAT, and the SAT’s Math and Evidence-Based Reading & Writing sections.
GPT-4 surprised many observers by scoring in the 88th percentile or higher on these tests.
While the model is multimodal and can process both text and image inputs, it can only produce text outputs.
Nevertheless, GPT-4’s ability to analyze text and images concurrently enables it to comprehend more intricate inputs.
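To make that concrete, the sketch below shows what a multimodal prompt can look like: an image paired with a text question, answered in text. Image input was not broadly available to API users at GPT-4's launch, so this example assumes the vision-capable request shape OpenAI later exposed; the model name and image URL are placeholders, not details from the announcement:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A multimodal message: "content" is a list mixing a text question
# with a reference to an image the model should analyze.
response = openai.ChatCompletion.create(
    model="gpt-4-vision-preview",  # assumption: a vision-enabled GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # hypothetical URL
                },
            ],
        }
    ],
)

# Even with a mixed text-and-image input, the output is text only.
print(response["choices"][0]["message"]["content"])
```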