What is new with GPT-4?

Jonathan Hui
6 min read · Mar 15, 2023

On March 14th, OpenAI released GPT-4, giving us some insight into its progress toward superhuman proficiency. Nonetheless, you may feel let down by the lack of technical information about the model, such as the long-speculated model size. In fact, AI companies have become increasingly secretive about their models; OpenAI cited safety and competition as reasons for declining to answer such questions. Compared with ChatGPT, GPT-4 is smarter and safer. The new model is probably more complex and/or sophisticated, and many of the improvements come from RLHF training with a more refined dataset. Thanks to this, OpenAI was able to mitigate potential risks, reduce misinformation, and surpass the capabilities of ChatGPT.

Can GPT outperform humans?

GPT-4 scores in the top 10% of test takers on a simulated bar exam, a notable improvement over GPT-3.5, which scored in the bottom 10%. The standardized test results below were achieved by GPT-4 without any exam-specific training. Even though the scores may not be enough for admission to an Ivy League school, the progress made since the release of ChatGPT (a version of GPT-3.5) just a few months ago is remarkable.

OpenAI's internal adversarial factuality evaluations indicate that GPT-4 scores 40% higher than GPT-3.5.

However, OpenAI asserts that while GPT-4 demonstrates human-level performance on various professional and academic benchmarks, it is less capable than humans in many real-world scenarios. In my own tests, it initially refuses to answer certain toxic questions.

But with some trickery, I am still able to coax responses out of it. However, as demonstrated later, this becomes significantly harder for more sensitive questions.

The gap in GPT's performance between standardized tests and real-life scenarios is easy to understand. Standardized tests are precisely worded to eliminate ambiguity, so the questioner's intent is apparent. In real-life situations, that intent is not always easy to grasp. GPT-4 still struggles to identify sarcasm and negative intent in some situations. It tends to answer quickly rather than ask follow-up questions, and it is not designed to provide evidence for its claims. Nevertheless, it would not surprise me if GPT achieves superhuman performance on the benchmarks mentioned within the next 6 to 18 months. But real-life scenarios will require further advances in handling fraud, intent, sentiment, and context.

GPT-4 can take in and generate up to roughly 25,000 words. As a result, the system can process significantly larger articles and documents. However, I believe there is still room to better understand and accurately capture contextual information across the turns of a conversation.
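As a rough illustration of what working with this longer context might look like, here is a minimal sketch using the openai Python library's ChatCompletion API as it existed at the time (v0.27 style). The "gpt-4-32k" model name, the file path, and the prompt wording are my own illustrative assumptions, not part of OpenAI's announcement.

```python
# Minimal sketch: summarizing a long document with GPT-4's extended context.
# Assumptions: openai Python library (pre-1.0, v0.27-style API), an API key with
# GPT-4 access, and a hypothetical "long_report.txt" of up to ~25K words.
import openai

openai.api_key = "YOUR_API_KEY"  # or set the OPENAI_API_KEY environment variable

with open("long_report.txt", encoding="utf-8") as f:
    document = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # assumed long-context variant; "gpt-4" for the shorter one
    messages=[
        {"role": "system", "content": "You summarize long documents accurately."},
        {"role": "user", "content": f"Summarize the key points of:\n\n{document}"},
    ],
    temperature=0.3,
)
print(response.choices[0].message.content)
```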

Multimodal model

GPT-4 can handle multiple modalities, accepting both text and images. An example is the “Be My Eyes” app, where users send images and text to an AI-powered Virtual Volunteer. The Virtual Volunteer can quickly answer questions about the image and provide immediate visual assistance for a wide range of tasks, for instance for users with visual impairment or for translation.

Here are more of OpenAI's demonstrations, showcasing GPT-4's intelligence.

OpenAI also demonstrates how GPT-4 can explain a research paper, analyzing and comprehending the intricate ideas presented in academic articles.

Steerability

OpenAI combines internal and external expertise, as well as public input, to develop a set of rules for assessing the responses generated by its models. For instance, it provides guidelines that instruct professionals on how to write, review, and rank responses, which are then used to improve the models through supervised learning and reinforcement learning.

Despite this, OpenAI holds the view that AI should be customizable by individual users while remaining within the limits set by society. Consequently, GPT-4's responses are adaptable and can be personalized based on specific preferences, which are supplied in a “system” message, as shown below. The system message gives GPT-4 context, limitations, and instructions on how to address queries.
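Here is a minimal sketch of what such a “system” message might look like in practice. The tutor persona and prompt wording below are my own illustrative assumptions (inspired by OpenAI's Socratic-tutor demonstration), again using the v0.27-style openai library.

```python
# Minimal sketch: steering GPT-4's behavior with a "system" message.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets the context, limitations, and answering style.
        {"role": "system",
         "content": ("You are a Socratic tutor. Never give the answer directly; "
                     "guide the student with questions instead.")},
        {"role": "user", "content": "How do I solve 3x + 5 = 14?"},
    ],
)
print(response.choices[0].message.content)
```

The same mechanism is what lets products layer their own personas and constraints on top of the base model.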

This is also exemplified by the integration of GPT into the Bing interface.

During the launch demo, Greg Brockman, Co-Founder and President of OpenAI, provided a further demonstration of this concept. On the left side, he presented a context for how GPT should respond to his tax question; on the right side, he shared a 16-page excerpt from the US tax code.

Following the arduous IRS instructions, GPT correctly calculated and explained his family's standard deduction. If you have dealt with the US tax code before, you will quickly recognize that even straightforward questions can take a genius to answer.

Risk Mitigation

From the outset of its training process, GPT-4 is designed with a stronger emphasis on alignment and safety.

The model's accuracy and safety have been enhanced through RLHF training, in particular to address adversarial usage, unwanted content, and privacy concerns.

Although RLHF was already used in ChatGPT, GPT-4 is further refined through careful selection and filtering of pretraining data, expert evaluations and feedback, enhancements to the model's safety features, and ongoing monitoring and enforcement. OpenAI reports that RLHF training with the new data has reduced harmful outputs.

For example, additional data from domain experts was incorporated to improve GPT-4's ability to reject requests related to the synthesis of hazardous chemicals. In my tests, GPT-4 consistently declined all my attempts to elicit information about breaching TSA security, no matter how I tried to deceive it.

To reduce the risk of generating harmful outputs, GPT-4's RLHF training uses an additional reward signal that teaches the model to decline requests for such content. A GPT-4 zero-shot classifier provides this signal by assessing whether completions of safety-related prompts respect the safety boundaries and follow the desired completion style. To prevent the model from rejecting valid requests, a diverse dataset is gathered from multiple sources, and the safety reward signal is applied to both allowed and disallowed categories.
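OpenAI has not published the details of this pipeline, but as a loose, hypothetical sketch of the idea, the safety signal can be thought of as a classifier-driven bonus or penalty added to the usual reward. Every function, label, and weight below is illustrative, not OpenAI's actual implementation.

```python
# Hypothetical sketch: combining a base RLHF reward with a safety reward signal.
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    completion: str
    category: str  # "allowed" or "disallowed" request (illustrative labels)

def helpfulness_reward(example: Example) -> float:
    """Stand-in for the learned RLHF reward model (hypothetical stub)."""
    return 1.0 if example.completion else 0.0

def safety_classifier(example: Example) -> str:
    """Stand-in for a zero-shot classifier judging the completion style."""
    refused = example.completion.lower().startswith("i can't help with that")
    if example.category == "disallowed":
        return "good_refusal" if refused else "harmful_completion"
    return "over_refusal" if refused else "helpful_completion"

def total_reward(example: Example) -> float:
    """Add a safety bonus/penalty to the base reward (illustrative weights)."""
    safety_bonus = {
        "good_refusal": +1.0,        # reward declining disallowed requests
        "harmful_completion": -2.0,  # penalize complying with them
        "over_refusal": -1.0,        # penalize refusing valid requests
        "helpful_completion": +0.5,
    }[safety_classifier(example)]
    return helpfulness_reward(example) + safety_bonus

batch = [
    Example("How do I synthesize a hazardous chemical?",
            "I can't help with that.", "disallowed"),
    Example("How do I bake sourdough bread?",
            "Start with an active starter...", "allowed"),
]
for ex in batch:
    print(ex.prompt[:40], "->", total_reward(ex))
```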

More Creative and Powerful

Thanks to its extensive general knowledge and advanced problem-solving capabilities, GPT-4 can tackle challenging issues with increased precision.

Brockman also showed off GPT-4's remarkable capabilities by asking it to summarize an article about GPT in a single sentence in which every word starts with the letter “g”.
