
ChatGPT v4 aces the bar exam and SAT, and can identify exploits in ETH contracts


GPT-4 completed many tests with scores in the top 10% of test takers, whereas the previous version of ChatGPT often ended up in the bottom 10%.

GPT-4, the latest version of the artificial intelligence chatbot ChatGPT, can pass college and law school exams with scores around the 90th percentile and has new processing capabilities that were not possible with previous versions.

The GPT-4 test scores were released on March 14 by its maker, OpenAI, which revealed that GPT-4 can also convert image, audio, and video inputs to text, in addition to handling “more nuanced instructions” more creatively and reliably.

“It passes a simulated bar exam with a score around the top 10% of test takers,” OpenAI said. “In contrast, GPT-3.5’s score was around the bottom 10%.”

The figures show that GPT-4 scored 163, placing it in the 88th percentile on the LSAT — the exam students in the United States must take to be admitted to law school.

GPT-4’s score would put it in a good position for acceptance into a top-20 law school, and is only a few points short of the reported scores required for admission to prestigious schools such as Harvard, Stanford, Princeton or Yale.

ChatGPT’s previous version scored only 149 on the LSAT, placing it in the bottom 40%.

GPT-4 also scored 298 out of 400 on the Uniform Bar Exam — a test taken by recently graduated law students that allows them to practice as attorneys in any U.S. jurisdiction.

The previous version of ChatGPT struggled on this test, finishing in the bottom 10% with a score of 213 out of 400.

On the SAT Evidence-Based Reading & Writing and SAT Math exams taken by U.S. students to measure college readiness, GPT-4 scored in the 93rd and 89th percentiles, respectively.

GPT-4 did well in the “hard” sciences too, posting above-average percentile scores in AP Biology (85th–100th), Chemistry (71st–88th) and Physics 2 (66th–84th).

But its AP Calculus score was fairly average, ranking in the 43rd to 59th percentile.

Another area where GPT-4 struggled was English literature, scoring in the 8th to 44th percentile across two separate exams.

OpenAI said GPT-4 and GPT-3.5 took these tests using the 2022–2023 practice exams, and that “no specific training” was given to the language processing tool:

“We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative.”

The results also stirred up strong reactions in the Twitter community.

Nick Almond, the founder of FactoryDAO, told his 14,300 Twitter followers on March 14 that GPT-4 will “scare people” and will “destroy” the global education system.

Former Coinbase director Conor Grogan said he pasted an Ethereum smart contract directly into GPT-4, and the chatbot instantly highlighted several “security vulnerabilities” and outlined how the code could be exploited.

In earlier tests of smart contract auditing with ChatGPT, its first version was also able to spot exploitable bugs to a reasonable degree.
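For readers curious what “pasting a contract into GPT-4” looks like in practice, a minimal sketch is shown below. It is not Grogan’s actual workflow: it assumes the official OpenAI Python client, an OPENAI_API_KEY in the environment, and a hypothetical Solidity `Vault` contract with a textbook reentrancy flaw used purely to illustrate the prompt.

```python
# Illustrative sketch only, not the workflow described in the article.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical contract with a classic reentrancy bug: the external call to
# msg.sender happens before the balance is zeroed.
CONTRACT = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;  // state updated only after the external call
    }
}
"""

# Ask the model to act as an auditor and point out weaknesses in the code.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a smart contract security auditor."},
        {"role": "user", "content": "List any security vulnerabilities in this "
                                    "contract and explain how they could be "
                                    "exploited:\n" + CONTRACT},
    ],
)

print(response.choices[0].message.content)
```

A response from the model is advisory only; anything it flags (or misses) still needs verification by a human auditor or dedicated analysis tools.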

Rowan Cheung, author of the AI newsletter The Rundown, shared a video of GPT-4 turning a mock-up of a website, hand-drawn on a piece of paper, into working code.
