OpenAI’s GPT-4 exhibits “human-level performance” on professional benchmarks

A colorful AI-generated image of a radiating silhouette. (credit: Ars Technica)

On Tuesday, OpenAI announced GPT-4, a large multimodal model that accepts text and image inputs, returns text output, and “exhibits human-level performance on various professional and academic benchmarks,” according to OpenAI. Also on Tuesday, Microsoft announced that Bing Chat has been running on GPT-4 all along.

If it performs as claimed, GPT-4 could mark the opening of a new era in artificial intelligence. “It passes a simulated bar exam with a score around the top 10% of test takers,” writes OpenAI in its announcement. “In contrast, GPT-3.5’s score was around the bottom 10%.”

OpenAI is releasing GPT-4’s text capability through ChatGPT and its commercial API, initially behind a waitlist; it is already available to ChatGPT Plus subscribers. The firm is also testing GPT-4’s image input capability with a single partner, Be My Eyes, a smartphone app that can recognize a scene and describe it for blind and low-vision users.
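
For developers coming off the waitlist, below is a minimal sketch of requesting a GPT-4 text completion with the official openai Python package (v1+ client interface). The model name "gpt-4" and the example prompt are assumptions for illustration; actual access depends on your account and the waitlist rollout.

```python
# Minimal sketch: calling GPT-4's text capability via OpenAI's chat API.
# Assumes the openai Python package (>=1.0) and an API key with GPT-4 access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; availability is gated by the waitlist
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's bar exam result in one sentence."},
    ],
)

# Print the model's text reply
print(response.choices[0].message.content)
```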
