OpenAI’s DALL-E AI is becoming a scary-good graphic artist

OpenAI, the San Francisco-based company best known for its massive GPT-3 natural language model, announced on Wednesday it is releasing a second version of its text-to-image AI model.

Like its predecessor, the new DALL-E 2 is a neural network that creates images based on natural-language phrases fed in by the user. But while the original DALL-E’s images were low resolution and conceptually basic, images generated by DALL-E 2 are five times more realistic and accurate, OpenAI researchers tell Fast Company. What’s more, the second DALL-E is actually a smaller neural network. (OpenAI declined to specify DALL-E 2’s size in parameters.)

DALL-E 2 was asked to create an image of an astronaut riding a horse. [Image: OpenAI]

DALL-E 2 is also a multimodal neural network, meaning it can process both natural language and images. You can show the model two different images, for example, and ask it to create new images that combine aspects of the sources in various ways.

And the creativity the system seems to display while doing it is, well, a little unsettling. During a demonstration Monday, DALL-E 2 was given two images: one that looked like street art, the other something like Art Deco. It quickly created a set of 20 or so images arranged in a grid, each different from its neighbors. The system combined varying visual aspects of the source images in a number of ways. In some, it seemed to let the dominant style of one source image be fully expressed while suppressing the style of the other. Taken together, the new images had a design language distinct from that of either source.
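OpenAI hasn’t said publicly how DALL-E 2 performs this blending, but one common way for embedding-based image models to produce such grids is to interpolate between the two source images’ embedding vectors and decode each intermediate point into a new image. The short Python sketch below illustrates spherical linear interpolation (slerp) over unit-norm embeddings; the 512-dimensional random vectors are stand-ins for real image embeddings, not OpenAI’s actual pipeline.

import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two embedding vectors.

    t=0 returns v0's direction, t=1 returns v1's; intermediate values
    trace the shortest arc between them on the unit hypersphere.
    """
    v0 = v0 / np.linalg.norm(v0)
    v1 = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two embeddings
    if theta < 1e-6:  # nearly identical vectors; fall back to a linear mix
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Sweeping t from 0 to 1 yields a row of blends between the two sources;
# a decoder would render each blended embedding as a new image.
emb_a = np.random.randn(512)  # stand-in embedding for source image A
emb_b = np.random.randn(512)  # stand-in embedding for source image B
blends = [slerp(emb_a, emb_b, t) for t in np.linspace(0.0, 1.0, num=5)]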

DALL-E 2 created variations on “bowl of soup that looks like a monster,” first painted on plasticine, then spray-painted on a wall. [Images: OpenAI]

“It’s really fascinating watching these images being generated with math,” OpenAI algorithms researcher Prafulla Dhariwal says. “And it’s very beautiful.”

OpenAI engineers took pains to explain the steps they’re taking to prevent the model from creating untoward or harmful images. They removed all images containing nudity, violence, or gore from the training data set, OpenAI researcher Mark Chen says. Without that material in the training data, Chen says, it’s “exceedingly unlikely” that DALL-E 2 will produce such content accidentally. Human reviewers at OpenAI will also monitor the images users create with DALL-E 2. “Adult, violent, or political content won’t be allowed on the platform,” Chen says.

OpenAI says it plans to gradually roll out access to the new model to groups of “trusted” users. “Eventually we hope to offer access to DALL-E 2 through an API [application programming interface],” Dhariwal says. Developers will then be able to build their own apps on top of the AI model.
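As a rough sketch of what building on such an API could look like, here is a hypothetical Python example. The endpoint URL, request fields, and response shape are all placeholder assumptions; OpenAI had not published an interface at the time of this article.

import base64
import requests

# Hypothetical endpoint and credential; OpenAI has not published these.
API_URL = "https://api.example.com/v1/images/generations"
API_KEY = "YOUR_API_KEY"

def generate_images(prompt: str, n: int = 1) -> list[bytes]:
    """Request n images for a text prompt and return them as raw image bytes."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "n": n, "size": "1024x1024"},
        timeout=60,
    )
    response.raise_for_status()
    # Assumes the service returns base64-encoded image data in a "data" list.
    return [base64.b64decode(item["b64_json"]) for item in response.json()["data"]]

for i, image in enumerate(generate_images("an astronaut riding a horse")):
    with open(f"astronaut_{i}.png", "wb") as f:
        f.write(image)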

DALL-E 2 created a series of variations on the original Girl with a Pearl Earring painting. [Image: OpenAI]

Looking at practical applications of the model, Dhariwal and Chen both envision DALL-E 2 being helpful for graphic designers, who might use the tool to open up new creative avenues. And the developers who eventually access DALL-E 2 via the API will likely find novel applications for the technology.

Chen says DALL-E 2 could be an important tool because, while creating language feels natural to human beings, creating imagery doesn’t come quite as easily.

But DALL-E 2 would be worth building even without any immediate practical application. As a multimodal AI, it has foundational research value that may benefit other AI systems for years to come.

“Vision and language are both key parts of human intelligence; building models like DALL-E 2 connects these two domains,” Dhariwal says. “It’s a very important step for us as we try to teach machines to perceive the world the way humans do, and then eventually develop general intelligence.”


Source: Fast Company
