NVIDIA Introduces AI Model That Converts Text into Images


In brief:

NVIDIA has introduced a new AI model called GauGAN2, the successor to its well-known original GauGAN model. This time around, users can create lifelike landscape images of scenes that don’t exist. GauGAN2’s deep learning model lets anyone turn their imagination into a photorealistic image, and it’s easier than ever before. Type “sunset at a beach,” for example, and the AI generates the scene in real time. Add an adjective, as in “sunset at a rocky beach,” and the model, which is based on generative adversarial networks, instantly updates the image.

With the click of a button, users can also generate a segmentation map, a high-level outline showing where objects sit in the scene. They can then switch to drawing, refining the landscape with rough sketches labeled with terms such as sky, tree, rock, and river, which the smart-paintbrush tool blends into the finished artwork.
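
To make the segmentation-map workflow concrete, here is a minimal, self-contained PyTorch sketch of the general idea: a map of coarse labels (sky, tree, rock, river) is one-hot encoded and, together with random noise, conditions an image generator. The ToyConditionalGenerator, the LABELS mapping, and the layer sizes are illustrative assumptions, not NVIDIA’s actual architecture or weights.

```python
# Illustrative sketch only: GauGAN2's code and weights are not used here.
# It shows, in plain PyTorch, how a segmentation map of coarse labels
# can condition an image generator, as described in the article.
import torch
import torch.nn as nn

LABELS = {"sky": 0, "tree": 1, "rock": 2, "river": 3}  # hypothetical label set
NUM_CLASSES = len(LABELS)

class ToyConditionalGenerator(nn.Module):
    """Maps a one-hot segmentation map plus noise to an RGB image (stand-in model)."""
    def __init__(self, num_classes: int, noise_dim: int = 16):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + noise_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, seg_onehot: torch.Tensor) -> torch.Tensor:
        b, _, h, w = seg_onehot.shape
        noise = torch.randn(b, self.noise_dim, h, w)  # stochastic detail
        return self.net(torch.cat([seg_onehot, noise], dim=1))

# Build a rough 256x256 "sketch": sky on top, river along the bottom.
seg = torch.full((1, 256, 256), LABELS["sky"], dtype=torch.long)
seg[:, 192:, :] = LABELS["river"]
seg_onehot = nn.functional.one_hot(seg, NUM_CLASSES).permute(0, 3, 1, 2).float()

generator = ToyConditionalGenerator(NUM_CLASSES)
image = generator(seg_onehot)  # (1, 3, 256, 256) tensor standing in for a landscape
print(image.shape)
```

The original GauGAN builds on spatially-adaptive normalization (SPADE) layers rather than the simple channel concatenation used here, but the contract is the same idea: labeled regions in, a landscape image out.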

Why this is important:

GauGAN2’s AI model was trained on 10 million high-quality landscape photographs using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that ranks among the ten most powerful supercomputers in the world. The researchers used a neural network that learns the connection between words and the images they correspond to. It produces a wider variety of higher-quality images than previous state-of-the-art models, particularly for text-to-image and segmentation-map-to-image applications. The GauGAN2 research demo hints at what image generation could offer artists as a serious creative tool in the future.
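
As a rough illustration of the word-to-image relationship described above, the hedged sketch below embeds a text prompt and uses that embedding to condition a toy generator, so changing one word (“rocky”) changes the output. The ToyTextEncoder, ToyTextToImage, and the tiny vocabulary are hypothetical stand-ins for demonstration only, not the networks NVIDIA trained.

```python
# Illustrative sketch only: a text prompt is turned into an embedding,
# and that embedding conditions an image generator, so different prompts
# produce different images. All components are hypothetical stand-ins.
import torch
import torch.nn as nn

VOCAB = {"sunset": 0, "at": 1, "a": 2, "rocky": 3, "beach": 4}  # toy vocabulary

class ToyTextEncoder(nn.Module):
    """Averages word embeddings into a single prompt vector."""
    def __init__(self, vocab_size: int, embed_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(token_ids).mean(dim=1)

class ToyTextToImage(nn.Module):
    """Maps a prompt vector to a small RGB image."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 3 * 64 * 64)

    def forward(self, text_vec: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(text_vec)).view(-1, 3, 64, 64)

def encode(prompt: str) -> torch.Tensor:
    return torch.tensor([[VOCAB[w] for w in prompt.split()]])

encoder, generator = ToyTextEncoder(len(VOCAB)), ToyTextToImage()
img_a = generator(encoder(encode("sunset at a beach")))
img_b = generator(encoder(encode("sunset at a rocky beach")))
print(img_a.shape, img_b.shape)  # both (1, 3, 64, 64); the prompts yield different images
```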

