Amazon has released an upgraded version of its in-house image-generating model, Titan Image Generator, for AWS customers using its Bedrock generative AI platform.
Simply called Titan Image Generator v2, the new model brings with it several new capabilities, AWS principal developer advocate Channy Yun explains in a blog post. Users can “guide” the images they generate using reference images, edit existing visuals, remove backgrounds and generate variations of images, says Yun.
“Titan Image Generator v2 can intelligently detect and segment multiple foreground objects,” Yun writes. “With the Titan Image Generator v2, you can generate color-conditioned images based on a color palette. [And] you can use the image conditioning feature to shape your creations.”
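The color-palette feature Yun describes is exposed through Bedrock's model invocation API as a JSON request body. As a minimal sketch, the helper below (a hypothetical function name; the field names follow AWS's published request schema for the model, but should be verified against the current Bedrock documentation) builds such a color-conditioned request:

```python
import json

# Hypothetical helper: build a Bedrock InvokeModel request body for Titan
# Image Generator v2's color-guided generation. Field names follow AWS's
# documented request schema; verify against the current Bedrock docs.
def build_color_guided_request(prompt: str, palette: list[str]) -> str:
    payload = {
        "taskType": "COLOR_GUIDED_GENERATION",
        "colorGuidedGenerationParams": {
            "text": prompt,
            "colors": palette,  # hex color codes, e.g. "#FF9800"
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
        },
    }
    return json.dumps(payload)
```

The serialized string would then be passed as the `body` of a `bedrock-runtime` `invoke_model` call.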
Titan Image Generator v2 supports image conditioning, optionally taking in a reference image and focusing on specific visual characteristics in that image, like edges, object outlines and structural elements. The model can also be fine-tuned using reference images like a product or company logo, so that generated images maintain a consistent aesthetic.
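The image-conditioning mode takes that reference image as a base64-encoded field in the request body, alongside a control mode (edge- or segmentation-based) and a strength value. A minimal sketch, again with a hypothetical helper name and field names taken from AWS's published request schema (worth double-checking against the current docs):

```python
import base64
import json

# Hypothetical helper: build a Bedrock InvokeModel request body for Titan
# Image Generator v2's image-conditioning mode. Field names follow AWS's
# documented request schema; verify against the current Bedrock docs.
def build_conditioned_request(prompt: str,
                              reference_image: bytes,
                              control_mode: str = "CANNY_EDGE",
                              control_strength: float = 0.7) -> str:
    payload = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt,
            # Reference image whose edges or segments guide the output
            "conditionImage": base64.b64encode(reference_image).decode("utf-8"),
            "controlMode": control_mode,          # or "SEGMENTATION"
            "controlStrength": control_strength,  # 0.0-1.0: how closely to follow
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
        },
    }
    return json.dumps(payload)

# Usage would look roughly like this (requires AWS credentials and access
# to the model; the model ID is the v2 identifier listed by Bedrock):
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(
#       modelId="amazon.titan-image-generator-v2:0",
#       body=build_conditioned_request("a studio photo of a sneaker", ref_bytes),
#   )
```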
AWS remains vague about which data, exactly, it uses to train its Titan Image Generator models. The company previously told TechCrunch only that the data is a combination of proprietary and licensed sources.
Few vendors readily reveal such information; they see training data as a competitive advantage and keep it, and anything relating to it, a closely guarded secret. Training data details are also a potential source of IP-related lawsuits, a further disincentive to disclose much.
In lieu of transparency, AWS offers an indemnification policy that covers customers in the event a Titan model like Titan Image Generator v2 regurgitates (i.e., spits out a mirror copy of) a potentially copyrighted training example.
On the company’s recent second-quarter earnings call, Amazon CEO Andy Jassy said he’s still “very bullish” on generative AI tech like AWS’ Titan models, despite signs of second-guessing from enterprise customers and the mounting costs of training, fine-tuning and serving models.
“In the generative AI space, it’s going to get big fast,” he said, “and it’s largely all going to be built from the get-go in the cloud.”