JAKARTA - NVIDIA has opened early access to the NVIDIA Omniverse Avatar Cloud Engine (ACE), making it easier for developers to build avatars using NVIDIA's artificial intelligence (AI).
"Omniverse ACE makes avatar development easy, providing the AI building blocks needed to add intelligence and animation to any avatar, built on virtually any engine and deployed in any cloud. This AI assistant can be designed for organizations across industries, enabling organizations to improve workflows and open up new business opportunities," said NVIDIA's product marketing manager, Stephanie Rubenstein, in the announcement.
NVIDIA says that ACE is one of several generative AI applications that will help developers accelerate the development of 3D worlds and metaverses.
Announcing early access of @NVIDIAOmniverse Avatar Cloud Engine (ACE) for developers.
Apply now to contribute to the future of interactive #avatars for gaming, the #metaverse and beyond. https://t.co/WO8VNmScbL pic.twitter.com/hXRABzdywl
— NVIDIA AI (@NVIDIAAI) January 3, 2023
Members who join the program will receive access to pre-release versions of NVIDIA's AI microservices, along with the tools and documentation needed to develop cloud-native AI workflows for interactive avatar applications.
Since launching last September, Omniverse ACE has been shared with select partners for initial feedback. Now, NVIDIA is looking for partners who will provide feedback on the microservices, collaborate to improve the products, and push the boundaries of what is possible with lifelike, interactive digital humans.
According to the announcement, the early access program includes pre-release versions of ACE's animation AI and conversational AI microservices, including:
- A 3D animation AI microservice for third-party avatars, which uses Omniverse Audio2Face's generative AI to animate characters in Unreal Engine and other rendering tools, creating realistic facial animations from an audio file alone.
- A 2D animation AI microservice, called Live Portrait, which enables easy animation of 2D portraits or stylized human faces using a live video feed.
- A text-to-speech microservice, which uses NVIDIA Riva TTS to synthesize natural-sounding speech from a raw transcript without any additional information such as speech patterns or rhythm (a minimal usage sketch follows this list).
- Program members will also gain access to tools, sample reference applications, and support resources to help them get started.
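The text-to-speech microservice is built on NVIDIA Riva, whose client library is publicly available. As a rough illustration of the capability described above, here is a minimal sketch using the nvidia-riva-client Python package; the server address, voice name, and output file are assumptions made for illustration and are not taken from the announcement.

# Minimal sketch: synthesizing speech from a raw transcript with NVIDIA Riva TTS,
# the technology the ACE text-to-speech microservice is built on. The server URI,
# voice name, and file name are assumptions, not details from the article.
import wave

import riva.client  # pip install nvidia-riva-client

# Connect to a Riva speech server (assumed to be running locally on the default port).
auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

SAMPLE_RATE_HZ = 44100

# Synthesize speech from plain text; no prosody or timing markup is required.
response = tts.synthesize(
    text="Welcome to the Omniverse Avatar Cloud Engine early access program.",
    voice_name="English-US.Female-1",  # assumed voice; depends on the deployed models
    language_code="en-US",
    sample_rate_hz=SAMPLE_RATE_HZ,
)

# response.audio holds raw 16-bit mono PCM; save it as a WAV file.
with wave.open("greeting.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(SAMPLE_RATE_HZ)
    out.writeframes(response.audio)

Run against a Riva server, a script like this produces a WAV file of synthesized speech, which could in turn feed an animation service such as Audio2Face, the component the announcement says generates facial animation from audio alone.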