Soul App’s Multimodal AI Revolution at GITEX GLOBAL 2024

Soul App, a social networking platform with a tremendous following among China’s Gen Z, has consistently harnessed artificial intelligence to create realistic and meaningful online social experiences. Because digital interactions play an integral role in shaping social lives, Soul’s efforts have drawn the attention of industry watchers and sustained strong user engagement.

To ensure the platform retains its pole position, the team behind Soul App has been steadily enhancing its artificial intelligence capabilities. The latest of these developments was presented at GITEX GLOBAL 2024 in Dubai.

Soul’s engineers unveiled the platform’s new 3D virtual human and multimodal AI interaction model, which is designed to redefine digital companionship and break down traditional social barriers in virtual spaces.

Since its launch, Soul has been committed to building a unique social experience that shifts away from typical appearance-focused networking. Unlike other social apps and websites, Soul shuns the use of real photos, allowing users to express themselves without the pressure of disclosing physical attributes.

Instead, users are encouraged to turn to the platform’s facial customization tool to build personalized virtual avatars that reflect their individuality beyond physical appearance. While this groundbreaking approach was well received by Gen Z, the platform’s top brass were quick to realize that more would be needed to hold Zoomers’ interest over time.

So, Soul App’s engineers have put artificial intelligence to work in a myriad of ways to enhance user interaction on the platform. The new multimodal model is another step in that direction and is likely to set a new standard for social AI. It lets users create digital versions of themselves that visually depict both their personalities and their facial features.

A key factor behind the model’s ability to deliver truly immersive, human-like interactions is that it accepts several forms of input, such as text, voice, and real-time motion capture. By combining these modalities, the avatars it generates can express themselves more naturally.
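To make the idea of combining modalities concrete, here is a minimal, purely illustrative Python sketch. The class, function names, and weights are hypothetical and do not represent Soul App’s actual architecture; the sketch only shows how signals from text, voice, and motion capture could be merged into a single expression state for an avatar.

    # Purely illustrative sketch -- the names and weights below are hypothetical
    # and are not Soul App's actual API or model architecture.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ModalityInput:
        text: Optional[str] = None            # a typed message
        voice_tone: Optional[float] = None    # e.g. 0.0 (calm) to 1.0 (excited)
        pose_landmarks: List[float] = field(default_factory=list)  # motion-capture keypoints

    def drive_expression(inputs: ModalityInput) -> dict:
        """Fuse whichever modalities are present into one expression state."""
        expression = {"smile": 0.0, "energy": 0.0}
        if inputs.text and any(w in inputs.text.lower() for w in ("great", "love", "haha")):
            expression["smile"] += 0.5                 # crude text sentiment cue
        if inputs.voice_tone is not None:
            expression["energy"] += inputs.voice_tone  # vocal excitement
        if inputs.pose_landmarks:
            expression["energy"] += 0.2                # live motion adds liveliness
        return expression

    print(drive_expression(ModalityInput(text="haha, love it", voice_tone=0.8)))

In a real system each of those crude rules would be replaced by a trained model, but the fusion principle is the same: the more modalities available, the richer the resulting expression.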

The model is built on Soul App’s NAWA engine, introduced in 2022. This sophisticated system can now render highly realistic 3D avatars in real time, analyzing more than 90 facial characteristics and allowing users to design avatars that are nearly identical to their real selves.

These 3D manifestations make virtual interactions feel as authentic as in-person conversations. Aptly termed the “Digital Twin”, these avatars can resemble users as closely as required. The best part is that the resemblance is not limited to physical appearance.

What makes these avatars effective communication tools is that they can be made to reflect the user’s personality, preferences, and even memories of past interactions on the platform. This revolutionary concept brings a new level of personalization to the digital world, allowing users to experience social interactions that mirror their real-life dynamics.

At GITEX GLOBAL, Soul App demonstrated how a monocular camera can capture the data needed for its real-time motion capture technology. The visual data is then fed to the machine learning models that create the 3D avatars based on the user’s instructions.
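For readers curious how landmark data can be pulled from a single camera, the short Python sketch below uses the open-source OpenCV and MediaPipe libraries as stand-ins. Soul App’s production pipeline is proprietary, so this is only a rough illustration of the general approach, not the company’s actual code.

    # Rough illustration of monocular motion capture with off-the-shelf tools.
    # Soul App's own pipeline is proprietary; this only shows the general idea
    # of extracting body landmarks from a single webcam feed.
    import cv2
    import mediapipe as mp

    pose = mp.solutions.pose.Pose(model_complexity=1)
    cap = cv2.VideoCapture(0)  # a single (monocular) webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Landmark 0 is the nose; each landmark carries normalized x, y and
            # an estimated depth z, which an animation rig could consume.
            nose = results.pose_landmarks.landmark[0]
            print(f"nose at x={nose.x:.2f}, y={nose.y:.2f}, z={nose.z:.2f}")
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to stop
            break

    cap.release()
    pose.close()

Streaming those landmark coordinates to an avatar’s rig, frame after frame, is what lets the on-screen figure mirror the user’s movements with almost no delay.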

Because the virtual avatars replicate these movements in real time, they create a deeply immersive, lifelike experience. Soul treated GITEX GLOBAL attendees to their very own 3D avatars, which could replicate just about any physical movement the users asked of them. Eventgoers were stunned by how authentically the avatars reflected their emotions and movements, and by how quickly those inputs were picked up.

With this novel feature, Soul App has brought down the “dimensional barrier” in social networking. In essence, the 3D avatars generated by the multimodal model allow users to engage in lifelike interactions without the constraints of physical presence.

Another plus is that users are able to both express emotions and see the emotional responses of the person they are interacting with. And that makes a huge difference on a platform like Soul, which is 100% interest-based.

On most other social media platforms, avatars, even when provided, are minor tools of communication and expression, while the central element of a user’s profile is always the individual’s photograph, something that ties back to the user’s real life. But here is the thing: no matter how many times users change their profile pictures, those pictures remain static elements that add nothing to interactions with other users.

In contrast, 3D avatars on Soul App play a crucial role in infusing emotion and even empathy into interactions. Needless to say, this emotional undercurrent makes the experience more fulfilling and enjoyable.

Mao Ting, the CTO of Soul, told GITEX GLOBAL attendees that the company will continue to enhance its AI capabilities to offer more enriching social experiences to the platform’s users. He also said that by the end of 2024, the platform’s multimodal AI model would gain full-duplex video calling capabilities, allowing users to engage in simultaneous audio and video conversations with their digital twins. That statement has given Soul App’s users something even more exciting to look forward to.
