This Neural Network Can Make Game Characters Move Like A Real Human

Aadhya Khatri - Nov 01, 2019



This neural network works much like the ones behind deepfake videos, except its database holds the motions of live performers captured on a soundstage

Video games let players immerse themselves in a virtual world limited only by the developers’ imagination. Developers, however, have long struggled to make in-game characters move and interact like real people. To avoid the awkward postures that remind players they are in a game, studios are turning to machine learning and AI to create character movements that look as realistic as possible.

Conventionally, developers record how a real human walks, runs, jumps, and performs other actions, then translate those recordings into digital form. This motion-capture approach produces more realistic results than animating characters by hand, but it has its shortcomings.

According to the researchers, there is no way to plan for every possible movement a character can make or every way it will interact with the world around it. Developers try to capture as many motions as they can, but they will inevitably miss something. So even though they know the results may look stiff or unnatural, they still have to rely partly on software to bridge the transitions, and in most cases the results feel like a compromise.

Recently, researchers at Adobe Research and the University of Edinburgh have found a solution to the problem of unrealistic, stiff movement in in-game characters, and they will present their findings at the ACM SIGGRAPH Asia conference in Brisbane in December. The solution uses machine learning to correct the once-inevitable hiccups that gamers usually notice in video games.

Machine learning smooths over the animation hiccups that gamers usually see in video games

The new method takes a similar approach to the way deepfake videos are created. First, a neural network learns a person’s face from every angle and with every possible expression, using a database containing tens of thousands of images of the subject’s head. The process is undoubtedly time-consuming, but once the network has been trained, face swaps can be generated almost instantly, and the results look as if the videos feature the real person.

This solution’s neural network works in much the same way, but its database holds the motions of live performers captured on a soundstage. To get the best possible results, the database must contain a large number of movements in which the subject does everything from picking up objects to sitting down on a chair or climbing over walls.

The neural network analyzes what it picks up, learns from it, and applies it to game characters. The best part of this method is that the database does not need to be exhaustive, because the AI can apply what it has learned to almost any situation and environment, somewhat like a real human does, and the results can look as natural as real life. The gaps created when a character walks toward a chair, slows down as it gets close, turns around, and sits down are filled in, and the AI links all of these motions together so that the seams go unnoticed.
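To make the idea concrete, here is a minimal, purely illustrative sketch, not the researchers’ actual model: a small network trained on motion-capture frames learns to predict a character’s next pose from its current pose and a goal (such as a chair to sit on), so the game can generate the in-between motion itself. The dimensions, names, and the use of PyTorch are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    POSE_DIM = 66   # assumed: 22 joints x 3 rotation values per frame
    GOAL_DIM = 4    # assumed: target position (x, y, z) plus an action id

    class NextPosePredictor(nn.Module):
        """Predicts the character's pose in the next frame from the
        current pose and a goal, e.g. 'sit on the chair over there'."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(POSE_DIM + GOAL_DIM, 256),
                nn.ReLU(),
                nn.Linear(256, 256),
                nn.ReLU(),
                nn.Linear(256, POSE_DIM),
            )

        def forward(self, pose, goal):
            return self.net(torch.cat([pose, goal], dim=-1))

    model = NextPosePredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(1000):
        # In a real pipeline these tensors would come from motion-capture
        # clips of a performer walking, sitting, climbing and so on;
        # random tensors stand in for them here.
        current_pose = torch.randn(32, POSE_DIM)
        goal = torch.randn(32, GOAL_DIM)
        next_pose = torch.randn(32, POSE_DIM)

        prediction = model(current_pose, goal)
        loss = loss_fn(prediction, next_pose)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # At runtime the trained model would be called frame by frame, feeding
    # each predicted pose back in as the new current pose, which is what
    # lets it fill the gap between walking toward a chair and sitting down.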

Another benefit of letting the game itself determine how its characters should move is smaller file sizes, which will become more and more relevant now that game streaming is taking off and many tech giants are adopting it.

If the approach proves useful, we may soon see this neural network helping game characters perform more complex motions. For now, we rarely see more than two characters fighting one another outside of cut scenes. That may no longer be the case in the near future.
