How will new clothes look from the back and sides when you wear them?
Fit from every angle is an important criterion when buying clothes, but it is hard to judge when shopping at an online store where you cannot try items on yourself. Online stores often show only a single frontal photo of a model wearing the garment.
Artificial intelligence, however, may soon solve this problem. Researchers have developed an AI that, starting from a single photo of a model, can render the model in a variety of poses while preserving the fine details of the clothing, and can even swap the outfit for a different one.
According to the British science magazine ‘New Scientist’, researchers at Virginia Tech in the US first developed an algorithm that identifies the limbs and joints of the body and segments the original photo into body parts.
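The joint-and-limb step can be pictured with toy geometry: treat each limb as a line segment between two joints and assign every pixel to the segment it lies closest to. A minimal sketch, in which the keypoints, limb names, and nearest-segment rule are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

# Hypothetical 2D joint keypoints (x, y) for a tiny stick figure;
# a real system would estimate these with a pose-detection network.
JOINTS = {
    "head": (5.0, 1.0),
    "neck": (5.0, 3.0),
    "hip": (5.0, 7.0),
    "l_hand": (2.0, 5.0),
    "r_hand": (8.0, 5.0),
}

# Limbs defined as segments between two joints.
LIMBS = {
    "torso": ("neck", "hip"),
    "left_arm": ("neck", "l_hand"),
    "right_arm": ("neck", "r_hand"),
}

def point_to_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    p, a, b = np.array(p, float), np.array(a, float), np.array(b, float)
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def label_pixel(p):
    """Assign a pixel to the nearest limb segment."""
    return min(LIMBS, key=lambda name: point_to_segment_dist(
        p, JOINTS[LIMBS[name][0]], JOINTS[LIMBS[name][1]]))

print(label_pixel((5.0, 6.0)))  # lies on the torso segment -> "torso"
print(label_pixel((2.5, 4.5)))  # closest to the left arm -> "left_arm"
```

Running this over every pixel of a silhouette yields a crude part map; the published system produces a far richer segmentation, but the idea of attaching pixels to skeleton parts is the same.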
Neural networks compete with each other to find a natural posture
The researchers then fed the consumer's desired pose into an artificial neural network, which determines the new location of each body part.
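The repositioning step can be sketched as a simple geometric transform per body part: a limb's pixels follow the limb's segment to its new position. The network in the paper learns this implicitly; the explicit similarity transform below (rotation, scale, and translation via a complex-number trick) is only an illustrative stand-in:

```python
# A per-part repositioning sketch: map the pixels of one limb onto its
# new location with a similarity transform (rotate + scale + translate).
# All coordinates here are made up for illustration.

def segment_transform(src_a, src_b, dst_a, dst_b):
    """Return a function mapping points near segment src_a-src_b
    onto the corresponding points near segment dst_a-dst_b."""
    s = complex(src_b[0] - src_a[0], src_b[1] - src_a[1])
    d = complex(dst_b[0] - dst_a[0], dst_b[1] - dst_a[1])
    ratio = d / s  # encodes both the rotation and the scale change
    def apply(p):
        z = complex(p[0] - src_a[0], p[1] - src_a[1]) * ratio
        return (dst_a[0] + z.real, dst_a[1] + z.imag)
    return apply

# Move an arm that pointed right (toward (1, 0)) so it hangs straight down.
move = segment_transform((0, 0), (1, 0), (0, 0), (0, 1))
print(move((0.5, 0.0)))  # the arm's midpoint lands at (0.0, 0.5)
```

Applying one such transform per limb moves every body part to the target pose; the GAN described next then fixes up everything a rigid transform gets wrong, such as cloth folds and shading.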
At this stage, the AI uses a generative adversarial network (GAN) to adapt key elements such as the model’s face and clothing to the new pose.
A GAN is a machine learning technique widely used to create deepfake videos, fake faces that look convincingly real. The AI is split into two competing parts, a generator and a discriminator, that race to produce the image closest to the target. When the generator proposes a face and clothing for the new pose, the discriminator judges how far the proposal deviates from the face and clothing in the original photo and feeds that judgment back. Repeating this cat-and-mouse game produces a new model photo in a natural pose that cannot be distinguished from a real one.
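The generator-versus-discriminator loop can be shown with a deliberately tiny numeric toy: the "real" data is just the number 3.0, the generator is a single parameter, and the discriminator is a logistic classifier. None of this resembles the image model in the paper; it only demonstrates the adversarial update structure:

```python
import math

# Toy GAN sketch. "Real" data = the constant 3.0; the generator's entire
# output is one number g; the discriminator is D(x) = sigmoid(w*x + b).
# Learning rate and step count are arbitrary illustrative choices.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = 3.0
g, w, b = 0.0, 0.0, 0.0   # generator output; discriminator weights
lr = 0.05

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
    w += lr * ((1 - d_real) * real - d_fake * g)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: move g in the direction that raises D(fake)
    # (gradient ascent on log D(fake)).
    d_fake = sigmoid(w * g + b)
    g += lr * (1 - d_fake) * w

print(round(g, 2))  # g has drifted from 0.0 toward the "real" value 3.0
```

The same push-and-pull, scaled up to millions of parameters and whole images, is what lets the system in the article hallucinate plausible faces and cloth folds for poses it has never seen.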
To do this, the face and clothing must first be unwrapped into a flat 2D image and then re-applied to the body in the new pose. Here the researchers used a UV coordinate map to put each body part back in its place. The same method let them dress the model’s body in different clothes.
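UV-map texturing reduces to a lookup: for every pixel of the output image, the UV map says which texture coordinate of the flattened clothing image to copy a color from. A minimal sketch with a made-up 2x2 texture and a map that happens to rotate it by 180 degrees:

```python
import numpy as np

# 'texture' stands in for the unwrapped ("flattened") clothing image.
texture = np.array([[10, 20],
                    [30, 40]])        # a 2x2 toy texture

# For each output pixel in the new pose, uv holds the (row, col) texture
# coordinate to sample; changing this map re-drapes the same texture.
uv = np.array([[(1, 1), (1, 0)],
               [(0, 1), (0, 0)]])     # this particular map is a 180° turn

rows, cols = uv[..., 0], uv[..., 1]
warped = texture[rows, cols]          # fancy indexing does the lookup
print(warped.tolist())                # [[40, 30], [20, 10]]
```

Because the clothing lives in its own flat texture space, swapping the garment means swapping `texture` while keeping the pose-dependent `uv` map, which is exactly why the same machinery also lets the researchers put other clothes on the model.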
Bias problems remain… lower accuracy for models of color
‘New Scientist’ reported that the AI developed by the team generally produced successful results, but had difficulty placing the hands accurately.
The AI bias that often plagues facial image recognition shows up here as well: accuracy was lower for models of color than for white models, with new poses sometimes producing unnatural expressions. The researchers cited the scarcity of diverse clothing-model data for training the AI as one reason. Badour AlBahar, the doctoral student who led the study, said, “If we train the artificial intelligence on more diverse data sets, we expect better results.”
Niki Martinel, a professor at the University of Udine in Italy, told New Scientist, “Overcoming the biases of artificial intelligence could open up huge possibilities for the fashion design and retail industries.” The research was posted to the preprint server arXiv on September 13.