Instagram Explains How Its AI Learns Users' Behaviors
Aadhya Khatri - Nov 26, 2019
According to Ivan Medvedev, a software engineer at Instagram, this is the first time the company has revealed how one of the site's building blocks works
Instagram recently revealed how its machine learning for content recommendation works, with an emphasis on suggesting accounts each user may want to follow rather than just individual posts.
The blog post contains a lot of technical information and few surprises. Still, it reveals much of the behind-the-scenes work, especially at a time when content-recommendation algorithms have come under fire for leading users toward harmful content.
While the public focuses more on YouTube when it comes to extremist and hateful content, Instagram has its share of the blame: as a popular social network, the site is no exception, and it also hosts plenty of misinformation.
Instagram's engineers explained in detail how the Explore tab works while avoiding any sensitive political issues. According to Ivan Medvedev, a software engineer at Instagram, this is the first time the company has revealed in such detail how one of the site's building blocks works.
The post explained how varied the content on Instagram can be, with topics ranging from slime to Arabic calligraphy. This poses a difficulty in recommending content each user may like. The solution Instagram applies is to recommend accounts, not individual posts.
The machine learning method Instagram uses is quite common. It is based on word embedding, which judges how related two words are by the contexts in which they appear: for example, the word “fire” shows up less often near “sandwich” and “pelican” than near “truck” and “alarm.” Instagram applies the same idea to determine how related two accounts are.
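To make the analogy concrete, here is a minimal sketch of the idea, assuming each training “sentence” is the ordered list of accounts a single user interacted with in one session. The session data, account names, and choice of the gensim library are illustrative assumptions, not Instagram's actual implementation.

```python
# Sketch: treat hypothetical user sessions as "sentences" of account IDs
# and train a word-embedding model over them. Accounts that appear in
# similar sessions end up with similar vectors, just as words that appear
# in similar contexts do.
from gensim.models import Word2Vec

sessions = [
    ["slime_asmr", "satisfying_slime", "glitter_crafts"],
    ["satisfying_slime", "glitter_crafts", "slime_asmr"],
    ["arabic_calligraphy", "ink_letters", "calligraphy_daily"],
    ["calligraphy_daily", "arabic_calligraphy", "ink_letters"],
]

model = Word2Vec(sessions, vector_size=32, window=3, min_count=1, sg=1, epochs=200)

# Accounts from the same kind of session score as more closely related.
print(model.wv.most_similar("slime_asmr", topn=3))
```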
The recommendations are built around seed accounts, those whose content a user has liked or interacted with. The machine learning system then uses the method above to filter related accounts from across Instagram. The next step is to pick 500 pieces of content from the short-listed accounts.
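The candidate-generation step could look roughly like the sketch below, which assumes account embeddings like those in the previous snippet; the helper names, the similarity measure, and the way 500 posts are sampled are guesses for illustration, not details from the blog post.

```python
# Sketch of candidate generation: start from seed accounts, find the
# most similar accounts by embedding distance, then pool ~500 of their
# posts as Explore candidates. All parameters here are illustrative.
import random
import numpy as np

def related_accounts(seed_ids, embeddings, top_k=20):
    """Rank other accounts by cosine similarity to the seed accounts' centroid."""
    centroid = np.mean([embeddings[a] for a in seed_ids], axis=0)
    scores = {
        account: float(vec @ centroid / (np.linalg.norm(vec) * np.linalg.norm(centroid)))
        for account, vec in embeddings.items()
        if account not in seed_ids
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def sample_candidate_posts(accounts, posts_by_account, n_candidates=500):
    """Pool recent posts from the short-listed accounts and keep ~500 of them."""
    pool = [post for account in accounts for post in posts_by_account.get(account, [])]
    random.Random(0).shuffle(pool)
    return pool[:n_candidates]
```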
However, before these posts appear in the Explore tab, they must be filtered to remove misinformation, spam, and violations of the site's policies. The last step is to rank the posts by how likely the user is to interact with them and then pick out the top 25 to recommend.
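A toy version of that final stage might look like the following, where the policy flags and the predicted interaction probability attached to each candidate are hypothetical stand-ins; the blog post does not describe the actual ranking model or filters.

```python
# Sketch of the filter-and-rank stage: drop flagged posts, rank the rest
# by predicted likelihood of interaction, and keep the top 25.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    post_id: str
    predicted_engagement: float              # hypothetical model score, e.g. p(like/save)
    flags: set = field(default_factory=set)  # e.g. {"spam", "misinformation"}

def rank_for_explore(candidates, top_n=25):
    # 1. Filter out posts flagged as spam, misinformation, or policy violations.
    eligible = [c for c in candidates if not c.flags]
    # 2. Rank by how likely the user is to interact with each post.
    eligible.sort(key=lambda c: c.predicted_engagement, reverse=True)
    # 3. Keep the top posts for the Explore grid.
    return eligible[:top_n]

posts = [
    Candidate("p1", 0.91),
    Candidate("p2", 0.97, flags={"spam"}),
    Candidate("p3", 0.62),
]
print([c.post_id for c in rank_for_explore(posts)])  # ['p1', 'p3']
```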
The blog post is fairly candid, but Instagram's engineers understandably cannot reveal everything about the secret sauce behind the Explore tab. It says nothing about the criteria by which a post is judged to be spam or misinformation, and nothing about how much influence machine learning has over the recommended content, which may raise concern given that Facebook, the company that owns Instagram, seems to rely heavily on AI.
Instagram does try to flag inappropriate posts before showing them to users, but the process does not seem to be very effective. Take the anti-vaccine trend as an example: Instagram relies on obvious hashtags to track posts, as well as help from the World Health Organization and other agencies, to identify which posts it needs to remove.
While how reliable this AI is remains unclear, Medvedev said Instagram was trying to improve it. He said the company was training the artificial intelligence to proactively detect misinformation and take action against it.
Another takeaway from the post is that if users wish to see more of what they like, the best way is to interact with content that interests them. And if the machine learning misjudges what you really like and recommends something you would rather not see, Instagram has a tool that lets you restrict certain kinds of content: tap the three-dot icon on a post and choose “See Fewer Posts Like This.”