
It’s the future you probably didn’t ask for: being nagged by Artificial Intelligence to stop being “offensive” and “bullying.”

Instagram touted its new anti-bullying Artificial Intelligence program in its Dec. 16 blog about the social media giant’s “long-term commitment to lead the fight against online bullying.” Instagram claims the AI program “notifies people when their captions on a photo or video may be considered offensive, and gives them a chance to pause and reconsider their words before posting.”

Instagram originally announced this new AI that preempts offensive posts in a July 8th blog headlined “Our Commitment to Lead the Fight Against Online Bullying.” The Big Tech photo-sharing giant wrote that the program gives users “a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification. From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

Instagram has been experimenting with tackling the issue of “bullying” for quite some time now. Previously this took the form of a content filter that was created to “help keep Instagram a safe place for self-expression” by “blocking offensive comments.” Instagram CEO & Co-Founder Kevin Systrom wrote in a June 2017 blog that “we’ve developed a filter that will block certain offensive comments on posts and in live video,” further specifying that the content filter was intended “to foster kind, inclusive communities on Instagram.” This filter program came to fruition in May 2018 with a follow-up blog proclaiming that “Instagram will filter bullying comments intended to harass or upset people” in order to keep the platform an “inclusive, supportive” place.

Instagram followed this filter with a separate AI program that anticipates users’ offensive posts rather than merely filtering them retroactively.

Providing an update on the AI program, Instagram wrote that the "[r]esults have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance.”

This program is initially being rolled out in “select countries,” though it will soon be “expanding globally in the coming months,” Instagram noted in its blog.

The process, as Instagram explains it, is that when an Instagram user writes a caption on a post, “and our AI detects the caption as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying.” Users will then have the “opportunity” to change their caption before posting it.

How serious are these warnings? What is the price of not heeding them? According to the recent blog, “In addition to limiting the reach of bullying, this warning helps educate people on what we don’t allow on Instagram, and when an account may be at risk of breaking our rules.”


The example of one such offensive comment shown on the blog was a user commenting “you’re stupid” before being sent the notification, which read: “This caption looks similar to others that have been reported.” The question remains: what constitutes bullying, what constitutes a critique, and what potential biases lead the AI to classify various comments as “offensive” or “bullying”?

But how can a computer program be biased? Rep. Alexandria Ocasio-Cortez explained this in a way that the left may find difficult to debunk. Speaking at an MLK Now event in January, she accused algorithms of potentially being rife with bias, claiming that algorithms "always have these racial inequities that get translated… if you don't fix the bias, then you're just automating the bias.” LiveScience backed up Ocasio-Cortez’s claim by citing an example from facial recognition: if a program is being trained to recognize women in photographs, and all the images it is given are of women with long hair, “then it will think anyone with short hair is a man.”
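The mechanism LiveScience describes can be sketched with a toy example. All data below is invented purely for illustration: a trivial “classifier” is trained only on long-haired women and short-haired men, so it learns hair length as the deciding feature and then misclassifies a short-haired woman.

```python
# Toy illustration (hypothetical data) of how skewed training data
# "automates the bias": if every woman in the training set has long
# hair, hair length becomes the model's only signal.

def train(examples):
    """Learn the mean hair length (cm) seen for each label."""
    sums, counts = {}, {}
    for length, label in examples:
        sums[label] = sums.get(label, 0) + length
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, length):
    """Assign the label whose average hair length is closest."""
    return min(model, key=lambda label: abs(model[label] - length))

# Biased training set: all the pictured women have long hair.
training_data = [(45, "woman"), (38, "woman"), (50, "woman"),
                 (5, "man"), (8, "man"), (3, "man")]

model = train(training_data)

# A short-haired woman (7 cm) lands closer to the "man" average,
# so the skew in the data becomes a skew in every prediction.
print(classify(model, 7))  # prints "man"
```

The point is not that real systems are this crude, but that no matter how sophisticated the model, it can only generalize from the examples it was shown.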

If Instagram’s algorithm is trained to see mere disagreement as a form of bullying, or fact-checking by opposing political figures as offensive, then it will categorize such posts accordingly. This has scary implications for current American politics.

Instagram may have done something similar already, when it protected Sen. Elizabeth Warren (D-MA) from critique in February 2019. GOP spokeswoman Kayleigh McEnany tweeted, “I have been warned by @instagram and cannot operate my account because I posted an image of Elizabeth Warren’s Bar of Texas registration form via @washingtonpost. I’m warned that I am ‘harassing,’ ‘bullying,’ and ‘blackmailing’ her.”

Later, as reported by the Daily Caller, Instagram reinstated McEnany’s account and sent an apology, saying that it mistook the post for sharing her private address.
