Instagram has launched new measures to prevent bullying online, including a novel use of artificial intelligence to catch offensive messages before they are posted.
Bullying on social media, particularly among young people, has been reported in Japan and many other countries around the world, with online harassment sometimes escalating to crime or suicide.
Noting that it has endeavored for years to reduce bullying via AI that detects harmful comments, photos and videos, the Facebook-owned platform said, "We started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it's posted."
Calling bullying "a complex issue," Instagram said in a release on Monday, "We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves."
The new tool "gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification," Instagram said, adding that teens are unlikely to report online bullying even though they experience it the most.
Instagram said it will also test a new feature called "Restrict" to protect a user's account from unwanted interactions.
"Once you restrict someone, comments on your posts from that person will only be visible to that person. You can choose to make a restricted person's comments visible to others by approving their comments," the company explained.
Under the new feature, restricted people will not be able to see "when you're active on Instagram or when you've read their direct messages," the operator said.