Twitter Will Let Users Choose How Their Images Are Cropped After Racial Bias Scrutiny

Twitter has finally handed cropping control to its users, in the wake of a controversy over its image-cropping algorithm. Users pointed out that it appeared to promote racial bias, after which the company decided to stop relying on automatic, largely machine learning-based cropping. If the whole situation strikes you as remarkable, we are in the same boat: a tech company is finally acknowledging that certain decisions need a human touch, and that AI cannot solve every problem out there. Removing human agency entirely would be unwise.

Twitter Talks About Racial Bias In Its New Algorithm

According to a report last month, Twitter's image-cropping algorithm drew critical attention after a PhD student found that, in preview crops, the algorithm consistently showed his white male colleague's face, while the Black faculty member standing next to him kept getting cropped out. Fittingly, he had previously been discussing a similar racial bias in Zoom's virtual backgrounds.

Twitter addressed the criticism promptly, saying that it had tested the algorithm for bias before shipping it and found nothing that warranted action. But it also made clear that, given these clear counterexamples, a far more comprehensive analysis of the algorithm was needed. Twitter promised to share what it learnt, and the actions it would take, in the coming months.

The Follow-Up Tests By Twitter

Following that up, the company released further details of the testing process it had put the algorithm through. In a blog post, it discussed whether moving away from the algorithm would remove the racial bias seen in the cropping preview. It also conceded that it had made a mistake by not publishing the test details before launching the tool, something that might have nipped the entire debacle in the bud.

Twitter finally explained how the model that had drawn such vociferous criticism actually functioned. The system was based on saliency: a prediction of where in an image a person would look first, which then guided the crop. For each test, two demographic groups were compared in pairs: White-Black, White-Asian, Asian-Black, and so on. The test reports stated that while no systematic racial or gender bias was found, the company agreed that automatically cropping a photo carries a potential for harm in itself.
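To make the idea concrete, a saliency-based cropper scores every candidate crop window by its predicted attention and keeps the highest-scoring one. Here is a minimal, hypothetical sketch of that mechanism in Python; the saliency map below is a toy array, and Twitter's actual model (a trained neural network) is not public:

```python
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return the top-left corner of the crop window with the highest
    summed saliency. Illustrative only; not Twitter's real algorithm."""
    H, W = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    # Exhaustively slide the crop window over the image.
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy saliency map: one bright "face" region in the bottom-right corner.
sal = np.zeros((6, 6))
sal[4:6, 4:6] = 1.0
print(best_crop(sal, 2, 2))  # -> (4, 4): the crop centres on the salient region
```

The bias concern follows directly from this design: if the underlying saliency model systematically scores one demographic group's faces higher, the winning crop window will systematically favour that group.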

The company later stated that it should have anticipated these issues when first designing and building the product. For now, it is working on additional analysis intended to bring more rigor to its overall testing.

While we can agree that an overreliance on technology and AI can be counterproductive, there is no denying that a small tweak can improve the whole function. Still, it goes without saying that removing human agents entirely might not be the way to go.