For the second-to-last class in our semester-long investigation of race, gender, sexuality, class, nationality, and other categories in the history and historiography of technology, we’ll be looking at articles chosen by the class. Please post a link to your article in a comment on this post, along with a few lines about why you chose it, how it relates to other readings we’ve done in class, and a possible discussion question. Your comment won’t show up until I approve it, so don’t worry if you don’t see it immediately.
I was interested in how Noble’s book may have generated a response from Google or an attempt to remove bias from search results. Unfortunately, I could not find a direct response from Google but did find a few interesting parallels.
One study from phys.org looked at the links between websites as an indicator of the inequity in the actual structure of the internet. https://phys.org/news/2017-06-racism-internet.html
I also found a link to an older book, “The Filter Bubble,” which examined the Facebook news feed and the idea that you start seeing results that match your political leanings – an algorithm filters out opposing views to create an echo chamber. As this remains a hot topic, it seems the problem hasn’t been solved and might be a good point of discussion.
TED talk: https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles?utm_source=facebook.com&utm_medium=social&utm_campaign=tedspread
Proposed solution: https://ideas.ted.com/eli-pariser-on-upworthy-and-the-filter-bubble/
Study in Response by Facebook: https://medium.com/backchannel/facebook-published-a-big-new-study-on-the-filter-bubble-here-s-what-it-says-ef31a292da95
As a follow-up, I saw an article from the BBC today that cites a number of studies countering the idea that the filter bubble has any real impact. However, it concludes with two articles about the deliberate manipulation of social media.
I’ve included a link here to a short piece in Medium dealing with algorithms and inequality. It is about Eubanks’ work on the ‘digital poorhouse’ which I thought would help us think through digital redlining a bit more.
I wanted to further explore some of the ideas of surveillance that were brought up in Dark Matters by looking at face recognition technology. One of the major features the iPhone X promotes is its face recognition software, but I was wondering how race and gender are represented in the algorithms.
Discussion Questions: Are people with certain identities harder for the software to recognize, and how does this perpetuate inequalities?
Does face recognition software reduce gender and racial bias?
Does face recognition software violate personal privacy?
Content warning: Racial slurs + other demeaning language, violent images
This article from Slate, entitled “The Internet of Hate,” discusses various websites where the alt-right has congregated as they have been losing web domains due to their hateful speech, causing platforms like Gab to lose access to about “70 to 75 percent of its potential U.S. market.” Obviously, this seems like a positive thing, but the author of the article also points out that “even if you agree with banning Gab, the power of a handful of companies to banish anyone from the internet should give you pause.” I thought this was an interesting point because, as we read in Cottom’s article on digital redlining, losing access to sites like Facebook results in a real material loss for people on the left as well.
This article seems to reveal that our problem lies not only in the hateful speech of the alt-right but also in the immense power of companies like Facebook and Google, which regulate a huge percentage of what happens on the Internet. Regulating and breaking up monopolies is one change that would lessen their power, but what about the strategy this article highlights of creating a different Internet? That doesn’t seem super viable to me, for reasons the article lays out, but I’m intrigued by the possibilities of trying to leave Facebook/Google/Twitter and strike out on one’s own.
I also included the Rolling Stone article about how Twitter’s alt-right purge fell short, because I thought it might lead to a discussion of how much regulation of hate speech we should actually expect from these companies, since at the end of the day they are trying to turn a profit. Should we instead look to government for legislation, as Germany has done, rather than trusting that these companies will look out for the best interests of their users?
I’ve been thinking about technology adoption for my final project, and that got me thinking about Black communities on social media (cf. my rambling about Vine two weeks ago).
Here’s a 2010 article on Black Twitter, which was influential and a little controversial.
I’m especially interested in what’s *missing* from that article. A few things stand out to me, and I wonder if y’all come to the same conclusions.