Ethics, Bias, Trust, …

YouTube’s recommendations are still bad – says Mozilla

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to serve up piles of “bottom-feeding”, low-grade, divisive, and disinforming content: material that grabs eyeballs by triggering people’s sense of outrage, sowing division and polarization, or spreading baseless and harmful disinformation. This in turn implies that YouTube’s problem with recommending terrible content is indeed systemic; […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

Facebook Quietly Suspended Political Group Recommendations Ahead Of The US Presidential Election [BuzzFeed]

A few days before the US election, Facebook deactivated its recommender system for political content, according to BuzzFeed News (Ryan Mac & Craig Silverman). Notably, the move was not announced publicly but disclosed only during a hearing of the Senate Commerce, Science, and Transportation Committee. During a contentious presidential election in the US, Facebook quietly stopped recommending […]
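To illustrate the kind of mechanism described above, here is a minimal, hypothetical sketch of suspending one content category in a recommendation pipeline via an operator-controlled kill switch. All names and data structures are invented for illustration; this is not Facebook's actual implementation.

```python
# Hypothetical sketch: an operator-toggled set of suspended categories,
# applied as a final filter before recommendations are served.

SUSPENDED_CATEGORIES = {"political"}  # e.g. toggled ahead of an election

def filter_recommendations(candidates):
    """Drop candidate groups whose category is currently suspended."""
    return [c for c in candidates if c["category"] not in SUSPENDED_CATEGORIES]

candidates = [
    {"name": "Local Hiking Club", "category": "outdoors"},
    {"name": "Campaign Volunteers 2020", "category": "political"},
    {"name": "Sourdough Bakers", "category": "food"},
]

print([c["name"] for c in filter_recommendations(candidates)])
# → ['Local Hiking Club', 'Sourdough Bakers']
```

The point of such a design is that the suspension is a serving-time filter, so it can be switched on or off quietly without retraining or redeploying the underlying recommender model.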

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

Big tech’s ‘blackbox’ [recommendation] algorithms face regulatory oversight under EU plans [Natasha Lomas @ TechCrunch]

The EU plans to require large tech firms to disclose how their algorithms work, including recommendation algorithms, reports Natasha Lomas from TechCrunch. In a speech today, Commission EVP Margrethe Vestager suggested that algorithmic accountability will be a key plank of the forthcoming digital legislative package, with draft rules incoming that will require platforms to explain how their […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...), Laws and Regulations

Mozilla is Crowdsourcing Research into (Harmful) YouTube Recommendations [Ashley Boyd @ Mozilla]

YouTube recommendations can be delightful, but they can also be dangerous. The platform has a history of recommending harmful content — from pandemic conspiracies to political disinformation — to its users, even if they’ve previously viewed harmless content. Indeed, in October 2019, Mozilla published its own research on the topic, revealing that YouTube has recommended harmful videos, […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...), Example Applications

Google restricts its recommender system for search queries (aka auto-completion) for election-related searches

Pandu Nayak, Google’s VP of Search, announced yesterday that Google would expand its “protection” policies to its search-query recommender system, a.k.a. auto-completion, for election-related searches: “We expanded our Autocomplete policies related to elections, and we will remove predictions that could be interpreted as claims for or against any candidate or political party. We will also remove predictions that could […]
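The policy described above amounts to filtering the prediction list before it is served. The following is a deliberately simplistic sketch of that idea; the keyword lists and substring matching are invented placeholders, not Google's actual classifiers.

```python
# Hypothetical sketch: drop autocomplete predictions that pair an
# election-related subject with a for/against claim. Real systems would
# use trained classifiers, not keyword lists like these.

CANDIDATE_TERMS = {"candidate a", "candidate b"}
CLAIM_TERMS = {"won", "lost", "rigged", "vote for", "don't vote"}

def is_electoral_claim(prediction: str) -> bool:
    """True if the prediction mentions a candidate AND makes a claim."""
    p = prediction.lower()
    return any(c in p for c in CANDIDATE_TERMS) and any(t in p for t in CLAIM_TERMS)

def filter_predictions(predictions):
    """Serve only predictions that pass the election-claim policy."""
    return [p for p in predictions if not is_electoral_claim(p)]

preds = ["candidate a rally schedule", "candidate a won the election", "weather today"]
print(filter_predictions(preds))
# → ['candidate a rally schedule', 'weather today']
```

Note that, as in the announcement, neutral election-related queries (“rally schedule”) still pass; only predictions that read as claims for or against a candidate are removed.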

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

IBM gets out of facial recognition business, calls on Congress to advance policies tackling racial injustice [Lauren Hirsch @ CNBC]

Face recognition is a potentially interesting technique for recommender systems (e.g. 1, 2, 3, 4). However, IBM has now decided to exit the facial recognition business entirely, as CNBC reports: “IBM CEO Arvind Krishna called on Congress Monday to enact reforms to advance racial justice and combat systemic racism while announcing the company was getting out of […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)