Ethics, Bias, Trust, …

Amnesty International warns that TikTok’s recommendations push children towards harmful mental health content

In a press release and two reports (Driven into Darkness and “I feel Exposed”), Amnesty International strongly criticizes TikTok’s recommender system. I have not read the reports completely, but each is around 60 pages long and contains serious allegations. In summary, Amnesty International claims that their research has revealed concerning findings about […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

YouTube changes content recommendations to enhance ‘Teen Wellbeing’

James Beser, Director of Product Management for YouTube Kids and Youth, reports that YouTube has updated its recommender system to better cater to the needs of teenagers, with a focus on safety, privacy, and wellbeing. James also reports that YouTube is introducing new partnerships with experts in youth, parenting, and mental health to enhance the experience […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

Algorithms aren’t fair. Robin Burke wants to change that. [Press Release Colorado U]

The University of Colorado Boulder released a press release about the well-known recommender-systems researcher Robin Burke, titled “Algorithms aren’t fair. Robin Burke wants to change that”: Scroll through an app on your phone looking for a song, movie or holiday gift, and an algorithm quietly hums in the background, applying data it’s gathered from you and […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...), Uncategorized

YouTube’s recommendations are still bad – says Mozilla

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to push piles of “bottom-feeding”, low-grade, divisive, and disinforming content: material that grabs eyeballs by triggering people’s sense of outrage, sowing division and polarization, or spreading baseless and harmful disinformation. This in turn implies that YouTube’s problem with recommending terrible content is indeed systemic; […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

Facebook Quietly Suspended Political Group Recommendations Ahead Of The US Presidential Election [BuzzFeed]

A few days before the US election, Facebook deactivated its recommender system for political content, according to BuzzFeed (Ryan Mac & Craig Silverman). Interestingly, this move was not announced publicly but was only revealed during a hearing of the Senate’s Commerce, Science, and Transportation Committee. During a contentious presidential election in the US, Facebook quietly stopped recommending […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

Big tech’s ‘blackbox’ [recommendation] algorithms face regulatory oversight under EU plans [Natasha Lomas @ TechCrunch]

The EU plans to require large tech firms to disclose how their algorithms work, including recommendations algorithms, reports Natasha Lomas from TechCrunch. In a speech today Commission EVP Margrethe Vestager suggested algorithmic accountability will be a key plank of the forthcoming legislative digital package — with draft rules incoming that will require platforms to explain how their […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...), Laws and Regulations

Mozilla is Crowdsourcing Research into (Harmful) YouTube Recommendations [Ashley Boyd @ Mozilla]

YouTube recommendations can be delightful, but they can also be dangerous. The platform has a history of recommending harmful content — from pandemic conspiracies to political disinformation — to its users, even if they’ve previously viewed harmless content. Indeed, in October 2019, Mozilla published its own research on the topic, revealing that YouTube has recommended harmful videos, […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...), Example Applications

Google restricts its recommender-system for search queries (aka auto-completion) for election-related searches

Google (Pandu Nayak, VP Search) announced yesterday that Google would expand its “protection” policies to its search-query recommender system, a.k.a. auto-completion, for election-related searches: We expanded our Autocomplete policies related to elections, and we will remove predictions that could be interpreted as claims for or against any candidate or political party. We will also remove predictions that could […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)

IBM gets out of facial recognition business, calls on Congress to advance policies tackling racial injustice [Lauren Hirsch @ CNBC]

Face recognition is a potentially interesting technique for recommender systems (e.g. 1, 2, 3, 4). However, IBM has now decided to stop all facial recognition business, as CNBC reports: IBM CEO Arvind Krishna called on Congress Monday to enact reforms to advance racial justice and combat systemic racism while announcing the company was getting out of […]

Posted in Ethics (Bias, Equality, Trust, Transparency, ...)