A US-centric primer on copyright law and liabilities, meant to take some of the anxiety off Mastodon hosts.
A mostly US-specific guide to potential liability pitfalls for people who are running a Mastodon instance, and how to mitigate them.
A general introduction to the things one should do when creating a Fediverse instance - from legal entities and codes of conduct to technology.
A super interesting and useful concept, especially in a federated environment.
This applies to evaluating how moderation works on other instances, to trust in the core software, and so on.
One of the side effects of Federation is that sharing a link widely can unintentionally DDoS the target, as every Mastodon server tries to grab a preview of that link at once.
It's something that needs to get resolved, hopefully before we break too much.
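As a concrete illustration of one mitigation that comes up in this discussion (a minimal sketch, not Mastodon's actual behaviour; `fetch_link_preview`, the jitter window and the user agent are all hypothetical), each receiving server could wait a random delay before fetching the preview, so the requests get spread over a window instead of all arriving at the same moment:

```python
import random
import time
import urllib.request

def fetch_link_preview(url: str, max_jitter_seconds: float = 60.0) -> bytes:
    """Hypothetical preview fetcher: spread requests out with random jitter so
    that thousands of federated servers don't all hit the target at once."""
    time.sleep(random.uniform(0, max_jitter_seconds))  # thundering-herd mitigation
    request = urllib.request.Request(url, headers={"User-Agent": "ExampleFediServer/0.1"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read()  # raw HTML; a real server would parse the preview metadata here
```

Sharing a cached preview between servers, or having the originating server attach preview metadata to the post itself, are other directions that come up in the same discussion.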
Also known as the "Digital Services Act":
> The Digital Services Act and Digital Markets Act aim to create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.
This is a fundamental (and quite dry) legal document that regulates digital services in the EU.
A more legible explanation and summary can be found at https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
A conversation between German lawyers on the various legal aspects of running or participating in a Mastodon instance.
This is a comprehensive overview of what kind of moderation actions exist in the Mastodon branch of the Fediverse, and how they work.
> A guide on data protection obligations, challenges & pitfalls for Mastodon Users & Instance owners / Admins.
myrmepropagandist posted a range of Sample Moderation Problems. These are thorny issues that most often cannot be solved algorithmically and, on top of that, don't lend themselves to simple rulesets.
Very insightful, and helpful for calibrating one's baseline.
> Another way I like to think about this feature is as Consent Widgets, where instead of trying to trigger people on their timelines and getting rewarded for that, you erect some friction akin to a subject line, creating a layer of inferred consent in the timeline experience. A Consent Widget asks, would you consent to opening up my post about an intense topic?
(via https://mastodon.social/@jayrosen_nyu/109412527023092584)
This is at the heart of a lot of the current friction: How do we grok the concept of Content Warnings / Content Notes, and as a result, how will they be used? Should the onus always be on the poster, or often also on the recipient? What kind of personalised filters can and should we build, and who is responsible for maintaining them?
"first step: stop doing band-aids"
Source: https://chaos.social/@rriemann/109384055798565711
Collecting some articles on legal opinions. This is the first.
Sharing this because I think it is a useful metaphor to talk about reach and privacy: To design a social space, we have to reflect on what kind of space we want to create.
I myself like to explain how we think certain spaces work with a series of analogies I call “Bedroom to Broadcast”.
A model for governance
Justin Hendrix and Dr. Johnathan Flowers talk about what makes Black Twitter unique, how they and other communities use Twitter, and what challenges they face within the Fediverse.
from the Darcy archives:
## What works:
- Deplatforming problematic speech (hate, bullying…)
- It is proving to be an effective deterrent against hate speech, according to a [study](http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf) that questioned whether banning specific hateful subreddits would successfully diminish hateful speech or just relocate it (Chandrasekharan et al., 2017).
- While the above-mentioned study does have technical limitations, various news reports come to a similar conclusion, including an in-depth [New York Times report on Alex Jones’s Infowars](https://www.nytimes.com/2018/09/04/technology/alex-jones-infowars-bans-traffic.html).
- Context sensitivity in understanding content, according to a [survey](https://datasociety.net/wp-content/uploads/2018/11/DS_Content_or_Context_Moderation.pdf) of 10 major platforms (Caplan, 2018).
- Automated flagging (for human review)
- Caplan (2018) notes that larger companies, practicing what she calls industrial moderation, use automated detection to flag spam, child pornography, or pro-terrorism content. This then usually goes to human review (see the sketch after this list).
- This is consistent with YouTube’s [transparency reports](https://transparencyreport.google.com/youtube-policy/removals), which highlight the efficiency of automated flagging (built on pattern recognition) as a primary source of detection.
- Individual trusted flaggers
- Individual trusted flaggers are individuals who have access to “priority flags”. They are often professionals in a field relevant to the content they flag (anti-terrorism, child safety, anti-racism…). According to a YouTube Transparency report from October 2018, these individuals represent only 6.2% of overall flaggers but are responsible for more than 3 times as much accurate flagging.
- Taking more time to review each report, which is only possible with low volumes (Caplan, 2018).
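To make the flag-then-review pattern above concrete, here is a minimal sketch in Python. The classifier score, the 0.8 threshold, and the priority queue are hypothetical choices for illustration, not any real platform's pipeline; the only point being demonstrated is that automated detection queues content for humans rather than removing it on its own.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class FlaggedPost:
    priority: float                      # lower value = reviewed sooner
    post_id: str = field(compare=False)
    reason: str = field(compare=False)

review_queue = PriorityQueue()  # consumed by human moderators, in priority order

def automated_flag(post_id: str, spam_score: float, trusted_flagger: bool = False) -> None:
    """Hypothetical pipeline step: automated detection only *flags* content;
    the removal decision stays with a human pulling items from the queue."""
    if spam_score < 0.8 and not trusted_flagger:
        return  # below threshold and no priority flag: nothing to review
    # Trusted flaggers (cf. the "priority flags" mentioned above) jump the queue.
    priority = 0.0 if trusted_flagger else 1.0 - spam_score
    review_queue.put(FlaggedPost(priority, post_id, reason="possible policy violation"))
```

A moderation tool would then pull `FlaggedPost` items off `review_queue` in priority order, which is one way to read the "priority flags" idea from the trusted-flagger item above.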
## What doesn’t work:
- Over-reliance on the community / social norms
- Allowing users to reach out to moderators directly, which causes retaliation and harassment, or relying on volunteers left to fend for themselves (e.g. [Reddit](https://www.engadget.com/2018/08/31/reddit-moderators-speak-out/)).
- Unsupervised AI
- Image-recognition classifiers did not work for [Tumblr trying to take down porn](https://edition.cnn.com/2019/01/02/tech/ai-porn-moderation/index.html)
- Nor does it work for speech moderation (Young, Swamy & Danks, 2017).
- Also see [this](https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms) article (Vincent, 2019).
- Adapting different community standards for various cultural contexts.
- [Research](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8554954) on hate speech indicates that while interpretation does vary by country, there is also significant variation from one individual to another (Salminen et al., 2018).
- In addition, as many people live transnational lives and online communities are increasingly the product of intersecting offline contexts, making offline and online contexts correspond is not feasible beyond tracking users’ geographic distribution.
https://octodon.social/@natematias outlines the complexity of moderation work as mods navigate the competing forces of users, other mods, and the platform itself. He notes that moderation isn’t just about mod actions (like removing comments or banning users) but also involves mods negotiating their role in a system that relies on their labour. (via https://hci.social/@sarahgilbert/109389687180865822)