Christian Buggedei
  • 16 Posts
Joined 4M ago
Cake day: Nov 24, 2022

EFF: User Generated Content and the Fediverse: A Legal Primer
A US-centric primer on copyright law and liability, to take some of the anxiety off Mastodon hosts.

A guide to potential liability pitfalls for people running a Mastodon instance
A mostly US-specific guide to potential liability pitfalls for people who are running a Mastodon instance, and how to mitigate them.

How To Make The Fediverse Your Own
A general introduction to the things one should do when creating a Fediverse instance: legal entities, codes of conduct, and technology.

Introducing the concept of the "Trust Thermocline"
A super interesting and useful concept, especially in a federated environment. This applies to evaluating how the moderation works in other instances, trust in the core software, and so on.

The MastoDDoS Effect
One of the side effects of federation is that widely sharing a link unintentionally DDoSes the target, because every Mastodon server tries to fetch a preview of that link at once. It's something that needs to be resolved, hopefully before we break too much.
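The effect is easy to see in numbers. Below is a minimal, illustrative Python sketch (not Mastodon's actual code; the function names and figures are made up) of one commonly discussed mitigation: each server waits a random jitter before fetching the preview, so a synchronized burst becomes a trickle spread over a window.

```python
import random
from collections import Counter

def schedule_preview_fetches(num_servers: int, max_jitter: float, seed: int = 0) -> list[float]:
    """Give each server a random delay (in seconds) before it fetches
    the link preview, instead of every server fetching immediately."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, max_jitter) for _ in range(num_servers)]

def peak_load(delays: list[float], bucket: float = 1.0) -> int:
    """Largest number of fetches landing in any single bucket-second window."""
    counts = Counter(int(d // bucket) for d in delays)
    return max(counts.values())

# 5000 servers fetching at once vs. the same fetches spread over 60 seconds:
burst = schedule_preview_fetches(5000, max_jitter=0.0)
spread = schedule_preview_fetches(5000, max_jitter=60.0)
assert peak_load(burst) == 5000   # thundering herd: every request lands at t=0
assert peak_load(spread) < 200    # same total traffic, spread out over the window
```

Jitter only spreads the load in time; shared caches or the origin serving its own preview metadata would reduce the total number of fetches, which is the part that still needs a protocol-level answer.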

EU Regulation on a Single Market For Digital Services and amending Directive
Also known as the "Digital Services Act":

> The Digital Services Act and Digital Markets Act aim to create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.

This is a foundational (and quite dry) legal document that regulates digital services in the EU. A more legible explanation and summary can be found at

As an addendum: here is the imprint and policy of the Mastodon instance of one of the participants in that podcast: (again, in German)

A German podcast episode about the regulatory needs of German Mastodon instances
A conversation between German lawyers on the various legal aspects of running or participating in a Mastodon instance

Overview of Mastodon moderation actions
This is a comprehensive overview of what kind of moderation actions exist in the Mastodon branch of the Fediverse, and how they work.

Mastodon Privacy Guide V.1.0
> A guide on data protection obligations, challenges & pitfalls for Mastodon Users & Instance owners / Admins.

Sample Moderation Problems
myrmepropagandist posted a range of Sample Moderation Problems. These are thorny issues that most often cannot be solved algorithmically and, on top of that, don't lend themselves to simple rulesets. Very insightful, and helpful for calibrating one's baseline.

how to grok Content Warnings
> Another way I like to think about this feature is as Consent Widgets, where instead of trying to trigger people on their timelines and getting rewarded for that, you erect some friction akin to a subject line, creating a layer of inferred consent in the timeline experience. A Consent Widget asks, would you consent to opening up my post about an intense topic?

(via

This is at the heart of a lot of the current friction: how do we grok the concept of Content Warnings / Content Notes, and as a result, how will they be used? Should the onus always be on the poster, or often also on the recipient? What kind of personalised filters can and should we build, and who is responsible for maintaining them?

I think this very much highlights that moderation is not just a set of rules, but a set of values. And on top of that, these values need to be interpreted, which necessitates a common understanding of how the world works.

Exactly. Also, if you click on the headline, the full explanation is linked.

The Whiteness of Mastodon
Justin Hendrix and Dr. Johnathan Flowers talk about what makes Black Twitter unique, how they and other communities use Twitter, and what challenges they face within the Fediverse.

What works and doesn't work in moderation
from the Darcy archives:

## What works:

- Deplatforming problematic speech (hate, bullying…)
  - It is proving to be an efficient deterrent against hate speech, according to a [study]( that questioned whether banning specific hateful subreddits would successfully diminish hateful speech or just relocate it (Chandrasekharan et al., 2017).
  - While the above-mentioned study does have technical limitations, various news reports come to a similar conclusion, including an in-depth [New York Times report on Alex Jones’s Infowars](.
- Context sensitivity to understand content, according to a [survey]( of 10 major platforms (Caplan, 2018).
- Automated flagging (for human review)
  - Caplan (2018) notes that larger companies, practicing what she calls industrial moderation, use automated detection to flag spam, child pornography, or pro-terrorism content. This then usually goes to human review.
  - This is consistent with YouTube’s [transparency reports](, which highlight the efficiency of automated flagging (built on pattern recognition) as a primary source of detection.
- Individual trusted flaggers
  - Individual trusted flaggers are individuals who have access to “priority flags”. They are often professionals in a field relevant to the content they flag (anti-terrorism, child safety, anti-racism…). According to a YouTube transparency report from October 2018, these individuals represent only 6.2% of overall flaggers but are responsible for more than 3 times as much accurate flagging.
- Taking more time to review each report, which is only possible with low volumes (Caplan, 2018).

## What doesn’t work:

- Over-reliance on the community / social norms
  - Allowing users to directly reach out to moderators, as it causes retaliation and harassment, or relying on volunteers left to fend for themselves (ex. [Reddit](
- Unsupervised AI
  - Image-recognition classifiers did not work for [Tumblr trying to take down porn](.
  - Nor is it working for speech moderation (Young, Swamy & Danks, 2017).
  - Also see [this]( article (Vincent, 2019).
- Adapting different community standards for various cultural contexts
  - [Research]( (on hate speech) indicates that while interpretation does vary by country, there is also significant difference from one individual to another (Salminen et al., 2018).
  - In addition, as many people live transnational lives and online communities are increasingly the product of intersecting offline contexts, making offline and online contexts correspond is not feasible beyond tracking users’ geographic distribution.

The Bedroom to Broadcast Scale of communication spaces
Sharing this because I think it is a useful metaphor to talk about reach and privacy: To design a social space, we have to reflect on what kind of space we want to create. I myself like to explain how we think certain spaces work with a series of analogies I call “Bedroom to Broadcast”.

The civic labor of volunteer moderators online outlines the complexity of moderation work as mods navigate competing forces of users, other mods, and the platform itself. He notes that moderation isn’t just about mod actions (like removing comments or banning users) but also involves negotiating their role in a system that relies on their labour. (via

Metaphors in moderation
in which Joseph Seering, Geoff Kaufman, and others interviewed community mods across platforms and compiled the metaphors they used to describe their work. My favorite is moderation as gardening (see my header image!), in which moderation is a form of caregiving that allows communities to grow. (via

Welcome to the Darcy Social Lemmy instance. We're still in the process of setting everything up properly: things like email notifications, the right communities, admin and moderation duties, policies, and a code of conduct. Also, this is a private instance for now, but we will federate eventually.