
INCITING SELF-HARM:
LIBERALISM, SELF-RESPECT, AND THE LIMITS OF FREE EXPRESSION
The internet is suffused with content inciting people to violently harm themselves — to take their own lives, to adopt eating disorders, or to injure their bodies in countless other ways. Strikingly, speech encouraging violent self-harm is largely legal in many societies, even as platforms enact policies restricting it. This state of affairs raises a host of surprisingly undertheorized normative questions. Is there a moral duty to refrain from speech encouraging self-harm, and if so, what is its justifying ground? Is such speech, or some subset of it, nevertheless protected under the moral right to freedom of expression, such that laws restricting this speech are unjustified? Do platforms have a moral duty to limit content that encourages self-harm? If platforms have such a duty, what does it require, and ought it to be legally enforced? The aim of this paper is to think through these questions.
I take a hawkish line, defending a sweeping moral duty to refrain from speech encouraging violent self-harm. While there is plainly such a duty involving speech directed at children, I argue it encompasses speech even when directed at adults. Such speech is incompatible with the appropriate reverence that citizens must have for each other as self-respecting moral agents, capable of forming a conception of a good and meaningful life and pursuing it confidently into the future. Such agency is necessarily embodied; encouraging people wantonly to damage or destroy their bodies is thus presumptively incompatible with the duty to support people’s ongoing life projects as self-respecting agents. Speech can breach this duty, I argue, even when it is not ex ante likely to inspire imminent harm, even when it isn’t targeted at any particular individual, and even when it isn’t successful in achieving its intended goal.
Moreover, this duty forms one principled limit on the moral right to freedom of expression; viewpoint-discriminatory prohibitions of speech encouraging self-harm can, I argue, be justified. Intrinsic free speech concerns (such as respecting listener autonomy or enabling democratic governance) are not, I contend, sufficiently engaged by this sort of speech. And instrumental free speech concerns (such as preventing government abuse) are not tightly implicated in the regulation of this speech, either. Criminal restrictions on such speech, then, are not generally illiberal. I say “generally” because there is an important exception, concerning cases in which those who legally seek assisted dying (e.g., end-of-life patients) solicit others’ views on what they ought to do. There will be cases in which speech defending the prudence of death in such circumstances is morally protected, and so ideally would be legally protected, too. But drawing such lines, I will show, turns out to be easier said than done.
Finally, I argue that platforms have a duty to take action against this content. Search engines should be designed to exclude it from search results, and social media networks should enforce content moderation rules that prohibit such content and sanction those who post it. While refraining from amplifying this content is better than amplifying it, for speech explicitly encouraging violent self-harm this is not sufficient; removal of such content is necessary. Some content, however, will constitute an encouragement to self-harm only when aggregated and amplified; for example, images promoting unhealthy body ideals plausibly harm teenage girls only when flooding their feeds. In such cases, demoting such content to reduce its visibility is sufficient to defuse the harm.
For more information on this research, contact Jeffrey Howard (jeffrey.howard@ucl.ac.uk).