FORMAL NEUTRALITY AND UNEQUAL LIABILITY: HOW ALGORITHMIC AVERSION DISTORTS LIABILITY FOR ALGORITHMIC TORTS

  • Jingkang Gao

Abstract:

The shift in the cause of machine-induced harm from mechanical failures to algorithmic decision-making is challenging the applicability of products liability. Because algorithms now operate machines analogously to humans, a doctrinally coherent response is to subject algorithmic torts to a negligence framework that evaluates the reasonableness of decisions rather than the content of algorithms. This approach offers a theoretically grounded, formally neutral, and normatively appealing solution. In practice, however, it may result in unequal liability. Even under a negligence regime, algorithmic decision-makers may face systematically greater liability if injured parties are more inclined to pursue litigation against algorithmic tortfeasors than against human tortfeasors due to algorithmic aversion. A survey experiment using autonomous vehicles as a representative algorithmic tortfeasor shows that victims are significantly more likely to sue algorithmic actors than they are to sue human actors when both are subject to negligence. These findings suggest that doctrinal coherence and neutrality in theory do not necessarily translate into equality in practice. Negligence, while appealing as a formal solution, may impose higher expected liability and ownership costs on algorithm-operated machines than on their human-operated counterparts, potentially hindering the adoption of socially beneficial technologies. The results highlight the importance of considering behavioral responses in the design and evaluation of tort regimes for algorithmic decision-makers.
