Prior research has shown that people judge algorithmic errors more harshly than identical mistakes made by humans, a bias known as algorithm aversion. We explored this phenomenon across two studies (N = 1199), focusing on the often-overlooked role of conventionality in human versus algorithm comparisons by introducing a simple conventionality intervention. Our findings revealed significant algorithm aversion when participants were informed that the decisions described in the experimental scenarios were conventionally made by humans. However, when participants were told that the same decisions were conventionally made by algorithms, the bias was significantly reduced or even completely offset. The intervention had a particularly strong influence on participants’ recommendations about which decision-maker should be used in the future, even revealing a bias against humans who erred when algorithms were framed as the conventional choice. These results suggest that the status quo plays an important role in shaping people’s judgments of mistakes in human–algorithm comparisons.