The pervasive growth of algorithmic enforcement magnifies current debates regarding the virtues of transparency. Using code to conduct robust online enforcement not only amplifies the familiar problem of magnitude, or “too-much-information,” often associated with present-day disclosures, but also creates practical difficulties in relying on transparency as an adequate check on algorithmic enforcement. Algorithms are non-transparent by nature: their decision-making criteria are concealed behind a veil of code that we cannot easily read and comprehend. Additionally, these algorithms are dynamic, evolving in response to different data patterns, which makes them unpredictable. Moreover, algorithms that enforce online activity are mostly implemented by private, profit-maximizing entities operating under minimal transparency obligations. As a result, generating proper accountability through traditional, passive observation of publicly available disclosures becomes impossible. Alternative means must therefore be devised to allow the public a meaningful and active interaction with the hidden algorithms that regulate its behavior.

This Essay explores the virtues of “black box” tinkering as a means of generating accountability in algorithmic systems of online enforcement. Given the far-reaching implications of algorithmic enforcement of online content for public discourse and fundamental rights, this Essay advocates active public engagement in checking the practices of automatic enforcement systems. Using the test case of algorithmic online enforcement of copyright law, it demonstrates the inadequacy of transparency in generating public oversight. It further establishes the benefits of black box tinkering as a proactive methodology that encourages social activism. Finally, it evaluates the possible legal implications of this methodology and proposes means to address them.