This paper provides an empirical hypothesis test and partial verification of the “algorithmic folk theory” of “reaching the wrong audience.” This user folk theory holds that content posted to TikTok can sometimes become sequestered to a hostile audience, resulting in a sharp influx of harassment and oppositional comments. To test the theory, this paper employs a graphical analysis of trends in interaction rates, comment volume over time, and comment sentiment. The dataset consists of 1,455 posts and 454,540 comments, which were evaluated with a natural language processing (NLP) sentiment analysis tool for a total of 297,009,882 effective observations of viewer sentiment response. Using these data, this paper applies a time-series analysis to identify a “resurrecting” post behavior, characterized by a sudden surge in engagement with an otherwise “dead” TikTok post as long as ten months after the content’s initial post date. The findings further show that this resurgent behavior commonly occurs when the sudden influx of engagement carries distinctly positive or distinctly negative comment sentiment. These findings suggest the existence of “audience sentiment sequestering,” defined as the algorithmic restriction of viewership to a specific audience type and posited as a core mechanism of the “reaching the wrong audience” folk theory. Lastly, this paper proposes a new theoretical algorithmic phenomenon, anti-preference theory, to explain why automated algorithmic decision-making may cause a user’s content to “reach the wrong audience” and remain stuck there. The theory holds that TikTok’s recommendation algorithm is indifferent to whether a viewer’s comment is positive or negative but remains sensitive to that viewer’s propensity to comment at all. Together, these traits can cause the recommendation algorithm to “misinterpret” a viewer’s negative comment as a successful recommendation for that viewer. This misinterpretation can create a feedback loop in which the algorithm recommends the content to other hostile viewers with similar “anti-preferences,” who in turn share the same high propensity to leave hostile comments of their own, restarting the loop. From the content creator’s side, the anti-preference phenomenon appears as a sudden, seemingly systematic increase in hostile comments, matching the experiences described in the “reaching the wrong audience” folk theory.
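
For concreteness, the “resurrection” signal described above can be sketched as a simple spike detector over a post’s daily comment counts, paired with sentiment scoring of the spike-day comments. The sketch below is a minimal illustration, not the paper’s actual pipeline: it assumes Python, VADER as the sentiment tool (the paper names only “an NLP sentiment analysis tool”), and illustrative thresholds (a 30-day quiet window, a 10x spike factor, and a 20-comment floor).

```python
from collections import Counter
from datetime import timedelta
from statistics import mean

# Assumed sentiment tool; requires: nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

_sia = SentimentIntensityAnalyzer()


def detect_resurrection(comments, quiet_days=30, spike_factor=10.0, min_spike=20):
    """comments: list of (datetime.date, comment_text) pairs for one post.

    Flags the first day whose comment count exceeds both `min_spike` and
    `spike_factor` times the mean daily volume over the preceding
    `quiet_days` window (missing days count as zero), then reports the
    mean VADER compound sentiment of that day's comments (by VADER
    convention, roughly > 0.05 is positive and < -0.05 is negative).
    Returns (spike_day, mean_sentiment) or None if no spike is found.
    """
    daily = Counter(day for day, _ in comments)
    first_day = min(daily)
    for day in sorted(daily):
        if (day - first_day).days < quiet_days:
            continue  # post too young for a meaningful baseline window
        window_start = day - timedelta(days=quiet_days)
        baseline = sum(c for d, c in daily.items()
                       if window_start <= d < day) / quiet_days
        if daily[day] >= max(spike_factor * baseline, min_spike):
            day_texts = [t for d, t in comments if d == day]
            scores = [_sia.polarity_scores(t)["compound"] for t in day_texts]
            return day, mean(scores)
    return None
```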
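The anti-preference feedback loop can likewise be illustrated with a toy simulation in which a sentiment-blind recommender reallocates the next round’s impressions in proportion to the previous round’s comment counts. The audience segments, comment propensities, and allocation rule below are assumptions made for illustration, not the paper’s model of TikTok’s recommender.

```python
import random


def simulate_anti_preference(rounds=50, impressions=10_000, seed=0):
    """Toy model of the hypothesized loop: the 'recommender' treats any
    comment as a successful recommendation, ignoring its sentiment.
    Because hostile viewers are assumed to comment far more often, they
    progressively capture the post's audience; once the friendly
    segment's comments hit zero, its share is absorbed at zero and the
    content stays sequestered with the hostile audience."""
    rng = random.Random(seed)
    comment_rate = {"friendly": 0.02, "hostile": 0.15}  # assumed propensities
    share = {"friendly": 0.5, "hostile": 0.5}           # initial impression split
    for _ in range(rounds):
        comments = {
            seg: sum(rng.random() < comment_rate[seg]
                     for _ in range(int(share[seg] * impressions)))
            for seg in share
        }
        total = sum(comments.values()) or 1
        share = {seg: comments[seg] / total for seg in share}
    return share  # hostile share approaches 1.0 under these assumptions
```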