With all this in mind, we can explore the three main approaches to dealing with the Sorites and borderline cases. Each has some weird implications, and none offer any truly satisfying solutions.

The most popular approach to vagueness is a semantic one. Vagueness, the thinking goes, is the result of semantic indecision, and so avoiding the Sorites Paradox is simply a matter of developing a non-standard semantics. The most popular (but not only!) version of this is supervaluationism.

The supervaluationist accepts that in borderline cases certain terms neither definitely apply nor definitely do not apply. But the supervaluationist avoids the Sorites Paradox by appealing to the notion of a precisification. Vague terms, the thought is, are semantically deficient only because they could be made more precise. The term “control” is vague because we could make that term more precise in any number of different ways. We could define, for instance, specific values for when that predicate would apply (e.g. “holding the ball for 0.5 seconds,” “holding the ball for 0.75 seconds”).

The supervaluationist draws on this notion of precisification to define two notions of truth and falsity. Take the vague term “heap” and a clear case: a million grains of sand. The supervaluationist says that the proposition, “A million grains of sand is a heap” is super-true. And that’s because no matter how we might make the predicate “heap” more precise, a million grains is definitely going to count as a heap. On the other hand, the proposition, “One grain of sand is a heap” is super-false. And that’s because no matter how we might make the term “heap” more precise, one grain of sand definitely won’t count as a heap. The driving thought is that vague terms admit multiple precisifications. And the non-borderline cases, in which we can determinately apply a term, are those in which no matter how we might make that term more precise, it will definitely apply or not apply.

How does this avoid the Sorites Paradox? Well, it’s super-true that one strand of hair doesn’t make someone not bald. And it’s super-true that two strands of hair don’t make someone not bald. And so on. But eventually we’ll get to the borderline cases, in which the term “bald” neither definitely applies nor definitely fails to apply. In those cases, it isn’t super-true or super-false whether the term applies.

And that means we avoid the paradox. Because it is only a paradox if we have seemingly true premises that lead to a false conclusion. But the premises involving borderline cases are neither super-true nor super-false, and on every precisification at least one premise comes out false. And so, there is no paradox.
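The supervaluationist machinery can be made concrete with a toy model. In the sketch below (my own illustration, not any standard formalization), each precisification of “bald” is just a sharp cutoff, and the cutoff range is an arbitrary stand-in for the borderline region. A proposition is super-true if it comes out true on every precisification, super-false if false on every one, and borderline otherwise:

```python
# Toy model of supervaluationism for the predicate "bald".
# Each precisification is a sharp cutoff: a person with fewer
# hairs than the cutoff counts as bald. The cutoff range here
# (100,000 to 120,000 hairs) is an arbitrary illustration.
PRECISIFICATIONS = range(100_000, 120_001)

def bald_on(cutoff, hairs):
    """Is a person with this many hairs bald, on this precisification?"""
    return hairs < cutoff

def super_value(hairs):
    """Classify the proposition 'a person with `hairs` hairs is bald'
    as super-true, super-false, or borderline."""
    verdicts = {bald_on(c, hairs) for c in PRECISIFICATIONS}
    if verdicts == {True}:
        return "super-true"
    if verdicts == {False}:
        return "super-false"
    return "borderline"  # true on some precisifications, false on others

print(super_value(0))        # super-true: bald on every precisification
print(super_value(150_000))  # super-false: not bald on any
print(super_value(110_000))  # borderline: the verdict depends on the cutoff
```

The borderline case is exactly the one where different precisifications disagree, which is why no super-truth-value gets assigned there.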

This is all a bit esoteric and technical, and probably makes your brain hurt, but it’s about to get weird, too. This is because the supervaluationist has a problem with bivalence, an intuitive assumption about truth that claims every proposition is either true or false (or at least every proposition of the declarative sort that we’re concerned with here). The supervaluationist has to deny bivalence. This is because the supervaluationist accepts that some propositions, namely those involving borderline cases, are neither super-true nor super-false.

Here’s why that’s weird: the supervaluationist still accepts the Law of Excluded Middle. This is the claim, going back to Aristotle, that for any proposition, either it or its negation is true. But if we accept the Law of Excluded Middle and deny bivalence, then we are left in a truly knotty situation.

Turn your mind back to Lambeau Field, January 2015, during the review of Bryant’s “catch.” Here’s a claim: “Dez Bryant controlled the ball or Dez Bryant did not control the ball.” The supervaluationist says that claim is super-true. No matter how we make the term “control” more precise, that proposition comes out as true. It’s just an instance of the Law of Excluded Middle.

But that above proposition isn’t super-true because “Dez Bryant controlled the ball” is super-true. The proposition “Dez Bryant controlled the ball” is neither super-true nor super-false. It’s a borderline case of control, after all. And that above proposition isn’t super-true because “Dez Bryant did not control the ball” is super-true. The proposition “Dez Bryant did not control the ball” is neither super-true nor super-false.

So where does that leave us? It’s super-true that Dez Bryant either did or did not control the ball. But it isn’t true or false whether that particular instance was an instance of control or not. For the supervaluationist, in other words, it’s true that Dez Bryant either did or didn’t catch the ball. But it isn’t true or false whether that particular play was a catch.
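That knotty combination can be checked in the same toy spirit. In the sketch below (again my own illustration, with an arbitrary cutoff range standing in for the borderline region), the disjunction “bald or not bald” comes out true on every precisification, even though neither disjunct does:

```python
# Toy check that the Law of Excluded Middle survives while
# bivalence fails, on a supervaluationist semantics.
# Precisifications of "bald" are sharp cutoffs; the range is an
# arbitrary illustration.
PRECISIFICATIONS = range(100_000, 120_001)

def is_super_true(proposition):
    """A proposition (a function from cutoff to bool) is super-true
    if it comes out true on every precisification."""
    return all(proposition(c) for c in PRECISIFICATIONS)

HAIRS = 110_000  # a borderline case: inside the cutoff range

bald = lambda cutoff: HAIRS < cutoff
not_bald = lambda cutoff: not bald(cutoff)
bald_or_not = lambda cutoff: bald(cutoff) or not_bald(cutoff)

print(is_super_true(bald))         # False: fails on some cutoffs
print(is_super_true(not_bald))     # False: fails on some cutoffs
print(is_super_true(bald_or_not))  # True: an instance of Excluded Middle
```

The disjunction is true on every precisification because each precisification is classically bivalent on its own; it’s only across precisifications that the disjuncts lose their truth-values.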

That’s an aggravating result! It’s rooted in the supervaluationist commitment to propositions about borderline cases being neither definitely true nor definitely false. But this also shows why there is no satisfactory use of video review on this approach. Because if vagueness is semantic indecision, then in the case of Dez Bryant’s catch, there is no determinate application of the term “control.” There is no determinate answer to whether we can say Bryant controlled the football or not, and so video review is irrelevant. Taking a closer look doesn’t matter. Since it’s a borderline case, it isn’t super-true or super-false whether that was a catch, because on some precisifications it comes out as a catch, and on others it doesn’t.

A second approach to vagueness is an epistemic one, which proposes that vagueness is a result of ignorance rather than any semantic deficiency. The epistemicist thinks that there is some sharp boundary between, for instance, being bald and not bald. There is one strand of hair that makes all the difference, but the moment at which that sharp cut-off occurs is simply unknowable.

And here’s where the epistemic solution bears some resemblance to the semantic one. Because the standard epistemic view takes this ignorance to be rooted in language. One thought is that the meaning of words supervenes on their use. And so, any sharp boundaries for a term would be a function of our dispositions and patterns regarding the use of that term. But we can’t know the totality of those dispositions and patterns. So, we can’t figure out the sharp boundary they determine, and any term’s sharp boundaries remain unknowable.

All this provides an easy solution to the Sorites Paradox. Because the epistemicist just accepts that one hair, or grain of sand, or whatever, makes all the difference. So, the argument has a false premise at some point. Because at some unknowable point one strand of hair makes someone not bald. And notice that, unlike the supervaluationist, they can keep bivalence. And so, every proposition involving a vague term is still true or false.

But this is exactly why epistemicism is so weird. Because the epistemicist thinks, for instance, that there is some exact moment when a player has adequate control of a football. There is some sharp boundary between control and lack of control. So, in the case of Dez Bryant, either he passed that sharp cut-off or he didn’t.

That in itself is pretty wild, but what’s wilder is that we can’t know when that cut-off happens or doesn’t happen. The dividing line between control and lack of control is literally unknowable. There is some precise set of circumstances that completes the process of controlling a football, but there is no way to know what those circumstances are.

So it should be obvious why there is no satisfying use of video review on this approach. Because in a borderline case like Dez Bryant’s, it is unknowable whether that set of circumstances was an instance of controlling the ball. There is no point in going frame by frame to take a closer look, because while there is some fact of the matter as to whether Bryant controlled the football, we can’t know what it is. If vagueness is ignorance, then there is no satisfying use of video review.

But wait, things can get even stranger. Those who defend semantic and epistemic views locate vagueness in us. For the supervaluationist, vagueness is a feature of language. For the epistemicist, vagueness results from our ignorance.

But a third approach to vagueness ventures into much headier territory. This approach denies that vagueness is a result of how we represent the world, and argues that it comes from the world itself. This is known as ontic or metaphysical vagueness. The Sorites Paradox, then, isn’t so much a problem to be solved as an illustration of the world’s indeterminacy.

Here it’s worth emphasizing the weirdness of this approach up front. Because it seems reasonable to think that the world is fully determinate. While it might be unclear whether a certain term should apply to some state of affairs, that isn’t because of the state of affairs itself. When Dez Bryant made that acrobatic play in 2015, it was unclear whether we could apply certain terms to that state of affairs. Terms like “catch” or “control.” But what happened was fully determinate. There was nothing vague about the world or action itself. Or at least, that’s the intuitive thought.

The defender of ontic vagueness denies all this. They deny that the world itself is fully determinate. Instead, the world itself is vague. It can be vague whether someone is bald, for instance, because the world itself can be indeterminate with respect to baldness. And if Dez Bryant’s play was a genuine borderline case of control, then the indeterminacy with respect to whether it was a catch comes from the world itself. That’s pretty wild!

You might think that the very notion of ontic vagueness is unintelligible, but it’s not. Here’s a good way to get a handle on it: Take the notion of a precisification. The defender of ontic vagueness will claim that for some terms, even if we made them maximally precise, it would still be indeterminate whether that term applies.

Say for instance (borrowing an example from the philosopher Elizabeth Barnes) that the predicate “bald” is made maximally precise. So, someone is bald if and only if they have “less than 1,000 hairs.” How can it still be vague whether “bald” applies if the term is maximally precise? Well, say someone has exactly 1,000 hairs on his scalp, but one hair is super loose and about to fall out. In that case, it’s indeterminate how many hairs this person has: 1,000 or 999. So, it’s indeterminate whether the term “bald” applies, even though we’ve made the term maximally precise. And we know all about that precisification! Thus, it isn’t semantic indecision or ignorance that’s causing trouble. The vagueness is a matter of the world itself being vague.

The implications for video review should be obvious here as well. Because if vagueness is metaphysical, then some state of affairs under review might be fully indeterminate. There is no fact of the matter as to whether someone controlled the ball or not. And so looking frame by frame won’t help. There is nothing there for a referee to discover.

Let the current haze you find yourself in after reading all of this act as evidence that there can be no satisfactory use of video review in sports, so long as the rules it’s tasked with adjudicating are vague. Either there is no fact of the matter as to which call is the right one, or that fact of the matter is unknowable. Furthermore, the motivation for widespread use of video review is to get calls right. But if the rules are vague, then this motivation is undermined on each approach, too.

This doesn’t mean we should get rid of video review entirely, because some rules don’t involve vagueness at all. Line calls in tennis are a good example. Hawkeye can stay! And maybe there are other ways to take vagueness into account. One might be to allow only some set amount of time to look over a replay. Officials can look quickly over a few real-time angles, and then move on. No need to look any closer, as there isn’t anything there to look for.

But here’s an objection: You might think these problems can be avoided if video review is used only to correct “clear and obvious” errors. This is the supposed standard for use of VAR in soccer. Things aren’t so simple, though, because there aren’t just borderline cases. There are borderline borderline cases. That is, it can be vague when the borderline cases start. It isn’t just that there is unclarity, but that there is unclarity about when the unclarity even begins.


This is the phenomenon of higher-order vagueness. And it shows why falling back on “clear and obvious error” won’t help. The notion of a clear and obvious error is supposed to eliminate borderline cases of error from consideration, but it’s unclear when those toss-up cases even start.

And this phenomenon actually gets at the heart of some of the current dissatisfaction with VAR. For instance, this summer Argentina lost to Brazil in the Copa America semifinals. The game turned in part on a few close calls that went Brazil’s way and weren’t reviewed by VAR. Afterward, Lionel Messi said, “They [the officials] had called a lot of bullshit… But they didn’t even check the VAR [tonight], that’s unbelievable.” Messi thought those cases were borderline enough to be reviewed. The officials did not. Neither is talking about higher-order vagueness (as far as I know!), but that is the phenomenon at work here. It is vague when the borderline even begins. This means the only way to avoid that vagueness would be to review everything, which is something that no one wants.

The Stoics were one of the great Greek philosophical schools to emerge after Plato. They were system builders, and had intricate and intertwined logical, metaphysical, physical, and ethical theories. And part of their theory of logic was a deep commitment to bivalence, the claim that every proposition had to be true or false.