I’m an Analyst: an expert in tennis strategy and tactics, and in how they’re applied by different players at different levels of the sport. The business I co-own develops technology to assist players in their development.
I don’t mention all that at the outset as an advertisement. As part of our business, we have entered a partnership with a North American outfit to develop the artificial intelligence to match our systems. We should have the first “cut” ready for use in September this year.
It’s been an interesting process so far, to say the least. So, what are some of the limitations of A.I. in tennis? How can it be applied effectively? And, most importantly, how can it help develop tennis players and, in particular, our juniors?
Current Limitations – Unforced Errors v Forced Errors
I say “current” because no limitation is enduring. How long each limitation will last is a “how long is a piece of string” question.
The first limitation is the measurement of what is a forced error versus what is an unforced error. There are some analysts and tennis people out there who will tell you that “tennis is a game of errors”.
Rubbish, I say! That’s only true if you lump the forced in with the unforced. The fact is, if I hit an unforced error, it’s my fault; if my opponent forces me to make an error, that’s on them. Forced errors are more akin to winners than they are to errors.
Back to the problem at hand. Deciding what is forced and what is unforced is not as difficult as those analysts make out. It’s actually just lazy analysis that leads them there. It’s easier because, for example, Hawk-Eye only measures shots that land in the court. On that basis, it’s very easy to build a very large data set and say, “over 1 million points of professional tennis, this many winners were hit versus this many errors”.
In our analysis, only around 5% of points in a match are debatable. That’s around 6 points per match. There are ways we can make our A.I. learn what is forced versus what is unforced by providing labelled examples from our database.
So, our A.I. will display the difference between forced and unforced errors, and it must, because our systems are designed that way – if you’re not measuring the difference, how do you know what to improve?
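To make the labelled-examples idea concrete, here’s a minimal sketch, in Python, of the kind of classifier that approach implies. To be clear, this is not our pipeline: the features, numbers and labels below are hypothetical stand-ins for what an analyst-labelled database might supply.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for each error: incoming ball speed (km/h),
# metres the player had to move, seconds to reach the ball, and how
# deep the opponent's shot landed (metres inside the baseline).
X = np.array([
    [130, 4.5, 0.6, 0.8],   # fast, wide, rushed, deep: analysts label it forced
    [ 75, 0.5, 1.4, 4.0],   # slow, short, player right on top of it: unforced
    [115, 3.0, 0.8, 1.5],
    [ 80, 1.0, 1.2, 3.5],
])
y = np.array([1, 0, 1, 0])  # 1 = forced, 0 = unforced, labelled by human analysts

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Grade a new error: a quick ball the player only just reached.
print(clf.predict_proba([[120, 4.0, 0.7, 1.0]]))  # [P(unforced), P(forced)]

The point of the sketch is the workflow, not the model: human analysts label the debatable points, and the machine learns to copy that judgement across everything else.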
Next Limitation – This One’s Much Harder
It’s scoring.
How does A.I. tell that a try has been scored in Rugby? The referee sticks his arm in the air and blows a whistle. A goal in AFL? The goal umpire… you know what he does. At the junior level of those sports, and countless others, there’s a ref or an umpire who tells you what happened. Not so in the bulk of tennis competitions and tournaments.
No doubt there’s now a chorus of calls of “this is the reason we need umpires at junior tournaments”. Nice thought, but not economically practical, I’m afraid. That leaves us with players self-umpiring matches. Even at ITF junior events, self-umpiring is generally the case.
To explain the difficulty of getting A.I. to know the score, let me provide an example: Player A serves a fault on the first point of a game. That’s easy. Then Player A’s second serve is long, but Player B doesn’t call it long. Instead, Player B hits a return in the net.
Not that they call the score out loud (not in modern-day juniors, anyway), but the players are now playing at 15-0, while the A.I., having seen the second serve land long, has them at 0-15 after a double fault. That’s not trivial, because it has a knock-on effect for the rest of that service game.
The solution? The A.I. learns scoring probabilities and, if its 0-15 call turns out to be wrong, goes back and amends the score when the players swap server, along the lines of the sketch below.
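Here’s a toy version of that back-correction idea. Again, this is illustrative rather than our actual system: it assumes the A.I. attaches a confidence to every point call, and that the swap of server tells it the game is, in fact, over.

from dataclasses import dataclass

@dataclass
class PointCall:
    winner: str        # "server" or "returner", per the A.I.'s line calls
    confidence: float  # how sure the A.I. is of this call, 0 to 1

def is_complete(calls):
    s = sum(c.winner == "server" for c in calls)
    r = len(calls) - s
    return max(s, r) >= 4 and abs(s - r) >= 2   # simplified game rule, no deuce

def amend(calls):
    # The players just swapped server, so the game is over. If the A.I.'s
    # point-by-point record doesn't add up to a finished game, flip its
    # least-confident calls until it does.
    calls = [PointCall(c.winner, c.confidence) for c in calls]
    for c in sorted(calls, key=lambda c: c.confidence):
        if is_complete(calls):
            break
        c.winner = "returner" if c.winner == "server" else "server"
    return calls

# The example above: the A.I. scored the uncalled long serve as a double
# fault (low confidence), then the server won three clean points. 3-1 isn't
# a finished game, so the shaky first call gets flipped to make it 4-0.
game = [PointCall("returner", 0.55), PointCall("server", 0.95),
        PointCall("server", 0.95), PointCall("server", 0.95)]
print([c.winner for c in amend(game)])  # all four points to the server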
That may not help when there are two, three or more discrepancies like that in a single service game. The further problem is that in College Tennis, for example, where no-ad scoring means a single sudden-death point decides the game at Deuce, if the above situation happened on that point, the A.I. doesn’t actually know who won the game.
This sort of thing keeps me up at night!
What if we just leave the score out? Not cutting it with this Analyst. The score is an integral part of how we teach the importance of analytics in tennis. What happens at 30-40 is a very different risk/reward scenario to what happens at 40-0, and it’s vital to know that.
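To put numbers on that difference, here’s the standard textbook model of a service game (a toy calculation, not our data): assume the server wins each point with some fixed probability, say 60%, and ask how much the next point swings the chance of holding serve.

from functools import lru_cache

@lru_cache(maxsize=None)
def p_hold(s, r, p=0.6):
    # Server's chance of holding from s points vs r points won
    # (0,1,2,3 = 0,15,30,40). Deuce uses the closed form so the
    # recursion terminates.
    if s >= 3 and s == r:
        return p * p / (p * p + (1 - p) * (1 - p))
    if s >= 4 and s - r >= 2:
        return 1.0
    if r >= 4 and r - s >= 2:
        return 0.0
    return p * p_hold(s + 1, r, p) + (1 - p) * p_hold(s, r + 1, p)

def importance(s, r, p=0.6):
    # How much winning vs losing the next point changes the hold chance.
    return p_hold(s + 1, r, p) - p_hold(s, r + 1, p)

print(f"at 40-0:  {importance(3, 0):.2f}")   # about 0.05: almost nothing rides on it
print(f"at 30-40: {importance(2, 3):.2f}")   # about 0.69: the whole game rides on it

At 40-0 the next point barely moves the needle; at 30-40 it decides the game. That asymmetry is exactly the risk/reward lesson the score makes teachable.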
The Final Limitation
Simply, the A.I. is not as good as a human Tennis Analyst. Not yet, anyway. It’s the main reason we provide an analytics/analysis “service” rather than a set-and-forget tech plan.
Technology needs to back up the service being provided, the coach’s message and the developmental goals, not attempt to be the ultimate solution or all things to all players.
Over-Complicating
When it comes to complication, it’s incredibly easy to get carried away with piles and piles of numbers. “Millions of points of pro or college tennis have taught us that if you do x, you have a higher likelihood of a positive outcome than if you do y or z”.
Shallow.
Tennis is an individual sport, and individualising the analysis of a player’s game is vital to their development. To tell a 14-year-old that “a million points of pro tennis tells us you will win more points when you serve wide on the Deuce side”, and to use that as the reason for them to hit that serve more often than any other, is a mistake.
Carlos Alcaraz proves the point. He has more variations in his kit bag than most pros, and certainly more than any 20-year-old I can remember.
Last Word
A.I., like technology generally, plays a huge part in the development of tennis players, and the part it plays will only grow as time goes on. That’s why we’re developing it.
Like everything, it needs to be used in the right way for developing players, and certainly shouldn’t be over-used by parents, coaches or analysts to pluck numbers to suit a narrative.