I disagree that you can't have an educated guess, though. I think formulating an opinion by considering past patterns and precedents in Jamal's career is, by definition, an educated guess.
There are lots of things we can't know deterministically, but for which we can make probabilistic models that allow for testable predictions.
So while we can't know with certainty how many games a given player will play next season (unless it's zero, as when we already know the player will spend the season recovering from an injury), we can estimate how many games that player is likely to play: an expected number of games. In fact, we can estimate quite a bit about the probabilities of all possible results (that is, about the full distribution of expected results).
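If that sounds abstract, here's a minimal sketch in Python of what such a probabilistic model might look like. The 85% availability rate and the assumption that games are independent are made up for illustration; a real model would be fit to the player's actual history.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: a player suits up for each game of an 82-game season
# independently with probability p. Both p and the independence assumption
# are illustrative, not fit to any real player's data.
p = 0.85
n_games = 82

# Simulate many seasons to approximate the full distribution of games played.
seasons = rng.binomial(n_games, p, size=100_000)

print("expected games played:", seasons.mean())        # ~ 82 * 0.85 = 69.7
print("P(plays 70 or more):", (seasons >= 70).mean())  # a testable prediction
```

The point is that even though the exact number of games is unknowable in advance, the model yields an expected value and a probability for every possible outcome, and those probabilities can be checked against what actually happens.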
And we can test such predictions by taking the relative frequency of a given result (the fraction of times in a given sample that the result occurs) as the best estimator of its probability, and comparing the long-run relative frequencies to our predictions. That language can be confusing, so let me use an example.
Let's look at weather predictions. Say that, given the data currently available, a model puts the chance of rain tomorrow at 10%, and that's the prediction that gets published. A probabilistic prediction was made, so it's neither correct nor incorrect if it rains the next day. Missing this was a common, and huge, mistake in the reporting on Nate Silver's presidential election models in 2008 and 2012. Many media stories said Silver got 49 of 50 states "right" in 2008 and all 50 in 2012. But that's not what happened. The media counted Silver's model as "right" about a state whenever the candidate to whom the model gave the highest chance of winning that state (Obama, or his opponent: McCain in '08, Romney in '12) actually won it. That treats probabilistic forecasts as if they were binary calls, which they are not: a model that gives a candidate a 60% chance in a state isn't simply "wrong" when the other candidate wins.
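To make the "10% chance of rain" claim concrete in relative-frequency terms, here's a toy simulation; the true 10% probability is assumed purely to generate synthetic outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 1,000 days on which a 10%-chance-of-rain forecast was issued,
# with each day's outcome drawn from an assumed true 10% rain probability.
rained = rng.random(1_000) < 0.10

# A single rainy day can't falsify the forecast, but over many such days
# the relative frequency of rain can be compared against the stated 10%.
print("days it rained:", rained.sum())
print("relative frequency:", rained.mean())  # should hover near 0.10
```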
The way to test probabilistic predictions is described in the paper "The Well-Calibrated Bayesian" by A. P. Dawid, published in the Journal of the American Statistical Association in September 1982. Basically, you collect predictions and look at how often a given thing happened when you said it had, for example, a 10% chance of happening. Repeating that for other predicted chances, you can make a graph of predicted chance vs. how frequently it actually happened. You'd want the points to lie close to the diagonal, where observed frequency equals predicted chance: roughly linear, with slope close to 1 and intercept close to 0.
You might have to "bin" predictions to increase the sample size. For example, you may never have predicted exactly a 53% chance of rain, but you might have made predictions in the 50%-54.9% range enough times for the sample to be "big enough" (how big is big enough is a whole other topic on which a lot of work has been done and a lot can be said). If the graph strays far from the diagonal, the probabilistic predictions are not very good. If it hugs the diagonal, that's a sign that the probabilistic predictions have done pretty well.
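Here's a minimal sketch of that binning-and-comparison procedure, using synthetic forecasts. In real use you'd substitute your logged predictions and outcomes; the 5-percentage-point bin width and the minimum bin size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 10,000 forecasts, with outcomes drawn so that the
# forecasts happen to be well calibrated (purely for demonstration).
predicted = rng.uniform(0, 1, 10_000)
happened = rng.random(10_000) < predicted

# Bin predictions (0-4.9%, 5-9.9%, ...) so each bin has a usable sample.
bin_edges = np.arange(0, 1.05, 0.05)
bin_index = np.digitize(predicted, bin_edges) - 1

for b in range(len(bin_edges) - 1):
    mask = bin_index == b
    if mask.sum() < 30:  # skip bins whose sample is too small to be useful
        continue
    print(f"predicted {bin_edges[b]:.0%}-{bin_edges[b + 1]:.0%}: "
          f"observed {happened[mask].mean():.1%}  (n={mask.sum()})")
```

For a well-calibrated forecaster, the observed frequency in each bin sits close to the bin's predicted range; plotting the bin midpoints against the observed frequencies gives exactly the diagonal-hugging graph described above.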
Dawid also proves that a statistician using now-standard Bayesian methods should expect his own predictions to be well calibrated in this sense, but the part I wanted to cover here is how the quality of probabilistic predictions is measured.