Very true. But the grades are still subjective. You can definitely look at things like average gain per play while a player is on the field, average gain on plays run to his side, sacks, negative yards, etc. Just watching the film and trying to assign a grade is inherently subjective.
There is a lot of subjective stuff that still goes into player evaluation. Professional scouts are human beings with biases, like all of us. Choices about how to measure the value of specific in-game events (which stats to use) are subjective. Even when those measures are based on some objective criterion, the choice of which criterion to use is itself subjective.
The "Bayesian" approach to statistics allows subjective expert knowledge to be included as the initial guess at how likely a quantity of interest is to take on different values, known as the "prior." One of the most influential Bayesian statisticians of all time, Bruno de Finetti, considered probabilities to be completely subjective, and many Bayesians treat probability theory that way today. But don't think that using frequentist statistics (the other major "school" of statistics - the one with all the stuff from Fisher, Neyman, Pearson... that gang) will get you away from subjectivity. In all of parametric frequentist statistics, you have to choose a likelihood function, and that choice is just as subjective as any prior (or likelihood!) in Bayesian statistics.
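To make that concrete, here's a rough illustration in Python (made-up numbers, nothing to do with any real player): two different priors move a Bayesian posterior around, and two different likelihood families fit to the exact same data give different answers to the same question. Neither choice is handed to you by the data.

```python
# Toy illustration: subjectivity shows up on both sides of the Bayesian/frequentist divide.
import numpy as np
from scipy import stats

# --- Bayesian side: two different priors on, say, a completion rate ---
completions, attempts = 18, 30
for name, (a, b) in {"flat prior": (1, 1), "skeptical prior": (2, 8)}.items():
    posterior = stats.beta(a + completions, b + (attempts - completions))
    print(f"{name}: posterior mean = {posterior.mean():.3f}")

# --- Frequentist side: same fake per-play yardage, two different likelihood choices ---
rng = np.random.default_rng(0)
yards = rng.gamma(shape=2.0, scale=3.0, size=200)      # made-up per-play gains

mu, sigma = stats.norm.fit(yards)                      # assume a normal likelihood
shape, loc, scale = stats.gamma.fit(yards, floc=0)     # assume a gamma likelihood

# The two models give different answers to "how likely is a 15+ yard play?"
print("P(gain > 15), normal likelihood:", 1 - stats.norm.cdf(15, mu, sigma))
print("P(gain > 15), gamma likelihood :", 1 - stats.gamma.cdf(15, shape, loc, scale))
```

Same data, different modeling choices, different conclusions - which is the whole point.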
The choice of which tools to use is subjective. You can't get away from subjectivity.
The thing is that there's value in subjective data. Scouting reports are a great example. There's a lot of noise, but there's some really interesting "signal" there too.
Modern tools give us a bunch of ways of including subjective information in models. Think of "Twitter sentiment trading," for example. A program can analyze what is being said on Xitter (it was still Twitter when the term was coined) about a specific stock or currency or whatever else and generate additional signals for a trading application. Similarly, models of NFL rookie performance can take into account the content of (text) scouting reports on players.
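Here's a toy sketch of what folding scouting-report text into a model could look like (the reports and the "rookie value" numbers are completely invented, and a real pipeline would add combine numbers, college stats, and thousands of reports): TF-IDF features from the text feeding a simple linear model via scikit-learn.

```python
# Toy sketch: scouting-report text as a model input (all data below is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

reports = [
    "explosive burst, elite change of direction, raw hands",
    "high motor, average speed, outstanding instincts in zone coverage",
    "stiff hips, struggles in press coverage, great length",
    "polished route runner, limited top-end speed, reliable hands",
]
rookie_value = [0.8, 0.6, 0.3, 0.5]   # hypothetical first-year value metric

# Turn the text into TF-IDF features, then fit a regularized linear model on them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(reports, rookie_value)

# "Score" a new report the same way.
print(model.predict(["elite burst, reliable hands, high motor"]))
```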
By pulling together scouting reports and analyzing them as predictors of NFL player performance, modern tools also let us detect and correct for individual scouts' preferences and biases.
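One straightforward way to do that, sketched below with synthetic data and hypothetical column names: fit a mixed-effects model where each scout gets a random intercept, so a grader who is systematically harsh or generous has that lean estimated and separated from the player signal.

```python
# Sketch: estimating and correcting per-scout grading bias with a random intercept.
# All data is synthetic; "talent_proxy" stands in for whatever objective measures you have.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
scouts = np.repeat(["scout_a", "scout_b", "scout_c"], 40)
lean = {"scout_a": 0.5, "scout_b": -0.4, "scout_c": 0.0}   # each scout's built-in bias
talent = rng.normal(0.0, 1.0, size=len(scouts))

df = pd.DataFrame({
    "scout": scouts,
    "talent_proxy": talent,
    # observed grade = talent + the scout's personal lean + noise
    "grade": talent + np.array([lean[s] for s in scouts]) + rng.normal(0.0, 0.3, size=len(scouts)),
})

# Random intercept per scout soaks up each grader's systematic lean.
result = smf.mixedlm("grade ~ talent_proxy", df, groups=df["scout"]).fit()
print(result.params)           # fixed effects: intercept and talent_proxy slope
print(result.random_effects)   # estimated lean for each scout
```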
The thing about PFF ratings is that there's more noise in them than in, for example, professional scouting reports. I'm sure there's useful information in there, but I wouldn't even try to figure out how much of it is based on actual observation and how much is based on players' stats and reputations. Like I said elsewhere, I wouldn't be at all surprised to learn that PFF ratings are more lagging indicators of reputation than leading indicators of performance and value. I'd expect a wide range of statistical models to give better estimates of the value of past performance and far better predictions of what to expect next from a given player (PFF grades are descriptive, not in any way predictive).