siarhei Thanks, I'll check out the script.
Seems you already have a nice automated test setup. However, in this case I was referring more to segment-level averages. If you have a movie with segments of reasonably consistent action/context (like a certain position and intensity), you can compare aggregated metrics for those parts of the movie: total strokes, average stroke range, average stroke position, and stroke variability. I noticed big overall differences in these metrics between AI scripts and human-made scripts, and the difference varies greatly by position and intensity (that's why I made the suggestion).
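To make the idea concrete, here's a minimal sketch of what I mean by per-segment aggregates. It assumes the usual funscript shape of timestamped positions (`at` in ms, `pos` in 0–100) and treats a "stroke" as the movement between two direction reversals; the function name and input format are just for illustration:

```python
import statistics

def segment_metrics(actions, start_ms, end_ms):
    """Aggregate stroke metrics for one segment.

    `actions`: list of (at_ms, pos) pairs, pos in 0..100,
    like the entries in a funscript's "actions" array.
    """
    pts = [(t, p) for t, p in actions if start_ms <= t <= end_ms]
    # Find direction reversals; each pair of consecutive
    # reversals bounds one stroke.
    extrema = [pts[0]]
    for prev, cur, nxt in zip(pts, pts[1:], pts[2:]):
        if (cur[1] - prev[1]) * (nxt[1] - cur[1]) < 0:  # direction flips
            extrema.append(cur)
    extrema.append(pts[-1])
    ranges = [abs(b[1] - a[1]) for a, b in zip(extrema, extrema[1:])]
    return {
        "total_strokes": len(ranges),
        "avg_range": statistics.mean(ranges),
        "avg_position": statistics.mean(p for _, p in pts),
        "range_stddev": statistics.pstdev(ranges),  # stroke variability
    }
```

Run that over each tagged segment of an AI script and the matching segment of a human script and the differences I'm describing show up directly in the numbers.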
Now, I've already pointed out some of the specifics (like the overall intensity of mostly fast parts, some of the "perspective" issues, and the bottom-range issue), but another important one is stroke variability during fast parts. I first noticed this when trying out the script for Long Night. It felt all over the place in the fast parts, and when you analyze the script it becomes pretty obvious why. I haven't compared this in detail (as in strokes per minute, etc.), but it wouldn't surprise me if it's this high variability, and not the actual speed, that made these parts uncomfortable. I just can't imagine such an irregular pattern feeling good at high speeds.
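You can actually separate those two effects (speed vs. irregularity) numerically. A rough sketch, assuming you've already extracted the timestamps of the direction reversals for a fast part (the input format here is hypothetical):

```python
import statistics

def speed_vs_irregularity(stroke_times_ms):
    """Return (strokes per minute, timing irregularity) for a run of strokes.

    `stroke_times_ms`: timestamps of successive direction reversals.
    """
    durations = [b - a for a, b in zip(stroke_times_ms, stroke_times_ms[1:])]
    spm = 60_000 * len(durations) / (stroke_times_ms[-1] - stroke_times_ms[0])
    # Coefficient of variation: 0 for a perfectly regular rhythm,
    # larger the more the stroke timing jumps around.
    cv = statistics.pstdev(durations) / statistics.mean(durations)
    return spm, cv
```

Two parts can have the exact same strokes per minute but wildly different coefficients of variation, which is exactly the pattern I'd expect between the AI and manual fast parts below.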
AI generated fast part (long night)
For comparison: a fast part of a manual script (screwing out of school)
Anyways, I think these are the kinds of things an automated test like the one I described (in combination with comparing the heatmaps) could catch early on, and it would let you better measure progress between iterations. But I'll shut up about it now 🙂
Good luck with the project!