Pointless leftfield question perhaps, but are you considering using an LLM to interpret vocal signals (anything from full-on speech to grunts, moans, orgasmic screaming and the like) as a way to add 'imagination' to the script? I was originally thinking about it for JOIs, but more interestingly, maybe it could add texture to the blank bits in AI scripts when the AI can't see what's happening?
Also, a less pointless question: how often do you do a mass regen for AI scripts? The slow incremental improvements really do make a difference. A quick search shows there are some v1.0.1s still out there from a year ago, which are completely blown out of the water by v1.4, to such an extent that I'd rather put them on a watchlist and wait.
Any idea when the next regen is likely? Will it be a complete overhaul that bootstraps EVERYTHING up to v2.0 immediately...?