There's a big difference between processing a single image and building an AI that can handle thousands of videos in real time, all with different LUTs, bit depths, etc., and that works across multiple platforms.
There is very good background-removal software out there, but you still need to edit each video individually, semi-manually.
The ideal way would be using a depth-sensing camera like an Azure Kinect or Intel RealSense to record depth maps that can then be used as an alpha mask, but there are technical hurdles every step of the way, and it doesn't work retroactively on existing footage.
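For anyone curious what the depth route looks like in practice, here's a minimal sketch (not SLR's actual pipeline): it assumes a 16-bit depth frame that the Kinect/RealSense SDK has already aligned to the color frame, and the file names and the 1.5 m cutoff are made up for illustration.

```python
# Minimal sketch: turn an aligned 16-bit depth frame into an alpha mask.
# Assumes depth is in millimetres and already registered to the color frame.
import numpy as np
import cv2

depth = cv2.imread("frame_depth.png", cv2.IMREAD_UNCHANGED)  # uint16, millimetres (hypothetical file)
color = cv2.imread("frame_color.png")                        # uint8 BGR (hypothetical file)

# Keep everything closer than ~1.5 m as foreground; treat the rest as background.
alpha = np.where((depth > 0) & (depth < 1500), 255, 0).astype(np.uint8)

# Optional cleanup: close small holes left by depth-sensor noise.
kernel = np.ones((5, 5), np.uint8)
alpha = cv2.morphologyEx(alpha, cv2.MORPH_CLOSE, kernel)

# Attach the mask as an alpha channel (BGRA) so it can be composited downstream.
b, g, r = cv2.split(color)
rgba = cv2.merge([b, g, r, alpha])
cv2.imwrite("frame_rgba.png", rgba)
```

Even this toy version shows where the hurdles come from: the depth and color streams have to be synced and aligned, sensor noise eats into the mask edges, and none of it helps with footage that was shot without a depth camera in the first place.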
So how can we make an AI that does this with decent accuracy? Teach it to recognize all the body parts and predict movement flow, then remove everything else. I'm pretty impressed with the computer vision our SLR team has put together so far!
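To illustrate the "identify what to keep" idea, here's a rough sketch using an off-the-shelf person-segmentation model (torchvision's pretrained DeepLabV3) as a stand-in. SLR's actual model, classes and thresholds aren't public, so treat the model choice, file names and label index here as assumptions for illustration only.

```python
# Rough sketch: segment the person in a frame and keep only that region.
# Uses a generic pretrained model as a placeholder, not SLR's own network.
import torch
import numpy as np
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = Image.open("frame_color.png").convert("RGB")   # hypothetical input frame
batch = preprocess(frame).unsqueeze(0)                  # (1, C, H, W)

with torch.no_grad():
    logits = model(batch)["out"][0]                     # (classes, H, W)
labels = logits.argmax(0).cpu().numpy()

PERSON = 15  # 'person' in the VOC label set used by these pretrained weights
alpha = np.where(labels == PERSON, 255, 0).astype(np.uint8)

# Note: the mask is at the preprocessed resolution and would need resizing
# back to the source frame before compositing.
Image.fromarray(alpha).save("frame_alpha.png")
```

Running something like this per frame on thousands of high-bitrate VR videos, consistently across different LUTs, bit depths and platforms, is exactly where the real engineering work sits.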
This AI can then also be used for other things, like censoring JAV video, which I believe is still mostly done manually today, or generating scripts...
Vrsumo2017, hopefully that sheds some light on why it's not as simple as it might seem. Instead of identifying the background to remove, it needs to identify what not to remove. This isn't a one-off tool we're building; it's being made part of SLR's workflow, actual development and production.
In the future, this can then be expanded to multi-axis toys, HoloLens-style headsets, haptic feedback beyond strokers, and eventually something like sex androids.
@doublevr has posted each of these things individually all over the place already, but he never bothers to explain them or paint the bigger picture 🙂
As SLR staff I'm of course biased, but our R&D really sets SLR apart from any other studio!