I have started to
test translations with artificial intelligence, and I am getting quite acceptable results...
(Screenshots of the results attached.)
It only works if the dialogue is clear and well pronounced, and only one person speaks at a time. If several people speak at once, whisper, or speak between moans, it can't make them out.
But in the introduction scenes of one-on-one movies, where only the girl speaks, the AI manages to translate roughly 50-60% of the dialogue acceptably, which is very good.
You still miss phrases, but in the movies I've tried I've been able to follow the plot and the main thread of the dialogue, so I'm happy.
The problem is that few VR players support subtitles. HereSphere doesn't, so I had to use Virtual Home Theater on PC. I don't know whether DeoVR supports them; I still have to test it.
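In case it helps anyone testing players: as far as I can tell, every player that supports external subtitles auto-loads a sidecar .srt with the same basename as the video, sitting in the same folder. A small Python helper to rename the translated SRT to match (the file names are just examples):

```python
from pathlib import Path


def match_srt_to_video(video_path: str, srt_path: str) -> Path:
    """Rename an SRT so it sits next to the video with the same basename,
    the sidecar convention most players use to auto-load subtitles."""
    video = Path(video_path)
    srt = Path(srt_path)
    target = video.with_suffix(".srt")
    if srt.resolve() != target.resolve():
        srt.rename(target)
    return target
```

So `movie_ja_translated.srt` next to `movie.mp4` becomes `movie.srt`, which the player should then pick up automatically.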
The workflow is simple: AutoSub uses Google's speech recognition to convert the audio into Japanese subtitles in the standard SRT format. Then, with the DeepL website (which is better than Google Translate), you translate those texts from Japanese into your native language.
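To avoid pasting timestamps into DeepL by hand, the text lines can be pulled out of the SRT, translated as a plain block of lines, and merged back afterwards. A minimal sketch using only the standard library (it assumes well-formed SRT cues: index line, timestamp line, then text):

```python
import re


def extract_srt_text(srt: str) -> list[str]:
    """Return the subtitle text of each cue, in order, one entry per cue.
    Multi-line cues are joined with a space so DeepL sees one line per cue."""
    cues = re.split(r"\n\s*\n", srt.strip())
    texts = []
    for cue in cues:
        lines = cue.splitlines()
        # lines[0] is the cue index, lines[1] the timestamps, the rest is text
        texts.append(" ".join(lines[2:]))
    return texts


def merge_translation(srt: str, translated: list[str]) -> str:
    """Rebuild the SRT, keeping indices and timestamps but swapping in
    the translated lines (same order as extract_srt_text produced them)."""
    cues = re.split(r"\n\s*\n", srt.strip())
    out = []
    for cue, text in zip(cues, translated):
        lines = cue.splitlines()
        out.append("\n".join(lines[:2] + [text]))
    return "\n\n".join(out) + "\n"
```

You paste the output of `extract_srt_text` into DeepL, copy the translated lines back into a list, and `merge_translation` gives you a player-ready SRT with the original timing intact.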
To translate, first I tried the solution proposed by Jppilot, AutoGenSubGui:
It works very well and offers the best translation, but with files larger than 2 GB it throws a memory error...

And 99% of VR files are bigger than 2 GB...
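One workaround I haven't verified: since the transcription only needs the audio, you could first extract just the audio track with ffmpeg, which for a 2+ GB VR video should be far under the 2 GB limit, and feed that file to the tool instead (assuming it accepts audio files). The file names are examples, and the .m4a extension assumes AAC audio; change it to match the source codec if needed:

```shell
# Copy the audio stream out of the video without re-encoding:
# -vn drops the video stream, "-c:a copy" keeps the original audio codec.
ffmpeg -i movie_vr.mp4 -vn -c:a copy movie_vr_audio.m4a
```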
Then I tried
autosub-0.5.7-alpha-win-x64-pyinstaller (
Download), which accepts files of any size, but it is command-line only (run through a RUN.BAT file), and the translation it produces is the worst of the three; I don't know why.
Now I am using
PyTranscriber 1.5 (
Download). It works well, with an acceptable translation, but it crashes on some videos and doesn't translate them.
So for the moment I haven't found the perfect solution, but at least I can follow the plots. If anyone knows a better solution, please share it. Thanks!