Docs
- Recording mobile games
- OBS
- Preprocess
- Multiplayer Game Editing
- Encoding from Kdenlive
- Kdenlive notes
- Audio Mastering
- Video Rendering
- Muxing
Recording mobile games
For this we use an old Motorola G4 with Lineage OS installed on it. This version of Lineage comes with a screen recorder, but it will only record audio from the device’s microphone. So we use the phone’s recorder to capture the game’s video plus us talking, and we use OBS to record us talking again along with the game sound coming in through the line-in port on the PC. The line-in adds a lot of noise, which we remove using Noise Suppression (-30db).
Because all of the ingredients start out of sync, we sync the video from the device with the audio from OBS using the two recordings of our vox: the one in the video from the device, and the one in OBS alongside the game sound.
OBS
Output mode is Advanced; we record to mkv with 3 audio tracks. We use QuickSync with ICQ set to 1, which is nigh-lossless but smaller. Audio bitrates for the active channels are set to 320 and the tracks are named GME, VOX, and MIX. Generally we stick to 30 or 60 fps.
Make sure the colour space is set to 709 under Advanced.
Preprocess
Videos should be transferred from local storage to the network share (/mnt/LPWorking).
We use AI-based noise suppression (https://github.com/GregorR/rnnoise-models) to get a clean vocal track.
The result is kept uncompressed, because compressed lossless formats cause clicks and pops in certain circumstances.
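As a sketch of one way to apply those models outside of OBS, ffmpeg’s arnndn filter can load an rnnoise model file. The model path and file names below are illustrative, not the exact ones we use:
# arnndn loads an rnnoise model from the repo above (check the exact model name);
# output is written as uncompressed wav, per the note above.
ffmpeg -i vox_raw.flac -af arnndn=m=./rnnoise-models/beguiling-drafter-2018-08-30/bd.rnnn vox_clean.wav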
Multiplayer Game Editing
Lessons learned from Sea of Thieves
If you can embed character portrait stuff into the source videos via transcoding without it blocking anything important, do so; it’s always good to be able to see who is playing at any point.
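A minimal sketch of baking a portrait in with ffmpeg while transcoding (file names, the corner, and the 20px margin are assumptions, not our actual values):
# Overlay the portrait in the bottom-right corner; adjust the position so it doesn't cover the HUD.
ffmpeg -i player2_capture.mkv -i player2_portrait.png \
  -filter_complex "[0:v][1:v]overlay=W-w-20:H-h-20" \
  -c:v libx264 -crf 18 -preset fast -c:a copy player2_portrait.mkv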
Set up tracks that have the effects pre-prepared, like left vertisplit and right vertisplit, so you can move segments onto the relevant track rather than adding or removing chains of filters to / from individual clips.
Keep the sound mono unless completely isolated.
Encoding from Kdenlive
We need a few different encoding profiles. The main one is for actually producing episodes; for this we use x265. It’s slow, but makes very small files. These must be small as we’ll be archiving them forever, and they must be high enough quality not to take a big hit when YouTube transcodes them.
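The real export goes through a kdenlive render profile, but a rough ffmpeg equivalent of the episode encode would look something like this (crf, preset, and file names are assumptions):
# Slow x265 encode for small archival files; audio is mastered and muxed in separately (see Muxing).
ffmpeg -i episode_render.mkv -c:v libx265 -preset slow -crf 20 -an episode_final.mkv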
Lossless encoding is great for in-project use to make layers easier to manipulate. It’s technically not lossless due to chroma subsampling, but good enough for most purposes.
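Again the actual profile is a kdenlive render preset; a rough ffmpeg equivalent of that kind of near-lossless intermediate (codec and pixel format are assumptions) is:
# x264 at qp 0 is mathematically lossless, but the chroma is still subsampled to 4:2:2.
ffmpeg -i layer_source.mkv -c:v libx264 -qp 0 -preset ultrafast -pix_fmt yuv422p -c:a copy layer_intermediate.mkv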
You can make a transparent video with kdenlive by using a profile that supports it, along with a base layer which is a transparent image, and setting the internal format to rgb24a (otherwise the internal format is yuv422).
Audio-only encoding is another part of the process; it means no quality is lost when exporting edited sound to be mixed together outside of kdenlive. kdenlive and flac have issues, which is why we avoid flac in footage that will be used inside kdenlive. For the final export it’s fine, and we then re-mux it into the final video as an opus file (256k). flac will also be missing length information for some reason (it’s there if you put it in a .mka file, but not a .flac file). The same problems are unfortunately also there with .alac files.
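As an illustrative single-episode form of that re-mux (file names are examples; the batched version with per-episode offsets is under Muxing below):
# Copy the rendered video stream and attach the mastered audio as 256k opus.
ffmpeg -i Game_001_2160.mkv -i Game_001_mix.flac -map 0:v -map 1:a -c:v copy -c:a libopus -b:a 256k Game_001_final.mkv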
Kdenlive notes
Switch off track compositing; it gives strange results. You’re better off with affines (rgb) or composites (yuv).
Don’t use fade to / from black, use dissolve instead (maybe patch this out of local versions?)
Audio Mastering
Recording is done at 80 +20db
Keep a log of anything you do outside of this in the final output folder, named after the game, e.g. Soul Reaver 2 recipe.
Note that the idea isn’t to get an exact overall I of -23, as that’s actually fairly quiet. The idea is to get a repeatable process that produces stuff at the same consistent volumes across LPs; if r128 compliance is later forced on the finished product, it should adjust to the new volume easily, as the mix will just need to be lowered in volume. The game track on its own, though, is already at -23.
- Render audio only using the flac profile, selecting full project and stem audio, which will automatically make an audio file for each separate track.
- You can downmix the vox here with sox (sox input.flac output.flac channels 1) to speed up the process, or do it in audacity next.
- Import each track into audacity for final audio mix.
- Make the vox mono (Tracks > Mix > Mix Stereo Down to Mono)
- Compress the vox (threshold -30, noise floor -70, ratio 3:1, attack time 0.10, release time 1.0, make-up gain unticked, compress based on peaks unticked)
- Export just the vox track from audacity as flac
- ffmpeg -i voxnorm.flac -af ebur128=dualmono=1 -f null /dev/null
- Work out the gain needed to reach -27: the target minus the measured value, i.e. -27 - I (e.g. an I of -30 means +3db). See the helper sketch after this list.
- Amplify the vox by this amount.
- Use the limiter (Soft, 10db gain, limit to -3db, hold 10ms, no make-up gain) on the vox to top up the volume.
- Now do something similar for the game audio volumes (alternatively, you can mix multiple game tracks first with sox -m -v1 input -v1 input output): ffmpeg -i gme.flac -af ebur128 -f null /dev/null
- Work out the gain needed to reach -23, i.e. -23 - I.
- Amplify the game to -23 so the vox is clear over the game when it is autoducked, but the game is still fairly loud when there is no talking. If the game is very quiet and you can’t amplify up enough, use the limiter to get to -23. This will need to be done separately on video tracks from sources outside the game. Very rarely, you might need to adjust the video sound before you normalise the volumes, because one section is an anomaly, or because intro videos are too loud; turn individual sections up or down, or soft-limit the entire video.
- Autoduck the game sound, with the voice underneath, at -12 duck, with 1s pause, and outer fades of 0.5. Threshold should be -15db
- Export the audio as selections to flac so you end up with the first two tracks as a mix, and the un-ducked game audio as a separate file.
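The helper below is a hedged sketch (the file name and the -27 target are examples) that pulls the integrated loudness out of ffmpeg’s ebur128 summary and prints the gain needed to reach the target:
# Grab I from the ebur128 summary and compute target - measured; adjust the target per track.
target=-27
i=$(ffmpeg -hide_banner -i voxnorm.flac -af ebur128=dualmono=1 -f null /dev/null 2>&1 \
    | grep -A1 'Integrated loudness' | grep -oP 'I:\s*\K-?[0-9.]+')
echo "measured I: $i LUFS, amplify by $(echo "$target - $i" | bc) db"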
Video Rendering
We render in guides mode to get each episode out of the same project.
Another set of scripts produces 4k versions with lanczos scaling, using fast as the preset so rendering speeds stay roughly the same. At the end we have _2160 versions and _1080 versions.
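The real 2160 scripts come out of kdenlive’s render dialog; as a rough standalone illustration of the lanczos upscale in ffmpeg (file names, codec, and crf are assumptions):
# Upscale to 4k with lanczos; the fast preset keeps the render time close to the 1080 pass.
ffmpeg -i Game_001_1080.mkv -vf scale=3840:2160:flags=lanczos -c:v libx265 -preset fast -crf 20 -c:a copy Game_001_2160.mkv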
Muxing
Due to various audio issues, and the added flexibility of rendering video and audio separately (we can start rendering video before we master the audio), we render them separately and mux the audio in at the end.
We mux as follows; note the framerate in the sed part (the /60) and the episode numbers after the first triple colon (:::).
parallel --ungroup -j1 ffmpeg -hide_banner -i {1.}_2160.mkv -ss {2} -i /home/user/kdenlive/Game_mix.flac -ss {2} -i /home/user/kdenlive/Game_gme.flac -c:v copy -map 0:v -map 1 -map 2 -c:a libopus -ab 256k -shortest -y {1} ::: /mnt/LPWorking/Fin/Game{1..9}.mkv :::+ `cat /home/user/kdenlive/scripts/Game_00?.sh | grep -oP 'in=\d+' | grep -oP '\d+' | sed 's:$:/60:g' | bc -l | tr '\n' ' '`