Vahana VR and OpenCV can stitch images and videos in real time. Could a similar technique be applied at the game engine, GPU, or monitor level to align and stitch frames that are tearing? The idea would be to align the new frame with the previous frame, perhaps by rendering multiple viewports positioned right next to the player's controlled camera. I think this could be a cool alternative, or addition, to adaptive sync.
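To make the idea concrete, the core alignment step stitchers rely on is estimating the offset between two overlapping images. Here is a minimal CPU-side sketch using phase correlation in NumPy (one common global-registration primitive; real stitchers like OpenCV's also use feature matching, and an actual tear-compensation pass would have to run on the GPU at scanout). The function name and synthetic frames are my own illustration, not anything from Vahana VR or OpenCV:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the 2-D circular translation taking `ref` to `moved`
    via phase correlation (peak of the normalized cross-power spectrum)."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: shift a random "frame" and recover the offset
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(frame, moved))  # → (3, -5)
```

In the tearing scenario, the "reference" would be the portion of the previous frame above the tear line and the "moved" image the new frame below it; the estimated offset could then warp one half so the seam lines up, much like stitching adjacent camera views.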