iOS 15 now offers more control over person segmentation through the Vision framework (reference), letting us choose the segmentation quality and apply it to any image, not just the live ARKit camera feed. That opens up more manual pipelines: for example, we could use the ultra-wide camera, or run a higher-quality segmentation over a pre-recorded video.
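As a rough sketch of what that Vision API looks like (iOS 15+), a segmentation request can be run on an arbitrary `CGImage`; the `image` parameter and the helper name here are just for illustration:

```swift
import Vision
import CoreVideo

// Minimal sketch: run person segmentation on a single frame (e.g. from a
// pre-recorded video) and return the soft mask as a pixel buffer.
func personSegmentationMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    // qualityLevel trades speed for accuracy: .fast, .balanced, or .accurate.
    request.qualityLevel = .accurate
    // One 8-bit component per pixel; each value encodes person confidence.
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    return request.results?.first?.pixelBuffer
}
```

The returned mask could then be composited over the frame manually, instead of relying on ARKit's built-in people occlusion.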
Not sure if this is the best way to communicate this question.
Do you foresee any changes to Reality Mixer and its functionality after the rollout of the new version of ARKit (and related) at WWDC 2021?
https://developer.apple.com/augmented-reality/