
Adding tracked images with running session. #2

Open
Maksims opened this issue Nov 16, 2020 · 5 comments

@Maksims

Maksims commented Nov 16, 2020

To allow a seamless experience without needing to end and restart the XR session, it would be great to be able to add and remove tracked images while an XR session is running.

Additionally, the widthInMeters requirement is fairly limiting. The underlying AR systems are capable of estimating and refining image dimensions over time, and the technology keeps improving, which will likely make such a requirement obsolete.
We already have WebXR AR measuring apps that do a good job of measuring physical space with a virtual ruler.
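For reference, under the current proposal (as prototyped in Chrome) the complete image set and each image's physical width have to be declared when the session is requested and can't change afterwards. A rough sketch of that shape (the `marker-img` element id is just a placeholder):

```js
// Rough sketch of the current proposal: all tracked images and their
// widthInMeters are fixed at session request time.
const marker = await createImageBitmap(document.getElementById('marker-img'));

const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['image-tracking'],
  trackedImages: [
    { image: marker, widthInMeters: 0.2 } // widthInMeters is mandatory today
  ]
});
```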

@klausw
Contributor

klausw commented Nov 25, 2020

One concern I had was around implementation constraints. ARCore would support adding images at runtime, and recommends but doesn't require a specified physical width. However, based on my reading of the ARKit documentation, it sounds as if images need to be specified before starting an AR session, and that the width specification is mandatory. I don't have experience with ARKit myself, and am not involved with any AR-on-iOS project, but it seemed safer to start with what seems to be a common baseline and then potentially add features such as on-the-fly image registration as a follow-up. @grorg, do you happen to know more about ARKit's requirements in this context, separately from any specific browser implementations?

If all platforms interested in image tracking could support on-the-fly registration and undimensioned images, we could of course revisit this.

Separately, I think specifying images as part of the session request may be preferable from a privacy point of view. A user agent would then have the option of letting the user view the images that are going to be tracked as part of the permission/consent prompt.

@Maksims
Author

Maksims commented Nov 26, 2020

One of the use cases where in-session image adding would be mandatory: the user selects a plane from the real world using the depth sensing or hit test APIs. Using camera access we crop that image and transform it, and from the selected plane we know its real-world dimensions.
Then we can add this image to the image tracking API for active tracking.

This whole flow would stay in-session. I can come up with many other use cases, such as an encyclopedia-like application, where we would want to stay inside the AR session and load/unload images dynamically, since an encyclopedia can have too many entries to register up front.
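To illustrate, here is a purely hypothetical sketch; `addTrackedImages()` and `removeTrackedImages()` do not exist in the current proposal and only show the kind of in-session call this use case needs:

```js
// Hypothetical API -- addTrackedImages()/removeTrackedImages() are not part
// of the current proposal, shown only to illustrate the requested capability.
async function trackSelectedPlane(session, croppedBitmap, planeWidthInMeters) {
  // Image cropped via camera access, width taken from the selected plane.
  const ids = await session.addTrackedImages([
    { image: croppedBitmap, widthInMeters: planeWidthInMeters }
  ]);
  return ids; // later, e.g. when the entry is unloaded:
              // await session.removeTrackedImages(ids);
}
```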

Also, I've noticed that providing a width that doesn't match the real world currently leads to wrong distance estimation. The object appears to be tracked well in screen space, but if the provided width is larger than the real one, the returned transform ends up further from the camera than in the real world, and vice versa. Perhaps combining image tracking with depth sensing / hit testing would be required for more precise placement in this case.
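As a partial mitigation, Chrome's current prototype reports a measuredWidthInMeters estimate on each tracking result, so (assuming that field is populated) one could at least detect per frame how far off the declared width is:

```js
// Inside the requestAnimationFrame callback; assumes Chrome's prototype, which
// exposes getImageTrackingResults() and measuredWidthInMeters on each result.
function checkTrackedImages(frame, refSpace, declaredWidths) {
  for (const result of frame.getImageTrackingResults()) {
    if (result.trackingState !== 'tracked') continue;
    const pose = frame.getPose(result.imageSpace, refSpace);
    // If the declared width is larger than the real width, the pose lands too
    // far from the camera (and vice versa); this ratio hints at the error size.
    const ratio = declaredWidths[result.index] / result.measuredWidthInMeters;
    console.log(`image ${result.index}: width off by ~${ratio.toFixed(2)}x`, pose);
  }
}
```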

@michaelvogt

Has there been any progress?

My use case is to load markers based on the current geolocation. When my app starts an AR session, the user can geolocate the device. Based on the reported location I can ask a so-called discovery system whether there are any active image targets close by, together with other geolocated objects.

If I understand correctly, I would currently need to restart the AR session to use those image targets, which would lead to a negative user experience.

Regarding widthInMeters: I'm happy to provide it, to make sure tracking has the distance / dimensions correct from the beginning and to prevent corrections later on. It's not limiting at all for me; user experience comes first.
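For what it's worth, this is roughly the workaround I'd be stuck with today; `fetchNearbyImageTargets()` is a placeholder for my discovery system:

```js
// Today's workaround sketch: the only way to switch image targets is to end
// the session and request a new one. fetchNearbyImageTargets() is a placeholder.
async function restartWithNearbyTargets(currentSession) {
  const pos = await new Promise((resolve, reject) =>
    navigator.geolocation.getCurrentPosition(resolve, reject));
  const targets = await fetchNearbyImageTargets(pos.coords); // [{ image, widthInMeters }, ...]

  await currentSession.end(); // the restart that hurts the user experience
  return navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['image-tracking'],
    trackedImages: targets
  });
}
```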

@TomDDH

TomDDH commented Nov 21, 2022

Adding/updating/removing image targets at runtime could be a big benefit for user experience. One use case I can think of: when people wear a headset while walking down the street, we can use our own AI tool to decide which image target to use and then, based on what the AI returns, update our image targets to show content, so the user can keep walking indefinitely.

@papadako

Hello,

I guess it is obvious that there is a need to update the tracked images in real time and to manage the current state of the AR app's session without having to restart it.
