How to obtain the center point coordinates and depth values during NYU inference #47
Hello, I don't understand your question about the purpose of the center coordinates. What are they used for besides obtaining the bbox?
@weilanShi
Thank you very much for your reply. Ultimately, I want to eliminate center-point as a value that must be prepared in advance (mainly so inference doesn't require extra models). From the code, center-point serves two purposes: 1. combined with xy_thres, it crops out the rough bbox region; 2. combined with depth_thres, it removes outlier points, and imgResize - center brings the x, y, z of imgResize to the same scale. To get rid of this precomputed center-point, I'd like to replace its two roles as follows: 1. use a hand detector to obtain the bbox; 2. take the mean depth of a small central ROI of imgResize as the center value, and recompute the mean/std accordingly. I ran some experiments following this approach; currently the loss converges only with difficulty, and the test error is 24. May I ask whether there is a problem with this approach, or could you offer other suggestions? Many thanks, looking forward to your reply, and happy National Day in advance!
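The ROI-based replacement described above could be sketched as follows. This is only an illustration of the idea, not code from the repository: the function names, the ROI half-size, and the `depth_thres` value are all hypothetical.

```python
import numpy as np

def estimate_center_depth(img_resize, roi=8):
    """Estimate a per-sample center depth from a small central ROI.

    img_resize is assumed to be a 2-D depth crop (H, W) in millimeters.
    Pixels with value 0 are treated as missing depth.
    """
    h, w = img_resize.shape
    patch = img_resize[h // 2 - roi:h // 2 + roi, w // 2 - roi:w // 2 + roi]
    valid = patch[patch > 0]
    if valid.size == 0:                      # fall back to the global valid mean
        valid = img_resize[img_resize > 0]
    return float(np.mean(valid))

def center_and_clip(img_resize, depth_thres=150.0):
    """Subtract the ROI-based center depth, then zero out far outliers."""
    center_z = estimate_center_depth(img_resize)
    out = img_resize - center_z
    out[np.abs(out) > depth_thres] = 0.0     # remove points beyond depth_thres
    return out, center_z
```

If the hand is not roughly centered in the crop (e.g. the detector bbox is loose), the central ROI may land on background, which could explain unstable convergence; using the median of valid pixels inside the detector bbox instead of a fixed central patch might be more robust.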
Hello, I have one more question: is the z in the UVD coordinates simply the distance from the hand joint to the depth camera at capture time, or does it require some other coordinate transformation?
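For reference, the usual convention in NYU-style pipelines is that d in UVD is the raw depth reading in millimeters, and conversion to camera-space XYZ uses the pinhole back-projection. A minimal sketch, assuming the commonly cited NYU intrinsics (verify the exact values against the repository's code):

```python
# Commonly cited NYU-dataset intrinsics; confirm against the repo's constants.
FX, FY = 588.03, 587.07
U0, V0 = 320.0, 240.0

def uvd_to_xyz(u, v, d):
    """Back-project pixel (u, v) with depth d (mm) to camera-space XYZ (mm)."""
    x = (u - U0) * d / FX
    y = (v - V0) * d / FY
    return x, y, d
```

So z in XYZ equals the depth reading d directly; only x and y need the intrinsics.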
I appreciate your great work.
When I test on my own depth data, I can't obtain the center coordinates and depth values of the hand bbox in advance. Therefore, when I try to remove `- center[index][0][2]` during training, the loss does not converge. Can you help me adjust the parameters so that the network converges? Thanks!
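One likely reason the loss stops converging when the `- center[index][0][2]` subtraction is removed: the raw joint depths are dominated by the camera-to-hand distance, which varies by hundreds of millimeters between frames, so the regression targets lose their zero-centered, small-variance distribution. A synthetic illustration (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-frame hand distance to the camera, in mm (hypothetical range).
center_z = rng.uniform(400, 900, size=(1000, 1))
# 14 joint depths scattered ~30 mm around each frame's hand center.
joints_z = center_z + rng.normal(0.0, 30.0, size=(1000, 14))

raw_spread = joints_z.std()                    # dominated by camera distance
centered_spread = (joints_z - center_z).std()  # only local hand structure left
```

Subtracting some per-sample center (even an estimated one, as discussed above) and then recomputing the mean/std keeps the targets in a narrow range that regression losses handle well.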