How to fix the map. Using Oak-D-Pro-W. Very noisy map, not suitable for Nav2 #1228

Open
mizrex opened this issue Oct 25, 2024 · 6 comments

mizrex commented Oct 25, 2024

I realized I posted this issue in the wrong repo, so I'm moving it here.
I've tried playing with various rtabmap parameters (Grid/CellSize, Grid/ClusterRadius, Grid/MinClusterSize, Grid/MaxGroundAngle, Grid/MinGroundHeight, etc.), as suggested in other posts, but nothing seems to resolve it.
I'm including the db file here along with the Rviz screenshots for reference.
You can see how the visual odometry thinks it's going below the ground. I'm not sure where to fix that: is it a camera issue, or something in the TFs?
[Screenshots: map_rtabmap3, map_rtabmap4, map_rtabmap5]

@matlabbe (Member)

Is the IMU published by the camera? If not, there could be a small angle difference between IMU and camera frames.

For the grid, many obstacles are caused by the ceiling:
[Screenshot from 2024-10-26 14-55-20]

Setting Grid/MaxObstacleHeight to 1 meter:
[screenshot]

The disparity estimation also gives points below the ground that are detected as obstacles; we can filter them out by setting a minimum ground height with Grid/MinGroundHeight=-0.15:
[screenshot]

To have a denser occupancy grid, assuming it is a 2D robot, you can disable Grid/3D and enable Grid/RayTracing:
[Screenshot from 2024-10-26 15-01-55]
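
For reference, a minimal sketch (not from the thread) of how these Grid/* settings could be passed to the rtabmap node as string parameters in a ROS 2 launch file; the values simply mirror the suggestions above and would need to be adapted to the actual setup:

```python
# Hypothetical sketch: RTAB-Map parameters are passed as strings.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=[{
                'Grid/MaxObstacleHeight': '1.0',  # ignore points above 1 m (ceiling)
                'Grid/MinGroundHeight': '-0.15',  # reject points detected below the ground
                'Grid/3D': 'False',               # 2D occupancy grid for a 2D robot
                'Grid/RayTracing': 'True',        # fill free space with ray tracing
            }]),
    ])
```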


mizrex commented Oct 28, 2024

@matlabbe, thank you so much for getting back to me and looking into this in such detail.
I tried your suggestions and they worked like a charm. Thanks a lot for resolving this. Yes, the IMU is provided by the camera.
However, I'm now running into a new issue with featureless walls. (I can open a new issue for this, if you like?)
In the map shown below, the red region is supposed to be a hallway, but you can see that it is perceived as open space. It's a featureless hallway. Is there a parameter I can tweak to make it see the walls? Here is the db file accompanying this map, and my launch file is below for reference.
[Screenshot: map_rtabmap_1028_hallwayIssue_2]
```python
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    parameters=[{'frame_id':'oak-d-base-frame',
                 'subscribe_rgbd':True,
                 'subscribe_odom_info':True,
                 'approx_sync':False,
                 'wait_imu_to_init':True,
                 'Grid/RayTracing':'True',
                 'Grid/3D':'False',
                 'Grid/MaxGroundHeight': '-0.15',
                #  'Grid/MaxGroundAngle': '10.0',
                #  "Odom/Strategy": '1',
                 "Grid/MaxObstacleHeight":'2'
                 }]

    remappings=[('imu', '/imu/data')]

    return LaunchDescription([

        # Launch camera driver
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource([os.path.join(
                get_package_share_directory('depthai_examples'), 'launch'),
                '/stereo_inertial_node.launch.py']),
                launch_arguments={'depth_aligned': 'false',
                                  'enableRviz': 'false',
                                  'monoResolution': '400p'}.items(),
        ),

        # Sync right/depth/camera_info together
        Node(   
            package='rtabmap_sync', executable='rgbd_sync', output='screen',
            parameters=parameters,
            remappings=[('rgb/image', '/right/image_rect'),
                        ('rgb/camera_info', '/right/camera_info'),
                        ('depth/image', '/stereo/depth')]),

        # Compute quaternion of the IMU
        Node(
            package='imu_filter_madgwick', executable='imu_filter_madgwick_node', output='screen',
            parameters=[{'use_mag': False, 
                         'world_frame':'enu', 
                         'publish_tf':False}],
            remappings=[('imu/data_raw', '/imu')]),

        # Visual odometry
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=parameters,
            remappings=remappings),

        # VSLAM
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=parameters,
            remappings=remappings,
            arguments=['-d']),

        # Visualization
        Node(
            package='rtabmap_viz', executable='rtabmap_viz', output='screen',
            parameters=parameters,
            remappings=remappings)
    ])
```

Further, I also have an on-board lidar that can provide laser scan or point cloud data, which could improve detection in such featureless hallways. But I don't know how I can add the lidar as a backup, with the Oak-D being the primary sensor.

@matlabbe (Member)

With pure stereo, that is an issue:
[Screenshot from 2024-10-30 18-44-40]

If you can somehow add texture to the wall, that would at least help to see obstacles on it.

What kind of lidar do you have? In your parameters, you could add 'subscribe_scan':True, and in the remappings, add 'scan': 'your/scan/topic'.
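
As a rough sketch based on the launch file posted above, this would roughly amount to the following changes; '/your/scan/topic' is a placeholder for the actual lidar scan topic:

```python
# Sketch of the suggested changes to the earlier launch file.
parameters = [{'frame_id': 'oak-d-base-frame',
               'subscribe_rgbd': True,
               'subscribe_scan': True,          # also subscribe to a 2D laser scan
               # ... other parameters unchanged ...
              }]

remappings = [('imu', '/imu/data'),
              ('scan', '/your/scan/topic')]     # placeholder for the lidar scan topic
```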


mizrex commented Nov 1, 2024

@matlabbe, thank you for the advice. It worked great. For some reason, it didn't work directly with the lidar's point cloud (it's a Helios-32 from Robosense), but when I passed it through the point_cloud_assembler node, it worked fine. I passed /assembled_cloud instead of /robosense_points.
However, I'm running into a new issue where the initial map that comes out is great, but as I move the robot, the map deteriorates badly. I'm attaching the db file and a screen recording of Rviz for your review. I was hoping you might have a better idea of what is causing it, and how I can get the quality of that first map to carry over to all subsequent mapping.


matlabbe commented Nov 2, 2024

You may want to use icp_odometry instead (example here). There is either a lot of drift, or you didn't set the fixed_frame_id of point_cloud_assembler correctly, which would cause point_cloud_assembler to not assemble the point clouds correctly.

I would not use point_cloud_assembler until the mapping is debugged.
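
For illustration, a minimal sketch of what an icp_odometry node could look like in the launch file posted above, in place of rgbd_odometry; the frame, parameter and topic names are assumptions carried over from that launch file and may need adjusting:

```python
# Hypothetical lidar odometry node using the raw lidar cloud directly
# (no point_cloud_assembler), as suggested above.
from launch_ros.actions import Node

icp_odom = Node(
    package='rtabmap_odom', executable='icp_odometry', output='screen',
    parameters=[{'frame_id': 'oak-d-base-frame',
                 'wait_imu_to_init': True}],
    remappings=[('scan_cloud', '/robosense_points'),  # lidar point cloud topic
                ('imu', '/imu/data')])
```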

@borongyuan (Contributor)

I recommend setting Grid/FlatObstacleDetected to false, because the data from stereo depth estimation is noisy. Even after decimation and filtering, it is still easy to generate some small clusters when performing ground segmentation. Otherwise, you would need to carefully tune the cluster-related parameters.
In addition, I wrote a parameter tuning tool to improve the performance of the SGBM pipeline on the OAK. Our new configurations may be released in the future.
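
In the launch file posted earlier, this would presumably just be one more string entry in the parameters dict, for example:

```python
# Assumed addition to the parameters list from the launch file above;
# RTAB-Map parameters are passed as strings.
parameters = [{
    # ... existing parameters ...
    'Grid/FlatObstacleDetected': 'False',  # per the suggestion above for noisy stereo depth
}]
```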
