sensor_msgs/PointCloud2 grows heap by 32MB every message. #149
In general during deserialization, no. It does happen when deserializing uint8 arrays unless you are using on-the-fly messages (#117). As I mentioned in #118 just now, the easiest way around this is to allow users to do custom deserialization from the buffer. A longer-term solution could be another deserialization option that wraps DataView.
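For illustration, here is a rough sketch of what a DataView-based read could look like for the PointCloud2 case, pulling point fields straight out of the received buffer instead of copying it. The field offsets and the caller-owned output array are assumptions for this example, not part of rosnodejs:

```ts
// Sketch only, not rosnodejs API: read x/y/z float32 fields for every point
// directly out of the received buffer via DataView, writing into a caller-owned
// Float32Array so no per-message allocation is needed.
// Assumes x, y, z sit at offsets 0, 4, 8 of each point (verify against msg.fields).
function readPointsInto(data: Buffer, pointStep: number, count: number, out: Float32Array): void {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength)
  for (let i = 0; i < count; i++) {
    const base = i * pointStep
    out[i * 3 + 0] = view.getFloat32(base + 0, true) // x, little-endian
    out[i * 3 + 1] = view.getFloat32(base + 4, true) // y
    out[i * 3 + 2] = view.getFloat32(base + 8, true) // z
  }
}
```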
@trusktr, I'm trying to understand: you are saying that the browser does a major GC, but rosnodejs is a server-side package only. Are we really talking about the same thing here? Perhaps you can still avoid the GC if you change the way the client receives the data from the server?
Yeah, it is Electron (Node.js with an added Chrome Blink renderer). In Electron we get devtools out of the box. So I am running in the same Node.js JavaScript context but with a UI added to the mix. Perhaps I shouldn't call it a "browser". :)
I happen to have the `onTheFly` option enabled already:

```ts
const rosnodejs = (await import('rosnodejs')).default
const node: any = await rosnodejs.initNode('point_cloud_visualizer', {onTheFly: true})
const {PointCloud2} = rosnodejs.require('sensor_msgs').msg
```

Let me see what happens if I remove that... Side note, the …
That's waaaaaaaaaay better with `onTheFly` removed. Can you clarify when exactly I would want to use `onTheFly: true`?
I see the …

EDIT: I tested it like this:

```ts
let _previousBuff: ArrayBuffer

const callback = async (msg: {data: Buffer; point_step: number}) => {
  console.log(msg)
  console.log(msg.data.buffer === _previousBuff)
  _previousBuff = msg.data.buffer
  // ...
}
```

And the refs are different each time.
We aren't re-using some common buffer for each message. Message chunks coming off the wire are stitched together into a single buffer here. That buffer is later deserialized when the message queue is drained. During deserialization in the pre-generated message case, uint8 array fields return a new buffer referencing the stitched-together memory (done here).
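For anyone following along, this is roughly the mechanism described above in plain Node.js terms (an illustration, not the rosnodejs source): the field's Buffer is a new object, but it points at the same underlying memory as the stitched-together message buffer rather than copying it.

```ts
// Illustration of a zero-copy view, not rosnodejs code: the subarray is a new
// Buffer object, but it shares memory with `stitched` rather than copying it.
const stitched = Buffer.alloc(1024)        // stands in for the stitched-together message bytes
const fieldOffset = 16                     // hypothetical offset of a uint8[] field
const fieldLength = 512                    // hypothetical length of that field
const fieldView = stitched.subarray(fieldOffset, fieldOffset + fieldLength)

console.log(fieldView.buffer === stitched.buffer) // true: same underlying ArrayBuffer
fieldView[0] = 42
console.log(stitched[fieldOffset])                 // 42: writes show through
```

This is also consistent with the test above: each message gets its own freshly stitched buffer, so `msg.data.buffer` differs from message to message even though no extra copy is made within a single message.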
It may be interesting to allow the user to specify an expected message length for a certain field of a message, so that a common buffer is re-used starting at that length. If the message chunks don't fit, it would create a new buffer. For example, if I have a message … Not sure what a clean API would be, but it would definitely be even faster; I imagine perhaps 400% faster than currently, based on having done something similar in another project (non-ROS related, but similar piping of input stream data into the same buffer on each message).
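To make the idea concrete, here is a hedged sketch of that reuse pattern in user code; the class name, initial size, and grow-on-overflow policy are made up for illustration and are not an existing rosnodejs API:

```ts
// Sketch of a reusable receive buffer: reuse one allocation across messages and
// only allocate a larger one when an incoming message doesn't fit.
class ReusableBuffer {
  private buf: Buffer

  constructor(expectedLength: number) {
    this.buf = Buffer.allocUnsafe(expectedLength)
  }

  // Copy an incoming chunk into the shared buffer, growing it only when needed.
  fill(chunk: Buffer): Buffer {
    if (chunk.length > this.buf.length) {
      this.buf = Buffer.allocUnsafe(chunk.length) // grow once, then keep reusing
    }
    chunk.copy(this.buf, 0, 0, chunk.length)
    return this.buf.subarray(0, chunk.length)
  }
}

// Usage: one allocation up front, then (usually) zero allocations per message.
const shared = new ReusableBuffer(32 * 1024 * 1024)
// const view = shared.fill(msg.data)
```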
Is there a feature of rosnodejs that can use constant memory instead of creating new objects each message?

I am making a realtime web-based visualizer for lidar data, and receiving point clouds over rosnodejs (with the standard `sensor_msgs/PointCloud2` message type) results in 32MB of heap growth per frame (frames are at 10fps right now), and every couple of seconds the browser does a major GC to clean up 400 to 500 MB. It is very inefficient, as you can imagine.

Ideally I would like to have the points for each message written into an existing ArrayBuffer (or even a normal `Array`) so as to avoid allocating new memory for every message that arrives. Does rosnodejs have a way to do this?
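For concreteness, here is a sketch of the kind of pattern being asked about, copying each incoming cloud into one pre-allocated buffer instead of holding on to `msg.data`. The topic name, the size cap, and the import style are assumptions; whether the copy itself can be avoided depends on the deserialization path discussed in this thread.

```ts
// Minimal sketch, assuming a '/points' topic and the standard rosnodejs subscribe API:
// copy each incoming cloud into one pre-allocated buffer instead of keeping msg.data.
import rosnodejs from 'rosnodejs'

const MAX_BYTES = 32 * 1024 * 1024               // assumed upper bound for one cloud
const cloudBytes = Buffer.allocUnsafe(MAX_BYTES) // reused for every message

async function main() {
  const nh = await rosnodejs.initNode('point_cloud_visualizer')
  nh.subscribe('/points', 'sensor_msgs/PointCloud2', (msg: any) => {
    const length = Math.min(msg.data.length, MAX_BYTES)
    msg.data.copy(cloudBytes, 0, 0, length)      // one copy, no per-message allocation
    // parse/upload cloudBytes.subarray(0, length) here, then let msg be collected
  })
}

main()
```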