Thanks for your great work~
I tried running fast-gauss and measured the total time spent in the render function with the sample code below. Compared with the original diff-gaussian-rasterization, the overall time is actually a bit longer on my pre-trained 3DGS model.
Does that make sense?
I'm running on Ubuntu 20.04 with an NVIDIA RTX 4090 (EasyVolcap is not installed).
import time
from tqdm import tqdm

elapsed_time = 0.0  # accumulated time spent inside render() across all frames

for outer_idx in range(outer_loop_count):
    for idx, view in enumerate(tqdm(views, desc="Rendering progress")):
        start_time = time.time()
        rendering = render(view, gaussians, pipeline, background)["render"]
        end_time = time.time()
        # gt = view.original_image[0:3, :, :]
        # torchvision.utils.save_image(rendering, os.path.join(render_path, '{0:05d}'.format(idx) + ".png"))
        # torchvision.utils.save_image(gt, os.path.join(gts_path, '{0:05d}'.format(idx) + ".png"))
        global frame_count  # frame_count is a counter initialized elsewhere
        frame_count += 1
        elapsed_time += end_time - start_time
        fps = calculate_fps(end_time - start_time, frame_count)  # FPS helper defined elsewhere
        if frame_count % 30 == 0:
            print(f"Current FPS: {fps:.2f}")

print(f"Final after all frames: {elapsed_time:.2f} seconds | Total Frames: {frame_count}")
My question is, how can I achieve the 35x faster rendering speed?
Thanks for your great work
Hi, thanks for using my code!
As mentioned in the readme, this implementation benefits more in high-resolution cases (many more pixels than Gaussians) and can underperform quite a bit in offline rendering compared to online rendering. If possible, you could try setting up a GUI (or maybe just use the EasyVolcap GUI).
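For reference, a minimal sketch of the drop-in swap the readme describes (the import names below are as I recall them from the readme; the rest of the stock 3DGS render() code, including how raster_settings is built, should stay unchanged):

# Sketch: only the import changes; everything else mirrors the stock 3DGS render().
# from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
from fast_gauss import GaussianRasterizationSettings, GaussianRasterizer

rasterizer = GaussianRasterizer(raster_settings=raster_settings)  # raster_settings built exactly as before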
For offline rendering, one way to speed things up is to slightly mitigate the GL-to-CUDA copy overhead by switching the output buffer to uint8 (which I recently added support for in the repo, just change the initialization parameter!).
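For illustration only, a sketch of what that switch might look like; the keyword name here is a placeholder I made up, the actual initialization parameter is the one documented in the fast-gauss readme:

import torch
from fast_gauss import GaussianRasterizer

# "tex_dtype" is a hypothetical placeholder, not a confirmed parameter name;
# check the fast-gauss readme for the real uint8-output initialization argument.
rasterizer = GaussianRasterizer(raster_settings=raster_settings, tex_dtype=torch.uint8)
# The rendered image then comes back as uint8 instead of float32, which shrinks
# the GL-to-CUDA copy roughly 4x.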