HuggingFace - RuntimeError #24

Open
lucellent opened this issue Jan 24, 2025 · 3 comments

@lucellent

When I try to run the inference there, it just shows a red box with "RuntimeError" and nothing else (after loading for a minute).

@yhliu04
Collaborator

yhliu04 commented Jan 24, 2025

@lucellent Since the inference time will exceed the ZeroGPU time limit, you could try duplicating the Space and running it with a paid GPU (e.g. L4, A100, …)
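For context, a ZeroGPU Space only holds the GPU for the window requested by the @spaces.GPU decorator on the handler; when a task runs past that window, the worker is aborted with the "GPU task aborted" error reported below. A minimal sketch of how such a handler is typically declared — the function name, duration value, and body are assumptions for illustration, not this Space's actual code:

import gradio as gr
import spaces

# ZeroGPU attaches the GPU only for the duration requested here (in seconds);
# if the call runs longer, the worker is aborted with "GPU task aborted".
# The function name and duration are placeholders, not the Space's real code.
@spaces.GPU(duration=120)
def enhance_video(video_path):
    # ... the actual video-super-resolution inference would run here ...
    return video_path

demo = gr.Interface(fn=enhance_video, inputs=gr.Video(), outputs=gr.Video())
demo.launch()

Duplicating the Space onto dedicated paid hardware removes that per-task time limit entirely.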

@lucellent
Author

I rented ZeroGPU, but I'm still unable to run it.

ZeroGPU tensors packing: 0.00B [00:00, ?B/s]
ZeroGPU tensors packing: 0.00B [00:00, ?B/s]

* Running on local URL: http://0.0.0.0:7860, with SSR ⚡
Attempted to select a non-interactive or hidden tab.
Attempted to select a non-interactive or hidden tab.

To create a public link, set share=True in launch().
Attempted to select a non-interactive or hidden tab.
Attempted to select a non-interactive or hidden tab.
2025-01-31 12:03:46,326 - video_to_video - INFO - checkpoint_path: ./pretrained_weight

open_clip_pytorch_model.bin: 0%| | 0.00/3.94G [00:00<?, ?B/s]
open_clip_pytorch_model.bin: 1%|▏ | 52.4M/3.94G [00:01<01:15, 51.2MB/s]
open_clip_pytorch_model.bin: 27%|██▋ | 1.08G/3.94G [00:02<00:04, 619MB/s]
open_clip_pytorch_model.bin: 53%|█████▎ | 2.11G/3.94G [00:03<00:02, 803MB/s]
open_clip_pytorch_model.bin: 83%|████████▎ | 3.26G/3.94G [00:04<00:00, 940MB/s]
open_clip_pytorch_model.bin: 100%|█████████▉| 3.94G/3.94G [00:05<00:00, 738MB/s]
2025-01-31 12:04:04,695 - video_to_video - INFO - Build encoder with FrozenOpenCLIPEmbedder
Model not found at ./pretrained_weight, downloading...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 210, in gradio_handler
    raise error("ZeroGPU worker error", "GPU task aborted")
gradio.exceptions.Error: 'GPU task aborted'
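
Worth noting from the log above: the ~4 GB open_clip checkpoint and the missing ./pretrained_weight files are being fetched inside the GPU task, so the downloads already consume part of the ZeroGPU window before inference even starts. One possible mitigation, sketched under assumptions (the repo ids and target directory are guesses, not the Space's actual configuration), is to fetch the weights at startup on CPU, outside any @spaces.GPU-decorated function:

from huggingface_hub import snapshot_download, hf_hub_download

# Runs at import time, before any @spaces.GPU call, so the downloads do not
# count against the GPU time budget. Both repo ids below are placeholders.
snapshot_download(repo_id="your-org/your-pretrained-weights",
                  local_dir="./pretrained_weight")
hf_hub_download(repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
                filename="open_clip_pytorch_model.bin")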

@lucellent
Author

@yhliu04 Any idea whether this is something on my side?
