
[Bug] lvm-llama-vision and lvm-llama-vision-guard failed processing requests #1137

Open · 2 of 6 tasks
lianhao opened this issue Jan 13, 2025 · 3 comments
Labels: bug (Something isn't working)

lianhao (Collaborator) commented Jan 13, 2025

Priority

Undecided

OS type

Ubuntu

Hardware type

Xeon-GNR

Installation method

  • Pull docker images from hub.docker.com
  • Build docker images from source

Deploy method

  • Docker compose
  • Docker
  • Kubernetes
  • Helm

Running nodes

Single Node

What's the version?

git commit 1cc4d21

Description

Following the lvm-llama-vision README, testing the service lvm-llama-vision or lvm-llama-vision-guard fails, with the docker logs as listed below.

Reproduce steps

Follow the lvm-llama-vision README.

Raw log

/home/user/.local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:132: UserWarning: Field "model_name_or_path" in Audio2TextDoc has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
[2025-01-13 12:36:42,740] [    INFO] - Base service - CORS is enabled.
[2025-01-13 12:36:42,741] [    INFO] - Base service - Setting up HTTP server
[2025-01-13 12:36:42,741] [    INFO] - Base service - Uvicorn server setup on port 9399
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9399 (Press CTRL+C to quit)
[2025-01-13 12:36:42,752] [    INFO] - Base service - HTTP server setup successful
INFO:     192.168.103.224:58408 - "POST /v1/lvm HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
  File "/home/user/.local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/.local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/home/user/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send) 
... ...
... ...
  File "/home/user/.local/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
  File "/home/user/.local/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/user/comps/lvms/src/integrations/dependency/llama-vision/lvm.py", line 74, in lvm
    initialize()
  File "/home/user/comps/lvms/src/integrations/dependency/llama-vision/lvm.py", line 40, in initialize
    model = AutoModelForVision2Seq.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3372, in from_pretrained
    raise ImportError(
ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`
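The traceback indicates the container is missing the `accelerate` package, which `transformers` requires whenever `from_pretrained` is called with `device_map` or `low_cpu_mem_usage=True`. As a hypothetical diagnostic sketch (not code from this repo), the condition can be checked at startup before loading the model:

```python
import importlib.util


def has_accelerate() -> bool:
    """Return True if the `accelerate` package is importable in this environment."""
    return importlib.util.find_spec("accelerate") is not None


if not has_accelerate():
    # Either install the dependency inside the image, or avoid the code path
    # that needs it by dropping `device_map` / `low_cpu_mem_usage=True` from
    # the from_pretrained() call.
    print("accelerate is missing: pip install 'accelerate>=0.26.0'")
```

If the dependency is expected to be present, the likely fix is adding `accelerate>=0.26.0` to the image's requirements rather than changing the loading code.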

Attachments

No response

lianhao added the bug (Something isn't working) label on Jan 13, 2025

lianhao (Collaborator, Author) commented Jan 13, 2025

@lvliang-intel please take a look when you have bandwidth, thanks!

xiguiw (Collaborator) commented Jan 14, 2025

@lianhao
It seems some package is not installed. Could you share more details? How did you set up the environment, and in particular which requirements.txt file did you install from?

    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3372, in from_pretrained
    raise ImportError(

lianhao (Collaborator, Author) commented Jan 14, 2025

@lianhao It seems some package is not installed. Could you share more details? How did you set up the environment, and in particular which requirements.txt file did you install from?

I'm following the lvm-llama-vision README to build the docker image, then run and test the service.
