
[Frontend] support image embeds #13955

Draft · chaunceyjiang wants to merge 1 commit into main from image_embeds
Conversation

@chaunceyjiang (Contributor) commented Feb 27, 2025

Fix #13540

prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {
            "type": "image_embeds",
            "image_embeds": {
                "image_embeds": "{base64_image_embedding}",
                "image_sizes": ...,                           # required by openbmb/MiniCPM-V-2_6
                "image_grid_thw": "{base64_image_grid_thw}",  # required by Qwen/Qwen2-VL-2B-Instruct
            },
        }],
    },
]


or

prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {
            "type": "image_embeds",
            "image_embeds": "{base64_image_embedding}",
        }],
    },
]
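For illustration, here is a minimal client-side sketch of how the "{base64_image_embedding}" placeholder above could be produced, assuming the embedding tensor is serialized with torch.save and then base64-encoded; the tensor shapes and the encode_tensor helper are assumptions made for this example, not part of the PR.

import base64
import io

import torch

def encode_tensor(t: torch.Tensor) -> str:
    """Serialize a tensor with torch.save and return it as a base64 string."""
    buf = io.BytesIO()
    torch.save(t, buf)
    return base64.b64encode(buf.getvalue()).decode("utf-8")

# Embeddings precomputed by the model's vision encoder (shapes are model-dependent).
image_embeds = torch.randn(256, 3584)
image_grid_thw = torch.tensor([[1, 16, 16]])  # grid info needed by Qwen2-VL models

base64_image_embedding = encode_tensor(image_embeds)
base64_image_grid_thw = encode_tensor(image_grid_thw)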


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI will run, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@DarkLight1337 (Member)

After merging #14017, can you update the Multimodal Inputs documentation page with an example of how to pass embedding inputs in online inference? Thanks

@DarkLight1337 (Member) commented Feb 28, 2025

I think we should not use a data URL to pass the image embeddings. You can directly pass the binary array data to be decoded on the server side (similar to the format of input_audio rather than audio_url).
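For context, a sketch of the analogy: the OpenAI-compatible input_audio content part already carries the raw base64 payload in a plain field instead of a data: URL. The image_embeds field names below are illustrative only and do not reflect the final format of this PR.

# Existing OpenAI-compatible input_audio part: raw base64 payload in a field,
# decoded on the server side.
audio_part = {
    "type": "input_audio",
    "input_audio": {"data": "<base64-encoded wav bytes>", "format": "wav"},
}

# Analogous image-embeds part (field names are illustrative only):
image_embeds_part = {
    "type": "image_embeds",
    "image_embeds": "<base64-encoded tensor bytes>",
}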

@chaunceyjiang chaunceyjiang force-pushed the image_embeds branch 5 times, most recently from 31150c8 to 088a85f on February 28, 2025 at 14:17
Signed-off-by: chaunceyjiang <[email protected]>
embeds["image_embeds"] = embedding # decoded image data
embeds |= self._parse_image_embeds_params(image_embeds)

placeholder = self._tracker.add("image", embeds)
@chaunceyjiang (Contributor, Author) commented Feb 28, 2025
@DarkLight1337
I'm a bit confused about this part.

After embeds is added to _items_by_modality, it will be processed into:

multi_modal_data = {
    "image": [{
        "image_embeds": image_embeds,
        # image_grid_thw is needed to calculate positional encoding.
        "image_grid_thw": torch.load(...),  # torch.Tensor of shape (1, 3)
    }]  # <<< this is a list
}

https://docs.vllm.ai/en/latest/serving/multimodal_inputs.html#embedding-inputs

multi_modal_data = {
    "image": {
        "image_embeds": image_embeds,
        # image_grid_thw is needed to calculate positional encoding.
        "image_grid_thw": torch.load(...),  # torch.Tensor of shape (1, 3)
    }  # <<< this is a dict
}

I believe I should convert the image_embeds passed by the user into the format mentioned above before passing it to the vLLM engine.

@DarkLight1337 (Member)

The data in the tracker is split per multimodal item. You should perform an extra step when combining the inputs to convert from a list of dicts to a dict of lists.
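A minimal sketch of that extra combining step, assuming every per-item dict carries the same keys; merge_items is a hypothetical helper name, not code from this PR.

from collections import defaultdict
from typing import Any

def merge_items(items: list[dict[str, Any]]) -> dict[str, list[Any]]:
    """Convert a list of per-item dicts into a single dict of lists.

    e.g. [{"image_embeds": t0, "image_grid_thw": g0},
          {"image_embeds": t1, "image_grid_thw": g1}]
    ->   {"image_embeds": [t0, t1], "image_grid_thw": [g0, g1]}
    """
    merged: dict[str, list[Any]] = defaultdict(list)
    for item in items:
        for key, value in item.items():
            merged[key].append(value)
    return dict(merged)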

Successfully merging this pull request may close these issues:
[Feature]: support image_embeds in openai api as well (#13540)