
End-to-End Implementation of Video Model By Lightricks to Pipelines in Livepeer AI Network. #14

JJassonn69 opened this issue Nov 23, 2024 · 0 comments
Overview

New open-weights models with improved capabilities arrive almost every month, and choosing which one to add to the network is a difficult task precisely because there are so many. We are therefore especially excited to expand the capabilities of the Livepeer AI Network by adding the new video model by Lightricks, which claims to produce 24 FPS videos at 768x512 resolution faster than they can be watched. It will be intriguing to see how developers use this model on the Livepeer AI Network to improve existing media-generation flows as well as open up new creative possibilities. 🏅

We are seeking support from the community and bounty hunters to implement this model within the Livepeer AI Network, integrating it with the existing image-to-video pipeline as well as adding a new text-to-video pipeline. The weights for the model are available at Lightricks/LTX-Video. Let the innovation unfold and creativity soar! 🚀

Required Skillset

Bounty Requirements

  1. Implementation: Create a functional /text-to-video route and pipeline within the AI-worker repository. This new pipeline should be accessible through Docker on port ____. Also, develop the necessary code within the go-livepeer repository to integrate access to the text-to-video pipeline from the ai-worker component, including the payment logic and job routing needed for seamless operation within the network. As for image-to-video, that pipeline is already functioning, so your task there is to add support for the LTX-Video model.

[DISCLAIMER]
There is already a significant development effort by a dev on text-to-video, so it will be beneficial to collaborate on the task to avoid duplicating work. See PR #187 and PR #3161.

  2. Functionality: The pipeline must follow the existing conventions for handling requests, such as accepting prompts and images according to the model's requirements and returning the result. It should also include the necessary post-processing steps for handling video on the go-livepeer side, ensuring that users can submit AI job requests to the network in a manner consistent with other AI-Network features.

Example request:
curl -X POST "https://your-gateway-url/text-to-video" -F pipeline="text-to-video" -F model_id="" -F prompt=""

curl -X POST "https://your-gateway-url/image-to-video" -F pipeline="image-to-video" -F model_id="" -F prompt="" -F [email protected]
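The request shapes above can be sketched on the worker side. Below is a minimal, hypothetical Python sketch of the parameter validation a /text-to-video handler might perform before dispatching a job; the function and field names (`handle_text_to_video`, `TextToVideoParams`) are illustrative assumptions, not the actual ai-worker API, and the real handler would invoke the loaded LTX-Video model rather than echo the job description.

```python
# Hypothetical sketch: validating the multipart form fields shown in the
# curl examples and shaping a job for the runner. Names are illustrative,
# not the real ai-worker interface.
from dataclasses import dataclass


@dataclass
class TextToVideoParams:
    model_id: str
    prompt: str
    width: int = 768   # LTX-Video's advertised output resolution is 768x512
    height: int = 512
    fps: int = 24      # the model targets 24 FPS video


def handle_text_to_video(form: dict) -> dict:
    """Validate required form fields and build a job description."""
    missing = [k for k in ("model_id", "prompt") if not form.get(k)]
    if missing:
        # Mirrors the 400-style error responses of existing pipelines.
        return {"error": f"missing required field(s): {', '.join(missing)}"}
    params = TextToVideoParams(model_id=form["model_id"], prompt=form["prompt"])
    # The real pipeline would run inference here; we only echo the job.
    return {"pipeline": "text-to-video", "params": vars(params)}
```

In the actual implementation these parameters would come from the FastAPI form definition and the response would carry the generated video, but the validation-then-dispatch shape is consistent with how existing pipelines handle requests.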

Scope Exclusions

  • None

Implementation Tips

To get started with the initial pipeline structure, refer to the HuggingFace space. You can also explore the following pull requests to see how other video- and image-handling pipelines were implemented:

In some cases, you might encounter dependency conflicts and be unable to integrate the new pipeline directly into the regular AI Runner. If this occurs, you can follow the approach outlined in the SAM2 PR to create a custom container for the pipeline. This approach uses the regular AI Runner as the base while keeping the base container lean.

To streamline development, keep these best practices in mind:

  • Use Developer Documentation: Leverage the developer documentation for the worker and runner, which provides tips for mocking pipelines and direct debugging that can streamline development. Similarly, the developer documentation for installing go-livepeer and the general Livepeer documentation offer example usage and setup instructions, including automatic scripts for orchestrators and gateways, that can expedite your work.
  • Update OpenAPI Specification: Execute the runner/gen_openapi.py script to generate an updated OpenAPI specification.
  • Generate Go-Livepeer Bindings: In the main repository directory, execute the make command to generate the necessary bindings, ensuring compatibility with the go-livepeer repository.
  • Build Binaries: Run the make command in the main repository folder to generate Livepeer binaries. This will allow you to test your implementation and ensure it integrates smoothly.
  • Create Docker Images: Build Docker images of Livepeer and test them using appropriate tools and settings to identify any edge cases or bugs. This step is crucial for ensuring robustness and reliability in your implementation.
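As an illustration of the "Update OpenAPI Specification" step, here is a hand-written, assumed fragment of what the regenerated spec might contain for the new route; the real output of runner/gen_openapi.py is derived from the FastAPI app and will differ in detail.

```python
# Illustrative sketch only: an assumed OpenAPI path fragment for the new
# /text-to-video route. The actual spec is generated by gen_openapi.py.
import json

text_to_video_path = {
    "/text-to-video": {
        "post": {
            "summary": "Text To Video",
            "requestBody": {
                "content": {
                    "multipart/form-data": {
                        "schema": {
                            "type": "object",
                            "required": ["prompt"],
                            "properties": {
                                "model_id": {"type": "string"},
                                "prompt": {"type": "string"},
                            },
                        }
                    }
                }
            },
            "responses": {"200": {"description": "Generated video"}},
        }
    }
}

spec_fragment = json.dumps(text_to_video_path, indent=2)
print(spec_fragment)
```

Running `make` in go-livepeer afterwards regenerates the Go bindings from this spec, which is why keeping the two in sync matters.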

How to Apply

  1. Express Interest: Comment on this issue with a brief explanation of your experience and suitability for this task.
  2. Await Review: Our team will review the applications and select a qualified candidate.
  3. Get Assigned: If selected, the GitHub issue will be assigned to you.
  4. Start Working: Begin the task! For questions or support, comment on the issue or join discussions in the #developer-lounge channel on our Discord server.
  5. Submit Your Work: Create a pull request in the relevant repository and request a review.
  6. Notify Us: Comment on this GitHub issue once your pull request is ready for review.
  7. Receive Your Bounty: Upon pull request approval, we will arrange the bounty payment.
  8. Earn Recognition: Your contribution will be highlighted in our project’s changelog.

We look forward to your interest and contributions to this exciting project! 💛

Warning

Please ensure the issue is assigned to you before starting work. To avoid duplication of efforts, unassigned issue submissions will not be accepted.
