Overview
New open-weights models with improved capabilities arrive almost every month, and choosing which one to add to the network is a daunting task given how many there are. That is why we are especially excited to expand the capabilities of the Livepeer AI Network by adding a new video model by Lightricks, with its impressive claim that it produces 24 FPS videos at 768x512 resolution faster than they can be watched. It will be intriguing to see how developers use this model on the Livepeer AI Network to improve existing media-generation flows and open up new creative possibilities. 🏅
We are seeking support from the community and bounty hunters to implement this model within the Livepeer AI Network, integrating it with the existing image-to-video pipeline as well as adding a new text-to-video pipeline. The weights for the model are available at Lightricks/LTX-Video. Let the innovation unfold and creativity soar! 🚀
Required Skillset
Basic understanding of Go programming and Hugging Face models is advantageous.
Bounty Requirements
Implementation: Create a functional /text-to-video route and pipeline within the AI-worker repository. This new pipeline should be accessible through Docker on port ____. Also, develop the necessary code within the go-livepeer repository to integrate access to the text-to-video pipeline from the ai-worker component, including the payment logic and job routing needed for seamless operation within the network. Regarding image-to-video: since that pipeline is already functional, your task there is to add support for the LTX-Video model. (A hedged sketch of the model invocation follows the disclaimer below.)
[DISCLAIMER]
There is already a significant development effort by a community dev on text-to-video, so it will be beneficial to collaborate on the task to avoid duplicating work. See PR #187 and PR #3161.
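To give a feel for the model side of the work, here is a minimal sketch of invoking LTX-Video through Hugging Face diffusers, assuming a diffusers release that ships LTXPipeline. The real pipeline must be wired into the ai-worker's existing pipeline classes and schemas rather than used standalone, and the prompt and frame count below are purely illustrative:

```python
# Minimal, illustrative LTX-Video invocation via diffusers (assumes a
# diffusers version that includes LTXPipeline). The actual ai-worker
# pipeline should follow the repository's existing pipeline structure.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

frames = pipe(
    prompt="A red panda wandering through a misty bamboo forest",  # illustrative
    width=768,        # resolution from the model's claim above
    height=512,
    num_frames=121,   # ~5 seconds at 24 FPS
    num_inference_steps=50,
).frames[0]

export_to_video(frames, "output.mp4", fps=24)
```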
Functionality: The pipeline must follow the existing norms for handling requests, such as accepting prompts and images according to the model's requirements and returning the result. It should also include the necessary post-processing steps for handling video on the go-livepeer side, ensuring that users can submit AI job requests to the network in a manner consistent with other AI Network features.
Example request: curl -X POST "https://your-gateway-url/text-to-video" -F pipeline="text-to-video" -F model_id="" -F prompt=""
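For convenience, here is a rough Python equivalent of the curl example above. The gateway URL is a placeholder, the model_id and prompt values are illustrative, and the response shape is assumed to follow the network's other video pipelines rather than being confirmed here:

```python
# Illustrative Python equivalent of the curl example. The gateway URL is a
# placeholder and the response format is an assumption; consult the Livepeer
# documentation for the authoritative request/response schema.
import requests

resp = requests.post(
    "https://your-gateway-url/text-to-video",
    data={
        "pipeline": "text-to-video",
        "model_id": "Lightricks/LTX-Video",  # illustrative
        "prompt": "A red panda wandering through a misty bamboo forest",
    },
    timeout=600,  # video generation can take minutes
)
resp.raise_for_status()
print(resp.json())
```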
Implementation Tips
To get started with the initial pipeline structure, refer to the Hugging Face space. You can also explore the following pull requests to see how other video and image handling pipelines were implemented:
In some cases, you might encounter dependency conflicts and be unable to integrate the new pipeline directly into the regular AI Runner. If this occurs, you can follow the approach outlined in the SAM2 PR to create a custom container for the pipeline. This approach uses the regular AI Runner image as the base while keeping the base container lean.
To streamline development, keep these best practices in mind:
Use Developer Documentation: Leverage the developer documentation for the worker and runner, which provides tips for mocking pipelines and direct debugging that can streamline development. Similarly, the developer documentation for installing go-livepeer and the general Livepeer documentation, with example usage and setup instructions (including automatic scripts for orchestrators and gateways), offer valuable insights that can expedite your work.
Update OpenAPI Specification: Execute the runner/gen_openapi.py script to generate an updated OpenAPI specification.
Generate Go-Livepeer Bindings: In the main repository directory, execute the make command to generate the necessary bindings, ensuring compatibility with the go-livepeer repository.
Build Binaries: Run the make command in the main repository folder to generate Livepeer binaries. This will allow you to test your implementation and ensure it integrates smoothly.
Create Docker Images: Build Docker images of Livepeer and test them using appropriate tools and settings to identify any edge cases or bugs. This step is crucial for ensuring robustness and reliability in your implementation.
How to Apply
Express Interest: Comment on this issue with a brief explanation of your experience and suitability for this task.
Await Review: Our team will review the applications and select a qualified candidate.
Get Assigned: If selected, the GitHub issue will be assigned to you.
Start Working: Begin the task! For questions or support, comment on the issue or join discussions in the #developer-lounge channel on our Discord server.
Submit Your Work: Create a pull request in the relevant repository and request a review.
Notify Us: Comment on this GitHub issue once your pull request is ready for review.
Receive Your Bounty: Upon pull request approval, we will arrange the bounty payment.
Earn Recognition: Your contribution will be highlighted in our project’s changelog.
We look forward to your interest and contributions to this exciting project! 💛
Warning
Please ensure the issue is assigned to you before starting work. To avoid duplication of effort, submissions for unassigned issues will not be accepted.