
Transform from v2d format into video_transcript format and save in video_transcript/ directory. #12

Open
kdu4108 opened this issue Jul 4, 2024 · 0 comments


Goal: given v2d format of

 ├── 00000.tar
 |     ├── 00000.mp4
 |     ├── 00000.txt
 |     ├── 00000.json
 |     ├── 00001.mp4
 |     ├── 00001.txt
 |     ├── 00001.json
 |     ├── ...
 |     ├── 10000.mp4
 |     ├── 10000.txt
 |     └── 10000.json
 ├── 00001.tar
 |     ├── 10001.mp4
 |     ├── 10001.txt
 |     ├── 10001.json
 |     └── ...
 ...

produce a video_transcript/ modality data folder of the following format:

root/video_transcript/shard-00000.tar
 |     ├── 00000.jsonl # this corresponds to one video. each line within it corresponds to one subsequence of frames.
 |     ├── 00001.jsonl
 |     └── ...

Each jsonl should contain one JSON object per line, something like

{"transcript": "here's a transcript", "start_frame_index": 0, "end_frame_index": 5}
{"transcript": "here's another transcript", "start_frame_index": 10, "end_frame_index": 13}

Note that the txt/json files in the v2d format might not correspond exactly to the representation we want here (e.g., we might need some logic to derive start/end frame indices from the timestamps).
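One possible timestamp-to-frame-index mapping, assuming frames are sampled uniformly at a known fps (the helper name and the rounding convention are illustrative design choices, not decided in this issue):

```python
import math

def timestamps_to_frame_indices(start_s, end_s, fps):
    """Map a caption's [start_s, end_s] interval (seconds) to inclusive
    frame indices, assuming uniform sampling at `fps` frames per second."""
    start_frame = math.floor(start_s * fps)
    # End index covers the last frame that starts strictly before end_s,
    # clamped so a zero-length interval still yields one frame.
    end_frame = max(start_frame, math.ceil(end_s * fps) - 1)
    return start_frame, end_frame
```

For example, a caption spanning 0.0–2.0 s of a 3 fps clip maps to frames 0 through 5.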

We may also want to run whisper to get improved transcripts.

Child issue of #3.
