
Commit

Update index.html
ZrrSkywalker authored Aug 5, 2024
1 parent bd63600 commit 398b882
Showing 1 changed file with 39 additions and 11 deletions.
2024-08-05-llava-onevision/index.html (50 changes: 39 additions, 11 deletions)
@@ -238,19 +238,47 @@ <h2 class="title is-3">Introduction</h2>
</div>
</section>

<h2 id="acknowledgement">Related Blogs</h2>

<ul>
<li><a href="https://llava-vl.github.io/blog/2024-01-30-llava-next/">LLaVA-NeXT: Improved reasoning, OCR, and world knowledge</a></li>
<section class="section">
<div class="container" style="margin-bottom: 2vh;">
<!-- Related Blogs. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Related Blogs</h2>
<div class="content has-text-justified">
<ul>
<li><a href="https://llava-vl.github.io/blog/2024-01-30-llava-next/">LLaVA-NeXT: Improved reasoning, OCR, and world knowledge</a></li>

<li><a href="https://llava-vl.github.io/blog/2024-04-30-llava-next-video/">LLaVA-NeXT: A Strong Zero-shot Video Understanding Model</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/">LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-05-25-llava-next-ablations/">LLaVA-NeXT: What Else Influences Visual Instruction Tuning Beyond Data?</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-06-16-llava-next-interleave/">LLaVA-NeXT: Tackling Multi-image, Video, and 3D in Large Multimodal Models</a></li>
<li><a href="https://lmms-lab.github.io/lmms-eval-blog/lmms-eval-0.1/">Accelerating the Development of Large Multimodal Models with LMMs-Eval</a></li>
</ul>
</div>
</div>
</div>
<!--/ Related Blogs. -->
</div>
</section>

<li><a href="https://llava-vl.github.io/blog/2024-04-30-llava-next-video/">LLaVA-NeXT: A Strong Zero-shot Video Understanding Model</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/">LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-05-25-llava-next-ablations/">LLaVA-NeXT: What Else Influences Visual Instruction Tuning Beyond Data?</a></li>
<li><a href="https://llava-vl.github.io/blog/2024-06-16-llava-next-interleave/">LLaVA-NeXT: Tackling Multi-image, Video, and 3D in Large Multimodal Models</a></li>
<li><a href="https://lmms-lab.github.io/lmms-eval-blog/lmms-eval-0.1/">Accelerating the Development of Large Multimodal Models with LMMs-Eval</a></li>
</ul>

<h2 id="citation">Citation</h2>

<!-- @PAN TODO: bibtex -->
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title is-3 has-text-centered">Citation</h2>
<pre><code>@misc{li2024llava-onevision,
author = {Li, Bo and Zhang, Yuanhan and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Hao and Zhang, Kaichen and Li, Yanwei and Liu, Ziwei and Li, Chunyuan},
title = {LLaVA-OneVision: Visual Task Transfer Made Easy},
url = {https://llava-vl.github.io/blog/2024-08-05-llava-onevision/},
month = {August},
year = {2024}
}</code></pre>
</div>
</section>

<!-- <h2 id="citation">Citation</h2>
<div class="language-bibtex highlighter-rouge">
<div class="highlight"><pre class="highlight"><code>
@@ -263,7 +291,7 @@ <h2 id="citation">Citation</h2>
<span class="na">year</span><span class="p">=</span><span class="s">{2024}</span>
<span class="p">}</span>
</code></pre></div>
</div>
</div> -->

</div>
<a class="u-url" href="/blog/2024-08-05-llava-onevision/" hidden></a>
