An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models with linguistic knowledge.
For more information, check out our manuscript:
Dodrio: Exploring Transformer Models with Interactive Visualization. Zijie J. Wang, Robert Turko, and Duen Horng Chau. arXiv preprint 2021. arXiv:2103.14625.
For a live demo, visit: http://poloclub.github.io/dodrio/
Clone or download this repository:
git clone git@github.com:poloclub/dodrio.git
# use degit if you don't want to download commit histories
degit poloclub/dodrio
Install the dependencies:
npm install
Then run Dodrio:
npm run dev
Navigate to localhost:5000. You should see Dodrio running in your browser :)
To see how we trained the Transformer or to customize the visualization with a different model or dataset, visit the ./data-generation/ directory.
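For a rough sense of what that pipeline involves, here is a minimal sketch (not the repository's actual data-generation script) of extracting per-layer attention weights from a Hugging Face transformer; the model name, example sentence, and output file name are illustrative assumptions.

# Sketch: dump attention weights from a BERT-style model for visualization.
# Assumes the transformers and torch packages are installed.
import json
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-uncased"  # assumption: any BERT-style checkpoint works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = "Dodrio visualizes attention weights."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
attentions = [layer[0].tolist() for layer in outputs.attentions]

with open("attention-example.json", "w") as f:  # hypothetical output path
    json.dump({"tokens": tokens, "attentions": attentions}, f)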
Dodrio was created by Jay Wang, Robert Turko, and Polo Chau.
To cite Dodrio, please use the following BibTeX entry:
@inproceedings{wangDodrioExploringTransformer2021,
title = {Dodrio: {{Exploring Transformer Models}} with {{Interactive Visualization}}},
shorttitle = {Dodrio},
booktitle = {Proceedings of the 59th {{Annual Meeting}} of the {{Association}} for {{Computational Linguistics}} and the 11th {{International Joint Conference}} on {{Natural Language Processing}}: {{System Demonstrations}}},
author = {Wang, Zijie J. and Turko, Robert and Chau, Duen Horng},
year = {2021},
pages = {132--141},
publisher = {{Association for Computational Linguistics}},
address = {{Online}},
language = {en}
}
The software is available under the MIT License.
If you have any questions, feel free to open an issue or contact Jay Wang.