Comparing changes
base repository: JULIELab/MEmoLon
base: v0.1
head repository: JULIELab/MEmoLon
compare: master
  • 6 commits
  • 136 files changed
  • 3 contributors

Commits on May 1, 2020

  1. Update README.md

    svenbuechel authored May 1, 2020
    5dcbc3d

Commits on May 14, 2020

  1. Update README.md

    svenbuechel authored May 14, 2020
    b2f1668

Commits on May 18, 2020

  1. Major commit

    svenbuechel committed May 18, 2020
    95e12f7
  2. minor

    svenbuechel committed May 18, 2020
    756dd0b

Commits on Jan 4, 2021

  1. adds lexicon download size

    Added the file size to the download link of the lexicons. With typical home-office download speeds, this is a download of several minutes.
    benjamir authored Jan 4, 2021
    0cdb7d9

Commits on May 20, 2021

  1. Update README.md

    svenbuechel authored May 20, 2021
    7f8eb10
Showing with 22,811 additions and 3 deletions.
  1. +10 −0 .gitignore
  2. +160 −3 README.md
  3. +1 −0 activate.src
  4. +1 −0 memolon/analyses/.gitignore
  5. +92 −0 memolon/analyses/baseline_results.csv
  6. +460 −0 memolon/analyses/comparison_against_human_reliability.ipynb
  7. +1 −0 memolon/analyses/comparison_against_human_reliability.json
  8. +456 −0 memolon/analyses/dev_experiment_results.csv
  9. +92 −0 memolon/analyses/generated_lexica.csv
  10. +570 −0 memolon/analyses/gold-evaluation.ipynb
  11. +27 −0 memolon/analyses/gold_evaluation.csv
  12. +27 −0 memolon/analyses/gold_lexica.csv
  13. +4 −0 memolon/analyses/gold_silver_agreement.csv
  14. +1,274 −0 memolon/analyses/gold_vs_silver_evaluation.ipynb
  15. +1,113 −0 memolon/analyses/overview-generated-lexicons.ipynb
  16. +223 −0 memolon/analyses/overview-gold-lexica.ipynb
  17. +2,031 −0 memolon/analyses/silver-evaluation.ipynb
  18. BIN memolon/analyses/silver-line.png
  19. +92 −0 memolon/analyses/silver_evaluation.csv
  20. +79 −0 memolon/analyses/translation_vs_prediction.csv
  21. +541 −0 memolon/analyses/translation_vs_prediction.ipynb
  22. +2 −0 memolon/data/Embeddings/.gitignore
  23. +5 −0 memolon/data/Source/.gitignore
  24. +1,296 −0 memolon/data/Source/dev.txt
  25. +1,032 −0 memolon/data/Source/test.txt
  26. +11,463 −0 memolon/data/Source/train.txt
  27. +2 −0 memolon/data/TargetGold/.gitignore
  28. +6 −0 memolon/data/TargetPred/.gitignore
  29. +2 −0 memolon/data/TargetPred/MTL_all/.gitignore
  30. +2 −0 memolon/data/TargetPred/MTL_grouped/.gitignore
  31. +2 −0 memolon/data/TargetPred/STL/.gitignore
  32. +2 −0 memolon/data/TargetPred/ridge/.gitignore
  33. +1 −0 memolon/data/TranslationTables/af.json
  34. +1 −0 memolon/data/TranslationTables/am.json
  35. +1 −0 memolon/data/TranslationTables/ar.json
  36. +1 −0 memolon/data/TranslationTables/az.json
  37. +1 −0 memolon/data/TranslationTables/be.json
  38. +1 −0 memolon/data/TranslationTables/bg.json
  39. +1 −0 memolon/data/TranslationTables/bn.json
  40. +1 −0 memolon/data/TranslationTables/bs.json
  41. +1 −0 memolon/data/TranslationTables/ca.json
  42. +1 −0 memolon/data/TranslationTables/ceb.json
  43. +1 −0 memolon/data/TranslationTables/co.json
  44. +1 −0 memolon/data/TranslationTables/cs.json
  45. +1 −0 memolon/data/TranslationTables/cy.json
  46. +1 −0 memolon/data/TranslationTables/da.json
  47. +1 −0 memolon/data/TranslationTables/de.json
  48. +1 −0 memolon/data/TranslationTables/el.json
  49. +1 −0 memolon/data/TranslationTables/en.json
  50. +1 −0 memolon/data/TranslationTables/eo.json
  51. +1 −0 memolon/data/TranslationTables/es.json
  52. +1 −0 memolon/data/TranslationTables/et.json
  53. +1 −0 memolon/data/TranslationTables/eu.json
  54. +1 −0 memolon/data/TranslationTables/fa.json
  55. +1 −0 memolon/data/TranslationTables/fi.json
  56. +1 −0 memolon/data/TranslationTables/fr.json
  57. +1 −0 memolon/data/TranslationTables/fy.json
  58. +1 −0 memolon/data/TranslationTables/ga.json
  59. +1 −0 memolon/data/TranslationTables/gd.json
  60. +1 −0 memolon/data/TranslationTables/gl.json
  61. +1 −0 memolon/data/TranslationTables/gu.json
  62. +1 −0 memolon/data/TranslationTables/he.json
  63. +1 −0 memolon/data/TranslationTables/hi.json
  64. +1 −0 memolon/data/TranslationTables/hr.json
  65. +1 −0 memolon/data/TranslationTables/ht.json
  66. +1 −0 memolon/data/TranslationTables/hu.json
  67. +1 −0 memolon/data/TranslationTables/hy.json
  68. +1 −0 memolon/data/TranslationTables/id.json
  69. +1 −0 memolon/data/TranslationTables/is.json
  70. +1 −0 memolon/data/TranslationTables/it.json
  71. +1 −0 memolon/data/TranslationTables/ja.json
  72. +1 −0 memolon/data/TranslationTables/jv.json
  73. +1 −0 memolon/data/TranslationTables/ka.json
  74. +1 −0 memolon/data/TranslationTables/kk.json
  75. +1 −0 memolon/data/TranslationTables/km.json
  76. +1 −0 memolon/data/TranslationTables/kn.json
  77. +1 −0 memolon/data/TranslationTables/ko.json
  78. +1 −0 memolon/data/TranslationTables/ku.json
  79. +1 −0 memolon/data/TranslationTables/ky.json
  80. +1 −0 memolon/data/TranslationTables/la.json
  81. +1 −0 memolon/data/TranslationTables/lb.json
  82. +1 −0 memolon/data/TranslationTables/lt.json
  83. +1 −0 memolon/data/TranslationTables/lv.json
  84. +1 −0 memolon/data/TranslationTables/mg.json
  85. +1 −0 memolon/data/TranslationTables/mk.json
  86. +1 −0 memolon/data/TranslationTables/ml.json
  87. +1 −0 memolon/data/TranslationTables/mn.json
  88. +1 −0 memolon/data/TranslationTables/mr.json
  89. +1 −0 memolon/data/TranslationTables/ms.json
  90. +1 −0 memolon/data/TranslationTables/mt.json
  91. +1 −0 memolon/data/TranslationTables/my.json
  92. +1 −0 memolon/data/TranslationTables/ne.json
  93. +1 −0 memolon/data/TranslationTables/nl.json
  94. +1 −0 memolon/data/TranslationTables/no.json
  95. +1 −0 memolon/data/TranslationTables/pa.json
  96. +1 −0 memolon/data/TranslationTables/pl.json
  97. +1 −0 memolon/data/TranslationTables/ps.json
  98. +1 −0 memolon/data/TranslationTables/pt.json
  99. +1 −0 memolon/data/TranslationTables/ro.json
  100. +1 −0 memolon/data/TranslationTables/ru.json
  101. +1 −0 memolon/data/TranslationTables/sd.json
  102. +1 −0 memolon/data/TranslationTables/si.json
  103. +1 −0 memolon/data/TranslationTables/sk.json
  104. +1 −0 memolon/data/TranslationTables/sl.json
  105. +1 −0 memolon/data/TranslationTables/so.json
  106. +1 −0 memolon/data/TranslationTables/sq.json
  107. +1 −0 memolon/data/TranslationTables/sr.json
  108. +1 −0 memolon/data/TranslationTables/su.json
  109. +1 −0 memolon/data/TranslationTables/sv.json
  110. +1 −0 memolon/data/TranslationTables/sw.json
  111. +1 −0 memolon/data/TranslationTables/ta.json
  112. +1 −0 memolon/data/TranslationTables/te.json
  113. +1 −0 memolon/data/TranslationTables/tg.json
  114. +1 −0 memolon/data/TranslationTables/th.json
  115. +1 −0 memolon/data/TranslationTables/tl.json
  116. +1 −0 memolon/data/TranslationTables/tr.json
  117. +1 −0 memolon/data/TranslationTables/uk.json
  118. +1 −0 memolon/data/TranslationTables/ur.json
  119. +1 −0 memolon/data/TranslationTables/uz.json
  120. +1 −0 memolon/data/TranslationTables/vi.json
  121. +1 −0 memolon/data/TranslationTables/yi.json
  122. +1 −0 memolon/data/TranslationTables/yo.json
  123. +1 −0 memolon/data/TranslationTables/zh.json
  124. +93 −0 memolon/data/languages_overview.tsv
  125. +149 −0 memolon/src/TargetPred_MTLall.py
  126. +167 −0 memolon/src/TargetPred_MTLgrouped.py
  127. +153 −0 memolon/src/TargetPred_STL.py
  128. +55 −0 memolon/src/TargetPred_ridge.py
  129. +74 −0 memolon/src/constants.py
  130. +26 −0 memolon/src/download_embeddings.py
  131. +36 −0 memolon/src/model.py
  132. +9 −0 memolon/src/run_all.sh
  133. +787 −0 memolon/src/utils.py
  134. 0 memolon/tests/__init__.py
  135. +35 −0 memolon/tests/test_utils.py
  136. +68 −0 requirements.txt
10 changes: 10 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,10 @@
*.ipynb_checkpoints*
/env.yaml

*.DS_Store
*/__pycache__
.idea*
*.jpg
*.pyc
*.out
*/.ipynb_checkpoints
163 changes: 160 additions & 3 deletions README.md
@@ -1,5 +1,162 @@
# MEmoLon – The Multilingual Emotion Lexicon

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3779901.svg)](https://doi.org/10.5281/zenodo.3779901)


This is the main repository for our ACL 2020 paper [Learning and Evaluating Emotion Lexicons for 91 Languages](https://www.aclweb.org/anthology/2020.acl-main.112/).

## Overview
Data and code for this research project are distributed across different places. This GitHub repository serves as a landing page linking to the other relevant sites. It also contains the code necessary to re-run our experiments and analyses. Releases of this repository are archived as Zenodo records under [DOI 10.5281/zenodo.3779901](https://doi.org/10.5281/zenodo.3779901). While this repository contains our codebase and experimental results, the generated lexicon is archived in a second Zenodo record under [DOI 10.5281/zenodo.3756606](https://doi.org/10.5281/zenodo.3756606) due to its size.

### Links
* [Zenodo record of this repository](https://doi.org/10.5281/zenodo.3779901)

* [Zenodo record of the lexicon](https://doi.org/10.5281/zenodo.3756606)

* [arXiv version of the paper](https://arxiv.org/abs/2005.05672)

* [ACL Anthology version of the paper](https://www.aclweb.org/anthology/2020.acl-main.112/)




## The Lexicon
We created emotion lexicons for 91 languages, each covering eight emotional variables and comprising over 100k word entries. There are several versions of the lexicons, differing in the choice of the expansion model: a linear regression baseline and three versions of neural network models. The *main* version of our lexicons (the version we refer to in the main experiments of our paper and the one we recommend using) is referred to as **MTL_grouped** (applying multi-task learning within two groups of our target variables). **If you are mainly interested in our lexicons, download [this](https://zenodo.org/record/3756607/files/MTL_grouped.zip?download=1) zip file (2.2GB).** It contains 91 tsv files named `<iso language code>.tsv`. Please refer to the [description of the Zenodo record](https://doi.org/10.5281/zenodo.3756606) for more details.
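For quick use of a downloaded lexicon file, a minimal loader sketch is shown below. It assumes each tsv has a `word` column plus one column per emotional variable (VAD: valence, arousal, dominance; BE5: joy, anger, sadness, fear, disgust); check the Zenodo record for the authoritative column names.

```python
import csv
import io

# Assumed column layout: `word` plus the eight emotional variables.
VARIABLES = ["valence", "arousal", "dominance",
             "joy", "anger", "sadness", "fear", "disgust"]

def load_lexicon(fileobj):
    """Parse a MEmoLon-style tsv into {word: {variable: float}}."""
    reader = csv.DictReader(fileobj, delimiter="\t")
    return {row["word"]: {v: float(row[v]) for v in VARIABLES}
            for row in reader}

# Tiny inline sample standing in for e.g. a downloaded de.tsv;
# the rating values here are made up for illustration.
sample = ("word\t" + "\t".join(VARIABLES) + "\n"
          "Freude\t8.2\t5.5\t6.1\t4.3\t1.2\t1.1\t1.0\t1.1\n")
lexicon = load_lexicon(io.StringIO(sample))
print(lexicon["Freude"]["valence"])  # → 8.2
```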



## The Experimental Results

The analyses and results we present in the paper can be found in `/memolon/analyses` in form of jupyter notebooks and csv / json files. The names of the notebooks follow the section names in the paper.



## The Codebase

If you are interested in the implementation of our methodology, replicating the lexicon creation or re-running our analyses, this section describes how to work with our code.



### Set-Up

Make sure you have `conda` installed on your machine. We ran the code on Debian 9. Necessary steps may differ across operating systems.

Clone this repository, `cd` into the project's root directory, and run the following commands.

```
conda create --name "memolon2020" python=3.8 pip
conda activate memolon2020
pip install -r requirements.txt
source activate.src
```

The last line configures your `PYTHONPATH`.



### Re-Running the Lexicon Generation

Recreating the lexicons from scratch requires the Source lexicon, data splits, and the translation tables for all 91 languages. The data splits (word lists in `train.txt`, `dev.txt`, and `test.txt` in `/memolon/data/Source`) as well as the translation tables (see the contents of `/memolon/data/TranslationTables`) are already included in this repository. So, you only have to download the source lexicon, which consists of two files:

* Get the file [Ratings_Warriner_et_al.csv](https://github.com/JULIELab/XANEW/blob/master/Ratings_Warriner_et_al.csv) (commit b1ed97e from 11 Nov 2019) and place it in `/memolon/data/Source`.
* Get the file [Warriner_BE.tsv](https://github.com/JULIELab/EmoMap/blob/master/coling18/main/lexicon_creation/lexicons/Warriner_BE.tsv) (commit dbfa3b9 from 15 Jun 2018) and place it in ``/memolon/data/Source``.



The Python scripts for creating the lexicons can be found in `/memolon/src`. You can either `cd` there and simply run `run_all.sh` or follow the more detailed instructions below. Please note that the whole process may take several hours. **You do not need a GPU to run our code in a reasonable amount of time.**

* To download the fastText embedding models run `download_embeddings.py` which will download the vec.gz files and place them into `/memolon/data/Embeddings`.
* To train and use our models to create all four different versions of the target lexicons (`TargetPred`) run the following scripts (or just the one you want to use). They will create the lexicons and place them into the respective subfolder of `/memolon/data/TargetPred`:
* `TargetPred_MTLgrouped.py`: Multi-task learning within the two groups (VAD and BE5) but not across both. This is the version we mainly refer to in the paper and **recommend to use**.
* `TargetPred_MTLall.py`: Multi-task learning among all 8 target variables.
* `TargetPred_STL.py`: Single-task learning, treating all 8 variables separately.
* `TargetPred_ridge.py`: The ridge regression baseline.
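The ridge baseline in `TargetPred_ridge.py` learns a linear map from word embedding vectors to emotion ratings. The sketch below illustrates the idea in closed form with toy data; it is not the repository's actual implementation, and the data dimensions are stand-ins (the real setup uses 300-dim fastText vectors and eight target variables).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 "words" with 20-dim embeddings and one emotion variable.
X = rng.normal(size=(100, 20))
true_w = rng.normal(size=20)
y = X @ true_w + 0.01 * rng.normal(size=100)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y, lam=0.1)
pred = X @ w
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))  # near-perfect fit on toy data
```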



### Re-Running the Analyses

As stated above, analyses are organized as jupyter notebooks in the folder `/memolon/analyses`. Please note that running some of the notebooks requires data files from other notebooks. The recommended order of running the notebooks is the following, although other orders are possible as well.

1. `overview-gold-lexica.ipynb`
2. `silver-evaluation.ipynb`
3. `gold-evaluation.ipynb`
4. `comparison_against_human_reliability.ipynb`
5. `translation_vs_prediction.ipynb`
6. `gold_vs_silver_evaluation.ipynb`
7. `overview-generated-lexicons.ipynb`

Running the silver evaluation is quite simple. You can either generate our lexicons from scratch (see above), or, much easier, download our lexicons from the [Zenodo record](https://doi.org/10.5281/zenodo.3756606) (see above). Unzip all four versions of the lexicons and place the tsv files in the respective subfolders of `/memolon/data/TargetPred`.
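The unzip-and-place step above can be scripted. A minimal sketch, assuming the four downloaded zip files sit together in one local folder and each archive contains its tsv files at the top level (the folder paths are hypothetical; adjust them to your setup):

```python
import zipfile
from pathlib import Path

def unpack_lexicons(zip_dir, target_root):
    """Extract each <version>.zip into <target_root>/<version>/.

    With defaults mirroring the repo layout, target_root would be
    /memolon/data/TargetPred. If an archive nests its tsv files in a
    top-level folder, move them up one level afterwards.
    """
    for zf in sorted(Path(zip_dir).glob("*.zip")):
        dest = Path(target_root) / zf.stem
        dest.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(zf) as archive:
            archive.extractall(dest)
        print(f"unpacked {zf.name} -> {dest}")
```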

Running the gold evaluation and related analyses requires you to manually collect all the gold datasets listed in the paper. This is a tedious process because they all have different copyright and access restrictions. Please find more detailed instructions below.

* en1. This is our Source lexicon, see the above section on lexicon generation.
* en2. Either request the **1999** version of the Affective Norms for English Words (ANEW) from the [Center for the Study of Emotion and Attention](https://csea.phhp.ufl.edu/Media.html#bottommedia) at the University of Florida, or copy-paste/parse the data from the technical report *Bradley, M. M., & Lang, P. J. (1999). Affective Norms for English Words (Anew): Stimuli, Instruction Manual and Affective Ratings (C–1). The Center for Research in Psychophysiology, University of Florida.* Format the data as a tsv file with column headers `word`, `valence`, `arousal`, `dominance` and save it under `/memolon/data/TargetGold/ANEW1999.tsv`.
* en3. Get the file `Stevenson(2007)-ANEW_emotional_categories.xls` from [Stevenson et al. (2007)](https://doi.org/10.3758/BF03192999)
and place it in `/memolon/data/TargetGold`.
* es1. Get the file `Redondo(2007).xls` from [Redondo et al. (2007)](https://doi.org/10.3758/BF03193031) and place it in `/memolon/data/TargetGold`.
* es2. Get the file `13428_2015_700_MOESM1_ESM.csv` from [Stadthagen-Gonzalez et al. (2017)](https://doi.org/10.3758/s13428-015-0700-2) and save it as `/memolon/data/TargetGold/Stadthagen_VA.csv`.
* es3. Get the file `Hinojosa et al_Supplementary materials.xlsx` from [Hinojosa et al. (2015)](https://link.springer.com/article/10.3758%2Fs13428-015-0572-5) and place it in `/memolon/data/TargetGold`.
* es4. Included in the download for es3.
* es5. Get the file `13428_2017_962_MOESM1_ESM.csv` from [Stadthagen-Gonzalez et al. (2018)](https://doi.org/10.3758/s13428-017-0962-y) and save it as `/memolon/data/TargetGold/Stadthagen_BE.csv`.
* es6. Get the file `13428_2016_768_MOESM1_ESM.xls` from [Ferré et al. (2017)](https://doi.org/10.3758/s13428-016-0768-3) and save it as `/memolon/data/TargetGold/Ferre.xlsx`.
* de1. Get the file `13428_2013_426_MOESM1_ESM.xlsx` from [Schmidtke et al. (2014)](https://doi.org/10.3758/s13428-013-0426-y) and save it as `/memolon/data/TargetGold/Schmidtke.xlsx`
* de2. Get the file `BAWL-R.xls` from [Vo et al. (2009)](https://doi.org/10.3758/BRM.41.2.534) which is currently available [here](https://www.ewi-psy.fu-berlin.de/einrichtungen/arbeitsbereiche/allgpsy/Download/BAWL/index.html). You will need to request a password from the authors. Save the file **without password** as `/memolon/data/TargetGold/BAWL-R.xls`. We had to run an automatic file repair when opening it with Excel for the first time.
* de3. Get the file `LANG_database.txt` from [Kanske and Kotz (2010)](https://doi.org/10.3758/BRM.42.4.987) and place it in `/memolon/data/TargetGold`.
* de4. Get de2 (see above). Then, get the file `13428_2011_59_MOESM1_ESM.xls` from [Briesemeister et al. (2011)](https://doi.org/10.3758/s13428-011-0059-y) and save it as `/memolon/data/TargetGold/Briesemeister.xls`.
* pl1. Get the file `data sheet 1.xlsx` from [Imbir (2016)](https://doi.org/10.3389/fpsyg.2016.01081) and save it as `/memolon/data/TargetGold/Imbir.xlsx`.
* pl2. Get the file `13428_2014_552_MOESM1_ESM.xlsx` from [Riegel et al. (2015)](https://doi.org/10.3758/s13428-014-0552-1) and save it as `/memolon/data/TargetGold/Riegel.xlsx`
* pl3. Get pl2 (see above). Then, get the file `S1 Dataset` from [Wierzba et al. (2015)](https://doi.org/10.1371/journal.pone.0132305)
and save it as `/memolon/data/TargetGold/Wierzba.xlsx`.
* zh1. Get CVAW 2.0 from [Yu et al. (2016)](https://doi.org/10.18653/v1/N16-1066) which is distributed via
[this website](http://nlp.innobic.yzu.edu.tw/resources/cvaw.html). Use Google Translate to 'translate' the words in `cvaw2.csv`
from traditional to simplified Chinese characters (you can batch-translate by copy-pasting multiple words separated by newline directly from the file). Save the modified file as `/memolon/data/TargetGold/cvaw2_simplied.csv`.
* zh2. Get the file `13428_2016_793_MOESM2_ESM.pdf` from [Yao et al. (2017)](https://doi.org/10.3758/s13428-016-0793-2). Convert PDF to Excel (there are online tools for that but check the results for correctness) and save as `/memolon/data/TargetGold/Yao.xlsx`.
* it. Get the data from [Montefinese et al. (2014)](https://doi.org/10.3758/s13428-013-0405-3). The website offers a PDF version
of the ratings. However, the formatting makes it very difficult to process automatically. Instead, the first author Maria Montefinese provided us with an Excel version. Save the ratings as `/memolon/data/TargetGold/Montefinese.xls`.
* pt. Get the file `13428_2011_131_MOESM1_ESM.xls` from [Soares et al. (2012)](https://doi.org/10.3758/s13428-011-0131-7).
Save it as `/memolon/data/TargetGold/Soares.xls`.
* nl. Get the file `13428_2012_243_MOESM1_ESM.xlsx` from [Moors et al. (2013)](https://doi.org/10.3758/s13428-012-0243-8).
Save it as `/memolon/data/TargetGold/Moors.xlsx`.
* id. Get the file `Data Sheet 1.XLSX` from [Sianipar et al. (2016)](https://doi.org/10.3389/fpsyg.2016.01907). Save it as `/memolon/data/TargetGold/Sianipar.xlsx`
* el. Get the data from [Palogiannidi et al. (2016)](https://www.aclweb.org/anthology/L16-1458): We downloaded the ratings via the [link](www.telecom.tuc.gr/~epalogiannidi/docs/resources/greek_affective_lexicon.zip) provided in the paper on March 13, 2018. The link pointed to a zip containing a single file `greek_affective_lexicon.csv`, which we saved under `/memolon/data/TargetGold`. However, the original link does not work anymore (as of April 22, 2020). We recommend contacting the authors for a replacement.
* tr1. Get the file `TurkishEmotionalWordNorms.csv` from [Kapucu et al. (2018)](https://doi.org/10.1177/0033294118814722)
which is available [here](https://osf.io/rxtdm/). Place it under `/memolon/data/TargetGold`.
* tr2. Included in the download for tr1.
* hr. Get the file `Supplementary material_Ćoso et al.xlsx` from [Coso et al. (2019)](https://doi.org/10.1177/1747021819834226)
which is available [here](https://www.ucace.com/links/). Save it as `/memolon/data/TargetGold/Coso.xlsx`.



## Citation

If you find this work useful, please cite our paper:

```bib
@inproceedings{buechel-etal-2020-learning-evaluating,
title = "Learning and Evaluating Emotion Lexicons for 91 Languages",
author = {Buechel, Sven and
R{\"u}cker, Susanna and
Hahn, Udo},
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.112",
doi = "10.18653/v1/2020.acl-main.112",
pages = "1202--1217",
abstract = "Emotion lexicons describe the affective meaning of words and thus constitute a centerpiece for advanced sentiment and emotion analysis. Yet, manually curated lexicons are only available for a handful of languages, leaving most languages of the world without such a precious resource for downstream applications. Even worse, their coverage is often limited both in terms of the lexical units they contain and the emotional variables they feature. In order to break this bottleneck, we here introduce a methodology for creating almost arbitrarily large emotion lexicons for any target language. Our approach requires nothing but a source language emotion lexicon, a bilingual word translation model, and a target language embedding model. Fulfilling these requirements for 91 languages, we are able to generate representationally rich high-coverage lexicons comprising eight emotional variables with more than 100k lexical entries each. We evaluated the automatically generated lexicons against human judgment from 26 datasets, spanning 12 typologically diverse languages, and found that our approach produces results in line with state-of-the-art monolingual approaches to lexicon creation and even surpasses human reliability for some languages and variables. Code and data are available at https://github.com/JULIELab/MEmoLon archived under DOI 10.5281/zenodo.3779901.",
}
```





## Contact

Please get in touch via sven dot buechel at uni-jena dot de.

1 change: 1 addition & 0 deletions activate.src
@@ -0,0 +1 @@
export PYTHONPATH=$PYTHONPATH:$(pwd)
1 change: 1 addition & 0 deletions memolon/analyses/.gitignore
@@ -0,0 +1 @@
*.html