Overview (in czech)

python "frontend": posloucha na portu :5000, "kresli" html, bere vstup od uzivatele a posila ho na model servery jako sluzbu (po startu systemu) obsluhuje systemd; cat /etc/systemd/system/transformer_frontend.service zdrojaky jsou v /opt/lindat_translation/ ma to vlastni venv v /opt/lindat_translation/virtualenv (vsechny zavislosti by mely byt v requirements.txt)

app/settings.py: among other things, definitions of variables that can be used in app/models.json under the "server" key; if a given model has no "server", DEFAULT_SERVER is used (currently localhost:9000).
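A sketch of the relevant bit of app/settings.py, just to make the two names used on this page concrete (the real file contains more; the values here only mirror what is mentioned elsewhere in this text):

```python
# app/settings.py (illustrative excerpt, not the full file)
DEFAULT_SERVER = 'localhost:9000'  # used when a model in app/models.json has no "server" key
BATCH_SIZE = 20                    # has to match max_batch_size in batching.config (see below)
```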

app/models.json: configuration of the models the frontend sees/shows.

In principle you probably won't touch app/models.json much; you will delete the models that you don't run anywhere, or possibly set the server address (but that can also be done via DEFAULT_SERVER if all the models run on one machine).

source/target - arrays of languages the given model translates from/to (currently we don't have a multilingual model, but at one point this was experimented with, https://github.com/ufal/lindat-translation/commit/bf2525ca14c39d56c36ac68835ca957758f4fee2#diff-896b9f334980fe91e549ea08ff5f40ae1f46b72f0b15a2536245911189d4319b)

problem - the tensor2tensor problem; some configuration for encoding/decoding with the given model

domain - currently not used for anything, but I think I left it in as required (not sure right now)

model - the name of the servable (it has to be named like this in the tensorflow serving configuration)

At minimum you therefore need at least: { "source": ["en"], "target": ["hi"], "problem": "translate_enhi_wat18", "domain": "", "model": "en-hi" } - this assumes DEFAULT_SERVER and the "tensorflow" model_framework.

Pivoting (e.g. cs-en-hi) is done "automatically": based on source/target it builds a graph (languages are the vertices, there is an edge between the languages for which there is a model; that model is remembered "on the edge") and then it remembers the shortest paths. If there were several models for one language pair, only the last one (in models.json order) hangs on that edge. For experimental models you can set "include_in_graph": false.
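A rough sketch of the idea in Python (this is not the actual app/model_settings.py code; networkx is used here purely for illustration):

```python
import itertools
import networkx as nx

# Each dict stands for one entry in app/models.json.
models = [
    {"source": ["cs"], "target": ["en"], "model": "cs-en"},
    {"source": ["en"], "target": ["hi"], "model": "en-hi"},
]

g = nx.DiGraph()
for m in models:
    for src, tgt in itertools.product(m["source"], m["target"]):
        # A later entry for the same pair overwrites the earlier one,
        # matching the "last one in models.json wins" behaviour above.
        g.add_edge(src, tgt, model=m["model"])

# cs -> hi has no direct edge, so the shortest path pivots through en.
path = nx.shortest_path(g, "cs", "hi")                           # ['cs', 'en', 'hi']
print([g.edges[u, v]["model"] for u, v in zip(path, path[1:])])  # ['cs-en', 'en-hi']
```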

There probably isn't a better reference than https://github.com/ufal/lindat-translation/blob/12f71574f3ce2f67b2423870b65fdbc88d99628c/app/models.json itself and the code that loads that models.json: https://github.com/ufal/lindat-translation/blob/925a2f03a814b03c9b9fd8675cf55095030b1d02/app/model_settings.py and https://github.com/ufal/lindat-translation/blob/12f71574f3ce2f67b2423870b65fdbc88d99628c/app/model.py.


tensorflow_model_server listens on port :9000; as a service it is again handled by systemd; cat /etc/systemd/system/tensorflow_serving.service

More precisely, the config for the current "production" version looks like this:

[Unit]
Description=Transformer - tensorflow serving

[Service]
Environment=CUDA_VISIBLE_DEVICES=0
Environment=LD_LIBRARY_PATH=/opt/cuda/10.0/lib64/:/opt/cuda/10.0/cudnn/7.6/lib64/:/mnt/transformers-shared/TensorRT-5.1.5.0/lib
Environment=PYTHONPATH=/mnt/transformers-shared/venv/lib/python3.6/site-packages
ExecStart=/mnt/transformers-shared/bin/tensorflow_model_server --port=9000 --enable_batching=true --model_config_file=/opt/lindat_transformer_service/model.config --batching_parameters_file=/opt/lindat_transformer_service/batching.config
Restart=always
User=tfserver
WorkingDirectory=/home/tfserver

[Install]
WantedBy=multi-user.target

Some version of tensorflow_model_server can be obtained as a .deb for ubuntu/debian systems, but it usually comes without GPU support. If you have the option of nvidia-docker, I would recommend trying https://www.tensorflow.org/tfx/serving/docker#serving_with_docker_using_your_gpu. If you don't have that option and will be building your own (good luck), I wrote up my experiences here: https://github.com/ufal/lindat-translation#serving-build
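For orientation, running the GPU image from those docs against the configs used here could look roughly like this; the mount paths and the way the configs are mapped into the container are assumptions, and since the gRPC port inside the container defaults to 8500, it is mapped to host port 9000 so the frontend's DEFAULT_SERVER still fits:

```
docker run --gpus all -p 9000:8500 \
  --mount type=bind,source=/path/to/exported/models,target=/models \
  --mount type=bind,source=/opt/lindat_transformer_service/model.config,target=/models/model.config \
  -t tensorflow/serving:latest-gpu \
  --model_config_file=/models/model.config
```

Extra tensorflow_model_server flags (e.g. --enable_batching and --batching_parameters_file, see below) can be appended after the image name in the same way; the base_path entries in model.config then have to use the paths as seen inside the container.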

Here on translation-dev it is built in my home directory; more precisely, it sits in my home in a copy of Dusan's home directory from another machine.

In any case, the important configuration once you have a working model server:

--model_config_file=/opt/lindat_transformer_service/model.config - configuration of the individual servables, mainly the path and the name (which has to match what is in app/models.json for the python frontend)
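The file uses tensorflow serving's ModelServerConfig text format; a minimal sketch for the en-hi example above (the base_path here is made up, it has to point to a directory with numbered version subdirectories of the exported model):

```
model_config_list {
  config {
    name: "en-hi"                        # must match "model" in app/models.json
    base_path: "/path/to/exports/en-hi"  # contains numbered version subdirectories
    model_platform: "tensorflow"
  }
}
```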

--batching_parameters_file=/opt/lindat_transformer_service/batching.config - currently contains: max_batch_size { value: 20 } num_batch_threads { value: 1 }

max_batch_size has to match BATCH_SIZE in app/settings.py (otherwise the "frontend" would send bigger batches than the server expects and it would report errors). You can also set things here that affect "performance" (https://github.com/tensorflow/serving/blob/c4d430906aad17d0db9cef945c79596be3ef0029/tensorflow_serving/batching/README.md#performance-tuning), but they need to be tuned according to the number of loaded models (with more batch_threads it computes more batches at once, eats more memory, and you can get an OOM).
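For reference, a batching.config with the knobs from that README could look like this; only the first two lines reflect the current production setup, the other values are purely illustrative starting points for tuning:

```
max_batch_size { value: 20 }           # has to match BATCH_SIZE in app/settings.py
num_batch_threads { value: 1 }         # more threads = more batches in flight = more memory
batch_timeout_micros { value: 1000 }   # how long to wait for a batch to fill up
max_enqueued_batches { value: 100 }    # queue length before new requests are rejected
```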
