From e00ddb508895a35f837d7a5a0630c11e34c5eff5 Mon Sep 17 00:00:00 2001
From: Sean Lee
Date: Thu, 1 Aug 2024 21:22:13 +0800
Subject: [PATCH] layout

---
 docs/notes/tutorial.rst | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/docs/notes/tutorial.rst b/docs/notes/tutorial.rst
index 65b86c4..011fee1 100644
--- a/docs/notes/tutorial.rst
+++ b/docs/notes/tutorial.rst
@@ -57,7 +57,7 @@ Here's an example of training a BERT-base model:
     --fp16 1
 
 
-And here's an example of training a BERT-large model:
+And here's another example of training a BERT-large model:
 
 .. code-block:: bash
 
@@ -84,15 +84,12 @@ And here's an example of training a BERT-large model:
     --fp16 1
 
 
-These examples use the `WhereIsAI/medical-triples` dataset and specify various hyperparameters for training. Adjust the hyperparameters as needed for your specific use case.
-
-
 Step 3: Evaluate the model
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 AnglE provides a `CorrelationEvaluator `_ to evaluate the performance of sentence embeddings.
 
-For convenience, we have processed the `PubMedQA pqa_labeled `_ data into the `DatasetFormats.A` format and made it available as `WhereIsAI/pubmedqa-test-angle-format-a `_ for evaluation purposes.
+For convenience, we have processed the `PubMedQA `_ pqa_labeled subset data into the `DatasetFormats.A` format and made it available in `WhereIsAI/pubmedqa-test-angle-format-a `_ for evaluation purposes.
 
 The following code demonstrates how to evaluate the trained `pubmed-angle-base-en` model:
 
@@ -135,7 +132,8 @@ Here, we compare the performance of our trained models with two popular models t
 +----------------------------------------+-------------------------+
 
 
-The results show that our trained models, `WhereIsAI/pubmed-angle-base-en` and `WhereIsAI/pubmed-angle-large-en`, performs better than other popular models on the PubMedQA dataset, with the large model achieving the highest Spearman's correlation of **86.21**.
+The results show that our trained models, `WhereIsAI/pubmed-angle-base-en` and `WhereIsAI/pubmed-angle-large-en`, perform better than other popular models on the PubMedQA dataset.
+The large model achieves the highest Spearman's correlation of **86.21**.
 
 
 Step 4: Use the model in your application
@@ -164,4 +162,4 @@ By using `angle-emb`, you can quickly load the model for your applications.
 
     print(cosine_similarity(query_emb, emb))
     # 0.8029839020052982
-    # 0.4260630076818197
\ No newline at end of file
+    # 0.4260630076818197
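The `print(cosine_similarity(query_emb, emb))` call touched by the last hunk compares a query embedding against a document embedding. As a minimal sketch of that computation (plain NumPy with toy vectors, not the helper shipped with `angle-emb`):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the two vectors divided by
    # the product of their Euclidean norms; 1.0 means identical direction.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional embeddings; real models such as
# `pubmed-angle-base-en` produce 768-dimensional vectors.
query_emb = np.array([1.0, 0.0, 1.0])
emb = np.array([1.0, 1.0, 0.0])

print(cosine_similarity(query_emb, emb))  # 0.5 for these toy vectors
```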