Releases: Guest400123064/bbm25-haystack
v0.2.1
Documentation improvements and slight code refactoring.
v0.2.0-alpha
A major change to the underlying representation of tokenized sentences: n-gram models are now incorporated. Instead of a set of strings, a tokenized sentence is now represented as a set of string n-tuples (n-grams), e.g., [("hello", "world"), ("world", "!")].
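The n-gram representation can be sketched with a small helper; `to_ngrams` is an illustrative function, not part of the package API:

```python
# Minimal sketch of the representation change: tokens become n-gram
# tuples instead of plain strings (bigrams shown here).
def to_ngrams(tokens, n=2):
    """Slide a window of size n over the token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["hello", "world", "!"]
bigrams = to_ngrams(tokens)  # [("hello", "world"), ("world", "!")]
```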
v0.1.3
- Make the retriever `run` method set the 'documents' attribute so that it can work in a pipeline
- Set scores on returned documents
- Return copied documents
v0.1.2
Minor bug fix
v0.1.1
Enable evaluation on the BEIR benchmark!
v0.1.0-beta
- Code refactor
- Leverage Haystack filtering logic by default (configurable via an initialization parameter)
- Use LLaMA-2 tokenizer as the default tokenizer
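Haystack-style filtering matches documents against condition dictionaries; the following is a tiny self-contained sketch of comparison-style filtering in that spirit, not the package's or Haystack's actual implementation:

```python
# Toy comparison-filter evaluator over document metadata dicts.
# Each condition names a field, an operator, and a value to compare.
OPERATORS = {
    "==": lambda a, b: a == b,
    "!=": lambda a, b: a != b,
    ">":  lambda a, b: a is not None and a > b,
    ">=": lambda a, b: a is not None and a >= b,
    "in": lambda a, b: a in b,
}

def matches(meta: dict, condition: dict) -> bool:
    op = OPERATORS[condition["operator"]]
    return op(meta.get(condition["field"]), condition["value"])

def filter_documents(documents, condition):
    return [d for d in documents if matches(d, condition)]

docs = [{"lang": "en", "year": 2023}, {"lang": "fr", "year": 2021}]
recent = filter_documents(docs, {"field": "year", "operator": ">=", "value": 2022})
# keeps only the 2023 document
```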
v0.1.0-alpha.1
Minimum viable product. This is an experimental project aiming to enhance the default InMemoryDocumentStore with incremental indexing and SentencePiece tokenization. Now installable from PyPI via `pip install bbm25-haystack`.
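Incremental indexing means the inverted index is updated per document on add and delete rather than rebuilt from scratch. A minimal self-contained sketch of the idea (not the package's implementation):

```python
from collections import defaultdict

class IncrementalIndex:
    """Toy inverted index updated one document at a time."""

    def __init__(self):
        # token -> {doc_id: term frequency}
        self._postings = defaultdict(dict)

    def add(self, doc_id, tokens):
        # Only the postings for this document's tokens are touched.
        for tok in tokens:
            self._postings[tok][doc_id] = self._postings[tok].get(doc_id, 0) + 1

    def delete(self, doc_id):
        # Remove the document from every posting list it appears in.
        for postings in self._postings.values():
            postings.pop(doc_id, None)

    def candidates(self, query_tokens):
        # Union of documents containing any query token.
        ids = set()
        for tok in query_tokens:
            ids.update(self._postings.get(tok, {}).keys())
        return ids
```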
v0.1.0-alpha
Minimum viable product. This is an experimental project aiming to enhance the default InMemoryDocumentStore with incremental indexing and SentencePiece tokenization. Now installable from PyPI via `pip install bbm25-haystack`.