Add batch embeddings to README (#98)
ahyatt authored Nov 5, 2024
1 parent 8ca514e commit dcd02c4
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions README.org
@@ -122,6 +122,8 @@ For all callbacks, the callback will be executed in the buffer the function was
- ~llm-chat-streaming provider prompt partial-callback response-callback error-callback~: Similar to ~llm-chat-async~, but requests a streaming response. As the response is built up, ~partial-callback~ is called with all the text retrieved up to the current point. Finally, ~response-callback~ is called with the complete text.
- ~llm-embedding provider string~: With the user-chosen ~provider~, send a string and get an embedding, which is a large vector of floating point values. The embedding represents the semantic meaning of the string, and the vector can be compared against other vectors, where smaller distances between the vectors represent greater semantic similarity.
- ~llm-embedding-async provider string vector-callback error-callback~: Same as ~llm-embedding~ but this is processed asynchronously. ~vector-callback~ is called with the vector embedding, and, in case of error, ~error-callback~ is called with the same arguments as in ~llm-chat-async~.
- ~llm-batch-embedding provider strings~: Same as ~llm-embedding~, but takes a list of strings and returns a list of vectors in the same order as the input strings.
- ~llm-batch-embedding-async provider strings vectors-callback error-callback~: Same as ~llm-embedding-async~, but takes a list of strings; ~vectors-callback~ is called with a list of vectors in the same order as the input strings.
- ~llm-count-tokens provider string~: Count how many tokens are in ~string~. This may vary by ~provider~, because some providers implement an API for this, but counts are typically similar across providers. If the provider has no API support, this returns an estimate.
- ~llm-cancel-request request~: Cancels the given request, if possible. The ~request~ object is the return value of async and streaming functions.
- ~llm-name provider~: Provides a short name of the model or provider, suitable for showing to users.
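
The batch embedding functions added in this commit can be exercised as in the sketch below. This is illustrative only: ~my-provider~ is a placeholder for whatever provider object you have configured (for example, one built with ~make-llm-openai~ from the ~llm-openai~ module), and the input strings are made up.

#+begin_src emacs-lisp
;; Assumes `my-provider' is an already-constructed provider object,
;; e.g. (make-llm-openai :key "...") -- adjust to your setup.

;; Synchronous batch embedding: returns one vector per input string,
;; in the same order as the inputs.
(let ((vectors (llm-batch-embedding my-provider
                                    '("first sentence" "second sentence"))))
  (message "Got %d embedding vectors" (length vectors)))

;; Asynchronous variant: `vectors-callback' receives the list of
;; vectors; on failure, `error-callback' receives the error type and
;; message, as with `llm-chat-async'. Callbacks run in the buffer the
;; function was called from.
(llm-batch-embedding-async
 my-provider
 '("first sentence" "second sentence")
 (lambda (vectors) (message "Got %d embedding vectors" (length vectors)))
 (lambda (type msg) (message "Embedding error %s: %s" type msg)))
#+end_src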