chore: regenerate docs
jvallesm committed Oct 18, 2024
1 parent e35adf6 commit d08bbf2
Showing 28 changed files with 104 additions and 104 deletions.
2 changes: 1 addition & 1 deletion pkg/component/ai/anthropic/v0/README.mdx
@@ -60,7 +60,7 @@ Anthropic's text generation models (often called generative pre-trained transfor
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: The prompt images will be injected in the order they are provided to the 'prompt' message. Anthropic doesn't support sending images via image-url, use this field instead) |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed (Note: Not supported by Anthropic Models) |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Top k for sampling |
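
The `chat-history` field in the table above is an array of role/content objects. A minimal, hypothetical example of such a value (role names taken from the table, message text invented for illustration) might look like:

```json
[
  { "role": "user", "content": "Summarise the attached report in two sentences." },
  { "role": "assistant", "content": "The report reviews Q3 revenue and notes a 12% rise in subscriptions." },
  { "role": "user", "content": "Which product line grew fastest?" }
]
```

As the description notes, the System Message is ignored whenever this field is populated.
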
8 changes: 4 additions & 4 deletions pkg/component/ai/cohere/v0/README.mdx
@@ -63,7 +63,7 @@ Cohere's text generation models (often called generative pre-trained transformer
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Documents | `documents` | array[string] | The documents to be used for the model, for optimal performance, the length of each document should be less than 300 words. |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: As for 2024-06-24 Cohere models are not multimodal, so images will be ignored.) |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : \{"role": "The message role, i.e. 'USER' or 'CHATBOT'", "content": "message content"\}. |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : \{"role": "The message role, i.e. 'USER' or 'CHATBOT'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed (default=42) |
| Temperature | `temperature` | number | The temperature for sampling (default=0.7) |
| Top K | `top-k` | integer | Top k for sampling (default=10) |
@@ -199,7 +199,7 @@ Rerank models sort text inputs by semantic relevance to a specified query. They
| Query (required) | `query` | string | The query |
| Documents (required) | `documents` | array[string] | The documents to be used for reranking |
| Top N | `top-n` | integer | The number of most relevant documents or indices to return. Defaults to the length of the documents (default=3) |
-| Maximum number of chunks per document | `max-chunks-per-doc` | integer | The maximum number of chunks to produce internally from a document (default=10) |
+| Maximum Number of Chunks per Document | `max-chunks-per-doc` | integer | The maximum number of chunks to produce internally from a document (default=10) |
</div>


@@ -211,8 +211,8 @@ Rerank models sort text inputs by semantic relevance to a specified query. They

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
-| Reranked documents | `ranking` | array[string] | Reranked documents |
-| Reranked documents relevance (optional) | `relevance` | array[number] | The relevance scores of the reranked documents |
+| Reranked Documents | `ranking` | array[string] | Reranked documents |
+| Reranked Documents Relevance (optional) | `relevance` | array[number] | The relevance scores of the reranked documents |
| [Usage](#text-reranking-usage) (optional) | `usage` | object | Search Usage on the Cohere Platform Rerank Models |
</div>

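
For the rerank task above, the field IDs suggest an input shape roughly like the sketch below; the query and documents are invented for illustration and are not taken from the Cohere documentation:

```json
{
  "query": "How do I reset my password?",
  "documents": [
    "Profile pictures can be changed from the settings page.",
    "Password resets are initiated from the account security page.",
    "Invoices are emailed at the start of each month."
  ],
  "top-n": 2
}
```

A matching output would carry the reranked documents in `ranking` and, optionally, their relevance scores in `relevance` (for example `[0.94, 0.12]`).
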
2 changes: 1 addition & 1 deletion pkg/component/ai/fireworksai/v0/README.mdx
@@ -61,7 +61,7 @@ Fireworks AI's text generation models (often called generative pre-trained trans
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: According to Fireworks AI documentation on 2024-07-24, the total number of images included in a single API request should not exceed 30, and all the images should be smaller than 5MB in size) |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text |
2 changes: 1 addition & 1 deletion pkg/component/ai/groq/v0/README.mdx
@@ -60,7 +60,7 @@ Groq serves open source text generation models (often called generative pre-trai
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: Only a subset of OSS models support image inputs) |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text |
10 changes: 5 additions & 5 deletions pkg/component/ai/instill/v0/README.mdx
@@ -365,7 +365,7 @@ Generate texts from input text prompts and chat history.
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
@@ -432,8 +432,8 @@ Generate images from input text prompts.
| Prompt (required) | `prompt` | string | The prompt text |
| Samples | `samples` | integer | The number of generated samples, default is 1 |
| Seed | `seed` | integer | The seed, default is 0 |
-| Aspect ratio | `negative-prompt` | string | Keywords of what you do not wish to see in the output image. |
-| Aspect ratio | `aspect-ratio` | string | Controls the aspect ratio of the generated image. Defaults to 1:1. |
+| Aspect Ratio | `negative-prompt` | string | Keywords of what you do not wish to see in the output image. |
+| Aspect Ratio | `aspect-ratio` | string | Controls the aspect ratio of the generated image. Defaults to 1:1. |
</div>
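
Combining the text-to-image fields above, a hypothetical input (all values invented for illustration) could look like:

```json
{
  "prompt": "A watercolor lighthouse at dusk",
  "negative-prompt": "text, watermark, blurry",
  "samples": 2,
  "seed": 42,
  "aspect-ratio": "16:9"
}
```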


@@ -461,7 +461,7 @@ Answer questions based on a prompt and an image.
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
-| [Chat history](#visual-question-answering-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#visual-question-answering-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
@@ -528,7 +528,7 @@ Generate texts from input text prompts and chat history.
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
-| [Chat history](#chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
2 changes: 1 addition & 1 deletion pkg/component/ai/mistralai/v0/README.mdx
@@ -61,7 +61,7 @@ Mistral AI's text generation models (often called generative pre-trained transfo
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: The Mistral models are not trained to process images, thus images will be omitted) |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text (Note: The Mistral models does not support top-k sampling) |
2 changes: 1 addition & 1 deletion pkg/component/ai/ollama/v0/README.mdx
@@ -62,7 +62,7 @@ Open-source large language models (OSS LLMs) are artificial intelligence models
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
-| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Top k for sampling |
2 changes: 1 addition & 1 deletion pkg/component/ai/openai/v0/README.mdx
@@ -65,7 +65,7 @@ OpenAI's text generation models (often called generative pre-trained transformer
| Prompt (required) | `prompt` | string | The prompt text |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Image | `images` | array[string] | The images |
-| [Chat history](#text-generation-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
+| [Chat History](#text-generation-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Temperature | `temperature` | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top-p` but not both. |
| N | `n` | integer | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. |
| Max Tokens | `max-tokens` | integer | The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. |
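
For the OpenAI text-generation task, the fields shown above could be combined roughly as in the sketch below (values are placeholders, not taken from the README):

```json
{
  "prompt": "Write a haiku about autumn.",
  "system-message": "You are a helpful assistant.",
  "temperature": 0.2,
  "n": 1,
  "max-tokens": 64
}
```

Keeping `n` at `1` and the temperature low follows the cost and determinism guidance quoted in the table.
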
