Add Doc Strings to Config Files #465

Status: Open. Wants to merge 25 commits into base: main.
Changes shown are from 5 of the 25 commits.

Commits (25):
9334097  Add docstrings and comments. (ParagEkbote, Dec 19, 2024)
72534f6  Merge branch 'huggingface:main' into Document-Custom-Model-Files (ParagEkbote, Dec 20, 2024)
f35ad57  Add docstrings for config class. (ParagEkbote, Dec 20, 2024)
e9d7bb0  Merge branch 'Document-Custom-Model-Files' of https://github.com/Para… (ParagEkbote, Dec 20, 2024)
291a92c  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Dec 22, 2024)
4f25938  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Dec 26, 2024)
bf50f2d  Add proper spacing between comments. (ParagEkbote, Dec 30, 2024)
1a96a1f  Merge branch 'Document-Custom-Model-Files' of https://github.com/Para… (ParagEkbote, Dec 30, 2024)
3ca0960  Re-write comments as per review (ParagEkbote, Dec 30, 2024)
bf65e27  Update spacing in yaml file. (ParagEkbote, Dec 30, 2024)
9b753af  Merge branch 'Document-Custom-Model-Files' of https://github.com/Para… (ParagEkbote, Dec 30, 2024)
caae117  Re-write comments as per review-2 (ParagEkbote, Dec 31, 2024)
095f277  Make style. (ParagEkbote, Dec 31, 2024)
b10f9da  Merge branch 'Document-Custom-Model-Files' of https://github.com/Para… (ParagEkbote, Dec 31, 2024)
e3f6f13  make style. (ParagEkbote, Jan 9, 2025)
79a3e35  styling improvements. (ParagEkbote, Jan 9, 2025)
4f7ee64  Update Docstrings and fix formatting. (ParagEkbote, Jan 11, 2025)
dda8266  Merge branch 'main' of https://github.com/ParagEkbote/lighteval (ParagEkbote, Jan 11, 2025)
5d225b3  Update docstrings to files. (ParagEkbote, Jan 11, 2025)
262b1cd  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Jan 17, 2025)
4c33274  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Jan 21, 2025)
5815119  Merge branch 'main' into Document-Custom-Model-Files (clefourrier, Jan 23, 2025)
73af85b  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Jan 23, 2025)
ab38bd5  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Jan 30, 2025)
d809e39  Merge branch 'main' into Document-Custom-Model-Files (ParagEkbote, Feb 6, 2025)
2 changes: 1 addition & 1 deletion examples/model_configs/base_model.yaml
@@ -1,6 +1,6 @@
model:
base_params:
model_args: "pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True ...
model_args: "pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True.To see the full list of parameters, please click here: https://huggingface.co/docs/lighteval/main/en/quicktour#model-arguments
dtype: "bfloat16"
compile: true
merged_weights: # Ignore this section if you are not using PEFT models
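For illustration, here is a minimal sketch of how the comma-separated key=value format of model_args above could be read. This is a hypothetical helper written for this note, not lighteval's actual parser; it only demonstrates the convention described in the comment.

def parse_model_args(model_args: str) -> dict:
    """Hypothetical helper: parses a "key=value,key=value" string such as model_args above."""
    parsed = {}
    for pair in model_args.split(","):
        key, _, value = pair.partition("=")
        # Coerce boolean literals; everything else stays a string.
        parsed[key] = value == "True" if value in ("True", "False") else value
    return parsed

args = parse_model_args("pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main,model_parallel=True")
# -> {'pretrained': 'HuggingFaceH4/zephyr-7b-beta', 'revision': 'main', 'model_parallel': True}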
2 changes: 1 addition & 1 deletion examples/model_configs/peft_model.yaml
@@ -1,6 +1,6 @@
model:
base_params:
model_args: "pretrained=predibase/customer_support,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True ... For a PEFT model, the pretrained model should be the one trained with PEFT and the base model below will contain the original model on which the adapters will be applied.
model_args: "pretrained=predibase/customer_support,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True ... For a PEFT model, the pretrained model should be the one trained with PEFT and the base model below will contain the original model on which the adapters will be applied.To see the full list of parameters, please see here: https://huggingface.co/docs/lighteval/main/en/package_reference/models#lighteval.models.transformers.adapter_model.AdapterModelConfig
dtype: "4bit" # Specifying the model to be loaded in 4 bit uses BitsAndBytesConfig. The other option is to use "8bit" quantization.
compile: true
merged_weights: # Ignore this section if you are not using PEFT models
2 changes: 1 addition & 1 deletion examples/model_configs/quantized_model.yaml
@@ -1,6 +1,6 @@
model:
base_params:
model_args: "pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True ...
model_args: "pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main" # pretrained=model_name,trust_remote_code=boolean,revision=revision_to_use,model_parallel=True.To see the full list of parameters, please see here: https://huggingface.co/docs/lighteval/main/en/quicktour#model-arguments
dtype: "4bit" # Specifying the model to be loaded in 4 bit uses BitsAndBytesConfig. The other option is to use "8bit" quantization.
compile: true
merged_weights: # Ignore this section if you are not using PEFT models
2 changes: 1 addition & 1 deletion examples/model_configs/serverless_model.yaml
@@ -1,3 +1,3 @@
model:
base_params:
model_name: "meta-llama/Llama-3.1-8B-Instruct" #Qwen/Qwen2.5-14B" #Qwen/Qwen2.5-7B"
model_name: "meta-llama/Llama-3.1-8B-Instruct" #Qwen/Qwen2.5-14B" #Qwen/Qwen2.5-7B"To see the full list of parameters, please see here: https://huggingface.co/docs/lighteval/package_reference/models#endpoints-based-models
2 changes: 1 addition & 1 deletion examples/model_configs/tgi_model.yaml
@@ -2,4 +2,4 @@ model:
instance:
inference_server_address: ""
inference_server_auth: null
model_id: null # Optional, only required if the TGI container was launched with model_id pointing to a local directory
model_id: null # Optional, only required if the TGI container was launched with model_id pointing to a local directory. To see the full list of parameters, please see here: https://huggingface.co/docs/lighteval/package_reference/models#lighteval.models.endpoints.tgi_model.TGIModelConfig
21 changes: 21 additions & 0 deletions src/lighteval/models/endpoints/endpoint_model.py
@@ -90,6 +90,27 @@ def from_path(cls, path: str) -> "ServerlessEndpointModelConfig":

@dataclass
class InferenceEndpointModelConfig:
"""
This class is designed to manage and define settings for deploying inference endpoints for machine learning models.

Attributes:
endpoint_name (str, optional): The name of the inference endpoint.
model_name (str, optional): The name of the model for inference.
reuse_existing (bool, default: False): Indicates whether to reuse an existing endpoint.
accelerator (str, default: "gpu"): Specifies the type of hardware accelerator.
model_dtype (str, optional): The data type used by the model. Defaults to the framework's choice if None.
vendor (str, default: "aws"): Cloud service provider for hosting the endpoint.
region (str, default: "us-east-1"): Cloud region, chosen based on hardware availability.
instance_size (str, optional): Specifies the size of the instance (e.g., large, xlarge).
instance_type (str, optional): Specifies the type of the instance (e.g., g5.4xlarge).
framework (str, default: "pytorch"): Framework used for inference (e.g., pytorch, tensorflow).
endpoint_type (str, default: "protected"): Security level of the endpoint (e.g., public, protected).
add_special_tokens (bool, default: True): Specifies if special tokens should be added during processing.
revision (str, default: "main"): The Git branch or commit hash of the model.
namespace (str, optional): The namespace under which the endpoint is launched.
image_url (str, optional): Docker image URL for the endpoint.
env_vars (dict, optional): Environment variables for the endpoint.
"""
endpoint_name: str = None
model_name: str = None
reuse_existing: bool = False
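To make the attribute list above concrete, here is a hedged usage sketch. It assumes the import path follows the file location and that the dataclass fields carry the same names as the documented attributes; the endpoint name and instance choices are illustrative placeholders, not values from this PR.

from lighteval.models.endpoints.endpoint_model import InferenceEndpointModelConfig

# Sketch only: deploy a protected GPU endpoint on AWS us-east-1.
# Field names are taken from the docstring above; values are placeholders.
config = InferenceEndpointModelConfig(
    endpoint_name="zephyr-7b-beta-eval",        # placeholder endpoint name
    model_name="HuggingFaceH4/zephyr-7b-beta",
    reuse_existing=False,
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="xlarge",
    instance_type="g5.4xlarge",
)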
7 changes: 7 additions & 0 deletions src/lighteval/models/endpoints/openai_model.py
@@ -61,6 +61,13 @@

@dataclass
class OpenAIModelConfig:
"""
A configuration class for OpenAI models. This class is used to specify settings related to OpenAI models,
including the model name or identifier.

Attributes:
model (str): The name or identifier of the OpenAI model to be used.
"""
model: str


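A correspondingly small sketch for the OpenAI config, assuming the import path follows the file location above; the model identifier is a placeholder, not one prescribed by this PR.

from lighteval.models.endpoints.openai_model import OpenAIModelConfig

# Sketch only: the single field holds the OpenAI model name or identifier.
config = OpenAIModelConfig(model="gpt-4o-mini")  # placeholder identifier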
8 changes: 8 additions & 0 deletions src/lighteval/models/endpoints/tgi_model.py
@@ -47,6 +47,14 @@ def divide_chunks(array, n):

@dataclass
class TGIModelConfig:
"""
This class provides a streamlined configuration for integrating with Text Generation Inference (TGI) endpoints.

Attributes:
inference_server_address (str, required): The endpoint address of the inference server hosting the model.
inference_server_auth (str, required): Authentication credentials or tokens required to access the server.
[Review comment from a maintainer]: I think it should only be a token
model_id (str, required): Identifier for the model hosted on the inference server.
"""
inference_server_address: str
inference_server_auth: str
model_id: str
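For illustration, a hedged sketch of the three fields, assuming the import path follows the file location above; the address and token are placeholders that would mirror whatever is set in tgi_model.yaml.

from lighteval.models.endpoints.tgi_model import TGIModelConfig

# Sketch only: fields mirror the docstring above.
config = TGIModelConfig(
    inference_server_address="http://localhost:8080",  # placeholder TGI endpoint address
    inference_server_auth="hf_xxx",                     # placeholder token (see the review note above)
    model_id=None,  # per tgi_model.yaml, only needed when the container points at a local directory
)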
26 changes: 25 additions & 1 deletion src/lighteval/models/transformers/adapter_model.py
@@ -41,6 +41,10 @@

@dataclass
class AdapterModelConfig(BaseModelConfig):
"""
This class is used to manage the configuration of adapter models. Adapter models are designed to extend or adapt a
base model's functionality for specific tasks while keeping most of the base model's parameters frozen.
"""
# Adapter models have the specificity that they look at the base model (= the parent) for the tokenizer and config
base_model: str = None

@@ -58,7 +62,19 @@ def init_configs(self, env_config: EnvConfig):


class AdapterModel(BaseModel):
"""
This class is designed to integrate adapter models with a pre-trained base model.
"""
def _create_auto_tokenizer(self, config: AdapterModelConfig, env_config: EnvConfig) -> PreTrainedTokenizer:
"""
Creates and configures the tokenizer for the adapter model, reusing the tokenizer of the base (parent) model.

Args:
config (AdapterModelConfig): An instance of AdapterModelConfig.
env_config (EnvConfig): An instance of EnvConfig.

Returns: PreTrainedTokenizer
"""
# By default, we look at the model config for the model stored in `base_model`
# (= the parent model, not the model of interest)
return self._create_auto_tokenizer_with_name(
Expand All @@ -71,7 +87,15 @@ def _create_auto_tokenizer(self, config: AdapterModelConfig, env_config: EnvConf
)

def _create_auto_model(self, config: AdapterModelConfig, env_config: EnvConfig) -> AutoModelForCausalLM:
"""Returns a PeftModel from a base model and a version fined tuned using PEFT."""
"""
Returns a PeftModel from a base model and a version fine-tuned using PEFT.

Args:
config (AdapterModelConfig): An instance of AdapterModelConfig.
env_config (EnvConfig): An instance of EnvConfig.

Returns: AutoModelForCausalLM
"""
torch_dtype = _get_dtype(config.dtype, self._config)
config.model_parallel, max_memory, device_map = self.init_model_parallel(config.model_parallel)

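A hedged sketch tying this back to peft_model.yaml above: the PEFT-trained repository goes in pretrained and the original model in base_model. The import path is inferred from the file location, the assumption that pretrained is inherited from BaseModelConfig is mine rather than stated in this diff, and the base model name is a placeholder.

from lighteval.models.transformers.adapter_model import AdapterModelConfig

# Sketch only: mirrors the peft_model.yaml example above.
# `pretrained` is assumed to be inherited from BaseModelConfig.
config = AdapterModelConfig(
    pretrained="predibase/customer_support",   # the PEFT-trained adapter repository
    base_model="mistralai/Mistral-7B-v0.1",    # placeholder: the base model the adapters are applied to
)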
13 changes: 12 additions & 1 deletion src/lighteval/models/transformers/delta_model.py
@@ -38,6 +38,9 @@

@dataclass
class DeltaModelConfig(BaseModelConfig):
"""
This class is used to manage the configuration of delta models.
"""
[Review comment from a maintainer on lines +41 to +43]: Explain what delta weights are
# Delta models look at the pretrained (= the delta weights) for the tokenizer and model config
base_model: str = None

@@ -59,7 +62,15 @@ def _create_auto_model(
config: DeltaModelConfig,
env_config: EnvConfig,
) -> AutoModelForCausalLM:
"""Returns a model created by adding the weights of a delta model to a base model."""
"""
Returns a model created by adding the weights of a delta model to a base model.

Args:
config (DeltaModelConfig): An instance of DeltaModelConfig.
env_config (EnvConfig): An instance of EnvConfig.

Returns: AutoModelForCausalLM
[Review comment from a maintainer on lines +69 to +73]: The args and returns are not adding new information, remove

"""
config.model_parallel, max_memory, device_map = self.init_model_parallel(config.model_parallel)
torch_dtype = _get_dtype(config.dtype, self._config)

22 changes: 22 additions & 0 deletions src/lighteval/models/vllm/vllm_model.py
@@ -68,6 +68,28 @@

@dataclass
class VLLMModelConfig:
"""
This class defines the configuration parameters for deploying and running models using the vLLM framework.

Attributes:
pretrained (str, required): The identifier for the pretrained model (e.g., model name or path).
gpu_memory_utilisation (float, default: 0.9): Fraction of GPU memory to allocate for the model. Reduce this value if you encounter memory issues.
revision (str, default: "main"): Specifies the branch or version of the model repository.
dtype (str | None, optional): Data type for computations (e.g., float32, float16, or bfloat16). Defaults to the model's preset if None.
tensor_parallel_size (int, default: 1): Number of GPUs used for splitting tensors across devices.
pipeline_parallel_size (int, default: 1): Number of GPUs used for pipeline parallelism.
data_parallel_size (int, default: 1): Number of GPUs used for data parallelism.
max_model_length (int | None, optional): Maximum sequence length for the model. If None, it is inferred automatically. Can be reduced to handle Out-of-Memory (OOM) issues.
swap_space (int, default: 4): Amount of CPU swap space (in GiB) per GPU for offloading.
seed (int, default: 1234): Seed for reproducibility in experiments.
trust_remote_code (bool, default: False): Whether to trust custom code provided by remote repositories.
use_chat_template (bool, default: False): Specifies if chat-specific templates should be used for input formatting.
add_special_tokens (bool, default: True): Indicates whether to add special tokens during tokenization.
multichoice_continuations_start_space (bool, default: True): Adds a space at the beginning of each continuation during multi-choice generation.
pairwise_tokenization (bool, default: False): Specifies if context and continuation should be tokenized separately or together.
subfolder (Optional[str], optional): Path to a specific subfolder in the model repository, if applicable.
temperature (float, default: 0.6): Sampling temperature for stochastic tasks. Ignored for deterministic tasks (set internally to 0).
"""
pretrained: str
gpu_memory_utilisation: float = 0.9 # lower this if you are running out of memory
revision: str = "main" # revision of the model
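Finally, a hedged usage sketch for the vLLM config, assuming the import path follows the file location above; only a few of the documented attributes are set, the rest keep their defaults, and the model name is a placeholder.

from lighteval.models.vllm.vllm_model import VLLMModelConfig

# Sketch only: field names are taken from the docstring above.
config = VLLMModelConfig(
    pretrained="HuggingFaceH4/zephyr-7b-beta",  # placeholder model identifier
    gpu_memory_utilisation=0.8,  # lowered from the 0.9 default to leave headroom, per the docstring advice
    dtype="bfloat16",
    tensor_parallel_size=2,      # split tensors across two GPUs
    max_model_length=4096,       # cap sequence length to reduce the risk of OOM
    seed=1234,
)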