
Commit

[auto] Regenerate Ludwig API docs (#359)
Co-authored-by: justinxzhao <[email protected]>
github-actions[bot] and justinxzhao authored Jul 18, 2024
1 parent f9bccc7 commit 11c0818
Showing 2 changed files with 70 additions and 12 deletions.
2 changes: 1 addition & 1 deletion docs/developer_guide/contributing.md
@@ -96,7 +96,7 @@ Work on your self-assigned issue and eventually create a Pull Request.
To do that, edit the file `requirements_extra.txt` and comment out the line that begins with `horovod`. After that,
please execute the long `pip install` command given in the previous step. With these work-around provisions, your
installation should run to completion successfully. If you are still having difficulty, please reach out with the
-specifics of your environment in our Discord Community [Discord](https://discord.gg/CBgdrGnZjy).
+specifics of your environment in the Ludwig Community [Discord](https://discord.gg/CBgdrGnZjy).

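The horovod workaround described above can be sketched as a short script. This is an illustrative sketch, not part of the Ludwig codebase: it demonstrates the edit on a sample file with made-up contents, whereas in practice you would edit the real `requirements_extra.txt` at the repository root and then run the long `pip install` command from the previous step.

```python
# Sketch of the workaround: comment out the line in requirements_extra.txt
# that begins with "horovod". Demonstrated here on a sample file with
# made-up contents; edit the real file in the Ludwig repo root instead.
from pathlib import Path

def comment_out_horovod(path):
    """Prefix any line starting with 'horovod' with '# '."""
    text = Path(path).read_text()
    fixed = "\n".join(
        ("# " + line) if line.startswith("horovod") else line
        for line in text.splitlines()
    )
    Path(path).write_text(fixed + "\n")

# Demo on a sample file (contents are illustrative):
Path("requirements_extra.txt").write_text("torch\nhorovod[pytorch]\n")
comment_out_horovod("requirements_extra.txt")
print(Path("requirements_extra.txt").read_text())
```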
1. Develop features on your branch.

80 changes: 69 additions & 11 deletions docs/user_guide/api/LudwigModel.md
@@ -412,6 +412,30 @@ Manually moves the model to CPU to force GPU memory to be freed.
For more context: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/35


+---
+## generate
+
+
+```python
+generate(
+input_strings,
+generation_config=None,
+streaming=False
+)
+```
+
+
+A simple generate() method that directly uses the underlying transformers library to generate text.
+
+Args:
+input_strings (Union[str, List[str]]): Input text or list of texts to generate from.
+generation_config (Optional[dict]): Configuration for text generation.
+streaming (Optional[bool]): If True, enable streaming output.
+
+Returns:
+Union[str, List[str]]: Generated text or list of generated texts.
+
+
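A minimal usage sketch for the new `generate` method. The model path and prompts are illustrative, and the `generation_config` keys shown are standard transformers generation options assumed to pass through to the underlying library; confirm supported keys against your installed versions.

```python
# Illustrative generation config; these keys are standard transformers
# options assumed to be forwarded to the underlying generate() call.
generation_config = {
    "max_new_tokens": 64,
    "temperature": 0.7,
    "do_sample": True,
}

def generate_from(model_dir, prompts):
    """Load a trained LLM-type Ludwig model and generate text for prompts."""
    from ludwig.api import LudwigModel  # deferred so the sketch reads standalone
    model = LudwigModel.load(model_dir)
    return model.generate(
        prompts,
        generation_config=generation_config,
        streaming=False,
    )

# Requires a trained model on disk (path is an assumption):
# generate_from("results/experiment_run/model", ["What is the capital of France?"])
```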
---
## is_merge_and_unload_set

@@ -442,7 +466,8 @@ load(
gpus=None,
gpu_memory_limit=None,
allow_parallel_threads=True,
-callbacks=None
+callbacks=None,
+from_checkpoint=False
)
```

@@ -470,6 +495,9 @@ determinism.
- __callbacks__ (list, default: `None`): a list of
`ludwig.callbacks.Callback` objects that provide hooks into the
Ludwig pipeline.
+- __from_checkpoint__ (bool, default: `False`): if `True`, the model
+will be loaded from the latest checkpoint (training_checkpoints/)
+instead of the final model weights.

__Return__

@@ -491,7 +519,8 @@ ludwig_model = LudwigModel.load(model_dir)
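The new `from_checkpoint` flag added in this diff can be exercised like this. A hedged sketch: the wrapper function and the model path are illustrative, not part of the Ludwig API.

```python
def load_latest_checkpoint(model_dir):
    """Load a model from its latest training checkpoint (training_checkpoints/)
    instead of the final saved weights, per the from_checkpoint flag above."""
    from ludwig.api import LudwigModel  # deferred so the sketch reads standalone
    return LudwigModel.load(model_dir, from_checkpoint=True)

# Requires a model directory with checkpoints on disk (path is an assumption):
# ludwig_model = load_latest_checkpoint("results/experiment_run/model")
```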

```python
load_weights(
-model_dir
+model_dir,
+from_checkpoint=False
)
```

@@ -502,6 +531,9 @@ __Inputs__

- __model_dir__ (str): filepath string to location of a pre-trained
model
+- __from_checkpoint__ (bool, default: `False`): if `True`, the model
+will be loaded from the latest checkpoint (training_checkpoints/)
+instead of the final model weights.

__Return__

@@ -700,6 +732,32 @@ __Return__
- __return__ (`None`): `None`


+---
+## save_dequantized_base_model
+
+
+```python
+save_dequantized_base_model(
+save_path
+)
+```
+
+
+Upscales quantized weights of a model to fp16 and saves the result in a specified folder.
+
+Args:
+save_path (str): The path to the folder where the upscaled model weights will be saved.
+
+Raises:
+ValueError:
+If the model type is not 'llm' or if quantization is not enabled or the number of bits is not 4 or 8.
+RuntimeError:
+If no GPU is available, as GPU is required for quantized models.
+
+Returns:
+None
+
+
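A usage sketch for the new `save_dequantized_base_model` method. The wrapper and both paths are illustrative; per the docstring above, this only works for an `'llm'` model trained with 4- or 8-bit quantization, and it requires a GPU.

```python
def export_fp16(model_dir, save_path):
    """Dequantize a 4/8-bit quantized LLM's base weights to fp16 and save them.

    Requires model_type 'llm' with quantization enabled and an available GPU;
    raises ValueError / RuntimeError otherwise, per the docs above.
    """
    from ludwig.api import LudwigModel  # deferred so the sketch reads standalone
    model = LudwigModel.load(model_dir)
    model.save_dequantized_base_model(save_path)

# Requires a quantized LLM model and a GPU (paths are assumptions):
# export_fp16("results/experiment_run/model", "dequantized_fp16/")
```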
---
## save_torchscript

@@ -971,23 +1029,23 @@ Uploads trained model artifacts to the HuggingFace Hub.
__Inputs__


-- __repo_id (`str`)__ (`str`)::
+- __repo_id__ (`str`):
A namespace (user or an organization) and a repo name separated
by a `/`.
-- __model_path (`str`)__ (`str`)::
-The path of the saved model. This is the top level directory where
-the models weights as well as other associated training artifacts
-are saved.
-- __private (`bool`, *optional*, defaults to `False`)__ (`bool`, *optional*, defaults to `False`)::
+- __model_path__ (`str`):
+The path of the saved model. This is either (a) the folder where
+the 'model_weights' folder and the 'model_hyperparameters.json' file
+are stored, or (b) the parent of that folder.
+- __private__ (`bool`, *optional*, defaults to `False`):
Whether the model repo should be private.
-- __repo_type (`str`, *optional*)__ (`str`, *optional*)::
+- __repo_type__ (`str`, *optional*):
Set to `"dataset"` or `"space"` if uploading to a dataset or
space, `None` or `"model"` if uploading to a model. Default is
`None`.
-- __commit_message (`str`, *optional*)__ (`str`, *optional*)::
+- __commit_message__ (`str`, *optional*):
The summary / title / first line of the generated commit. Defaults to:
`f"Upload {path_in_repo} with huggingface_hub"`
-- __commit_description (`str` *optional*)__ (`str` *optional*)::
+- __commit_description__ (`str` *optional*):
The description of the generated commit

__Returns__
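A sketch of the upload flow documented above. The method name `upload_to_hf_hub` is an assumption (the heading for this section is collapsed in the diff), and the repo id and model path are illustrative; confirm the method name and signature against your installed Ludwig version.

```python
def push_to_hub(repo_id, model_path):
    """Upload trained Ludwig model artifacts to the HuggingFace Hub.

    `upload_to_hf_hub` is the assumed name of the upload API documented
    above; repo_id and model_path follow the parameter docs in the diff.
    """
    from ludwig.api import LudwigModel  # deferred so the sketch reads standalone
    LudwigModel.upload_to_hf_hub(
        repo_id=repo_id,        # e.g. "my-org/my-ludwig-model" (illustrative)
        model_path=model_path,  # folder containing model_weights/, or its parent
        private=True,
        commit_message="Upload fine-tuned Ludwig model",
    )

# Requires HF credentials and a trained model on disk (values are assumptions):
# push_to_hub("my-org/my-ludwig-model", "results/experiment_run/model")
```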
