Releases: huggingface/huggingface_hub
[v0.25.1]: Raise error if an error is encountered in the chat completion SSE stream
Full Changelog : v0.25.0...v0.25.1
For more details, refer to the related PR #2558
v0.25.0: Large uploads made simple + quality of life improvements
📂 Upload large folders
Uploading large models or datasets is challenging. We've already written some tips and tricks to facilitate the process but something was still missing. We are now glad to release the huggingface-cli upload-large-folder
command. Consider it as a "please upload this no matter what, and be quick" command. Contrary to huggingface-cli upload
, this new command is more opinionated and will split the upload into several commits. Multiple workers are started locally to hash, pre-upload and commit the files in a way that is resumable, resilient to connection errors, and optimized against rate limits. This feature has already been stress tested by the community over the last few months to make it as easy and convenient to use as possible.
Here is how to use it:
huggingface-cli upload-large-folder <repo-id> <local-path> --repo-type=dataset
Every minute, a report is logged with the current status of the files and workers:
---------- 2024-04-26 16:24:25 (0:00:00) ----------
Files: hashed 104/104 (22.5G/22.5G) | pre-uploaded: 0/42 (0.0/22.5G) | committed: 58/104 (24.9M/22.5G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 6 | committing: 0 | waiting: 0
---------------------------------------------------
You can also run it from a script:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
... repo_id="HuggingFaceM4/Docmatix",
... repo_type="dataset",
... folder_path="/path/to/local/docmatix",
... )
For more details about the command options, run:
huggingface-cli upload-large-folder --help
or visit the upload guide.
- CLI to upload arbitrary huge folder by @Wauplin in #2254
- Reduce number of commits in upload large folder by @Wauplin in #2546
- Suggest using upload_large_folder when appropriate by @Wauplin in #2547
✨ HfApi & CLI improvements
🔍 Search API
The search API has been updated. You can now list gated models and datasets, and filter models by their inference status (warm, cold, frozen); see the sketch below the PR list.
- Add 'gated' search parameter by @Wauplin in #2448
- Filter models by inference status by @Wauplin in #2517
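Here is a minimal sketch of what this enables. The parameter names (gated, inference) are assumptions inferred from the PRs above, not confirmed signatures:
>>> from huggingface_hub import list_datasets, list_models
>>> # Assumed parameter names following the PRs above.
>>> # Filter models by inference status: "warm", "cold" or "frozen".
>>> warm_models = list_models(inference="warm", limit=5)
>>> # List gated datasets only.
>>> gated_datasets = list_datasets(gated=True, limit=5)
>>> for model in warm_models:
...     print(model.id)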
More complete support for the expand[]
parameter:
- Document baseModels and childrenModelCount as expand parameters by @Wauplin in #2475
- Better support for trending score by @Wauplin in #2513
- Add GGUF as supported expand[] parameter by @Wauplin in #2545
👤 User API
Organizations are now included when retrieving the user overview.
get_user_followers and get_user_following are now paginated. This was not the case before, leading to issues for users with more than 1000 followers.
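A rough illustration of these changes (the attribute names on the returned objects are assumptions for the example):
>>> from huggingface_hub import get_user_followers, get_user_overview
>>> overview = get_user_overview("julien-c")  # now also includes the user's organizations
>>> # get_user_followers / get_user_following now return a paginated iterator instead of a list
>>> for follower in get_user_followers("julien-c"):
...     print(follower.username)  # attribute name assumed for illustration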
📦 Repo API
Added auth_check
to easily verify if a user has access to a repo. It raises GatedRepoError
if the repo is gated and the user does not have permission, or RepositoryNotFoundError
if the repo does not exist or is private. If the method does not raise an error, you can assume the user has permission to access the repo.
>>> from huggingface_hub import auth_check
>>> from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")
- implemented auth_check by @cjfghk5697 in #2497
It is now possible to set a repo as gated from a script:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto") # Set to "auto", "manual" or False
- [Feature] Add update_repo_settings function to HfApi #2447 by @WizKnight in #2502
⚡️ Inference Endpoint API
A few improvements in the InferenceEndpoint
API. It's now possible to set a scale_to_zero_timeout
parameter and to configure secrets when creating or updating an Inference Endpoint, as sketched after the PR list below.
- Add scale_to_zero_timeout parameter to HFApi.create/update_inference_endpoint by @hommayushi3 in #2463
- Update endpoint.update signature by @Wauplin in #2477
- feat: ✨ allow passing secrets to the inference endpoint client by @LuisBlanche in #2486
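A hedged sketch of how the new parameters fit into create_inference_endpoint. The hardware values and secret names below are illustrative assumptions, not recommendations:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     name="my-endpoint",                       # illustrative values
...     repository="openai-community/gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     instance_size="x2",
...     instance_type="intel-icl",
...     scale_to_zero_timeout=15,                 # new: idle duration before scaling to zero (minutes assumed)
...     secrets={"MY_SECRET_KEY": "secret_value"},  # new: secrets injected into the endpoint
... )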
💾 Serialization
The torch serialization module now supports tensor subclasses.
We also made sure that the library is tested with both torch
1.x and 2.x to ensure compatibility.
- Making wrapper tensor subclass to work in serialization by @jerryzh168 in #2440
- Torch: test on 1.11 and latest versions + explicitly load with weights_only=True by @Wauplin in #2488
💔 Breaking changes
Breaking changes:
- InferenceClient.conversational task has been removed in favor of InferenceClient.chat_completion. Also removed ConversationalOutput data class.
- All InferenceClient output values are now dataclasses, not dictionaries.
- list_repo_likers is now paginated. This means the output is now an iterator instead of a list (see the sketch after the PR list below).
Deprecation:
- multi_commit: bool parameter in upload_folder is now deprecated, along with create_commits_on_pr. It is now recommended to use upload_large_folder instead. Though its API and internals are different, the goal is still to be able to upload many files in several commits.
- Prepare for release 0.25 by @Wauplin in #2400
- Paginate repo likers endpoint by @hanouticelina in #2530
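As a quick illustration of the list_repo_likers change, iterating keeps working, but materializing a list now requires an explicit conversion (the attribute name is an assumption):
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # list_repo_likers now returns a paginated iterator instead of a list
>>> for user in api.list_repo_likers("gpt2"):
...     print(user.username)  # attribute name assumed for illustration
>>> # If you really need a list, convert explicitly:
>>> # all_likers = list(api.list_repo_likers("gpt2"))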
🛠️ Small fixes and maintenance
⚡️ InferenceClient fixes
Thanks to community feedback, we've been able to improve or fix significant things in both the InferenceClient
and its async version AsyncInferenceClient
. These fixes have been mainly focused on the OpenAI-compatible chat_completion
method and the Inference Endpoints services.
- [Inference] Support stop parameter in text-generation instead of stop_sequences by @Wauplin in #2473
- [hot-fix] Handle [DONE] signal from TGI + remove logic for "non-TGI servers" by @Wauplin in #2410
- Fix chat completion url for OpenAI compatibility by @Wauplin in #2418
- Bug - [InferenceClient] - use proxy set in var env by @morgandiverrez in #2421
- Document the difference between model and base_url by @Wauplin in #2431
- Fix broken AsyncInferenceClient on [DONE] signal by @Wauplin in #2458
- Fix InferenceClient for HF Nvidia NIM API by @Wauplin in #2482
- Properly close session in AsyncInferenceClient by @Wauplin in #2496
- Fix unclosed aiohttp.ClientResponse objects by @Wauplin in #2528
- Fix resolve chat completion URL by @Wauplin in #2540
😌 QoL improvements
When uploading a folder, we validate the README.md file before hashing all the files, not after.
This should save some precious time when uploading large files alongside a corrupted model card, since the upload now fails fast instead of hashing everything first.
Also, it is now possible to pass a --max-workers
argument when uploading a folder from the CLI (see the example after the PR list below).
- huggingface-cli upload - Validate README.md before file hashing by @hlky in #2452
- Solved: Need to add the max-workers argument to the huggingface-cli command by @devymex in #2500
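A hedged example of the new flag; the repo id, paths and worker count are placeholders, and the exact invocation may differ:
huggingface-cli upload Wauplin/my-cool-model ./models/my-cool-model . --max-workers=4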
All custom exceptions raised by huggingface_hub
are now defined in huggingface_hub.errors
module. This should make it easier to import them for your try/except
statements.
- Define error by @cjfghk5697 in #2444
- Define cache errors in errors.py by @010kim in #2470
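For example, exceptions can now be imported from a single place (a minimal sketch; the repo id is a placeholder):
>>> from huggingface_hub import hf_hub_download
>>> from huggingface_hub.errors import RepositoryNotFoundError
>>> try:
...     hf_hub_download(repo_id="some-user/repo-that-does-not-exist", filename="config.json")
... except RepositoryNotFoundError as e:
...     print(f"Repository not found: {e}")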
At the same time, we've reworked how errors are formatted in hf_raise_for_status
to print more relevant information to the users.
- Refacto error parsing (HfHubHttpError) by @Wauplin in #2474
- Raise with more info on 416 invalid range by @Wauplin in #2449
All constants in huggingface_hub
are now imported as a module. This makes it easier to patch their values, for example in a test pipeline.
- Update constants import to use module-level access #1172 by @WizKnight in #2453
- Update constants imports with module level access #1172 by @WizKnight in #2469
- Refactor all constant imports to module-level access by @WizKnight in #2489
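A small sketch of what module-level access enables in a test pipeline, using pytest's monkeypatch fixture (the staging endpoint URL is an assumption):
>>> import huggingface_hub.constants
>>> def test_against_staging(monkeypatch):
...     # Patch the module attribute instead of a value imported by copy,
...     # so every caller reading the constant picks up the patched value.
...     monkeypatch.setattr(huggingface_hub.constants, "ENDPOINT", "https://hub-ci.huggingface.co")
...     ...  # run test code against the patched endpoint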
Other quality of life improvements:
- Warn if user tries to upload a parquet file to a model repo by @Wauplin in #2403
- Tag repos using HFSummaryWriter with 'hf-summary-writer' by @Wauplin in #2398
- Do not raise if branch exists and no write permission by @Wauplin in #2426
- expose scan_cache table generation to python by @rsxdalv in #2437
- Expose RepoUrl info in CommitInfo object by @Wauplin in #2487
- Add new hardware flavors by @apolinario in #2512
- http_backoff retry with SliceFileObj by @hlky in #2542
- Add version cli command by @010kim in #2498
🐛 fixes
- Fix filelock if flock not supported by @Wauplin in #2402
- Fix creating empty commit on PR by @Wauplin in #2413
- fix expand in CI by @Wauplin (direct commit on main)
- Update quick-start.md by @AxHa in #2422
- fix repo-files CLI example by @Wauplin in #2428
- Do not raise if chmod fails by @Wauplin in #2429
- fix .huggingface to .cache/huggingface in doc by @lizzzcai in #2432
- Fix shutil move by @Wauplin in #2433
- Correct "login" to "log in" when used as verb by @DePasqualeOrg in #2434
- Typo for plural by @david4096 in #2439
- fix typo in file download warning message about symlinks by @joetam in #2442
- Fix typo double assignment by @Wauplin in #2443
- [webhooks server] rely on SPACE_ID to check if app is local or in a Space by @Wauplin in #2450
- Fix error message on permission issue by @Wauplin in #2465
- Fix: do not erase existi...
[v0.24.7]: Fix race-condition issue when downloading from multiple threads
Full Changelog: v0.24.6...v0.24.7
For more details, refer to the related PR #2534.
[v0.24.6]: Fix [DONE] handling for `AsyncInferenceClient` on TGI 2.2.0+
Full Changelog: v0.24.5...v0.24.6
[v0.24.5] Fix download process on S3 mount (v2)
Follow-up after #2433 and v0.24.4 patch release. This release will definitely fix things.
Full Changelog: v0.24.4...v0.24.5
[v0.24.4] Fix download process on S3 mount
When downloading a file, the process was failing if the filesystem did not support either chmod
or shutil.copy2
when moving a file from the tmp folder to the cache. This patch release fixes this. More details in #2429.
Full Changelog: v0.24.3...v0.24.4
[v0.24.3] Fix InferenceClient base_url for OpenAI compatibility
Fixing a bug in the chat completion URL to follow the OpenAI standard #2418. InferenceClient
now works with URLs ending with /
, /v1
and /v1/chat/completions
.
Full Changelog: v0.24.2...v0.24.3
[v0.24.2] Fix create empty commit PR should not fail
See #2413 for more details.
Creating an empty commit on a PR was failing due to a revision
parameter being quoted twice. This patch release fixes it.
Full Changelog: v0.24.1...v0.24.2
[v0.24.1] Handle [DONE] signal from TGI + remove logic for "non-TGI servers"
This release fixes 2 things:
- handle "[DONE]" message in chat stream (related to TGI update huggingface/text-generation-inference#2221)
- remove the "non-TGI" logic in chat completion since all models support server-side rendering now that even transformers-backed models are served with TGI.
See #2410 for more details.
Full Changelog: v0.24.0...v0.24.1
v0.24.0: Inference, serialization and optimizations
⚡️ OpenAI-compatible inference client!
The InferenceClient
's chat completion API is now fully compliant with the OpenAI
client. This means it's a drop-in replacement in your script:
- from openai import OpenAI
+ from huggingface_hub import InferenceClient

- client = OpenAI(
+ client = InferenceClient(
    base_url=...,
    api_key=...,
)

output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)
Why switch to InferenceClient
if you already use OpenAI
then? Because it's better integrated with HF services, such as the Serverless Inference API and Dedicated Endpoints. Check out the more detailed answer in this HF Post.
For more details about OpenAI compatibility, check out this guide's section.
- True OpenAI drop-in replacement by InferenceClient by @Wauplin in #2384
- Promote chat_completion in inference guide by @Wauplin in #2366
(other) InferenceClient
improvements
Some new parameters have been added to the InferenceClient
, following the latest changes in our Inference API:
- prompt_name, truncate and normalize in feature_extraction
- model_id and response_format in chat_completion
- adapter_id in text_generation
- hypothesis_template and multi_labels in zero_shot_classification
Of course, all of those changes are also available in the AsyncInferenceClient
async equivalent 🤗
- Support truncate and normalize in InferenceClient by @Wauplin in #2270
- Add prompt_name to feature-extraction + update types by @Wauplin in #2363
- Send model_id in ChatCompletion request by @Wauplin in #2302
- improve client.zero_shot_classification() by @MoritzLaurer in #2340
- [InferenceClient] Add support for adapter_id (text-generation) and response_format (chat-completion) by @Wauplin in #2383
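A hedged example of the new feature_extraction parameters; the model id and prompt name are illustrative assumptions:
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> embedding = client.feature_extraction(
...     "What is the capital of France?",
...     model="intfloat/multilingual-e5-large",  # illustrative model id
...     normalize=True,       # new parameter
...     truncate=True,        # new parameter
...     prompt_name="query",  # new parameter, must match a prompt defined in the model config
... )
>>> print(embedding.shape)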
Added helpers for TGI servers:
- get_endpoint_info to get information about an endpoint (running model, framework, etc.). Only available on TGI/TEI-powered models.
- health_check to check the health status of the server. Only available on TGI/TEI-powered models and only for Inference Endpoints or local deployment. For serverless Inference API, it's better to use get_model_status.
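A quick sketch of the two helpers against a TGI-powered deployment; the URL is a placeholder for a local or dedicated endpoint:
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("http://localhost:8080")  # placeholder TGI deployment URL
>>> print(client.get_endpoint_info())  # running model, framework, etc.
>>> print(client.health_check())       # True if the server is up and healthy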
Other fixes:
- image_to_text output type has been fixed
- use wait-for-model to avoid being rate limited while the model is not loaded
- add proxies support
- Fix InferenceClient.image_to_text output value by @Wauplin in #2285
- Fix always None in text_generation output by @Wauplin in #2316
- Add wait-for-model header when sending request to Inference API by @Wauplin in #2318
- Add proxy support on async client by @noech373 in #2350
- Remove jinja tips + fix typo in chat completion docstring by @Wauplin in #2368
💾 Serialization
The serialization module introduced in v0.22.x
has been improved to become the preferred way to serialize a torch model to disk. It handles sharding and safe serialization (using safetensors
) out of the box, with subtleties to work with shared layers. This logic was previously scattered in libraries like transformers
, diffusers
, accelerate
and safetensors
. The goal of centralizing it in huggingface_hub
is to allow any external library to safely benefit from the same naming convention, making it easier to manage for end users.
>>> from huggingface_hub import save_torch_model
>>> model = ... # A PyTorch model
# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> save_torch_model(model, "path/to/folder")
# Or save the state dict manually
>>> from huggingface_hub import save_torch_state_dict
>>> save_torch_state_dict(model.state_dict(), "path/to/folder")
More details in the serialization package reference.
- Serialization: support saving torch state dict to disk by @Wauplin in #2314
- Handle shared layers in save_torch_state_dict + add save_torch_model by @Wauplin in #2373
Some helpers related to serialization have been made public for reuse in external libraries:
- get_torch_storage_id
- get_torch_storage_size
- Support max_shard_size as string in split_state_dict_into_shards_factory by @SunMarc in #2286
- Make get_torch_storage_id public by @Wauplin in #2304
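A minimal sketch of these two helpers (the top-level import path is an assumption):
>>> import torch
>>> from huggingface_hub import get_torch_storage_id, get_torch_storage_size
>>> tensor = torch.zeros(1024, 1024)
>>> print(get_torch_storage_id(tensor))    # identifier of the underlying storage (shared across views)
>>> print(get_torch_storage_size(tensor))  # storage size in bytes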
📁 HfFileSystem
The HfFileSystem
has been improved to optimize calls, especially when listing files from a repo. This is especially useful for large datasets like HuggingFaceFW/fineweb for faster processing and reducing risk of being rate limited.
- [HfFileSystem] Less /paths-info calls by @lhoestq in #2271
- Update token type definition and arg description in hf_file_system.py by @lappemic in #2278
- [HfFileSystem] Faster fs.walk() by @lhoestq in #2346
Thanks to @lappemic, HfFileSystem
methods are now properly documented. Check it out here!
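As a quick illustration of a typical listing call that benefits from these optimizations (the glob pattern is illustrative):
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
>>> # Listing files from a large dataset repo now triggers fewer /paths-info calls
>>> files = fs.glob("datasets/HuggingFaceFW/fineweb/data/**/*.parquet")
>>> print(len(files))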
✨ HfApi & CLI improvements
Commit API
A new mechanism has been introduced to prevent empty commits if no changes have been detected. Enabled by default in upload_file
, upload_folder
, create_commit
and the huggingface-cli upload
command. There is no way to force an empty commit.
Resource groups
Resource Groups allow organization administrators to group related repositories together and manage access to those repos. It is now possible to specify a resource group ID when creating a repo:
from huggingface_hub import create_repo
create_repo("my-secret-repo", private=True, resource_group_id="66670e5163145ca562cb1988")
Webhooks API
Webhooks allow you to listen for new changes on specific repos or to all repos belonging to a particular set of users/organizations (not just your repos, but any repo). With the Webhooks API you can create, enable, disable, delete, update, and list webhooks from a script!
from huggingface_hub import create_webhook
# Example: Creating a webhook
webhook = create_webhook(
url="https://webhook.site/your-custom-url",
watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}],
domains=["repo", "discussion"],
secret="your-secret"
)
Search API
The search API has been slightly improved. It is now possible to:
- filter datasets by tags
- filter which attributes should be returned in
model_info
/list_models
(and similarly for datasets/Spaces). For example, you can ask the server to return downloadsAllTime
for all models.
>>> from huggingface_hub import list_models
>>> for model in list_models(library="transformers", expand="downloadsAllTime", sort="downloads", limit=5):
... print(model.id, model.downloads_all_time)
MIT/ast-finetuned-audioset-10-10-0.4593 1676502301
sentence-transformers/all-MiniLM-L12-v2 115588145
sentence-transformers/all-MiniLM-L6-v2 250790748
google-bert/bert-base-uncased 1476913254
openai/clip-vit-large-patch14 590557280
- Support filtering datasets by tags by @Wauplin in #2266
- Support expand parameter in xxx_info and list_xxxs (model/dataset/Space) by @Wauplin in #2333
- Add InferenceStatus to ExpandModelProperty_T by @Wauplin in #2388
- Do not mention gitalyUid in expand parameter by @Wauplin in #2395
CLI
It is now possible to delete files from a repo using the command line:
Delete a folder:
>>> huggingface-cli repo-files Wauplin/my-cool-model delete folder/
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...
Use Unix-style wildcards to delete sets of files:
>>> huggingface-cli repo-files Wauplin/my-cool-model delete *.txt folder/*.bin
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...
- fix/issue 2090: Add a repo_files command, with recursive deletion, by @OlivierKessler01 in #2280
ModelHubMixin
The ModelHubMixin
, allowing for quick integration of external libraries with the Hub, has been updated to fix some existing bugs and ease its use. Learn how to integrate your library from this guide.
- Don't override 'config' in model_kwargs by @alexander-soare in #2274
- Support custom kwargs for model card in save_pretrained by @qubvel in #2310
- ModelHubMixin: Fix attributes lost in inheritance by @Wauplin in #2305
- Fix ModelHubMixin coders by @gorold in #2291
- Hot-fix: do not share tags between ModelHubMixin siblings by @Wauplin in #2394
- Fix: correctly encode/decode config in ModelHubMixin if custom coders by @Wauplin in #2337
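A minimal sketch of an integration using the PyTorch flavor of the mixin; the class and repo names are placeholders:
>>> import torch
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyModel(torch.nn.Module, PyTorchModelHubMixin):
...     def __init__(self, hidden_size: int = 16):
...         super().__init__()
...         self.linear = torch.nn.Linear(hidden_size, 1)
...
...     def forward(self, x):
...         return self.linear(x)
>>> model = MyModel(hidden_size=32)
>>> # push_to_hub / from_pretrained are inherited from the mixin:
>>> # model.push_to_hub("your-username/my-model")           # placeholder repo id
>>> # reloaded = MyModel.from_pretrained("your-username/my-model")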
🌐 📚 Documentation
Efforts from the Korean-speaking community continued to translate guides and package references to KO! Check out the result here.
- 🌐 [i18n-KO] Trans...