Releases: huggingface/huggingface_hub
[v0.34.1] [CLI] print help if no command provided
Full Changelog: v0.34.0...v0.34.1
[v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face!
🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!
We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new `hf jobs` command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs, all with simple, intuitive commands.
Key features:
- 🐳 Docker-like CLI: Familiar commands (`run`, `ps`, `logs`, `inspect`, `cancel`) to run and manage jobs
- 🔥 Any Hardware: Instantly access CPUs, T4/A10G/A100 GPUs, and TPUs with a simple flag
- 📦 Run Anything: Use Docker images, HF Spaces, or custom containers
- 📊 Live Monitoring: Stream logs in real-time, just like running locally
- 💰 Pay-as-you-go: Only pay for the seconds you use
- 🧬 UV Runner: Run Python scripts with inline dependencies using `uv` (experimental)
All features are available both from Python (`run_job`, `list_jobs`, etc.) and the CLI (`hf jobs`).
Example usage:

```bash
# Run a Python script on the cloud
hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

# Use a GPU
hf jobs run --flavor=t4-small --namespace=huggingface ubuntu nvidia-smi

# List your jobs
hf jobs ps

# Stream logs from a job
hf jobs logs <job-id>

# Inspect job details
hf jobs inspect <job-id>

# Cancel a running job
hf jobs cancel <job-id>

# Run a UV script (experimental)
hf jobs uv run my_script.py --flavor=a10g-small --with=trl
```
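The same flow is available from Python. Here is a minimal sketch; the keyword names `image` and `command` are assumptions mirroring the CLI flags above, so check the `run_job` reference for the exact signature:

```python
from huggingface_hub import run_job, list_jobs

# Run a Python script on the cloud (parameter names assumed from the CLI flags)
job = run_job(
    image="python:3.12",
    command=["python", "-c", "print('Hello from the cloud!')"],
)

# List your jobs, the Python counterpart of `hf jobs ps`
for job_info in list_jobs():
    print(job_info.id)
```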
You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental `uv` runner for Python scripts with inline dependencies.
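For instance, a job that reads an injected environment variable might look like this (a sketch: the `--env`/`--secrets` flags are assumed to mirror `docker run`; verify with `hf jobs run --help`):

```bash
# Pass an env var and expose a secret to the job (flag names assumed)
hf jobs run --env GREETING=hello --secrets HF_TOKEN \
    python:3.12 python -c "import os; print(os.environ['GREETING'])"
```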
Check out the Jobs guide for more examples and details.
- [Jobs] Add huggingface-cli jobs commands by @lhoestq in #3211
- Rename huggingface-cli jobs to hf jobs by @Wauplin in #3250
- Docs: link to jobs cli docs by @lhoestq in #3253
- [Jobs] Mention PRO is required by @Wauplin in #3257
🚀 The CLI is now `hf`! (formerly `huggingface-cli`)
We're glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from `huggingface-cli` to `hf`! The legacy `huggingface-cli` remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format, `hf <resource> <action> [options]` (e.g. `hf auth login`, `hf repo create`, `hf jobs run`).
Run `hf --help` to learn more about the CLI options.
```
✗ hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,jobs,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    jobs                Run and manage Jobs on the Hub.
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit
```
- Rename CLI to 'hf' + reorganize syntax by @Wauplin in #3229
- Rename huggingface-cli jobs to hf jobs by @Wauplin in #3250
⚡ Inference
🖼️ Image-to-image
Added support for the `image-to-image` task in the `InferenceClient` for Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
# or, equivalently:
# client = InferenceClient(provider="replicate")

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)
```
- [Inference Providers] add `image-to-image` support for Replicate provider by @hanouticelina in #3188
- [Inference Providers] add `image-to-image` support for fal.ai provider by @hanouticelina in #3187
In addition to this, it is now possible to directly pass a `PIL.Image` as input to the `InferenceClient`.
- Add PIL Image support to InferenceClient by @NielsRogge in #3199
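With that change, the earlier snippet can skip the manual file read; a minimal sketch, otherwise identical to the bytes-based call above:

```python
from PIL import Image
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")

# Pass a PIL.Image directly instead of raw bytes
image = client.image_to_image(
    Image.open("cat.png"),
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)
```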
🤖 Tiny-Agents
`tiny-agents` got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the config format from VS Code more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:
```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "github-personal-access-token",
      "description": "Github Personal Access Token (read-only)",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "-e",
        "GITHUB_TOOLSETS=repos,issues,pull_requests",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-personal-access-token}"
      }
    }
  ]
}
```
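To try it out, save the config as `agent.json` and launch it with the CLI (a sketch: it assumes the `mcp` extra is installed and that `tiny-agents run` accepts a path to the config file):

```bash
pip install "huggingface_hub[mcp]"
tiny-agents run ./agent.json
```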
- [Tiny-Agent] Fix headers handling + secrets management by @Wauplin in #3166
- [tiny-agents] Configure inference API key from inputs + keep empty dicts in chat completion payload by @hanouticelina in #3226
🐛 Bug fixes
`InferenceClient` and `tiny-agents` got a few quality-of-life improvements and bug fixes:
- Recursive filter_none in Inference Providers by @Wauplin in #3178
- [Inference] Remove default params values for text generation by @hanouticelina in #3192
- [Inference] Correctly build chat completion URL with query parameters by @hanouticelina in #3200
- Update tiny-agents example by @Wauplin in #3205
- Fix "failed to parse tools" due to mcp EXIT_LOOP_TOOLS not following the ChatCompletionInputFunctionDefinition model by @nicoloddo in #3219
- [Tiny agents] Add tool call to messages by @NielsRogge in #3159
- omit parameters for default tools in tiny-agent by @hanouticelina in #3214
📤 Xet
Integration of Xet is now stable and production-ready. A majority of file transfers are now handled using this protocol on new repos. A few improvements have been shipped to smooth the developer experience during uploads:
- Improved progress reporting for Xet uploads by @hoytak in #3096
- upload large folder operations uses batches of files for preupload-lfs jobs for xet-enabled repositories by @assafvayner in #3228
- Override xet refresh route's base URL with HF Endpoint by @hanouticelina in #3180
Documentation has already been written to better explain the protocol and its options:
- Updates to Xet upload/download docs by @jsulz in #3174
- Updating Xet caching docs by @jsulz in #3190
- Suppress xet install WARN if HF_HUB_DISABLE_XET by @rajatarya in #3206
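As the last item above hints, Xet can also be opted out of entirely with an environment variable, forcing the classic HTTP transfer path:

```bash
# Disable Xet transfers (also suppresses the hf-xet install warning)
export HF_HUB_DISABLE_XET=1
```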
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- fix: update payload preparation to merge parameters into the output dictionary by @mishig25 in #3160
- fix(inference_endpoints): use GET `healthRoute` instead of GET / to check status by @mfuntowicz in #3165
- Update hf_api.py by @andimarafioti in #3194
- [Docs] Remove Inference API references in docs by @hanouticelina in #3197
- Align HfFileSystem and HfApi for the `expand` argument when listing files in repos by @lhoestq in #3195
- Solve encoding issue of repocard.py by @WilliamRabuel in #3235
- Fix pagination test by @Wauplin in #3246
- Fix Incomplete File Not found on windows systems by @JorgeMIng in #3247
- [Internal] Fix docstring param spacing check and `libcst` incompatibility with Python 3.13 by @hanouticelina in #3251
- [Bot] Update inference types by @HuggingFaceInfra in #3104
- Fix snapshot_download when unreliable number of files by @Wauplin in #3241
- fix typo by @Wauplin (direct commit on main)
- fix sessions closing warning with AsyncInferenceClient by @hanouticelina in #3252
- Deprecate missing_mfa, missing_sso, adding security_restrictions by @Kakulukian in #3254
🏗️ internal
- swap gh style bot action token by @hanouticelina in #3171
- improve style bot comment (notify earlier and update later) by @ydshieh in #3179
- Update tests following server-side changes by @hanouticelina in #3181
- [FIX DOCSTRING] Update hf_api.py by @cakiki in #3182
- Bump to 0.34.0.dev0 by @Wauplin in #3222
- Do not generate Chat Completion types anymore by @Wauplin in #3231
[v0.33.5] [Inference] Fix a `UserWarning` when streaming with `AsyncInferenceClient`
- Fix: "UserWarning: ... sessions are still open..." when streaming with `AsyncInferenceClient` #3252
Full Changelog: v0.33.4...v0.33.5
[v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools
- Omit parameters in default tools of tiny-agent #3214
Full Changelog: v0.33.3...v0.33.4
[v0.33.3] [Tiny-Agent]: Update tiny-agents example
- Update tiny-agents example #3205
Full Changelog: v0.33.2...v0.33.3
[v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format
Full Changelog: v0.33.1...v0.33.2
Breaking changes:
- no more config nested mapping => everything at root level
- headers at root level instead of inside options.requestInit
- updated the way values are pulled from ENV (based on input id)
Example of `agent.json`:
```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}
```
Find more examples at https://huggingface.co/datasets/tiny-agents/tiny-agents
[v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message Handling Improvement, and Inference Endpoints Health Check Update
Full Changelog: v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check:
- [Tiny agents] Add tool call to messages #3159 by @NielsRogge
- fix: update payload preparation to merge parameters into the output dictionary #3160 by @mishig25
- fix(inference_endpoints): use GET healthRoute instead of GET / to check status #3165 by @mfuntowicz
- Recursive filter_none in Inference Providers #3178 by @Wauplin
[v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!
⚡ New provider: Featherless.AI
Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers often offer either low-cost access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the models page.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
⚡ New provider: Groq
At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
Groq offers fast AI inference for openly available models. They provide an API that allows developers to easily integrate these models into their applications, with an on-demand, pay-as-you-go model for accessing a wide range of openly available LLMs.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)
```
🤖 MCP and Tiny-agents
It is now possible to run tiny-agents using a local server, e.g. llama.cpp. 100% local agents are right around the corner!
- [MCP] Add local/remote endpoint inference support by @hanouticelina in #3121
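For example, pointing an agent at a llama.cpp server running locally might look like the sketch below (the `endpointUrl` field name is an assumption based on the tiny-agents config format; JSON has no comments, so treat every value here as illustrative):

```json
{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": []
}
```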
Fixing some DX issues in the `tiny-agents` CLI.
- Fix `tiny-agents` cli exit issues by @Wauplin in #3125
- [MCP] reinject JSON parse & runtime tool errors back into the chat history by @hanouticelina in #3137
📚 Documentation
New translation from the Hindi-speaking community, for the community!
- Added Hindi translation for git_vs_http.md in concepts section by @february-king in #3156
🛠️ Small fixes and maintenance
😌 QoL improvements
- Make hf-xet more silent by @Wauplin in #3124
- [HfApi] Collections in collections by @hanouticelina in #3120
- Fix inference search by @Wauplin in #3022
- [Inference Providers] Raise warning if provider's status is in error mode by @hanouticelina in #3141
🐛 Bug and typo fixes
- Fix snapshot_download on very large repo (>50k files) by @Wauplin in #3122
- fix tqdm_class argument of subclass of tqdm by @andyxning in #3111
- fix quality by @hanouticelina in #3128
- Fix second example in OAuth documentation by @thanosKivertzikidis in #3136
- fix table question answering by @hanouticelina in #3154
🏗️ internal
- Create claude.yml by @julien-c in #3118
- [Internal] prepare for 0.33.0 release by @hanouticelina in #3138
Significant community contributions
The following contributors have made significant changes to the library over the last release:
[v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid
- Fix for wrongly saved upload_mode/remote_oid #3113
Full Changelog: v0.32.5...v0.32.6
[v0.32.5] [Tiny-Agents] inject environment variables in headers
- Inject env var in headers + better type annotations #3142
Full Changelog: v0.32.4...v0.32.5