Chatterbox Turbo

Hello everyone. When installing Chatterbox Turbo locally, I encountered this error. Please help me fix it. New tokens were created, but the problem persists.

huggingface_hub.errors.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with hf auth login or huggingface_hub.login. See https://huggingface.co/settings/tokens.


The method for setting environment variables varies depending on your OS, but I think setting the HF_TOKEN environment variable is the simplest approach. For greater reliability, use hf auth login.


This error means one specific thing: some code called Hugging Face Hub with token=True, and your runtime could not find a locally stored Hugging Face token. Creating new tokens on the Hugging Face website does not fix anything unless the token is actually available inside the environment that runs Chatterbox Turbo.


What the error means (background)

Hugging Face Hub download functions accept a token argument with these semantics:

  • token=True means “read the token from the Hugging Face config folder.”
  • token=False or None means “do not use a token.”
  • token="hf_..." means “use this token string explicitly.” (Hugging Face)

Your traceback says: Token is required (token=True), but no token found.
So the code path is explicitly requesting token=True, and the library is failing because it cannot find a cached login token.
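These semantics can be mimicked in a few lines of plain Python (a simplified sketch; the real resolution logic lives inside huggingface_hub, not here):

```python
# Simplified sketch of the token-argument semantics above; the actual
# huggingface_hub implementation is more involved, but the contract is this.
def describe(token):
    if token is True:
        return "read the cached token from disk (LocalTokenNotFoundError if none)"
    if token is False or token is None:
        return "send no token (anonymous request)"
    return "use the explicit token string"

print(describe(True))
print(describe(None))
print(describe("hf_example_token"))
```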


Why this happens specifically with Chatterbox Turbo

In the official Hugging Face Space code for Chatterbox Turbo, from_pretrained() downloads weights via snapshot_download(), passing a token argument that falls back to True whenever the HF_TOKEN environment variable is unset.

That fallback causes your exact failure mode:

  • If HF_TOKEN environment variable is set, it uses it.
  • If HF_TOKEN is not set (or is empty), it falls back to True.
  • True triggers “must read cached token from disk,” and if none is found, you get LocalTokenNotFoundError. (Hugging Face)

So you can see this error even when downloading a public model repo, because the code forces auth unless HF_TOKEN exists.
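The fallback can be reproduced as a stand-in (pick_token is a hypothetical helper mirroring the behavior described above, not the upstream code verbatim):

```python
# Hypothetical helper mirroring the described fallback: an unset OR empty
# HF_TOKEN yields True, which forces huggingface_hub to find a cached login.
def pick_token(env):
    return env.get("HF_TOKEN") or True

print(pick_token({}))                      # True  (unset -> cached token required)
print(pick_token({"HF_TOKEN": ""}))        # True  (empty string is falsey too)
print(pick_token({"HF_TOKEN": "hf_abc"}))  # hf_abc (used directly)
```

Note the empty-string case: exporting HF_TOKEN="" does not disable auth; it still triggers the True branch.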


Root causes in real local installs (most common)

Cause 1: You created tokens online, but did not log in locally

Creating a token at huggingface.co/settings/tokens only creates credentials.
Your machine still has no saved token unless you do one of these:

  • Set HF_TOKEN in the environment.
  • Run hf auth login to save the token locally. (Hugging Face)

Cause 2: You logged in, but you are running under a different user or “different home”

This is extremely common with:

  • sudo python ...
  • Docker containers
  • systemd services
  • remote processes
  • IDE run configs that do not inherit your shell env

Hugging Face stores the token under HF_HOME (default ~/.cache/huggingface) and by default the token file is at ~/.cache/huggingface/token. (Hugging Face)
If your process runs with a different home directory, it will not see the saved token.
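You can print the exact path this process will search, using only the stdlib (defaults taken from the environment-variable docs cited above; run it as the same user/process that launches Chatterbox Turbo):

```python
import os
from pathlib import Path

# Default token location per the HF environment-variable docs;
# HF_TOKEN_PATH and HF_HOME override it when set.
hf_home = os.environ.get("HF_HOME") or str(Path.home() / ".cache" / "huggingface")
token_path = os.environ.get("HF_TOKEN_PATH") or os.path.join(hf_home, "token")
print("token file:", token_path, "| exists:", os.path.exists(token_path))
```

If this prints exists: False under sudo/Docker/systemd but True in your normal shell, you have found the mismatch.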

Cause 3: You set the wrong environment variable

huggingface_hub standardizes on HF_TOKEN (older code sometimes read the deprecated HUGGING_FACE_HUB_TOKEN instead).

If you set something else, Chatterbox Turbo still sees HF_TOKEN as missing and falls back to True. (Hugging Face)


Fixes that work reliably

Fix A (most reliable): Set HF_TOKEN where you run Chatterbox Turbo

This avoids all “where is the token file” issues.

Linux/macOS (bash/zsh):

export HF_TOKEN="hf_your_token_here"
python -c "import os; print(bool(os.getenv('HF_TOKEN')))"

Windows PowerShell:

$env:HF_TOKEN="hf_your_token_here"
python -c "import os; print(bool(os.getenv('HF_TOKEN')))"

Why this fixes it: Chatterbox Turbo checks HF_TOKEN first, and only falls back to token=True if it is missing. (Hugging Face)
Also, HF_TOKEN overrides any token stored on disk. (Hugging Face)

Fix B: Log in locally with the Hugging Face CLI

Run:

hf auth login
hf auth whoami

The docs state hf auth login validates and saves the token to HF_HOME (default ~/.cache/huggingface/token), and other libraries reuse it automatically. (Hugging Face)

If hf auth whoami works but your app still fails, you are almost certainly hitting the “different user / different HF_HOME” pitfall.

Fix C (Jupyter/Notebook): Login programmatically

If you run in a notebook kernel that does not inherit your shell login, use the Python authentication API. (Hugging Face)

Example (do not paste tokens into shared notebooks):

from huggingface_hub import login
login("hf_your_token_here")

Quick verification checklist (pinpoint the exact mismatch)

Run these in the same terminal/session that launches Chatterbox Turbo:

  1. Confirm whether HF_TOKEN is set:
python -c "import os; print('HF_TOKEN set:', bool(os.getenv('HF_TOKEN')))"
  2. Check where Hugging Face is looking for the token:
python -c "import os; print('HF_HOME=', os.getenv('HF_HOME')); print('HF_TOKEN_PATH=', os.getenv('HF_TOKEN_PATH'))"

What matters:

  • If HF_TOKEN is falsey, Chatterbox Turbo will force token=True and require a cached token. (Hugging Face)
  • If HF_HOME is set to something unexpected, your login token may be saved somewhere else. (Hugging Face)
  3. Confirm CLI login state:
hf auth whoami

If this fails, you are not logged in for that user environment. (Hugging Face)


Workarounds if you want “no token required”

Workaround 1: Patch the token forcing line

In chatterbox/tts_turbo.py, the forcing behavior is (roughly) token=os.getenv("HF_TOKEN") or True; the or True fallback is what demands a cached login.

If you change it to:

  • token=os.getenv("HF_TOKEN")
    then public downloads typically work without login, because token becomes None when unset, and “no token is provided.” (Hugging Face)

This is a local patch. Upstream updates can overwrite it.
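The change in isolation, simulated with a plain dict so it runs anywhere:

```python
# Before (forces auth):    token = os.getenv("HF_TOKEN") or True
# After  (optional auth):  token = os.getenv("HF_TOKEN")
env = {}                     # simulate a shell where HF_TOKEN is unset
token = env.get("HF_TOKEN")  # None -> huggingface_hub sends no token at all
print("token:", token)       # token: None
```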

Workaround 2: Download once, then load from local directory

That same file includes a from_local() loader that takes a local checkpoint directory and loads weights from there. (Hugging Face)
So you can:

  1. download model files by any method you control (or on another machine),
  2. copy them into a directory,
  3. call from_local(path, device) instead of from_pretrained(device).

This is also the path you use on offline machines.
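A sketch of the guard you might wrap around this (the ChatterboxTTS class name and the directory name are assumptions; check the upstream file for the real signature):

```python
from pathlib import Path

def load_plan(ckpt_dir: Path) -> str:
    """Decide between offline loading and downloading first (sketch only)."""
    if ckpt_dir.is_dir():
        # e.g. model = ChatterboxTTS.from_local(str(ckpt_dir), "cuda")
        return "from_local"
    return "download first"

print(load_plan(Path("checkpoints/chatterbox-turbo")))
```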


Extra context: token storage and common “it still doesn’t work” pitfalls

Token storage locations

  • HF_HOME defaults to ~/.cache/huggingface. (Hugging Face)
  • Token path defaults to "$HF_HOME/token" (so typically ~/.cache/huggingface/token). (Hugging Face)
  • Setting HF_TOKEN overrides the stored token. (Hugging Face)

Pitfall: sudo breaks it

If you ran hf auth login as your normal user but run the app with sudo, root has a different ~ and a different HF_HOME. Result: no token found.

Pitfall: Docker breaks it unless you pass env or mount cache

Inside Docker, you usually need either:

  • -e HF_TOKEN=...
  • or mount the Hugging Face cache directory into the container
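As a docker-compose sketch (the image name and paths are placeholders; pick one of the two mechanisms):

```yaml
services:
  chatterbox:
    image: my-chatterbox-image          # placeholder image name
    environment:
      - HF_TOKEN=${HF_TOKEN}            # option 1: pass the token through
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface:ro  # option 2: reuse host cache
```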

Pitfall: “token created” but wrong scope

For reading/downloading, a read token is typically sufficient. Hugging Face describes tokens and scopes in their security tokens docs. (Hugging Face)


References (URLs)

https://huggingface.co/spaces/ResembleAI/chatterbox-turbo-demo/blob/main/chatterbox/tts_turbo.py
https://huggingface.co/docs/huggingface_hub/en/package_reference/file_download
https://huggingface.co/docs/huggingface_hub/en/quick-start
https://huggingface.co/docs/huggingface_hub/en/package_reference/environment_variables
https://huggingface.co/docs/huggingface_hub/en/guides/cli
https://huggingface.co/docs/huggingface_hub/en/package_reference/authentication

Summary

  • Your error persists because Chatterbox Turbo’s from_pretrained() forces token=True when HF_TOKEN is missing. (Hugging Face)
  • Creating tokens online is not enough. You must either set HF_TOKEN or run hf auth login in the same runtime environment. (Hugging Face)
  • If it still fails after login, the cause is almost always “different user / different HF_HOME.” (Hugging Face)
  • Fastest reliable fix: set HF_TOKEN in the exact shell/container that runs Chatterbox Turbo. (Hugging Face)

I’m sorry, I’m just learning. As far as I understand, I don’t have an “hf_token” environment. I don’t know which specific commands to use or how to set up the environment.

(venv) C:\Chatterbox Turbo\chatterbox-turbo-demo>python -c "import os; print('HF_TOKEN set:', bool(os.getenv('HF_TOKEN')))"
HF_TOKEN set: False

(venv) C:\Chatterbox Turbo\chatterbox-turbo-demo>python -c "import os; print('HF_HOME=', os.getenv('HF_HOME')); print('HF_TOKEN_PATH=', os.getenv('HF_TOKEN_PATH'))"
HF_HOME= None
HF_TOKEN_PATH= None

(venv) C:\Chatterbox Turbo\chatterbox-turbo-demo>hf auth whoami
Not logged in


I assume you are using Windows…


You are in the “expected broken” state for this project on Windows:

  • HF_TOKEN env var is not set.
  • hf auth whoami says “Not logged in”.

Chatterbox Turbo’s code reads os.getenv("HF_TOKEN") when it downloads the model, falling back to True if that variable is missing. In huggingface_hub, token=True means “I must find a locally saved token.” If you are not logged in, there is no saved token, so you get LocalTokenNotFoundError. (Hugging Face)

You fix it by giving Hugging Face a token in one of two beginner-friendly ways.


Background: what an “environment variable” is (Windows-friendly)

An environment variable is just a named setting that programs can read. On Windows, you can set it:

  • Temporarily, only for the current terminal window.
  • Permanently, via Windows GUI (recommended for beginners), so every new terminal can see it.

HF_TOKEN is the standard environment variable name that Hugging Face uses for authentication, and it overrides any token saved on disk. (Hugging Face)

Your venv is unrelated. A Python venv manages Python packages. It does not automatically create HF_TOKEN.
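You can see the mechanics in two lines of Python (DEMO_VAR is just a throwaway name for illustration):

```python
import os

# Environment variables are plain key/value strings a process inherits from
# its parent shell; os.getenv returns None for anything that was never set.
os.environ["DEMO_VAR"] = "hello"       # set for this process (and children)
print(os.getenv("DEMO_VAR"))           # hello
print(os.getenv("SURELY_UNSET_VAR"))   # None
```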


Choose one: easiest two solutions

Solution A (recommended): log in once with hf auth login

This stores the token so token=True can find it.

  1. Make sure you have a token on the website (you already do). Prefer a read token for downloading models. Hugging Face recommends least privilege and separate tokens per machine. (Hugging Face)
    Tokens page: https://huggingface.co/settings/tokens

  2. In the same terminal where you run Chatterbox Turbo (your (venv) C:\Chatterbox Turbo\...> prompt), run:

hf auth login
  3. It will prompt for the token. Paste the token (starts with hf_). Do not add quotes.

  4. Verify:

hf auth whoami

You should see your Hugging Face username. The CLI guide shows that login saves the token locally and then whoami works. (Hugging Face)

  5. Run Chatterbox Turbo again.

Why this works: now when Chatterbox Turbo passes token=True (because HF_TOKEN is unset), Hugging Face Hub can find the saved token. (Hugging Face)


Solution B (Windows GUI): set HF_TOKEN permanently

This is often the most beginner-proof method because it avoids “where is my token saved” questions.

Step-by-step via Windows GUI

  1. Open the environment variable editor:
  • Press the Windows key and type environment variables

  • Click either:

    • Edit environment variables for your account (user-level), or
    • Edit the system environment variables then click Environment Variables…
      A common Windows path is to search Settings for “Edit environment variables for your account.” (Server Fault)
  2. In the User variables section (top box), click New…

  3. Enter:

  • Variable name: HF_TOKEN
  • Variable value: your token (starts with hf_...)
  4. Click OK on all dialogs to save.

  5. Important: close your terminal and open a new terminal. Many environment changes only appear in new processes.

  6. Activate your venv again, then verify:

python -c "import os; print('HF_TOKEN set:', bool(os.getenv('HF_TOKEN')))"

You want HF_TOKEN set: True.

  7. Run Chatterbox Turbo again.

Why this works: Chatterbox Turbo checks os.getenv("HF_TOKEN") first. If it exists, it uses that token and does not fall back to True. (Hugging Face)

Security note (worth reading once)

Do not paste your token into code and commit it. Hugging Face explicitly warns against hardcoding tokens and recommends safer storage like environment variables. (Hugging Face)


Optional: set HF_TOKEN from the command line (if you prefer)

You asked for GUI, but these are handy.

Temporary (current CMD window only)

set HF_TOKEN=hf_your_token_here

Permanent (future terminals)

setx HF_TOKEN "hf_your_token_here"

setx writes to the registry and is available in future command windows, not the current one. (Microsoft Learn)


If it still fails after login or setting HF_TOKEN

These are the usual Windows gotchas:

  1. You set HF_TOKEN, but you did not open a new terminal
    Open a new CMD/PowerShell window and re-check HF_TOKEN set: True. setx in particular requires a new window. (Microsoft Learn)

  2. You are launching from an IDE that does not inherit your environment
    If you run from VS Code / PyCharm, set HF_TOKEN in that run configuration or use the CLI login method.

  3. Token pasted with extra characters
    Paste only the token string (starts hf_). No quotes, no spaces.

  4. You used the old command name
    Current docs use hf auth login and hf auth whoami. The HF blog shows the migration from huggingface-cli login to hf auth login. (Hugging Face)



Summary

  • Your machine has no Hugging Face auth set up. That is why token=True fails. (Hugging Face)
  • Fix option A: hf auth login then hf auth whoami. (Hugging Face)
  • Fix option B: set HF_TOKEN via Windows GUI User variables, reopen terminal, verify HF_TOKEN set: True. (Hugging Face)
  • Do not hardcode tokens. Use env vars. (Hugging Face)

Thank you! It helped me install it using the command line. The installation was successful up to this point:

Traceback (most recent call last):
File "C:\Chatterbox Turbo\chatterbox-turbo-demo\app.py", line 164, in <module>
demo.queue().launch(
TypeError: Blocks.launch() got an unexpected keyword argument 'css' 😑


Gradio breaks compatibility between versions, so either pin the version or use the version the software recommends…


That traceback is a Gradio version mismatch.

Your app.py calls:

demo.queue().launch(mcp_server=True, css=CUSTOM_CSS, ssr_mode=False)

You can see css= is passed into launch() in the upstream demo code. (Hugging Face)

But the Gradio version you have installed does not support css as a launch() keyword, so Python raises:

TypeError: Blocks.launch() got an unexpected keyword argument 'css'

This is exactly what happens when code written for one Gradio API is run on a different Gradio major version.
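You can reproduce the failure shape without Gradio at all (launch here is a stand-in function, not Gradio's):

```python
# A Python function raises TypeError for any keyword its signature does not
# list; that is all this error is.
def launch(share=False):         # pretend: an older launch() without `css`
    return "ok"

try:
    launch(css="body { }")       # newer-style call against the older signature
except TypeError as exc:
    print(exc)                   # ...got an unexpected keyword argument 'css'
```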


Background: why this happens (Gradio 5 vs Gradio 6)

Gradio made a breaking change around v6:

  • In Gradio 5.x, app-level settings like css, theme, js, head lived on the gr.Blocks(...) constructor.
  • In Gradio 6.x, those app-level settings were moved to Blocks.launch(...). (Gradio)

So:

  • If your code does demo.launch(css="..."), it expects Gradio 6.x behavior. (Gradio)
  • If you are on Gradio 5.x, launch() won’t accept css= and you get your exact error.

There is also a clue in the demo repo itself:

  • requirements.txt pins gradio==5.44.1. (Hugging Face)
  • The Space metadata (README frontmatter) says sdk_version: 6.0.2. (Hugging Face)

So the upstream demo currently mixes signals: code looks like Gradio 6-style (css in launch), while requirements.txt pins Gradio 5. That is why local installs often trip here.


Step 1: Confirm your installed Gradio version (Windows CMD)

In your venv:

python -c "import gradio as gr; print(gr.__version__)"
pip show gradio

If you see 5.x (very likely), that explains the error.


Fix path A (recommended): Upgrade Gradio to match the code (Gradio 6)

This is usually the cleanest because your app.py is already written in the “css in launch” style used in Gradio 6. (Gradio)

A1) Upgrade Gradio inside your venv

Pick one of these:

Match the Space config exactly (likely safest):

pip install --upgrade "gradio==6.0.2"

If you want MCP support (because your code uses mcp_server=True):
Gradio’s MCP guide says to install the MCP extra: pip install "gradio[mcp]". (Gradio)
So you can do:

pip install --upgrade "gradio[mcp]==6.0.2"

A2) Re-check version

python -c "import gradio as gr; print(gr.__version__)"

A3) Run again

python app.py

A4) Prevent a downgrade later

If you re-run pip install -r requirements.txt and that file still pins gradio==5.44.1, it can downgrade you again. (Hugging Face)
So either:

  • edit requirements.txt to gradio==6.0.2, or
  • remove the gradio==... line entirely and install Gradio separately.
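If you edit the file, the pinned line would look like this (6.0.2 matches the Space metadata cited above; the [mcp] extra is optional and only needed for mcp_server=True):

```text
gradio[mcp]==6.0.2
```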

Fix path B: Keep Gradio 5 and change app.py to the Gradio 5 style

If you prefer not to upgrade, you can “move css back” into the Blocks() constructor, which is how Gradio 5 expects it. (This is the inverse of the Gradio 6 migration note.) (Gradio)

B1) Change the Blocks line

Find:

with gr.Blocks(title="Chatterbox Turbo") as demo:

Change to:

with gr.Blocks(title="Chatterbox Turbo", css=CUSTOM_CSS) as demo:

B2) Remove css= from launch()

Change:

demo.queue().launch(
    mcp_server=True,
    css=CUSTOM_CSS,
    ssr_mode=False
)

To:

demo.queue().launch(
    mcp_server=True,
    ssr_mode=False
)

If you hit another “unexpected keyword argument” after that, do the most minimal launch first:

demo.queue().launch()

Then add options back one at a time.


Which fix should you choose?

  • Choose Fix A (upgrade to Gradio 6) if you want the code to match what the Space is configured to use (sdk_version: 6.0.2). (Hugging Face)
  • Choose Fix B (edit code) if you want to keep your current Gradio 5 install and avoid more Gradio 6 breaking changes.

In practice, for this repo, upgrading is usually simplest because the current app.py already uses Gradio 6-style launch(css=...). (Hugging Face)


Summary

  • The error means “your Gradio version’s launch() does not support css=.”
  • Your app.py passes css= into launch(), which matches Gradio 6 behavior. (Hugging Face)
  • Fix it either by upgrading to Gradio 6.0.2 (recommended) or moving css into gr.Blocks(..., css=...) for Gradio 5. (Hugging Face)