Docker Space Runs Normally but Run Logs Only Show “Application Startup” Banner, No Container stdout/stderr

Hi everyone,

I’m encountering a strange issue with Hugging Face Spaces (Docker SDK) and would like to ask whether this might be a platform-side logging channel problem, or if I’ve misconfigured something on my end.

Space Details:

Observed Behavior:

  1. Build logs are normal, with complete output (e.g., npm run build, vite build, Pushing image, etc.).
  2. Runtime status appears healthy:
    • stage = RUNNING
    • replicas.current = 1
    • Domain status: READY
  3. Health checks pass:
    • GET /health returns HTTP 200
    • Response includes redis=true, queue=true, openai=true
  4. However, run logs (both in the web UI and via API SSE) contain almost only one line:
    • ===== Application Startup at ... =====
    • No subsequent application logs appear (container stdout/stderr seem not to be forwarded).

Key Points:

  • The service is not down; endpoints are accessible and health checks succeed.
  • Business requests can be triggered and receive responses (e.g., 401 responses to unauthenticated calls).
  • Only the runtime log channel appears blank.

Troubleshooting Steps I’ve Taken:

  1. Checked runtime status via API:

    from huggingface_hub import HfApi
    api = HfApi(token="***")
    rt = api.get_space_runtime("Bin29/ManimCat")
    print(rt.stage, rt.raw)
    

    Result: RUNNING, domain READY.

  2. Fetched build logs via API (content present):

    curl -N -H "Authorization: Bearer $HF_TOKEN" \
    "https://huggingface.co/api/spaces/Bin29/ManimCat/logs/build"
    
  3. Fetched run logs via API (only startup banner):

    curl -N -H "Authorization: Bearer $HF_TOKEN" \
    "https://huggingface.co/api/spaces/Bin29/ManimCat/logs/run"
    
  4. Accessed health endpoint:

    curl -i https://huggingface.co/proxy/bin29-manimcat.hf.space/health
    

    Returns 200 OK.

Questions I’d Like to Confirm:

  1. Is this a known issue (e.g., Run logs SSE not forwarding container stdout/stderr)?
  2. Are there any additional Docker Space configurations that could cause run logs to display only the platform banner?
  3. Is there a way to trigger a “log channel reset” from the Space side (other than a factory reboot)?
  4. Could the HF team check the log forwarder status for this Space from the backend?

Additional Notes:

  • I have already tried both restart and factory reboot.
  • The build phase has always been normal; the issue is isolated to run logs visibility.

If anyone has encountered a similar situation, I’d greatly appreciate any debugging suggestions. Thank you!


Having the same issue.


Maybe it’s a problem on Hugging Face’s side?


The following primarily covers troubleshooting methods.
Hmm… It’s certainly possible that some default setting on Hugging Face’s end changed and triggered this issue.


What your current signals say

Your Space is not “half-started” or wedged:

  • GET /health returns {"status":"ok", ... "services":{"redis":true,"queue":true,"openai":true}}, so the container is serving traffic and the app is executing code paths successfully. (bin29-manimcat.hf.space)
  • Build logs are complete, so the build pipeline and image push are fine (your observation).
  • The only anomaly is that run logs show only ===== Application Startup ... =====.

That pattern is consistent with (a) stdout/stderr not receiving anything after startup (logs going to files / another sink), (b) output buffering (especially Python), or (c) a logs UI/SSE retrieval problem. The “container logs are what your app prints to stdout/stderr” mental model is explicitly reflected in common debugging guidance for Spaces. (Hugging Face Forums)


Background: why “run logs” can be blank even when the app is healthy

Build logs vs run logs

  • Build logs: what happens while building your image and pushing it.
  • Run logs: what the running container prints to stdout/stderr (what “docker logs” would show in a normal Docker environment). (Hugging Face Forums)

The platform banner line (===== Application Startup ... =====) is not your app output; it’s the runtime marker that the container was started. Many threads show that exact marker as the first/only visible line when something else prevents useful stdout/stderr from appearing. (Hugging Face Forums)


Likeliest causes in your specific situation (healthy endpoints, empty run logs)

1) Your app is logging to files (or a process manager is)

A common real-world pattern on Spaces is: startup banner appears, then a library announces it is “writing logs to …/something.log”. After that, the logs UI looks “empty” because nothing goes to stdout/stderr. (Hugging Face Forums)

Typical culprits in Docker Spaces:

  • File-based logging configs (e.g., Winston/Pino to file, Python logging to logs/*.log)
  • Reverse proxy defaults (nginx access/error logs to /var/log/...)
  • Process managers (PM2/supervisord) routing logs to their own log files

Why this matches your case: a service can be perfectly healthy (responding on /health, returning 401/4xx on business endpoints) while still emitting nothing to stdout/stderr.

2) Output buffering (especially if any Python is involved)

Buffering can make the stream appear blank even though your code ran. This is discussed directly in Spaces runtime logging contexts; the common mitigation is PYTHONUNBUFFERED=1 / running Python unbuffered. (Hugging Face Forums)
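As a quick illustration (a minimal sketch, not taken from the Space’s actual code), here is the buffering effect and both common mitigations side by side:

```python
import subprocess
import sys

# Child script: prints to stdout, then keeps running. When stdout is a pipe
# (as it is under a container log forwarder), CPython block-buffers it, so
# without mitigation the text can sit in the buffer instead of reaching the
# log stream promptly.
child = 'import time; print("hello from child"); time.sleep(0.1)'

# Mitigation 1: run the interpreter unbuffered ("python -u", the same effect
# as setting PYTHONUNBUFFERED=1 in the container environment).
out = subprocess.run([sys.executable, "-u", "-c", child],
                     capture_output=True, text=True).stdout

# Mitigation 2, inside your own code: flush each print explicitly.
print("BOOT_STDOUT", flush=True)
print("BOOT_STDERR", file=sys.stderr, flush=True)
```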

3) Multi-process container where PID 1 is not your “chatty” process

If PID 1 is a shell script that backgrounds the actual server, or a supervisor that doesn’t forward child output, stdout/stderr can be quiet while the server itself is running.

4) Less common: log streaming/viewer path issue

There are recurring reports where people see the startup banner and little else, sometimes “fixed” by rebuild/reupload/restart, suggesting a mix of infra churn and observability limitations. (Hugging Face Forums)


The fastest discriminator: does anything you print reach run logs?

Make a temporary change that must show up if log forwarding works.

A. Force a BOOT line to stdout and stderr at container start

In your Dockerfile CMD (or entrypoint script), wrap your actual start command:

bash -lc 'echo BOOT_STDOUT; echo BOOT_STDERR 1>&2; exec /path/to/your/start-command'

Interpretation:

  • If BOOT_STDOUT / BOOT_STDERR appear in run logs: forwarding works → your app/process/logging config is the cause.
  • If they do not appear: either you are not looking at the correct log stream endpoint, or the log streaming path for that runtime instance is broken.

B. Add a periodic heartbeat to stderr

Even if your app is quiet, this proves continuous forwarding:

bash -lc 'while true; do echo "HEARTBEAT $(date -Is)" 1>&2; sleep 15; done & exec /path/to/start'

Ensure you’re using the “correct” streaming method (JWT + api.hf.space)

Even if https://huggingface.co/api/spaces/.../logs/run is quiet, verify via the method documented in huggingface_hub issue #2667:

  1. fetch a Space JWT from /api/spaces/{space_id}/jwt
  2. stream from https://huggingface.co/proxy/api.hf.space/v1/{space_id}/logs/run (SSE) (GitHub)

If logs show up there but not in the UI/Hub endpoint, that’s strong evidence of a viewer/endpoint issue rather than a container stdout/stderr issue.
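If it helps, here is a rough Python sketch of that two-step flow. The endpoint paths and the `token` response field are taken from issue #2667, so treat them as assumptions rather than documented API:

```python
import json
import urllib.request

HUB = "https://huggingface.co"

def jwt_url(space_id: str) -> str:
    # Hub endpoint that issues a short-lived Space JWT (per issue #2667)
    return f"{HUB}/api/spaces/{space_id}/jwt"

def run_logs_url(space_id: str) -> str:
    # SSE run-log endpoint on api.hf.space (per issue #2667)
    return f"https://huggingface.co/proxy/api.hf.space/v1/{space_id}/logs/run"

def stream_run_logs(space_id: str, hf_token: str) -> None:
    # 1) exchange the user token for a Space JWT
    req = urllib.request.Request(
        jwt_url(space_id), headers={"Authorization": f"Bearer {hf_token}"})
    with urllib.request.urlopen(req) as resp:
        space_jwt = json.load(resp)["token"]  # field name assumed from #2667
    # 2) stream SSE lines with the Space JWT
    req = urllib.request.Request(
        run_logs_url(space_id), headers={"Authorization": f"Bearer {space_jwt}"})
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            print(raw.decode("utf-8", errors="replace"), end="")

# usage (requires a valid token):
# stream_run_logs("Bin29/ManimCat", "hf_...")
```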


Docker Spaces config checks that still matter (even though your /health works)

  • Confirm your Space exposes the expected port via app_port in README.md YAML (default 7860). (Hugging Face)
    (Your /health already implies external routing is correct, but it’s still worth validating app_port hasn’t drifted.)
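For reference, a minimal Docker Space README front matter looks like this (the values are illustrative; the key point is that `app_port` must match the port your server actually binds):

```yaml
---
title: ManimCat
sdk: docker
app_port: 7860  # must match the port the container listens on (default 7860)
---
```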

Practical “fix patterns” once you know which bucket you’re in

If forwarding works but your app is silent

Do one (or more) of these:

  • Route logs to stdout/stderr

    • Node: ensure your logger uses a console transport (not file-only)
    • Python: attach a StreamHandler(sys.stdout) / StreamHandler(sys.stderr)
    • nginx: configure access/error logs to /dev/stdout and /dev/stderr (or symlink)
  • Make PID 1 the real server

    • Avoid &, nohup, daemon modes; use exec so the main server owns stdout/stderr.
  • If Python involved: set PYTHONUNBUFFERED=1 (or run python -u ...) to avoid buffered output. (Hugging Face Forums)
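For the Python case, a minimal sketch of routing the root logger to stdout (the logger names here are generic, not from the Space’s code):

```python
import logging
import sys

# Attach a stdout StreamHandler to the root logger so app and library logs
# reach the stream the Spaces run-log forwarder captures, instead of a file.
handler = logging.StreamHandler(sys.stdout)  # or sys.stderr
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("app").info("this line goes to stdout, not a log file")
```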

If forwarding does not work even for BOOT lines

  • Try the JWT + api.hf.space stream first. (GitHub)

  • If that is also blank:

    • push a trivial commit to force a fresh container schedule (new runtime instance)
    • (optional) use Dev Mode to inspect where logs are going from inside the container (files vs stdout). Dev Mode is explicitly intended for debugging/monitoring a running Space. (Hugging Face)

Direct answers to your 4 questions

1) Is this a known issue?

“Startup banner only” is a recurring symptom people report, but the most common underlying causes are app-side (logging-to-file, buffering, process manager / backgrounding). There are also occasional platform-side logging/viewer failures, but they’re less frequent than the app-side causes. (Hugging Face Forums)

2) Docker Space configs that cause banner-only logs?

Yes—anything that prevents meaningful stdout/stderr:

  • file-only logging
  • nginx/supervisord/PM2 defaults writing to files
  • daemonizing/backgrounding the server so PID 1 is quiet
  • buffered output (Hugging Face Forums)

3) “Log channel reset” other than factory reboot?

No dedicated “reset log forwarder” control is documented. The practical equivalents are:

  • force a new runtime instance (commit that changes startup behavior)
  • switch hardware and switch back
  • duplicate the Space as a control
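The first and third of those can be scripted; `restart_space` and `duplicate_space` are real `HfApi` methods (the import is deferred only so this sketch loads even without the library installed):

```python
def reset_space(space_id: str, token: str, factory: bool = False):
    """Force a fresh runtime instance; factory=True also wipes runtime state."""
    from huggingface_hub import HfApi  # deferred import, see lead-in
    return HfApi(token=token).restart_space(space_id, factory_reboot=factory)

def clone_as_control(space_id: str, token: str):
    """Duplicate the Space under your namespace as a control experiment."""
    from huggingface_hub import HfApi
    return HfApi(token=token).duplicate_space(space_id)

# usage (requires a write token):
# reset_space("Bin29/ManimCat", "hf_...", factory=True)
# clone_as_control("Bin29/ManimCat", "hf_...")
```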

4) Can HF check backend log forwarder status?

Only HF staff can. The most actionable escalation bundle is:

  • timestamps (UTC) when you streamed logs
  • confirmation /health returns OK (include body)
  • results of the BOOT stdout/stderr probe
  • whether JWT + api.hf.space stream shows anything (GitHub)

Suggested next move (highest leverage)

  1. Add the BOOT stdout/stderr lines in CMD and redeploy.
  2. Stream logs using JWT + api.hf.space once, in parallel with the UI. (GitHub)
  3. If BOOT appears: fix logging/buffering/process model. If BOOT does not appear anywhere: escalate with the evidence bundle above.

Thanks for the troubleshooting guidance.

We followed your suggestion and deployed a probe build on our Docker Space:

  • Space: Bin29/ManimCat
  • Probe commit: 100c4b9
  • Probe behavior: forced BOOT_STDOUT to stdout and BOOT_STDERR to stderr at container startup, plus a HEARTBEAT_STDERR line to stderr every 15 s.

Observed results:

  1. Runtime is healthy:
    • stage = RUNNING
    • replicas = 1/1
    • domain = READY
  2. App is serving traffic:
    • GET /health returns 200 with redis=true, queue=true, openai=true
  3. Build logs are complete and normal.
  4. Run logs still show only:
    • “===== Application Startup at … =====”
      and no probe lines (no BOOT/HEARTBEAT output at all).

So this looks like a run-log forwarding/viewer-path issue rather than an app logging configuration problem.
We have reverted the probe commit to restore normal behavior:

  • Revert commit: 0a6c24b

Could you please check backend log forwarding for this Space (Bin29/ManimCat), especially around the timestamps when probe commit 100c4b9 was running?


I think something’s off with how the logs are behaving in the Hugging Face Docker space… I don’t have any permissions or anything, so I’m just reporting it… @hysts


Title: Docker Space runs normally, but Run Logs only show “===== Application Startup … =====” (no container stdout/stderr) — reproduced with minimal probe

Spaces

  • Primary affected Space: Bin29/ManimCat (SDK: Docker)
  • Minimal repro Space: John6666/docker_log_test (SDK: Docker)

Summary
Both Spaces run and serve HTTP correctly, but Run logs (UI + SSE/API) show only the platform banner:
===== Application Startup at YYYY-MM-DD HH:MM:SS =====
No subsequent stdout/stderr from the container is visible, even when the container is forced to print to stderr periodically.


Evidence (minimal repro Space)

1) Runtime is healthy and serving traffic

Base URL:

  • https://huggingface.co/proxy/john6666-docker-log-test.hf.space

HTTP checks:

  • GET / → 200, body contains ok
  • GET /health → 200, JSON: {"status":"ok","ts":...}
  • GET /crash → 500 (intentional exception route)

This confirms the container is running and executing code paths.

2) Container is designed to emit logs to stderr continuously

At container startup:

  • prints BOOT_STDOUT and BOOT_STDERR

Every 15 seconds:

  • prints HEARTBEAT_STDERR ... to stderr

So Run logs should show BOOT + periodic HEARTBEAT if stdout/stderr ingestion works.

3) Actual Run logs output (UI and SSE)

  • Hugging Face UI “Container Logs” shows only:

    • ===== Application Startup at 2026-03-04 09:38:10 =====
  • SSE stream also shows only:

    • 2026-03-04T09:38:10Z ===== Application Startup at 2026-03-04 09:38:10 =====
      …and then nothing else (no BOOT, no HEARTBEAT), causing SSE clients to eventually hit read timeouts.

Evidence (primary Space probe)

In Bin29/ManimCat, we deployed a probe commit that forces:

  • BOOT_STDOUT, BOOT_STDERR at startup
  • HEARTBEAT_STDERR every 15 seconds

Results:

  • Space is healthy (RUNNING, domain READY, endpoints respond, /health returns 200)
  • Build logs are normal
  • Run logs still show only the startup banner (no BOOT/HEARTBEAT)

We reverted the probe after confirming the behavior.

Additional observation: logs endpoint redirect (auth mismatch hazard)

When calling the “runtime logs” endpoint on api.hf.space, we see a 308 redirect:

  • Request:

    • GET https://huggingface.co/proxy/api.hf.space/v1/{space_id}/logs/run (with Space JWT)
  • Response:

    • 308 Permanent Redirect
    • Location: https://huggingface.co/api/spaces/{space_id}/logs/run

If a client follows redirects automatically, it will end up calling the Hub endpoint with the Space JWT and get a 401.
Using HF user token (HF_TOKEN) against the Hub endpoint returns 200 text/event-stream, but still only contains the startup banner line (no container output).


Expected vs Actual

Expected

  • Run logs show:

    • BOOT lines at startup
    • HEARTBEAT lines every 15 seconds
    • request logs from /, /health, and a traceback from /crash

Actual

  • Only ===== Application Startup ... ===== is ever shown
  • No container stdout/stderr is visible anywhere (UI or SSE)

Request / Questions for HF team

  1. Can you check the backend log ingestion/forwarder for these Spaces (stdout/stderr capture from the runtime)?
  2. Is the 308 redirect from api.hf.space to huggingface.co for /logs/run expected? If so, what is the supported auth/token method for each endpoint?
  3. Is there a known incident or regression where Docker Space run logs stop forwarding stdout/stderr while the service continues to run normally?

Repro commands (tokens redacted)

# Verify HF user token
curl -s https://huggingface.co/api/whoami-v2 \
  -H "Authorization: Bearer $HF_TOKEN"

# Fetch Space JWT
curl -s "https://huggingface.co/api/spaces/John6666/docker_log_test/jwt" \
  -H "Authorization: Bearer $HF_TOKEN"

# Observe redirect (do NOT follow redirects)
curl -i "https://huggingface.co/proxy/api.hf.space/v1/John6666/docker_log_test/logs/run" \
  -H "Authorization: Bearer $SPACE_JWT"

# Stream logs from Hub endpoint using HF_TOKEN
curl -N "https://huggingface.co/api/spaces/John6666/docker_log_test/logs/run" \
  -H "Authorization: Bearer $HF_TOKEN"
# -> Only shows: ===== Application Startup ... =====

Notes

  • This repro Space intentionally prints to stderr every 15 seconds; absence of those lines strongly suggests a logging ingestion/forwarding issue rather than application logging configuration.

Edit:
It seems the logs are now displaying correctly. (I haven’t made any changes to Spaces.)

This seems like a platform-side logging issue rather than your configuration.

Some debugging steps:

  1. Check if stdout/stderr are being captured by the container runtime
  2. Try adding explicit flush() calls after print statements
  3. Check runtime status via API: HfApi().get_space_runtime() from huggingface_hub (note this returns runtime status, not container logs)

If this is a new issue, it might be related to recent platform updates. Have you tried creating a minimal test case?


This was a new issue, and I’m convinced it was a platform problem. I didn’t do anything, but it was back to normal this morning! Thank you everyone.


This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.