The following primarily covers troubleshooting methods.
It’s certainly possible that some default setting on Hugging Face’s end changed and triggered this issue; the checks below are designed to tell you either way.
What your current signals say
Your Space is not “half-started” or wedged:
- GET /health returns {"status":"ok", ... "services":{"redis":true,"queue":true,"openai":true}}, so the container is serving traffic and the app is executing code paths successfully. (bin29-manimcat.hf.space)
- Build logs are complete, so the build pipeline and image push are fine (your observation).
- The only anomaly is that run logs show only ===== Application Startup ... =====.
That pattern is consistent with (a) stdout/stderr not receiving anything after startup (logs going to files / another sink), (b) output buffering (especially Python), or (c) a logs UI/SSE retrieval problem. The “container logs are what your app prints to stdout/stderr” mental model is explicitly reflected in common debugging guidance for Spaces. (Hugging Face Forums)
Background: why “run logs” can be blank even when the app is healthy
Build logs vs run logs
- Build logs: what happens while building your image and pushing it.
- Run logs: what the running container prints to stdout/stderr (what “docker logs” would show in a normal Docker environment). (Hugging Face Forums)
The platform banner line (===== Application Startup ... =====) is not your app output; it’s the runtime marker that the container was started. Many threads show that exact marker as the first/only visible line when something else prevents useful stdout/stderr from appearing. (Hugging Face Forums)
Likeliest causes in your specific situation (healthy endpoints, empty run logs)
1) Your app is logging to files (or a process manager is)
A common real-world pattern on Spaces is: startup banner appears, then a library announces it is “writing logs to …/something.log”. After that, the logs UI looks “empty” because nothing goes to stdout/stderr. (Hugging Face Forums)
Typical culprits in Docker Spaces:
- File-based logging configs (e.g., Winston/Pino to file, Python logging to logs/*.log)
- Reverse proxy defaults (nginx access/error logs to /var/log/...)
- Process managers (PM2/supervisord) routing logs to their own log files
Why this matches your case: a service can be perfectly healthy (responding on /health, returning 401/4xx on business endpoints) while still emitting nothing to stdout/stderr.
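A quick way to confirm this bucket is to look for freshly written log files from a shell inside the container (or a temporary debug CMD). This is a sketch; LOG_ROOT is an assumption — widen it to / for a full sweep:

```shell
#!/bin/sh
# Diagnostic sketch: list log files modified in the last 10 minutes.
# If your app is writing here instead of to stdout/stderr, the run-logs
# UI will look empty even though the service is healthy.
LOG_ROOT="${LOG_ROOT:-/var/log}"
find "$LOG_ROOT" -name '*.log' -mmin -10 2>/dev/null || true
```

Any hits here (e.g. an nginx access log or an app logfile with a recent mtime) point at a logging-config fix rather than a platform issue.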
2) Output buffering (especially if any Python is involved)
Buffering can make the stream appear blank even though your code ran. This is discussed directly in Spaces runtime logging contexts; the common mitigation is PYTHONUNBUFFERED=1 / running Python unbuffered. (Hugging Face Forums)
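In a Docker Space, the usual mitigation looks like this in the Dockerfile (app.py is a placeholder for your real entrypoint; either line alone is typically enough):

```dockerfile
# Make the Python interpreter flush stdout/stderr immediately...
ENV PYTHONUNBUFFERED=1
# ...or run the entrypoint explicitly unbuffered:
CMD ["python", "-u", "app.py"]
```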
3) Multi-process container where PID 1 is not your “chatty” process
If PID 1 is a shell script that backgrounds the actual server, or a supervisor that doesn’t forward child output, stdout/stderr can be quiet while the server itself is running.
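A small sketch of why exec matters here: with exec, the server inherits the entrypoint shell’s PID, so its stdout/stderr are exactly what the runtime captures as run logs. The script below uses a stand-in command in place of a real server:

```shell
#!/bin/sh
# Sketch: demonstrate that `exec` hands the entrypoint's PID to the server.
# Written to /tmp so it can be run as its own process.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
echo "entrypoint PID: $$"
# Stand-in for your real server command -- exec replaces the shell in place:
exec sh -c 'echo "server PID: $$"'
EOF
chmod +x /tmp/entrypoint-demo.sh
/tmp/entrypoint-demo.sh   # both lines print the same PID
```

If you instead background the server (`/app/server &`) or run it via a daemonizing wrapper, PID 1 stays a quiet shell and the server’s output can be lost.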
4) Less common: log streaming/viewer path issue
There are recurring reports where people see the startup banner and little else, sometimes “fixed” by rebuild/reupload/restart, suggesting a mix of infra churn and observability limitations. (Hugging Face Forums)
The fastest discriminator: does anything you print reach run logs?
Make a temporary change that must show up if log forwarding works.
A. Force a BOOT line to stdout and stderr at container start
In your Dockerfile CMD (or entrypoint script), wrap your actual start command:
bash -lc 'echo BOOT_STDOUT; echo BOOT_STDERR 1>&2; exec /path/to/your/start-command'
Interpretation:
- If BOOT_STDOUT / BOOT_STDERR appear in run logs: forwarding works → your app/process/logging config is the cause.
- If they do not appear: either you are not looking at the correct log stream endpoint, or the log streaming path for that runtime instance is broken.
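Concretely, in a Dockerfile this probe looks like the following (with /app/start.sh standing in for your real start command):

```dockerfile
CMD ["bash", "-lc", "echo BOOT_STDOUT; echo BOOT_STDERR 1>&2; exec /app/start.sh"]
```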
B. Add a periodic heartbeat to stderr
Even if your app is quiet, this proves continuous forwarding:
bash -lc 'while true; do echo "HEARTBEAT $(date -Is)" 1>&2; sleep 15; done & exec /path/to/start'
Ensure you’re using the “correct” streaming method (JWT + api.hf.space)
Even if https://huggingface.co/api/spaces/.../logs/run is quiet, verify via the method documented in huggingface_hub issue #2667:
- fetch a Space JWT from /api/spaces/{space_id}/jwt
- stream from https://huggingface.co/proxy/api.hf.space/v1/{space_id}/logs/run (SSE) (GitHub)
If logs show up there but not in the UI/Hub endpoint, that’s strong evidence of a viewer/endpoint issue rather than a container stdout/stderr issue.
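The two steps above can be sketched as a small script. SPACE_ID and HF_TOKEN are placeholders, the URLs are taken from the steps above, and the curl calls are left commented since they require network access and a real Space:

```shell
#!/bin/sh
# Sketch of the two-step streaming check from huggingface_hub issue #2667.
SPACE_ID="${SPACE_ID:-user/space}"

# Build the SSE endpoint URL (pure string work, no network):
run_log_url() {
  printf 'https://huggingface.co/proxy/api.hf.space/v1/%s/logs/run\n' "$1"
}

# 1) Fetch a short-lived Space JWT (requires jq; token needed for private Spaces):
#    JWT=$(curl -s -H "Authorization: Bearer $HF_TOKEN" \
#      "https://huggingface.co/api/spaces/${SPACE_ID}/jwt" | jq -r .token)
# 2) Stream run logs as SSE (-N disables curl's output buffering):
#    curl -N -H "Authorization: Bearer $JWT" "$(run_log_url "$SPACE_ID")"
run_log_url "$SPACE_ID"
```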
Docker Spaces config checks that still matter (even though your /health works)
- Confirm your Space exposes the expected port via app_port in README.md YAML (default 7860). (Hugging Face)
(Your /health already implies external routing is correct, but it’s still worth validating app_port hasn’t drifted.)
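For reference, the relevant README.md front-matter keys look like this (values illustrative):

```yaml
---
sdk: docker
app_port: 7860
---
```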
Practical “fix patterns” once you know which bucket you’re in
If forwarding works but your app is silent
Do one (or more) of these:
- Route logs to stdout/stderr
  - Node: ensure your logger uses a console transport (not file-only)
  - Python: attach a StreamHandler(sys.stdout) / StreamHandler(sys.stderr)
  - nginx: configure access/error logs to /dev/stdout and /dev/stderr (or symlink)
- Make PID 1 the real server
  - Avoid &, nohup, and daemon modes; use exec so the main server owns stdout/stderr.
- If Python is involved: set PYTHONUNBUFFERED=1 (or run python -u ...) to avoid buffered output. (Hugging Face Forums)
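If rewiring the app’s logger isn’t practical right away, a quick shell-level workaround is to keep the file sink but duplicate output to stdout with tee (the path is illustrative):

```shell
#!/bin/sh
# Dual sink: the app/ops tooling still gets its file, but the container's
# run logs (stdout) see every line too.
mkdir -p /tmp/app-logs
echo "request handled" | tee -a /tmp/app-logs/app.log
```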
If forwarding does not work even for BOOT lines
Treat it as a runtime/streaming problem rather than an app problem:
- check whether the JWT + api.hf.space stream shows the BOOT lines; if it does, the viewer/endpoint is at fault
- force a fresh runtime instance (a commit that changes startup behavior, or switching hardware and back)
- duplicate the Space as a control; if the duplicate streams logs and the original doesn’t, escalate with the evidence bundle below
Direct answers to your 4 questions
1) Is this a known issue?
“Startup banner only” is a recurring symptom people report, but the most common underlying causes are app-side (logging-to-file, buffering, process manager / backgrounding). There are also occasional platform-side logging/viewer failures, but they’re less frequent than the app-side causes. (Hugging Face Forums)
2) Docker Space configs that cause banner-only logs?
Yes—anything that prevents meaningful stdout/stderr:
- file-only logging
- nginx/supervisord/PM2 defaults writing to files
- daemonizing/backgrounding the server so PID 1 is quiet
- buffered output (Hugging Face Forums)
3) “Log channel reset” other than factory reboot?
No dedicated “reset log forwarder” control is documented. The practical equivalents are:
- force a new runtime instance (commit that changes startup behavior)
- switch hardware and switch back
- duplicate the Space as a control
4) Can HF check backend log forwarder status?
Only HF staff can. The most actionable escalation bundle is:
- timestamps (UTC) when you streamed logs
- confirmation that /health returns OK (include body)
- results of the BOOT stdout/stderr probe
- whether the JWT + api.hf.space stream shows anything (GitHub)
Suggested next move (highest leverage)
- Add the BOOT stdout/stderr lines in CMD and redeploy.
- Stream logs using JWT + api.hf.space once, in parallel with the UI. (GitHub)
- If BOOT appears: fix logging/buffering/process model. If BOOT does not appear anywhere: escalate with the evidence bundle above.