How I Built a Cron-Powered Operations Layer on a Home Linux Server
What started as three cron jobs grew into 100+ lines of scheduled automation — market briefings, game bots, security audits, batch task queues, health checks, and weekly reports, all orchestrated by a single crontab on a headless Alienware
Ingredients
- cron — Linux’s built-in task scheduler. You give it a time pattern and a command, and it runs that command on repeat forever. Installed on every Linux system by default (free)
- Headless Linux server — the always-on Alienware from the earlier posts (already set up)
- bash + Python 3 — the two languages every job is written in (free)
- Resend — email delivery for alerts and reports (free tier)
- healthchecks.io — external dead-man’s switch that alerts when jobs stop running (free tier)
How It Started: Three Jobs
When I first set up the Alienware as a headless server, the crontab had three entries: a Garmin recap at 7am, a site uptime check every 5 minutes, and a healthchecks.io heartbeat every 30 minutes. Three lines. That was the whole system.
Then I added the market briefing. Then the alert system. Then the TrophyManager bot. Then a batch task queue. Then security audits. Each project added its own cron jobs, and the crontab grew from 3 lines to over 100. At some point it stopped being a list of scheduled tasks and became something else — an operations layer.
This post isn’t about any one project. It’s about the layer underneath all of them: how cron jobs interact, how they fail, how you organize 100+ lines of scheduled automation without losing track of what’s running when.
The Full Stack: What Runs and When
Every job falls into one of six categories. Grouping them this way is the difference between a crontab you can read and one you can’t:
Every minute
- batch task queue worker (`* * * * *`) — processes text files dropped into an inbox folder. The most frequent job on the system.
Every few minutes
- Site uptime monitor (every 5 min) — curls joseandgoose.com, alerts on non-200
- Resource sampler (every 10 min) — logs CPU, memory, and swap to a daily CSV
- healthchecks.io heartbeat (every 30 min) — external dead-man’s switch
- TM market scanner (every 10 min) — browses transfer market for undervalued players
- TM bid sniper (every 2 min) — checks and places bids on flagged targets
- 0DTE options monitor (every 5 min, 9:30–11:30 ET weekdays) — logs SPX options data
Daily
- Garmin recap (7am) — fetches health data, generates AI summary
- Garmin failure check (8am) — alerts if the 7am recap didn’t produce output
- Market briefing (8am weekdays) — AI-generated market email to subscribers
- TM sponsor check (6am) — renews sponsor deals when they expire
- TM market list (9am) — evaluates squad for sell candidates
- Fail2ban report (7pm) — daily delta of new SSH bans
- 0DTE EOD backfill (4:15pm weekdays) — realized vol and close prices
Match days (TrophyManager)
- TM lineup (2pm, Tue/Thu/Sat for league, Wed/Sun for cups) — sets starting XI before the 3pm deadline
- TM training (8am Tuesdays) — assigns all players to training groups
Weekly (Sundays)
- Log archiver (1am) — rotates logs to weekly archives
- Security updates (2am) — apt upgrades and journal vacuum
- Claude changelog (7am) — AI writes a plain-English weekly server summary
- Supabase health + GitHub activity (8am) — database ping and code activity report
- Weekly status report (9am) — everything in one Sunday email
- TM squad audit (10am) — full roster review
- TM self-grader (11am) — Claude reviews the week’s bot decisions
- Lynis security audit (Saturday 11pm) — system hardening scan
Monthly
- apt autoremove (1st at 2am) — cleans up unused packages
- Scheduled reboot (1st at 4am) — clean restart, picks up kernel updates
- Disk snapshot (Mondays 8am) — tracks disk usage over time
What Makes Cron Hard at Scale
1. The environment problem
Cron jobs don’t run in your normal terminal environment. They get a stripped-down version with almost nothing in PATH (the list of directories where Linux looks for programs). A script that works perfectly when you type `python3 myscript.py` in a terminal will fail silently in cron because cron doesn’t know where `python3` lives.
🔧 Developer section: PATH fix
- Option 1: set `PATH` at the top of the crontab: `PATH=/usr/local/bin:/usr/bin:/bin:/home/[user]/.local/bin`
- Option 2: use absolute paths in every command: `/usr/bin/python3 /home/[user]/scripts/market-daily.py`
- I use option 1 (global PATH header) for simplicity, with absolute paths as a fallback in wrapper scripts
- Environment variables like API keys are sourced from `.env` files inside each wrapper script, not from the crontab
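Put together, the top of a crontab using option 1 might look something like this (the paths and job are illustrative; `[user]` stands in for the real username):

```
# Global PATH header — applies to every job below it
PATH=/usr/local/bin:/usr/bin:/bin:/home/[user]/.local/bin

# Jobs can now find python3 and other tools without absolute paths
0 7 * * * /home/[user]/scripts/garmin-recap.sh >> /home/[user]/logs/garmin.log 2>&1
```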
2. The dependency chain
Some jobs depend on other jobs. The Sunday 9am weekly report reads output from the 7am changelog and the 8am health check. If the 7am job runs slow, the 9am job reads an empty file. The fix: each downstream job checks for its inputs and substitutes a “data unavailable” fallback if any input is missing. The report always sends, even when upstream jobs fail.
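The fallback pattern can be as small as a read-with-default helper. This is a sketch with hypothetical file names, not the actual report code:

```python
#!/usr/bin/env python3
"""Sketch of the 'always send' pattern: each missing upstream input
becomes a labeled placeholder instead of breaking the report."""
from pathlib import Path

def read_or_fallback(path: str, label: str) -> str:
    """Return a file's contents, or a placeholder if the upstream job failed."""
    p = Path(path)
    if p.exists() and p.stat().st_size > 0:
        return p.read_text()
    return f"[{label}: data unavailable]"

# Hypothetical outputs of the 7am and 8am Sunday jobs
sections = [
    read_or_fallback("/home/user/reports/changelog.txt", "changelog"),
    read_or_fallback("/home/user/reports/health.txt", "health check"),
]
report = "\n\n".join(sections)  # always non-empty, so the email always sends
```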
3. Silent failures
Cron doesn’t tell you when a job fails. The job runs, crashes, and cron moves on. Unless you’ve built in logging and alerting, you won’t know until you notice a missing output. Every job on this server logs to its own file, and critical jobs (Garmin, market briefing) have a secondary failure-check job that runs an hour later to verify the output exists.
Instead of the job alerting on failure (which it can’t do if it crashed), a second job checks for the absence of success. If the expected output file doesn’t exist, the checker sends an alert. This catches silent crashes, auth errors, network timeouts — anything that prevents the job from completing without producing an error message.
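A failure-check job of this kind reduces to one question: does a fresh output file exist? A minimal sketch, with a hypothetical path and a `print` standing in for the real Resend alert:

```python
#!/usr/bin/env python3
"""Sketch of an absence-of-success checker: run an hour after the real
job and alert if its expected output is missing or stale."""
import time
from pathlib import Path

def output_is_fresh(path: str, max_age_seconds: int = 2 * 3600) -> bool:
    """True if the expected output exists and was written recently enough."""
    p = Path(path)
    return p.exists() and (time.time() - p.stat().st_mtime) < max_age_seconds

def check(path: str, job_name: str) -> bool:
    if output_is_fresh(path):
        return True
    # In the real job this would send an email alert instead of printing
    print(f"ALERT: {job_name} produced no output at {path}")
    return False

if __name__ == "__main__":
    check("/home/user/logs/garmin-recap-output.txt", "garmin-recap")
```

The checker never needs to know *why* the upstream job failed; any failure mode ends the same way, with no fresh file.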
4. Resource collisions
Multiple scripts share the same Schwab API token for market data. The market briefing, the options monitor, and the trading bot all need to refresh and use the same OAuth token. Without coordination, one script refreshes the token while another is mid-request, and the second script’s token is now invalid.
The solution is a shared token manager with file locking. One Python module handles all token reads and writes, using a lock file to prevent concurrent refreshes. Every script that needs market data imports the same module instead of managing tokens independently.
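On Linux, the locking piece of a module like that can be built on `fcntl.flock`. This is a sketch under assumed file names, with a stub standing in for the real OAuth refresh request:

```python
#!/usr/bin/env python3
"""Sketch of a shared token manager with file locking (Linux/macOS).
TOKEN_FILE, LOCK_FILE, and refresh_token() are hypothetical stand-ins."""
import fcntl
import json
import time
from pathlib import Path

TOKEN_FILE = Path("/tmp/example_broker_token.json")
LOCK_FILE = Path("/tmp/example_broker_token.lock")

def refresh_token() -> dict:
    # Placeholder for the real OAuth refresh request
    return {"access_token": "new-token", "expires_at": time.time() + 1800}

def get_token() -> dict:
    """Return a valid token, refreshing under an exclusive lock if needed.

    Only one process holds the lock at a time, so two scripts can never
    refresh concurrently and invalidate each other's token."""
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            if TOKEN_FILE.exists():
                token = json.loads(TOKEN_FILE.read_text())
                if token.get("expires_at", 0) > time.time() + 60:
                    return token           # still valid; no refresh needed
            token = refresh_token()
            TOKEN_FILE.write_text(json.dumps(token))
            return token
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

Every script that needs market data calls `get_token()` instead of touching the token file directly, which is the whole coordination mechanism.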
Organizing 100+ Lines
The crontab is organized by category with comment headers. Every entry follows the same format: schedule, wrapper script path, redirect stdout/stderr to a log file. No inline logic in the crontab itself — all logic lives in the scripts.
🔧 Developer section: Crontab conventions
- Comment headers group jobs by project: `# === MONITORING ===`, `# === TROPHYMANAGER ===`, `# === MARKET ===`
- Every job redirects output: `>> /path/to/log 2>&1`
- Wrapper scripts — small shell scripts (`.sh` files) that handle the setup (loading API keys, setting the working directory) before calling the actual Python or bash script. Think of them as a pre-flight checklist that runs before each job.
- The crontab itself has no `cd` commands, no pipes, no conditionals — just schedule + script + log
- Self-gating jobs: the 0DTE monitor runs `*/5 9-11 * * 1-5` but checks the clock internally and exits early if it’s before 9:30 or after 11:30 ET
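The self-gate itself is a few lines of Python. A sketch of the window check (the real monitor's code isn't shown in this post, so the function name is my own):

```python
#!/usr/bin/env python3
"""Sketch of the self-gating pattern: cron fires on a broad hour window,
and the script checks the actual Eastern-time clock before doing work."""
import datetime
from typing import Optional
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def in_market_window(now: Optional[datetime.datetime] = None) -> bool:
    """True only between 9:30 and 11:30 ET, inclusive."""
    now = now or datetime.datetime.now(ET)
    minutes = now.hour * 60 + now.minute
    return 9 * 60 + 30 <= minutes <= 11 * 60 + 30

# In the real job: `if not in_market_window(): sys.exit(0)` at the top,
# then the monitoring work below it.
```

Because the check uses a named timezone rather than the server's Pacific clock, daylight-saving transitions are handled for free.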
The wrapper script pattern is the key organizational choice. Without it, the crontab would be full of long one-liners with source .env && cd /path && python3 script.py. With wrappers, each crontab line is short and readable, and the setup logic lives where it can be tested independently.
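A wrapper in this pattern can be very small. This is a sketch, not the actual script, with hypothetical paths; the sequence (cd, load `.env`, run the job) is the point:

```shell
#!/usr/bin/env bash
# Sketch of the wrapper pattern: each cron entry calls a small .sh file
# like this instead of a long inline one-liner. Paths are hypothetical.
set -euo pipefail

run_job() {
  local project_dir="$1" script="$2"
  cd "$project_dir"            # fixed working directory for the job
  if [ -f .env ]; then
    set -a                     # export everything the .env file defines
    . ./.env                   # API keys live here, not in the crontab
    set +a
  fi
  python3 "$script"            # the actual job
}

# Example crontab target: run_job /home/user/projects/market market-daily.py
```

Because the wrapper is an ordinary script, it can be run by hand to test a job exactly as cron would run it, which is most of the debugging value.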
Final Output
The crontab is the nervous system of the server. Every project on the Alienware — monitoring, market data, game bots, batch tasks, security — ultimately expresses itself as one or more cron entries. Adding a new capability means writing a script and adding a line. Removing one means commenting it out.
What went fast
- Adding individual jobs — once the conventions are established (wrapper script, log file, comment header), a new cron job is 5 minutes of work. The pattern is completely repeatable.
- cron itself — no daemon to configure, no YAML to write, no service to deploy. `crontab -e`, add a line, save. It’s been the same interface for decades because it doesn’t need to change.
- Log file debugging — every job writes to its own log. When something breaks, the answer is almost always in the log file. No centralized logging system needed at this scale.
What needed patience
- Cron environment surprises — even after setting PATH in the header, some jobs still failed because they depended on environment variables (API keys, database URLs) that only exist in interactive shells. Moving all env loading into wrapper scripts solved this permanently, but the first few failures were confusing.
- Timezone awareness — the server runs in Pacific time. Market-related jobs need to fire based on Eastern time. Cron doesn’t support per-job timezones. The solution: schedule jobs in broad windows and let the scripts self-gate based on the actual ET clock.
- The Sunday chain — 5 jobs run between 7am and 11am every Sunday, each depending on outputs from earlier jobs. Getting the timing right, so the weekly report has all its inputs, required staggering the schedule and adding a fallback for every missing input. This took three Sundays of iteration to get reliable.
- Token refresh races — two scripts trying to refresh the same OAuth token at the same time. The shared token manager with file locking was the fix, but I didn’t add it until after a week of intermittent “invalid token” errors that only happened when the market briefing and options monitor ran within seconds of each other.
I didn’t set out to build an operations platform. I set out to automate a Garmin email so I could read it while walking Goose. Then a market briefing. Then a game bot. Each one added a few lines to the crontab, and eventually the crontab itself became the most important file on the server. It’s the single source of truth for everything the machine does, and reading it top to bottom is the fastest way to understand what this server is for.