How I Automated My Soccer Club with a Bot
A Python bot on a headless Linux server that manages a TrophyManager.com soccer club — scouting the transfer market, placing bids, setting lineups, assigning training, listing players for sale, and grading its own decisions every Sunday
Ingredients
- Headless Linux server — the always-on Alienware running all 9 cron jobs (already set up)
- Python 3.12 — main language for all bot logic (free)
- Playwright — a tool that controls a real web browser invisibly (no window, no screen — “headless”), used for login and market scanning (free)
- SQLite — a lightweight database that lives in a single file on disk (no server needed) — stores squad data, market observations, and decision logs (free)
- Claude CLI — powers the weekly self-grading module (included in subscription)
- Resend — email alerts for bids, sales, and depth analysis (free tier)
What Is TrophyManager?
TrophyManager is a browser-based soccer management sim. You run a club — setting lineups, buying and selling players on a live transfer market, assigning training, managing finances. Matches happen on a fixed schedule (Tuesdays, Thursdays, Saturdays for league; Wednesdays and Sundays for cups). The transfer market runs 24/7.
The game rewards consistency. Checking the transfer market twice a day, setting lineups before every deadline, rotating training groups — it’s a lot of small, repetitive decisions. The kind of thing a bot was made for.
I manage a club in a mid-tier division. Sixty players on the roster, most of them youth prospects aged 17–19. The strategy is simple: develop cheap young players, sell them at peak value, and reinvest in the next batch. It’s a volume game, and it’s exactly the kind of thing you don’t want to do manually 60 players at a time.
The First Wall: Login
Most web automation starts with a simple POST request to a login endpoint. TrophyManager doesn’t work that way. The login form submits via JavaScript that sets session cookies client-side. If you POST directly with Python’s requests library, you get a valid response but no session cookies — and every subsequent request fails silently.
🔧 Developer section: Playwright login
- Playwright launches a headless Chromium browser, navigates to the login page
- Types username and password into the form fields, then presses Enter via keyboard (not button click — the form’s JS submit handler only fires on keyboard Enter)
- Waits for the dashboard to load, then extracts the session cookies that the game’s JS sets on login
- Cookies are saved to a local cache file and loaded into a `requests.Session` for all subsequent API calls
- The session is cached and reused across cron runs until it expires, then Playwright re-authenticates
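The flow above can be sketched in a few lines. This is a hedged reconstruction, not the bot's actual code: the login URL, form selectors, and cache filename are illustrative guesses, and you'd confirm the real selectors in DevTools.

```python
# Sketch of the Playwright login + cookie handoff described above.
# LOGIN_URL, the input selectors, and COOKIE_CACHE are assumptions.
import json
from pathlib import Path

import requests

LOGIN_URL = "https://trophymanager.com/"   # placeholder
COOKIE_CACHE = Path("tm_cookies.json")     # hypothetical cache file


def login_with_playwright(username: str, password: str) -> list[dict]:
    """Drive a headless Chromium through the JS login and return its cookies."""
    from playwright.sync_api import sync_playwright  # imported lazily

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(LOGIN_URL)
        # Selectors are assumptions; inspect the real form in DevTools.
        page.fill("input[name='username']", username)
        page.fill("input[name='password']", password)
        # Key detail from the article: the form's JS submit handler only
        # fires on a keyboard Enter, not on a programmatic button click.
        page.keyboard.press("Enter")
        page.wait_for_load_state("networkidle")
        cookies = page.context.cookies()
        browser.close()
    COOKIE_CACHE.write_text(json.dumps(cookies))
    return cookies


def session_from_cookies(cookies: list[dict]) -> requests.Session:
    """Load browser cookies into a requests.Session for plain HTTP calls."""
    session = requests.Session()
    for c in cookies:
        session.cookies.set(c["name"], c["value"], domain=c.get("domain"))
    return session
```

The split matters: Playwright only runs when the cached session has expired; everything else rides on the cheap `requests.Session`.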
This was the hardest single problem in the project. Every other module — bidding, lineup, training — is just HTTP requests with the right cookies. Getting those cookies required reverse-engineering the login flow in browser DevTools and realizing that only a real browser submission works.
Mapping the Game’s AJAX Endpoints
TrophyManager has no public API. Everything happens through internal AJAX endpoints — hidden URLs that the game’s JavaScript calls behind the scenes to fetch and save data. I opened the browser’s DevTools (the built-in developer panel that shows every network request a page makes), clicked through every feature in the game, and logged every request. The result was a map of POST endpoints that the bot uses for everything:
🔧 Developer section: Key endpoint categories
- Tactics — get the full squad list, save a lineup with formation
- Training — assign players to training groups
- Transfer market — check bid status, place bids, list players for sale
- Player data — detailed stats including hidden attributes like “routine”
- Sponsors — sign the best available deal
- Scouting — dispatch scouts and retrieve reports
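Once mapped, the endpoints reduce to a lookup table plus one generic POST helper. The paths below are hypothetical stand-ins (the real ones came out of DevTools and aren't documented anywhere):

```python
# Hypothetical endpoint map; real paths differ and were found in DevTools.
BASE = "https://trophymanager.com"  # placeholder
ENDPOINTS = {
    "squad": "/ajax/tactics/squad.php",
    "save_lineup": "/ajax/tactics/save.php",
    "place_bid": "/ajax/transfer/bid.php",
}


def call_endpoint(session, key: str, payload: dict) -> dict:
    """POST to an internal AJAX endpoint using the cached browser session."""
    resp = session.post(BASE + ENDPOINTS[key], data=payload)
    resp.raise_for_status()
    return resp.json()
```

Every module shares this helper, so the endpoint map is the single place to fix when the game's internals shift.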
One gotcha: skill values of 19 and 20 are returned as HTML `<img>` star tags instead of numbers. The parser has to read the `alt` attribute to get the numeric value. This is the kind of thing that only shows up when your bot tries to sort players by skill and everything above 18 comes back as `NaN`.
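A minimal parser for that case might look like this. The exact markup is an assumption (the article only says the number lives in the `alt` attribute):

```python
import re

# Matches a star <img> tag and captures the numeric skill from its alt
# attribute, e.g. <img src="star.png" alt="19">. Markup is an assumption.
STAR_IMG = re.compile(r"<img[^>]*\balt=[\"'](\d+)[\"']", re.IGNORECASE)


def parse_skill(raw: str) -> int:
    """Return a numeric skill from a plain number or a star <img> tag."""
    raw = raw.strip()
    if raw.isdigit():
        return int(raw)
    match = STAR_IMG.search(raw)
    if match:
        return int(match.group(1))
    raise ValueError(f"unparseable skill value: {raw!r}")
```

Raising on anything unrecognized is the point: a loud failure here beats silently sorting your best players as `NaN`.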
Nine Modules, Nine Cron Jobs
Each bot responsibility is a separate module with its own cron schedule:
- market_scan (every 10 min) — browses the transfer market for undervalued players, scores them against a valuation model
- market_bid (every 2 min) — checks active bids, places new bids on flagged opportunities
- market_list (daily 9am) — evaluates the squad for sell candidates, lists them with calculated minimum prices
- lineup (match days, 2pm) — sets the optimal starting XI before the 3pm deadline
- training (Tuesdays 8am) — assigns all players to the correct training groups
- sponsor (daily 6am) — checks if the sponsor deal needs renewing, signs the best one
- squad_audit (Sundays 10am) — full roster review: age distribution, position depth, financial summary
- scout_deploy — dispatches scouts to evaluate prospective purchases
- grader (Sundays 11am) — Claude reviews the week’s decisions and writes a self-assessment
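Laid out as a crontab, the schedule above looks roughly like this. Paths and module names are illustrative, and `scout_deploy` is omitted because the article doesn't state its cadence:

```shell
# Sketch of the bot's crontab; paths are placeholders (Sunday = 0).
*/10 * * * *      cd /home/bot/tm && python -m market_scan
*/2  * * * *      cd /home/bot/tm && python -m market_bid
0 9  * * *        cd /home/bot/tm && python -m market_list
0 14 * * 0,2,3,4,6 cd /home/bot/tm && python -m lineup   # match days, before the 3pm deadline
0 8  * * 2        cd /home/bot/tm && python -m training
0 6  * * *        cd /home/bot/tm && python -m sponsor
0 10 * * 0        cd /home/bot/tm && python -m squad_audit
0 11 * * 0        cd /home/bot/tm && python -m grader
```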
Transfer bids in TrophyManager have a countdown. Running the bid module every 2 minutes means the bot can outbid competitors in the final minutes of an auction. It’s the most aggressive cron interval on the server — 720 runs per day — but each run is a single lightweight HTTP request.
The Valuation Model
The bot doesn’t just buy cheap players. It estimates what a player is worth based on age, skill index, and a hidden stat called “routine” — a composite training discipline score that ranges from 1 to 60+. Higher routine means the player develops faster and is worth more long-term.
🔧 Developer section: Valuation logic
- Base value uses market rate per ASI (aggregate skill index) — calibrated from 16,000+ market observations
- Routine multiplier: <10 = ×0.9, 10–19 = ×1.0, 20–29 = ×1.1, 30–39 = ×1.2, 40+ = ×1.3
- Age-adjusted routine floor: younger players get more leeway (a 17-year-old with routine 8 is fine; a 26-year-old with routine 8 is a pass)
- Sell-side scarcity premium: high-routine players (>40) are rare on the market, so listings get a 15% premium
- Starting XI gate: if a sell candidate is in the starting lineup, the bot emails a depth analysis table before listing — showing top 4 backups at the position with color-coded viability ratings
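The core of that logic fits in a few functions. This is a sketch under stated assumptions: the routine bands and the 15% premium come from the article, but `rate_per_asi` and the age-floor numbers are illustrative stand-ins for the calibrated values.

```python
# Hedged sketch of the valuation rules above; rate_per_asi and the
# age-floor slope are illustrative, not the calibrated values.

def routine_multiplier(routine: int) -> float:
    """Routine bands from the article: <10, 10-19, 20-29, 30-39, 40+."""
    if routine < 10:
        return 0.9
    if routine < 20:
        return 1.0
    if routine < 30:
        return 1.1
    if routine < 40:
        return 1.2
    return 1.3


def passes_routine_floor(age: int, routine: int) -> bool:
    """Age-adjusted floor: younger players get more leeway.
    Hypothetical slope: a 17-year-old needs 0, a 26-year-old needs 18."""
    floor = max(0, (age - 17) * 2)
    return routine >= floor


def estimate_value(asi: int, routine: int, rate_per_asi: float,
                   selling: bool = False) -> float:
    """Base market rate per ASI point, scaled by routine and scarcity."""
    value = asi * rate_per_asi * routine_multiplier(routine)
    if selling and routine > 40:
        value *= 1.15  # scarcity premium: high-routine players are rare
    return value
```

Note how the article's worked example falls out of the floor function: a 17-year-old with routine 8 passes, a 26-year-old with routine 8 doesn't.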
The valuation model was rewritten twice. The first version used a flat rate per ASI point. The second added routine awareness after competitive analysis showed that the top clubs in the division were winning through squad quality (average routine 42.9) rather than tactical variety — they all used the same mentality every match. The gap was player development, not strategy.
The Bot Grades Itself
Every Sunday at 11am, the grader module runs. It pulls the week’s decisions from the SQLite database — bids placed, players sold, lineup choices, training assignments — and passes them to Claude CLI with a prompt asking for an honest assessment.
Claude looks at outcomes: did the bid win? Was the sale price reasonable? Did the lineup choice result in a win? The grader has per-module “grace days” because some outcomes take time to materialize — a scout dispatch takes 10 days to return results, so the grader doesn’t judge scouting decisions until they’ve had time to play out.
Decisions are logged to a local file with timestamps, module names, and Claude’s assessment. Over time, this creates a decision history that I can review to see if the bot is getting better or repeating mistakes.
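The grader's plumbing can be sketched as follows. The `decisions` schema, the prompt wording, and the use of `claude -p` for a one-shot prompt are assumptions based on the description above:

```python
import json
import sqlite3
import subprocess

# Sketch of the Sunday grader; table/column names and the prompt
# are illustrative, not the bot's actual schema.


def build_prompt(rows: list) -> str:
    """Turn a week's decision rows into a grading prompt for Claude."""
    return (
        "You are grading a soccer-management bot's week. For each decision, "
        "judge whether the outcome was good and say so honestly:\n"
        + json.dumps(rows, default=str)
    )


def grade_week(db_path: str) -> str:
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT module, action, detail, ts FROM decisions "
        "WHERE ts >= datetime('now', '-7 days')"
    ).fetchall()
    con.close()
    # One-shot prompt through the Claude CLI; stdout is the assessment.
    result = subprocess.run(
        ["claude", "-p", build_prompt(rows)], capture_output=True, text=True
    )
    return result.stdout
```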
Final Output
The bot went from first commit to v0.6.0 in 10 days and 10 sessions. It now runs 9 cron jobs, makes dozens of automated decisions per day, and emails me when something needs human judgment (like selling a starter without a backup).
What went fast
- Individual modules — once the login and session management worked, each module was a straightforward loop: fetch data, apply logic, take action. Training assignment took 30 minutes. Sponsor renewal took 20. The pattern repeats.
- SQLite for everything — no external database, no connection strings, no network latency. The entire bot state lives in one file on disk. Queries are instant and backups are just file copies.
- Cron scheduling — same pattern as every other Alienware project. One line per module in the crontab. The simplicity of cron makes it easy to add, test, and adjust schedules.
What needed patience
- The Playwright login — three hours of trial and error. POST requests, form submissions, cookie injection — nothing worked until I watched the actual JS execution in DevTools and realized the session cookies are set by client-side code that only fires on a real browser form submission via keyboard Enter.
- Star-rating HTML parsing — skills 19 and 20 return as `<img>` tags with star icons instead of numeric values. The bot sorted every 19+ player as `NaN` until I added parsing for the `alt` attribute. A silent data bug that only shows up at the top of the skill range.
- Valuation calibration — the first flat-rate model overpaid for old players and underbid on young ones. The routine-aware rewrite required logging 16,000+ market observations to understand what players actually sell for, then building age curves and scarcity premiums on top of that data.
- Endpoint discovery — TrophyManager has no API documentation. Every endpoint was found by clicking through the game in DevTools and watching network requests. Some features aren’t where you’d expect — scouting doesn’t have its own endpoint; it piggybacks on a general player info endpoint with a special parameter. The obvious URL for scouting returns a 404.
- Phantom automation — this was the biggest lesson of the project. Claude would write a module, I’d add the cron job, and I’d assume it was running. But for the first two weeks, most of the “automated” actions never actually fired. Endpoints were mapped wrong. Parsers assumed response shapes that didn’t match reality. The scouting module called an endpoint that returned a 404. The bid module used field names from the wrong AJAX response. The lineup module sent a payload format the game silently rejected. Everything looked right in the code. Claude was confident. The cron jobs ran on schedule. But the actual game state never changed. I’d log in manually and find that no bids had been placed, no lineups had been set, and no scouts had been dispatched — for days. The fix wasn’t more code; it was logging every raw API response, diffing expected vs. actual payloads, and verifying in-game that each action actually took effect. This is the gap between “the script runs without errors” and “the script does what you think it does.”
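The shape of that fix is simple: persist every raw response, then re-fetch game state and confirm the action landed. A minimal sketch, with hypothetical table names and a placeholder URL:

```python
import sqlite3
import time

# Sketch of the phantom-automation fix: log raw responses, then verify
# the action against the game's own state. Names below are illustrative.


def log_response(db: sqlite3.Connection, module: str, url: str, raw: str) -> None:
    """Append every raw API response so expected vs. actual can be diffed later."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS api_log (ts REAL, module TEXT, url TEXT, raw TEXT)"
    )
    db.execute(
        "INSERT INTO api_log VALUES (?, ?, ?, ?)", (time.time(), module, url, raw)
    )
    db.commit()


def verify_bid(session, player_id: int, expected_amount: int) -> bool:
    """Re-fetch bid status after bidding; trust game state, not the script.
    The URL and response shape are placeholders."""
    resp = session.post(
        "https://trophymanager.com/ajax/bids.php", data={"player": player_id}
    )
    bids = resp.json()
    return any(
        b.get("amount") == expected_amount for b in bids.get("own_bids", [])
    )
```

A cron job that logs and verifies can fail loudly the same day, instead of silently doing nothing for two weeks.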
Claude wrote every module with high confidence — correct syntax, clean structure, reasonable logic. But confidence isn’t correctness. When the AI is guessing at undocumented API contracts, it produces code that looks professional and does nothing. The only defense is verifying outcomes in the real system, not just checking that the code runs.
This is the most fun project on the server. Not because it’s the most useful — the market briefing and alert system have more real-world value. But there’s something satisfying about taking Goose for a walk, coming back, and finding an email that says the bot bought three youth prospects overnight. Then checking in on Sunday to read Claude’s honest assessment of whether those purchases were smart.