Bashmatica! #1: A New Shift On An Old Problem
Rest Easy by Automating Your Automations
Your Pipeline Needs A Pipeline
If you subscribed to the NodeBridge Newsletter, a weekly publication focused on workflow automation, welcome to its (slight) evolution: Bashmatica! Most of the prior 8 issues of NodeBridge covered DevOps and QA topics from the perspective of production resiliency and how to implement enterprise-grade monitoring and error handling; on that front, nothing much will change for you. This update presents a sharper focus on those concepts through the lens of my own experiences and the incredible opportunities I see emerging with AI and automation in DevOps and QA.
Why Bashmatica?
Honestly, in the lead-up to publishing this newsletter, I racked my brain for weeks trying to come up with a name that captured its spirit, and for weeks I failed utterly.
Around NodeBridge #4, on the way to the park with my son, it just hit me - BASHMATICA!
Good engineers, in DevOps and QA especially, live in the shell; I love bash (or zsh or whatever) and I want to talk about awesome automations that help your processes, save you time, and keep your systems safe - BASH - (AUTO or SYS)MATIC - A.
The shift reflects something I've observed over 13+ years in the industry. DevOps and QA are treated as separate disciplines, but the daily work overlaps more than org charts suggest. Both roles build systems that verify other systems work correctly. Both deal with the consequences when those verification systems fail. Both are now navigating the same question: where does AI actually help, and where is it just adding complexity?
The overlap runs deeper than shared tooling. A QA engineer writing Selenium tests and a DevOps engineer writing deployment scripts face the same fundamental challenge: building automation that remains reliable while everything around it changes. Browser versions update. Dependencies drift. APIs evolve. The code you wrote last month breaks this month, not because you made a mistake, but because the world moved on.
Bashmatica! covers three areas:
Integration Strategies - How to add AI and automation to pipelines without breaking production
Honest Tool Assessments - What works, what doesn't, and when to avoid a tool entirely
Case Studies - What's hype vs. what's actually working
This first issue is about a problem that spans both worlds: an automation is never complete until its own maintenance is automated.
The 2am Deployment That Didn’t
Production deployments at my organization ran late at night. Fewer users, lower risk, and plenty of time to roll back if needed. The pipeline was solid: code passed review, unit tests green, integration tests green, and finally a Selenium suite that ran against a staging environment before the deployment could proceed.
Until the night it didn't.
The deployment stalled. Not because of a bug in the code, and not because a failing test caught a real regression; it stalled because Chrome had auto-updated on the CI server and the chromedriver binary was now incompatible.
SessionNotCreatedException: Message: session not created:
This version of ChromeDriver only supports Chrome version 114
Current browser version is 115.0.5790.102

The fix was manual: SSH into the server, download the correct chromedriver, replace the binary, restart the pipeline. Twenty minutes of firefighting for a problem that had nothing to do with the code being deployed.
This happened five more times over the next 16 months. Different browsers, same pattern. Firefox updates, geckodriver breaks. Edge updates, msedgedriver breaks. Each incident required someone to interrupt their night to fix a dependency mismatch that should never have been a problem in the first place.
The conventional wisdom about test automation is that you invest time upfront to save time later. That's true, but it's incomplete. Automation has a maintenance tax, and if you don't budget for it, you pay in unexpected ways.
Every automated system depends on external components that change without asking permission. Browser drivers are one example. Here are a few others:
Package dependencies drift when lockfiles aren't enforced or when upstream maintainers push breaking changes in minor versions
Docker base images update and introduce subtle behavior changes in the runtime environment
API contracts shift when upstream services release new versions without warning
SSL certificates expire on schedules no one remembers setting
Cloud provider SDKs deprecate methods between minor versions and break builds
The pattern is consistent: something outside your control changes, and your automation breaks. Not because your code is wrong, but because the ecosystem moved.
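One way to make this kind of drift visible before it breaks a build is a simple pin-versus-installed comparison. Here's a minimal sketch; the tool names and version numbers are illustrative, not from any real pipeline:

```shell
#!/bin/bash
# Minimal drift check: compare a pinned version against what's actually
# installed. Tool names and versions here are illustrative placeholders.
check_pin() {
    local name="$1" pinned="$2" actual="$3"
    if [ "$pinned" != "$actual" ]; then
        echo "DRIFT: $name pinned=$pinned actual=$actual"
        return 1
    fi
    echo "OK: $name $actual"
}

# In a real pipeline, "actual" would come from e.g. `node --version`
check_pin "node" "20.11.1" "20.11.1"
check_pin "terraform" "1.6.2" "1.7.0" || echo "-> fail the build, or trigger an auto-update"
```

Run as a pre-build step, a non-zero exit here turns a mystery 2am failure into a clear, named mismatch.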
Most teams treat these failures as one-off incidents. They fix the immediate problem and move on. But incident-by-incident firefighting obscures the cumulative cost. Those six browser driver failures over 16 months cost roughly two hours of engineer time each. Twelve hours total, plus the context-switching cost of being pulled away from other work, plus the deployment delays, plus the erosion of confidence in the pipeline by the c-suite.
When your test suite fails for reasons unrelated to test quality, people start ignoring the test suite and the execs begin questioning the value you provide.
Automating Your Automations
After the sixth browser driver failure, I wrote a script. The goal was simple: before running the test suite, check if the browser driver matches the installed browser version, and update it if necessary.
The script needed to handle multiple browsers because our test suite ran against Chrome, Firefox, and Edge depending on the test target. The implementation ran on a Linux CI server, not local development machines.
Here's the core pattern for Chrome (the full script handles Firefox and Edge as well):
#!/bin/bash
BROWSER_TYPE="${1:-chrome}"
DRIVER_DIR="/opt/webdrivers"

get_chrome_version() {
    # e.g. "Google Chrome 115.0.5790.102" -> "115.0.5790.102"
    google-chrome --version | grep -oP '\d+\.\d+\.\d+\.\d+' | head -1
}

update_chromedriver() {
    local version=$(get_chrome_version)
    local major=$(echo "$version" | cut -d. -f1)
    # Chrome 115+ uses the new Chrome for Testing endpoints
    if [ "$major" -ge 115 ]; then
        # ${version%.*} drops the last component: 115.0.5790.102 -> 115.0.5790
        local driver_version=$(curl -s "https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_${version%.*}")
        local driver_url="https://storage.googleapis.com/chrome-for-testing-public/${driver_version}/linux64/chromedriver-linux64.zip"
    else
        local driver_version=$(curl -s "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_${major}")
        local driver_url="https://chromedriver.storage.googleapis.com/${driver_version}/chromedriver_linux64.zip"
    fi
    curl -sL "$driver_url" -o /tmp/chromedriver.zip
    unzip -o /tmp/chromedriver.zip -d /tmp/
    # Chrome for Testing zips nest the binary one directory deeper
    mv /tmp/chromedriver-linux64/chromedriver "$DRIVER_DIR/chromedriver" 2>/dev/null \
        || mv /tmp/chromedriver "$DRIVER_DIR/chromedriver"
    chmod +x "$DRIVER_DIR/chromedriver"
    rm -rf /tmp/chromedriver*
}

The full browser-agnostic script with Firefox and Edge support is available on GitHub: bashmatica-scripts/webdriver-updater
The script runs as a pre-test step in the CI pipeline. If the driver is already current, the update takes seconds. If Chrome updated overnight, the script handles it before the tests run.
After implementation, this particular problem never broke a deployment again.
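The "already current" check that keeps the pre-test step fast is just a major-version comparison. A sketch, with function names of my own and example version strings standing in for live `google-chrome --version` and `chromedriver --version` output:

```shell
#!/bin/bash
# Sketch of the pre-test "is the driver already current?" gate.
# In the pipeline, the arguments would be the live --version strings.
major_of() {
    # "ChromeDriver 115.0.5790.170 (...)" -> 115
    echo "$1" | grep -oE '[0-9]+' | head -1
}

needs_update() {
    # true (exit 0) when browser and driver major versions differ
    [ "$(major_of "$1")" != "$(major_of "$2")" ]
}

if needs_update "Google Chrome 116.0.5845.96" "ChromeDriver 115.0.5790.170"; then
    echo "mismatch: update the driver before running tests"
fi
```

Matching on the major version alone is usually sufficient for chromedriver compatibility, which is what makes the no-op path cheap.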
The Higher Level
The browser driver problem is a specific instance of a general principle: if a maintenance task has bitten you more than twice, consider automating it.
The calculation isn't purely about time saved. It's about:
Predictability - Scheduled automation beats reactive firefighting
Consistency - Scripts don't forget steps or make typos under pressure at 2am
Documentation - The script itself documents the process better than any wiki page could
Transferability - New team members don't need institutional knowledge to handle failures
The counterargument is that writing automation takes time, and not every task recurs often enough to justify it.
That's fair.
But the threshold is lower than most engineers assume. A task that takes 20 minutes and happens quarterly is worth two hours of automation effort if only to keep the c-suite from getting nervous.
The hidden cost isn't the task itself. It's the interruption, the context switch, the mental load of remembering that this thing needs to be done manually.
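For the skeptics, the quarterly 20-minute example works out like this (the numbers come straight from the paragraph above):

```shell
# Back-of-envelope payback for automating a recurring manual task.
task_minutes=20       # one manual occurrence
per_year=4            # quarterly
build_minutes=120     # two hours to write the automation

yearly_cost=$(( task_minutes * per_year ))              # 80 minutes/year
payback_months=$(( build_minutes * 12 / yearly_cost ))  # 18 months to break even
echo "manual cost: ${yearly_cost} min/yr, payback: ~${payback_months} months"
```

Eighteen months looks marginal on paper, which is exactly the point: the raw minutes understate the real price, because the interruption and context-switch costs never show up in the arithmetic.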
And just a quick note, because I can hear the chirping already: despite the explosion of Playwright and the AI/MCP-driven shift toward browser agents running test cases through simple, config-defined codeless orchestration layers, Selenium still has its place; especially when your test scope needs full-browser confirmation rather than browser-context approximation.
Quick Tip: AI-Assisted Error Diagnosis
When a CI pipeline fails with a cryptic error, you can use Claude's API directly from the command line to get an explanation:
# Pipe an error message to Claude for explanation
echo "SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 114" | \
curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "content-type: application/json" \
    -H "anthropic-version: 2023-06-01" \
    -d "$(jq -n --arg err "$(cat)" '{
        model: "claude-3-haiku-20240307",
        max_tokens: 500,
        messages: [{role: "user", content: "Explain this CI/CD error and suggest fixes:\n\n\($err)"}]
    }')" | jq -r '.content[0].text'

Useful for errors you haven't seen before, or for getting a second opinion on root cause. Haiku is fast and cheap enough to use liberally in debugging workflows.
Quick Wins
🟢 Easy: Add the browser driver update script to your CI pipeline as a pre-test step. Start with whichever browser your tests use most frequently (probably Chrome).
🟡 Medium: Audit your pipeline for other external dependencies that could drift. Make a list of everything that isn't pinned to a specific version, then prioritize by failure impact.
🔴 Advanced: Set up a scheduled job that checks for dependency updates weekly and opens a PR with version bumps, rather than waiting for failures to force the update.
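For the advanced tier, here's a dry-run sketch of what such a weekly job could look like. The branch naming, the version endpoint, and the gh invocation are placeholders to adapt; real use needs git and the GitHub CLI (gh) on the runner:

```shell
#!/bin/bash
# Dry-run sketch of a weekly dependency-bump job. The `run` wrapper
# prints commands instead of executing them; swap it for real execution
# once the printed plan looks right.
run() { echo "+ $*"; }   # print instead of execute while testing

BRANCH="deps/weekly-bump-$(date +%Y%m%d)"

run git checkout -b "$BRANCH"
run curl -s "https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_STABLE"
run git commit -am "chore: weekly dependency bump"
run git push -u origin "$BRANCH"
run gh pr create --title "Weekly dependency bump" --body "Automated version refresh; review before merging."
```

Hook it to cron or a scheduled CI job; opening a PR rather than pushing to main keeps a human in the loop for the final call.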
Next Week
We'll look at where LLMs actually help in CI/CD pipelines, and where they're currently more trouble than they're worth. Hint: it's not the use cases getting the most attention.
Thanks for reading the (kinda) first issue of Bashmatica!. If you have automation maintenance stories, or questions about topics you'd like covered, reply to this email. I read everything and your input shapes this publication.
P.S. You're an early subscriber, which means you're here before the patterns are fully established. Your feedback matters more now than it will in six months. If you got anything out of this issue, please consider sharing it with friends and colleagues; that's how we want to grow!
I can help you or your team with:
Production Health Monitors
Workflow Optimization
Deployment Automation
Test Automation
CI/CD Workflows
Pipeline & Automation Audits
Fixed-Fee Integration Checks