← BACK TO DASHBOARD
📖 HELP & DOCS
📋 Quick Navigation
Core Features
Network & DNS
System & Linux
Security & IAM
Data, Dev & Scripting
Advanced Forensic Tools
🖥️ Workstation & Privacy — Persistent Browser Workstation
What it is: MYSYSAD has evolved beyond a read-only toolkit into a Persistent Browser Workstation — a fully configurable, stateful dashboard that remembers your preferences, layout, and linked files across sessions. All state is stored exclusively inside your own browser. No account is required, no server receives your data, and no database exists.
Pinning & Starring Tools
Every tool card on the main dashboard shows a ☆ star icon in its top-right corner when you hover over the card. Clicking the star pins the tool to the top of the grid so your most-used tools are always within reach.
- How to pin: Hover any tool card → click the ☆ icon. The star turns gold and glows to confirm the pin.
- Pinned section: Pinned tools are grouped under a 📌 PINNED TOOLS header at the very top of the grid, sorted alphabetically within the group.
- All Tools section: Unpinned tools follow underneath, also sorted alphabetically, under an All Tools divider.
- How to unpin: Click the gold ★ on any pinned card to remove it from the pinned section. The card moves back to the All Tools section immediately.
- Persistence: Your pinned tool list is saved to localStorage under the key mysysad_favorites and restored every time you reload the page.
- Search compatibility: The global search (⌘K / Ctrl+K) and category filter chips work across both the Pinned and All Tools sections simultaneously — pinning does not affect discoverability.
Typical pinning workflow:
1. Open the dashboard
2. Hover CIDR Calculator → click ☆ → card moves to 📌 PINNED TOOLS
3. Hover JWT Decoder → click ☆ → card moves to 📌 PINNED TOOLS
4. Reload the page → both cards are still pinned at the top
5. Use 📤 Export Pins in the Recent Tools sidebar to save your layout
💡 Tip: Pin the 4–6 tools you use every day. The grid re-sorts instantly on every pin/unpin — there is no save button; changes are immediate and automatic.
Configuration Portability — Export & Import Pins
Your pinned layout can be exported to a .json file and imported in any other browser or on any device — or shared with your team so everyone starts with the same workstation layout.
- In the Recent Tools sidebar card, two small icon buttons sit next to the "clear" button:
- 📤 (Export Pinned) — downloads a mysysad-pins.json file containing the IDs of all currently pinned tools.
- 📥 (Import Pinned) — opens a file picker. Select a mysysad-pins.json file; the dashboard validates every tool ID against the current registry, loads the valid pins into localStorage, and immediately re-renders the grid.
- The exported JSON has the shape {"mysysad_pins": ["cidr","jwt",...], "exported": "ISO-timestamp"} — it is human-readable and editable in any text editor.
- Unknown or renamed tool IDs in an imported file are silently skipped — the import will never corrupt your existing layout.
Example mysysad-pins.json:
{
"mysysad_pins": ["cidr", "jwt", "chmod", "dns", "commands"],
"exported": "2026-02-25T14:30:00.000Z"
}
💡 Tip: Export your pins before clearing your browser cache — clearing it wipes localStorage. Keep a copy of mysysad-pins.json in your dotfiles repo or a shared team folder so anyone can adopt the standard workstation layout with one click.
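Because the export format is plain JSON, it can also be scripted outside the browser. A minimal Python sketch (assuming only the {"mysysad_pins": [...]} shape shown above) that merges two exported pin files into one deduplicated team layout:

```python
import json

def merge_pin_exports(*exports):
    """Merge exported pin lists, dropping duplicates while preserving order."""
    merged, seen = [], set()
    for export in exports:
        for tool_id in export.get("mysysad_pins", []):
            if tool_id not in seen:
                seen.add(tool_id)
                merged.append(tool_id)
    return {"mysysad_pins": merged}

alice = {"mysysad_pins": ["cidr", "jwt", "chmod"]}
bob = {"mysysad_pins": ["jwt", "dns"]}
print(json.dumps(merge_pin_exports(alice, bob)))
# {"mysysad_pins": ["cidr", "jwt", "chmod", "dns"]}
```

Import the merged file as usual; any IDs that no longer exist in the registry are skipped on import.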
Zero-Trust Privacy — Where Your Data Lives
MYSYSAD operates on a strict zero-trust, zero-upload architecture. Understanding this model is important when you use the dashboard with sensitive data such as JWT tokens, certificates, IAM policies, or server logs.
- No server processing: Every tool — CIDR calculator, JWT decoder, log scrubber, PKI inspector, Python sandbox — runs entirely inside your browser tab using JavaScript and WebAssembly. No input you type is sent to any MYSYSAD server.
- No database: There is no backend database. Nothing you create or configure on the dashboard is stored anywhere except your own browser's memory and storage APIs.
- localStorage scope: Data saved to localStorage (pins, scratchpad content, dark-mode preference, world-clock timezones, recently viewed commands) is private to your browser profile on your device. It is not synced, not transmitted, and not accessible to any server.
- IndexedDB scope: The File System Access API handle (for Local Disk Sync) is stored in IndexedDB — also private to your browser profile. It contains only a reference to a file you explicitly chose; the file's contents are never cached or transmitted.
- Python Sandbox: Files uploaded to the Python Sandbox are written into Pyodide's in-memory virtual filesystem inside a Web Worker. They exist only in RAM for the duration of your session and are gone when you close the tab.
- Log Scrubber & PKI Inspector: Log data and certificate PEM strings are processed entirely in JavaScript in your browser. They are never sent to any endpoint.
- News feed requests: The only outbound network requests MYSYSAD makes are the news feed fetches (Reddit, HN, CISA, etc.) from your browser to those third-party sources — and DNS-over-HTTPS queries for the DNS Propagation Checker. No request payload contains any data you have entered.
💡 Tip: If you want to verify this yourself, open browser DevTools → Network tab, then paste a JWT token or certificate into any tool. You will see zero outbound requests to mysysad.com carrying your data. The only requests are the initial page-load assets (HTML, CSS, JS) and any news feed polls you trigger.
📰 News Feeds
What it does: A full-page news aggregator pulling live stories from Reddit, Hacker News, CISA advisories, Krebs on Security, and LWN.net — filtered and sorted for sysadmins. Available at /news.html or via the 📰 News link in the top navigation.
Category Tabs
Seven feed categories, each pulling from different sources:
- Sysadmin — r/sysadmin, r/netsec, Hacker News (linux/server/devops/security queries)
- Security — r/netsec, r/cybersecurity, RSS security advisories
- Linux — r/linux, LWN.net kernel & distro coverage
- Cloud / DevOps — r/devops, r/kubernetes, cloud-focused RSS
- Government — CISA advisories, r/netsec
- Programming — r/programming, r/Python, r/javascript
- Homelab — r/homelab, r/selfhosted, r/datahoarder
Story Cards
- Each story shows a score (upvotes/points), colour-coded source badge, subreddit or origin, age, and comment count
- HN orange · REDDIT red · CISA blue · KREBS red · LWN green
- A HOT tag appears on high-scoring stories
- Hover any story row to reveal two action buttons: OPEN (opens article in new tab) and COMMENTS (goes straight to the discussion thread)
Toolbar Controls
- ⏳ REFRESH — re-fetches all sources for the current category
- Sort dropdown — order by Top Score, Most Recent, or Most Comments
- Comfy / Compact density toggle — Comfy shows full metadata and domain; Compact collapses to title + score only, so you can scan more stories at once
- Load More button at the bottom — stories load 40 at a time; click to page through the full set
Right Sidebar
- Stats — live counts of Stories Loaded, Sources Active, Average Score, and Hot Stories in the current feed
- Sources — toggle switches for each source (Reddit, HN, CISA, Krebs, LWN). Flip a toggle to instantly hide or show that source's stories without reloading — the Sources Active stat updates immediately
- Trending Tags — auto-extracted keywords from current story titles. Click any tag to filter the feed to only stories containing that word; click again to clear
- Score Filter — slider to set a minimum score threshold; drag right to show only high-engagement stories
Typical workflow:
1. Open News → Security tab
2. Drag the score slider to 50+ to filter noise
3. Disable the Reddit toggle to focus on CISA / Krebs advisories
4. Click a Trending Tag like "vulnerability" to narrow further
5. Hover a story → click OPEN or COMMENTS
💡 Tip: Reddit feeds are fetched directly from Reddit's public JSON API — no account needed. Hacker News uses the Algolia search API filtered to tech/sysadmin keywords with 10+ points. Weekly threads and meta posts (e.g. "Moronic Monday") are automatically filtered out.
🌐 CIDR Calculator
What it does: Calculates IP address ranges, subnet masks, and network information from CIDR notation.
How to use:
- Enter an IP address with CIDR notation (e.g., 192.168.1.0/24)
- Click CALCULATE
- View network details: IP range, subnet mask, broadcast address, usable hosts
Example:
Input: 10.0.0.0/8
→ Network: 10.0.0.0 | First: 10.0.0.1 | Last: 10.255.255.254
→ Broadcast: 10.255.255.255 | Usable Hosts: 16,777,214
💡 Tip: Common CIDR blocks: /24 = 256 IPs, /16 = 65,536 IPs, /8 = 16.7M IPs
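The calculator's output can be cross-checked with Python's standard ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/8")
usable = net.num_addresses - 2  # minus the network and broadcast addresses
print(f"Network:      {net.network_address}")
print(f"Broadcast:    {net.broadcast_address}")
print(f"Netmask:      {net.netmask}")
print(f"Usable hosts: {usable:,}")  # 16,777,214
```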
⏰ Cron Generator
What it does: Generates cron expressions for scheduling tasks on Linux/Unix systems.
How to use:
- Select timing options from the dropdowns (minute, hour, day, month, day of week)
- Or enter a cron expression to see its human-readable description
- Click GENERATE to create the cron syntax
- Copy the generated expression to your crontab
Examples:
0 2 * * * = Every day at 2:00 AM
*/15 * * * * = Every 15 minutes
0 9 * * 1-5 = Weekdays at 9:00 AM
0 0 1 * * = First day of every month at midnight
💡 Tip: Use * for "any", */n for "every n", and 1-5 for ranges.
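To see how those field patterns expand, here is a small illustrative sketch — expand_field is a hypothetical helper for demonstration, not part of the tool:

```python
def expand_field(field, lo, hi):
    """Expand one cron field ('*', '*/n', 'a-b', 'a,b,c') into the values it matches."""
    values = set()
    for part in field.split(","):
        step = 1
        if "/" in part:                      # '*/15' -> every 15th value
            part, step = part.split("/")
            step = int(step)
        if part == "*":
            rng = range(lo, hi + 1)
        elif "-" in part:                    # '1-5' -> inclusive range
            a, b = map(int, part.split("-"))
            rng = range(a, b + 1)
        else:                                # single value
            rng = range(int(part), int(part) + 1)
        values.update(rng[::step])
    return sorted(values)

print(expand_field("*/15", 0, 59))  # [0, 15, 30, 45]
print(expand_field("1-5", 0, 6))    # [1, 2, 3, 4, 5]
```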
🔓 JWT Decoder
What it does: Decodes JSON Web Tokens (JWT) to view header and payload information.
How to use:
- Paste a JWT token into the input field
- Click DECODE
- View the decoded header and payload in JSON format
- Check token expiration and other claims
JWT Format:
eyJhbGc...header.eyJzdWI...payload.SflKxw...signature
💡 Tip: This tool only DECODES tokens — no validation or signature verification is performed.
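The decode step itself is just two base64url-encoded JSON segments. A Python sketch of the same decode-only behaviour (the signature is deliberately ignored, exactly as in the tool):

```python
import base64, json

def b64url_encode(obj):
    """JSON-encode an object, then base64url-encode it with padding stripped."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_jwt(token):
    """Decode a JWT's header and payload WITHOUT verifying the signature."""
    header_b64, payload_b64, _sig = token.split(".")
    pad = lambda s: s + "=" * (-len(s) % 4)  # restore the stripped padding
    return (json.loads(base64.urlsafe_b64decode(pad(header_b64))),
            json.loads(base64.urlsafe_b64decode(pad(payload_b64))))

# Build a demo token to decode (signature segment is a placeholder)
token = ".".join([b64url_encode({"alg": "HS256", "typ": "JWT"}),
                  b64url_encode({"sub": "1234567890", "name": "alice"}),
                  "fake-signature"])
header, payload = decode_jwt(token)
print(header["alg"], payload["name"])
```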
🔤 Base64 Encoder/Decoder
What it does: Encodes text to Base64 or decodes Base64 back to plain text.
How to use:
- Enter text or Base64 string in the input field
- Click ENCODE to convert text → Base64
- Click DECODE to convert Base64 → text
Text: "Hello World" → Base64: SGVsbG8gV29ybGQ=
💡 Tip: Useful for encoding credentials, debugging API responses, or handling binary data.
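The same conversion in Python's standard base64 module:

```python
import base64

encoded = base64.b64encode(b"Hello World").decode()
decoded = base64.b64decode(encoded).decode()
print(encoded)  # SGVsbG8gV29ybGQ=
print(decoded)  # Hello World
```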
🔒 Hash Generator
What it does: Generates cryptographic hashes (MD5, SHA-1, SHA-256) from text input.
How to use:
- Enter text to hash
- Click GENERATE
- View MD5, SHA-1, and SHA-256 hashes
- Click any hash to copy it
💡 Tip: Use for file integrity checks, quick hash comparisons, or debugging. Note that MD5 and SHA-1 are cryptographically broken — prefer SHA-256 for anything security-sensitive.
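The equivalent hashes can be generated with Python's hashlib:

```python
import hashlib

text = b"Hello World"
digests = {algo: hashlib.new(algo, text).hexdigest()
           for algo in ("md5", "sha1", "sha256")}
for algo, digest in digests.items():
    print(f"{algo.upper():7} {digest}")
```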
🎨 Color Converter
What it does: Converts colors between HEX, RGB, and HSL formats.
How to use:
- Enter a color in any format (HEX, RGB, or HSL) or use the color picker
- Click CONVERT
- View all format conversions side by side
- Click RANDOM for a random color
HEX: #FF5733 | RGB: rgb(255, 87, 51) | HSL: hsl(9, 100%, 60%)
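The HEX ↔ RGB part of the conversion is simple enough to sketch in a few lines of Python (hex_to_rgb and rgb_to_hex are illustrative helpers, not the tool's code):

```python
def hex_to_rgb(hex_color):
    """'#FF5733' -> (255, 87, 51): two hex digits per channel."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    """(255, 87, 51) -> '#FF5733'"""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_to_rgb("#FF5733"))    # (255, 87, 51)
print(rgb_to_hex(255, 87, 51))  # #FF5733
```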
🔍 Regex Tester
What it does: Tests regular expressions against sample text with live match highlighting, capture group extraction, find-and-replace, and one-click Python export.
How to use:
- Type a pattern in the pattern bar — matches highlight instantly as you type
- Toggle flags using the chip buttons: g Global, i Ignore case, m Multiline, s Dotall — active flags glow in accent blue
- Matches tab: lists every match with position, length, and numbered capture groups
- Highlighted tab: shows the full test string with all matches underlined in a readable semi-transparent highlight
- Replace tab: enter a replacement string (supports $1, $2 backreferences) and click ▶ Replace
- Python Export tab: auto-generates a complete re module snippet — copy it or send it directly to the Scratchpad with → Scratchpad
- Click a Quick Pattern chip to load a preset (email, IPv4, URL, hex color, phone, etc.)
Common patterns:
\d{3}-\d{3}-\d{4} = US phone number
[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,} = email
\b(?:\d{1,3}\.){3}\d{1,3}\b = IPv4 address
#[0-9a-fA-F]{6} = hex color code
Python Export example:
import re
pattern = re.compile(r'\d{3}-\d{3}-\d{4}', re.UNICODE)
for m in pattern.finditer(text):
    print(f"Match: {m.group()!r} at pos {m.start()}-{m.end()}")
💡 Tip: Use the Python Export tab to instantly convert a tested pattern into production-ready code. Send it to the Scratchpad to build up a library of validated patterns.
🔐 Chmod Calculator
What it does: Converts Linux file permissions between symbolic notation (rwxr-xr-x) and octal notation (755), and explains what each permission means.
How to use:
- Toggle the checkboxes for Owner, Group, and Others permissions (Read, Write, Execute)
- Or enter an octal value directly (e.g., 755, 644, 600)
- The tool instantly shows both the octal and symbolic equivalent
- Copy the chmod command with one click
Common permission sets:
755 → rwxr-xr-x (owner full, group/others read+execute) — scripts, binaries
644 → rw-r--r-- (owner read+write, others read-only) — config files
600 → rw------- (owner only, private) — SSH keys, secrets
777 → rwxrwxrwx (everyone full access) — avoid in production
💡 Tip: SSH private keys should always be chmod 600 or SSH will refuse to use them.
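The octal/symbolic mapping is mechanical — each octal digit is a 3-bit read/write/execute mask. An illustrative Python sketch of both directions (not the tool's own code):

```python
def octal_to_symbolic(octal):
    """755 -> 'rwxr-xr-x': each digit is tested against the 4/2/1 bits."""
    return "".join("rwx"[i] if int(d) & (4 >> i) else "-"
                   for d in str(octal) for i in range(3))

def symbolic_to_octal(sym):
    """'rw-r--r--' -> '644': sum the 4/2/1 bits per 3-character group."""
    return "".join(str(sum(4 >> i for i, c in enumerate(sym[j:j + 3]) if c != "-"))
                   for j in (0, 3, 6))

print(octal_to_symbolic(755))          # rwxr-xr-x
print(symbolic_to_octal("rw-------"))  # 600
```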
📋 YAML Validator / Formatter
What it does: Validates YAML syntax, formats/pretty-prints YAML, and converts YAML to JSON.
How to use:
- Paste your YAML content into the input field
- Click VALIDATE to check for syntax errors
- Click FORMAT to pretty-print and normalize indentation
- Click TO JSON to convert the YAML to JSON format
- Error messages show line numbers to pinpoint issues quickly
Common YAML pitfalls caught:
• Tabs instead of spaces (YAML requires spaces)
• Missing colons after keys
• Inconsistent indentation levels
• Unquoted strings with special characters (:, #, &, *)
💡 Tip: Paste Kubernetes manifests, Docker Compose files, Ansible playbooks, or CI/CD config to validate before deploying.
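A Python sketch of the same validate-and-convert flow, using the third-party PyYAML package (assumed installed via pip install pyyaml):

```python
import json
import yaml  # third-party PyYAML

good = """\
services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"
"""
data = yaml.safe_load(good)
print(json.dumps(data))  # the TO JSON conversion

bad = "services:\n\tweb: nginx\n"  # tab indentation — invalid YAML
try:
    yaml.safe_load(bad)
    tab_error = None
except yaml.YAMLError as exc:
    tab_error = type(exc).__name__  # the parser rejects the tab
print("Tab error caught:", tab_error)
```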
🔗 Curl Builder
What it does: Generates curl commands from a visual form — choose method, URL, headers, auth, body, and options without memorizing flags.
How to use:
- Enter the target URL
- Select the HTTP method (GET, POST, PUT, PATCH, DELETE, HEAD)
- Add request headers as key-value pairs
- Choose authentication type (None, Bearer Token, Basic Auth, API Key)
- Add a request body for POST/PUT requests (JSON, form data, raw)
- Toggle options: follow redirects (-L), verbose (-v), insecure (-k), silent (-s)
- Click GENERATE to see the full curl command
- Copy with one click
Example output:
curl -X POST https://api.example.com/v1/users \
-H "Content-Type: application/json" \
-H "Authorization: Bearer eyJhbGci..." \
-d '{"name":"alice","role":"admin"}'
💡 Tip: Use this when testing REST APIs, webhooks, or debugging HTTP endpoints. The verbose flag (-v) shows request/response headers, which is invaluable for debugging auth issues.
📜 Scripts Library
What it does: Provides 50+ ready-to-use Python scripts for common sysadmin tasks.
How to use:
- Click the 📜 SCRIPTS button in the tools section
- Browse by category (ALL, SYSTEM, DOCKER, NETWORK, FILES, SECURITY)
- Use the search box to find specific scripts
- Click 📋 COPY to copy a script to the clipboard
- Paste into your terminal or save as a .py file
Categories:
- SYSTEM: CPU/memory monitoring, disk space, system info
- DOCKER: Container management, cleanup, resource usage
- NETWORK: Port scanning, DNS lookup, ping tests
- FILES: Find large files, batch rename, duplicate finder
- SECURITY: SSH monitoring, password tools, backups
💡 Tip: All scripts are production-ready and include error handling. Review before running on production systems.
🐍 Python Sandbox
What it does: Runs real Python entirely in your browser using Pyodide (WebAssembly). No server, no installs, no data leaves your machine. Scripts run in a background Web Worker, so even infinite loops can't freeze the page.
How to use:
- Click the 📜 SCRIPTS button, then the PYTHON SANDBOX tab
- Write or paste Python code into the editor
- Click ▶ RUN CODE — the first run takes a few seconds while Pyodide loads (~10 MB); after that it's cached and fast
- Output appears in the green terminal panel below
- Click ⏹ KILL SCRIPT to instantly terminate any runaway script
- Click 📂 UPLOAD FILE to load a local file (logs, CSVs, configs) into the sandbox — then open it with open('filename') exactly like native Python
Test Scripts
Copy and paste these directly into the editor to verify everything is working.
Test 1 — Subnet Matcher
Tests the built-in ipaddress module. Checks a list of IPs against a VPC subnet — a real sysadmin task.
import ipaddress
vpc_subnet = ipaddress.ip_network('10.0.0.0/16')
test_ips = ['10.0.5.12', '192.168.1.5', '10.0.250.99', '172.16.0.1', '10.1.0.5', '8.8.8.8']
print(f"Scanning for IPs inside {vpc_subnet}...\n")
print("-" * 35)
for ip_str in test_ips:
    ip = ipaddress.ip_address(ip_str)
    if ip in vpc_subnet:
        print(f"[MATCH] {ip} belongs to VPC.")
    else:
        print(f"[IGNORED] {ip} is external.")
Test 2 — JSON Alert Cruncher
Tests JSON parsing. Simulates taking a raw API response from a monitoring tool and extracting only the servers that need attention.
import json
raw_data = """
{"fleet": [
  {"hostname": "web-01", "status": "healthy", "uptime_days": 45},
  {"hostname": "db-01", "status": "critical", "uptime_days": 412},
  {"hostname": "cache-01", "status": "healthy", "uptime_days": 12},
  {"hostname": "web-02", "status": "warning", "uptime_days": 85}
]}
"""
data = json.loads(raw_data)
print("=== SERVER ACTION REPORT ===\n")
for server in data['fleet']:
    if server['status'] in ['critical', 'warning']:
        print(f"⚠️ ALERT: {server['hostname']} is {server['status'].upper()}")
        if server['uptime_days'] > 365:
            print(f"  -> Note: Hasn't rebooted in over a year ({server['uptime_days']} days).")
Test 3 — Kill Switch Stress Test
Proves the Web Worker isolation works. Run this, watch it loop, then hit ⏹ KILL SCRIPT. Without the Worker this would permanently freeze the browser tab.
import time
print("Starting an infinite loop...")
print("Hit the red KILL SCRIPT button to stop it!")
print("-" * 40)
counter = 1
while True:
    print(f"Loop {counter}: still running...")
    counter += 1
    time.sleep(0.5)
💡 Tip: Use 📂 UPLOAD FILE to load real log files, then parse them with open('auth.log'). The file is written into Pyodide's virtual filesystem — your actual file never leaves your browser.
🌍 World Clock & Meeting Planner
What it does: Displays live time across multiple timezones with a suite of scheduling tools — a pinned UTC reference, a time-travel slider for planning windows, weekend warnings, relative offset labels, one-click schedule copying, and full JSON/YAML config export.
How to use:
- Add Clocks: Click [+] ADD or use the Quick Add chips (e.g. New York, London, Tokyo) to start tracking a city. You can maintain as many zones as you need — they display in a vertical list, ordered by UTC offset.
- Remove a Clock: Click the × button on any row to remove that timezone from your list.
Pinned UTC & UNIX Epoch
- The UTC row is always pinned at the very top of the clock list and cannot be removed — it is your universal reference anchor.
- Click anywhere on the UTC row to instantly copy the current UNIX epoch timestamp (seconds since 1 Jan 1970) to your clipboard. This is useful when filing incident reports, writing log queries, or setting token expiry values.
Clicking the UTC row copies:
1741004400 ← UNIX epoch at time of click
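Converting a copied epoch back into a readable UTC time takes two lines of Python:

```python
from datetime import datetime, timezone

epoch = 1741004400  # value copied from the UTC row
dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(dt.isoformat())       # human-readable UTC time
print(int(dt.timestamp()))  # round-trips back to the same epoch
```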
Plan Ahead Slider
- An orange slider sits above the clock list. Drag it left or right to shift the displayed time up to 12 hours backward or forward relative to the current moment.
- All clocks — including UTC — update instantly as you drag, so you can see what time it will be in every zone simultaneously.
- A label above the slider shows the current offset (e.g., +3h 30m or Live).
- Click ↺ Reset (or drag back to centre) to return all clocks to live time.
💡 Tip: Use the slider to find a maintenance window that avoids business hours in every timezone your team spans. Drag forward until you find a slot where all zones show late evening or early morning on a weekday.
Relative Offsets
- Next to each timezone abbreviation (e.g., EST, JST), the clock shows a relative offset badge compared to your local browser time — for example +5h, -2h, or same.
- Offsets automatically account for DST changes in both your local zone and the target zone.
Weekend Warning
- A 💤 icon appears beside any timezone row where the currently displayed time (live or planned) falls on a Saturday or Sunday in that location.
- When using the Plan Ahead slider, the weekend warnings update in real time — making it easy to confirm that your planned window does not fall on a weekend for any team member's timezone.
Copy Schedule
- Click the 📋 Copy Times button to copy a clean, formatted schedule string of all currently visible clocks to your clipboard.
- Paste directly into Slack, Teams, email, or a ticket to share the meeting or window time across all zones at once.
Copy Schedule output example:
Schedule: UTC 08:30 PM | Dallas 02:30 PM | London 08:30 PM | Tokyo 05:30 AM+1
Import & Export (JSON / YAML)
- Click ⬇ Export to download your current timezone list as either a JSON or YAML file — choose the format from the dropdown before exporting.
- Click ⬆ Import to restore a previously exported config file. Both JSON and YAML formats are accepted on import.
- Use export to preserve your timezone list before clearing browser cache, switching devices, or sharing a standard clock layout with your team.
Exported JSON example:
{ "timezones": ["UTC","America/Chicago","Europe/London","Asia/Tokyo"], "exported": "2026-03-01T10:00:00Z" }
Equivalent YAML export:
timezones:
- UTC
- America/Chicago
- Europe/London
- Asia/Tokyo
exported: "2026-03-01T10:00:00Z"
💡 Tip: The World Clock is performance-optimised. When the Plan Ahead slider is active it ticks every second. When you are in Live mode and switch to another browser tab, the clock throttles automatically to once-per-minute updates to save CPU and battery — it resumes full speed when you return to the tab.
💾 Scratchpad & Local Disk Sync
What it does: The Scratchpad is a persistent notepad in the right sidebar for pasting logs, IP lists, commands, tool output, or any temporary notes. It auto-saves to localStorage as you type. Local Disk Sync extends this by linking the Scratchpad directly to a physical .txt file on your hard drive using the browser's File System Access API — so your notes survive even if browser storage is cleared.
Basic Scratchpad Usage
- Find the 📝 Scratchpad tab in the right sidebar (next to 🌍 World Clock)
- Type or paste any text — content saves automatically to localStorage
- Click 💾 Download to save the current content as a .txt file immediately
- Click 📋 Copy All to copy all content to the clipboard
- Click 🗑️ Clear to erase all content
- Tools across the dashboard (Command Library, Log Parser, Security Header Tester, DNS Propagation, Systemd Generator, etc.) have → Scratchpad or → Dashboard buttons that append their output directly here
Linking a Local File โ Local Disk Sync
Local Disk Sync uses the browser's File System Access API to create a persistent, bi-directional link between the Scratchpad and a physical file on your hard drive. Once linked, every write to the Scratchpad is mirrored to the file in real time.
- In the Scratchpad action bar, click 🔗 Link Local File.
- A file picker opens. Either select an existing .txt file or create a new one.
- Grant the browser write permission when prompted — this permission is scoped to that single file only.
- The Scratchpad header updates to show the linked filename (e.g., notes.txt) with a green sync indicator.
- Every keystroke now writes to both localStorage AND the linked file on disk simultaneously.
- Click 🔗 Unlink File to disconnect the file and return to localStorage-only mode.
💡 Tip: Local Disk Sync requires Chrome 86+, Edge 86+, or Opera 72+. It is not available in Firefox or Safari (the File System Access API is not supported there). The file handle is stored in IndexedDB and restored automatically on the next page load — you won't need to re-select the file each session.
💻 Command Reference
What it does: A searchable library of 200+ Linux/Unix commands with inline flag tooltips, risk ratings, Pipeline Tray for building multi-step one-liners, and recently viewed history.
How to use:
- Open the 💻 COMMANDS tool from the dashboard
- Use the search bar or category chips (PROCESS, NETWORK, FILES, SYSTEM, DOCKER, etc.) to find commands
- Each card shows the command name, description, example usage, and a risk-level badge
- Hover any flag in an example (e.g., -r, -f) to see a tooltip explaining what that flag does
- Click 📋 COPY on a card to copy the example command
- Click → Scratchpad to append the command to your notes
Recently Viewed
- Clicking any card records it in a Recently Viewed strip shown just below the filter bar
- Up to 12 recent commands are shown as green chips — click any chip to instantly jump back to that command (sets search to the command name and scrolls to it)
- History persists across page refreshes via localStorage; click ✕ Clear to reset it
Pipeline Tray
- Click ➕ Pipe on any card to add its example to the Pipeline Tray
- The Tray slides up from the bottom of the screen when it has at least one command
- Commands are joined with | — building a live shell pipeline as you add more
- Each command in the tray has an × remove button
- Click 📋 Copy Pipeline to copy the full piped string to the clipboard
- Click Clear to empty the tray and collapse it
Pipeline example — find top memory-consuming processes:
ps aux → Pipe → sort -rk 4 → Pipe → head -20
Result copied from tray:
ps aux | sort -rk 4 | head -20
Flag tooltip example:
Command: rm -rf /tmp/cache
Hover -r → tooltip: "Recursive — delete directories and their contents"
Hover -f → tooltip: "Force — no prompts, ignore nonexistent files"
💡 Tip: Use the Pipeline Tray to build complex one-liners without memorising syntax. Search for each step ("sort", "grep", "awk"), Pipe them in order, then copy the assembled command. Pay attention to red CRITICAL cards — commands like rm -rf, dd, and mkfs are marked because they are irreversible.
📊 Log Parser
What it does: Parses, filters, and analyses log files from NGINX, Apache, Syslog, Docker/Kubernetes, and JSON structured logs — live in your browser. Generates ready-to-copy grep, awk, and one-liner shell commands based on your active filters.
How to use:
- Paste log content into the input area, or click an Example chip to load a sample
- The format is auto-detected (NGINX access, Apache error, Syslog, Docker, JSON)
- Use the filter row to narrow results: status code (supports 4xx/5xx patterns), IP address (wildcards supported), URL pattern, HTTP method, text search, and log level
- Click ▶ PARSE — the tool renders stats, a timeline, top IPs, top URLs, and the filtered log lines
- The Timeline shows a colour-coded horizontal bar: green = 2xx, amber = 4xx, red = 5xx, blue = info — hover any marker for details
- The Commands section generates three shell equivalents of your current filters: a grep chain, an awk field-based command, and an optimised one-liner
- Each command has its own COPY button and a → Scratchpad button
- Click → All to Scratchpad to send all three commands plus a filter summary as a structured audit block
- The results panel has a COPY ALL button to copy all filtered log lines as plain text
Filter examples:
Status: 5xx → shows only server errors
IP: 192.168.* → wildcard match on a subnet
URL: /api/* → all API endpoint hits
Method: POST → write requests only
Generated command example:
grep -E ' (5[0-9]{2}) ' access.log | grep '192\.168\.' | grep '/api/'
💡 Tip: Use → All to Scratchpad to capture your exact filter + commands as a timestamped audit entry — useful for incident-response documentation.
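The browser-side filtering amounts to a few lines of Python. A simplified sketch (the regex assumes a combined/NGINX-style access-log layout) reproducing the Status 5xx + IP 192.168.* + URL /api/* filter combination above:

```python
import re

# Simplified combined-log pattern: ip, two ignored fields, timestamp, request, status
LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
                  r'"(?P<method>\S+) (?P<url>\S+) [^"]*" (?P<status>\d{3})')

log = """\
192.168.1.10 - - [14/Jul/2025:09:41:00 +0000] "POST /api/users HTTP/1.1" 502
10.0.0.5 - - [14/Jul/2025:09:41:02 +0000] "GET /index.html HTTP/1.1" 200
192.168.1.22 - - [14/Jul/2025:09:41:05 +0000] "GET /api/health HTTP/1.1" 500
"""

# Apply the three filters: Status 5xx, IP 192.168.*, URL /api/*
hits = [m.groupdict() for m in map(LINE.match, log.splitlines())
        if m and m["status"].startswith("5")
        and m["ip"].startswith("192.168.")
        and m["url"].startswith("/api/")]
for h in hits:
    print(h["ip"], h["method"], h["url"], h["status"])
```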
⚙️ Systemd Service Generator
What it does: Generates production-ready .service files for Linux systemd with a live preview that updates as you type. Covers all three sections: [Unit], [Service], and [Install].
Quick Templates:
- 🌐 Web App — Node.js/Python web server with www-data user, restart=always, environment variables
- ⚙️ Background Worker — long-running queue processor with on-failure restart
- 🐳 Docker Container — wraps a Docker container as a systemd unit, depends on docker.service
- ⏰ Timer / Cron Job — one-shot service for use with a .timer unit
- 🗄️ Database — database daemon with Type=notify and reload signal
- 📄 Blank — empty template to fill from scratch
How to use:
- Click a template chip to pre-fill all fields, or start from blank
- Edit any field — the live preview (right panel) updates instantly with syntax highlighting
- Add environment variables with + Add Variable — each KEY=value pair appears as an Environment= line
- Toggle RemainAfterExit for oneshot services that should stay "active" after the command exits
- The Installation Steps box updates the service name live
- Click COPY to copy the file contents, ↓ Download to save as name.service, or → Dashboard to send the full service file + install commands to the Scratchpad
Installation steps (auto-generated):
sudo cp webapp.service /etc/systemd/system/webapp.service
sudo systemctl daemon-reload
sudo systemctl enable webapp
sudo systemctl start webapp
sudo systemctl status webapp
💡 Tip: Use Type=notify with daemons that support sd_notify() — systemd will wait for the ready signal before marking the service as active. Use Type=simple for everything else.
🌐 DNS Propagation Checker
What it does: Queries 12 global DNS resolvers using DNS-over-HTTPS (DoH) directly from your browser to check whether a DNS record has propagated worldwide. Detects inconsistencies and shows per-server response times.
DNS Servers Queried:
- Cloudflare, Cloudflare Security, Google (×2), Quad9, OpenDNS, AdGuard, CleanBrowsing, NextDNS, Control D, Mullvad, BlahDNS
Record Types Supported:
- A — IPv4 address | AAAA — IPv6 address | CNAME — alias
- MX — mail exchange | TXT — text records (SPF, DKIM, verification) | NS — nameserver | SOA — start of authority
How to use:
- Enter a domain (or click an example chip) and select a record type
- Click โถ Check DNS โ the tool queries all 12 resolvers sequentially
- Each server card shows: status badge, resolved value(s), and response time in ms
- Green tint = successfully resolved | Amber tint = different value from majority | Red tint = failed / CORS blocked
- A consistency banner appears when all queries complete: green = fully propagated, amber = inconsistent
- Enable the Auto-refresh toggle to re-query every 30 seconds โ a countdown timer is shown
- Click โ Dashboard to send a structured audit report (domain, record type, grade, per-server table, CLI verification commands) to the Scratchpad
Dashboard audit snippet example:
[DNS PROPAGATION AUDIT] — 14 Jul 2025, 09:41
Domain: example.com | Type: A | Status: ✓ Fully propagated
Checked: 12/12 Successful: 4 Failed: 8 Unique results: 1
── CLI Verification ─────────────────────
dig @1.1.1.1 example.com A
nslookup -type=A example.com 8.8.8.8
💡 Tip: Most resolvers block browser-based DoH queries (CORS); only Cloudflare reliably responds. Failed cards mean CORS blocked the query — not that your DNS is broken. Use the generated dig / nslookup CLI commands for authoritative checking, or visit WhatsmyDNS.net for server-side verification.
🔐 Zero-Trust PKI Inspector
What it does: Parses X.509 certificates locally to extract expiration dates, issuers, and Subject Alternative Names (SANs) without uploading private certs to the internet.
How to use:
- Paste your PEM-encoded certificate (starting with -----BEGIN CERTIFICATE-----).
- The tool instantly decodes and highlights the Validity Period (with a days-remaining countdown).
- View the full list of SANs to ensure all required domains are covered.
- Click → Scratchpad to send a markdown summary to your notes.
Example PEM header:
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJALx...
-----END CERTIFICATE-----
Decoded output:
Subject: CN=example.com
Issuer: Let's Encrypt Authority X3
Valid Until: 2026-09-15 — 203 days remaining
SANs: example.com, www.example.com, api.example.com
💡 Tip: All parsing happens entirely in your browser. Your certificate never leaves your machine — safe to use with internal or wildcard certs. Retrieve a cert from the command line with: openssl s_client -connect host:443 </dev/null 2>/dev/null | openssl x509 -text
๐ก๏ธ AWS IAM & K8s Policy Linter
What it does: Scans JSON (AWS IAM) or YAML (Kubernetes RBAC) policies for dangerous wildcards, overly permissive actions, and privilege escalation vectors.
How to use:
- Paste your raw IAM JSON or K8s Role YAML.
- The linter highlights CRITICAL risks (like "Resource": "*" or "Action": "iam:PassRole").
- Review the exact line numbers and risk descriptions to lock down your permissions before deployment.
Example โ dangerous IAM policy snippet:
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
Linter output:
โ ๏ธ CRITICAL โ Line 3: Wildcard Action grants full AWS access
โ ๏ธ CRITICAL โ Line 4: Wildcard Resource removes all scope restriction
Example โ K8s RBAC privilege escalation:
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
Linter output:
โ ๏ธ CRITICAL โ Wildcard verb on wildcard resource = cluster-admin equivalent
๐ก Tip: Run the linter on every policy before a deployment or PR merge. Pay particular attention to iam:PassRole, sts:AssumeRole, and iam:CreatePolicyVersion โ these are the most common privilege escalation vectors.
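The core wildcard checks can be sketched in a few lines (an illustrative subset, not the tool's actual rule engine):

```python
import json

# Actions commonly abused for privilege escalation (per the tip above).
RISKY_ACTIONS = {"iam:PassRole", "sts:AssumeRole", "iam:CreatePolicyVersion"}

def lint_statement(stmt):
    findings = []
    actions = stmt.get("Action", [])
    actions = [actions] if isinstance(actions, str) else actions
    if "*" in actions:
        findings.append("CRITICAL: wildcard Action grants full access")
    if stmt.get("Resource") == "*":
        findings.append("CRITICAL: wildcard Resource removes all scope restriction")
    for a in actions:
        if a in RISKY_ACTIONS:
            findings.append(f"WARNING: {a} is a common privilege-escalation vector")
    return findings

policy = json.loads('{"Effect": "Allow", "Action": "*", "Resource": "*"}')
for f in lint_statement(policy):
    print(f)
```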
๐ Connection String Builder
What it does: Generates perfectly formatted database URIs and ready-to-use connection code snippets for PostgreSQL, MongoDB, Redis, and MySQL.
How to use:
- Select your database type from the dropdown.
- Fill in the host, port, username, password, and database name.
- The tool generates the standard URI (e.g., postgres://user:pass@host:5432/db).
- Copy the auto-generated implementation code for Python, Node.js, or Go.
PostgreSQL URI example:
postgres://appuser:s3cr3t@db.example.com:5432/production
Python (psycopg2):
import psycopg2
conn = psycopg2.connect("postgres://appuser:s3cr3t@db.example.com:5432/production")
Node.js (pg):
const { Pool } = require('pg');
const pool = new Pool({ connectionString: 'postgres://appuser:s3cr3t@db.example.com:5432/production' });
๐ก Tip: Never hardcode connection strings in source code. Copy the generated URI into an environment variable (DATABASE_URL) and reference it in code. Use .env files locally and secrets managers (AWS Secrets Manager, Vault) in production.
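One detail the builder handles for you: special characters in the password must be percent-encoded or the URI parser misreads them. A sketch of the assembly step (hypothetical helper, sample credentials):

```python
from urllib.parse import quote

def build_postgres_uri(user, password, host, port, db):
    # quote(..., safe='') percent-encodes every reserved character,
    # so '@' or '/' in a password cannot break the URI structure.
    return f"postgres://{user}:{quote(password, safe='')}@{host}:{port}/{db}"

uri = build_postgres_uri("appuser", "p@ss/word", "db.example.com", 5432, "production")
print(uri)  # postgres://appuser:p%40ss%2Fword@db.example.com:5432/production
```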
๐ณ Docker Compose Visualizer
What it does: Reads a docker-compose.yml file and generates a clean, interactive vis-network dependency graph of your container architecture โ services, ports, volumes, networks, links, and healthcheck status.
How to use:
- Paste or upload: Paste YAML into the editor, use ๐ Open File to load a .yml file, or drag-and-drop a compose file onto the page.
- Live parsing: The graph updates automatically as you type (420ms debounce). Parse errors highlight the offending line in the editor and scroll to it.
- 4 demo topologies: The dropdown includes 3-Tier Web Stack, Monitoring (Prometheus/Grafana), Microservices (Kong/Kafka), and CI/CD Pipeline (Gitea/Drone).
- Click any node to open the service detail panel: image, build context, container name, restart policy, command, ports, networks, volumes, depends_on, environment variables, env_file, healthcheck config, labels, deploy settings, and resource limits.
- Search nodes: Type in the search box to highlight matching nodes and dim the rest. First match is auto-focused.
- Export as PNG: Click ๐ธ PNG to download a snapshot of the current graph for documentation or architecture reviews.
- Copy YAML: Click ๐ Copy to copy the editor content to clipboard.
- Zoom +/โ: Dedicated buttons in the graph toolbar, plus fit-to-screen and physics toggle.
Parsing capabilities:
- Services: Box nodes with image name. Healthcheck-enabled services show a โฅ badge.
- Ports: Green circle nodes connected to their service.
- Volumes: Database-shaped nodes with mount path labels on edges (e.g. /var/lib/postgresql/data:ro).
- Networks: Purple ellipse nodes linking services sharing the same network.
- depends_on: Dashed arrow edges labeled "depends".
- links (Compose v2): Legacy links: directive rendered as dashed "link" edges.
- Environment variables: Shown in service tooltips (first 6 vars, then "+N more") and fully listed in the detail panel.
๐ก Tip: Use the visualizer before deploying a new Compose stack to spot missing depends_on relationships or orphaned services. Click any service node to inspect its full config without scrolling through YAML.
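The depends_on edges the graph draws can be derived directly from the parsed Compose data. A sketch over an already-parsed services dict (service names are hypothetical; a real implementation would parse the YAML first, e.g. with PyYAML):

```python
services = {
    "web":   {"image": "nginx", "depends_on": ["api"]},
    "api":   {"image": "myapp", "depends_on": ["db", "redis"]},
    "db":    {"image": "postgres:16"},
    "redis": {"image": "redis:7"},
}

def depends_edges(services):
    # One (service, dependency) edge per depends_on entry.
    return [(svc, dep) for svc, cfg in services.items()
            for dep in cfg.get("depends_on", [])]

print(depends_edges(services))  # [('web', 'api'), ('api', 'db'), ('api', 'redis')]
```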
๐๏ธ Terraform Plan Analyzer
What it does: Parses the raw text output of a terraform plan command into a clean, colour-coded dashboard showing exactly what will be created, modified, or destroyed โ so you can review changes safely before running terraform apply.
How to use:
- Run terraform plan in your terminal and copy the full text output.
- Paste it into the tool.
- Review the colour-coded summary: green = create, amber = modify, red = destroy.
- Pay close attention to the red To Destroy section to confirm you aren't accidentally dropping critical infrastructure.
Paste plan output like this:
Terraform will perform the following actions:
# aws_instance.web will be created
+ resource "aws_instance" "web" {
+ ami = "ami-0c55b159cbfafe1f0"
+ instance_type = "t3.micro"
}
# aws_s3_bucket.logs will be destroyed
- resource "aws_s3_bucket" "logs" { ... }
Analyzer output:
✅ 1 to create ⚠️ 0 to change 🔴 1 to destroy
๐ก Tip: Always run terraform plan -out=tfplan to save the plan, then apply that exact saved plan with terraform apply tfplan โ this prevents drift between what you reviewed and what gets applied. The analyzer works on both plain text and -json plan output.
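The create/change/destroy tally is essentially line matching on the plan text. A heuristic sketch (matching Terraform's human-readable phrasing, not its JSON schema):

```python
import re

def tally(plan_text):
    counts = {"create": 0, "change": 0, "destroy": 0}
    for line in plan_text.splitlines():
        if re.search(r"will be created", line):
            counts["create"] += 1
        elif re.search(r"will be updated in-place", line):
            counts["change"] += 1
        elif re.search(r"will be destroyed", line):
            counts["destroy"] += 1
    return counts

plan = """# aws_instance.web will be created
# aws_s3_bucket.logs will be destroyed"""
print(tally(plan))  # {'create': 1, 'change': 0, 'destroy': 1}
```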
๐ SSO / SAML & JWT Autopsy
What it does: Decodes opaque JWT tokens and massive Base64-encoded SAML Responses to extract the critical claims needed for troubleshooting broken SSO logins โ all locally, nothing sent to a server.
How to use:
- Paste your raw token or Base64 payload into the input.
- The tool auto-detects the format (SAML vs. JWT) and decodes it locally.
- Review the NameID, Issuer, Audience, and expiration timestamps in human-readable format.
JWT decode example:
Input:
eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyQGV4YW1wbGUuY29tIi4uLn0.sig
Decoded payload:
{
"sub": "user@example.com",
"iss": "https://idp.example.com",
"aud": "https://app.example.com",
"exp": 1735689600,
"iat": 1735686000
}
Expiry:
EXPIRED โ Wed, 01 Jan 2025 00:00:00 GMT
SAML decode workflow:
1. Capture the SAMLResponse POST parameter from browser DevTools โ Network tab
2. Paste the Base64 value here โ tool URL-decodes then Base64-decodes the XML
3. Extracted fields: NameID, IssueInstant, NotOnOrAfter, all Attributes
๐ก Tip: The most common SSO failure causes are clock skew (token issued before NotBefore or after NotOnOrAfter), mismatched Audience values, and incorrect NameID format. Check these three fields first in any failed SSO login.
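JWT payload decoding is plain base64url plus JSON (no signature verification, which would require the issuer's key). A sketch with a hand-built token for illustration:

```python
import base64, json, time

def decode_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample token (header.payload.signature); claims are illustrative.
claims = {"sub": "user@example.com", "exp": 1735689600}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{body}.sig"

decoded = decode_payload(token)
print(decoded["sub"], "EXPIRED" if decoded["exp"] < time.time() else "VALID")
```

The padding step matters: JWTs strip the trailing `=` characters, and standard base64 decoders reject unpadded input.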
โธ๏ธ Kubernetes Manifest Builder
What it does: Dynamically generates production-ready YAML for Kubernetes Deployments, Services, and Ingress resources so you don't have to write boilerplate from scratch.
How to use:
- Enter your App Name, Docker Image, Container Port, and Replica count.
- Check the boxes to include a ClusterIP Service or Nginx Ingress.
- Click Copy All to grab the combined, correctly indented YAML blocks ready for kubectl apply.
- Click โ Scratchpad to save the manifest as a timestamped note.
Example โ Deployment output:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1.2.3
          ports:
            - containerPort: 8080
๐ก Tip: Set resource requests and limits on every container to prevent noisy-neighbour problems. After generating the base manifest, add resources.requests.cpu, resources.requests.memory, and their limits equivalents before deploying to production.
๐งน Log Scrubber & Anomaly Finder
What it does: Processes server logs entirely in your browser to redact sensitive PII (IP addresses, emails, tokens) and isolate critical errors โ safe to use with production log data.
How to use:
- Paste up to thousands of lines of raw server logs into the input area.
- Toggle Redact IPs/Emails to instantly replace them with [REDACTED] tags.
- Toggle Errors Only to strip away 200 OK traffic and surface only 4xx, 5xx, or FATAL stack traces.
- Copy the scrubbed output to safely share with vendors, support teams, or file in a ticket.
Before scrubbing:
192.168.1.45 - alice@example.com [24/Feb/2026] "POST /api/login" 401
10.0.0.12 - - [24/Feb/2026] "GET /dashboard" 200
203.0.113.7 - bob@example.com [24/Feb/2026] "DELETE /users/99" 500
After Redact IPs/Emails + Errors Only:
[REDACTED] - [REDACTED] [24/Feb/2026] "POST /api/login" 401
[REDACTED] - [REDACTED] [24/Feb/2026] "DELETE /users/99" 500
๐ก Tip: All processing happens client-side โ your logs never leave your browser. This makes it safe to use with production data containing real customer IPs and email addresses. Always scrub before attaching log files to external support tickets or public issue trackers.
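The redaction pass can be sketched with two regular expressions (simplified IPv4 and email patterns, not the tool's exact rules):

```python
import re

IP_RE    = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w.-]+\.\w+\b")

def scrub(line: str) -> str:
    # Redact IPs first, then emails; date stamps and paths are untouched.
    return EMAIL_RE.sub("[REDACTED]", IP_RE.sub("[REDACTED]", line))

line = '203.0.113.7 - bob@example.com [24/Feb/2026] "DELETE /users/99" 500'
print(scrub(line))  # [REDACTED] - [REDACTED] [24/Feb/2026] "DELETE /users/99" 500
```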
๐งฎ High-Powered Sysadmin Calculator
What it does: A multi-tab calculator designed specifically for infrastructure math, featuring Scratchpad integration on every tab โ click โ Scratchpad on any result to instantly export your calculation to the dashboard notepad for documentation or sharing.
Tab 1 โ Scientific Calculator
A robust math engine with a full grid of calculator buttons and real-time expression evaluation as you type.
- Type any expression directly into the input bar โ results update live. Supports standard operators (+, โ, ร, รท, ^), parentheses, ฯ, e, mod, and scientific notation (EE).
- Auto-close parentheses: Expressions like sqrt(9 evaluate correctly โ missing closing parentheses are added automatically before calculation.
- Floating-point correction: Results are normalised via toPrecision(12) to eliminate floating-point noise (e.g., 0.1 + 0.2 returns 0.3, not 0.30000000000000004).
- 2nd button toggle: Click 2nd to swap the function keys to their inverses โ sin โ asin, cos โ acos, tan โ atan, log โ 10^x, ln โ e^x, โ โ xยฒ. Click 2nd again to return to primary functions.
- ANS key: Inserts the result of the last completed calculation into the current expression.
Example expressions:
sqrt(144) โ 12
2^10 โ 1024
sin(ฯ/6) โ 0.5
log(1000) โ 3
(1024 ร 1024) รท 8 โ 131072 (bits to bytes)
1 โ 0.9999 โ 0.0001 (clean, no floating-point noise)
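The floating-point clean-up described above can be sketched in Python: round to 12 significant digits, then parse back, mirroring JavaScript's toPrecision(12):

```python
def normalise(x: float) -> float:
    # 12 significant digits is enough to absorb IEEE-754 rounding noise
    # while preserving every digit a user actually typed.
    return float(f"{x:.12g}")

print(0.1 + 0.2)              # 0.30000000000000004
print(normalise(0.1 + 0.2))   # 0.3
print(normalise(1 - 0.9999))  # 0.0001
```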
Tab 2 โ Programmer Base Converter
Instantly converts between Decimal, Hexadecimal, Binary, and Octal as you type in any field. All four representations stay in sync in real time.
- Type a value into any of the four fields (DEC, HEX, BIN, OCT) โ the other three update immediately.
- Uses BigInt under the hood to safely handle very large numbers beyond JavaScript's standard numeric precision (e.g., 64-bit integers from system calls or memory addresses).
- Each field shows contextual metadata below the input: bit count for binary, 0x-prefixed value for hex, 0-prefixed value for octal, and digit count for decimal.
- Quick-copy buttons next to each field let you copy that representation to your clipboard with one click โ the button briefly shows โ Copied to confirm.
- Invalid characters for a given base (e.g., typing 9 in the binary field) show a subtle inline error message rather than silently corrupting other fields.
Example โ converting decimal 255:
DEC: 255
HEX: FF (0xFF)
BIN: 11111111 (8 bits)
OCT: 377 (0377)
Example โ a 64-bit memory address:
HEX: 7FFEE4B2A000
DEC: 140732735332352
BIN: 11111111111111011100100101100101010000000000000 (47 bits)
๐ก Tip: Use the HEX field when working with MAC addresses, memory addresses, colour codes, or file magic bytes. Use the BIN field to visualise bitmask operations โ for example, understanding which permission bits are set in a chmod value.
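The four-way conversion maps directly onto arbitrary-precision integers (the Python analogue of the tool's BigInt approach):

```python
def representations(value: int) -> dict:
    return {
        "DEC":  str(value),
        "HEX":  format(value, "X"),
        "BIN":  format(value, "b"),
        "OCT":  format(value, "o"),
        "bits": value.bit_length(),
    }

print(representations(255))
# {'DEC': '255', 'HEX': 'FF', 'BIN': '11111111', 'OCT': '377', 'bits': 8}
print(representations(int("7FFEE4B2A000", 16))["bits"])  # 47
```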
Tab 3 โ Data Transfer Time
Estimates how long a file transfer will take based on file size, link speed, and protocol overhead.
- File Size: Enter a value and select the unit โ MB, GB, TB, or PB.
- Network Speed: Enter the link speed and select Mbps, Gbps, or MB/s.
- TCP/IP Overhead: Select 0% (theoretical maximum), 10% (typical LAN), or 20% (WAN / Internet) to account for protocol framing, retransmission, and header overhead.
- Results update instantly. Estimated Transfer Time is formatted as Xd Xh Xm Xs (e.g., 2d 4h 15m 30s) and the total seconds are shown below for precision. Effective Throughput shows the net MB/s or GB/s after overhead deduction.
Example โ 5 TB backup over a 1 Gbps LAN with 10% overhead:
File Size: 5 TB | Speed: 1 Gbps | Overhead: 10%
Effective Throughput: 112.500 MB/s
Estimated Time: 12h 20m 44s
Example โ 100 GB upload over a 100 Mbps WAN with 20% overhead:
File Size: 100 GB | Speed: 100 Mbps | Overhead: 20%
Effective Throughput: 10.000 MB/s
Estimated Time: 2h 46m 40s
๐ก Tip: Always use 20% overhead for WAN estimates โ real-world internet transfers rarely achieve more than 80% of the advertised link speed due to TCP slow-start, congestion, and ISP throttling. For local 10GbE network transfers, 10% is usually accurate.
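The estimate reduces to one formula. A sketch using decimal units (1 Gbps = 10^9 bit/s, 1 GB = 10^9 bytes) with the overhead applied to the link speed:

```python
def transfer_seconds(size_bytes: float, speed_bps: float, overhead: float) -> float:
    effective_bps = speed_bps * (1 - overhead)   # usable bits per second
    return size_bytes * 8 / effective_bps        # bytes -> bits, then divide

secs = transfer_seconds(100e9, 100e6, 0.20)      # 100 GB over 100 Mbps, 20% overhead
h, rem = divmod(int(secs), 3600)
m, s = divmod(rem, 60)
print(f"{h}h {m}m {s}s")  # 2h 46m 40s
```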
Tab 4 โ RAID Capacity
Calculates usable storage capacity, efficiency percentage, and fault tolerance for all five common RAID levels simultaneously.
- Set Number of Drives (minimum 1, maximum 64) and Size Per Drive in GB or TB.
- Results for RAID 0, 1, 5, 6, and 10 are all shown at once in a card grid โ no need to switch between modes.
- Each card shows: usable capacity (formatted in GB, TB, or PB as appropriate), storage efficiency percentage, and fault tolerance (number of drives that can fail before data loss).
- If the current drive count is below the minimum required for a RAID level (e.g., fewer than 4 drives for RAID 6), the card displays a "Requires min X drives" warning instead of a capacity figure.
- The card with the highest usable capacity is highlighted with a ✅ marker as a quick guide to the most space-efficient option for your drive count.
Example โ 8 drives ร 4 TB each (32 TB raw):
RAID 0: 32.00 TB ✅ โ 100% efficiency, 0-drive fault tolerance
RAID 1: 16.00 TB โ 50% efficiency, 1-drive fault tolerance
RAID 5: 28.00 TB โ 87% efficiency, 1-drive fault tolerance
RAID 6: 24.00 TB โ 75% efficiency, 2-drive fault tolerance
RAID 10: 16.00 TB โ 50% efficiency, 1-drive fault tolerance
Example โ 3 drives (RAID 6 unavailable):
RAID 6: Requires min 4 drives
RAID 10: Requires min 4 drives
๐ก Tip: For production storage where data integrity matters, RAID 5 is the minimum recommended level โ it gives you one drive failure tolerance. RAID 6 is preferred for large arrays (6+ drives) because the statistical probability of a second drive failing during the rebuild of a RAID 5 array increases significantly as drive capacities grow. Never use RAID 0 alone for anything you cannot afford to lose.
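The usable-capacity figures follow the standard formulas for n equal-size drives, sketched here:

```python
def raid_usable(level: str, n: int, size: float) -> float:
    """Usable capacity for n drives of `size` each (same units in, same out)."""
    formulas = {
        "0":  n * size,          # striping, no redundancy
        "1":  n * size / 2,      # mirrored halves
        "5":  (n - 1) * size,    # one drive's worth of parity
        "6":  (n - 2) * size,    # two drives' worth of parity
        "10": n * size / 2,      # striped mirrors
    }
    return formulas[level]

for lvl in ("0", "1", "5", "6", "10"):
    print(f"RAID {lvl}: {raid_usable(lvl, 8, 4.0):.2f} TB")
```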
Tab 5 โ SLA & Uptime
Translates a target SLA percentage into the exact maximum allowed downtime per day, week, month, and year.
- Type any SLA percentage into the input field (e.g., 99.95) or click one of the quick-select preset buttons: 99%, 99.5%, 99.9%, 99.95%, 99.99%, 99.999%, 99.9999%.
- A live progress bar fills to visually represent the uptime percentage.
- A summary line shows how many nines of availability the SLA represents (e.g., 99.9% = 3 nines).
- The four downtime cards (Per Day, Per Week, Per Month, Per Year) display the maximum allowed downtime formatted intelligently โ seconds for tight SLAs, minutes for moderate ones, and hours/days for lenient ones.
SLA comparison table:
99% โ Year: 3d 15h 39m | Month: 7h 18m | Day: 14m 24s
99.9% โ Year: 8h 45m 57s | Month: 43m 50s | Day: 1m 26s
99.99% โ Year: 52m 35s | Month: 4m 23s | Day: 8.64s
99.999% โ Year: 5m 15s | Month: 26.30s | Day: 0.86s
๐ก Tip: When defining SLAs with customers or management, work backwards from the downtime budget. If your deployment pipeline takes 15 minutes and you deploy twice a week, you are already consuming ~26 hours of downtime per year โ which means a 99.7% SLA is the realistic maximum before you even account for unplanned outages. Use this calculator to set achievable targets.
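The downtime budget is one multiplication. A sketch using a 365.25-day year (the average the table above appears to use):

```python
def downtime_seconds(sla_pct: float, period_hours: float) -> float:
    """Maximum allowed downtime in seconds for the given SLA over a period."""
    return (1 - sla_pct / 100) * period_hours * 3600

print(round(downtime_seconds(99.9, 24), 1))           # per day: 86.4 s (~1m 26s)
print(int(downtime_seconds(99.99, 365.25 * 24)))      # per year: 3155 s (~52m 35s)
```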
๐บ๏ธ Visual VPC Planner
What it does: An interactive visual designer for planning cloud VPC (Virtual Private Cloud) network topologies. Draw subnets, assign CIDR blocks, and let the tool automatically detect overlapping address spaces before you deploy โ preventing the painful IP collision errors that are impossible to fix without re-provisioning.
How to use:
- Enter your top-level VPC CIDR block (e.g., 10.0.0.0/16) in the VPC field to set the overall address space boundary.
- Click + Add Subnet to create a new subnet block. Give it a name (e.g., public-us-east-1a), a CIDR (e.g., 10.0.1.0/24), and a type (Public, Private, or Database).
- The visual canvas updates immediately, rendering each subnet as a labelled block colour-coded by type: blue = public, green = private, amber = database.
- If two subnets overlap, a โ ๏ธ COLLISION DETECTED banner appears on both offending subnets identifying the conflicting ranges โ no deployment can proceed until resolved.
- Hover any subnet block to see its full CIDR details: network address, broadcast, usable host range, and total host count.
- Drag subnet blocks to reorganise the canvas layout โ this is purely visual and does not change the CIDR assignments.
- Click ๐ Copy as Terraform to export your subnet layout as a ready-to-use aws_subnet resource block for each defined subnet.
- Click ๐พ Export JSON to save your entire VPC plan and reload it later.
Example subnet layout โ multi-AZ 3-tier VPC:
VPC: 10.0.0.0/16 (65,534 usable hosts)
public-1a 10.0.1.0/24 (254 hosts) โ Public
public-1b 10.0.2.0/24 (254 hosts) โ Public
private-1a 10.0.11.0/24 (254 hosts) โ Private
private-1b 10.0.12.0/24 (254 hosts) โ Private
db-1a 10.0.21.0/24 (254 hosts) โ Database
db-1b 10.0.22.0/24 (254 hosts) โ Database
✅ No collisions detected โ all subnets fit within 10.0.0.0/16
Terraform export example (one subnet):
resource "aws_subnet" "public_1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
tags = { Name = "public-1a", Type = "public" }
}
๐ก Tip: Always leave at least one contiguous /24 block unused between your public, private, and database tiers. This gives you room to add subnets in the future without re-numbering. For example, use 10.0.1.xโ2.x for public, skip 10.0.3.xโ10.x, then start private at 10.0.11.x.
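The collision check itself can be sketched with the stdlib ipaddress module, which knows how to test CIDR overlap:

```python
import ipaddress

def find_collisions(cidrs):
    nets = [ipaddress.ip_network(c) for c in cidrs]
    # Compare every pair once; overlaps() handles nested and partial overlap.
    return [(str(a), str(b)) for i, a in enumerate(nets)
            for b in nets[i + 1:] if a.overlaps(b)]

print(find_collisions(["10.0.1.0/24", "10.0.2.0/24", "10.0.11.0/24"]))  # []
print(find_collisions(["10.0.0.0/23", "10.0.1.0/24"]))  # the /23 contains the /24
```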
๐ SSH Tunnel Builder
What it does: A visual form-based builder for SSH port forwarding commands. Supports all three tunnelling modes โ Local, Remote, and Dynamic (SOCKS5) โ and auto-generates both the ready-to-run ssh command and the corresponding ~/.ssh/config block so you can make the tunnel permanent without remembering obscure flags.
Tunnel Modes:
- Local Forwarding (-L): Binds a port on your local machine and forwards traffic through the SSH server to a remote host:port. Use this to access a database or internal web app behind a firewall from your workstation.
- Remote Forwarding (-R): Binds a port on the SSH server and forwards traffic back to your local machine. Use this to expose a local dev server to a remote system (reverse tunnel).
- Dynamic Forwarding (-D): Opens a SOCKS5 proxy on a local port. Route any application through it to make traffic appear to originate from the SSH server. Useful for browsing internal services as if you were on-network.
How to use:
- Select your tunnel mode (Local, Remote, or Dynamic) using the tab buttons at the top.
- Fill in the SSH Host (jump server address), SSH User, and SSH Port (default 22).
- For Local/Remote: enter the Local Port, Remote Host, and Remote Port.
- For Dynamic: enter only the Local SOCKS Port (default 1080).
- Toggle optional flags: -N (no remote command โ keeps the tunnel open without running a shell), -f (background the process), -C (compression), -v (verbose for debugging).
- The generated ssh command and ~/.ssh/config block update live as you type.
- Click ๐ Copy Command to copy the one-liner, or ๐ Copy Config to copy the config block.
- Click โ Scratchpad to save both the command and config block as a labelled note.
Local forward โ access a private RDS database on port 5432:
ssh -N -L 5432:db.internal.corp:5432 ec2-user@bastion.example.com
Equivalent ~/.ssh/config block:
Host rds-tunnel
HostName bastion.example.com
User ec2-user
LocalForward 5432 db.internal.corp:5432
ServerAliveInterval 60
ExitOnForwardFailure yes
Dynamic SOCKS5 proxy โ browse internal services via bastion:
ssh -N -D 1080 ec2-user@bastion.example.com
Then configure your browser to use the SOCKS5 proxy at 127.0.0.1:1080.
๐ก Tip: Add ServerAliveInterval 60 and ServerAliveCountMax 3 to any tunnel config block to prevent idle connections from being dropped by firewalls. Use -f -N together to start the tunnel in the background immediately โ combine with autossh in production for auto-reconnecting tunnels.
๐ก๏ธ Command Scanner
What it does: Audits shell commands and scripts for destructive, risky, or irreversible flags before you run them. Paste any command or multi-line shell script and the scanner highlights dangerous patterns with severity ratings โ so you can catch rm -rf / style mistakes before they happen.
How to use:
- Paste a single command or a multi-line shell script into the input area.
- Click โถ Scan โ the scanner analyses every line and flags any patterns it recognises.
- Review the results panel: each finding shows the line number, the matched pattern, a severity badge (CRITICAL / WARNING / INFO), and a plain-English explanation of why it is dangerous.
- Hover any highlighted token in the input to see a pop-up tooltip with the risk description.
- Click ๐ Copy Report to copy a structured audit summary, or โ Scratchpad to save it.
Severity Levels:
- CRITICAL โ Irreversible data destruction risk (e.g., rm -rf, mkfs, dd if=... of=/dev/sda, :(){ :|:& }; fork bombs, chmod -R 777 /).
- WARNING โ Potentially dangerous in the wrong context (e.g., curl | bash, sudo without a specific command, writing to /etc/, disabling firewalls with ufw disable).
- INFO โ Noteworthy but context-dependent (e.g., redirection with > that overwrites files, background processes with &, nohup).
Input script:
#!/bin/bash
TARGET=/tmp/old_data
rm -rf $TARGET
curl https://example.com/setup.sh | bash
chmod -R 777 /var/www
Scanner output:
Line 3 โ CRITICAL: rm -rf with variable path โ risk of recursive deletion if $TARGET is empty or unset.
Line 4 โ WARNING: curl | bash executes remote code without inspection. Download and review first.
Line 5 โ WARNING: chmod -R 777 makes all files world-writable โ significant security exposure.
๐ก Tip: Always quote your variables ("$TARGET" not $TARGET) to prevent word splitting. An unquoted, empty variable in rm -rf $TARGET becomes rm -rf โ which deletes the current directory. The scanner flags this pattern specifically.
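The pattern-matching pass can be sketched with a small rule table (a few illustrative rules, not the scanner's full rule set):

```python
import re

RULES = [
    (r"rm\s+-rf\s+\$\w+",   "CRITICAL", "rm -rf with unquoted variable path"),
    (r"curl[^|]*\|\s*bash", "WARNING",  "curl | bash executes remote code without inspection"),
    (r"chmod\s+-R\s+777",   "WARNING",  "chmod -R 777 makes files world-writable"),
]

def scan(script: str):
    findings = []
    for lineno, line in enumerate(script.splitlines(), start=1):
        for pattern, severity, why in RULES:
            if re.search(pattern, line):
                findings.append((lineno, severity, why))
    return findings

script = "rm -rf $TARGET\ncurl https://example.com/setup.sh | bash"
for finding in scan(script):
    print(finding)
```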
๐ Config Generator
What it does: Generates starter configuration file templates for the most common server and development tools โ NGINX, Apache, SSH daemon, Docker Compose, and .gitignore โ so you don't have to start from a blank file or hunt for documentation.
How to use:
- Click the Config Generator card on the main dashboard.
- Select a template type from the tab bar: NGINX, Apache, SSH, Docker Compose, or .gitignore.
- Fill in any context fields shown (e.g., server name, domain, port number) โ the template updates live with your values substituted.
- Click ๐ Copy to copy the completed config, or โฌ Download to save it as the correct filename (nginx.conf, docker-compose.yml, etc.).
- Click โ Scratchpad to append the config as a labelled note.
NGINX reverse proxy template output (domain: app.example.com, upstream: localhost:3000):
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
๐ก Tip: The SSH config template includes hardened defaults โ PermitRootLogin no, PasswordAuthentication no, and AllowUsers with a placeholder. Always review and adjust AllowUsers to list only the specific accounts that need remote access before deploying.
๐ฅ๏ธ SSH Key Generator
What it does: Generates Ed25519 and RSA SSH keypairs entirely in your browser using the Web Crypto API โ no private key material is ever sent to any server. Produces the public key in OpenSSH format ready to paste into ~/.ssh/authorized_keys.
How to use:
- Open the SSH Key Generator from the dashboard.
- Select the key algorithm: Ed25519 (recommended โ smaller, faster, more secure) or RSA (choose 2048, 3072, or 4096 bits).
- Optionally enter a comment (e.g., your email or hostname) to label the key โ this appears at the end of the public key string.
- Click โก Generate Keypair.
- The public key appears in the top output box โ click ๐ Copy Public Key to copy it.
- The private key appears in the lower output box in PEM format โ click โฌ Download Private Key to save it as id_ed25519 or id_rsa.
- After downloading, set the correct file permission immediately: chmod 600 ~/.ssh/id_ed25519.
Generated Ed25519 public key example:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBv3... user@workstation
Paste into remote server:
echo "ssh-ed25519 AAAAC3Nz..." >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
๐ก Tip: Prefer Ed25519 over RSA for all new keys โ it uses smaller keys (256-bit vs 2048โ4096-bit) with stronger security guarantees and faster authentication. RSA 2048 is still acceptable for legacy systems that do not support Ed25519.
๐ Password Generator
What it does: Generates cryptographically secure random passwords using window.crypto.getRandomValues() โ the same CSPRNG used by security-critical applications. All generation happens in your browser; no password is ever transmitted.
How to use:
- Open the Password Generator from the dashboard.
- Set the Length slider (range: 8โ128 characters).
- Toggle the character sets to include: Uppercase (AโZ), Lowercase (aโz), Numbers (0โ9), Symbols (!@#$%^&* etc.), and Exclude Ambiguous (removes 0, O, l, 1 to prevent copy errors).
- A live Strength meter (Weak / Fair / Strong / Very Strong) updates as you adjust settings.
- Click โก Generate to produce a new password, or click the refresh icon to regenerate without changing settings.
- Click ๐ Copy to copy to clipboard. The button briefly shows "Copied!" to confirm.
- Click Generate Multiple to produce a batch of 5, 10, or 20 passwords at once โ useful for provisioning multiple accounts.
Example outputs by profile:
Length 16, all sets: qK#7mP!vXw2@Ln9$ โ Very Strong
Length 24, no symbols: f8tKmP3vXw2Ln9dQrJ7cBe5A โ Very Strong
Length 12, no ambiguous: xKm#PvXw2@Ln โ Strong
๐ก Tip: For service account passwords, use 32+ characters with all character sets enabled. For passwords you'll need to type manually (e.g., BIOS, KVM console), use 16+ characters with no symbols and exclude ambiguous characters to avoid transcription errors.
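The generation approach can be sketched in Python, where `secrets` plays the role of window.crypto.getRandomValues() (option names here are illustrative):

```python
import secrets
import string

def generate(length=16, symbols=True, exclude_ambiguous=False):
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += "!@#$%^&*"
    if exclude_ambiguous:
        # Drop the characters most often misread: zero, capital O, lowercase l, one.
        alphabet = alphabet.translate(str.maketrans("", "", "0Ol1"))
    # secrets.choice draws from the OS CSPRNG, unlike random.choice.
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate(24, symbols=False)
print(pw, len(pw))
```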
โ Port Reference
What it does: A searchable, filterable database of well-known TCP/UDP port numbers and their associated services โ covering IANA registered ports (0โ1023), registered ports (1024โ49151), and common ephemeral/dynamic ports used by popular software.
How to use:
- Open Port Reference from the dashboard.
- Type a port number (e.g., 443) or a service name (e.g., postgres, redis, http) in the search bar โ results filter instantly.
- Use the Protocol chip buttons to filter by TCP, UDP, or Both.
- Use the Category filter to narrow by type: Web, Database, Mail, DNS, Security, Monitoring, etc.
- Each result shows: port number, protocol(s), service name, common software, and a brief description.
- Click any row to copy the port number to your clipboard.
Common ports quick reference:
22 TCP โ SSH | 25 TCP โ SMTP | 53 TCP/UDP โ DNS
80 TCP โ HTTP | 443 TCP โ HTTPS | 3306 TCP โ MySQL
5432 TCP โ PostgreSQL | 6379 TCP โ Redis | 27017 TCP โ MongoDB
2375 TCP โ Docker API (unencrypted) | 2376 TCP โ Docker API (TLS)
6443 TCP โ Kubernetes API Server | 10250 TCP โ Kubelet API
๐ก Tip: Search for a service by name if you can't remember the port number โ for example, typing "elastic" returns ports 9200 (HTTP API) and 9300 (cluster transport). Searching for a port number shows all known services that use it, including known malware and trojan ports flagged with a โ ๏ธ badge.
๐ก๏ธ IP Reputation & MAC Lookup
What it does: Two tools in one โ check whether an IP address appears in public threat intelligence databases (spam, malware, botnet, Tor exit nodes), and look up the hardware vendor behind any MAC address using the IEEE OUI registry.
IP Reputation Check
- Enter an IPv4 or IPv6 address in the IP Reputation tab.
- Click โถ Check Reputation โ the tool queries public threat databases including AbuseIPDB, Spamhaus, Emerging Threats, and known Tor exit node lists.
- Results show a Risk Score (0โ100), the threat categories it appears in (spam, malware, scanning, brute-force, etc.), the originating ASN and country, and first/last reported dates.
- A colour-coded verdict badge shows: CLEAN, SUSPICIOUS, or MALICIOUS.
- Click โ Scratchpad to append a formatted reputation report for use in incident documentation.
MAC Address / OUI Lookup
- Switch to the MAC Lookup tab.
- Enter a MAC address in any standard format: 00:1A:2B:3C:4D:5E, 00-1A-2B-3C-4D-5E, or 001A2B3C4D5E.
- Click โถ Lookup โ the tool checks the first 3 octets (OUI) against the IEEE MA-L registry to identify the hardware manufacturer.
- Results show the vendor name, country of registration, and whether the OUI is marked as Private (used for randomised/spoofed MACs).
IP reputation result example:
IP: 185.220.101.45
Risk Score: 87 / 100 โ MALICIOUS
Categories: Tor Exit Node, Port Scanner, Brute Force SSH
ASN: AS4242423914 | Country: DE | Last reported: 2 hours ago
MAC OUI lookup example:
MAC: 00:1A:2B:3C:4D:5E
Vendor: Cisco Systems, Inc. | Registered: US
๐ก Tip: Use IP reputation checks during incident response to quickly triage suspicious entries in your auth logs โ paste the source IP to confirm whether it belongs to a known threat actor before escalating. MAC OUI lookup is useful when investigating unknown devices on your network; a MAC with no registered vendor often indicates MAC address spoofing.
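Accepting all three MAC formats comes down to normalising the input before extracting the OUI. A sketch (the vendor lookup itself needs the IEEE registry data, stubbed here with a single illustrative entry, not a real registration):

```python
import re

OUI_DB = {"001A2B": "Example Vendor Inc."}   # illustrative stub, not real registry data

def oui(mac: str) -> str:
    """Strip separators and return the first 3 octets as 6 uppercase hex digits."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", mac).upper()
    if len(digits) != 12:
        raise ValueError("MAC must contain 12 hex digits")
    return digits[:6]

for form in ("00:1A:2B:3C:4D:5E", "00-1A-2B-3C-4D-5E", "001A2B3C4D5E"):
    print(oui(form), OUI_DB.get(oui(form), "Unknown"))
```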
๐ Timestamp Converter
What it does: Converts between UNIX epoch timestamps and human-readable date/time strings, with support for multiple timezones and all common timestamp formats โ milliseconds, seconds, ISO 8601, RFC 2822, and more.
How to use:
- Open Timestamp Converter from the dashboard.
- Epoch โ Human: Paste a UNIX timestamp (seconds or milliseconds) into the epoch field and click Convert โ. The tool auto-detects seconds vs. milliseconds based on the magnitude.
- Human โ Epoch: Enter a date/time string in the date field (supports natural language like 2026-03-01 14:30, March 1 2026 2:30pm, or ISO format). Click โ Convert.
- Select a Timezone from the dropdown to see the conversion result in any timezone simultaneously.
- The result panel shows the timestamp in multiple formats at once: UNIX seconds, UNIX milliseconds, ISO 8601, UTC string, and local time.
- Click ๐ Now to instantly load the current epoch into the input.
- Click any output field to copy it.
Example conversion โ epoch 1772355600:
UNIX (s): 1772355600
UNIX (ms): 1772355600000
UTC: Sun, 01 Mar 2026 09:00:00 GMT
ISO 8601: 2026-03-01T09:00:00.000Z
New York: Sun Mar 01 2026 04:00:00 EST
Tokyo: Sun Mar 01 2026 18:00:00 JST
๐ก Tip: When comparing timestamps from logs and API responses, always check whether the value is in seconds or milliseconds โ confusing the two off by 1000ร is a very common bug. A valid seconds-based epoch for 2026 is a 10-digit number (~1.74 billion). A 13-digit number (~1.74 trillion) is milliseconds.
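The seconds-vs-milliseconds auto-detection can be sketched with a simple magnitude threshold (13-digit values are treated as milliseconds):

```python
from datetime import datetime, timezone

def parse_epoch(value: int) -> datetime:
    if value >= 10**12:            # 13+ digits: almost certainly milliseconds
        value /= 1000
    return datetime.fromtimestamp(value, tz=timezone.utc)

print(parse_epoch(1735689600))     # 2025-01-01 00:00:00+00:00
print(parse_epoch(1735689600000))  # same instant, given in milliseconds
```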
๐ Diff Checker
What it does: Side-by-side and inline text comparison with line-level and character-level diff highlighting. Useful for comparing config file versions, spotting changes in Terraform plans, reviewing script edits, or checking what changed between two API responses.
How to use:
- Open Diff Checker from the dashboard.
- Paste the original (before) text into the left panel and the modified (after) text into the right panel.
- The diff is computed and highlighted instantly as you type โ no button press required.
- Toggle between Side-by-Side view (two columns) and Unified view (single column with +/โ prefix lines) using the view mode buttons.
- Toggle Ignore Whitespace to suppress diff lines where the only change is leading/trailing spaces or blank lines โ useful for comparing reformatted configs.
- Toggle Show Line Numbers to add line number gutters to both panels for easy reference.
- The summary bar shows total lines added, lines removed, and lines changed.
- Click ๐ Copy Diff to copy a unified diff string (standard diff -u format) to your clipboard.
- Click โ Scratchpad to append the diff with a timestamp header.
Example โ comparing two NGINX config versions:
Left (original): worker_processes 1;
Right (modified): worker_processes auto;
Diff output (unified):
- worker_processes 1;
+ worker_processes auto;
Lines removed: 1 Lines added: 1 Lines changed: 1
๐ก Tip: Use Ignore Whitespace when comparing YAML files that have been re-indented โ otherwise indentation changes flood the diff and obscure actual content changes. For Terraform plan comparisons, the Unified view with line numbers is easiest to read when sharing with a team for review.
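The copied diff matches what the standard diff -u CLI produces, so the NGINX example above can be reproduced in any Linux or macOS shell (file paths are illustrative):

```shell
printf 'worker_processes 1;\n'    > /tmp/nginx_old.conf
printf 'worker_processes auto;\n' > /tmp/nginx_new.conf

# diff exits non-zero when the files differ, so guard it with || true in scripts
diff -u /tmp/nginx_old.conf /tmp/nginx_new.conf || true
```

The unified output carries the same -/+ lines shown above, plus ---/+++ file headers and an @@ hunk marker.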
๐ DriftScope Config Intelligence
What it does: Performs semantic, structural diffing of JSON and YAML infrastructure configs โ Kubernetes Deployments, Terraform state files, AWS CloudFormation templates, and any other nested config โ entirely in the browser. It compares a Baseline (your source of truth, from Git or IaC) against a Target (the live, running state) and surfaces every meaningful difference without false positives from volatile fields.
How to use:
- Open DriftScope from the dashboard.
- Paste your Baseline configuration into the left panel (JSON or YAML โ auto-detected). This is what the config should look like.
- Paste the live Target configuration into the right panel. This is what is actually running (e.g., from kubectl get deployment -o yaml).
- Click โถ Analyze Drift (or press Ctrl+Enter / Cmd+Enter).
- Review the three result columns:
- ๐ด Missing Resources โ keys/values present in Baseline but deleted or absent in Target.
- โ ๏ธ Attribute Mismatches โ keys present in both, but values differ (shows Expected vs. Actual).
- ๐ข Unmanaged Additions โ keys present in Target but not in Baseline (rogue manual changes).
- The Drift Score gauge shows overall config integrity (100% = identical). Scores below 90% indicate actionable drift.
- Click โ Scratchpad to append a formatted plain-text diff report for use in incident tickets or change reviews.
- Use โก Load K8s Drift Example to instantly populate a realistic Kubernetes drift scenario for exploration.
Example โ Kubernetes Deployment drift detected:
Mismatch: spec.replicas โ Expected: 3 | Actual: 1 (scale-down not in IaC)
Mismatch: spec.template.spec.containers[name=api-gateway].image โ Expected: :2.4.1 | Actual: :2.5.0-hotfix
Mismatch: resources.limits.cpu โ Expected: "500m" | Actual: "2000m"
Addition: metadata.labels.manual-debug โ "true" (not in Baseline โ rogue label)
Addition: env[name=DEBUG_MODE].value โ "true" (debug mode left on in production)
๐ก Tip: DriftScope automatically ignores volatile Kubernetes fields (uid, resourceVersion, creationTimestamp, generation, managedFields) that change on every apply and carry no semantic meaning. This eliminates the noise that plagues text-based diff tools. For Terraform, use terraform show -json output as the Baseline and the live resource configuration (exported from your cloud provider's CLI or API) as the Target to surface attribute drift without running a full terraform plan.
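Outside the browser, the same strip-volatile-fields-then-diff idea can be roughly approximated in shell, assuming jq is installed (file names and JSON payloads are illustrative):

```shell
cat > /tmp/baseline.json <<'EOF'
{"metadata":{"name":"api","uid":"aaa-111","resourceVersion":"5"},"spec":{"replicas":3}}
EOF
cat > /tmp/live.json <<'EOF'
{"metadata":{"name":"api","uid":"bbb-222","resourceVersion":"9"},"spec":{"replicas":1}}
EOF

# Sort keys (-S) and delete volatile metadata, so only semantic drift survives the diff
norm() { jq -S 'del(.metadata.uid, .metadata.resourceVersion)' "$1"; }

norm /tmp/baseline.json > /tmp/baseline.norm
norm /tmp/live.json     > /tmp/live.norm
diff -u /tmp/baseline.norm /tmp/live.norm || true   # only spec.replicas shows up
```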
๐ต๏ธ SELinux & AppArmor Audit Translator
What it does: Translates cryptic type=AVC SELinux denial messages and AppArmor DENIED log entries into plain English. It identifies the exact process, target resource, and denied permission, then generates three surgical CLI remediation paths โ so you can fix the root cause instead of reaching for setenforce 0.
How to use:
- Open SELinux & AppArmor Translator from the dashboard.
- Paste the raw audit log line from /var/log/audit/audit.log, dmesg, or journalctl -k into the input area.
- Click โถ Translate Log. Use the example buttons to load a pre-built SELinux or AppArmor scenario.
- Read the Plain English Verdict โ it explains who (process + SELinux type / AppArmor profile) tried to do what (read, write, execute, connect) to which resource, and why SELinux/AppArmor blocked it.
- Review the Forensic Breakdown cards showing the Subject (process context) and Object (target resource and label).
- Choose from the 3-Path Resolution panel:
- Path 1 โ Context Relabeling: Use chcon for a quick test, then semanage fcontext + restorecon for a permanent fix. Best when the file label is wrong.
- Path 2 โ Boolean Toggles: Use setsebool -P to enable a pre-built policy switch (e.g., httpd_can_network_connect). Best for common service patterns.
- Path 3 โ Custom Policy Module: Generates a .te policy file and the full checkmodule โ semodule_package โ semodule -i pipeline to whitelist exactly this operation and nothing else.
- For AppArmor denials, Path 1 generates the exact profile rule line to add and the apparmor_parser -r reload command.
Example SELinux input:
type=AVC msg=audit(1699042800.123:4521): avc: denied { read open } for pid=1234 comm="nginx" name="secret.conf" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:shadow_t:s0 tclass=file permissive=0
Plain English output: "nginx (PID 1234, running as httpd_t โ Web Server) attempted to read and open the file secret.conf, which is labeled shadow_t (/etc/shadow password hash) โ DENIED."
Fix (Path 1): semanage fcontext -a -t httpd_sys_content_t "/etc/secret.conf"
restorecon -Rv /etc/secret.conf
๐ก Tip: Always start with Path 1 (context relabeling) โ it's the most precise fix. Only use Path 3 (custom policy module) when the file's label is genuinely correct and the service legitimately needs the access. Avoid generating broad policy modules from audit2allow -a without reviewing the output; it can silently whitelist dangerous access patterns alongside the one you intend to fix.
๐ท Linux Permission & Path Traversal Simulator
What it does: Diagnoses 403 Forbidden and Permission Denied errors by visually simulating how the Linux kernel evaluates file permissions as it walks down a directory tree. It pinpoints exactly which directory in the path is blocking access and explains the Unix permission model that causes it.
How to use:
- Open Linux Permission Simulator from the dashboard.
- Enter the Process User (e.g., www-data) and Process Group (e.g., www-data) โ these represent the process trying to access the file.
- Enter the Target Path (e.g., /var/www/html/index.html). The tool splits this into each component directory.
- For each path segment, set the Owner, Group, and Permissions (owner/group/other rwx bits) to match your actual filesystem.
- Select the Access Type you are testing: Read a file, Write a file, or Execute/traverse a directory.
- Click โถ Simulate Access. The tool walks the path from / to the target, checking the execute bit on each parent directory first.
- The result shows a step-by-step trace: each directory either passes (โ) or halts (โ) the traversal, with the exact permission bits and the rule that caused the block.
Example โ www-data cannot read /var/www/html/index.html:
/ โ owner: root, perms: 755 โ โ Execute allowed (other: r-x)
/var โ owner: root, perms: 755 โ โ Execute allowed
/var/www โ owner: root, group: root, perms: 750 โ โ BLOCKED
Reason: www-data is not owner (root), is not in group (root), and other has no execute bit.
Fix: chmod o+x /var/www or chown -R root:www-data /var/www && chmod 750 /var/www
๐ก Tip: The most common cause of 403 errors is a missing execute (+x) bit on a parent directory โ not on the file itself. A web server process needs +x on every directory in the path just to traverse it, even if the file itself is world-readable. Run namei -l /path/to/file on the server to see a real-world permission trace identical to what this simulator shows.
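The namei -l trace mentioned in the tip can be tried safely against a throwaway directory tree (paths under /tmp are illustrative):

```shell
mkdir -p /tmp/permdemo/www/html
chmod 750 /tmp/permdemo/www          # group may enter; 'other' has no execute bit

# namei -l prints one row per path component with its mode, owner, and group --
# the same per-directory trace the simulator visualizes
namei -l /tmp/permdemo/www/html
```

Any user outside the directory's group is stopped at the drwxr-x--- row, exactly like the blocked /var/www step in the simulated example.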
๐ NGINX Route & Proxy Pass Simulator
What it does: Mathematically resolves which NGINX location block wins a given request URI, following the exact NGINX priority algorithm. It also simulates the proxy_pass trailing slash URI mutation rule โ showing exactly what URL reaches your upstream backend.
NGINX Location Priority (implemented strictly):
- Priority 1 โ = (Exact Match): If the full URI matches exactly, this block wins immediately. NGINX halts all further evaluation.
- Priority 2 โ ^~ (Preferential Prefix): Among all prefix matches, if the longest one uses ^~, that block wins and NGINX skips regex evaluation entirely.
- Priority 3 โ ~ / ~* (Regex): Evaluated in declaration order. The first matching regex wins.
- Priority 4 โ (none) (Standard Prefix): The longest matching prefix wins, but only if no regex matched first.
How to use:
- Open NGINX Route Simulator from the dashboard.
- Enter the Incoming Request URI (e.g., /api/v1/users?id=5).
- Add your location blocks using the + Add Location Block button. For each block, set the modifier (=, ^~, ~, ~*, or none), the match pattern, and the proxy_pass destination.
- Drag blocks to reorder them โ order matters for regex evaluation.
- Click โถ Simulate Routing.
- The result shows: which block won and why (step-by-step priority reasoning), the exact upstream URL, and a mutation diagram showing how the URI was transformed by proxy_pass.
- Use โก Load Tricky Scenario to load a pre-built example with competing exact, preferential, regex, and prefix blocks.
proxy_pass URI mutation rule โ the classic trailing slash gotcha:
location /api/ { proxy_pass http://backend; }
Request: /api/users โ Upstream: http://backend/api/users (no URI in proxy_pass = pass unchanged)
location /api/ { proxy_pass http://backend/; }
Request: /api/users โ Strips /api/ prefix โ Upstream: http://backend/users
location /api/ { proxy_pass http://backend/service/; }
Request: /api/users โ Strips /api/ โ Appends to /service/ โ Upstream: http://backend/service/users
๐ก Tip: Regex location blocks (~ / ~*) cannot use URI substitution in proxy_pass โ the full original request URI is always forwarded regardless of whether you add a trailing slash. Also remember that a single trailing slash difference in proxy_pass completely changes the upstream URL โ this is the #1 source of mysterious 404s when first configuring a reverse proxy.
๐งฑ Firewall Rule Builder
What it does: Visually generates safe, exact CLI firewall commands for iptables, UFW, and firewalld simultaneously from a single form โ eliminating the need to remember three different syntax dialects. Includes an SSH Safeguard to prevent accidental server lockouts.
How to use:
- Open Firewall Rule Builder from the dashboard.
- Select the Action: Allow, Deny/Drop, or Reject.
- Select the Direction: Inbound (INPUT), Outbound (OUTPUT), or Forwarded (FORWARD).
- Select the Protocol: TCP, UDP, ICMP, or Any.
- Enter the Port or port range (e.g., 443, 8080:8090). Leave empty for protocol-wide rules.
- Optionally specify a Source IP/CIDR and/or Destination IP/CIDR to scope the rule.
- Toggle SSH Safeguard to automatically inject a port 22 ALLOW rule ahead of any broad deny rule โ critical when modifying INPUT chain rules remotely.
- The output panel shows the equivalent command for all three firewall tools simultaneously. Copy the one appropriate for your distribution.
- Click โ Scratchpad to save the full rule set for a runbook or change request.
Example โ Allow HTTPS from a specific CIDR:
Action: Allow | Protocol: TCP | Port: 443 | Source: 10.0.0.0/8
iptables: iptables -A INPUT -p tcp --dport 443 -s 10.0.0.0/8 -j ACCEPT
UFW: ufw allow from 10.0.0.0/8 to any port 443 proto tcp
firewalld: firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/8" port protocol="tcp" port="443" accept'
firewall-cmd --reload
๐ก Tip: Always enable the SSH Safeguard before generating any rule that modifies inbound traffic broadly. A single misplaced DROP ALL rule without a preceding SSH allow will lock you out of the server. The safeguard adds the allow rule at the correct position in the chain. For iptables, always use -I (insert at position) rather than -A (append) for critical allow rules, since rules are evaluated top-to-bottom.
โ๏ธ Email DNS Generator (SPF / DKIM / DMARC)
What it does: Generates perfectly formatted DNS TXT records for SPF, DKIM, and DMARC โ the three email authentication standards that prevent spoofing, ensure deliverability, and protect your domain's reputation. All three records are required for modern email compliance (Google, Microsoft 365 bulk mail requirements).
How to use:
- Open Email DNS Generator from the dashboard and select the tab for the record you need.
Tab 1 โ SPF (Sender Policy Framework)
- Enter your Domain name.
- Toggle which senders are authorized: MX servers, A record IP, or add third-party providers via include: tags (click the quick-add chips for Google Workspace, Microsoft 365, SendGrid, Mailgun, etc.).
- Add any specific IPv4/IPv6 addresses not covered by MX or A records.
- Select the Policy: -all (strict/reject), ~all (softfail, recommended for testing), or ?all (neutral).
- The output shows the exact TXT record value. A live counter warns when you approach the 10 DNS lookup limit.
Tab 2 โ DKIM (DomainKeys Identified Mail)
- Enter the Selector name provided by your mail service (e.g., google, mail, s1).
- Paste the Public Key block from your mail provider's DNS settings panel. Headers are stripped automatically.
- The output shows the full host (selector._domainkey.domain.com) and the TXT value.
Tab 3 โ DMARC
- Set the Policy (p=none to monitor, p=quarantine for spam folder, p=reject to block).
- Set the Percentage (start at 10% and increase gradually) and enter Aggregate Report (rua=) and Forensic Report (ruf=) email addresses.
- The host is always _dmarc.yourdomain.com.
Complete example โ Google Workspace + SendGrid:
SPF: Host: @ | Value: v=spf1 mx include:_spf.google.com include:sendgrid.net ~all
DKIM: Host: google._domainkey.example.com | Value: v=DKIM1; k=rsa; p=MIIBIjAN...
DMARC: Host: _dmarc.example.com | Value: v=DMARC1; p=quarantine; rua=mailto:[email protected]; pct=100;
๐ก Tip: Always start DMARC with p=none and a rua= reporting address. Monitor the daily XML aggregate reports for 2โ4 weeks to confirm all legitimate mail streams are passing SPF and DKIM before moving to p=quarantine or p=reject. Moving directly to reject without monitoring first is one of the most common causes of broken email delivery after a domain migration.
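For reference, the three records from the example above as they would appear in a BIND-style zone file (example.com and the truncated key value are placeholders from the example):

```
; SPF - authorizes MX hosts, Google Workspace, and SendGrid
example.com.                   IN TXT "v=spf1 mx include:_spf.google.com include:sendgrid.net ~all"

; DKIM - public key published under the provider-supplied selector
google._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjAN..."

; DMARC - quarantine policy with aggregate reporting
_dmarc.example.com.            IN TXT "v=DMARC1; p=quarantine; rua=mailto:[email protected]; pct=100;"
```

After publishing, you can verify each record with dig, e.g. dig +short TXT _dmarc.example.com.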
๐ OpenSSL CSR & Key Builder
What it does: Generates the exact OpenSSL CLI commands to create RSA/ECDSA private keys and Certificate Signing Requests (CSR), including complex Subject Alternative Names (SANs) injected safely via a temporary OpenSSL config file. No more guessing the -subj syntax or searching for how to add SANs on the command line.
How to use:
- Open OpenSSL CSR Builder from the dashboard.
- Enter your Common Name (primary domain, e.g., example.com) and Subject Alternative Names โ one per line. SANs can be domain names (www.example.com, *.example.com) or IP addresses (192.168.1.10).
- Fill in the Organization, Organizational Unit, City, State, and Country Code (2-letter ISO code, e.g., US).
- Select the Key Type (RSA or ECDSA) and Key Size (RSA: 2048, 3072, 4096; ECDSA: P-256, P-384, P-521).
- The output generates a complete multi-line bash script that: creates a temporary san.cnf config, generates the private key, creates the CSR with all SANs embedded, then removes the temp file.
- Copy and paste the entire script into your server terminal. The CSR file can then be submitted to your CA (Let's Encrypt, DigiCert, etc.).
Example โ RSA 4096 CSR with wildcard SANs:
CN: example.com | SANs: example.com, *.example.com, api.example.com
cat > /tmp/san.cnf << 'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
[v3_req]
subjectAltName = DNS:example.com,DNS:*.example.com,DNS:api.example.com
EOF
openssl genrsa -out example.com.key 4096
openssl req -new -key example.com.key -out example.com.csr \
-subj "/C=US/ST=California/L=San Francisco/O=Example Corp/CN=example.com" \
-config /tmp/san.cnf
rm /tmp/san.cnf
๐ก Tip: Always include the bare domain (example.com) as a SAN in addition to www.example.com โ many modern browsers no longer rely on the Common Name field and only check the SAN list. For new certificates, prefer ECDSA P-256 over RSA 2048 โ it provides equivalent security with a significantly smaller key size, faster TLS handshakes, and is supported by all modern clients and CAs.
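The ECDSA path from the tip is a two-command sketch (file names are illustrative; note that P-256 is called prime256v1 in OpenSSL's curve naming):

```shell
# Generate a P-256 (prime256v1) private key
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/example.com.ec.key

# Create the CSR from it; add -config san.cnf as in the RSA example to embed SANs
openssl req -new -key /tmp/example.com.ec.key -out /tmp/example.com.ec.csr \
  -subj "/C=US/O=Example Corp/CN=example.com"
```

Running openssl req -text -noout -in /tmp/example.com.ec.csr confirms the key type and subject before you submit the CSR to a CA.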
๐ jq Playground
What it does: Lets you test jq JSON query filters live in your browser โ paste a JSON payload, type a jq filter expression, and see the output instantly. Also generates the equivalent copy-paste jq shell command for use in your bash scripts and pipelines.
How to use:
- Open jq Playground from the dashboard.
- Paste your JSON payload into the input panel (left). This can be anything: API responses, Kubernetes manifests, AWS CLI output, log data.
- Enter your jq filter expression in the filter bar (e.g., .items[].metadata.name). The output updates live as you type.
- Toggle Raw Output (-r flag) to strip JSON string quotes from scalar output โ useful when piping to shell commands.
- Toggle Compact Output (-c flag) to suppress pretty-printing for smaller one-line output.
- Click ๐ Copy CLI Command to get the full cat file.json | jq '...' command ready for use in a terminal.
- Click โ Scratchpad to save both the filter and output for your runbook.
Common jq filter patterns:
.items[].metadata.name โ Extract all pod names from kubectl JSON output
.[] | select(.status == "running") | .name โ Filter array by field value
. | keys โ List all top-level keys of an object
.data | to_entries[] | "\(.key)=\(.value)" โ Convert a map to KEY=VALUE pairs
[.items[] | {name: .metadata.name, image: .spec.containers[0].image}] โ Reshape into a new array
.. | numbers โ Recursively find all numeric values
Equivalent CLI command generated:
cat response.json | jq -r '.items[].metadata.name'
๐ก Tip: Use jq 'keys' as your first filter when exploring an unknown JSON structure โ it lists all top-level keys so you can orient yourself before drilling down. For deeply nested data, use .. | .fieldname? // empty to recursively search for a field anywhere in the document without knowing its exact path. The -r (raw) flag is almost always what you want when feeding jq output to other shell commands like xargs, grep, or variable assignment.
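The generated CLI command drops straight into scripts; a self-contained run, assuming jq is installed (the sample payload is illustrative), looks like:

```shell
cat > /tmp/pods.json <<'EOF'
{"items":[{"metadata":{"name":"api-1"}},{"metadata":{"name":"api-2"}}]}
EOF

# -r strips the JSON quotes so the names can feed xargs, grep, or a for-loop
jq -r '.items[].metadata.name' /tmp/pods.json
# prints:
# api-1
# api-2
```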
๐ก General Tips
Dark Mode
Click the ๐/๐ button in the top-right corner to toggle between dark and light themes. Your preference is saved automatically and synced across all pages.
Keyboard Shortcuts
- โK / Ctrl+K: Focus the search box on the main dashboard
- โ โ Enter: Navigate search results
- Esc: Close search / close tool panel
Browser Compatibility
Works best in Chrome 80+, Firefox 75+, Safari 13+, Edge 80+. The Local Disk Sync feature (File System Access API) requires Chrome 86+, Edge 86+, or Opera 72+ โ it is not available in Firefox or Safari.
Privacy
- No tracking or analytics
- No data collection
- All tools run client-side (in your browser)
- Settings stored locally (localStorage)
- Pinned tool layouts stored locally (localStorage key: mysysad_favorites)
- Local Disk Sync file handles stored locally (IndexedDB) โ never transmitted
- No backend database exists โ there is nothing to breach
Data Persistence
Settings (dark mode, world clock, scratchpad, pinned tools) are saved to localStorage. They clear if you clear browser cache, use incognito mode, or switch browsers/devices. Use the World Clock export feature to preserve your timezone list. Use the ๐ค Export Pinned button to preserve your workstation layout as a mysysad-pins.json file. Use ๐ Link Local File to keep Scratchpad content on disk independently of browser storage.
โก Advanced Forensic Workstation Tools
Log Analyzer & Scrubber
A multi-gigabyte log analysis engine. Features visual histograms, frequency ranking for IPs/URLs, and automatic PII scrubbing (redacting IPs, emails, and tokens) before exporting sanitized logs.
Dangerous Command Scanner
Audits shell scripts and one-liners against 135+ forensic patterns. Detects obfuscated reverse shells, destructive cloud commands, and privilege escalation risks locally.
SSH Tunnel Builder
A visual architect for SSH configurations. Supports ProxyJump (Bastion) chains, Local/Remote port forwarding, and Dynamic SOCKS5 proxies with a live visual connection map.
Zero-Trust Policy Linter
Audits AWS IAM JSON and Kubernetes RBAC YAML. Identifies wildcard permissions, "PassRole" escalation risks, and secret-reading privileges without data leaving your browser.
OOM Autopsy
Parses Linux kernel dmesg logs to profile Out-Of-Memory events. Identifies the "victim" process and "offender" processes based on memory usage and OOM scores.
NGINX Simulator
Tests NGINX location block routing logic and proxy_pass trailing slash behavior. Visually determines which location block "wins" for a given URI.
Line Patcher Console
A forensic patching engine for HTML and Config files. Implements a resilient context-matching system to apply surgical updates to large codebases safely.
๐ผ๏ธ WebRSW Pro โ Image Operations Console
What it does: A browser-based forensic image editor for sysadmins and security professionals. Redact credentials from screenshots, annotate infrastructure diagrams, add audit stamps, and export clean images โ all without any data leaving your browser.
Core capabilities:
- Multi-layer canvas: Each annotation, drawing, or text element lives on its own compositable layer with blend mode support. Layers can be reordered (โฒ/โผ), renamed (double-click), merged down, or hidden with the ๐ visibility toggle.
- Drawing tools: Arrow tool (A key), Freehand pen, Rectangle, Ellipse, Line, and Text insertion โ each with configurable color, stroke width, and opacity.
- Eraser tool: Press E key or click โซ ERAS. Uses destination-out compositing for non-destructive erasing with full undo support.
- Selection system: Rectangular selection (V key), Select All (Ctrl+A), and selection move/transform โ float pixels from the canvas, drag to reposition, click outside to drop.
- Brightness/Contrast: Modal with dual sliders using proper contrast factor math for professional image adjustment.
- Sharpen filter: 3ร3 unsharp mask convolution kernel for sharpening blurry screenshots.
- Export options: Full canvas export as PNG, or export just the current selection as a separate PNG via context menu.
- Image info panel: Shows dimensions, uncompressed size, compression ratio, color depth, and file metadata.
- New blank canvas: Color picker with White/Dark/Transparent presets for starting from scratch.
Keyboard shortcuts:
- V โ Select tool E โ Eraser A โ Arrow tool
- Ctrl+A โ Select all Ctrl+Z โ Undo Ctrl+Shift+Z โ Redo
๐ก Tip: Use the eraser tool to scrub credentials from terminal screenshots before sharing them in incident reports. The layer system means you can always hide your annotations layer to see the original image underneath.
๐บ๏ธ NetBuilder Pro โ Visual Network Topology Architect
What it does: A full-featured visual network design tool. Create servers, routers, firewalls, switches, and cloud nodes on an infinite canvas, connect them with links, assign IP addresses and subnets, simulate packet routing, and export the result as PNG, SVG, or JSON.
Core capabilities:
- Node palette: Server, Router, Firewall, Switch, Cloud, Client โ drag from the palette or right-click canvas to place.
- Link inspector: Click any link to edit interface names, bandwidth, cost/OSPF weight, type (ethernet/fiber/vpn), and notes. Link costs are displayed on the canvas and used by the routing simulator.
- VLSM subnet calculator: Built-in tool that calculates variable-length subnet masks with host counts, automatically assigning IPs to nodes.
- Subnet zone overlays: Auto-colored translucent rectangles drawn behind node groups sharing the same subnet for instant visual grouping.
- Packet simulation: The SimEngine uses Dijkstra's algorithm (weighted by link cost) to find shortest paths. Animated packets travel the route on the canvas.
- Multi-select: Shift+click to toggle individual nodes, Shift+drag for box selection. Group drag and bulk delete.
- Undo/Redo: 30-step snapshot stack. Ctrl+Z / Ctrl+Y, plus toolbar buttons.
- Auto-layout: Force-directed algorithm (80 iterations, Coulomb repulsion + Hooke's law) to untangle messy topologies.
- Text annotations: Press T to activate the annotation tool, click to place, double-click to edit, drag to move.
- Duplicate node: Ctrl+D or context menu. Clones the node with a new ID and cleared IP address.
- Export as PNG/SVG: Auto-crop with 40px padding. PNG renders at 2x resolution. SVG inlines all styles for standalone use.
- Import from JSON: Restore a previously exported topology via file picker.
๐ก Tip: After designing your topology, use the VLSM calculator to assign subnets, then run a packet simulation between two endpoints to verify routing before implementing in production. Export as SVG for editable diagrams in Visio or Figma.
๐๏ธ Photo Organizer
What it does: A full-featured browser-based photo and video management tool. Open any folder on your computer, tag and categorize photos, star-rate them, find duplicates, batch rename, and move files into category subfolders โ all using the File System Access API with zero uploads.
Getting started:
- Click ๐ Open Folder and grant read/write access to your photos folder. Subfolders are scanned recursively. Supports JPG, PNG, GIF, WebP, HEIC, RAW, MP4, MOV, and WebM.
- Photos appear as a lazy-loaded thumbnail grid. Use the size slider to adjust thumbnail dimensions.
- Double-click any photo to open the lightbox with full EXIF data, GPS coordinates (with Google Maps link), star rating, category assignment, rotation, and rename.
Tagging & categorization:
- Category bar: Default categories (Family, Travel, Holidays, Food, Pets, Work/School) plus custom categories you can add, rename, or delete.
- Lightbox shortcuts: Press 1โ9 to assign a category, L for Later, D to mark for deletion, 0 to clear.
- Auto-advance: Enable the checkbox in the lightbox sidebar โ after tagging, automatically jumps to the next untagged photo.
- Right-click context menu: Quick-tag any photo from the grid without opening the lightbox.
- Drag-and-drop: Select photos and drag them onto any category chip (including "Later" and "To Delete") in the category bar.
- Bulk actions: Lasso-select or Ctrl+A, then use bulk Tag All, Later, or Delete buttons.
Filtering & sorting:
- Sort by: Name, Date, Stars, File size (ascending/descending).
- Filter by type: All types, Photos only, Videos only.
- Star filter chips: ★+, ★★★+, ★★★★★+ in the category bar to show only photos at or above a star threshold.
- Date range filter: From/To date pickers to narrow to specific time periods.
- Search: Filename and path search with instant results.
Advanced features:
- ๐ Refresh / Auto-refresh: Rescan the folder for new or removed files without losing existing tags. Auto-refresh polls every 15 seconds.
- โฌ Comparison mode: Side-by-side split view for comparing adjacent photos. Keep/delete actions with undo support.
- โ Batch rename: Pattern-based renaming with {n} (sequence), {cat} (category), {date} (YYYY-MM-DD) tokens and live preview.
- ๐ Duplicate finder: Detects duplicates by filename and file size. Review groups and mark extras for deletion.
- ๐ฆ Move to folders: Moves tagged photos into category-named subfolders on disk (actual filesystem operation).
- ๐ Delete marked: Permanently removes files marked for deletion from disk.
- โถ Slideshow: Auto-advancing slideshow of tagged photos with pause, navigation, and progress bar.
- โฌ Export / โฌ Import State: Save and restore all tags, stars, and categories as a JSON file.
- โฉ Undo / โช Redo: Full 50-step undo/redo stack for all tagging and deletion operations.
- ๐ Session log: Tracks all actions with timestamps and session statistics.
- EXIF parsing: Reads Make, Model, DateTime, FNumber, ExposureTime, ISO, Orientation, and GPS coordinates from JPEG files.
Keyboard shortcuts (grid):
- Ctrl+A โ Select all Delete/Backspace โ Mark selected for deletion Enter โ Open lightbox F5 โ Refresh folder
Keyboard shortcuts (lightbox):
- โ/โ โ Prev/Next 1โ9 โ Tag category 0 โ Clear tag L โ Later D โ Delete R โ Rotate Z โ Zoom Esc โ Close
๐ก Tip: For maximum speed, enable Auto-advance in the lightbox, set your filter to "Untagged", and press number keys (1โ9) to fly through hundreds of photos. The tool remembers your tags in localStorage even if you close the browser โ re-open the same folder to continue where you left off. Use Export State for backup.
โ Still Need Help?
If you encounter issues or have questions not covered here:
- ๐ง Email: Contact Support
- ๐ Report bugs: Include browser version and steps to reproduce
- ๐ก Feature requests: Always welcome!
โ BACK TO DASHBOARD