What you’ll build
A 60-line Python script that hits Fluenta every morning, walks the entire published-ideas catalogue, and auto-bookmarks anything above an LRS threshold you pick. By the end you’ll have:
- A working httpx client wrapping the Fluenta REST surface.
- Robust pagination — no idea slips through.
- Backoff that survives 429s and transient 5xx without losing a run.
- A cron entry and a GitHub Actions workflow you can paste verbatim.
Total time: ~12 minutes. And it really is about 60 lines of Python after the imports: the hard part is understanding the loop, not writing it.
Prerequisites
- Python 3.10 or newer.
- A Fluenta API key with the `read_write` scope (the bookmark POST requires it). Create one if you don't have it.
- `httpx` installed: `pip install httpx`.
Step 1 — Talk to the API
Set up an httpx client with the Bearer token attached as a default header. Wrap it in a context manager so connections are reused across requests — this is a real performance win when you’re paginating hundreds of pages.
import httpx
API_BASE = "https://dev.fluenta.space/backend"
API_KEY = "fl_live_..." # from /app/settings/api-keys
with httpx.Client(
base_url=API_BASE,
headers={"Authorization": f"Bearer {API_KEY}"},
timeout=30.0,
) as client:
r = client.get("/api/v1/ext/ping")
r.raise_for_status()
print(r.json())
# {"success": true, "data": {"ok": true}}

If you get a 200 with `success: true`, your key works and CORS isn’t in the way (you’re running server-side anyway). On 401, check the key. On 403, the key’s scope is too narrow. On 404, the base URL is wrong — double-check API_BASE.
Step 2 — Paginate every idea
The ideas catalogue lives at GET /api/v1/ext/ideas/search. Pagination is page-based: pass page (1-indexed) and per_page (max 50), and increment page until data comes back empty.
Don’t hardcode a stop condition like “loop 100 pages”. Catalogue size grows weekly. The empty-array sentinel is the only correct termination.
Step 3 — Filter by LRS
Each idea exposes a lrs field (Launch Readiness Score, 0–100). Pick a threshold that fits your taste — 70 is the published “promising” floor; 80+ is “prime launch.” The script below defaults to 80 and lets you override on the command line.
Future-proof tip: the field name might surface as either lrs or launch_readiness_score depending on how the spec evolves. Read both with a fallback so a rename doesn’t break your job.
Step 4 — Bookmark
POST /api/v1/ext/ideas/{id}/bookmark takes no body. Returns 200 (or 201) on success. If the idea is already in your pipeline, treat it as a no-op — some servers return 409 Conflict, others return 200; the script handles both.
Step 5 — Put it together
Here’s the full script. Save as fluenta_bookmark_high_lrs.py and run with python fluenta_bookmark_high_lrs.py --min-lrs 80. Use --dry-run first to see what would happen.
"""
fluenta_bookmark_high_lrs.py — auto-bookmark every published Fluenta idea
whose Launch Readiness Score (LRS) clears your threshold.
Usage:
export FLUENTA_API_KEY="fl_live_..."
python fluenta_bookmark_high_lrs.py --min-lrs 80
"""
from __future__ import annotations
import argparse
import os
import sys
import time
from typing import Any, Iterator
import httpx
API_BASE = os.environ.get(
"FLUENTA_API_BASE", "https://dev.fluenta.space/backend"
)
API_KEY = os.environ["FLUENTA_API_KEY"] # raises KeyError if missing
def request_with_retry(
client: httpx.Client,
method: str,
path: str,
*,
max_retries: int = 5,
**kwargs: Any,
) -> httpx.Response:
"""Wrap httpx with retries on 429 and 5xx. Honours Retry-After."""
delay = 1.0
for attempt in range(max_retries):
resp = client.request(method, path, **kwargs)
if resp.status_code == 429:
wait = float(resp.headers.get("Retry-After", delay))
time.sleep(min(wait, 60))
delay *= 2
continue
if 500 <= resp.status_code < 600:
time.sleep(min(delay, 60))
delay *= 2
continue
return resp
resp.raise_for_status()
return resp
def search_ideas(client: httpx.Client) -> Iterator[dict[str, Any]]:
"""Paginate every published idea. Yields one idea dict at a time."""
page = 1
while True:
resp = request_with_retry(
client,
"GET",
"/api/v1/ext/ideas/search",
params={"page": page, "per_page": 50},
)
if resp.status_code == 401:
sys.exit("ERROR: API key is invalid or revoked.")
if resp.status_code == 403:
sys.exit("ERROR: this key lacks the 'read' scope.")
resp.raise_for_status()
body = resp.json()
ideas = body.get("data", [])
if not ideas:
return # no more pages
for idea in ideas:
yield idea
page += 1
def bookmark(client: httpx.Client, idea_id: str) -> bool:
"""Add a bookmark. Returns True if added, False if already bookmarked."""
resp = request_with_retry(
client,
"POST",
f"/api/v1/ext/ideas/{idea_id}/bookmark",
)
if resp.status_code == 200 or resp.status_code == 201:
return True
if resp.status_code == 409:
return False # already bookmarked
if resp.status_code == 403:
sys.exit("ERROR: bookmarking requires a 'read_write' key.")
resp.raise_for_status()
return False
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument("--min-lrs", type=float, default=80.0)
parser.add_argument("--dry-run", action="store_true")
args = parser.parse_args()
headers = {"Authorization": f"Bearer {API_KEY}"}
with httpx.Client(base_url=API_BASE, headers=headers, timeout=30.0) as client:
scanned = matched = bookmarked = skipped = 0
for idea in search_ideas(client):
scanned += 1
lrs = idea.get("lrs") or idea.get("launch_readiness_score")
if lrs is None or lrs < args.min_lrs:
continue
matched += 1
if args.dry_run:
print(f"[dry-run] would bookmark {idea['id']} ({idea.get('title')!r}, LRS={lrs})")
continue
if bookmark(client, idea["id"]):
bookmarked += 1
print(f"+ bookmarked {idea['id']} ({idea.get('title')!r}, LRS={lrs})")
else:
skipped += 1
print(f"= already bookmarked {idea['id']} ({idea.get('title')!r})")
print(
f"\ndone. scanned={scanned} matched={matched} "
f"bookmarked={bookmarked} already_bookmarked={skipped}"
)
if __name__ == "__main__":
main()
Step 6 — Schedule it
Pick whichever scheduler your stack runs on. Both options below pick up FLUENTA_API_KEY from the environment.
Cron (Linux / macOS)
# Define the key at the top of your crontab (cron does not inherit your shell
# environment), then run every morning at 8 AM UTC
FLUENTA_API_KEY=fl_live_...
0 8 * * * cd /opt/fluenta && python3 fluenta_bookmark_high_lrs.py --min-lrs 80 >> /var/log/fluenta.log 2>&1

GitHub Actions
# .github/workflows/fluenta-daily.yml
name: Bookmark high-LRS ideas
on:
schedule:
- cron: "0 8 * * *" # 08:00 UTC
workflow_dispatch:
jobs:
bookmark:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- run: pip install httpx
- run: python fluenta_bookmark_high_lrs.py --min-lrs 80
env:
FLUENTA_API_KEY: ${{ secrets.FLUENTA_API_KEY }}

What to do next
- Add a Slack notification when a new high-LRS idea lands — pipe the script’s stdout to a webhook that posts to #ideas.
- Switch to running an X-Ray from inside Cursor (MCP) so your agent can score arbitrary ideas mid-conversation.
- Read the concepts page for credits, pagination, and retry semantics in one shot.