
Getting started

From zero to first response in five minutes.

This guide walks through the three things you need: a Pro-tier Scouq account, an API token, and a working first request.

1. Confirm Pro access

API access is gated by plan. If your account is on the Professional or Lifetime plan you are eligible. If not, upgrade from the pricing page before continuing.

To verify, sign in to scouq.com/app and open Settings. Your active plan is shown at the top of the account section.

2. Get an API token (Settings UI coming soon)

The long-term flow is a dedicated API tab inside Settings where you mint, label, and revoke tokens. That UI is currently being built. In the interim, your Supabase session JWT works as an API token.

To extract your current session token, sign in to scouq.com/app, open your browser developer console on any Scouq page, and run:

JSON.parse(localStorage.getItem(
  Object.keys(localStorage).find(k => k.startsWith('sb-') && k.includes('-auth-token'))
)).access_token;

Copy the resulting string and treat it like a password. Session tokens are short-lived (one hour by default) and refresh automatically inside the app. For long-running scripts, plan to refresh the token before each expiry, or wait for the upcoming Settings UI, which issues long-lived keys.
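Because the session token is a standard JWT, a script can read the expiry out of the token itself rather than guessing. The sketch below (Node 16+, which has base64url support in Buffer) decodes the payload's exp claim; secondsUntilExpiry is an illustrative helper, not part of any Scouq SDK, and the token here is built locally purely for demonstration.

```javascript
// Sketch: decide whether a Supabase session JWT needs refreshing by
// decoding the `exp` claim from the JWT payload (the middle segment).
function secondsUntilExpiry(jwt) {
  const payload = jwt.split('.')[1];
  const claims = JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
  return claims.exp - Math.floor(Date.now() / 1000);
}

// Illustrative token built locally; a real token comes from step 2.
const fakePayload = Buffer.from(
  JSON.stringify({ exp: Math.floor(Date.now() / 1000) + 3600 })
).toString('base64url');
const fakeToken = `header.${fakePayload}.signature`;

console.log(secondsUntilExpiry(fakeToken)); // roughly 3600 for a fresh token
```

A script could call this before each batch of requests and re-authenticate whenever the remaining time drops below a safety margin.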

Never commit a token to source control and never expose it in a browser bundle you ship to end users. If you suspect a token has leaked, sign out everywhere from Settings to invalidate it.

3. Make your first request

The simplest endpoint to test is POST /api/ai, which runs a chat completion through Scouq's compliance-checked AI proxy. Replace YOUR_API_TOKEN below with the token from step 2.

curl https://scouq.com/api/ai \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [
      { "role": "user", "content": "What does a 70 percent rule offer look like on a 220k ARV with 35k rehab?" }
    ],
    "max_tokens": 300,
    "temperature": 0.3
  }'
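If you are calling the endpoint from Node or another JavaScript runtime rather than curl, the same request can be sketched with fetch. buildChatRequest below is a hypothetical helper, not part of any Scouq SDK; the URL, headers, and body simply mirror the curl call above.

```javascript
// Sketch: build the same POST /api/ai request shown in the curl example.
function buildChatRequest(token, userMessage) {
  return {
    url: 'https://scouq.com/api/ai',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'llama-3.3-70b-versatile',
        messages: [{ role: 'user', content: userMessage }],
        max_tokens: 300,
        temperature: 0.3,
      }),
    },
  };
}

// Usage (in an async context):
// const { url, options } = buildChatRequest(process.env.SCOUQ_TOKEN, 'Hello');
// const res = await fetch(url, options);
// const data = await res.json();
```

Keeping the request construction in a helper makes it easy to swap in a freshly refreshed token for each call.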

Sample response

The response body mirrors the OpenAI chat completion shape. Pull the assistant text out of choices[0].message.content.

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1747083600,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Max offer at 70 percent: 0.7 * 220000 - 35000 = 119000. This is not financial, legal, or tax advice. Consult a licensed advisor."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 64,
    "completion_tokens": 48,
    "total_tokens": 112
  }
}
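As a minimal sketch of consuming that shape in JavaScript, the snippet below pulls the assistant text out of choices[0].message.content and spot-checks the 70 percent rule arithmetic from the example prompt. The inline object is a trimmed stand-in for a real parsed response, not actual API output.

```javascript
// Sketch: reading the assistant text from a parsed response body.
// The inline object stands in for JSON.parse(responseText).
const body = {
  choices: [
    { message: { role: 'assistant', content: '0.7 * 220000 - 35000 = 119000' } },
  ],
};
const text = body.choices[0].message.content;

// 70 percent rule from the example prompt: max offer = 0.7 * ARV - rehab.
const maxOffer = 0.7 * 220000 - 35000;
console.log(text, maxOffer);
```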

What to do next