API reference

Base URL: https://scouq.com/api. All endpoints require a bearer token.

Every endpoint expects JSON in and returns JSON out. Authentication is bearer-token only, passed in the Authorization header. Errors follow the shape documented in the overview.

POST /api/ai (Pro)

Run a chat completion through Scouq's compliance-checked AI proxy. Scouq prepends a fixed system prompt that enforces Fair Housing Act compliance and financial-advice disclaimers; user-supplied system messages are stripped.

Request headers

| Header | Required | Description |
| --- | --- | --- |
| Authorization | Yes | Bearer token. See Getting started. |
| Content-Type | Yes | Must be application/json. |

Request body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | One of llama-3.3-70b-versatile or llama-3.1-8b-instant. |
| messages | array | Yes | Non-empty array of { role, content } objects. Roles are user or assistant. System messages are ignored. |
| max_tokens | integer | No | Defaults to 300. Hard cap of 800. |
| temperature | number | No | Defaults to 0.3. Clamped to [0, 1.5]. |

Combined content across all messages must not exceed 8,000 characters. Longer prompts return 400 prompt_too_long.
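These constraints can be checked client side before a request is sent, saving a round trip. A minimal sketch (the allow list, limits, and error codes come from this page; the function name is illustrative):

```python
ALLOWED_MODELS = {"llama-3.3-70b-versatile", "llama-3.1-8b-instant"}

def validate_payload(payload: dict) -> list[str]:
    """Return the documented 400 error codes this payload would trigger."""
    problems = []
    if payload.get("model") not in ALLOWED_MODELS:
        problems.append("model_not_allowed")
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("messages_required")
    else:
        for m in messages:
            # each message needs a role and string content
            if not isinstance(m, dict) or "role" not in m or not isinstance(m.get("content"), str):
                problems.append("bad_message_shape")
                break
        else:
            # combined content across all messages is capped at 8,000 characters
            if sum(len(m["content"]) for m in messages) > 8000:
                problems.append("prompt_too_long")
    return problems
```

An empty list means the payload passes the shape checks described above; it does not guarantee the token is valid or the rate limit has headroom.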

Example request

POST /api/ai HTTP/1.1
Host: scouq.com
Authorization: Bearer YOUR_API_TOKEN
Content-Type: application/json

{
  "model": "llama-3.3-70b-versatile",
  "messages": [
    { "role": "user", "content": "ARV 240000, rehab 28000, asking 145000. Quick take on the 70 percent rule?" }
  ],
  "max_tokens": 250,
  "temperature": 0.3
}
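The same request can be assembled with Python's standard library alone. A sketch (YOUR_API_TOKEN is the placeholder from the example above; the urlopen call is left commented out so the snippet does not fire a live request):

```python
import json
import urllib.request

payload = {
    "model": "llama-3.3-70b-versatile",
    "messages": [
        {"role": "user", "content": "ARV 240000, rehab 28000, asking 145000. Quick take on the 70 percent rule?"}
    ],
    "max_tokens": 250,
    "temperature": 0.3,
}

req = urllib.request.Request(
    "https://scouq.com/api/ai",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json",
    },
    method="POST",
)
# body = json.load(urllib.request.urlopen(req))  # uncomment to send
```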

Response

The response body is the upstream Groq chat completion, passed through verbatim. The schema follows OpenAI's chat completion format.

| Field | Type | Description |
| --- | --- | --- |
| id | string | Completion id from the upstream provider. |
| object | string | Always chat.completion. |
| created | integer | Unix timestamp. |
| model | string | Echo of the requested model. |
| choices | array | One element. choices[0].message.content contains the assistant text. |
| usage | object | prompt_tokens, completion_tokens, total_tokens. |

Example response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1747083600,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Max offer: 0.7 * 240000 - 28000 = 140000. Asking 145000 is 5000 above the 70 percent line; thin but workable if rehab is firm. This is not financial, legal, or tax advice."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 58, "completion_tokens": 62, "total_tokens": 120 }
}
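Because the body follows the OpenAI chat completion shape, pulling out the reply and token count is plain dictionary access. A sketch (the literal below abbreviates the example response):

```python
completion = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Max offer: 0.7 * 240000 - 28000 = 140000."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 58, "completion_tokens": 62, "total_tokens": 120},
}

# choices always has exactly one element, per the response table
text = completion["choices"][0]["message"]["content"]
tokens_used = completion["usage"]["total_tokens"]
```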

Errors

| Status | Error code | Cause |
| --- | --- | --- |
| 400 | invalid_json | Body did not parse as JSON. |
| 400 | model_not_allowed | Model is not in the allow list. |
| 400 | messages_required | Missing or empty messages. |
| 400 | bad_message_shape | A message is missing role or content, or content is not a string. |
| 400 | prompt_too_long | Combined content exceeded 8,000 characters. |
| 401 | missing_bearer_token | No bearer token in Authorization. |
| 401 | invalid_token | Token did not verify against Supabase. |
| 403 | origin_not_allowed | Browser request from a non-allowed origin. |
| 405 | method_not_allowed | Use POST. |
| 429 | rate_limited | 30 requests per 10 minutes exceeded. See Retry-After. |
| 502 | upstream_unreachable | Groq could not be reached. Retry with backoff. |
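A caller that honors Retry-After on 429 and backs off on 502, as the table suggests, might look like this (a sketch: post_json is a hypothetical transport function returning status, headers, and body; the retry policy shown is an assumption, not a server requirement):

```python
import time

def call_with_retry(post_json, payload, max_attempts=4):
    """post_json(payload) -> (status, headers, body). Retries 429 and 502 per the error table."""
    delay = 1.0
    status, headers, body = post_json(payload)
    for _ in range(max_attempts - 1):
        if status == 429:
            # the server says how long to wait via Retry-After
            time.sleep(float(headers.get("Retry-After", delay)))
        elif status == 502:
            # upstream_unreachable: exponential backoff
            time.sleep(delay)
            delay *= 2
        else:
            break
        status, headers, body = post_json(payload)
    return status, body
```

Everything else, including the 4xx validation errors, is returned to the caller unchanged, since retrying those without fixing the request would just burn rate limit.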

Rate limit

30 requests per 10-minute rolling window, per Supabase user id. See Rate limits.
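A client can stay under the limit proactively by tracking its own sliding window. A sketch (the 30-request, 600-second numbers come from this section; the class name is illustrative):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Local mirror of the 30-requests-per-10-minutes rolling window."""

    def __init__(self, limit=30, window_seconds=600):
        self.limit = limit
        self.window = window_seconds
        self.sent = deque()  # timestamps of requests still inside the window

    def try_acquire(self, now=None):
        """Record and allow a request if the window has room; otherwise refuse."""
        now = time.monotonic() if now is None else now
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()  # expire timestamps older than the window
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```

This only avoids obvious 429s from a single process; the server-side count is per Supabase user id, so requests from other clients under the same user still share the quota.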

Compliance notes

The server-side system prompt enforces Fair Housing Act 3604(c) and adds a financial-advice disclaimer to outputs that discuss returns or investment decisions. Output is constrained to ban emojis and dashes. Do not attempt to work around these constraints; doing so violates the terms of service.

Planned endpoints

The following endpoints are on the roadmap and not yet available. They are listed here so integrators can plan ahead. Schemas are not stable until shipped.