Skip the filter ladder. Describe the kind of creator you want in plain English and get a ranked list back. Behind the scenes an LLM translates your prompt into the same structured filters the creators.search endpoint accepts, runs the search, and returns the matches alongside the filters it inferred so you can audit or reproduce the call.
What it does
- Parses a natural-language query (niche, region, follower tier, engagement, hashtags, platforms).
- Translates it into structured filters using a domain-tuned system prompt.
- Runs the search and returns the matching creators in the same shape as `/v1/creators/search`.
- Returns the inferred filters in `meta.interpreted_filters` so you can debug, log, or replay the search deterministically.
Use `ai_search` for prospect-friendly UIs, exploratory queries, or when your end-user describes intent rather than constraints. Use `creators.search` when your client already has explicit filters: it's 24 credits cheaper per call and answers in milliseconds instead of seconds.
POST /v1/ai_search
Body
| Param | Type | Notes |
|---|---|---|
| `query` (required) | string | Plain-English description of who you're looking for. |
| `max_results` | int | How many creators to return. Defaults to 20, max 50. |
Example
curl -X POST "https://developers.tokfluence.com/v1/ai_search" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"query": "fitness creators in the US under 100K followers with high engagement",
"max_results": 5
}'
Response shape
{
"data": [
{
"id": "tkc_a1b2c3d4e5f6a7b8",
"username": "alice",
"follower_count": 78000,
"engagement_rate": 0.078,
"region": "US",
...
}
],
"meta": {
"interpreted_filters": {
"min_followers": 1000,
"max_followers": 100000,
"min_engagement_rate": 0.05,
"region": "US",
"hashtag": "fittok,gymtok,workout,fitnessmotivation"
},
"total_matches": 312,
"model": "claude-sonnet-4-20250514",
"cost": 25
}
}
The `data` array uses the same creator schema as `creators.search` (`tkc_` id, contact emails, cross-platform info, etc.).
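The same call in Python, as a minimal sketch (the `requests` library and a `TOKEN` environment variable are assumptions; the endpoint, body fields, and response shape are as documented above):

```python
import os
import requests

API_BASE = "https://developers.tokfluence.com/v1"
TOKEN = os.environ["TOKEN"]  # same bearer token as the curl example

def ai_search(query: str, max_results: int = 20) -> dict:
    """POST /v1/ai_search and return the parsed JSON body."""
    resp = requests.post(
        f"{API_BASE}/ai_search",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"query": query, "max_results": max_results},
        timeout=30,  # matches the endpoint's own 30s timeout
    )
    resp.raise_for_status()
    return resp.json()

body = ai_search("fitness creators in the US under 100K followers with high engagement", 5)
for creator in body["data"]:
    print(creator["username"], creator["follower_count"], creator["engagement_rate"])
print(body["meta"]["interpreted_filters"])  # the audit trail (next section)
```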
Audit trail: the AI's choices
Every successful response includes meta.interpreted_filters: the exact filters the model chose. Three reasons this matters:
- Debug bad matches. If the results look off, the filters tell you why immediately.
- Reproduce deterministically. Pass the same filters to `GET /v1/creators/search` for an LLM-free re-run that costs 1 credit instead of 25 (see the sketch after this list).
- Build a feedback loop. Show the filters in your own UI, let your users tweak them, then call `creators.search` with the edited set.
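A sketch of the replay pattern. It assumes each key in `interpreted_filters` maps one-to-one onto a `creators.search` query parameter, which matches the example above but should be verified against the creators.search reference:

```python
import requests

def replay(filters: dict, token: str) -> list[dict]:
    """Re-run an AI search deterministically via the 1-credit endpoint.

    `filters` is the meta.interpreted_filters object from a previous
    /v1/ai_search response. Assumes each filter key maps directly to a
    creators.search query parameter -- verify against that endpoint's docs.
    """
    resp = requests.get(
        "https://developers.tokfluence.com/v1/creators/search",
        headers={"Authorization": f"Bearer {token}"},
        params={k: v for k, v in filters.items() if v is not None},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]
```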
"find me influencers"), the model returns a clarification instead of filters. The response is 200 with data: [], meta.interpreted_filters: null, and the question in meta.clarification. Show it to your user, get more detail, retry. You're still charged for the LLM round-trip.
Latency and cost
This endpoint is the most expensive one in v1 (25 credits per call) and the slowest (3–10 seconds typical, 30s timeout). Every call hits an external LLM provider — that's where both the cost and the latency come from.
Independent of the regular per-key throttle (1200 req/min), AI search has its own ceiling: 30 calls per minute per key. This caps the LLM bill if a client goes wild, and it's tight enough that you'll want to cache results client-side for repeated identical queries.
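A minimal client-side cache along those lines, reusing the `ai_search` helper from the first sketch (the TTL and the key normalization are assumptions to illustrate the pattern, not API behavior):

```python
import time

_cache: dict[tuple[str, int], tuple[float, dict]] = {}
CACHE_TTL = 300  # seconds; an assumption -- tune to how fresh you need results

def cached_ai_search(query: str, max_results: int = 20) -> dict:
    key = (query.strip().lower(), max_results)
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < CACHE_TTL:
        return hit[1]  # saves 25 credits and an LLM round-trip
    body = ai_search(query, max_results)
    _cache[key] = (time.monotonic(), body)
    return body
```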
Errors
| Status | Code | Notes |
|---|---|---|
| 200 | ok with data | Filters interpreted, search ran, results returned. You're charged. |
| 200 | clarification | Model asked a follow-up question. `data` is empty, `meta.clarification` has the question. You're still charged for the LLM call. |
| 402 | insufficient_credits | Wallet empty. No LLM call made. |
| 422 | missing_parameter | `query` is required and non-empty. |
| 429 | rate_limited | 30 calls/min/key cap reached. Retry after the `Retry-After` seconds. |
| 502 | ai_unavailable | LLM provider failed or timed out. Credits are refunded automatically. |
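Putting the table into practice, a retry wrapper might look like the sketch below (reusing `API_BASE` and `TOKEN` from the first sketch; the backoff policy is an assumption, not a prescribed client behavior):

```python
import time
import requests

def ai_search_with_retry(query: str, max_results: int = 20, attempts: int = 3) -> dict:
    """Call ai_search, honoring Retry-After on 429. A sketch, not a full client."""
    for _ in range(attempts):
        resp = requests.post(
            f"{API_BASE}/ai_search",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"query": query, "max_results": max_results},
            timeout=30,
        )
        if resp.status_code == 429:
            # Per-key AI ceiling hit; the server says how long to back off.
            time.sleep(int(resp.headers.get("Retry-After", "2")))
            continue
        if resp.status_code == 502:
            # Provider failure: credits were refunded, so a retry is safe.
            continue
        resp.raise_for_status()  # surfaces 402/422 as exceptions
        return resp.json()
    raise RuntimeError("ai_search: retries exhausted")
```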