
The /property/search endpoint supports two pagination modes: cursor (recommended) and offset (legacy, deprecated). This page explains both, when to use which, and how to handle common workflows.

TL;DR

  • For pagination beyond the first page, always use cursor.
  • Opt in by passing cursor= (empty value) on your first request — even with no token. The response will include metadata.nextCursor for the next page.
  • Cursor pagination is constant-time per page regardless of how deep you’ve paginated.
  • The offset parameter still works for backwards compatibility but does not scale beyond a few thousand records and may time out.
  • If you’re doing incremental sync (fetching only what’s new since your last run), cursor pagination is the natural fit — save the nextCursor from your last response, and pass it back later to continue from exactly where you stopped.

Cursor mode is opt-in. Without the cursor query parameter, the response uses the legacy offset shape (metadata.offset: 0, no nextCursor). To enable cursor pagination you must include the cursor parameter — even an empty value (cursor=) is enough. This keeps existing callers’ default responses unchanged.

How cursor pagination works

Each response includes metadata.nextCursor — an opaque string token. Pass that token back as the cursor parameter on the next request to retrieve the next page. When there are no more results, nextCursor is null. To start a fresh pagination, send an empty cursor (cursor=). To continue a previous one, send the saved token.

Basic example

First page — pass cursor= (empty value) to opt into cursor mode:
curl 'https://app.realie.ai/api/public/property/search/?state=NV&limit=100&cursor=' \
  -H 'Authorization: <api-key>'
Response:
{
  "properties": [/* ...100 properties... */],
  "metadata": {
    "limit": 100,
    "count": 100,
    "nextCursor": "eyJsYXN0SWQiOiI2NzI4ZGM0OTA5MTNiNDJmNTJkOTU4OTUifQ"
  }
}
Next page — pass back the nextCursor value as cursor:
curl 'https://app.realie.ai/api/public/property/search/?state=NV&limit=100&cursor=eyJsYXN0SWQiOiI2NzI4ZGM0OTA5MTNiNDJmNTJkOTU4OTUifQ' \
  -H 'Authorization: <api-key>'
Each response gives you a fresh cursor for the page after it. Repeat until nextCursor is null.

Last page

When you’ve reached the end of the result set, nextCursor is null:
{
  "properties": [/* ...47 final properties... */],
  "metadata": {
    "limit": 100,
    "count": 47,
    "nextCursor": null
  }
}

Saving cursors for later

Cursor tokens are stateless and portable. They aren’t tied to a session, a connection, or a time window. You can:
  • Save the token to disk, a database, or an environment variable
  • Send it to a different machine
  • Pause for hours, days, or weeks and resume from exactly the same point
This makes cursor pagination the ideal pattern for incremental sync workflows.

Incremental sync example

Suppose you’re keeping a local mirror of all properties in Nevada. You only want to fetch what’s new since your last run.

First run (one-time setup): walk to the end of the current dataset and save the final cursor.
import requests, json

cursor = ""  # empty string opts into cursor mode
headers = {"Authorization": "<api-key>"}

while True:
    r = requests.get(
        "https://app.realie.ai/api/public/property/search/",
        params={"state": "NV", "limit": 100, "cursor": cursor},
        headers=headers,
    )
    data = r.json()
    save_to_local_store(data["properties"])
    next_cursor = data["metadata"]["nextCursor"]
    if next_cursor is None:
        # Reached the end — save the final cursor
        with open("nv_cursor.txt", "w") as f:
            f.write(cursor)  # save the cursor used in this request
        break
    cursor = next_cursor
Subsequent runs (e.g., daily): load the saved cursor and pull only what’s new.
with open("nv_cursor.txt") as f:
    cursor = f.read().strip()

while True:
    r = requests.get(
        "https://app.realie.ai/api/public/property/search/",
        params={"state": "NV", "limit": 100, "cursor": cursor},
        headers=headers,
    )
    data = r.json()
    if not data["properties"]:
        break
    save_to_local_store(data["properties"])
    next_cursor = data["metadata"]["nextCursor"]
    if next_cursor is None:
        # Caught up — save the latest cursor for next run
        with open("nv_cursor.txt", "w") as f:
            f.write(cursor)  # the cursor used for this final request
        break
    cursor = next_cursor
Each subsequent run only fetches properties added since your last cursor — typically a handful of pages, not the full dataset. Each page returns in around 200 ms regardless of how deep into the dataset you are.

Performance characteristics

| Scenario | Cursor | Offset (legacy) |
| --- | --- | --- |
| First page | ~200 ms | ~200 ms |
| Page at depth 10,000 | ~200 ms | seconds, may time out |
| Page at depth 50,000 | ~200 ms | will time out |
| Resume from saved position (any depth) | ~200 ms | requires re-walking from page 0 |
| Incremental “what’s new” sync | ~200 ms per new page | impractical — offsets shift as data changes |
The cursor’s per-page latency stays flat at any depth because the database can seek directly to your saved position via an index. Offset pagination has to walk through every record up to your offset on every request.
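That difference is easiest to see in the underlying queries. The sketch below is illustrative only, not the API’s actual implementation: it uses pymongo against a hypothetical properties collection to show the two shapes, a seek on the last-seen _id (what the cursor token encodes) versus a skip over every earlier document.
from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient()["demo"]["properties"]  # hypothetical local collection, for illustration

# Cursor-style page: the index jumps straight to the saved position, then reads one page.
last_id = "6728dc490913b42f52d95895"  # the id carried inside the saved cursor token
page = list(coll.find({"_id": {"$gt": ObjectId(last_id)}}).sort("_id", 1).limit(100))

# Offset-style page: the server must walk past every skipped document on each request.
deep_page = list(coll.find().sort("_id", 1).skip(50_000).limit(100))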

Reference: cursor format

The cursor is base64url-encoded JSON of the form {"lastId": "<hex ObjectId>"}. You don’t need to construct or parse it yourself — always treat it as opaque and pass back exactly what the API returned.
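If you’re debugging, you can peek inside a token to see that shape, but never build one yourself; the snippet below just decodes the example token shown earlier on this page.
import base64, json

token = "eyJsYXN0SWQiOiI2NzI4ZGM0OTA5MTNiNDJmNTJkOTU4OTUifQ"
padded = token + "=" * (-len(token) % 4)  # base64url tokens may arrive without padding
print(json.loads(base64.urlsafe_b64decode(padded)))
# {'lastId': '6728dc490913b42f52d95895'}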

Migrating from offset

If your existing code uses ?offset=N to paginate, the migration is one query parameter. Replace your first call’s offset with an empty cursor, then use the returned nextCursor for subsequent pages.
- GET /api/public/property/search?state=NV&limit=100&offset=0
+ GET /api/public/property/search?state=NV&limit=100&cursor=
The first response now includes metadata.nextCursor. Pass that value back as cursor on the next request and repeat until nextCursor is null. No need to track or increment offsets — the server hands you the next-page token each time. For codebases with deep pagination (offset > a few thousand), this migration is worth doing soon — offset calls at those depths can time out and return 503.
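In code the change is just as small. A minimal sketch, assuming a requests-based loop like the earlier examples (process is a placeholder for your own handling):
import requests

url = "https://app.realie.ai/api/public/property/search/"
headers = {"Authorization": "<api-key>"}

# Before: params={"state": "NV", "limit": 100, "offset": offset}, incremented by the client.
# After: the server hands back the next-page token each time.
cursor = ""  # empty value opts into cursor mode on the first request
while True:
    data = requests.get(url, params={"state": "NV", "limit": 100, "cursor": cursor}, headers=headers).json()
    process(data["properties"])  # placeholder for your own handling
    cursor = data["metadata"]["nextCursor"]
    if cursor is None:
        break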

When to keep using offset

  • You’re prototyping and only ever fetch the first page or two
  • You have a small result set that fits comfortably under a few hundred records
  • You’re working with existing code that already uses offset and shallow depth
For anything else — production sync workflows, large datasets, deep pagination — use cursor. The offset parameter (with offset > 0) produces a Deprecation: true HTTP header and a metadata.deprecationNotice field in the response body.
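If you want to confirm whether an existing code path still triggers the deprecation, a quick check of the response is enough; this sketch just inspects the header and field named above.
import requests

r = requests.get(
    "https://app.realie.ai/api/public/property/search/",
    params={"state": "NV", "limit": 100, "offset": 200},
    headers={"Authorization": "<api-key>"},
)
if r.headers.get("Deprecation"):
    print("deprecated pagination:", r.json()["metadata"].get("deprecationNotice"))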

Endpoints that support cursor pagination

Other paginated endpoints will adopt the same pattern over time. Until they do, cursor is silently ignored on endpoints that don’t support it.
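Until then, a client that shares one pagination helper across endpoints can feature-detect cursor support from the response shape. This is a heuristic sketch, assuming that an endpoint which ignores cursor returns the legacy metadata without a nextCursor key, just as /property/search does before you opt in.
def supports_cursor(response_json):
    # Cursor-mode responses include metadata.nextCursor (possibly null);
    # legacy-shaped responses omit the key entirely.
    return "nextCursor" in response_json.get("metadata", {})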