The /readings endpoint is for interactive use — last 24h, single sensor.
For bulk pulls across many sensors and long time ranges, use
GET /v1/readings.csv instead: one request, one rate-limit hit, and a
streamed CSV body you save to disk.
Curl
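A sketch of a single bulk request with curl. The Authorization header style and the start/end parameter names are assumptions, not confirmed by this page; the call only runs when a key is configured.

```shell
# Sketch of one bulk pull; start/end parameter names are assumptions.
API_KEY="${READINGS_API_KEY:-}"   # a read-scope key
URL="https://dev.1st.app/v1/readings.csv?shape=long&start=2024-01-01&end=2024-03-31"

# Only hit the network when a key is actually configured.
if [ -n "$API_KEY" ]; then
  curl --fail --silent --show-error \
    -H "Authorization: Bearer $API_KEY" \
    -o readings.csv \
    "$URL"
fi
```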
shape=long returns one row per (sensor, bucket). Columns:
bucket_at, sensor_id, display_name, co2_ppm, temp_c, humidity_pct, lux, spl_db, valid_mask.
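So a long-shape export begins with this header row:

```csv
bucket_at,sensor_id,display_name,co2_ppm,temp_c,humidity_pct,lux,spl_db,valid_mask
```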
Spreadsheet-friendly wide shape
Add ?shape=wide to get one row per bucket with one column per
(sensor × metric) — the layout spreadsheets actually want:
bucket_at, room4.co2_ppm, room4.temp_c, room12.co2_ppm, ....
Column names are derived from each sensor’s display_name (lowercased,
non-alphanumeric replaced with _, collision suffixes when needed).
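The name derivation can be sketched in Python. Whether runs of non-alphanumeric characters collapse to a single underscore, and the exact collision-suffix format, are assumptions; this version replaces each character individually and appends a numeric suffix:

```python
def column_name(display_name: str, seen: dict) -> str:
    """Derive a wide-shape column prefix from a sensor's display_name.

    Sketch of the documented rule: lowercase, non-alphanumeric -> '_',
    numeric suffix on collision (suffix format is an assumption).
    `seen` tracks how many times each base name has appeared.
    """
    base = "".join(c if c.isalnum() else "_" for c in display_name.lower())
    n = seen.get(base, 0) + 1
    seen[base] = n
    return base if n == 1 else f"{base}_{n}"
```

Two sensors named "Room 4" and "Room-4" would therefore map to distinct prefixes rather than colliding.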
Sensors filter
To restrict the export to a subset, pass ?sensors= with a comma-
separated list of UUIDs. Omit ?sensors= to include every non-archived
sensor in the team.
Sheets / Excel direct
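In Google Sheets, for example, the built-in IMPORTDATA function can load a CSV URL directly. A sketch, assuming the API accepts the key in a ?key= query parameter (the parameter name is an assumption):

```
=IMPORTDATA("https://dev.1st.app/v1/readings.csv?shape=wide&start=2024-01-01&end=2024-01-31&key=YOUR_API_KEY")
```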
Both connectors can fetch the CSV directly if you pass the key as a query parameter (they can’t set headers):

Limits
- Max range: 90 days per request.
- Max rows: 250,000 per request. Requests that would exceed either limit get a 413 — narrow the range or sensor list and retry.
- Floats are rounded to 2 decimals.
- Timestamps are ISO 8601 UTC (Excel-parseable).
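To gauge whether a request will fit under the row cap, multiply sensors by buckets. A quick sketch, assuming 5-minute buckets (the bucket size is an assumption, not stated by this page):

```python
def estimated_rows(n_sensors: int, days: int, bucket_minutes: int = 5) -> int:
    """Rough long-shape row count: one row per (sensor, bucket).

    bucket_minutes defaults to 5 as an illustrative assumption.
    """
    buckets = days * 24 * 60 // bucket_minutes
    return n_sensors * buckets
```

Ten sensors over the full 90-day window already exceed 250,000 rows at that resolution, so such a pull would need a narrower range or sensor list.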
Python
Nightly job pattern
For unattended nightly pulls, use a read-scope key with a meaningful
name (“Nightly BigQuery sync”) and call the CSV endpoint once. If the
response exceeds 250k rows, split the range — e.g. 30 days per call,
12 calls per year of history.