Core Concepts
Understanding these key concepts will help you make the most of dakkio.
Architecture Overview
Organization
An organization is the top-level container for all your resources.
Key Points:
- Created automatically when you sign up
- One organization per user account (multi-user orgs coming soon)
- All resources (buckets, API keys, users) belong to an organization
- Billing and plan limits are at the organization level
Example:
{
"_id": "507f1f77bcf86cd799439012",
"name": "Acme Corp's Organization",
"plan": "pro",
"planLimits": {
"maxBuckets": 25,
"maxDataSources": 100,
"dataRetentionDays": 365,
"apiCallsPerMonth": 1000000
}
}
Buckets
A bucket is a logical container for time-series data.
Use Cases:
- Separate different projects or applications
- Isolate production vs development data
- Organize by location (Building A, Building B)
- Group by customer (in multi-tenant applications)
Key Properties:
- name: Human-readable identifier
- retentionDays: How long to keep data (1-3650 days)
- status: active or archived
Example:
{
"_id": "507f1f77bcf86cd799439013",
"name": "Production Sensors",
"description": "Temperature and humidity sensors in production facilities",
"retentionDays": 365,
"status": "active"
}
Best Practices:
- Use descriptive names
- Set appropriate retention periods (shorter = lower costs)
- Archive unused buckets instead of deleting
- Organize buckets by project or environment
Data Sources
A data source defines a type of sensor or metric within a bucket.
Use Cases:
- Different sensor types (temperature, humidity, pressure)
- Different device types (ESP32, Arduino, Raspberry Pi)
- Different metrics (CPU usage, memory, disk)
- Different locations (Room A, Room B)
Types:
- sensor: Physical IoT sensors
- api: Data from external APIs
- webhook: Data pushed via webhooks
- manual: Manually entered data
Example:
{
"_id": "507f1f77bcf86cd799439014",
"bucketId": "507f1f77bcf86cd799439013",
"name": "Temperature Sensor - Room A",
"type": "sensor",
"schema": {
"temperature": "number",
"humidity": "number",
"pressure": "number"
},
"status": "active"
}
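Because the data source declares a schema, a client can sanity-check readings before sending them. The validator below is an illustrative sketch, not part of any dakkio SDK; it assumes schema type names like "number" map onto the obvious Python types:

```python
# Illustrative client-side check: validate a reading against the data
# source schema shown above before sending it. Not a dakkio API.

SCHEMA = {"temperature": "number", "humidity": "number", "pressure": "number"}

# Map schema type names to Python types; "number" covers ints and floats.
TYPE_MAP = {"number": (int, float), "string": str, "boolean": bool}

def validate_reading(values: dict, schema: dict = SCHEMA) -> list[str]:
    """Return a list of problems; an empty list means the reading is valid."""
    errors = []
    for field, type_name in schema.items():
        if field not in values:
            errors.append(f"missing field: {field}")
        elif not isinstance(values[field], TYPE_MAP[type_name]):
            errors.append(f"{field}: expected {type_name}")
    return errors

print(validate_reading({"temperature": 22.5, "humidity": 65, "pressure": 1013.25}))  # []
print(validate_reading({"temperature": "hot"}))  # one type error, two missing fields
```

Rejecting malformed readings at the edge keeps bad points out of your retention window, where they cannot be corrected after ingestion.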
Best Practices:
- One data source per sensor type or metric
- Define clear schemas
- Use descriptive names with locations
- Keep data sources active even if temporarily not sending data
Time-Series Data
Time-series data consists of the individual timestamped readings from your sensors and devices.
Structure:
- timestamp: When the reading was taken (ISO 8601)
- values: Key-value pairs of measurements
- metadata: Additional context (optional)
Example:
{
"bucketId": "507f1f77bcf86cd799439013",
"dataSourceId": "507f1f77bcf86cd799439014",
"timestamp": "2024-01-15T10:30:00Z",
"values": {
"temperature": 22.5,
"humidity": 65,
"pressure": 1013.25
},
"metadata": {
"deviceId": "ESP32-001",
"location": "Room A",
"batteryLevel": 85,
"firmwareVersion": "1.2.3"
}
}
Data Retention:
- Data is automatically deleted after the bucket's retention period
- Deleted data cannot be recovered
- Queries only return data within the retention window
Timestamp Handling:
- Always use UTC timezone
- Use ISO 8601 format: 2024-01-15T10:30:00Z
- Server accepts timestamps up to 24 hours in the future
- Server accepts historical timestamps (back-filling)
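A compliant UTC timestamp can be produced with the Python standard library. Note that isoformat() renders the UTC offset as "+00:00", so the trailing "Z" has to be substituted in:

```python
from datetime import datetime, timezone

# Current time in UTC as an ISO 8601 string with a trailing "Z".
now = datetime.now(timezone.utc)
timestamp = now.isoformat(timespec="seconds").replace("+00:00", "Z")
print(timestamp)  # e.g. 2024-01-15T10:30:00Z
```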
Alert Rules
Alert rules monitor your data and trigger notifications when conditions are met.
Features:
- Natural language queries (no code required)
- Parsed into structured conditions
- Cooldown periods to prevent spam
- Status: active, paused, or inactive
Example:
{
"_id": "507f1f77bcf86cd799439015",
"bucketId": "507f1f77bcf86cd799439013",
"dataSourceId": "507f1f77bcf86cd799439014",
"name": "High Temperature Alert",
"naturalLanguageQuery": "Temperature exceeds 30°C for 5 minutes",
"parsedCondition": {
"field": "temperature",
"operator": ">",
"value": 30,
"duration": 300
},
"status": "active",
"cooldownMinutes": 15
}
How It Works:
- Alert engine checks new data points against rules
- If condition is met, alert is triggered
- Webhooks are called with alert details
- Alert enters cooldown period
- After cooldown, alert can trigger again
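The evaluation loop above can be modeled in a few lines. This is an illustrative sketch of the duration and cooldown logic, not dakkio's actual engine:

```python
from datetime import datetime, timedelta

# Simplified model of the alert evaluation steps above: track how long a
# condition has held, and enforce the cooldown between triggers.

OPERATORS = {
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
}

class AlertState:
    def __init__(self, condition: dict, cooldown_minutes: int):
        self.condition = condition            # a parsedCondition, as shown above
        self.cooldown = timedelta(minutes=cooldown_minutes)
        self.condition_since = None           # when the condition started holding
        self.last_triggered = None

    def check(self, timestamp: datetime, values: dict) -> bool:
        """Return True if this data point should trigger the alert."""
        c = self.condition
        if c["field"] not in values or not OPERATORS[c["operator"]](values[c["field"]], c["value"]):
            self.condition_since = None       # condition broken; reset the window
            return False
        if self.condition_since is None:
            self.condition_since = timestamp
        held = (timestamp - self.condition_since).total_seconds() >= c["duration"]
        cooled = self.last_triggered is None or timestamp - self.last_triggered >= self.cooldown
        if held and cooled:
            self.last_triggered = timestamp
            return True
        return False

# The rule from the example: temperature > 30 held for 300 s, 15 min cooldown.
alert = AlertState({"field": "temperature", "operator": ">", "value": 30, "duration": 300}, 15)
t0 = datetime(2024, 1, 15, 10, 0)
print(alert.check(t0, {"temperature": 31}))                         # False: window just opened
print(alert.check(t0 + timedelta(minutes=6), {"temperature": 32}))  # True: held over 5 minutes
```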
Best Practices:
- Use cooldown periods to prevent alert fatigue
- Start with conservative thresholds, then tune
- Pause alerts during maintenance
- Name alerts descriptively
Webhooks
Webhooks receive notifications when events occur.
Supported Events:
- alert.triggered: An alert rule condition was met
- data.received: New data point was ingested
- bucket.updated: Bucket configuration changed
Example:
{
"_id": "507f1f77bcf86cd799439016",
"bucketId": "507f1f77bcf86cd799439013",
"url": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
"secret": "webhook_secret_key",
"events": ["alert.triggered"],
"status": "active",
"retryConfig": {
"maxRetries": 3,
"backoffMultiplier": 2
}
}
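Assuming exponential backoff with a 1-second base delay (the base is not specified in the example above), the retryConfig shown would produce this delivery schedule:

```python
# Delay before each retry attempt under exponential backoff. The 1-second
# base delay is an assumption; the retryConfig above does not specify it.
def retry_delays(max_retries: int, backoff_multiplier: float, base_seconds: float = 1.0) -> list[float]:
    return [base_seconds * backoff_multiplier ** attempt for attempt in range(max_retries)]

print(retry_delays(3, 2))  # [1.0, 2.0, 4.0]
```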
Webhook Payload:
{
"event": "alert.triggered",
"timestamp": "2024-01-15T10:30:00Z",
"data": {
"alertId": "507f1f77bcf86cd799439015",
"alertName": "High Temperature Alert",
"bucketId": "507f1f77bcf86cd799439013",
"dataSourceId": "507f1f77bcf86cd799439014",
"currentValue": 31.5,
"threshold": 30,
"dataPoint": {
"timestamp": "2024-01-15T10:30:00Z",
"values": {
"temperature": 31.5
}
}
}
}
Security:
- Webhook requests include an X-Webhook-Signature header
- The signature is the HMAC-SHA256 of the payload, computed with your secret
- Always verify signatures to prevent spoofing
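Verification can be sketched with the Python standard library. This assumes the signature is the hex-encoded HMAC-SHA256 digest of the raw request body; check the API reference for the exact encoding your endpoint receives:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest is a constant-time comparison, which avoids timing attacks
    return hmac.compare_digest(expected, signature)

body = b'{"event": "alert.triggered"}'
sig = hmac.new(b"webhook_secret_key", body, hashlib.sha256).hexdigest()
print(verify_signature(body, sig, "webhook_secret_key"))  # True
```

Always verify against the raw bytes of the request body; re-serializing parsed JSON can change whitespace or key order and invalidate the signature.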
API Keys
API keys authenticate external clients and devices.
Features:
- One API key per organization
- Granular permissions (read, write, bucket access)
- Can be revoked and regenerated
- Never expire (unless explicitly set)
Format:
dakkio_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
Permissions:
- data:read: Query time-series data
- data:write: Ingest time-series data
- bucket:read: Read bucket information
Example:
{
"_id": "507f1f77bcf86cd799439017",
"organizationId": "507f1f77bcf86cd799439012",
"keyPreview": "dakkio_a1b2c3d4...",
"label": "Production Sensors",
"permissions": ["data:read", "data:write", "bucket:read"],
"status": "active",
"createdAt": "2024-01-15T10:00:00Z"
}
Security:
- Keys are hashed with bcrypt in the database
- Full key is only shown once during creation
- Store keys in environment variables, never in code
- Rotate keys periodically
- Revoke compromised keys immediately
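Following the advice above, a client can load the key from an environment variable and sanity-check its prefix. The variable name DAKKIO_API_KEY is a convention chosen here, not mandated by dakkio:

```python
import os

def load_api_key(var: str = "DAKKIO_API_KEY") -> str:
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get(var, "")
    if not key.startswith("dakkio_"):
        raise RuntimeError(f"{var} is missing or does not look like a dakkio key")
    return key
```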
Data Flow
Here's how data flows through dakkio:
- A device or client sends a data point, authenticated with an API key
- The point is ingested into a data source inside a bucket
- The alert engine evaluates active alert rules against the new point
- Triggered alerts call your webhooks with the alert details
- Data remains queryable until the bucket's retention period expires
Limits and Quotas
Free Plan
- 3 buckets
- 10 data sources per bucket
- 5 alert rules per bucket
- 2 webhooks per bucket
- 30 days data retention
- 10,000 API calls/month
Pro Plan
- 25 buckets
- 100 data sources per bucket
- 50 alert rules per bucket
- 10 webhooks per bucket
- 365 days data retention
- 1,000,000 API calls/month
Enterprise Plan
- Unlimited buckets
- Unlimited data sources
- Unlimited alert rules
- Unlimited webhooks
- Custom data retention
- Unlimited API calls
Next Steps
Now that you understand the core concepts: