Rate Limits¶
Overview¶
The Wickson API implements rate limiting to ensure fair usage and optimal performance for all users. This documentation explains the rate limits, how to monitor your usage, and how to handle cases where you exceed these limits.
Standard Rate Limits¶
All API keys are subject to the following rate limits:
| Limit Type | Value | Description |
|---|---|---|
| Hourly Limit | 1,000 requests | Maximum number of requests allowed in a 1-hour window |
| Reset Period | 60 minutes | Time after which your hourly limit fully resets |
These limits apply across all endpoints and are enforced per API key, not per IP address or user account.
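Because limits are enforced per API key, a client can track its own consumption locally and avoid sending requests it knows will be rejected. Below is a minimal sketch of a client-side sliding-window counter; the 1,000-request / 1-hour values come from the table above, while the class and method names are illustrative, not part of the Wickson API.

```python
import time
from collections import deque


class RequestBudget:
    """Tracks request timestamps in a sliding window to stay under a quota."""

    def __init__(self, limit=1000, window_seconds=3600):
        self.limit = limit
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def _prune(self, now):
        # Drop timestamps that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()

    def try_acquire(self, now=None):
        """Record a request and return True if the budget allows it, else False."""
        now = time.time() if now is None else now
        self._prune(now)
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

Call `try_acquire()` before each API request; when it returns False, defer the request rather than spending a call on a guaranteed 429.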
Monitoring Your Usage¶
The Wickson API includes rate limit information in the response headers of most requests, allowing you to monitor your usage and adjust your request patterns accordingly.
Rate Limit Headers¶
Each API response includes the following headers:

- X-RateLimit-Limit: Your total request allocation per hour
- X-RateLimit-Remaining: Number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp (in seconds) when your limit will reset
You can also check your current rate limit status via the /v1/status endpoint, which returns detailed information about your account including rate limit information.
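The headers above can be read off any response to drive client-side throttling. A small sketch (the header names are those documented above; the helper function names are illustrative):

```python
def parse_rate_limit_headers(headers):
    """Extract the documented rate limit fields from a response's headers."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset": int(headers.get("X-RateLimit-Reset", 0)),
    }


def seconds_until_reset(headers, now):
    """Seconds until the current window resets; never negative."""
    return max(parse_rate_limit_headers(headers)["reset"] - now, 0)
```

With the requests library, `headers` would be `response.headers`, which supports the same `.get()` access.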
When You Exceed Rate Limits¶
If you exceed your rate limit, the API will respond with:
- An HTTP status code of 429 Too Many Requests
- A JSON error response with details about the rate limit and when it will reset
Example Rate Limit Exceeded Response¶
```json
{
  "success": false,
  "code": "rate_limit_error",
  "message": "Rate limit exceeded",
  "status_code": 429,
  "details": {
    "limit": 1000,
    "remaining": 0,
    "reset_time": 1678312568,
    "window_size": 3600
  },
  "request_id": "req_1a2b3c4d",
  "suggestion": "Please retry after the reset time"
}
```
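The details object carries everything needed to decide how long to wait before retrying. A minimal sketch (field names are taken from the example response above; the function name and buffer value are illustrative):

```python
def retry_delay_from_error(error, now, buffer_seconds=2):
    """Compute a wait time, in seconds, from a rate_limit_error payload."""
    details = error.get("details", {})
    reset_time = details.get("reset_time", now)
    # Wait at least one second, plus a small buffer past the reset
    return max(reset_time - now, 1) + buffer_seconds
```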
Best Practices for Rate Limit Management¶
To avoid hitting rate limits in your applications:
- Implement Exponential Backoff: When you receive a 429 response, use exponential backoff with jitter to recover smoothly.
- Cache Responses: Cache API responses where appropriate to reduce the number of unnecessary requests.
- Distribute Requests: Spread your requests evenly over time rather than sending them in bursts.
- Monitor Your Usage: Check the rate limit headers regularly to track your consumption.
- Batch Operations: Use the batch processing endpoints (/v1/batches) to process multiple items at once, which counts as a single request.
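The first practice above, exponential backoff with jitter, can be sketched in a few lines. This is a "full jitter" variant, where the delay is drawn uniformly between zero and the exponentially growing ceiling; the base and cap values are illustrative defaults, not values mandated by the API:

```python
import random


def backoff_delay(attempt, base=1.0, cap=60.0, rng=random):
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return rng.uniform(0, min(cap, base * 2 ** attempt))
```

Jitter matters because many clients retrying on the same schedule would otherwise hit the API in synchronized waves; randomizing the delay spreads those retries out.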
Code Example: Handling Rate Limits¶
Here's a simple example of how to handle rate limits in your code:
```python
import time

import requests


def make_api_request(url, api_key, max_retries=5):
    headers = {"X-Api-Key": api_key}
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
        if response.status_code == 429:
            # Extract the rate limit reset time from the response headers
            reset_time = int(response.headers.get("X-RateLimit-Reset", 0))
            current_time = int(time.time())
            # Sleep until the reset, with a small buffer
            sleep_duration = max(reset_time - current_time, 1) + 2
            print(f"Rate limit exceeded. Retrying in {sleep_duration} seconds.")
            time.sleep(sleep_duration)
        else:
            # Raise for any other HTTP error
            response.raise_for_status()
    raise Exception("Maximum retry attempts reached")
```
Enterprise Rate Limits¶
If your application requires higher rate limits, please contact our sales team to discuss enterprise-level API access with custom rate limits tailored to your needs.
Related Documentation¶
- Status API - Check your account's current rate limit status
- Batch Operations - Process multiple items efficiently with a single request
For any questions about rate limits or to request adjustments, please contact our support team.