nating API Reference
The current /api/v1/ API is built for account-bound integrations. It supports personal API tokens, direct-to-storage multipart uploads, a managed-upload shortcut for desktop tools, owner-scoped file metadata, and application-controlled download links.
Overview
This API is designed for real integrations, not just browser calls. Every API token belongs to a specific user account, so uploads, quotas, folders, visibility rules, and package limits all run in that user context.
- Uploads land in the authenticated user's account.
- Quota is reserved before upload completion to avoid oversubscription.
- Large files should use multipart direct-to-storage uploads.
- Download links stay application-controlled so nating can decide CDN, signed-origin, or tracked delivery.
Uploads are direct-to-storage, but downloads stay policy-driven. Your client requests a signed download link from nating and the app still decides whether the final transfer uses CDN, signed origin, Nginx, Apache, LiteSpeed, or PHP based on site configuration, package rules, and payout-verification requirements.
Authentication
Third-party tools should use personal API tokens. Browser-session calls still work for the site itself, but tokens are the intended public integration method.
Supported headers
Authorization: Bearer fyu_your_token_here
X-API-Token: fyu_your_token_here
Behavior
- Token requests do not require CSRF.
- Session requests still use CSRF for mutating actions.
- Revoking a token immediately blocks future requests.
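Either supported header form works for token calls. A minimal Node.js helper for building these headers is sketched below; the fyu_ token format comes from this reference, and the json option is just a convenience for the POST examples later in this document:

```javascript
// Build request headers for token-based calls to the nating API.
// Either Authorization: Bearer or X-API-Token is accepted; this sketch
// sends the Bearer form. Token requests do not need a CSRF header.
function apiHeaders(token, { json = false } = {}) {
  const headers = { Authorization: `Bearer ${token}` };
  if (json) headers['Content-Type'] = 'application/json';
  return headers;
}
```

Pass the result directly as the headers option of fetch.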
Tokens and Scopes
Users create API tokens in account settings. Tokens are shown once, stored hashed, and can be revoked without affecting the user password or browser session.
| Scope | Purpose |
|---|---|
| files.upload | Create sessions, sign parts, report parts, complete, and abort uploads. |
| files.read | Read file metadata and request application-controlled download links. |
Tokens are account-bound. An upload created with a user token is uploaded into that user's account and counts against that user's quota.
Idempotency and Safe Retries
Upload session creation and upload completion accept Idempotency-Key or X-Idempotency-Key. This lets tools retry safely when a network timeout happens after the server may already have processed the request.
Idempotency-Key: desktop-client-42e0f2f4-1
- If the same key is reused with the same payload, the completed response is replayed.
- If the same key is already being processed, the API returns 409.
- If the same key is reused with a different payload, the request is rejected.
Endpoint Map
| Method | Endpoint | Purpose |
|---|---|---|
| POST | /api/v1/uploads/sessions | Create a multipart upload session. |
| POST | /api/v1/uploads/managed | Create a session and return signed part URLs in one call. |
| GET | /api/v1/uploads/sessions/{id} | Inspect an upload session for resume/retry. |
| POST | /api/v1/uploads/sessions/{id}/parts/sign | Request signed upload URLs for one or more parts. |
| POST | /api/v1/uploads/sessions/{id}/parts/report | Report a successfully uploaded part and its ETag. |
| POST | /api/v1/uploads/sessions/{id}/complete | Finalize the multipart upload and create the file record. |
| POST | /api/v1/uploads/sessions/{id}/abort | Abort the multipart upload and release reservation state. |
| GET | /api/v1/files/{id} | Get owner-scoped file metadata. |
| GET | /api/v1/downloads/{id}/link | Get an application-signed download link. |
Code Samples
These examples use the managed-upload shortcut first because it is the fastest path for third-party tools. Replace the base URL, token, file IDs, and session IDs with your own values.
curl: create a managed upload
curl -X POST "https://your-site.example/api/v1/uploads/managed" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-H "Idempotency-Key: desktop-upload-001" \
-d '{
"filename": "archive.iso",
"size": 10737418240,
"mime_type": "application/octet-stream",
"folder_id": 123,
"part_numbers": [1, 2, 3],
"expires_in": 3600
}'
curl: request file metadata
curl "https://your-site.example/api/v1/files/123" \
-H "Authorization: Bearer fyu_your_token_here"
curl: request a download link
curl "https://your-site.example/api/v1/downloads/123/link" \
-H "Authorization: Bearer fyu_your_token_here"
PHP: create a managed upload
<?php
$payload = [
'filename' => 'archive.iso',
'size' => 10737418240,
'mime_type' => 'application/octet-stream',
'folder_id' => 123,
'part_numbers' => [1, 2, 3],
'expires_in' => 3600,
];
$ch = curl_init('https://your-site.example/api/v1/uploads/managed');
curl_setopt_array($ch, [
CURLOPT_POST => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_HTTPHEADER => [
'Authorization: Bearer fyu_your_token_here',
'Content-Type: application/json',
'Idempotency-Key: desktop-upload-001',
],
CURLOPT_POSTFIELDS => json_encode($payload),
]);
$response = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
curl_close($ch);
var_dump($status, json_decode($response, true));
Node.js: create a managed upload
const response = await fetch('https://your-site.example/api/v1/uploads/managed', {
method: 'POST',
headers: {
'Authorization': 'Bearer fyu_your_token_here',
'Content-Type': 'application/json',
'Idempotency-Key': 'desktop-upload-001'
},
body: JSON.stringify({
filename: 'archive.iso',
size: 10737418240,
mime_type: 'application/octet-stream',
folder_id: 123,
part_numbers: [1, 2, 3],
expires_in: 3600
})
});
const data = await response.json();
console.log(response.status, data);
End-to-end multipart example
1. Create the upload session
curl -X POST "https://your-site.example/api/v1/uploads/sessions" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-H "Idempotency-Key: upload-session-001" \
-d '{
"filename": "archive.iso",
"size": 10737418240,
"mime_type": "application/octet-stream",
"folder_id": 123
}'
2. Request a signed URL for part 1
curl -X POST "https://your-site.example/api/v1/uploads/sessions/ups_ab12cd34ef56/parts/sign" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-d '{
"part_numbers": [1],
"expires_in": 3600
}'
3. Upload that part directly to object storage
curl -X PUT "https://signed-storage-url-from-step-2" \
-H "Content-Type: application/octet-stream" \
--data-binary "@archive.part1"
4. Report the completed part back to nating
curl -X POST "https://your-site.example/api/v1/uploads/sessions/ups_ab12cd34ef56/parts/report" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-d '{
"part_number": 1,
"etag": "\"etag-returned-by-storage\"",
"part_size": 67108864
}'
5. Complete the upload after all parts are reported
curl -X POST "https://your-site.example/api/v1/uploads/sessions/ups_ab12cd34ef56/complete" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-H "Idempotency-Key: upload-complete-001" \
-d '{
"checksum_sha256": "optional-final-sha256"
}'
Managed Upload Shortcut
POST /api/v1/uploads/managed is the easiest way for desktop tools to start. It creates the session and immediately returns signed part URLs for the requested part numbers.
If the site uses B2, Wasabi, R2, or another S3-compatible bucket, the storage side still needs correct CORS. The API session and signed part URLs are only one half of the flow; the bucket must still allow the site origin and expose ETag for multipart uploads to complete reliably.
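As one hedged example, an S3-style CORS rule covering those requirements might look like the following. The exact configuration dialect varies by provider (B2, Wasabi, and R2 each have their own), and your-site.example is a placeholder for the real site origin:

```json
[
  {
    "AllowedOrigins": ["https://your-site.example"],
    "AllowedMethods": ["PUT", "GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```

The ExposeHeaders entry matters most: without it, browser clients cannot read the ETag needed to report parts.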
Example request
{
"filename": "archive.iso",
"size": 10737418240,
"mime_type": "application/octet-stream",
"folder_id": 123,
"part_numbers": [1, 2, 3],
"expires_in": 3600
}
Example response
{
"status": "ok",
"session": {
"public_id": "ups_ab12cd34ef56",
"status": "pending"
},
"part_size_bytes": 67108864,
"parts": [
{
"part_number": 1,
"method": "PUT",
"url": "https://..."
}
],
"complete_url": "/api/v1/uploads/sessions/ups_ab12cd34ef56/complete",
"report_part_url": "/api/v1/uploads/sessions/ups_ab12cd34ef56/parts/report"
}
Multipart Upload Flow
- Create a session with /uploads/sessions, or use /uploads/managed.
- Request signed part URLs for the part numbers you want to upload next.
- Upload parts directly to object storage using the returned URLs.
- Report each completed part with its part_number, etag, and byte size.
- Call /complete when all parts are uploaded and reported.
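Before requesting signed URLs, the client needs to split the file into numbered parts. A minimal Node.js sketch that derives part numbers and byte ranges from a part size such as the part_size_bytes value returned by the managed-upload endpoint; the field names here are the client's own, not part of the API contract:

```javascript
// Split a file of totalSize bytes into sequentially numbered parts of
// partSize bytes. Returns [{ part_number, offset, length }] covering the
// whole file; the final part may be shorter than partSize.
function planParts(totalSize, partSize) {
  const parts = [];
  for (let offset = 0, n = 1; offset < totalSize; offset += partSize, n++) {
    parts.push({
      part_number: n,
      offset,
      length: Math.min(partSize, totalSize - offset),
    });
  }
  return parts;
}
```

Each entry maps directly to one sign-URL request, one PUT to storage, and one part report.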
Report-part example
{
"part_number": 1,
"etag": "\"8b1a9953c4611296a827abf8c47804d7\"",
"part_size": 67108864
}
Complete example
{
"checksum_sha256": "optional-client-calculated-sha256"
}
For large integrations, keep the client on the direct-to-storage multipart path. Do not proxy 10 GB to 100 GB uploads through PHP if you care about throughput and reliability.
Resume Interrupted Uploads
The intended resume pattern is simple: persist the upload session ID locally, ask the API for the latest session state, determine which parts still need to be uploaded, then request fresh signed URLs only for the missing parts.
- Store the session public_id in the desktop app or browser state when the upload begins.
- After a crash, refresh, or network drop, call GET /api/v1/uploads/sessions/{id}.
- Inspect the returned session status, uploaded bytes, completed parts, and any part state your client already knows.
- Request fresh signed URLs for the remaining parts with /parts/sign.
- Upload the missing parts, report them, and then call /complete.
Resume check
curl "https://your-site.example/api/v1/uploads/sessions/ups_ab12cd34ef56" \
-H "Authorization: Bearer fyu_your_token_here"
Typical resumed session response
{
"status": "ok",
"session": {
"public_id": "ups_ab12cd34ef56",
"status": "uploading",
"expected_size": 10737418240,
"uploaded_bytes": 201326592,
"completed_parts": 3
}
}
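Given a resumed-session response like the one above, the client can work out which part numbers remain. The sketch below assumes parts 1 through completed_parts finished in order; if your client tracks per-part state locally, prefer that record over this count-based inference:

```javascript
// Compute the part numbers still to upload, assuming parts
// 1..completedParts finished in order. totalParts comes from the
// client's own part plan for the file.
function missingParts(completedParts, totalParts) {
  const missing = [];
  for (let n = completedParts + 1; n <= totalParts; n++) missing.push(n);
  return missing;
}
```

For the response above (completed_parts of 3 against a 160-part plan for the 10 GiB example file), the first missing entries are 4, 5, and 6, which is what the sign request below asks for.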
Request fresh URLs only for the missing parts
curl -X POST "https://your-site.example/api/v1/uploads/sessions/ups_ab12cd34ef56/parts/sign" \
-H "Authorization: Bearer fyu_your_token_here" \
-H "Content-Type: application/json" \
-d '{
"part_numbers": [4, 5, 6],
"expires_in": 3600
}'
Do not assume an old signed storage URL is still valid after a pause or restart. Always ask for fresh part URLs before resuming if the previous URLs may have expired.
Files and Downloads
File metadata
GET /api/v1/files/{id} returns owner-scoped metadata including filename, size, mime type, short ID, folder, visibility, download count, and current file status.
Download links
GET /api/v1/downloads/{id}/link returns an application-signed link. The app still decides whether the actual transfer uses CDN, signed origin, or app-controlled delivery.
{
"status": "ok",
"url": "https://your-site.example/download/123?token=...",
"expires_in": 3600,
"delivery": "cdn",
"delivery_reason": "public_object_storage_cdn"
}
- Treat the returned link as opaque and short-lived.
- Do not hardcode assumptions about the final transfer method in the client.
- If the site requires percent-based payout verification for ordinary downloads, Apache and LiteSpeed standard-file transfers can still fall back to PHP even when those handoff modes are enabled in the admin area.
- Streaming and watch-based media flows are separate from ordinary file-download delivery and should not be treated as the same completion-verification model.
Errors and Limits
- 401: authentication failed or token is invalid.
- 403: token is valid but missing the required scope, or CSRF failed for session-mode writes.
- 404: the file or upload session is not accessible to the caller.
- 409: an idempotent request with the same key is still in flight.
- 422: validation failed, upload state is inconsistent, or the provider-side step could not be completed.
API traffic is rate-limited per token, per user, and per IP. Exact limits are site-configurable and can differ by deployment.
Production Checklist
- Use API tokens instead of browser cookies in third-party tools.
- Send idempotency keys on create and complete.
- Expose ETag in bucket CORS for multipart uploads.
- Keep direct object-storage URLs out of client logic; use the download-link endpoint.
- Assume delivery can change between PHP, CDN, Nginx, Apache, and LiteSpeed depending on site policy. Build clients around the API contract, not a fixed transport path.
- Persist upload session IDs locally so desktop tools can resume large transfers.
- Test the full path with your actual B2, R2, Wasabi, or S3-compatible bucket before pushing real traffic.