Create a model POST
Create a new model.
name (string): The name of the model.
visibility ("private" | "public", default "private"): The visibility of the model.
description (string): The description of the model.
overview (string): The overview of the model.
hardware_identifier (string): The identifier of the hardware used by the model.
source_url (string): Source URL from where the model's code can be referenced.
license_url (string): License URL where the model's usage is specified.
paper_url (string): Paper URL from where research info on the model can be found.
A list of model category slugs (default []).

Request example:

curl -X POST "https://api.wrift.ai/v1/models" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "string",
        "hardware_identifier": "string"
      }'

Response Body

{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "name": "string",
  "created_at": "2019-08-24T14:15:22Z",
  "visibility": "private",
  "description": "string",
  "updated_at": "2019-08-24T14:15:22Z",
  "owner": {
    "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
    "username": "string",
    "avatar_url": "string"
  },
  "predictions_count": 0,
  "categories": [
    {
      "name": "string",
      "slug": "string"
    }
  ],
  "overview": "string",
  "latest_version": {
    "number": 0,
    "release_notes": "string",
    "created_at": "2019-08-24T14:15:22Z",
    "container_image_digest": "string",
    "schemas": {
      "prediction": {
        "input": {},
        "output": {}
      }
    }
  },
  "hardware": {
    "identifier": "string",
    "name": "string"
  },
  "source_url": "http://example.com",
  "license_url": "http://example.com",
  "paper_url": "http://example.com"
}

Validation error response:

{
  "detail": [
    {
      "loc": [
        "string"
      ],
      "msg": "string",
      "type": "string"
    }
  ]
}

List model versions GET
Retrieve a paginated list of a model's versions. The versions are sorted by creation time, newest to oldest.
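The pagination loop this implies can be sketched as a small client-side helper. Note that the page query parameter and the `results`/`next` fields in each page body are assumptions for illustration; this excerpt does not show the actual pagination schema.

```python
# Sketch of walking every page of a model's versions, newest to oldest.
# ASSUMPTIONS: a `page` query parameter and `results`/`next` response
# fields -- the real pagination scheme is not shown in this excerpt.
from typing import Callable, Iterator


def iter_versions(fetch_page: Callable[[int], dict]) -> Iterator[dict]:
    """Yield versions across pages until no further page is advertised."""
    page = 1
    while True:
        # e.g. GET /v1/models/{model_id}/versions?page=N (hypothetical path)
        body = fetch_page(page)
        yield from body["results"]
        if not body.get("next"):  # no next page advertised: stop
            return
        page += 1
```

With a `fetch_page` callback that performs the real HTTP GET, `list(iter_versions(fetch_page))` collects all versions in the sorted order the endpoint guarantees.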
Create a prediction POST
Create a prediction for the provided inputs against the latest version of a model. By default this endpoint handles requests asynchronously: it creates a prediction and returns it immediately, without waiting for a response from the model. The endpoint can be made to wait by passing an optional Prefer header with a wait time; it will then wait up to the specified time for a response from the model. If a response arrives in time it is returned; otherwise the pending prediction is returned and can be polled for updates using the get prediction endpoint.
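The asynchronous flow described above can be sketched as a client-side polling helper. Note that the `status` field, its `"pending"` value, and the shape of the get-prediction call are assumptions for illustration; the excerpt only confirms that a pending prediction can be polled via the get prediction endpoint.

```python
# Sketch of polling a pending prediction until the model responds.
# ASSUMPTIONS: predictions carry a "status" field with a "pending" value,
# and `get_prediction` wraps the get prediction endpoint -- neither schema
# detail is confirmed by this excerpt.
import time
from typing import Callable


def wait_for_prediction(
    get_prediction: Callable[[str], dict],  # e.g. GET /v1/predictions/{id}
    prediction: dict,
    timeout: float = 60.0,
    interval: float = 1.0,
) -> dict:
    """Re-fetch a prediction until it leaves the pending state or times out."""
    deadline = time.monotonic() + timeout
    while prediction.get("status") == "pending" and time.monotonic() < deadline:
        time.sleep(interval)
        prediction = get_prediction(prediction["id"])
    return prediction
```

This is only needed for the default asynchronous mode; when the optional Prefer header is sent, the server itself performs the equivalent wait before responding.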