# Model - TypeScript SDK

Model type definition.

> The TypeScript SDK and docs are currently in beta. Report issues on GitHub.

Information about an AI model available on OpenRouter.
## Example Usage
```typescript
import { Model } from "@openrouter/sdk/models";

let value: Model = {
  id: "openai/gpt-4",
  canonicalSlug: "openai/gpt-4",
  name: "GPT-4",
  created: 1692901234,
  pricing: {
    prompt: "0.00003",
    completion: "0.00006",
  },
  contextLength: 8192,
  architecture: {
    modality: "text->text",
    inputModalities: [
      "text",
    ],
    outputModalities: [
      "text",
    ],
  },
  topProvider: {
    isModerated: true,
  },
  perRequestLimits: null,
  supportedParameters: [
    "temperature",
    "top_p",
    "max_tokens",
  ],
  defaultParameters: null,
};
```
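The `pricing` fields are decimal strings rather than numbers, so estimating the cost of a request takes a small conversion step. A minimal sketch, assuming the prices are USD per token (the `estimateCost` helper and `PublicPricing` shape here are illustrative, not part of the SDK):

```typescript
// Local stand-in for the SDK's pricing shape, for illustration only.
interface PublicPricing {
  prompt: string;     // USD per prompt token, as a decimal string
  completion: string; // USD per completion token, as a decimal string
}

// Hypothetical helper: convert the string prices and multiply by token counts.
function estimateCost(
  pricing: PublicPricing,
  promptTokens: number,
  completionTokens: number,
): number {
  return (
    Number(pricing.prompt) * promptTokens +
    Number(pricing.completion) * completionTokens
  );
}

const pricing: PublicPricing = { prompt: "0.00003", completion: "0.00006" };
// 1000 prompt tokens + 500 completion tokens, approximately 0.06 USD.
console.log(estimateCost(pricing, 1000, 500));
```

Keeping prices as strings in the API avoids floating-point drift in transit; convert only at the point of calculation.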
## Fields
| Field | Type | Required | Description | Example |
|---|---|---|---|---|
| id | string | ✔️ | Unique identifier for the model | openai/gpt-4 |
| canonicalSlug | string | ✔️ | Canonical slug for the model | openai/gpt-4 |
| huggingFaceId | string | ➖ | Hugging Face model identifier, if applicable | microsoft/DialoGPT-medium |
| name | string | ✔️ | Display name of the model | GPT-4 |
| created | number | ✔️ | Unix timestamp of when the model was created | 1692901234 |
| description | string | ➖ | Description of the model | GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy. |
| pricing | models.PublicPricing | ✔️ | Pricing information for the model | {"prompt": "0.00003", "completion": "0.00006", "request": "0", "image": "0"} |
| contextLength | number | ✔️ | Maximum context length in tokens | 8192 |
| architecture | models.ModelArchitecture | ✔️ | Model architecture information | {"tokenizer": "GPT", "instruct_type": "chatml", "modality": "text->text", "input_modalities": ["text"], "output_modalities": ["text"]} |
| topProvider | models.TopProviderInfo | ✔️ | Information about the top provider for this model | {"context_length": 8192, "max_completion_tokens": 4096, "is_moderated": true} |
| perRequestLimits | models.PerRequestLimits | ✔️ | Per-request token limits | {"prompt_tokens": 1000, "completion_tokens": 1000} |
| supportedParameters | models.Parameter[] | ✔️ | List of supported parameters for this model | |
| defaultParameters | models.DefaultParameters | ✔️ | Default parameters for this model | {"temperature": 0.7, "top_p": 0.9, "frequency_penalty": 0} |
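A common use of these fields is selecting a model client-side from a fetched list, e.g. by `contextLength` and `supportedParameters`. A minimal sketch, using a local stand-in for the `Model` type (the real type lives in `@openrouter/sdk/models`; the `filterModels` helper is illustrative):

```typescript
// Simplified stand-in covering only the fields this sketch uses.
interface ModelInfo {
  id: string;
  contextLength: number;
  supportedParameters: string[];
}

// Keep models that support a given parameter and meet a minimum context length.
function filterModels(
  models: ModelInfo[],
  param: string,
  minContext: number,
): ModelInfo[] {
  return models.filter(
    (m) => m.contextLength >= minContext && m.supportedParameters.includes(param),
  );
}

const models: ModelInfo[] = [
  {
    id: "openai/gpt-4",
    contextLength: 8192,
    supportedParameters: ["temperature", "top_p", "max_tokens"],
  },
  {
    id: "example/small",
    contextLength: 4096,
    supportedParameters: ["temperature"],
  },
];

// Only the first model has top_p support and at least 8000 tokens of context.
console.log(filterModels(models, "top_p", 8000).map((m) => m.id)); // ["openai/gpt-4"]
```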