Docs: Update docs.json and formatting (#3991)

* docs: add xAI Grok and Mistral AI provider configs

* docs: add Anthropic Claude model configuration guide

- Add comprehensive documentation for configuring Anthropic Claude models with Cline
- Include API key setup, supported models list, and configuration steps
- Cover advanced features like prompt caching and rate limits
- Update navigation to include new Anthropic page in custom model configs section

* docs: add DeepSeek, Ollama, OpenAI, OpenAI Compatible pages and update Plan & Act

* docs: add Extended Thinking section to Anthropic configuration guide

* docs: update vscode language model api page

* docs: update vscode language model api docs

* Add model documentation pages and update navigation structure

- Add new documentation pages for model overviews (Claude, Gemini, OpenAI, XAI)
- Add general models overview page
- Update docs.json to include new model documentation in navigation
- Update OpenAI-compatible model documentation

* Remove Notes column from model documentation tables for consistency

* Fix table formatting in Gemini models documentation

* added 5 new model configurations and updated existing ones

* Update AWS Bedrock documentation with minimal IAM permissions

* modified:   docs/get-to-know-the-models/claude-models.mdx

* Renamed 'custom model configuration' to 'provider configuration' to avoid providers being confused with models

* Fix dollar sign rendering in model documentation

- Escape dollar signs in pricing tables to prevent MDX parsing issues
- Fixes disappearing dollar signs in gemini-models.mdx and other model docs
- Dollar signs now display correctly as literal currency symbols

* Add feature descriptions to OpenAI and XAI model docs

- Added 'Diverse Performance for Different Tasks Across Model Tiers' section to OpenAI models
- Added 'Real-time Information Access' section to XAI models
- Maintains consistency with existing Claude and Gemini documentation format
- Highlights valuable features for agentic AI coding workflows

* removing these files due to the name change in the header; they're in the new docs/provider-config folder

* added 2 new issues per model page + formatting

* Update model documentation files

* Fix: Correct paths in docs.json for provider configs

* docs: update docs.json and apply formatting

* docs: fix broken links, add OpenRouter & Requesty pages

* docs: remove 'get to know the models' section and files

* Update docs/provider-config/openai-compatible.mdx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: kevinneung <94151024+kevinneung@users.noreply.github.com>
Co-authored-by: Dennise Bartlett <bartlett.dc.1@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
francis 2025-06-02 20:50:49 -07:00 committed by GitHub
parent 064dac48f8
commit e5857cbfec
17 changed files with 582 additions and 26 deletions

View File

@@ -143,13 +143,22 @@
             ]
         },
         {
-            "group": "Custom Model Configurations",
+            "group": "Provider Configuration",
             "pages": [
-                "custom-model-configs/aws-bedrock-with-credentials-authentication",
-                "custom-model-configs/aws-bedrock-with-profile-authentication",
-                "custom-model-configs/gcp-vertex-ai",
-                "custom-model-configs/litellm-and-cline-using-codestral",
-                "custom-model-configs/vscode-language-model-api"
+                "provider-config/anthropic",
+                "provider-config/aws-bedrock-with-credentials-authentication",
+                "provider-config/aws-bedrock-with-profile-authentication",
+                "provider-config/gcp-vertex-ai",
+                "provider-config/litellm-and-cline-using-codestral",
+                "provider-config/vscode-language-model-api",
+                "provider-config/xai-grok",
+                "provider-config/mistral-ai",
+                "provider-config/deepseek",
+                "provider-config/ollama",
+                "provider-config/openai",
+                "provider-config/openai-compatible",
+                "provider-config/openrouter",
+                "provider-config/requesty"
             ]
         },
         {

View File

@@ -14,9 +14,9 @@ Certain scenarios may warrant using local models, including handling highly sens
 #### [IAM Security Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (For administrators)
-#### [AWS Bedrock setup for Legacy IAM (AWS Credentials)](/custom-model-configs/aws-bedrock-with-credentials-authentication.mdx)
+#### [AWS Bedrock setup for Legacy IAM (AWS Credentials)](/provider-config/aws-bedrock-with-credentials-authentication)
-#### [AWS Bedrock setup for SSO token (AWS Profile)](/custom-model-configs/aws-bedrock-with-profile-authentication.mdx)
+#### [AWS Bedrock setup for SSO token (AWS Profile)](/provider-config/aws-bedrock-with-profile-authentication)
 #### VPC Endpoint Setup

View File

@@ -6,10 +6,13 @@ sidebarTitle: "Plan & Act"
 Plan & Act modes represent Cline's approach to structured AI development, emphasizing thoughtful planning before implementation. This dual-mode system helps developers create more maintainable, accurate code while reducing iteration time.
 <Frame>
-	<img
-		src="https://storage.googleapis.com/cline_public_images/docs/assets/planningThenActing%20(1).gif"
-		alt="Use Plan to gather context before using Act to implement the plan"
-	/>
+	<iframe
+		style={{ width: "100%", aspectRatio: "16/9" }}
+		src="https://www.youtube.com/embed/b7o6URFPp64"
+		title="YouTube video player"
+		frameBorder="0"
+		allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+		allowFullScreen></iframe>
 </Frame>
 #### Plan Mode: Think First

View File

@@ -0,0 +1,61 @@
---
title: "Anthropic"
description: "Learn how to configure and use Anthropic Claude models with Cline. Covers API key setup, model selection, and advanced features like prompt caching."
---
**Website:** [https://www.anthropic.com/](https://www.anthropic.com/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [Anthropic Console](https://console.anthropic.com/). Create an account or sign in.
2. **Navigate to API Keys:** Go to the [API keys](https://console.anthropic.com/settings/keys) section.
3. **Create a Key:** Click "Create Key". Give your key a descriptive name (e.g., "Cline").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
### Supported Models
Cline supports the following Anthropic Claude models:
- `claude-opus-4-20250514`
- `claude-opus-4-20250514:thinking` (Extended Thinking variant)
- `claude-sonnet-4-20250514` (Recommended)
- `claude-sonnet-4-20250514:thinking` (Extended Thinking variant)
- `claude-3-7-sonnet-20250219`
- `claude-3-7-sonnet-20250219:thinking` (Extended Thinking variant)
- `claude-3-5-sonnet-20241022`
- `claude-3-5-haiku-20241022`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`
See [Anthropic's Model Documentation](https://docs.anthropic.com/en/docs/about-claude/models) for more details on each model's capabilities.
### Configuration in Cline
1. **Open Cline Settings:** Click the settings icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "Anthropic" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Anthropic API key into the "Anthropic API Key" field.
4. **Select Model:** Choose your desired Claude model from the "Model" dropdown.
5. **(Optional) Custom Base URL:** If you need to use a custom base URL for the Anthropic API, check "Use custom base URL" and enter the URL. Most users won't need to adjust this setting.
### Extended Thinking
Anthropic models offer an "Extended Thinking" feature, designed to give them enhanced reasoning capabilities for complex tasks. This feature allows the model to output its step-by-step thought process before delivering a final answer, providing transparency and enabling more thorough analysis for challenging prompts.
When extended thinking is enabled in Cline, the model generates `thinking` content blocks that detail its internal reasoning. These insights are then incorporated into its final response.
Cline users can leverage this by checking the `Enable Extended Thinking` box below the model selection menu after selecting a Claude Model from any provider.
**Key Aspects of Extended Thinking:**
- **Supported Models:** This feature is available for select models, including variants of Claude Opus 4, Claude Sonnet 4, and Claude Sonnet 3.7. The specific models listed in the "Supported Models" section above with the `:thinking` suffix are pre-configured in Cline to utilize this.
- **Summarized Thinking (Claude 4):** For Claude 4 models, the API returns a summary of the full thinking process to balance insight with efficiency and prevent misuse. You are billed for the full thinking tokens, not just the summary.
- **Streaming:** Extended thinking responses, including the `thinking` blocks, can be streamed.
- **Tool Use & Prompt Caching:** Extended thinking interacts with tool use (requiring thinking blocks to be passed back) and prompt caching (with specific behaviors around cache invalidation and context).
For comprehensive details on how extended thinking works, including API examples, interaction with tool use, prompt caching, and pricing, please refer to the [official Anthropic documentation on Extended Thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking).
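To see what the `:thinking` model variants do under the hood, here is a minimal sketch of calling Anthropic's Messages API directly with extended thinking enabled, assuming a valid key is exported as `ANTHROPIC_API_KEY`. The response interleaves `thinking` and `text` content blocks:
```bash
# Minimal sketch: extended thinking via the Messages API.
# budget_tokens must be less than max_tokens.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 16000,
    "thinking": {"type": "enabled", "budget_tokens": 8000},
    "messages": [{"role": "user", "content": "Plan a refactor of a 2,000-line module into smaller files."}]
  }'
```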
### Tips and Notes
- **Prompt Caching:** Claude 3 models support [prompt caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching), which can significantly reduce costs and latency for repeated prompts.
- **Context Window:** Claude models have large context windows (200,000 tokens), allowing you to include a significant amount of code and context in your prompts.
- **Pricing:** Refer to the [Anthropic Pricing](https://www.anthropic.com/pricing) page for the latest pricing information.
- **Rate Limits:** Anthropic has strict rate limits based on [usage tiers](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). If you're repeatedly hitting rate limits, consider contacting Anthropic sales or accessing Claude through a different provider like [OpenRouter](/provider-config/openrouter) or [Requesty](/provider-config/requesty).

View File

@@ -25,12 +25,41 @@ description: "Learn how to set up AWS Bedrock with Cline using credentials authe
#### 1.2 Attach the Required Policies
-1. **Attach the Managed Policy:**
-   - Attach the **`AmazonBedrockFullAccess`** managed policy to your user/role.\
-     [View AmazonBedrockFullAccess Policy Details](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html)
-2. **Confirm Additional Permissions:**
-   - Ensure your policy includes permissions for model invocation (e.g., `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream`), model listing, and AWS Marketplace actions (like `aws-marketplace:Subscribe`).
-   - _Enterprise Tip:_ Apply least-privilege practices by scoping resource ARNs and using [Service Control Policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) to restrict access where necessary.
To ensure Cline can interact with AWS Bedrock, your IAM user or role needs specific permissions. While the `AmazonBedrockFullAccess` managed policy provides comprehensive access, for a more restricted and secure setup adhering to the principle of least privilege, the following minimal permissions are sufficient for Cline's core model invocation functionality:
- `bedrock:InvokeModel`
- `bedrock:InvokeModelWithResponseStream`
You can create a custom IAM policy with these permissions and attach it to your IAM user or role.
**Option 1: Minimal Permissions (Recommended for Production & Least Privilege)**
1. In the AWS IAM console, create a new policy.
2. Use the JSON editor to add the following policy document:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
"Resource": "*" // For enhanced security, scope this to specific model ARNs if possible.
}
]
}
```
3. Name the policy (e.g., `ClineBedrockInvokeAccess`) and attach it to your IAM user or role.
**Option 2: Using a Managed Policy (Simpler Initial Setup)**
- Alternatively, you can attach the AWS managed policy **`AmazonBedrockFullAccess`**. This grants broader permissions, including the ability to list models, manage provisioning, and other Bedrock features. This might be simpler for initial setup or if you require these wider capabilities.
[View AmazonBedrockFullAccess Policy Details](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html)
**Important Considerations:**
- **Model Listing in Cline:** The minimal permissions (`bedrock:InvokeModel`, `bedrock:InvokeModelWithResponseStream`) are sufficient for Cline to _use_ a model if you specify the model ID directly in Cline's settings. If you rely on Cline to dynamically list available Bedrock models, you might need additional permissions like `bedrock:ListFoundationModels`.
- **AWS Marketplace Subscriptions:** For third-party models (e.g., Anthropic Claude), ensure you have active AWS Marketplace subscriptions. This is typically managed in the AWS Bedrock console under "Model access" and might require `aws-marketplace:Subscribe` permissions if not already handled.
- _Enterprise Tip:_ Always apply least-privilege practices. Where possible, scope resource ARNs in your IAM policies to specific models or regions. Utilize [Service Control Policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) for overarching governance in AWS Organizations.
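As a hypothetical illustration of that tip, the statement below scopes invocation rights to Anthropic Claude foundation models in a single region; adjust the region and model ID pattern to match what you actually use:
```json
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
			"Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-*"
		}
	]
}
```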
---

View File

@@ -0,0 +1,33 @@
---
title: "DeepSeek"
description: "Learn how to configure and use DeepSeek models like deepseek-chat and deepseek-reasoner with Cline."
---
Cline supports accessing models through the DeepSeek API, including `deepseek-chat` and `deepseek-reasoner`.
**Website:** [https://platform.deepseek.com/](https://platform.deepseek.com/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [DeepSeek Platform](https://platform.deepseek.com/). Create an account or sign in.
2. **Navigate to API Keys:** Find your API keys in the [API keys](https://platform.deepseek.com/api_keys) section of the platform.
3. **Create a Key:** Click "Create new API key". Give your key a descriptive name (e.g., "Cline").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
### Supported Models
Cline supports the following DeepSeek models:
- `deepseek-v3-0324` (served by the DeepSeek API as `deepseek-chat`; recommended for coding tasks)
- `deepseek-r1` (served by the DeepSeek API as `deepseek-reasoner`; recommended for reasoning tasks)
### Configuration in Cline
1. **Open Cline Settings:** Click the ⚙️ icon in the Cline panel.
2. **Select Provider:** Choose "DeepSeek" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your DeepSeek API key into the "DeepSeek API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
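If you want to confirm your key works before configuring Cline, a quick sketch assuming the standard DeepSeek endpoint and the `deepseek-chat` model:
```bash
# Minimal key check against the DeepSeek chat completions endpoint
curl https://api.deepseek.com/chat/completions \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello"}]}'
```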
### Tips and Notes
- **Pricing:** Refer to the [DeepSeek Pricing](https://api-docs.deepseek.com/quick_start/pricing/) page for details on model costs.

View File

@@ -0,0 +1,53 @@
---
title: "Mistral"
description: "Learn how to configure and use Mistral AI models, including Codestral, with Cline. Covers API key setup and model selection."
---
Cline supports accessing models through the Mistral AI API, including both standard Mistral models and the code-specialized Codestral model.
**Website:** [https://mistral.ai/](https://mistral.ai/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [Mistral Platform](https://console.mistral.ai/). Create an account or sign in. You may need to go through a verification process.
2. **Create an API Key:**
- [La Plateforme API Key](https://console.mistral.ai/api-keys/) and/or
- [Codestral API Key](https://console.mistral.ai/codestral)
### Supported Models
Cline supports the following Mistral models:
- `pixtral-large-2411`
- `ministral-3b-2410`
- `ministral-8b-2410`
- `mistral-small-latest`
- `mistral-medium-latest`
- `mistral-small-2501`
- `pixtral-12b-2409`
- `open-mistral-nemo-2407`
- `open-codestral-mamba`
- `codestral-2501`
- `devstral-small-2505`
**Note:** Model availability and specifications may change.
Refer to the [Mistral AI documentation](https://docs.mistral.ai/api/) and [Mistral Model Overview](https://docs.mistral.ai/getting-started/models/models_overview/) for the most current information.
### Configuration in Cline
1. **Open Cline Settings:** Click the settings icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "Mistral" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Mistral API key into the "Mistral API Key" field if you're using a standard `mistral` model. If you intend to use `codestral-latest`, see the "Using Codestral" section below.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
### Using Codestral
[Codestral](https://docs.mistral.ai/capabilities/code_generation/) is a model specifically designed for code generation and interaction.
For Codestral, you can use different endpoints (Default: codestral.mistral.ai).
If using the La Plateforme API Key for Codestral, change the **Codestral Base Url** to: `https://api.mistral.ai`
To use Codestral with Cline:
1. **Select "Mistral" as the API Provider in Cline Settings.**
2. **Select a Codestral Model** (e.g., `codestral-latest`) from the "Model" dropdown.
3. **Enter your Codestral API Key** (from `codestral.mistral.ai`) or your La Plateforme API Key (from `api.mistral.ai`) into the appropriate API key field in Cline.
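The difference between the two endpoints is easiest to see in a raw request. A minimal sketch, assuming a Codestral key from `codestral.mistral.ai` for the first call and a La Plateforme key from `api.mistral.ai` for the second:
```bash
# Codestral endpoint (default in Cline), using a Codestral API key
curl https://codestral.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $CODESTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "codestral-latest", "messages": [{"role": "user", "content": "Write a binary search in Go"}]}'

# La Plateforme endpoint, using a standard Mistral API key
curl https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "codestral-latest", "messages": [{"role": "user", "content": "Write a binary search in Go"}]}'
```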

View File

@@ -0,0 +1,78 @@
---
title: "Ollama"
---
Cline supports running models locally using Ollama. This approach offers privacy, offline access, and potentially reduced costs, but it requires some initial setup and a sufficiently powerful computer. Given the present state of consumer hardware, Ollama is not recommended for use with Cline, as performance will likely be poor on average hardware configurations.
**Website:** [https://ollama.com/](https://ollama.com/)
### Setting up Ollama
1. **Download and Install Ollama:**
Obtain the Ollama installer for your operating system from the [Ollama website](https://ollama.com/) and follow their installation guide. Ensure Ollama is running. You can typically start it with:
```bash
ollama serve
```
2. **Download a Model:**
Ollama supports a wide variety of models. A list of available models can be found on the [Ollama model library](https://ollama.com/library). Some models recommended for coding tasks include:
- `codellama:7b-code` (a good, smaller starting point)
- `codellama:13b-code` (offers better quality, larger size)
- `codellama:34b-code` (provides even higher quality, very large)
- `qwen2.5-coder:32b`
- `mistral:7b-instruct` (a solid general-purpose model)
- `deepseek-coder:6.7b-base` (effective for coding)
- `llama3:8b-instruct-q5_1` (suitable for general tasks)
To download a model, open your terminal and execute:
```bash
ollama pull <model_name>
```
For instance:
```bash
ollama pull qwen2.5-coder:32b
```
3. **Configure the Model's Context Window:**
By default, Ollama models often use a context window of 2048 tokens, which can be insufficient for many Cline requests. A minimum of 12,000 tokens is advisable for decent results, with 32,000 tokens being ideal. To adjust this, you'll modify the model's parameters and save it as a new version.
First, load the model (using `qwen2.5-coder:32b` as an example):
```bash
ollama run qwen2.5-coder:32b
```
Once the model is loaded within the Ollama interactive session, set the context size parameter:
```
/set parameter num_ctx 32768
```
Then, save this configured model with a new name:
```
/save your_custom_model_name
```
(Replace `your_custom_model_name` with a name of your choice. A script-friendly alternative using a Modelfile is sketched after these configuration steps.)
4. **Configure Cline:**
- Open the Cline sidebar (usually indicated by the Cline icon).
- Click the settings gear icon (⚙️).
- Select "ollama" as the API Provider.
- Enter the Model name you saved in the previous step (e.g., `your_custom_model_name`).
- (Optional) Adjust the base URL if Ollama is running on a different machine or port. The default is `http://localhost:11434`.
- (Optional) Configure the Model context size in Cline's Advanced settings. This helps Cline manage its context window effectively with your customized Ollama model.
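If you prefer not to use the interactive session, the same context-window customization can be done with a Modelfile. A minimal sketch, assuming `qwen2.5-coder:32b` has already been pulled:
```bash
# Define a model variant with a larger context window
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
EOF

# Build the variant and confirm it shows up locally
ollama create your_custom_model_name -f Modelfile
ollama ls
```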
### Tips and Notes
- **Resource Demands:** Running large language models locally can be demanding on system resources. Ensure your computer meets the requirements for your chosen model.
- **Model Choice:** Experiment with various models to discover which best fits your specific tasks and preferences.
- **Offline Capability:** After downloading a model, you can use Cline with that model even without an internet connection.
- **Token Usage Tracking:** Cline tracks token usage for models accessed via Ollama, allowing you to monitor consumption.
- **Ollama's Own Documentation:** For more detailed information, consult the official [Ollama documentation](https://ollama.com/docs).

View File

@@ -0,0 +1,72 @@
---
title: "OpenAI Compatible"
description: "Learn how to configure Cline with various AI model providers that offer OpenAI-compatible APIs."
---
Cline supports a wide range of AI model providers that offer APIs compatible with the OpenAI API standard. This allows you to use models from providers _other than_ OpenAI, while still utilizing a familiar API interface. This includes providers such as:
- **Local models** running through tools like Ollama and LM Studio (which are covered in their respective sections).
- **Cloud providers** like Perplexity, Together AI, Anyscale, and many others.
- **Any other provider** that offers an OpenAI-compatible API endpoint.
This document focuses on setting up providers _other than_ the official OpenAI API (which has its own [dedicated configuration page](/provider-config/openai)).
### General Configuration
The key to using an OpenAI-compatible provider with Cline is to configure these main settings:
1. **Base URL:** This is the API endpoint specific to the provider. It will _not_ be `https://api.openai.com/v1` (that URL is for the official OpenAI API).
2. **API Key:** This is the secret key you obtain from your chosen provider.
3. **Model ID:** This is the specific name or identifier for the model you wish to use.
You'll find these settings in the Cline settings panel (click the ⚙️ icon):
- **API Provider:** Select "OpenAI Compatible".
- **Base URL:** Enter the base URL provided by your chosen provider. **This is a crucial step.**
- **API Key:** Enter your API key from the provider.
- **Model:** Choose or enter the model ID.
- **Model Configuration:** This section allows you to customize advanced parameters for the model, such as:
- Max Output Tokens
- Context Window size
- Image Support capabilities
- Computer Use (e.g., for models with tool/function calling)
- Input Price (per token/million tokens)
- Output Price (per token/million tokens)
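If Cline reports connection problems, it can help to exercise the same three settings with a raw request. A minimal sketch; the base URL, key variable, and model ID below are placeholders for whatever your provider documents:
```bash
# Placeholder values: substitute your provider's real base URL, API key, and model ID
BASE_URL="https://api.example-provider.com/v1"
curl "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $PROVIDER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "provider-model-id", "messages": [{"role": "user", "content": "Hello"}]}'
```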
### Supported Models (for OpenAI Native Endpoint)
While the "OpenAI Compatible" provider type allows connecting to various endpoints, if you are connecting directly to the official OpenAI API (or an endpoint that mirrors it exactly), Cline recognizes the following model IDs based on the `openAiNativeModels` definition in its source code:
- `o3-mini`
- `o3-mini-high`
- `o3-mini-low`
- `o1`
- `o1-preview`
- `o1-mini`
- `gpt-4.5-preview`
- `gpt-4o`
- `gpt-4o-mini`
**Note:** If you are using a different OpenAI-compatible provider (such as Together AI, Anyscale, etc.), the available model IDs will differ. Always refer to your specific provider's documentation for their supported model names and any unique configuration details.
### v0 (Vercel SDK) in Cline
- For developers working with v0, their [AI SDK documentation](https://vercel.com/docs/v0/cline) provides valuable insights and examples for integrating various models, many of which are OpenAI-compatible. This can be a helpful resource for understanding how to structure calls and manage configurations when using Cline with services deployed on or integrated with Vercel.
- v0 can be used in Cline with the OpenAI Compatible provider.
#### Quickstart
1. With the OpenAI Compatible provider selected, set the Base URL to `https://api.v0.dev/v1`.
2. Paste in your v0 API key.
3. Set the Model ID to `v0-1.0-md`.
4. Click "Verify" to confirm the connection.
### Troubleshooting
- **"Invalid API Key":** Double-check that you've entered the API key correctly and that it's for the correct provider.
- **"Model Not Found":** Ensure you're using a valid model ID for your chosen provider and that it's available at the specified Base URL.
- **Connection Errors:** Verify the Base URL is correct, that your provider's API is accessible from your machine, and that there are no firewall or network issues.
- **Unexpected Results:** If you're getting unexpected outputs, try a different model or double-check all configuration parameters.
By using an OpenAI-compatible provider, you can leverage the flexibility of Cline with a wider array of AI models. Remember to always consult your provider's documentation for the most accurate and up-to-date information.

View File

@@ -0,0 +1,48 @@
---
title: "OpenAI"
description: "Learn how to configure and use official OpenAI models with Cline."
---
Cline supports accessing models directly through the official OpenAI API.
**Website:** [https://openai.com/](https://openai.com/)
### Getting an API Key
1. **Sign Up/Sign In:** Visit the [OpenAI Platform](https://platform.openai.com/). You'll need to create an account or sign in if you already have one.
2. **Navigate to API Keys:** Once logged in, go to the [API keys section](https://platform.openai.com/api-keys) of your account.
3. **Create a Key:** Click on "Create new secret key". It's good practice to give your key a descriptive name (e.g., "Cline API Key").
4. **Copy the Key:** **Crucial:** Copy the generated API key immediately. For security reasons, OpenAI will not show it to you again. Store this key in a safe and secure location.
### Supported Models
Cline is compatible with a variety of OpenAI models, including but not limited to:
- `o3`
- `o3-mini` (medium reasoning effort)
- `o4-mini`
- `o3-mini-high` (high reasoning effort)
- `o3-mini-low` (low reasoning effort)
- `o1`
- `o1-preview`
- `o1-mini`
- `gpt-4.5-preview`
- `gpt-4o`
- `gpt-4o-mini`
- `gpt-4.1`
- `gpt-4.1-mini`
For the most current list of available models and their capabilities, please refer to the official [OpenAI Models documentation](https://platform.openai.com/docs/models).
### Configuration in Cline
1. **Open Cline Settings:** Click the settings gear icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "OpenAI" from the "API Provider" dropdown menu.
3. **Enter API Key:** Paste your OpenAI API key into the "OpenAI API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown list.
5. **(Optional) Base URL:** If you need to use a proxy or a custom base URL for the OpenAI API, you can enter it here. Most users will not need to change this from the default.
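To quickly confirm the key is valid before configuring Cline, you can list the models your account can access. A minimal sketch, assuming the key is exported as `OPENAI_API_KEY`:
```bash
# Lists the models available to your account; a 200 response means the key works
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```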
### Tips and Notes
- **Pricing:** Be sure to review the [OpenAI Pricing page](https://openai.com/pricing) for detailed information on the costs associated with different models.
- **Azure OpenAI Service:** If you want to use the Azure OpenAI service, check for separate Azure-specific documentation; otherwise you may be able to configure it as an OpenAI-compatible endpoint, if Cline supports that for your setup.

View File

@@ -0,0 +1,40 @@
---
title: "OpenRouter"
description: "Learn how to use OpenRouter with Cline to access a wide variety of language models through a single API."
---
OpenRouter is an AI platform that provides access to a wide variety of language models from different providers, all through a single API. This can simplify setup and allow you to easily experiment with different models.
**Website:** [https://openrouter.ai/](https://openrouter.ai/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [OpenRouter website](https://openrouter.ai/). Sign in with your Google or GitHub account.
2. **Get an API Key:** Go to the [keys page](https://openrouter.ai/keys). You should see an API key listed. If not, create a new key.
3. **Copy the Key:** Copy the API key.
### Supported Models
OpenRouter supports a large and growing number of models. Cline automatically fetches the list of available models. Refer to the [OpenRouter Models page](https://openrouter.ai/models) for the complete and up-to-date list.
### Configuration in Cline
1. **Open Cline Settings:** Click the settings icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "OpenRouter" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your OpenRouter API key into the "OpenRouter API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
5. **(Optional) Custom Base URL:** If you need to use a custom base URL for the OpenRouter API, check "Use custom base URL" and enter the URL. Most users can leave this blank.
### Supported Transforms
OpenRouter provides an [optional "middle-out" message transform](https://openrouter.ai/docs/features/message-transforms) to help with prompts that exceed the maximum context size of a model. You can enable it by checking the "Compress prompts and message chains to the context size" box.
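Under the hood, that checkbox corresponds to OpenRouter's `transforms` request field. A minimal sketch of the equivalent raw request, assuming a key in `OPENROUTER_API_KEY`:
```bash
# Request with the middle-out transform enabled for oversized prompts
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "transforms": ["middle-out"],
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```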
### Tips and Notes
- **Model Selection:** OpenRouter offers a wide range of models. Experiment to find the best one for your needs.
- **Pricing:** OpenRouter charges based on the underlying model's pricing. See the [OpenRouter Models page](https://openrouter.ai/models) for details.
- **Prompt Caching:**
- OpenRouter passes caching requests to underlying models that support it. Check the [OpenRouter Models page](https://openrouter.ai/models) to see which models offer caching.
- For most models, caching should activate automatically if supported by the model itself (similar to how Requesty works).
- **Exception for Gemini Models via OpenRouter:** Due to potential response delays sometimes observed with Google's caching mechanism when accessed via OpenRouter, a manual activation step is required _specifically for Gemini models_.
- If using a **Gemini model** via OpenRouter, you **must manually check** the "Enable Prompt Caching" box in the provider settings to activate caching for that model. This checkbox serves as a temporary workaround. For non-Gemini models on OpenRouter, this checkbox is not necessary for caching.

View File

@@ -0,0 +1,38 @@
---
title: "Requesty"
description: "Learn how to use Requesty with Cline to access and optimize over 150 large language models."
---
Cline supports accessing models through the [Requesty](https://www.requesty.ai/) AI platform. Requesty provides an easy and optimized API for interacting with 150+ large language models (LLMs).
**Website:** [https://www.requesty.ai/](https://www.requesty.ai/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [Requesty website](https://www.requesty.ai/) and create an account or sign in.
2. **Get API Key:** You can get an API key from the [API Management](https://app.requesty.ai/manage-api) section of your Requesty dashboard.
### Supported Models
Requesty provides access to a wide range of models. Cline will automatically fetch the latest list of available models. You can see the full list of available models on the [Model List](https://app.requesty.ai/router/list) page.
### Configuration in Cline
1. **Open Cline Settings:** Click the settings icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "Requesty" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Requesty API key into the "Requesty API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
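Because Requesty exposes an OpenAI-compatible router, you can sanity-check a key with a raw request. This is a sketch only: the endpoint URL and the `openai/gpt-4o` model ID below are assumptions, so confirm both in your Requesty dashboard:
```bash
# Assumed endpoint and model ID; verify them in the Requesty dashboard
curl https://router.requesty.ai/v1/chat/completions \
  -H "Authorization: Bearer $REQUESTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```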
### Tips and Notes
- **Optimizations:** Requesty offers a range of in-flight cost optimizations to lower your costs.
- **Unified and simplified billing:** Unrestricted access to all providers and models, automatic balance top-ups, and more via a single [API key](https://app.requesty.ai/manage-api).
- **Cost tracking:** Track cost per model, coding language, changed file, and more via the [Cost dashboard](https://app.requesty.ai/cost-management) or the [Requesty VS Code extension](https://marketplace.visualstudio.com/items?itemName=Requesty.requesty).
- **Stats and logs:** See your [coding stats dashboard](https://app.requesty.ai/usage-stats) or go through your [LLM interaction logs](https://app.requesty.ai/logs).
- **Fallback policies:** Keep your LLM working for you with fallback policies when providers are down.
- **Prompt Caching:** Some providers support prompt caching. [Search models with caching](https://app.requesty.ai/router/list).
### Relevant resources
- [Requesty Youtube channel](https://www.youtube.com/@requestyAI)
- [Requesty Discord](https://requesty.ai/discord)

View File

@@ -15,17 +15,24 @@ Cline offers _experimental_ support for the [VS Code Language Model API](https:/
- **VS Code:** The Language Model API is accessible via VS Code (it is not currently supported by Cursor).
- **A Language Model Provider Extension:** An extension that furnishes a language model is required. Examples include:
  - **GitHub Copilot:** With a Copilot subscription, the GitHub Copilot and GitHub Copilot Chat extensions can serve as model providers.
  - **Alternative Extensions:** Explore the VS Code Marketplace for extensions mentioning "Language Model API" or "lm". Other experimental options may be available.
### Configuration Steps
-1. **Access Cline Settings:** Click the gear icon (⚙️) located in the Cline panel.
-2. **Choose Provider:** Select "VS Code LM API" from the "API Provider" dropdown menu.
-3. **Select Model:** The "Language Model" dropdown will (eventually) populate with available models. The naming convention is `vendor/family`. For instance, if Copilot is active, you might encounter options such as:
-   - `copilot - claude-3.5-sonnet`
-   - `copilot - o3-mini`
-   - `copilot - o1-ga`
-   - `copilot - gemini-2.0-flash`
1. **Ensure a Copilot account is active and the extensions are installed:** A user logged into either the GitHub Copilot or GitHub Copilot Chat extension should be able to gain access via Cline.
2. **Access Cline Settings:** Click the gear icon (⚙️) located in the Cline panel.
3. **Choose Provider:** Select "VS Code LM API" from the "API Provider" dropdown menu.
4. **Select Model:** If the Copilot extension(s) are installed and the user is logged into their Copilot account, the "Language Model" dropdown will populate with available models after a short time. The naming convention is `vendor/family`. For instance, if Copilot is active, you might encounter options such as:
   - `copilot - gpt-3.5-turbo`
   - `copilot - gpt-4o-mini`
   - `copilot - gpt-4`
   - `copilot - gpt-4-turbo`
   - `copilot - gpt-4o`
   - `copilot - claude-3.5-sonnet` **NOTE:** this model does not work.
   - `copilot - gemini-2.0-flash`
   - `copilot - gpt-4.1`
For best results with the VS Code LM API provider, we suggest using the OpenAI models (GPT-3.5, GPT-4, GPT-4.1, GPT-4o, etc.).
### Current Limitations

View File

@@ -0,0 +1,85 @@
---
title: "xAI (Grok)"
description: "Learn how to configure and use xAI's Grok models with Cline, including API key setup, supported models, and reasoning capabilities."
---
xAI is the company behind Grok, a large language model known for its conversational abilities and large context window. Grok models are designed to provide helpful, informative, and contextually relevant responses.
**Website:** [https://x.ai/](https://x.ai/)
### Getting an API Key
1. **Sign Up/Sign In:** Go to the [xAI Console](https://console.x.ai/). Create an account or sign in.
2. **Navigate to API Keys:** Go to the API keys section in your dashboard.
3. **Create a Key:** Click to create a new API key. Give your key a descriptive name (e.g., "Cline").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
### Supported Models
Cline supports the following xAI Grok models:
#### Grok-3 Models
- `grok-3-beta` (Default) - xAI's Grok-3 beta model with 131K context window
- `grok-3-fast-beta` - xAI's Grok-3 fast beta model with 131K context window
- `grok-3-mini-beta` - xAI's Grok-3 mini beta model with 131K context window
- `grok-3-mini-fast-beta` - xAI's Grok-3 mini fast beta model with 131K context window
#### Grok-2 Models
- `grok-2-latest` - xAI's Grok-2 model - latest version with 131K context window
- `grok-2` - xAI's Grok-2 model with 131K context window
- `grok-2-1212` - xAI's Grok-2 model (version 1212) with 131K context window
#### Grok Vision Models
- `grok-2-vision-latest` - xAI's Grok-2 Vision model - latest version with image support and 32K context window
- `grok-2-vision` - xAI's Grok-2 Vision model with image support and 32K context window
- `grok-2-vision-1212` - xAI's Grok-2 Vision model (version 1212) with image support and 32K context window
- `grok-vision-beta` - xAI's Grok Vision Beta model with image support and 8K context window
#### Legacy Models
- `grok-beta` - xAI's Grok Beta model (legacy) with 131K context window
### Configuration in Cline
1. **Open Cline Settings:** Click the settings icon (⚙️) in the Cline panel.
2. **Select Provider:** Choose "xAI" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your xAI API key into the "xAI API Key" field.
4. **Select Model:** Choose your desired Grok model from the "Model" dropdown.
### Reasoning Capabilities
Grok 3 Mini models feature specialized reasoning capabilities, allowing them to "think before responding" - particularly useful for complex problem-solving tasks.
#### Reasoning-Enabled Models
Reasoning is only supported by:
- `grok-3-mini-beta`
- `grok-3-mini-fast-beta`
The Grok 3 models `grok-3-beta` and `grok-3-fast-beta` do not support reasoning.
#### Controlling Reasoning Effort
When using reasoning-enabled models, you can control how hard the model thinks with the `reasoning_effort` parameter:
- `low`: Minimal thinking time, using fewer tokens for quick responses
- `high`: Maximum thinking time, leveraging more tokens for complex problems
Choose `low` for simple queries that should complete quickly, and `high` for harder problems where response latency is less important.
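When calling these models directly, the parameter goes in the request body. A minimal sketch against xAI's OpenAI-compatible endpoint, assuming a key in `XAI_API_KEY`; the completion's message includes a `reasoning_content` field with the trace described below:
```bash
# reasoning_effort is only accepted by the grok-3-mini models
curl https://api.x.ai/v1/chat/completions \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-3-mini-beta",
    "reasoning_effort": "high",
    "messages": [{"role": "user", "content": "How many prime numbers lie between 100 and 150?"}]
  }'
```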
#### Key Features
- **Step-by-Step Problem Solving**: The model thinks through problems methodically before delivering an answer
- **Math & Quantitative Strength**: Excels at numerical challenges and logic puzzles
- **Reasoning Trace Access**: The model's thinking process is available via the `reasoning_content` field in the response completion object
### Tips and Notes
- **Context Window:** Most Grok models feature large context windows (up to 131K tokens), allowing you to include substantial amounts of code and context in your prompts.
- **Vision Capabilities:** Select vision-enabled models (`grok-2-vision-latest`, `grok-2-vision`, etc.) when you need to process or analyze images.
- **Pricing:** Pricing varies by model, with input costs ranging from $0.3 to $5.0 per million tokens and output costs from $0.5 to $25.0 per million tokens. Refer to the xAI documentation for the most current pricing information.
- **Performance Tradeoffs:** "Fast" variants typically offer quicker response times but may have higher costs, while "mini" variants are more economical but may have reduced capabilities.