
Thenlper: GTE-Large

thenlper/gte-large

Created Nov 18, 2025 · 512 context
$0.01/M input tokens · $0/M output tokens

The gte-large embedding model converts English sentences, paragraphs, and moderate-length documents into a 1024-dimensional dense vector space, delivering high-quality semantic embeddings optimized for information retrieval, semantic textual similarity, reranking, and clustering tasks. Trained via multi-stage contrastive learning on a large, domain-diverse relevance corpus, it offers strong performance across general-purpose embedding use cases.
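As a minimal sketch of how such embeddings are typically used for semantic textual similarity, the snippet below computes cosine similarity between two 1024-dimensional vectors. The vectors here are random placeholders standing in for embeddings returned by the model; only the dimensionality (1024) comes from the description above.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Placeholder 1024-dimensional vectors; in practice these would be
# gte-large embeddings of a query and a candidate document.
emb_query = np.random.default_rng(0).standard_normal(1024)
emb_doc = np.random.default_rng(1).standard_normal(1024)

print(cosine_similarity(emb_query, emb_doc))
```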


Providers for GTE-Large

OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.

Performance for GTE-Large

Compare different providers across OpenRouter

Apps using GTE-Large

Top public apps this month

Recent activity on GTE-Large

Total usage per day on OpenRouter

Prompt: 2.08M · Completion: 0 · Reasoning: 0

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.

Uptime stats for GTE-Large

Uptime stats for GTE-Large across all providers

Sample code and API for GTE-Large

OpenRouter normalizes requests and responses across providers for you.

OpenRouter provides an OpenAI-compatible embeddings API that you can call directly, or using the OpenAI SDK.

In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
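Below is a minimal sketch of such a call using the OpenAI Python SDK pointed at OpenRouter's OpenAI-compatible endpoint. The API key, site URL, and app title are placeholders; `extra_headers` is the SDK's standard way to attach per-request headers, used here for the optional OpenRouter attribution headers.

```python
from openai import OpenAI

# Point the OpenAI SDK at OpenRouter's OpenAI-compatible API.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",  # placeholder: your OpenRouter API key
)

response = client.embeddings.create(
    model="thenlper/gte-large",
    input=[
        "What is the capital of France?",
        "Paris is the capital of France.",
    ],
    # Optional OpenRouter headers; setting them lets your app appear
    # on the OpenRouter leaderboards.
    extra_headers={
        "HTTP-Referer": "https://example.com",  # placeholder: your site URL
        "X-Title": "My Example App",            # placeholder: your app name
    },
)

for item in response.data:
    print(len(item.embedding))  # 1024-dimensional vectors for gte-large
```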

Using third-party SDKs

For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.