
NeverSleep

Browse models from NeverSleep

6 models


  • NeverSleep: Lumimaid v0.2 70B

    Lumimaid v0.2 70B is a finetune of Llama 3.1 70B with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 131K context
  • NeverSleep: Lumimaid v0.2 8B
    38.9M tokens

    Lumimaid v0.2 8B is a finetune of Llama 3.1 8B with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 33K context · $0.09/M input tokens · $0.60/M output tokens
  • NeverSleep: Llama 3 Lumimaid 70B

    The NeverSleep team is back with a Llama 3 70B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary. To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 8K context
  • NeverSleep: Llama 3 Lumimaid 8B

    The NeverSleep team is back with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary. To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 25K context
  • Noromaid Mixtral 8x7B Instruct

    This model was trained for 8h (v1) + 8h (v2) + 12h (v3) on customized, modified datasets focused on RP and uncensoring. It uses a modified version of the Alpaca prompting format (already used in LimaRP), which should be at the same conversational level as ChatLM or Llama2-Chat, without adding any additional special tokens.

    by neversleep · 8K context
  • Noromaid 20B
    4.01M tokens

    A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge. #merge #uncensored

    by neversleep · 4K context · $1/M input tokens · $1.75/M output tokens
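Two of the models above list per-million-token prices. As a minimal sketch of how those rates translate into a per-request cost, the snippet below hard-codes the prices shown on this page; the function name and the example token counts are illustrative, not part of any OpenRouter SDK.

```python
# Estimate the USD cost of one request against the priced models above,
# using the per-million-token rates listed on this page.
# (model name -> (USD per 1M input tokens, USD per 1M output tokens))
LISTED_PRICES = {
    "Lumimaid v0.2 8B": (0.09, 0.60),
    "Noromaid 20B": (1.00, 1.75),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    price_in, price_out = LISTED_PRICES[model]
    return (prompt_tokens * price_in + completion_tokens * price_out) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(estimate_cost("Noromaid 20B", 2000, 500))      # 0.002875 USD
print(estimate_cost("Lumimaid v0.2 8B", 2000, 500))  # 0.00048 USD
```

Input and output tokens are billed at different rates, so a long generated reply can dominate the cost even when the prompt is larger.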