Foundational Models

Clear Ideas integrates advanced language models (often called foundational models) from providers including OpenAI, Anthropic, Google, Cohere, and xAI. This document describes how these models are used within the Clear Ideas platform, along with considerations for choosing the appropriate model.

Foundational models are AI models trained on large datasets that can generate text based on a user-provided prompt. They are designed to perform a wide range of tasks, such as:

  • Factual Q&A
  • Summarization
  • Creative writing
  • Technical/code-related queries
  • Data analysis

Clear Ideas offers multiple models, each with different strengths, performance trade-offs, and associated costs.

Supported Models

Intelligent
Selects the most appropriate model for the task based on task type, reasoning level required, desired response length, target audience, creativity needs, and time sensitivity. Optimizes for cost efficiency when capabilities are equal.
GPT-5
OpenAI's next-generation flagship model with enhanced capabilities. GPT-5 offers superior reasoning, creativity, and factual accuracy across a wide range of complex tasks. Best for demanding professional use cases requiring the most advanced AI capabilities available.
GPT-5 mini
GPT-5 mini provides the advanced capabilities of GPT-5 in a more efficient and cost-effective package. Ideal for tasks requiring high-quality responses with excellent speed and affordability.
GPT-5 nano
GPT-5 nano is the most cost-effective version of GPT-5, designed for high-volume tasks requiring the latest AI capabilities at the lowest cost per token.
xAI Grok 4
xAI's flagship model, offering strong all-around performance in natural language, math, and reasoning. A versatile generalist suited to a wide range of tasks.
xAI Grok 3 Mini
Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It's ideal for reasoning-heavy tasks that don't demand extensive domain knowledge, and shines in math-specific and quantitative use cases, such as solving challenging puzzles or math problems.
xAI Grok 3 Mini with High Reasoning Effort
Beta
The same Grok 3 Mini model configured with a higher reasoning-effort setting: it spends more time thinking before responding, trading additional latency for improved accuracy on the most challenging reasoning, math, and quantitative problems.
Gemini 2.5 Flash
This is the latest, most efficient, and fastest Gemini model, designed for high-volume, low-latency applications. Currently in preview, it offers bleeding-edge speed for real-time tasks.
Gemini 2.5 Pro
Gemini 2.5 Pro is a powerful and versatile AI model designed for a wide range of tasks. It excels at complex reasoning, multimodal understanding, and tasks that require a very large context window.
Claude 3.5 Haiku
Claude 3.5 Haiku is the fastest model in Anthropic's Claude 3.5 family, designed for speed and efficiency when handling simpler tasks.
Claude 4.0 Sonnet
Claude Sonnet 4 is part of Anthropic's newest Claude 4 model family, designed as a smart and efficient solution for everyday tasks. It balances strong performance with accessibility, making it well-suited for general conversations, creative writing, analysis, and coding assistance.
Claude 4.0 Sonnet with Enhanced Reasoning
The same Claude Sonnet 4 model with extended thinking enabled, allowing it to work through complex problems step by step before answering. It trades additional response time for stronger performance on demanding reasoning, analysis, and coding tasks.
Command A
Cohere's Command A is a large language model designed to deliver thorough, accurate, and contextually relevant responses across a wide range of queries, while remaining efficient enough to provide quick, reliable assistance.
GPT-OSS 20B
Beta
GPT-OSS 20B is the smaller of OpenAI's open-weight GPT-OSS models, built on a Mixture-of-Experts (MoE) architecture with 20 billion parameters and 32 experts.
GPT-OSS 120B
GPT-OSS 120B is the larger, flagship member of OpenAI's open-weight GPT-OSS family, built on a Mixture-of-Experts (MoE) architecture with 120 billion parameters and 128 experts.

Reasoning vs. Conversational Models

Clear Ideas offers models with a primary emphasis on reasoning or conversational capabilities. While there can be overlap, understanding each focus can help match the model to the task:

  • Reasoning Models: Designed to handle more complex analytical tasks. These models are used when the goal is to perform in-depth analysis, solve technical problems, or produce detailed explanations. Reasoning-focused models often require more computational resources and can be more time-intensive, but they excel at:
    • Complex problem-solving
    • Logical or mathematical tasks
    • Technical discussions and code assistance
    • Data-driven analysis or research
  • Conversational Models: Tailored primarily for natural, back-and-forth interactions. They are useful for human-like dialogue, making them well suited for:
    • Q&A sessions or short clarifications
    • Casual engagement or interactive scenarios
    • Iterative brainstorming discussions
    • High-level summaries or succinct explanations

When choosing between a reasoning or conversational model, consider the depth and complexity of your query. If the task involves sophisticated analysis, a reasoning model can be more appropriate. If you need an approachable dialogue or iterative exchange, a conversational model may be preferable.

Intelligent Model Selection

Clear Ideas provides an Intelligent auto-selector that evaluates multiple factors—such as task complexity, desired detail level, audience type, and cost constraints—to pick the most suitable model. If multiple models appear equally suitable, the selection process favors the lower-cost option. This approach reduces the need for manual model comparison.
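The cost tie-breaking behavior described above can be sketched as follows. This is a hypothetical illustration, not the platform's actual selection logic; the capability scores and per-token costs below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    reasoning: int       # 1 (light) .. 3 (strong) - illustrative score
    speed: int           # 1 (slow) .. 3 (fast) - illustrative score
    cost_per_mtok: float # illustrative cost, not real pricing

CANDIDATES = [
    Model("GPT-5",      reasoning=3, speed=1, cost_per_mtok=10.0),
    Model("GPT-5 mini", reasoning=2, speed=2, cost_per_mtok=2.0),
    Model("GPT-5 nano", reasoning=1, speed=3, cost_per_mtok=0.4),
]

def select(required_reasoning: int, time_sensitive: bool) -> Model:
    # Keep only models that meet the reasoning bar for the task.
    capable = [m for m in CANDIDATES if m.reasoning >= required_reasoning]
    if time_sensitive:
        # Under time pressure, restrict to the fastest capable tier.
        fastest = max(m.speed for m in capable)
        capable = [m for m in capable if m.speed == fastest]
    # Among equally suitable models, favor the lowest cost.
    return min(capable, key=lambda m: m.cost_per_mtok)

print(select(required_reasoning=1, time_sensitive=True).name)   # GPT-5 nano
print(select(required_reasoning=3, time_sensitive=False).name)  # GPT-5
```

The key design point is the final `min` over cost: capability filters run first, so cost only decides among models that are already good enough for the task.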

Model Selection Considerations

Task Complexity

  • Lower complexity (summarization, short Q&A): lighter, lower-cost models (for example, a mini, nano, Flash, or Haiku tier) are generally sufficient.
  • Higher complexity (in-depth analysis, technical tasks): favor flagship or reasoning-focused models.

Response Length

  • Brief responses: lightweight, fast models are usually the most cost-effective choice.
  • Extended output: models with stronger reasoning and larger context windows handle long, detailed responses better.

Audience

  • General audience: conversational models produce approachable, natural-sounding answers.
  • Expert audience: reasoning-focused models provide the technical depth and precision specialists expect.

Time Sensitivity

  • High time sensitivity: choose low-latency models; reasoning models can be too time-intensive.
  • Lower time sensitivity: slower reasoning models become viable when depth matters more than speed.

Creativity

  • Lower creativity needed: efficient, factually oriented models are sufficient.
  • Higher creativity needed: flagship models with strong creative-writing capabilities are a better fit.
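As an illustration only, the considerations above could be combined into a simple decision helper. The mapping and tier names below are hypothetical placeholders, not official Clear Ideas guidance.

```python
def suggest_family(complexity: str, time_sensitive: bool, creative: bool) -> str:
    """Map task traits ("low"/"high" complexity) to a broad model family."""
    if complexity == "high":
        # In-depth analysis and technical tasks favor reasoning-focused models.
        return "reasoning-focused flagship"
    if time_sensitive:
        # Real-time, lower-complexity work suits low-latency models.
        return "fast lightweight tier"
    if creative:
        # Creative writing benefits from a flagship conversational model.
        return "creative flagship"
    return "efficient general-purpose tier"

print(suggest_family("low", time_sensitive=True, creative=False))
# fast lightweight tier
```

Note the ordering: complexity dominates, since a time-sensitive but genuinely hard task still needs a capable model; speed and creativity only differentiate among lower-complexity tasks.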