
Atlas Core (Models)

The foundation of the MX4 AI platform.

Last updated on February 2, 2026

Atlas Core represents our family of sovereign large language models, trained specifically for high-performance Arabic and English reasoning.

Available Models

mx4-atlas-core

Our flagship model. Best for complex reasoning, content generation, and Arabic dialect understanding.

Standard Context · General Purpose

mx4-atlas-lite

Optimized for speed and cost. Ideal for classification, extraction, and real-time chat.

Extended Context · Low Latency

Model Specifications

Model sizes, context windows, and throughput depend on hardware and deployment configuration. We share detailed sizing and benchmark guidance during pilots or under NDA.

Performance Benchmarks (Arabic Tasks)

Arabic benchmark results are available upon request. We review evaluation outcomes during pilots and align model selection to your target tasks and dialects.

Use Case Recommendations

Choose Atlas Core for:

  • Complex reasoning and analysis tasks
  • Long-form content generation
  • Multi-dialect Arabic conversation
  • Nuanced translation tasks
  • Fine-tuning on specialized domains

Choose Atlas Lite for:

  • Real-time chat applications
  • Text classification & extraction
  • High-volume API usage (cost sensitive)
  • Edge deployment / low-latency requirements
  • Structured output tasks (JSON, schemas)
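The guidance above amounts to a simple routing rule: latency- and cost-sensitive tasks go to Atlas Lite, reasoning-heavy and dialect-heavy tasks go to Atlas Core. A minimal sketch of that rule is below; the task labels and the `pick_model` helper are illustrative, not part of the MX4 API.

```python
# Hypothetical task labels, mapped per the recommendations above.
LITE_TASKS = {"realtime_chat", "classification", "extraction", "structured_output"}
CORE_TASKS = {"reasoning", "long_form", "dialect_conversation", "translation"}

def pick_model(task: str) -> str:
    """Pick an Atlas model name for a task label (sketch, not an official SDK helper)."""
    if task in LITE_TASKS:
        return "mx4-atlas-lite"
    if task in CORE_TASKS:
        return "mx4-atlas-core"
    # Default to the flagship model when the task is unknown.
    return "mx4-atlas-core"

print(pick_model("classification"))  # mx4-atlas-lite
print(pick_model("translation"))     # mx4-atlas-core
```

In production you would typically make this routing decision per request, falling back to Atlas Core when quality matters more than cost.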

Pricing Guide

Atlas Core

70B parameter model

$0.50 / 1M input tokens

$1.50 / 1M output tokens

Atlas Lite

13B parameter model

$0.10 / 1M input tokens

$0.30 / 1M output tokens
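At these rates, the cost of a request is a straight per-token calculation. The sketch below shows the arithmetic using the published prices; the `estimate_cost` helper is illustrative, not part of the MX4 SDK.

```python
# Published per-1M-token rates (USD) from the pricing guide above.
PRICING = {
    "mx4-atlas-core": {"input": 0.50, "output": 1.50},
    "mx4-atlas-lite": {"input": 0.10, "output": 0.30},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for one request (sketch, excludes any volume discounts)."""
    rates = PRICING[model]
    return (input_tokens / 1_000_000) * rates["input"] + \
           (output_tokens / 1_000_000) * rates["output"]

# 2M input + 500K output tokens on Atlas Core:
# 2 x $0.50 + 0.5 x $1.50 = $1.75
print(estimate_cost("mx4-atlas-core", 2_000_000, 500_000))
```

The same workload on Atlas Lite would cost 2 x $0.10 + 0.5 x $0.30 = $0.35, which is why high-volume classification and extraction jobs are steered to Lite above.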

Quick Start

quick_start.py

```python
import os

import openai

client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1",
)

# Use Atlas Core for complex tasks
response = client.chat.completions.create(
    model="mx4-atlas-core",  # Flagship model
    messages=[
        # "Explain the concept of artificial intelligence to me in simple terms"
        {"role": "user", "content": "اشرح لي مفهوم الذكاء الاصطناعي بطريقة مبسطة"}
    ],
    temperature=0.7,
)
print("Core:", response.choices[0].message.content)

# Use Atlas Lite for fast responses
response = client.chat.completions.create(
    model="mx4-atlas-lite",  # Lightweight model
    messages=[
        # "Classify the sentiment: This product is excellent!"
        {"role": "user", "content": "صنف المشاعر: هذا المنتج ممتاز!"}
    ],
    temperature=0.3,
)
print("Lite:", response.choices[0].message.content)
```