
Atlas Core (Models)

The foundation of the MX4 AI platform.

Last updated on February 2, 2026

Atlas Core represents our family of sovereign large language models, trained specifically for high-performance Arabic and English reasoning.

Available Models

mx4-atlas-core

Our flagship model. Best for complex reasoning, content generation, and Arabic dialect understanding.

8K Context · General Purpose

mx4-atlas-lite

Optimized for speed and cost. Ideal for classification, extraction, and real-time chat.

16K Context · Low Latency

Model Specifications

Model            Parameters   Context         Latency      Throughput
mx4-atlas-core   70B          8,192 tokens    150-250 ms   150 req/min
mx4-atlas-lite   13B          16,384 tokens   50-100 ms    500 req/min

Performance Benchmarks (Arabic Tasks)

Benchmark                             Atlas Core   Atlas Lite   GPT-4
Arabic QA (ARCD)                      92.3%        88.7%        91.8%
Sentiment Analysis                    89.2%        85.6%        87.4%
Named Entity Recognition              94.1%        91.3%        93.2%
Dialect Understanding (5 dialects)    90.8%        87.2%        85.3%

Use Case Recommendations

Choose Atlas Core for:

  • Complex reasoning and analysis tasks
  • Long-form content generation
  • Multi-dialect Arabic conversation
  • Nuanced translation tasks
  • Fine-tuning on specialized domains

Choose Atlas Lite for:

  • Real-time chat applications
  • Text classification & extraction
  • High-volume API usage (cost sensitive)
  • Edge deployment / low-latency requirements
  • Structured output tasks (JSON, schemas); see the sketch after this list
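For structured output with mx4-atlas-lite, a common pattern is to ask for JSON in the prompt and parse the reply. The sketch below is illustrative only: the entity schema, the JSON-only instruction, and the filename are assumptions for this example, not a documented contract of the API.

structured_output.py (Python)
import json
import os

import openai

client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Ask Atlas Lite to return a fixed JSON shape (illustrative schema, not an official one).
prompt = (
    "Extract the product and the sentiment from the following review. "
    'Reply with JSON only, in the form {"product": string, "sentiment": "positive" | "negative" | "neutral"}.\n'
    "Review: هذا الهاتف ممتاز والبطارية تدوم طويلاً"  # "This phone is excellent and the battery lasts a long time"
)

response = client.chat.completions.create(
    model="mx4-atlas-lite",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0  # deterministic output helps keep the JSON well-formed
)

# Parse the model's reply; in production you would also validate the fields.
result = json.loads(response.choices[0].message.content)
print(result["product"], result["sentiment"])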

Pricing Guide

Atlas Core

70B parameter model

$0.50 / 1M input tokens

$1.50 / 1M output tokens

Atlas Lite

13B parameter model

$0.10 / 1M input tokens

$0.30 / 1M output tokens
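As a worked example of how these rates combine, the sketch below estimates a monthly bill for a hypothetical workload (2,000 requests per day with roughly 1,500 input and 500 output tokens each); the traffic numbers are assumptions chosen for illustration.

cost_estimate.py (Python)
# Hypothetical workload used for illustration only.
REQUESTS_PER_DAY = 2_000
INPUT_TOKENS = 1_500
OUTPUT_TOKENS = 500
DAYS = 30

# Published per-1M-token rates in USD.
RATES = {
    "mx4-atlas-core": {"input": 0.50, "output": 1.50},
    "mx4-atlas-lite": {"input": 0.10, "output": 0.30},
}

for model, rate in RATES.items():
    monthly_input = REQUESTS_PER_DAY * DAYS * INPUT_TOKENS / 1_000_000    # millions of input tokens
    monthly_output = REQUESTS_PER_DAY * DAYS * OUTPUT_TOKENS / 1_000_000  # millions of output tokens
    cost = monthly_input * rate["input"] + monthly_output * rate["output"]
    print(f"{model}: ${cost:,.2f} / month")

Under these assumptions the workload comes to about $90 per month on Atlas Core and about $18 per month on Atlas Lite.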

Quick Start

quick_start.py (Python)
import os

import openai

# The MX4 API is served through an OpenAI-compatible endpoint, so the standard client
# works once it is pointed at the MX4 base URL.
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Use Atlas Core for complex tasks
response = client.chat.completions.create(
    model="mx4-atlas-core",  # Flagship model
    messages=[
        # "Explain the concept of artificial intelligence to me in a simple way"
        {"role": "user", "content": "اشرح لي مفهوم الذكاء الاصطناعي بطريقة مبسطة"}
    ],
    temperature=0.7
)
print("Core:", response.choices[0].message.content)

# Use Atlas Lite for fast responses
response = client.chat.completions.create(
    model="mx4-atlas-lite",  # Lightweight model
    messages=[
        # "Classify the sentiment: This product is excellent!"
        {"role": "user", "content": "صنف المشاعر: هذا المنتج ممتاز!"}
    ],
    temperature=0.3
)
print("Lite:", response.choices[0].message.content)
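For real-time chat, you will usually want to stream tokens as they are generated. The sketch below uses the OpenAI client's standard streaming interface and assumes the MX4 endpoint supports OpenAI-style streaming; the filename and the prompt are illustrative.

streaming_chat.py (Python)
import os

import openai

client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Stream tokens from Atlas Lite for a responsive chat UI
# (assumes the endpoint supports OpenAI-style streaming).
stream = client.chat.completions.create(
    model="mx4-atlas-lite",
    messages=[
        # "What is the capital of the UAE?"
        {"role": "user", "content": "ما هي عاصمة الإمارات؟"}
    ],
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()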