Atlas Core (Models)
The foundation of the MX4 AI platform.
Last updated on February 2, 2026
Atlas Core represents our family of sovereign large language models, trained specifically for high-performance Arabic and English reasoning.
Available Models
mx4-atlas-core
Our flagship model. Best for complex reasoning, content generation, and Arabic dialect understanding.
8k Context · General Purpose
mx4-atlas-lite
Optimized for speed and cost. Ideal for classification, extraction, and real-time chat.
16k Context · Low Latency
Model Specifications
| Model | Parameters | Context | Latency | Throughput |
|---|---|---|---|---|
| mx4-atlas-core | 70B | 8,192 tokens | 150-250ms | 150 req/min |
| mx4-atlas-lite | 13B | 16,384 tokens | 50-100ms | 500 req/min |
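The context windows above can be checked before sending a request. The helper below is an illustrative sketch (not part of any SDK); only the model names and context sizes come from the specification table.

```python
# Context window sizes from the specification table above.
SPECS = {
    "mx4-atlas-core": {"params": "70B", "context": 8_192},
    "mx4-atlas-lite": {"params": "13B", "context": 16_384},
}

def fits_context(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """Check that the prompt plus the reserved output budget fits in the model's context window."""
    return prompt_tokens + max_output_tokens <= SPECS[model]["context"]

print(fits_context("mx4-atlas-core", 7_000, 2_000))  # False: 9,000 exceeds 8,192
print(fits_context("mx4-atlas-lite", 7_000, 2_000))  # True: 9,000 fits in 16,384
```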
Performance Benchmarks (Arabic Tasks)
| Benchmark | Atlas Core | Atlas Lite | GPT-4 |
|---|---|---|---|
| Arabic QA (ARCD) | 92.3% | 88.7% | 91.8% |
| Sentiment Analysis | 89.2% | 85.6% | 87.4% |
| Named Entity Recognition | 94.1% | 91.3% | 93.2% |
| Dialect Understanding (5 dialects) | 90.8% | 87.2% | 85.3% |
Use Case Recommendations
Choose Atlas Core for:
- Complex reasoning and analysis tasks
- Long-form content generation
- Multi-dialect Arabic conversation
- Nuanced translation tasks
- Fine-tuning on specialized domains
Choose Atlas Lite for:
- Real-time chat applications
- Text classification & extraction
- High-volume API usage (cost sensitive)
- Edge deployment / low-latency requirements
- Structured output tasks (JSON, schemas)
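The recommendations above can be expressed as a simple routing table. This is a sketch under our own task labels (only the model names come from this page); adapt the categories to your application.

```python
# Hypothetical task-to-model routing based on the recommendations above.
ROUTES = {
    "reasoning": "mx4-atlas-core",
    "long_form_generation": "mx4-atlas-core",
    "dialect_chat": "mx4-atlas-core",
    "classification": "mx4-atlas-lite",
    "extraction": "mx4-atlas-lite",
    "structured_output": "mx4-atlas-lite",
}

def pick_model(task: str) -> str:
    """Return the recommended model for a task, defaulting to the flagship."""
    return ROUTES.get(task, "mx4-atlas-core")

print(pick_model("classification"))  # mx4-atlas-lite
```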
Pricing Guide
Atlas Core
70B parameter model
$0.50 / 1M input tokens
$1.50 / 1M output tokens
Atlas Lite
13B parameter model
$0.10 / 1M input tokens
$0.30 / 1M output tokens
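As a quick sanity check on the rates above, a small cost estimator can be sketched like this (the helper name is ours; the per-token prices are from the pricing guide):

```python
# USD per 1M tokens (input, output), from the pricing guide above.
PRICES = {
    "mx4-atlas-core": (0.50, 1.50),
    "mx4-atlas-lite": (0.10, 0.30),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 10k input tokens + 2k output tokens
print(estimate_cost("mx4-atlas-core", 10_000, 2_000))  # 0.008
print(estimate_cost("mx4-atlas-lite", 10_000, 2_000))  # 0.0016
```

For the same 10k-in / 2k-out request, Atlas Lite costs one fifth as much as Atlas Core, which is why it is the recommended choice for high-volume, cost-sensitive workloads.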
Quick Start
quick_start.py

```python
import os

import openai

# The MX4 API is OpenAI-compatible: point the client at the MX4 base URL.
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Use Atlas Core for complex tasks
response = client.chat.completions.create(
    model="mx4-atlas-core",  # Flagship model
    messages=[
        # "Explain the concept of artificial intelligence to me in a simple way"
        {"role": "user", "content": "اشرح لي مفهوم الذكاء الاصطناعي بطريقة مبسطة"}
    ],
    temperature=0.7
)
print("Core:", response.choices[0].message.content)

# Use Atlas Lite for fast responses
response = client.chat.completions.create(
    model="mx4-atlas-lite",  # Lightweight model
    messages=[
        # "Classify the sentiment: This product is excellent!"
        {"role": "user", "content": "صنف المشاعر: هذا المنتج ممتاز!"}
    ],
    temperature=0.3
)
print("Lite:", response.choices[0].message.content)
```