Documentation

SDKs & Libraries

Official and community libraries for MX4 Atlas.

Last updated on February 2, 2026

Overview

MX4 Atlas is API-compatible with OpenAI, so you can use existing OpenAI SDKs and libraries. We provide additional utilities and examples optimized for Arabic language processing and MENA region requirements.

Key Benefits

  • Drop-in replacement for OpenAI SDKs
  • Optimized for Arabic text processing
  • Built-in regional data sovereignty controls
  • Enhanced error handling and retry logic

SDK Comparison

| Language | Package | Installation |
|---|---|---|
| Python | openai | pip install openai |
| Node.js/TypeScript | openai | npm install openai |
| Go | github.com/openai/openai-go | go get -u github.com/openai/openai-go |
| Rust | async-openai | cargo add async-openai |
| Java | openai-java | Gradle dependency |

Python SDK

Installation

bash
pip install openai

Use the standard OpenAI Python library. No special MX4 library required.

Basic Usage

basic_usage.py (python)
import openai
import os

# Configure for MX4 Atlas
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Chat completion
response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "مرحباً، كيف يمكنني مساعدتك؟"}
    ],
    temperature=0.7,
    max_tokens=150
)

print(response.choices[0].message.content)

Arabic Text Processing

arabic_processing.py (python)
# MX4 Atlas is optimized for Arabic
arabic_text = """
المملكة العربية السعودية هي دولة عربية تقع في الشرق الأوسط.
عاصمتها الرياض وأكبر مدنها جدة.
"""

response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {"role": "user", "content": f"لخص النص التالي بالعربية:\n\n{arabic_text}"}
    ],
    temperature=0.3
)

print("Summary:", response.choices[0].message.content)

Node.js / TypeScript SDK

Installation

bash
npm install openai

TypeScript Example

atlas-client.ts (typescript)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.MX4_API_KEY,
  baseURL: 'https://api.mx4.ai/v1',
});

interface ChatRequest {
  messages: OpenAI.ChatCompletionMessageParam[];
  model?: string;
  temperature?: number;
}

export async function createChatCompletion(request: ChatRequest) {
  try {
    const response = await client.chat.completions.create({
      model: request.model || 'mx4-atlas-core',
      messages: request.messages,
      temperature: request.temperature || 0.7,
    });

    return response.choices[0].message.content;
  } catch (error) {
    console.error('MX4 Atlas API error:', error);
    throw error;
  }
}

// Usage
const result = await createChatCompletion({
  messages: [
    { role: 'user', content: 'Explain quantum computing in Arabic' }
  ]
});

LangChain Integration

Installation

bash
pip install langchain langchain-openai

RAG with Arabic Documents

rag_arabic.py (python)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Initialize MX4 Atlas models
llm = ChatOpenAI(
    model="mx4-atlas-core",
    openai_api_key="mx4-sk-...",
    openai_api_base="https://api.mx4.ai/v1"
)

embeddings = OpenAIEmbeddings(
    openai_api_key="mx4-sk-...",
    openai_api_base="https://api.mx4.ai/v1"
)

# Load Arabic documents
loader = TextLoader("arabic_documents.txt", encoding="utf-8")
documents = loader.load()

# Split documents
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
docs = text_splitter.split_documents(documents)

# Create vector store
vectorstore = FAISS.from_documents(docs, embeddings)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True
)

# Query in Arabic
query = "ما هي أهمية الذكاء الاصطناعي في المنطقة العربية؟"
result = qa_chain.invoke({"query": query})
print(result["result"])

Fine-tuning Workflow

fine_tuning.py (python)
import time
import openai

client = openai.OpenAI(
    api_key="mx4-sk-...",
    base_url="https://api.mx4.ai/v1"
)

# Upload training data
with open("arabic_training_data.jsonl", "rb") as f:
    training_file = client.files.create(
        file=f,
        purpose="fine-tune"
    )

# Start fine-tuning job
fine_tune = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="mx4-atlas-core",
    hyperparameters={
        "n_epochs": 3,
        "batch_size": 8
    }
)

print(f"Fine-tuning job started: {fine_tune.id}")

# Monitor progress
while True:
    job = client.fine_tuning.jobs.retrieve(fine_tune.id)
    print(f"Status: {job.status}")
    if job.status == "succeeded":
        print(f"Fine-tuned model: {job.fine_tuned_model}")
        break
    if job.status in ("failed", "cancelled"):
        raise RuntimeError(f"Fine-tuning ended with status: {job.status}")
    time.sleep(60)

LlamaIndex Integration

Installation

bash
pip install llama-index llama-index-llms-openai

Arabic Knowledge Base

llamaindex_arabic.py (python)
from llama_index.llms.openai import OpenAI
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings

# Configure MX4 Atlas
Settings.llm = OpenAI(
    model="mx4-atlas-core",
    api_key="mx4-sk-...",
    api_base="https://api.mx4.ai/v1"
)

# Load Arabic documents
documents = SimpleDirectoryReader("arabic_docs").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query engine
query_engine = index.as_query_engine()
response = query_engine.query("ما هي التطورات في الذكاء الاصطناعي بالعربية؟")
print(response)

Migrating from OpenAI

MX4 Atlas is a drop-in replacement for OpenAI. Just change the API key and base URL:

OpenAI Configuration

python
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://api.openai.com/v1"
)

MX4 Atlas Configuration

python
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

Model Mapping

| OpenAI model | MX4 Atlas model |
|---|---|
| gpt-4-turbo | mx4-atlas-core |
| gpt-3.5-turbo | mx4-atlas-lite |
| text-embedding-3-large | mx4-embeddings |
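For migration scripts, the mapping above can be expressed as a lookup. A minimal sketch (`MODEL_MAP` and `map_model` are illustrative names, not part of any SDK):

```python
# Model mapping from the table above; unknown ids pass through unchanged,
# so requests that already use MX4 model names are unaffected.
MODEL_MAP = {
    "gpt-4-turbo": "mx4-atlas-core",
    "gpt-3.5-turbo": "mx4-atlas-lite",
    "text-embedding-3-large": "mx4-embeddings",
}

def map_model(openai_model: str) -> str:
    """Translate an OpenAI model id to its MX4 Atlas equivalent."""
    return MODEL_MAP.get(openai_model, openai_model)
```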

Go SDK

Installation

bash
go get -u github.com/openai/openai-go

Basic Usage

go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey(os.Getenv("MX4_API_KEY")),
		option.WithBaseURL("https://api.mx4.ai/v1"),
	)

	completion, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
		Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("مرحباً، كيف يمكنك مساعدتي؟"),
		}),
		Model: openai.F("mx4-atlas-core"),
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(completion.Choices[0].Message.Content)
}

Rust SDK

Installation

bash
cargo add async-openai tokio

Async Example

rust
use async_openai::{
    config::OpenAIConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key(env::var("MX4_API_KEY")?)
        .with_api_base("https://api.mx4.ai/v1");
    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("mx4-atlas-core")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("مرحباً، كيف يمكنك مساعدتي؟")
            .build()?
            .into()])
        .build()?;

    let response = client.chat().create(request).await?;
    println!(
        "{}",
        response.choices[0].message.content.as_deref().unwrap_or("")
    );
    Ok(())
}

Java SDK

Gradle Setup

gradle
dependencies {
    implementation 'com.openai:openai-java:0.12.0'
}

Example

java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatCompletion;
import com.openai.models.ChatCompletionCreateParams;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .apiKey(System.getenv("MX4_API_KEY"))
    .baseUrl("https://api.mx4.ai/v1")
    .build();

ChatCompletion completion = client.chat().completions().create(
    ChatCompletionCreateParams.builder()
        .model("mx4-atlas-core")
        .addUserMessage("مرحباً، كيف يمكنك مساعدتي؟")
        .build()
);

System.out.println(completion.choices().get(0).message().content().orElse(""));

Advanced Features

Streaming Responses

python
stream = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[...],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Token Counting

python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "المملكة العربية السعودية"
tokens = encoding.encode(text)

print(f"Tokens: {len(tokens)}")
# Arabic text: ~30% fewer tokens

Batch Processing

python
import json

# Each line of the batch input file follows the Batch API request format
batch_input = [
    {"custom_id": "1", "method": "POST", "url": "/v1/chat/completions",
     "body": {"model": "mx4-atlas-core", "messages": [...]}},
    {"custom_id": "2", "method": "POST", "url": "/v1/chat/completions",
     "body": {"model": "mx4-atlas-core", "messages": [...]}},
]

# Upload the requests as a JSONL file, then create the batch
with open("batch_input.jsonl", "w", encoding="utf-8") as f:
    for request in batch_input:
        f.write(json.dumps(request, ensure_ascii=False) + "\n")

with open("batch_input.jsonl", "rb") as f:
    batch_file = client.files.create(file=f, purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h"
)

Vision API

python
response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "..."}},
                {"type": "text", "text": "اوصف الصورة بالعربية"}
            ]
        }
    ]
)

Best Practices

Error Handling with Retries

Implement exponential backoff for rate limits and transient errors. Never retry on 4xx errors except rate limits.
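A minimal sketch of that policy (the helper names and the retryable status set are illustrative; the OpenAI SDKs also expose a built-in `max_retries` option):

```python
import random
import time

# 429 plus transient 5xx errors are safe to retry; other 4xx are not.
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def should_retry(status_code: int) -> bool:
    return status_code in RETRYABLE_STATUSES

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential delay: base * 2^attempt, capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def call_with_retries(fn, max_attempts: int = 5):
    """Call fn(), retrying when it raises an error carrying a retryable
    `status_code` attribute (as the OpenAI SDK's APIStatusError does)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status is None or not should_retry(status) or attempt == max_attempts - 1:
                raise
            # Jitter avoids synchronized retry storms across many clients
            time.sleep(backoff_delay(attempt) * (0.5 + random.random() / 2))
```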

UTF-8 Text Encoding

Always use UTF-8 encoding for Arabic text. Verify encoding at input/output boundaries to prevent mojibake.
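A quick round-trip check at those boundaries can catch encoding bugs early; a sketch (the function name is illustrative):

```python
def utf8_roundtrip_ok(text: str) -> bool:
    """True if the string survives a UTF-8 encode/decode round trip."""
    return text.encode("utf-8").decode("utf-8") == text

arabic = "المملكة العربية السعودية"

# Mojibake typically appears when UTF-8 bytes are decoded with another codec:
garbled = arabic.encode("utf-8").decode("latin-1")
```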

Token Optimization

Arabic text typically tokenizes to roughly 30% fewer tokens than with general-purpose tokenizers. Pre-calculate token counts for budget planning and cost control.
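The pre-calculation itself is simple arithmetic; a sketch with a purely hypothetical price (check your plan for actual rates):

```python
def estimate_cost(n_tokens: int, price_per_1k_tokens: float) -> float:
    """Estimated request cost from a token count and a per-1K-token price."""
    return n_tokens / 1000 * price_per_1k_tokens

# 1,500 tokens at a hypothetical $0.002 per 1K tokens
cost = estimate_cost(1500, price_per_1k_tokens=0.002)
```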

API Key Security

Never hardcode API keys. Use environment variables or secret management systems. Rotate keys regularly.

Request Timeout Management

Set appropriate timeouts based on request type. Streaming requests need longer timeouts than chat completions.
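With the Python SDK, timeouts can be set per client or overridden per request via `timeout` and `with_options`; a sketch where the specific values are examples, not recommendations:

```python
import os

import httpx
import openai

# Fail fast on connection problems, but allow slow generation to finish
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1",
    timeout=httpx.Timeout(60.0, connect=5.0),  # 60 s overall, 5 s to connect
    max_retries=2,
)

# Give long streaming requests a larger budget on a per-call basis
streaming_client = client.with_options(timeout=600.0)
```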

Troubleshooting

Authentication Error (Invalid API Key)

Requests return 401 Unauthorized.

Solution: Verify API key is correct and not expired. Check environment variable name matches. Ensure no whitespace in key.

Rate Limit Exceeded (429)

Too many requests sent in short time.

Solution: Implement exponential backoff retry. Check rate limit headers: X-RateLimit-Remaining. Consider upgrading plan for higher limits.

Arabic Text Encoding Issues

Response text appears garbled or corrupted.

Solution: Verify UTF-8 encoding throughout the stack. Check database/file encoding. Test in Python by decoding the raw bytes: data.decode('utf-8') should not raise a UnicodeDecodeError; if it does, an upstream component wrote a different encoding.

Connection Timeout

Requests hang or timeout without response.

Solution: Check network connectivity. Verify firewall allows HTTPS (port 443). Increase timeout values for large requests. Check API status at status.mx4.ai.

Out of Memory with Streaming

Memory usage grows while streaming responses.

Solution: Process stream chunks immediately instead of buffering. Avoid loading entire response in memory. Use generators for efficient iteration.

Need help? Check our API documentation for detailed parameter references, or visit our quick start guide to get started.