SDKs & Libraries
Official and community libraries for MX4 Atlas.
Overview
MX4 Atlas is API-compatible with OpenAI, so you can use existing OpenAI SDKs and libraries. We provide additional utilities and examples optimized for Arabic language processing and MENA region requirements.
Key Benefits
- Drop-in replacement for OpenAI SDKs
- Optimized for Arabic text processing
- Enhanced error handling and retry logic
- Built-in regional data sovereignty controls
SDK Comparison
| Language | Package | Installation |
|---|---|---|
| Python | openai | pip install openai |
| Node.js/TypeScript | openai | npm install openai |
| Go | github.com/openai/openai-go | go get github.com/openai/openai-go |
| Rust | async-openai | cargo add async-openai |
| Java | com.openai:openai-java | Gradle/Maven dependency |
Python SDK
Installation
```bash
pip install openai
```
Use the standard OpenAI Python library. No special MX4 library required.
Basic Usage
```python
import openai
import os

# Configure for MX4 Atlas
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Chat completion
response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "مرحباً، كيف يمكنني مساعدتك؟"}
    ],
    temperature=0.7,
    max_tokens=150
)

print(response.choices[0].message.content)
```
Arabic Text Processing
```python
# MX4 Atlas is optimized for Arabic
arabic_text = """
المملكة العربية السعودية هي دولة عربية تقع في الشرق الأوسط.
عاصمتها الرياض وأكبر مدنها جدة.
"""

response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {"role": "user", "content": f"لخص النص التالي بالعربية:\n\n{arabic_text}"}
    ],
    temperature=0.3
)

print("Summary:", response.choices[0].message.content)
```
Node.js / TypeScript SDK
Installation
```bash
npm install openai
```
TypeScript Example
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.MX4_API_KEY,
  baseURL: 'https://api.mx4.ai/v1',
});

interface ChatRequest {
  messages: OpenAI.ChatCompletionMessageParam[];
  model?: string;
  temperature?: number;
}

export async function createChatCompletion(request: ChatRequest) {
  try {
    const response = await client.chat.completions.create({
      model: request.model || 'mx4-atlas-core',
      messages: request.messages,
      temperature: request.temperature ?? 0.7,
    });

    return response.choices[0].message.content;
  } catch (error) {
    console.error('MX4 Atlas API error:', error);
    throw error;
  }
}

// Usage
const result = await createChatCompletion({
  messages: [
    { role: 'user', content: 'Explain quantum computing in Arabic' }
  ]
});
```
LangChain Integration
Installation
```bash
pip install langchain langchain-openai
```
RAG with Arabic Documents
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Initialize MX4 Atlas models
llm = ChatOpenAI(
    model="mx4-atlas-core",
    openai_api_key="mx4-sk-...",
    openai_api_base="https://api.mx4.ai/v1"
)

embeddings = OpenAIEmbeddings(
    model="mx4-embeddings",
    openai_api_key="mx4-sk-...",
    openai_api_base="https://api.mx4.ai/v1"
)

# Load Arabic documents
loader = TextLoader("arabic_documents.txt", encoding="utf-8")
documents = loader.load()

# Split documents
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
docs = text_splitter.split_documents(documents)

# Create vector store
vectorstore = FAISS.from_documents(docs, embeddings)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True
)

# Query in Arabic
query = "ما هي أهمية الذكاء الاصطناعي في المنطقة العربية؟"
result = qa_chain.invoke({"query": query})
print(result["result"])
```
Fine-tuning Workflow
```python
import time

import openai

client = openai.OpenAI(
    api_key="mx4-sk-...",
    base_url="https://api.mx4.ai/v1"
)

# Upload training data
with open("arabic_training_data.jsonl", "rb") as f:
    training_file = client.files.create(
        file=f,
        purpose="fine-tune"
    )

# Start fine-tuning job
fine_tune = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="mx4-atlas-core",
    hyperparameters={
        "n_epochs": 3,
        "batch_size": 8
    }
)

print(f"Fine-tuning job started: {fine_tune.id}")

# Monitor progress
while True:
    job = client.fine_tuning.jobs.retrieve(fine_tune.id)
    print(f"Status: {job.status}")
    if job.status == "succeeded":
        print(f"Fine-tuned model: {job.fine_tuned_model}")
        break
    if job.status in ("failed", "cancelled"):
        print("Fine-tuning did not complete")
        break
    time.sleep(60)
```
LlamaIndex Integration
Installation
```bash
pip install llama-index llama-index-llms-openai
```
Arabic Knowledge Base
```python
from llama_index.llms.openai import OpenAI
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings

# Configure MX4 Atlas
Settings.llm = OpenAI(
    model="mx4-atlas-core",
    api_key="mx4-sk-...",
    api_base="https://api.mx4.ai/v1"
)

# Load Arabic documents
documents = SimpleDirectoryReader("arabic_docs").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query engine
query_engine = index.as_query_engine()
response = query_engine.query("ما هي التطورات في الذكاء الاصطناعي بالعربية؟")
print(response)
```
Migrating from OpenAI
MX4 Atlas is a drop-in replacement for OpenAI. Just change the API key and base URL:
OpenAI Configuration
```python
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://api.openai.com/v1"
)
```
MX4 Atlas Configuration
```python
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)
```
Model Mapping
| OpenAI model | MX4 Atlas model |
|---|---|
| gpt-4-turbo | mx4-atlas-core |
| gpt-3.5-turbo | mx4-atlas-lite |
| text-embedding-3-large | mx4-embeddings |

Go SDK
Installation
```bash
go get github.com/openai/openai-go
```
Basic Usage
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey(os.Getenv("MX4_API_KEY")),
		option.WithBaseURL("https://api.mx4.ai/v1"),
	)

	completion, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
		Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("مرحباً، كيف يمكنك مساعدتي؟"),
		}),
		Model: openai.F(openai.ChatModel("mx4-atlas-core")),
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(completion.Choices[0].Message.Content)
}
```
Rust SDK
Installation
```bash
cargo add async-openai
cargo add tokio --features macros,rt-multi-thread
```
Async Example
```rust
use std::env;

use async_openai::config::OpenAIConfig;
use async_openai::types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs};
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the client at MX4 Atlas instead of api.openai.com
    let config = OpenAIConfig::new()
        .with_api_key(env::var("MX4_API_KEY")?)
        .with_api_base("https://api.mx4.ai/v1");
    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("mx4-atlas-core")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("مرحباً، كيف يمكنك مساعدتي؟")
            .build()?
            .into()])
        .build()?;

    let response = client.chat().create(request).await?;
    println!(
        "{}",
        response.choices[0].message.content.clone().unwrap_or_default()
    );
    Ok(())
}
```
Java SDK
Gradle Setup
```groovy
dependencies {
    implementation 'com.openai:openai-java:0.12.0'
}
```
Example
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatCompletion;
import com.openai.models.ChatCompletionCreateParams;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .apiKey(System.getenv("MX4_API_KEY"))
    .baseUrl("https://api.mx4.ai/v1")
    .build();

ChatCompletion completion = client.chat().completions().create(
    ChatCompletionCreateParams.builder()
        .model("mx4-atlas-core")
        .addUserMessage("مرحباً، كيف يمكنك مساعدتي؟")
        .build()
);

System.out.println(completion.choices().get(0).message().content());
```
Advanced Features
Streaming Responses
```python
stream = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[...],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
Token Counting
```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "المملكة العربية السعودية"
tokens = encoding.encode(text)

print(f"Tokens: {len(tokens)}")
# Arabic text = ~30% fewer tokens
```
Batch Processing
```python
# Each line of the batch input file uses the OpenAI batch request format
batch_input = [
    {"custom_id": "1", "method": "POST", "url": "/v1/chat/completions",
     "body": {"model": "mx4-atlas-core", "messages": [...]}},
    {"custom_id": "2", "method": "POST", "url": "/v1/chat/completions",
     "body": {"model": "mx4-atlas-core", "messages": [...]}},
]

# Write batch_input as JSONL, upload it with purpose="batch",
# then pass the returned file ID as input_file_id
batch = client.batches.create(
    input_file_id=file_id,
    endpoint="/v1/chat/completions",
    completion_window="24h"
)
```
Vision API
```python
response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "..."}},
                {"type": "text", "text": "اوصف الصورة بالعربية"}
            ]
        }
    ]
)
```
Best Practices
Error Handling with Retries
Implement exponential backoff for 429 rate limits and transient network or server errors. Do not retry other 4xx errors, which indicate a problem with the request itself.
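A minimal Python sketch of this policy, reusing the client configuration shown earlier; the exception classes come from the openai package, while the retry count and backoff constants are illustrative choices rather than recommended values:

```python
import os
import random
import time

import openai

client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

def chat_with_retry(messages, max_retries=5):
    """Retry rate limits and transient failures with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="mx4-atlas-core",
                messages=messages,
            )
        except (openai.RateLimitError, openai.APIConnectionError, openai.InternalServerError):
            if attempt == max_retries - 1:
                raise
            time.sleep((2 ** attempt) + random.random())  # 1s, 2s, 4s, ... plus jitter
        except openai.APIStatusError:
            raise  # other 4xx errors are not retried
```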
UTF-8 Text Encoding
Always use UTF-8 encoding for Arabic text. Verify encoding at input/output boundaries to prevent mojibake.
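For example, a small sketch of enforcing UTF-8 at the file boundaries of an application (file names and helper functions are placeholders):

```python
from pathlib import Path

def load_prompt(path: str) -> str:
    # Fails fast on non-UTF-8 input instead of silently producing mojibake
    return Path(path).read_text(encoding="utf-8")

def append_response(path: str, text: str) -> None:
    # Always write model output back out as UTF-8
    with open(path, "a", encoding="utf-8") as f:
        f.write(text + "\n")
```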
Token Optimization
Arabic text typically compresses to roughly 30% fewer tokens. Pre-calculate token counts for budget planning and cost control.
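A small sketch of a pre-flight budget check, using the same cl100k_base encoding as the token counting example above as an approximation of the MX4 Atlas tokenizer; the limit and pricing values are placeholders:

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # approximation for budgeting

def fits_budget(prompt: str, max_prompt_tokens: int = 4000) -> bool:
    # Reject prompts that would exceed the per-request token budget before sending them
    return len(encoding.encode(prompt)) <= max_prompt_tokens

def estimated_input_cost(prompt: str, price_per_1k_tokens: float) -> float:
    # price_per_1k_tokens is a placeholder; substitute your plan's actual pricing
    return len(encoding.encode(prompt)) / 1000 * price_per_1k_tokens
```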
API Key Security
Never hardcode API keys. Use environment variables or secret management systems. Rotate keys regularly.
Request Timeout Management
Set appropriate timeouts based on request type. Streaming requests need longer timeouts than chat completions.
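As an illustration with the Python SDK, which supports both a client-wide default and per-request overrides via with_options; the timeout values here are examples, not recommendations:

```python
import os

import openai

# Client-wide default timeout (seconds)
client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1",
    timeout=30.0,
)

# Longer per-request timeout for streaming or long generations
stream = client.with_options(timeout=120.0).chat.completions.create(
    model="mx4-atlas-core",
    messages=[{"role": "user", "content": "Write a long report on AI adoption in the MENA region"}],
    stream=True,
)
```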
Troubleshooting
Authentication Error (Invalid API Key)
Requests return 401 Unauthorized.
Solution: Verify API key is correct and not expired. Check environment variable name matches. Ensure no whitespace in key.
Rate Limit Exceeded (429)
Too many requests sent in short time.
Solution: Implement exponential backoff retry. Check rate limit headers: X-RateLimit-Remaining. Consider upgrading plan for higher limits.
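One way to inspect those headers with the Python SDK is the with_raw_response accessor; the header name shown is the X-RateLimit-Remaining header mentioned above, and the exact set exposed by MX4 Atlas may vary:

```python
raw = client.chat.completions.with_raw_response.create(
    model="mx4-atlas-core",
    messages=[{"role": "user", "content": "ping"}],
)
print("Remaining requests:", raw.headers.get("X-RateLimit-Remaining"))
completion = raw.parse()  # the usual ChatCompletion object
```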
Arabic Text Encoding Issues
Response text appears garbled or corrupted.
Solution: Verify UTF-8 encoding throughout the stack. Check database/file encoding. Test in Python: decoding the raw response bytes with bytes.decode('utf-8') should not raise errors.
Connection Timeout
Requests hang or timeout without response.
Solution: Check network connectivity. Verify firewall allows HTTPS (port 443). Increase timeout values for large requests. Check API status at status.mx4.ai.
Out of Memory with Streaming
Memory usage grows while streaming responses.
Solution: Process stream chunks immediately instead of buffering. Avoid loading entire response in memory. Use generators for efficient iteration.
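A short sketch of that pattern, wrapping the streaming call from the example above in a generator so each delta is handed off as soon as it arrives:

```python
def stream_text(messages):
    """Yield text deltas as they arrive instead of buffering the full response."""
    stream = client.chat.completions.create(
        model="mx4-atlas-core",
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta

# Consume incrementally: write each piece out rather than accumulating it in memory
for piece in stream_text([{"role": "user", "content": "اشرح الحوسبة الكمية"}]):
    print(piece, end="", flush=True)
```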
Need help? Check our API documentation for detailed parameter references, or visit our quick start guide to get started.