
Quick Start Guide

Make your first request using the OpenAI‑compatible API in Test Access or your own deployment.

Last updated on February 2, 2026

MX4 Atlas provides an OpenAI‑compatible API. You can reuse existing SDKs and tools by updating the base URL and API key.

Get your API Key

You'll need an API key to authenticate your requests. If you don't have one yet, contact us to request Test Access or a Free POC Pilot.

Note: Your API key (`mx4-sk-...`) carries privileges for your account. Keep it secure and never commit it to public repositories.

Install the OpenAI SDK

Since MX4 Atlas is API-compatible with OpenAI, you don't need a special "MX4" library. Just install the standard OpenAI Python package.

bash
pip install openai

Configure the Client

Initialize the client by pointing it to the MX4 Atlas API endpoint. For Test Access, use the hosted endpoint. For on‑prem, use your internal base URL.

main.py (python)
import openai

client = openai.OpenAI(
    # Test Access: https://api.mx4.ai/v1
    # On-prem: https://your-atlas-endpoint/v1
    base_url="https://api.mx4.ai/v1",
    # Use your MX4 API key
    api_key="mx4-sk-1234567890abcdef"
)
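
If you prefer not to hardcode the key, you can read it from an environment variable instead. This is a minimal sketch; it assumes you have exported MX4_API_KEY, the same variable name used by the JavaScript and cURL examples further down.

python
import os
import openai

# Assumes the key was exported in your shell first, e.g.:
#   export MX4_API_KEY="mx4-sk-..."
client = openai.OpenAI(
    # Swap in your on-prem base URL if you are not using Test Access
    base_url="https://api.mx4.ai/v1",
    api_key=os.environ["MX4_API_KEY"]
)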

Make your first request

Now you can call the Chat Completions API. Use a model that is available in your deployment (for example, `mx4-atlas-core`).

main.py (python)
response = client.chat.completions.create(
    model="mx4-atlas-core",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"}
    ]
)

print(response.choices[0].message.content)

You should see a response from the Atlas API. If you encounter errors, check authentication and model availability.
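
Because the API is OpenAI-compatible, you can also list the models your key can access through the SDK's models endpoint. This is a quick way to check both authentication and model availability; it assumes your deployment exposes the standard /v1/models route and reuses the client configured above.

python
# Print every model ID available to this API key; a 401 here means the key
# is wrong, while a missing model name points to an availability issue.
for model in client.models.list():
    print(model.id)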

Other Languages & Methods

JavaScript/Node.js

quickstart.js (javascript)
const OpenAI = require('openai');

const client = new OpenAI({
  apiKey: process.env.MX4_API_KEY,
  baseURL: 'https://api.mx4.ai/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'mx4-atlas-core',
    messages: [{ role: 'user', content: 'Hello' }]
  });
  console.log(response.choices[0].message.content);
}

main();

cURL

bash
curl https://api.mx4.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MX4_API_KEY" \
  -d '{
    "model": "mx4-atlas-core",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'

Error Handling

Always handle errors gracefully in production. Here's a best-practice example:

error_handling.py (python)
import openai
from openai import APIError, AuthenticationError, RateLimitError

client = openai.OpenAI(
    api_key="mx4-sk-...",
    base_url="https://api.mx4.ai/v1"
)

try:
    response = client.chat.completions.create(
        model="mx4-atlas-core",
        messages=[{"role": "user", "content": "Hello"}],
        timeout=30
    )
    print(response.choices[0].message.content)

except AuthenticationError:
    print("Invalid API key. Check your credentials.")
except RateLimitError:
    print("Rate limit exceeded. Wait before retrying.")
except APIError as e:
    # APIError is the SDK's base class; not every subclass carries a
    # status_code, so print the exception itself.
    print(f"API error: {e}")
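
For transient failures such as rate limits, a simple retry with exponential backoff usually suffices. The helper below (chat_with_retry is a hypothetical name, not part of the SDK) is a minimal sketch built on the same client and exceptions as the example above.

python
import time

from openai import RateLimitError

def chat_with_retry(client, messages, model="mx4-atlas-core", attempts=3):
    # Retry on rate limits, waiting 1s, 2s, 4s, ... between attempts.
    for attempt in range(attempts):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)

response = chat_with_retry(client, [{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)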

Common Issues & Solutions

❌ ModuleNotFoundError: No module named 'openai'

Run `pip install --upgrade openai` (the examples require version 1.0 or later of the openai package).

❌ 401 Unauthorized

Your API key is missing or incorrect. Check that it starts with `mx4-sk-`.

❌ Model not found

Your account may not have access to that model. Check the Model Catalog for available models in your region.