
Fine-Tuning

Customize models for your domain, dialect, or specialized tasks.

Last updated on February 2, 2026

Fine-tuning allows you to adapt MX4 Atlas models to your specific domain, language, or task. By training on your own data, you can improve accuracy on specialized tasks, reduce hallucinations, and lower costs by serving smaller models tuned to your use case.

Fine-Tuning Benefits

  • 30-50% improvement in task-specific accuracy
  • Reduced hallucinations through domain-specific training
  • Lower API costs with smaller, more efficient models
  • Faster inference with optimized behavior
  • Support for proprietary terminology and formats

Fine-Tuning Workflow

Prepare Training Data

Create a JSONL file with training examples. Each line should be a JSON object with a "messages" array following the chat format.

training_data.jsonl
{"messages": [{"role": "system", "content": "You are a financial advisor."}, {"role": "user", "content": "ما هو أفضل استثمار"}, {"role": "assistant", "content": "يعتمد على احتياجاتك..."}]}
{"messages": [{"role": "system", "content": "You are a financial advisor."}, {"role": "user", "content": "كيف أبدأ الاستثمار"}, {"role": "assistant", "content": "أولاً، تحتاج إلى..."}]}
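Before uploading, it can help to run a quick local sanity check over the file. The following is a minimal sketch, not part of the MX4 Atlas API; the filename validate_training_data.py and the exact checks are illustrative, so adapt them to your own data.

validate_training_data.py
import json

ALLOWED_ROLES = {"system", "user", "assistant"}

# Minimal local sanity check of the JSONL file before uploading
with open("training_data.jsonl", encoding="utf-8") as f:
    for line_number, line in enumerate(f, start=1):
        if not line.strip():
            continue
        example = json.loads(line)  # raises if the line is not valid JSON
        messages = example.get("messages")
        assert isinstance(messages, list) and messages, f"line {line_number}: missing 'messages' array"
        for message in messages:
            assert message.get("role") in ALLOWED_ROLES, f"line {line_number}: unexpected role"
            assert isinstance(message.get("content"), str), f"line {line_number}: content must be a string"

print("All lines parsed and follow the chat format.")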

Upload Training File

Upload your JSONL training file to MX4 Atlas. We validate the format and provide statistics about your dataset.

upload_training_data.py
import openai
import os

client = openai.OpenAI(
    api_key=os.getenv("MX4_API_KEY"),
    base_url="https://api.mx4.ai/v1"
)

# Upload training file
response = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune"
)

training_file_id = response.id
print(f"Training file uploaded: {training_file_id}")
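To confirm the upload, you can fetch the stored file's metadata with the same OpenAI-compatible client. This is a small sketch: the exact fields returned by MX4 Atlas may differ, and the filename check_uploaded_file.py is illustrative.

check_uploaded_file.py
# Confirm the upload and inspect basic metadata about the stored file
file_info = client.files.retrieve(training_file_id)
print(f"Filename: {file_info.filename}")
print(f"Size (bytes): {file_info.bytes}")
print(f"Purpose: {file_info.purpose}")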

Create Fine-Tuning Job

Submit a fine-tuning job with your training file. Specify the base model and training parameters.

start_fine_tuning.py
# Create fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    model="mx4-atlas-core",
    hyperparameters={
        "n_epochs": 3,
        "learning_rate_multiplier": 1.0
    },
    suffix="finance-advisor"
)

job_id = job.id
print(f"Fine-tuning job started: {job_id}")
print(f"Status: {job.status}")
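If you start a job with the wrong file or parameters, you can list your recent jobs and cancel one that is still in flight. This sketch assumes the fine-tuning endpoints mirror the OpenAI-compatible interface used above; the filename manage_jobs.py is illustrative.

manage_jobs.py
# List recent fine-tuning jobs on your account
for recent_job in client.fine_tuning.jobs.list(limit=5):
    print(recent_job.id, recent_job.status)

# Cancel a job that is still running
# client.fine_tuning.jobs.cancel(job_id)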

Monitor Training Progress

Track your fine-tuning job status. Training typically takes 1-24 hours depending on dataset size.

check_job_status.py
# Check job status; token counts and metrics are only populated once training finishes
job = client.fine_tuning.jobs.retrieve(job_id)
print(f"Job Status: {job.status}")

if job.status == "succeeded":
    model_id = job.fine_tuned_model
    print(f"Trained tokens: {job.trained_tokens}")
    print(f"Training loss: {job.result_metrics.get('final_loss', 'N/A')}")
    print(f"Fine-tuned model: {model_id}")
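Since training can run for hours, one option is a simple polling loop that checks the job periodically and stops once it reaches a terminal state. This is a sketch: the five-minute interval is an arbitrary choice, and the terminal status names other than "succeeded" are assumptions.

poll_job_status.py
import time

# Poll until the job reaches a terminal state
while True:
    job = client.fine_tuning.jobs.retrieve(job_id)
    print(f"Status: {job.status}")
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(300)  # wait five minutes between checks

if job.status == "succeeded":
    print(f"Fine-tuned model ready: {job.fine_tuned_model}")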

Deploy & Use Fine-Tuned Model

Once training completes, use your fine-tuned model just like any other model via the Chat Completions API.

use_fine_tuned_model.py
# Use your fine-tuned model
response = client.chat.completions.create(
    model="mx4-atlas-core-finance-advisor",
    messages=[
        {"role": "user", "content": "كيف أستثمر في الأسهم السعودية"}
    ],
    temperature=0.5
)

print(response.choices[0].message.content)
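Before switching production traffic, a quick spot check of the base and fine-tuned models on the same prompt can show whether behavior actually changed. A minimal sketch reusing the client and model names from the examples above; compare_models.py is an illustrative filename.

compare_models.py
# Send the same prompt to the base and fine-tuned models and compare answers
prompt = [{"role": "user", "content": "كيف أستثمر في الأسهم السعودية"}]

for model_name in ("mx4-atlas-core", "mx4-atlas-core-finance-advisor"):
    result = client.chat.completions.create(
        model=model_name,
        messages=prompt,
        temperature=0.5
    )
    print(f"--- {model_name} ---")
    print(result.choices[0].message.content)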

Dataset Requirements

Minimum Requirements

  • ✓ At least 50 examples (100+ recommended)
  • ✓ JSONL format with messages array
  • ✓ Balanced distribution across use cases
  • ✓ High-quality, expert-validated responses

Quality Guidelines

  • ✓ Remove sensitive data and secrets
  • ✓ Ensure consistent response format
  • ✓ Use native Arabic for Arabic tasks
  • ✓ Include diverse examples for robustness

Best Practices

Train on Diverse Examples

Include edge cases and variations to improve generalization beyond your training set.

Validate with Test Set

Reserve 10-20% of data for testing. Evaluate performance before deploying to production.
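One simple way to hold out a test set is to shuffle the JSONL file and split it before uploading. A minimal sketch using only the standard library; the 80/20 split follows the 10-20% guidance above, and the output filenames train.jsonl and test.jsonl are illustrative.

split_dataset.py
import random

# Shuffle and hold out 20% of examples as a test set
with open("training_data.jsonl", encoding="utf-8") as f:
    examples = [line for line in f if line.strip()]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(examples)

split_index = int(len(examples) * 0.8)
with open("train.jsonl", "w", encoding="utf-8") as f:
    f.writelines(examples[:split_index])
with open("test.jsonl", "w", encoding="utf-8") as f:
    f.writelines(examples[split_index:])

print(f"{split_index} training examples, {len(examples) - split_index} test examples")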

Iterative Improvement

Start with smaller models, measure performance, and collect feedback to improve future iterations.

Monitor Drift

Periodically evaluate model performance on new data and retrain when accuracy drops below thresholds.
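A drift check can be as simple as scoring the deployed model on a small labeled set drawn from recent traffic and flagging when accuracy falls below a threshold. The sketch below assumes a hypothetical eval.jsonl of {"prompt": ..., "expected": ...} pairs, uses exact-match scoring as a placeholder for a task-appropriate metric, and picks an arbitrary threshold; it reuses the client from the upload example.

monitor_drift.py
import json

ACCURACY_THRESHOLD = 0.85  # example threshold; choose one that fits your task

# eval.jsonl is a hypothetical file of {"prompt": ..., "expected": ...} pairs from recent traffic
with open("eval.jsonl", encoding="utf-8") as f:
    eval_examples = [json.loads(line) for line in f if line.strip()]

correct = 0
for example in eval_examples:
    reply = client.chat.completions.create(
        model="mx4-atlas-core-finance-advisor",
        messages=[{"role": "user", "content": example["prompt"]}],
        temperature=0
    )
    answer = reply.choices[0].message.content.strip()
    # Exact match is a placeholder; use whatever scoring matches your task
    if answer == example["expected"].strip():
        correct += 1

accuracy = correct / len(eval_examples)
print(f"Accuracy on recent data: {accuracy:.2%}")
if accuracy < ACCURACY_THRESHOLD:
    print("Accuracy below threshold; consider collecting fresh examples and retraining.")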

Common Mistakes

Insufficient Training Data

Using fewer than 50 examples often leads to overfitting and poor generalization.

Solution: Aim for 100-500 high-quality examples for reliable domain adaptation.

Low-Quality Training Examples

Garbage in, garbage out: poor training data produces poor models.

Solution: Validate all examples manually, use expert reviewers, and remove low-confidence samples.

Overfitting to Training Data

Model memorizes training examples instead of learning generalizable patterns.

Solution: Use diverse examples, implement early stopping, validate on separate test set.
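If the fine-tuning endpoint accepts a validation file, as the OpenAI-compatible interface used above does, passing the held-out set lets training report validation loss alongside training loss, which helps catch overfitting early. Whether MX4 Atlas supports the validation_file parameter is an assumption; test.jsonl comes from the split sketch earlier.

create_job_with_validation.py
# Upload the held-out test set and pass it as a validation file
validation_file = client.files.create(
    file=open("test.jsonl", "rb"),
    purpose="fine-tune"
)

job = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    validation_file=validation_file.id,
    model="mx4-atlas-core",
    hyperparameters={"n_epochs": 3},
    suffix="finance-advisor"
)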

Ready to get started? Contact our sales team for access to the fine-tuning program, dataset validation, and dedicated support.

Sales team: sales@mx4.ai