mNLU API

mNLU API reference

Training Config Sample

Here is a training config sample:

{
  "task": "intents",
  "model_name": "best_model_ever",
  "training_arguments": {
    "num_train_epochs": 1,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "learning_rate": 0.0001,
    "warmup_ratio": 0.1,
    "lr_scheduler_type": "cosine",
    "label_smoothing_factor": 0.15,
    "do_eval": true,
    "save_strategy": "epoch",
    "evaluation_strategy": "epoch",
    "load_best_model_at_end": true,
    "metric_for_best_model": "eval_loss",
    "greater_is_better": false,
    "no_cuda": false,
    "report_to": "none"
  },
  "transformer_model_name": "bert-base-multilingual-cased",
  "separator": ",",
  "do_split": false,
  "split_ratio": 0.2,
  "placeholders": false
}
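
The keys under training_arguments mirror Hugging Face's transformers.TrainingArguments, so the block can be sanity-checked locally before submitting it. Below is a minimal sketch under that assumption; the config path and the output_dir value are illustrative, not part of the API:

import json

from transformers import TrainingArguments

# The path is illustrative; load the config from wherever it is stored.
with open("training_config.json") as f:
    cfg = json.load(f)

# output_dir is required by TrainingArguments but is not part of the
# sample config; reusing model_name for it here is an assumption.
# Note: recent transformers releases rename evaluation_strategy to
# eval_strategy and no_cuda to use_cpu, so the older keys may warn.
args = TrainingArguments(output_dir=cfg["model_name"], **cfg["training_arguments"])
print(args.learning_rate, args.lr_scheduler_type)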

Details

Let's break down the components of this JSON configuration:

- task: the training task to run; "intents" trains an intent classification model.
- model_name: the name assigned to the resulting model.
- training_arguments: hyperparameters for the training run. The field names match Hugging Face's TrainingArguments:
  - num_train_epochs: number of passes over the training data (1 here).
  - per_device_train_batch_size / per_device_eval_batch_size: batch size per device for training and evaluation (32 each).
  - learning_rate: initial learning rate (0.0001).
  - warmup_ratio: fraction of training steps used to warm up the learning rate (10%).
  - lr_scheduler_type: learning-rate schedule; "cosine" decays the rate along a cosine curve after warmup.
  - label_smoothing_factor: amount of label smoothing applied to the targets (0.15).
  - do_eval: run evaluation during training.
  - save_strategy / evaluation_strategy: checkpoint and evaluate once per epoch.
  - load_best_model_at_end: reload the best checkpoint when training finishes.
  - metric_for_best_model / greater_is_better: select the best checkpoint by eval_loss, where lower is better.
  - no_cuda: set to true to force CPU-only training; false uses a GPU when available.
  - report_to: experiment trackers to report results to; "none" disables reporting integrations.
- transformer_model_name: the pretrained Hugging Face checkpoint to fine-tune (here bert-base-multilingual-cased, a multilingual BERT).
- separator: the delimiter used to parse the training data file (a comma here).
- do_split / split_ratio: whether to hold out a portion of the data for evaluation and, if so, what fraction (0.2); the ratio is presumably ignored when do_split is false. A sketch of this behavior follows the list.
- placeholders: presumably toggles placeholder handling in the training data; disabled in this sample.
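
Here is a minimal sketch of the hold-out split that do_split and split_ratio appear to control, using pandas and scikit-learn; the file name and the "intent" column name are assumptions, not part of the mNLU spec:

import pandas as pd
from sklearn.model_selection import train_test_split

# File and column names are illustrative assumptions.
df = pd.read_csv("intents.csv", sep=",")  # sep matches "separator" in the config

# With do_split=true and split_ratio=0.2, 20% of the rows would be held
# out for evaluation; stratifying keeps the intent distribution similar
# in both parts.
train_df, eval_df = train_test_split(
    df, test_size=0.2, stratify=df["intent"], random_state=42
)
print(len(train_df), len(eval_df))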

This JSON configuration describes a complete training run in one place: the data format and split, the base transformer to fine-tune, and the hyperparameters passed to the trainer, which makes the run easy to reproduce and customize.
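
As a usage sketch, the config is sent to the service as a JSON body. The host and route below are hypothetical placeholders; the actual training endpoint is documented elsewhere in this reference:

import requests

# Hypothetical host and route, for illustration only.
TRAIN_URL = "http://localhost:8000/train"

with open("training_config.json") as f:
    payload = f.read()

resp = requests.post(
    TRAIN_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    timeout=60,
)
resp.raise_for_status()
print(resp.status_code)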