Experiment intro:

Following Jeremy Howard's suggestion in the Practical Deep Learning for Coders course, we will run an experiment with two different loss functions on the same dataset. We will take a simple multi-class model and retrain it as a multi-label classifier, then check whether:

  1. It can work effectively with images that don't contain any tanks.
  2. It can handle an image with two different kinds of tanks.
  3. Accuracy on the single-label dataset is impacted by switching to multi-label classification.

Both models are trained to classify between three classes: Merkava Mk4 tank, M1 Abrams tank, or a water tank.

Tank classifier models:

Download images for training

from fastbook import *  # provides search_images_ddg, download_images, Path, etc.

tank_types = 'merkava mk4','M1 Abrams','water'
path = Path('tanks')

# download up to 150 images per class into labeled directories
if not path.exists():
    path.mkdir()
    for o in tank_types:
        dest = (path/o)
        dest.mkdir(exist_ok=True)
        urls = search_images_ddg(f'{o} tank', max_images=150)
        download_images(dest, urls=urls)

# delete files that failed to download or can't be opened as images
fns = get_image_files(path)
failed = verify_images(fns)
failed.map(Path.unlink);

Building a model using multi-class loss

Preparing the data for the model

tanks = DataBlock(
    blocks=(ImageBlock, CategoryBlock), 
    get_items=get_image_files, 
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())

dls = tanks.dataloaders(path)

dls.valid.show_batch(max_n=5, nrows=1)

Training the model and reviewing the results

learnMC = cnn_learner(dls, resnet18, metrics=error_rate)
learnMC.fine_tune(4) # fastai picks the multi-class loss (cross-entropy) automatically because of the CategoryBlock
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
epoch train_loss valid_loss error_rate time
0 1.476966 0.376781 0.144231 00:45
epoch train_loss valid_loss error_rate time
0 0.504485 0.251117 0.125000 00:24
1 0.339178 0.157399 0.057692 00:26
2 0.282746 0.184535 0.067308 00:27
3 0.231388 0.182214 0.067308 00:25

Review top 10 loss images:
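
The plot itself is omitted here, but for reference, fastai's interpretation API is the usual way to produce it (a minimal sketch, not copied from the original notebook):

# show the validation images the multi-class model got most wrong
interp = ClassificationInterpretation.from_learner(learnMC)
interp.plot_top_losses(10, nrows=2)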

Building a model using multi-label loss

Preparing the data for the model

def parentlabel(x):
    return [x.parent.name]  # get_y must return a list of labels for MultiCategoryBlock
tanks2 = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock), 
    # alternatively, MultiCategoryBlock(add_na=True) adds an explicit 'NA' label
    get_items=get_image_files, 
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parentlabel,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())

dls2 = tanks2.dataloaders(path)

dls2.valid.show_batch(nrows=1, ncols=5)
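
Note that with MultiCategoryBlock the targets become one-hot encoded vectors over the vocab rather than single class indices. A quick check (the printed values are illustrative):

# each target is a one-hot vector over dls2.vocab
x, y = dls2.one_batch()
print(dls2.vocab)  # e.g. ['M1 Abrams', 'merkava mk4', 'water']
print(y[0])        # e.g. TensorMultiCategory([0., 1., 0.])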

Training the model and reviewing the results

learnML = cnn_learner(dls2, resnet18, metrics=accuracy_multi) 
# accuracy_multi defaults: thresh=0.5, sigmoid=True
learnML.fine_tune(4)
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
epoch train_loss valid_loss accuracy_multi time
0 0.942959 0.533273 0.804487 01:40
epoch train_loss valid_loss accuracy_multi time
0 0.479595 0.333302 0.846154 00:25
1 0.375680 0.250790 0.894231 00:25
2 0.301029 0.202079 0.923077 00:25
3 0.255428 0.178423 0.935897 00:25

Adjusting the threshold (since the accuracy curve is smooth, we shouldn't be worried that we're overfitting the threshold to the validation set):

preds, targs = learnML.get_preds()
xs = torch.linspace(0.05, 0.95, 29)
# get_preds already applies the sigmoid activation, so tell accuracy_multi not to
accs = [accuracy_multi(preds, targs, thresh=i, sigmoid=False) for i in xs]
plt.plot(xs, accs);
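
One way to pick the operating point from this sweep is simply the threshold with the highest validation accuracy (a sketch using the xs and accs computed above); here we settle on 0.85 and re-validate:

# threshold with the highest validation accuracy
best_thresh = xs[torch.stack(accs).argmax()]
print(best_thresh)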
learnML.metrics = partial(accuracy_multi, thresh=0.85)
learnML.validate()
(#2) [0.14189694821834564,0.9615384340286255]

Review top 10 loss images:

target predicted probabilities loss
0 merkava mk4 M1 Abrams TensorBase([0.9994, 0.0057, 0.0015]) 4.218245983123779
1 M1 Abrams merkava mk4 TensorBase([0.3920, 0.9105, 0.0013]) 1.1169370412826538
2 M1 Abrams M1 Abrams;merkava mk4 TensorBase([0.8152, 0.9213, 0.0116]) 0.9194316267967224
3 merkava mk4 M1 Abrams;merkava mk4 TensorBase([9.2422e-01, 9.5859e-01, 8.6757e-04]) 0.874360978603363
4 merkava mk4 M1 Abrams TensorBase([0.7902, 0.4187, 0.0231]) 0.8185343742370605
5 merkava mk4 merkava mk4;water TensorBase([0.0296, 0.8759, 0.7703]) 0.5444540977478027
6 M1 Abrams M1 Abrams;merkava mk4 TensorBase([0.6383, 0.6916, 0.0058]) 0.543639063835144
7 water M1 Abrams;water TensorBase([7.7097e-01, 6.5688e-04, 9.9986e-01]) 0.4915701746940613
8 M1 Abrams M1 Abrams;merkava mk4 TensorBase([0.6539, 0.5428, 0.0217]) 0.40980264544487
9 M1 Abrams M1 Abrams;merkava mk4 TensorBase([9.8275e-01, 6.9141e-01, 8.7449e-04]) 0.398002952337265
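
For reference, such a table can be built by ranking the validation items by per-item loss (a sketch, not from the original notebook; get_preds(with_loss=True) returns per-item losses):

# rank the validation set by loss and inspect the 10 worst items
preds, targs, losses = learnML.get_preds(with_loss=True)
for i in losses.argsort(descending=True)[:10]:
    print(dls2.valid_ds.items[i], preds[i], losses[i].item())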

Additional validation (images outside the training scope)

The first model is multi-class and is forced to pick exactly one class for any input image; its softmax output further encourages a single confident choice. The second model is multi-label and predicts each class independently, so in principle it could be better at flagging images that fall outside the scope of the classes, or at detecting two classes within one image.
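
The difference is easy to see on raw logits; a minimal illustration with made-up numbers:

logits = torch.tensor([2.0, 1.5, -1.0])
print(torch.softmax(logits, dim=0))  # sums to 1, so the model always "picks" something
print(torch.sigmoid(logits))         # independent per-class probabilities, can all be low or all high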

Now let's see whether one model performs better on out-of-scope images, even though both were trained on the same dataset:

Trying to predict an image with both Merkava and Abrams tanks in it:

twotanks = PILImage.create('/content/mek_abs_3.jpg')
twotanks.show()

Multi-class prediction:

learnMC.predict(twotanks)
('M1 Abrams', TensorBase(0), TensorBase([9.9978e-01, 2.2307e-04, 2.6804e-07]))

Multi-label prediction:

learnML.predict(twotanks)
((#1) ['M1 Abrams'],
 TensorBase([ True, False, False]),
 TensorBase([9.9956e-01, 1.7685e-02, 4.3857e-04]))

Both models are quite certain about the same single class, the M1 Abrams, even though the image contains both tank types.

bicycle = PILImage.create('/content/bycycle_nature.jpg')
bicycle.show()

Multi-class prediction:

learnMC.predict(bicycle)
('water', TensorBase(2), TensorBase([0.0223, 0.0080, 0.9698]))

Multi-label prediction:

learnML.predict(bicycle)
((#1) ['water'],
 TensorBase([False, False,  True]),
 TensorBase([0.4164, 0.1055, 0.9838]))

Both models confidently predict the same class, water tank, even though the image contains no tank at all.

piano = PILImage.create('/content/Grand-Piano.jpg')
piano.show()

Multi-class prediction:

learnMC.predict('/content/Grand-Piano.jpg')
('M1 Abrams', TensorBase(0), TensorBase([0.8950, 0.0257, 0.0793]))

Multi-label prediction:

learnML.predict('/content/Grand-Piano.jpg')
((#1) ['M1 Abrams'],
 TensorBase([ True, False, False]),
 TensorBase([0.9771, 0.0851, 0.4458]))

Again, both models confidently predict the same class, the M1 Abrams, for an image with no tank in it.

Comparison summary

Although we used two loss functions designed for different purposes, based on these tests both models perform quite similarly:

  1. Metrics - both models reach roughly a 7% error rate (adjusting the multi-label threshold does increase its accuracy, but I believe that difference has more to do with training randomness and the different metric functions than with the models themselves).

  2. Top losses - both models appear to struggle with the same images.

  3. Out-of-scope images - in a small, informal check, the multi-label model showed no advantage in identifying that images were out of scope, nor in labeling an image that contains two different classes, as we might have expected (see the sketch below).

The bottom line is that the main factor is the dataset: a model can only learn as well as the examples it sees during training, and since we didn't train the multi-label model on the relevant examples (images with no tanks, or with more than one tank type), it couldn't outperform the multi-class model in this case.
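
For completeness, here is the kind of check the multi-label model would enable with suitable training data: flag an image as out of scope when no class probability clears the threshold (flag_out_of_scope is an illustrative helper, not part of the notebook):

def flag_out_of_scope(learn, img_path, thresh=0.5):
    # True when no class probability clears the threshold
    _, _, probs = learn.predict(PILImage.create(img_path))
    return bool((probs < thresh).all())

flag_out_of_scope(learnML, '/content/bycycle_nature.jpg')
# still returns False here: the model assigns 'water' a 0.98 probability to the bicycle image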