Incremental vs. Batch Example


Overview

All modules in AdaptiveResonance.jl are designed to handle incremental and batch training. In fact, ART modules are generally incremental in their implementation, so their batch methods wrap the incremental ones and handle preprocessing, etc. For example, DDVFA can be run incrementally (i.e. with one sample at a time) with custom algorithmic options and a predetermined data configuration.

Note

In the incremental case, it is necessary to provide a data configuration if the model is not pretrained because the model has no knowledge of the boundaries and dimensionality of the data, which are necessary in the complement coding step. For more info, see the guide in the docs on incremental vs. batch.
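
Before diving into the full example, here is a minimal sketch of the two call patterns. The data here (X, y) is hypothetical, with four features assumed to lie in [0, 1]; everything in it is constructed explicitly in the walkthrough below.

using AdaptiveResonance

# Hypothetical data for the sketch: 4 features in [0, 1] and integer labels.
X = rand(4, 10)
y = rand(1:3, 10)

# Batch: pass the whole feature matrix; the data config is inferred from the data.
art_batch_sketch = DDVFA()
y_hat_batch_sketch = train!(art_batch_sketch, X, y=y)

# Incremental: provide a data config up front, then pass one sample and label at a time.
art_inc_sketch = DDVFA()
art_inc_sketch.config = DataConfig(0, 1, 4)     # four features, each assumed in [0, 1]
y_hat_inc_sketch = [train!(art_inc_sketch, X[:, i], y=y[i]) for i in axes(X, 2)]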

Data Setup

We begin with importing AdaptiveResonance for the ART modules and MLDatasets for some data utilities.

using AdaptiveResonance # ART
using MLDatasets        # Iris dataset
using DataFrames        # DataFrames, necessary for MLDatasets.Iris()
using MLDataUtils       # Shuffling and splitting
using Printf            # Formatted number printing

We will download the Iris dataset because it is small and commonly used as a benchmark for clustering algorithms.

# Get the iris dataset
iris = Iris(as_df=false)
# Manipulate the features and labels into a matrix of features and a vector of labels
features, labels = iris.features, iris.targets
(4×150 Matrix{Float64}, 1×150 Matrix{InlineStrings.String15})   # features and labels; full printout omitted

Because the MLDatasets package gives us Iris labels as strings, we will use the MLDataUtils.convertlabel method with the MLLabelUtils.LabelEnc.Indices type to get a list of integers representing each class:

labels = convertlabel(LabelEnc.Indices{Int}, vec(labels))
unique(labels)
3-element Vector{Int64}:
 1
 2
 3
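
For reference, the same kind of integer encoding could be produced in plain Julia without an extra dependency. This is only an illustrative sketch with hypothetical variable names, and the index order may differ from the convertlabel result above.

# Plain-Julia alternative (illustrative only): index each class by first appearance.
class_names = unique(vec(iris.targets))
labels_manual = [findfirst(==(name), class_names) for name in vec(iris.targets)]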

Next, we will create a train/test split with the MLDataUtils.stratifiedobs utility:

(X_train, y_train), (X_test, y_test) = stratifiedobs((features, labels))
((4×105 features, 105-element label vector), (4×45 features, 45-element label vector))   # train/test split; full printout omitted

Incremental vs. Batch

Setup

Now we can create two modules to illustrate training one in batch mode and the other incrementally.

# Create several modules for batch and incremental training.
# We can take advantage of the options instantiation method here to use the same options for both modules.
opts = opts_DDVFA(rho_lb=0.6, rho_ub=0.75)
art_batch = DDVFA(opts)
art_incremental = DDVFA(opts)
DDVFA(opts_DDVFA
  rho_lb: Float64 0.6
  rho_ub: Float64 0.75
  alpha: Float64 0.001
  beta: Float64 1.0
  gamma: Float64 3.0
  gamma_ref: Float64 1.0
  similarity: Symbol single
  max_epoch: Int64 1
  display: Bool false
  gamma_normalization: Bool true
  uncommitted: Bool false
  activation: Symbol gamma_activation
  match: Symbol gamma_match
  update: Symbol basic_update
  sort: Bool false
, opts_FuzzyART
  rho: Float64 0.75
  alpha: Float64 0.001
  beta: Float64 1.0
  gamma: Float64 3.0
  gamma_ref: Float64 1.0
  max_epoch: Int64 1
  display: Bool false
  gamma_normalization: Bool true
  uncommitted: Bool false
  activation: Symbol gamma_activation
  match: Symbol gamma_match
  update: Symbol basic_update
  sort: Bool false
, DataConfig(false, Float64[], Float64[], 0, 0), 0.0, FuzzyART[], Int64[], 0, 0, Float64[], Float64[], Dict{String, Any}("bmu" => 0, "mismatch" => false, "M" => 0.0, "T" => 0.0))
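
As an aside, the option keywords can also be passed directly to the constructor, which builds the options internally; a one-line sketch of that equivalent pattern:

# Equivalent construction: forward the option keywords straight to the constructor.
art_alternative = DDVFA(rho_lb=0.6, rho_ub=0.75)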

For the incremental version, we must set up the data configuration in advance. In batch mode, this is done automatically from the provided data, but the incremental variant has no way of knowing the bounds of the individual features. We could preprocess the data and set the data configuration ourselves with art.config = DataConfig(0, 1, 4), which declares that the data contains four features that each range from 0 to 1. This is appropriate when we have either preprocessed the data or have prior knowledge about the bounds of the individual features. However, in this example we will let the module determine the bounds with the convenience method data_setup!:

# Setup the data config on all of the features.
data_setup!(art_incremental.config, features)
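
For completeness, the manual alternative mentioned above would look like the following sketch. It is illustrative only: declaring bounds of [0, 1] assumes the features have already been normalized to that range, which is not true of the raw Iris features used here, so this example sticks with data_setup!.

# Manual alternative (illustrative only, not part of this walkthrough):
# declare four features, each assumed to lie in [0, 1].
art_manual = DDVFA(opts)
art_manual.config = DataConfig(0, 1, 4)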

Training

We can train in batch with a simple supervised mode by passing the labels as a keyword argument.

y_hat_batch_train = train!(art_batch, X_train, y=y_train)
println("Training labels: ",  size(y_hat_batch_train), " ", typeof(y_hat_batch_train))
Training labels: (105,) Vector{Int64}
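
Although we pass labels here, DDVFA is fundamentally an unsupervised (clustering) module; omitting the y keyword trains on the data alone and returns the internally generated cluster labels. A sketch of that usage on a fresh module:

# Unsupervised sketch: train without labels and collect the cluster assignments.
art_unsupervised = DDVFA(opts)
y_hat_clusters = train!(art_unsupervised, X_train)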

We can also train incrementally with the same method, taking care to pass a single feature vector and a single integer label at a time:

# Get the number of training samples
n_train = length(y_train)
# Create a container for the training output labels
y_hat_incremental_train = zeros(Int, n_train)
# Iterate over all training samples
for ix in eachindex(y_train)
    sample = X_train[:, ix]
    label = y_train[ix]
    y_hat_incremental_train[ix] = train!(art_incremental, sample, y=label)
end
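
As a stylistic aside, the loop above could equivalently be written as a comprehension; the sketch below is an alternative form of the same loop, not an additional training step.

# Equivalent comprehension form of the incremental training loop above (reference only).
y_hat_incremental_train = [
    train!(art_incremental, X_train[:, ix], y=y_train[ix]) for ix in eachindex(y_train)
]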

Testing

We can then classify the test data with both networks and compare their performance. For both, we will use the best-matching unit in the case of a complete mismatch (see the docs on Mismatch vs. BMU):

# Classify one model in batch mode
y_hat_batch = AdaptiveResonance.classify(art_batch, X_test, get_bmu=true)

# Classify one model incrementally
n_test = length(y_test)
y_hat_incremental = zeros(Int, n_test)
for ix = 1:n_test
    y_hat_incremental[ix] = AdaptiveResonance.classify(art_incremental, X_test[:, ix], get_bmu=true)
end

# Check the shape and type of the output labels
println("Batch testing labels: ",  size(y_hat_batch), " ", typeof(y_hat_batch))
println("Incremental testing labels: ",  size(y_hat_incremental), " ", typeof(y_hat_incremental))
Batch testing labels: (45,) Vector{Int64}
Incremental testing labels: (45,) Vector{Int64}

Finally, we check the performance (the number of correct classifications over the total number of samples) for both models. We expect the results to be nearly identical; the small difference in test performance below most likely comes from the data configurations, since the incremental module's bounds were computed from the full feature matrix while the batch module inferred its bounds from the training split alone.

# Calculate performance on training data, testing data, and with get_bmu
perf_train_batch = performance(y_hat_batch_train, y_train)
perf_train_incremental = performance(y_hat_incremental_train, y_train)
perf_test_batch = performance(y_hat_batch, y_test)
perf_test_incremental = performance(y_hat_incremental, y_test)

# Format each performance number for comparison
@printf "Batch training performance: %.4f\n" perf_train_batch
@printf "Incremental training performance: %.4f\n" perf_train_incremental
@printf "Batch testing performance: %.4f\n" perf_test_batch
@printf "Incremental testing performance: %.4f\n" perf_test_incremental
Batch training performance: 1.0000
Incremental training performance: 1.0000
Batch testing performance: 0.9778
Incremental testing performance: 1.0000

Visualization

So we have shown that the behavior of the modules is effectively the same in incremental and batch modes. Great! Sadly, illustrating this point doesn't lend itself to visualization in any meaningful way. Nonetheless, we would like a pretty picture at the end of the experiment to verify that these solutions work in the first place. Sanity checks are meaningful in their own right, right?

To do this, we will reduce the dimensionality of the dataset to two dimensions and show in a scatter plot how the modules classify the test data into groups. This will be done with principal component analysis (PCA), which casts the points into a 2-D space while trying to preserve the relative distances between points in the original, higher-dimensional space. The process isn't perfect by any means, but it suffices for visualization.

# Import visualization utilities
using Printf            # Formatted number printing
using MultivariateStats # Principal component analysis (PCA)
using Plots             # Plotting frontend
gr()                    # Use the default GR backend explicitly

# Train a PCA model
M = fit(PCA, features; maxoutdim=2)

# Apply the PCA model to the testing set
X_test_pca = MultivariateStats.transform(M, X_test)
2×45 Matrix{Float64}:
 -1.25763   -2.16538   2.85221   …   0.751467   0.707081  2.40551
 -0.179137   0.21528  -0.932865  …  -1.00111   -1.00842   0.195917

Now that we have the test points cast into a 2-D set of points, we can create a scatter plot that shows how each point is categorized by the modules.

# Create a scatterplot object from the data with some additional formatting options
scatter(
    X_test_pca[1, :],       # PCA dimension 1
    X_test_pca[2, :],       # PCA dimension 2
    group = y_hat_batch,    # labels belonging to each point
    markersize = 8,         # size of scatter points
    legend = false,         # no legend
    xtickfontsize = 12,     # x-tick size
    ytickfontsize = 12,     # y-tick size
    dpi = 300,              # Set the dots-per-inch
    xlims = :round,         # Round up the x-limits to the nearest whole number
    xlabel = "\$PCA_1\$",   # x-label
    ylabel = "\$PCA_2\$",   # y-label
    title = "DDVFA Iris Clusters",              # plot title
)

This plot shows that the DDVFA modules do well at identifying the structure of the three clusters despite not achieving 100% test performance.
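
The standalone path below is the saved cover image for this demo. In a Literate.jl script it would typically come from a figure-saving call along the lines of the following sketch; the call itself is an assumption (only the filename is taken from the output), and whether it returns the path string depends on the Plots.jl version.

# Hypothetical figure-saving step (assumed, not shown in the original source):
# write the current plot to the demo's cover-image path.
png("assets/incremental-batch-cover")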

"assets/incremental-batch-cover.png"

This page was generated using DemoCards.jl and Literate.jl.