
Mixture of Experts – C Implementation and Sparse Input Data

Recently, I have been working on the classification of very high-dimensional (input dimension ~150,000) sparse data, as a continuation of the Artificial Neural Networks course project I finished last semester. I am currently experimenting with the Mixture of Experts architecture to see if it produces better classification rates. As described in my last post, I first implemented the MoE model in MATLAB, but I needed an implementation that runs faster, so I ported my MATLAB code to C with a few additions. Since my focus is on sparse data where most of the input values are zero, I implemented separate, optimized functions for training and testing with sparse input data.
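
The sparse routines themselves are not listed in this post, but the idea behind them is straightforward: store each sample as index/value pairs and skip the zero entries whenever a dot product with a weight vector is needed. Below is a minimal sketch of that idea; the SparseVector type and sparse_dot function are illustrative and not the actual structures used in my implementation.

#include <stdio.h>

/* Illustrative sparse sample: only the nonzero entries are stored. */
typedef struct {
    int nnz;            /* number of nonzero entries */
    const int *idx;     /* indices of the nonzero entries */
    const double *val;  /* corresponding values */
} SparseVector;

/* Dot product of a dense weight vector with a sparse input:
 * cost is O(nnz) instead of O(inputDim). */
static double sparse_dot(const double *w, const SparseVector *x)
{
    double sum = 0.0;
    for (int k = 0; k < x->nnz; k++)
        sum += w[x->idx[k]] * x->val[k];
    return sum;
}

int main(void)
{
    double w[6] = {0.5, -1.0, 2.0, 0.0, 1.5, 0.25};
    int idx[2] = {2, 4};            /* only dimensions 2 and 4 are nonzero */
    double val[2] = {3.0, -2.0};
    SparseVector x = {2, idx, val};

    printf("w.x = %f\n", sparse_dot(w, &x));  /* 2*3 + 1.5*(-2) = 3 */
    return 0;
}
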
Below you can see sample code that initializes, trains, and tests the MoE model with synthetic data I generated for classification.

//      ClassifySyntheticData.c
//      
//      Copyright 2011 Goker Erdogan <goker@goker-laptop>
//      
//      This program is free software; you can redistribute it and/or modify
//      it under the terms of the GNU General Public License as published by
//      the Free Software Foundation; either version 2 of the License, or
//      (at your option) any later version.
//      
//      This program is distributed in the hope that it will be useful,
//      but WITHOUT ANY WARRANTY; without even the implied warranty of
//      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
//      GNU General Public License for more details.
//      
//      You should have received a copy of the GNU General Public License
//      along with this program; if not, write to the Free Software
//      Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
//      MA 02110-1301, USA.

#include "MoE.h"
#include <stdio.h>

int main( int argc, char **argv)
{
	int inputDim = 2;
	int outputDim = 2;
	int expertCount = 2;
	int trainCount = 600;
	int testCount = 400;
	int iterCount = 5;
	
	MoEType *moe = InitializeMoE( inputDim, outputDim, expertCount, COMPETITIVE, 0.1, 0.99 );
	
	double x[2];
	double rd;
	int r;
	
	// Open the synthetic training and test sets (one sample per line).
	FILE *tfp = fopen("trainMoE.txt","r");
	FILE *vfp = fopen("testMoE.txt","r");
	if ( tfp == NULL || vfp == NULL )
	{
		fprintf(stderr, "Could not open trainMoE.txt / testMoE.txt\n");
		FreeMoE(moe);
		return 1;
	}
	
	int i = 0;
	int n;
	int rc;
	int misclassified = 0;	
	while( i < iterCount )
	{
		i++;
		// Start each epoch from the beginning of both files so that
		// every iteration sees the same training and test samples.
		rewind(tfp);
		rewind(vfp);
		
		// One pass of online training over the training set.
		for ( n = 0; n < trainCount; n++ )
		{
			fscanf(tfp, "%lf %lf %lf\n", &x[0], &x[1], &rd);
			r = (int)rd;
			TrainOnlineD(moe, x, r, 0);		
		}
		
		// Evaluate classification accuracy on the test set after this epoch.
		misclassified = 0;	
		for ( n = 0; n < testCount; n++ )
		{
			fscanf(vfp, "%lf %lf %lf\n", &x[0], &x[1], &rd);
			r = (int)rd;
			rc = TestSample(moe, x);
			if ( rc != r )
			{
				misclassified++;
			}
		}
		
		printf("Iteration %d Classification Rate: %f\n", i, (testCount - misclassified) / (double)testCount);
	}
	
	fclose(tfp);
	fclose(vfp);	
	FreeMoE(moe);
	
	return 0;

}

The data structures and functions for the MoE model, the synthetic classification dataset, and the sample code given above are all included in the source code, which you can download here.

Mixture of Experts

Mixture of experts (MoE) is a neural network architecture in which separate linear models are trained for local regions of the input space. These linear models are called experts, and their outputs are combined with weights given by a gating model. The gating model discriminates between local regions of the data and may employ different functions, such as radial basis functions or a simple dot product. If the data exhibit different characteristics in different regions of the input space, one approach would be to run a clustering method on the data as a preprocessing step and train a separate model for each cluster. The MoE architecture couples the clustering and model-fitting tasks into a single task and may prove more successful on certain datasets. There are two types of MoE architecture that differ in their clustering approaches: in competitive MoE, the local regions are forced to focus on distinct sub-spaces of the data, while in the cooperative type there is no such constraint on the location of the regions, so they may, and most of the time will, overlap.

First, I will present a formal definition of the MoE architecture and the update equations for training such a model. Afterwards, I will provide the MoE implementation in MATLAB and illustrate it on a synthetic regression example.

X = \lbrace x, r \rbrace where x is a D x N matrix whose columns are the input vectors and r is a K x N matrix whose columns give, for each sample, either the class it belongs to (classification) or the output values for each output dimension (regression).

j=1...D inputs, i=1...K outputs, h=1...H experts, \eta learning rate
Linear fit of expert h for output i: w_{ih} = v_{ih}^{T} x
Weight vector of each linear model: v_{ih}, of size D+1 since a bias unit is added to the input.
Local region center for expert h: m_h, a D-vector.
Gating model, weight of expert h (a simple dot product is used here): g_h = \frac{ exp(m_h^T x) }{\Sigma_k exp(m_k^T x)}
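
As a small illustration of the gating model, here is a self-contained C sketch that computes the gating weights g_h as the softmax over the dot products m_h^T x. The function name, the row-major layout of m, and the max-subtraction trick for numerical stability are choices made for this sketch, not necessarily what the library code does.

#include <math.h>
#include <stdio.h>

/* Gating weights: softmax over the dot products m_h^T x.
 * m is an H x D matrix in row-major order (one region center per row),
 * x is the D-dimensional input, g receives the H gating weights. */
static void gating_weights(const double *m, const double *x,
                           int H, int D, double *g)
{
    double maxdot = -INFINITY;
    double sum = 0.0;
    for (int h = 0; h < H; h++) {
        double dot = 0.0;
        for (int j = 0; j < D; j++)
            dot += m[h * D + j] * x[j];
        g[h] = dot;
        if (dot > maxdot)
            maxdot = dot;
    }
    for (int h = 0; h < H; h++) {
        /* subtracting the maximum keeps exp() from overflowing */
        g[h] = exp(g[h] - maxdot);
        sum += g[h];
    }
    for (int h = 0; h < H; h++)
        g[h] /= sum;
}

int main(void)
{
    double m[2 * 2] = { 0.2, -0.5,     /* m_1 */
                       -0.3,  0.8 };   /* m_2 */
    double x[2] = { 1.0, 2.0 };
    double g[2];

    gating_weights(m, x, 2, 2, g);
    printf("g = [%f, %f]\n", g[0], g[1]);  /* the weights sum to 1 */
    return 0;
}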

Regression:
Output: y_i = \Sigma_{h=1}^{H} w_{ih} g_h
Classification:
Output: the y_i are converted to posterior probabilities through the softmax function: o_i = \frac{exp(y_i)}{\Sigma_k exp(y_k)}
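
To make the output equations concrete, here is a short sketch that combines given expert outputs w_{ih} with gating weights g_h into y_i and then applies the softmax to obtain o_i for classification. The values in main are made up purely for illustration.

#include <math.h>
#include <stdio.h>

#define K 2  /* number of outputs */
#define H 2  /* number of experts */

/* Combine expert outputs w[i][h] with gating weights g[h] into y[i],
 * then apply the softmax to get posteriors o[i] for classification. */
static void combine_outputs(double w[K][H], const double g[H],
                            double y[K], double o[K])
{
    double sum = 0.0;
    for (int i = 0; i < K; i++) {
        y[i] = 0.0;
        for (int h = 0; h < H; h++)
            y[i] += w[i][h] * g[h];   /* y_i = Sigma_h w_ih g_h */
    }
    for (int i = 0; i < K; i++) {
        o[i] = exp(y[i]);             /* o_i = exp(y_i) / Sigma_k exp(y_k) */
        sum += o[i];
    }
    for (int i = 0; i < K; i++)
        o[i] /= sum;
}

int main(void)
{
    /* made-up expert outputs and gating weights for illustration */
    double w[K][H] = {{0.8, -0.2}, {-0.5, 1.1}};
    double g[H] = {0.7, 0.3};
    double y[K], o[K];

    combine_outputs(w, g, y, o);
    printf("y = [%f, %f], o = [%f, %f]\n", y[0], y[1], o[0], o[1]);
    return 0;
}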

The update equations for regression and classification are derived with gradient-based optimization: in regression the sum of squared errors is minimized (gradient descent), while in classification the likelihood of a training instance is maximized (gradient ascent). When derived, the update equations for regression and classification turn out to be the same; they are given below for the two types of MoE.

Cooperative MoE
\Delta v_{ih} = \eta (r_i - o_i) g_h x
\Delta m_h = \eta \Sigma_i (r_i - o_i) (w_{ih} - y_i) g_h x
(o_i = y_i for regression)
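
As a concrete reading of the cooperative updates, here is a hedged C sketch of a single online step. The array layout, the bias handling, and the accumulation over the outputs i in the \Delta m_h term reflect how I read the equations above; this is not code taken from the library.

#include <stdio.h>

#define D 2  /* input dimension; x carries a bias as its last entry */
#define K 2  /* number of outputs */
#define H 2  /* number of experts */

/* One cooperative online update for a single sample.
 * v[i][h] is the (D+1)-dimensional weight vector of expert h for output i,
 * m[h] is the D-dimensional region center of expert h.
 * r: targets, o: current outputs (o_i = y_i for regression),
 * y: combined outputs, w[i][h]: expert outputs, g: gating weights. */
static void cooperative_update(double v[K][H][D + 1], double m[H][D],
                               const double x[D + 1], const double r[K],
                               const double o[K], const double y[K],
                               double w[K][H], const double g[H],
                               double eta)
{
    for (int h = 0; h < H; h++) {
        double dm = 0.0;
        for (int i = 0; i < K; i++) {
            /* Delta v_ih = eta (r_i - o_i) g_h x */
            for (int j = 0; j <= D; j++)
                v[i][h][j] += eta * (r[i] - o[i]) * g[h] * x[j];
            /* accumulate the contribution of output i to Delta m_h */
            dm += (r[i] - o[i]) * (w[i][h] - y[i]) * g[h];
        }
        /* Delta m_h = eta Sigma_i (r_i - o_i)(w_ih - y_i) g_h x */
        for (int j = 0; j < D; j++)
            m[h][j] += eta * dm * x[j];
    }
}

int main(void)
{
    double v[K][H][D + 1] = {0};
    double m[H][D] = {0};
    double x[D + 1] = {1.0, 2.0, 1.0};            /* last entry: bias */
    double r[K] = {1.0, 0.0};                     /* one-hot target */
    /* made-up forward-pass quantities, just for illustration */
    double w[K][H] = {{0.2, -0.1}, {0.1, 0.3}};
    double g[H] = {0.6, 0.4};
    double y[K] = {0.08, 0.18};
    double o[K] = {0.475, 0.525};

    cooperative_update(v, m, x, r, o, y, w, g, 0.1);
    printf("v[0][0][0] after one update: %f\n", v[0][0][0]);
    return 0;
}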

Competitive MoE
\Delta v_{ih} = \eta f_h (r_i - y_{ih}) x
\Delta m_h = \eta (f_h - g_h) x
For classification, f_h = \frac{ g_h exp ( \Sigma_i r_i log y_{ih} )}{\Sigma_k g_k exp ( \Sigma_i r_i log y_{ik} )} and y_{ih} = \frac{exp(w_{ih})}{\Sigma_k exp(w_{kh})}
For regression, f_h = \frac{ g_h exp ( -0.5 \Sigma_i (r_i - y_{ih})^2 )}{\Sigma_k g_k exp ( -0.5 \Sigma_i (r_i - y_{ik})^2 )} and y_{ih} = w_{ih}
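
And a corresponding sketch of one competitive online step for the classification case: the per-expert softmax y_{ih}, the responsibilities f_h, and the two updates. Again, the data layout and the example values are illustrative assumptions rather than the library's actual code.

#include <math.h>
#include <stdio.h>

#define D 2  /* input dimension; x carries a bias as its last entry */
#define K 2  /* number of outputs */
#define H 2  /* number of experts */

/* One competitive online update (classification case) for a single sample.
 * w[i][h] = v_ih^T x are the raw expert outputs, g[h] the gating weights. */
static void competitive_update(double v[K][H][D + 1], double m[H][D],
                               const double x[D + 1], const double r[K],
                               double w[K][H], const double g[H],
                               double eta)
{
    double yh[H][K], f[H], fsum = 0.0;

    for (int h = 0; h < H; h++) {
        /* per-expert softmax: y_ih = exp(w_ih) / Sigma_k exp(w_kh) */
        double s = 0.0;
        for (int i = 0; i < K; i++) {
            yh[h][i] = exp(w[i][h]);
            s += yh[h][i];
        }
        for (int i = 0; i < K; i++)
            yh[h][i] /= s;

        /* f_h is proportional to g_h exp( Sigma_i r_i log y_ih ) */
        double loglik = 0.0;
        for (int i = 0; i < K; i++)
            loglik += r[i] * log(yh[h][i]);
        f[h] = g[h] * exp(loglik);
        fsum += f[h];
    }
    for (int h = 0; h < H; h++)
        f[h] /= fsum;

    for (int h = 0; h < H; h++) {
        /* Delta v_ih = eta f_h (r_i - y_ih) x */
        for (int i = 0; i < K; i++)
            for (int j = 0; j <= D; j++)
                v[i][h][j] += eta * f[h] * (r[i] - yh[h][i]) * x[j];
        /* Delta m_h = eta (f_h - g_h) x */
        for (int j = 0; j < D; j++)
            m[h][j] += eta * (f[h] - g[h]) * x[j];
    }
}

int main(void)
{
    double v[K][H][D + 1] = {0};
    double m[H][D] = {0};
    double x[D + 1] = {1.0, 2.0, 1.0};            /* last entry: bias */
    double r[K] = {1.0, 0.0};                     /* sample belongs to the first class */
    double w[K][H] = {{0.2, -0.1}, {0.1, 0.3}};   /* made-up expert outputs */
    double g[H] = {0.6, 0.4};                     /* made-up gating weights */

    competitive_update(v, m, x, r, w, g, 0.1);
    printf("m[0][0] after one update: %f\n", m[0][0]);
    return 0;
}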

To test my implementation, I generated synthetic regression and classification problems. The plot of the regression dataset can be seen below. I generated the points from two linear models, using the first model in the first half of the input space and the second in the rest, and added some Gaussian noise to the samples.

Input data plot
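
The dataset itself was generated in MATLAB, but the recipe is easy to reproduce. Here is a rough C sketch of the same idea; the slopes, intercepts, and noise level are arbitrary choices for illustration, not the values I actually used.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Standard-normal sample via the Box-Muller transform. */
static double randn(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    int N = 200;
    for (int n = 0; n < N; n++) {
        double x = (double)n / N;      /* inputs spread over [0, 1) */
        double y;
        if (x < 0.5)
            y = 2.0 * x + 1.0;         /* first linear model on the first half */
        else
            y = -1.0 * x + 2.5;        /* second linear model on the rest */
        y += 0.1 * randn();            /* additive Gaussian noise */
        printf("%f %f\n", x, y);       /* one "input output" pair per line */
    }
    return 0;
}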

Two functions implement the MoE architecture: TrainMixtureOfExperts trains the model, and TestMixtureOfExperts tests instances on the trained MoE network. The code below trains the network on the training data and reports the error on the validation set.

% train a competitive MoE for regression on the training data (tx2, tr2)
[v, m] = TrainMixtureOfExperts('regression', 'competitive', tx2, tr2, 2, 100, 0.5, 0.99);
% test the trained network (v, m) on the validation set (vx2, vr2)
[err, cr] = TestMixtureOfExperts('regression', vx2, vr2, v, m);
% plot predicted outputs (red crosses) over the true outputs (blue circles)
plot(vx2, cr, 'xr')
hold on
plot(vx2, vr2, 'ob')

As can be seen in the plot below, a visual comparison of the predicted outputs with the given outputs shows that the MoE model learns the underlying generating model quite successfully.

Input data (blue) and outputs predicted by the model (red)

All of the source code, including the MoE functions and the synthetic data generation routines, can be downloaded from here.