Tuesday, October 10, 2017

Basic Perceptron learning algorithm diverges

I have tried to implement a basic perceptron for a classification problem in C++. I trained it on points separated by a straight line (take random x and y; if y < a * x + b, class = -1, else class = 1). It works well when b = 0 (about 10 errors on 4k inputs), but when I introduce the b factor, the algorithm starts to loop and "never" converges. I think it is something related to the bias, but I am not able to find the error. The code is very basic, so I post only my learning algorithm.

void Neuron::train(std::vector<trainElement> trainingSet) {
    double error = 1;
    while(error > 0) {
        error = 0;
        for (auto ex = trainingSet.begin(); ex < trainingSet.end(); ex++) {
            if (ex->inputs.size() != weights.size())
                throw std::length_error("Input length does not agree with weight one.");

            //Check the update direction and update the error.
            int dir = ex->result - this->feedForward(ex->inputs);
            error += pow(dir, 2);

            //Compute the new weights.
            for (size_t i = 0; dir != 0 && i < this->weights.size(); i++)
                this->weights[i] += this->eta * dir * ex->inputs[i];
        }
        error *= 0.5;
    }
}
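For reference, the labelling rule described above (class -1 below the line y = a*x + b, class +1 otherwise) can be sketched as follows. The helper names (labelPoint, makeTrainingSet) are my own, not part of the posted code; trainElement is assumed to hold the [1, x, y] input vector and an integer label.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical helper mirroring the question's rule: points below
// the line y = a*x + b get class -1, the rest get class +1.
int labelPoint(double x, double y, double a, double b) {
    return (y < a * x + b) ? -1 : 1;
}

// Assumed shape of the question's training element.
struct trainElement {
    std::vector<double> inputs; // [1, x, y] -- the leading 1 feeds the bias weight
    int result;                 // -1 or +1
};

// Build a linearly separable training set of n random points.
std::vector<trainElement> makeTrainingSet(std::size_t n, double a, double b,
                                          unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> coord(-10.0, 10.0);
    std::vector<trainElement> set;
    set.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        double x = coord(gen), y = coord(gen);
        set.push_back({{1.0, x, y}, labelPoint(x, y, a, b)});
    }
    return set;
}
```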

The weight vector is [bias, w1, w2] and the input vector is [1, x, y]. The weights for the inputs are initialized to random values between -1 and 1, while the bias is initialized to 0. I am sure the inputs are linearly separable, so the algorithm should converge, but it does not. Any suggestions?
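For comparison, here is a minimal self-contained sketch (the class and names are mine, not the poster's code) of a perceptron using the same [bias, w1, w2] / [1, x, y] convention. The key point is that feedForward must include the constant-1 input in its weighted sum, so the bias weight participates in both classification and updates; with that in place, the classic update rule converges on data separated by a line with b != 0, and a bounded epoch count guards against an endless loop on noisy data.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal perceptron sketch. Weight layout matches the question:
// weights = [bias, w1, w2], inputs = [1, x, y].
struct Perceptron {
    std::vector<double> weights{0.0, 0.5, -0.5}; // bias starts at 0, as in the question
    double eta = 0.1;

    // Step activation on the weighted sum; the leading 1 in the
    // input vector makes weights[0] act as the bias.
    int feedForward(const std::vector<double>& inputs) const {
        double sum = 0.0;
        for (std::size_t i = 0; i < weights.size(); ++i)
            sum += weights[i] * inputs[i];
        return sum >= 0.0 ? 1 : -1;
    }

    // Classic perceptron rule: w_i += eta * (target - output) * x_i.
    // Stops after a full error-free pass, or after maxEpochs.
    void train(const std::vector<std::pair<std::vector<double>, int>>& set,
               int maxEpochs = 1000) {
        for (int epoch = 0; epoch < maxEpochs; ++epoch) {
            int errors = 0;
            for (const auto& ex : set) {
                int dir = ex.second - feedForward(ex.first);
                if (dir != 0) {
                    ++errors;
                    for (std::size_t i = 0; i < weights.size(); ++i)
                        weights[i] += eta * dir * ex.first[i];
                }
            }
            if (errors == 0) return; // converged
        }
    }
};
```

By the perceptron convergence theorem, this loop terminates in a finite number of updates on any linearly separable set, including lines with a nonzero intercept.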
