Purpose

In recent years, the application of metaheuristics to training neural network models has gained significance owing to the drawbacks of deterministic algorithms. This paper aims to propose the use of the recently developed “memory based hybrid dragonfly algorithm” (MHDA) for training the multi-layer perceptron (MLP) model by finding the optimal set of weights and biases.
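As context for how metaheuristic trainers of this kind operate, the MLP's weights and biases are typically flattened into a single candidate vector, and the training error serves as the fitness value that the metaheuristic minimizes. The sketch below is an illustration of that encoding, not the paper's own implementation; the one-hidden-layer architecture, sigmoid activation and mean-squared-error fitness are assumptions.

```python
import numpy as np

def decode(vector, n_in, n_hidden, n_out):
    """Unpack a flat candidate vector into the MLP's weight matrices and bias vectors."""
    i = 0
    W1 = vector[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = vector[i:i + n_hidden]; i += n_hidden
    W2 = vector[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = vector[i:i + n_out]
    return W1, b1, W2, b2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fitness(vector, X, y, n_in, n_hidden, n_out):
    """Mean squared training error of the MLP encoded by `vector` --
    the quantity a metaheuristic trainer such as MHDA would minimize."""
    W1, b1, W2, b2 = decode(vector, n_in, n_hidden, n_out)
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    return np.mean((output - y) ** 2)

# Example: evaluate one random candidate for a 4-5-3 network on random data.
n_in, n_hidden, n_out = 4, 5, 3
dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
rng = np.random.default_rng(0)
X, y = rng.random((20, n_in)), rng.random((20, n_out))
print(fitness(rng.uniform(-1, 1, dim), X, y, n_in, n_hidden, n_out))
```

An optimizer would repeatedly propose candidate vectors of length `dim` and retain those with lower fitness, returning the best vector found as the trained network's weights and biases.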

Design/methodology/approach

The efficiency of MHDA in training MLPs is evaluated by applying it to classification and approximation benchmark data sets. Its performance is compared with that of other training algorithms, and the significance of the results is established using statistical methods. The computational complexity of the MHDA-trained MLP is also estimated.
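The abstract does not name the statistical methods employed; a common choice when comparing trainers over repeated independent runs is a rank-based test such as the Wilcoxon rank-sum test, sketched below with synthetic, purely illustrative error values.

```python
import numpy as np
from scipy.stats import ranksums

# Illustrative final test errors from 30 independent runs of each trainer (synthetic data).
rng = np.random.default_rng(0)
mhda_errors = rng.normal(loc=0.05, scale=0.01, size=30)
other_errors = rng.normal(loc=0.08, scale=0.02, size=30)

# Two-sided Wilcoxon rank-sum test: a small p-value indicates the error
# distributions of the two trainers differ significantly.
stat, p_value = ranksums(mhda_errors, other_errors)
print(f"statistic={stat:.3f}, p-value={p_value:.4f}")
```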

Findings

Simulation results show that MHDA can effectively find a near-optimal set of weights and biases at a higher convergence rate than other training algorithms.

Originality/value

This paper presents MHDA as an alternative optimization algorithm for training MLPs. MHDA can effectively optimize the set of weights and biases, making it a promising trainer for MLPs.
