Area of Research

  • Fractional-order Neural Networks

  • Prediction and Forecasting Techniques

  • Hybrid Optimization Algorithms

Fractional-order Activation Functions

The rectified linear unit (ReLU) has become the default activation function for many neural networks, including convolutional neural networks, because models trained with ReLU converge faster and generally achieve higher performance. The performance of ReLU and its variants can be further improved using fractional calculus. This research focuses on deriving and validating fractional-order forms of the ReLU activation function and its linear and nonlinear variants. The linear variants include the leaky and parametric ReLU, while the nonlinear variants include the exponential, sigmoid-weighted, and Gaussian error linear units. A standard formula was developed for constructing the fractional form of the linear variants, while expansion series such as the Maclaurin and Taylor series were used to design the fractional versions of the nonlinear variants. A simulation study validated the performance of all the developed fractional activation functions against their conventional counterparts in single- and multilayer neural network models. In this study, a neural network was built to predict the power generated by a Texas wind turbine, and performance was evaluated by varying the activation functions used in the hidden and output layers.
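To make the idea concrete, here is a minimal sketch of what a fractional-order ReLU could look like. It assumes the common construction f(x) = x^α / Γ(α + 1) on the positive branch, which reduces to the ordinary ReLU at α = 1; the function names, the leaky slope, and this particular fractional form are illustrative assumptions, not the exact derivation from the published paper.

```python
import math

def fractional_relu(x: float, alpha: float = 0.9) -> float:
    """Illustrative fractional-order ReLU (assumed form, not the paper's exact one):
    f(x) = x**alpha / Gamma(alpha + 1) for x >= 0, else 0.
    At alpha = 1 this reduces to the ordinary ReLU, since Gamma(2) = 1.
    """
    if x <= 0.0:
        return 0.0
    return x ** alpha / math.gamma(alpha + 1.0)

def fractional_leaky_relu(x: float, alpha: float = 0.9, slope: float = 0.01) -> float:
    """Hypothetical leaky variant: the same fractional form is applied to the
    negative branch, scaled by a small slope (0.01, as in standard leaky ReLU)."""
    if x > 0.0:
        return x ** alpha / math.gamma(alpha + 1.0)
    return -slope * (-x) ** alpha / math.gamma(alpha + 1.0)
```

The fractional order α acts as an extra tunable hyperparameter: values slightly below 1 bend the positive branch sublinearly while preserving the zero cut-off that makes ReLU cheap to train.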

Check our recently published paper in the "Mathematical Problems in Engineering" journal.

Prediction and Forecasting Techniques

Our research group develops prediction and forecasting models using machine learning, with applications to smart grid stability, chaotic time series prediction, and related problems. Recent literature surveys conclude that most published work on machine-learning prediction models reports little or no information on the presence and handling of missing data. Most models simply omit missing records, which is ineffective and degrades their performance. Missing data can arise from many causes, such as sensor failures, equipment malfunctions, and lost files, and it increases cost and limits the predictive ability of the proposed models. Significant research into handling missing data is therefore needed. Predicting missing values with neural networks or other machine-learning models is more effective than omitting the records or substituting mean values, so our group also focuses on developing prediction models with neural networks that handle missing input variables.
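The advantage of model-based imputation over mean substitution can be sketched in a few lines. The example below is a deliberately simple stand-in for a neural network: it predicts a missing value from one correlated input with an ordinary least-squares fit. The helper names (`fit_line`, `model_impute`, `mean_impute`) and the toy data are hypothetical illustrations, not the group's published method.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def model_impute(observed_pairs, x_known):
    """Predict a missing target from a correlated input that was observed,
    instead of falling back to the column mean."""
    xs, ys = zip(*observed_pairs)
    a, b = fit_line(xs, ys)
    return a * x_known + b

def mean_impute(values):
    """Baseline: replace the missing value with the column mean."""
    return sum(values) / len(values)
```

On strongly correlated data the model-based estimate tracks the true value, while the mean ignores the available input entirely; a neural network generalizes this idea to many inputs and nonlinear relationships.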

Check the following for our recently published works in this research area.

Hybrid Optimization Algorithms

This research proposes a novel hybrid arithmetic–trigonometric optimization algorithm (ATOA) that employs different trigonometric functions for complex and continuously evolving real-time problems. The algorithm incorporates the sine, cosine, and tangent functions into the conventional sine cosine algorithm (SCA) and arithmetic optimization algorithm (AOA) to improve the convergence rate and widen the optimal search area during the exploration and exploitation phases. ATOA was evaluated on 33 distinct optimization test problems of multiple dimensions to demonstrate its effectiveness. Furthermore, the different variants of ATOA were used to obtain controller parameters for a real-time pressure process plant. The results show a remarkable performance improvement over existing algorithms.
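A simplified sketch conveys the hybridization idea: an SCA-style population update whose step operator cycles among sine, cosine, and a clamped tangent. This is an illustrative toy in the spirit of the algorithm, assuming a standard SCA position-update form and a sphere test function; it is not the published ATOA equations, and `atoa_sketch` is a hypothetical name.

```python
import math
import random

def atoa_sketch(objective, dim=2, pop=20, iters=200, lo=-10.0, hi=10.0, seed=0):
    """Toy trigonometric hybrid in the spirit of SCA/ATOA (illustrative only).
    Each coordinate moves toward the best-so-far solution using a randomly
    chosen sin, cos, or clamped tan operator."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=objective)[:]
    for t in range(iters):
        r1 = 2.0 * (1.0 - t / iters)  # shrinks exploration over time
        for i in range(pop):
            for d in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                r4 = rng.random()
                step = abs(r3 * best[d] - X[i][d])
                if r4 < 1.0 / 3.0:
                    X[i][d] += r1 * math.sin(r2) * step
                elif r4 < 2.0 / 3.0:
                    X[i][d] += r1 * math.cos(r2) * step
                else:
                    # tan grows unboundedly near pi/2, so clamp its contribution
                    X[i][d] += r1 * max(-1.0, min(1.0, math.tan(r2))) * step
                X[i][d] = max(lo, min(hi, X[i][d]))  # keep within bounds
            if objective(X[i]) < objective(best):
                best = X[i][:]
    return best, objective(best)

sphere = lambda v: sum(x * x for x in v)
```

The shrinking factor `r1` mimics the exploration-to-exploitation transition the section describes: large early steps scatter the population, while late steps contract it around the best candidate.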

Check our published works on optimization algorithms in the "Sensors" and "Fractal and Fractional" journals.