Electronic Theses and Dissertations

Date of Award

1-1-2025

Document Type

Thesis

Degree Name

M.S. in Engineering Science

First Advisor

Sakib Md. Hasan

Second Advisor

Kasem Khalil

Third Advisor

Elliott Hutchcraft

School

University of Mississippi

Relational Format

dissertation/thesis

Abstract

Echo State Networks (ESNs) have emerged as a promising paradigm for modeling complex temporal dynamics due to their unique architecture in which only the output weights require training, while the recurrent internal reservoir remains fixed. This characteristic significantly reduces training complexity and computation time, making ESNs particularly attractive for real-time processing and resource-constrained, low-power hardware implementations. However, conventional ESNs typically employ the computationally demanding hyperbolic tangent (tanh) activation function within the reservoir neurons, which poses challenges for efficient hardware realization. To address this limitation, recent research has focused on exploring alternative lightweight activation functions—including Logistic, Tent, tanh-Linear, ReLU, and Quadratic functions—that can decrease implementation costs while maintaining competitive predictive performance.
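The ESN structure described above — a fixed random reservoir with a lightweight activation — can be sketched as follows. This is an illustrative minimal model, not the thesis implementation; the class name `ESN`, the helper `tanh_linear`, and all parameter values are our own choices, and the spectral-radius rescaling shown is one common echo-state heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_linear(x):
    # Piecewise-linear tanh approximation: identity on [-1, 1],
    # saturating outside -- far cheaper in hardware than true tanh.
    return np.clip(x, -1.0, 1.0)

class ESN:
    def __init__(self, n_in, n_res, spectral_radius=0.9, activation=tanh_linear):
        # Input and recurrent weights are drawn once and never trained;
        # only a readout (not shown) would be fitted on reservoir states.
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals the target
        # spectral radius, a standard heuristic for echo-state stability.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.f = activation
        self.x = np.zeros(n_res)

    def step(self, u):
        # State update: x(t+1) = f(W x(t) + W_in u(t)).
        self.x = self.f(self.W @ self.x + self.W_in @ u)
        return self.x
```

Swapping `activation` for Logistic, Tent, ReLU, or Quadratic variants changes only this one function, which is what makes the activation choice attractive as a hardware-cost knob.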

In addition to simplifying neuron activation, structured reservoir topologies such as the Simple Cycle Reservoir (SCR) have been investigated as an alternative to randomly connected reservoirs. SCRs impose a deterministic cyclic connectivity pattern that enables reduced memory footprint, faster execution, and more straightforward hardware mapping without significantly sacrificing computational expressiveness. To further enhance the trade-off between nonlinearity and memory capacity—a critical aspect for temporal information processing—we integrate a controllable memory–nonlinearity balancing mechanism inspired by recent theoretical advancements. This mechanism introduces a tunable hyperparameter that governs the ratio of nonlinear to linear neurons in the reservoir, providing greater flexibility in adjusting the reservoir’s dynamical properties to specific tasks.
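The two ideas in the paragraph above — deterministic cyclic connectivity and a tunable linear/nonlinear neuron ratio — can be illustrated with a short sketch. The function names and the convention that the first `alpha` fraction of neurons is nonlinear are hypothetical choices for illustration, not the thesis's exact formulation.

```python
import numpy as np

def scr_weights(n, r=0.5):
    # Simple Cycle Reservoir: neuron i feeds neuron (i+1) mod n with a
    # single shared weight r, so the whole reservoir is described by
    # one parameter and maps directly onto a hardware shift structure.
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

def mixed_step(x, u, W, W_in, alpha):
    # alpha is the fraction of nonlinear (tanh) neurons; the remainder
    # stay linear, trading nonlinear transformation capacity for
    # memory capacity as alpha is tuned.
    pre = W @ x + W_in @ u
    n_nl = int(alpha * len(x))
    out = pre.copy()
    out[:n_nl] = np.tanh(pre[:n_nl])
    return out
```

At `alpha = 0` the reservoir is purely linear (maximal memory); at `alpha = 1` every neuron is nonlinear; intermediate values interpolate between the two regimes.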

Given the increased design complexity resulting from the combination of multiple activation functions, structured connectivity, and memory–nonlinearity tuning, we employ a Genetic Algorithm (GA) to perform automated hyperparameter optimization. The GA efficiently explores the high-dimensional configuration space, adaptively tailoring the reservoir’s architecture to varying datasets and neuron types without the need for exhaustive manual tuning.
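A GA search over such a configuration space can be sketched in a few lines. This is a generic real-coded GA (truncation selection, blend crossover, Gaussian mutation), shown only to make the optimization loop concrete; the thesis's actual operators, encoding, and fitness function are not specified here, and `ga_optimize` is a name of our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_optimize(fitness, bounds, pop_size=20, generations=30, sigma=0.1):
    # bounds: list of (lo, hi) per hyperparameter, e.g. spectral
    # radius, input scaling, nonlinearity ratio alpha.
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Keep the better half, then refill the population with
        # blended, mutated children of random elite pairs.
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = elite[rng.choice(len(elite), 2)]
            child = 0.5 * (p1 + p2) + rng.normal(0.0, sigma, len(bounds))
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elite, children])
    return pop[np.argmax([fitness(p) for p in pop])]
```

In practice `fitness` would train a readout for the candidate reservoir and return validation accuracy, so each GA evaluation remains cheap precisely because ESN training touches only the output weights.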

Furthermore, to address the computational and power constraints inherent to embedded and edge AI applications, we examine the use of reduced-precision arithmetic, specifically 16-bit IEEE floating-point representations, as a means to lower computational overhead and energy consumption without severely degrading performance.
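The cost of that precision reduction can be inspected directly with NumPy's half-precision type, which implements IEEE 754 binary16 (1 sign, 5 exponent, 10 fraction bits). The sketch below compares one reservoir-style matrix-vector product in half versus double precision; the matrix size and value ranges are arbitrary illustrative choices.

```python
import numpy as np

info = np.finfo(np.float16)
print(info.eps)   # 0.000977 (= 2**-10): about 3 decimal digits
print(info.max)   # 65504.0: much narrower dynamic range than float32

# One reservoir-style update, W @ x, computed in float16 end to end
# to mimic a low-precision hardware datapath.
rng = np.random.default_rng(3)
W = rng.uniform(-0.5, 0.5, (100, 100))
x = rng.uniform(-1.0, 1.0, 100)
y64 = W @ x                            # double-precision reference
y16 = np.float16(W) @ np.float16(x)    # half-precision result
err = np.max(np.abs(y64 - y16))        # rounding error of the cast + matmul
```

For bounded reservoir states and modest reservoir sizes this error stays small relative to the signal, which is why half precision is a plausible fit for ESN inference.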

In summary, this work presents a comprehensive framework that synergistically combines SCR topology, computationally efficient neuron activation functions, GA-based hyperparameter optimization, and reduced-precision arithmetic to construct a lightweight, scalable, and hardware-friendly reservoir computing system. Extensive experiments conducted on diverse benchmark datasets, spanning both classification and regression problems, demonstrate that the proposed design achieves competitive prediction accuracy while significantly enhancing hardware feasibility. These results suggest that the proposed approach holds considerable promise for next-generation low-power temporal modeling tasks in edge AI and real-time applications.
