Date of Award
1-1-2025
Document Type
Thesis
Degree Name
M.S. in Engineering Science
First Advisor
Sakib Md. Hasan
Second Advisor
Lei Cao
Third Advisor
Kasem Khalil
School
University of Mississippi
Relational Format
dissertation/thesis
Abstract
Reservoir Computing (RC) has established itself as a compelling paradigm for modeling complex temporal dynamics, particularly owing to its efficient training regime, theoretical grounding in dynamical systems, and growing compatibility with hardware-oriented deployments. Among RC variants, Echo State Networks (ESNs) remain popular due to their simplicity and effectiveness; however, traditional ESNs with densely connected, randomly initialized reservoirs present inherent limitations in scalability, energy efficiency, and hardware realizability, challenges that hinder their adoption in edge and embedded intelligence scenarios.
In this work, we present an efficient hierarchical reservoir computing framework that integrates three complementary strategies: (1) deterministic Simple Cycle Reservoirs (SCRs) to replace random connectivity, (2) multi-layer deep architectural stacking to enhance dynamic feature extraction across temporal scales, and (3) a reduced-precision arithmetic pipeline that leverages high-precision training followed by low-bit inference (e.g., 16-bit fixed-point) to minimize resource overhead while preserving predictive fidelity. The proposed design systematically reduces multiply-accumulate (MAC) operations, shrinks memory footprint, and eliminates the stochastic variability associated with random reservoir initialization, thereby facilitating hardware-friendly implementations.
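To make the first two strategies concrete, the following is a minimal Python/NumPy sketch of an SCR layer and a deep stack of such layers. The reservoir size, ring weight r, input-weight magnitude v, the alternating sign pattern, and the function names (scr_layer, deep_scr) are illustrative assumptions, not the exact construction used in the thesis.

```python
import numpy as np

def scr_layer(u, n=100, r=0.9, v=0.5):
    """One Simple Cycle Reservoir layer over an input sequence u of shape (T, d_in).

    The recurrent matrix is a fixed ring: unit i receives r * state[i-1], so the
    recurrence costs one multiply per neuron instead of a dense N x N product.
    Input weights share a single magnitude v; their signs follow a deterministic
    pattern (a simple alternating pattern here, as a stand-in for the thesis's choice).
    Returns the reservoir state sequence of shape (T, n).
    """
    T, d_in = u.shape
    signs = (-1.0) ** np.arange(n * d_in)          # deterministic sign pattern (assumed)
    W_in = v * signs.reshape(n, d_in)
    states = np.zeros((T, n))
    x = np.zeros(n)
    for t in range(T):
        ring = r * np.roll(x, 1)                   # cycle connectivity: x_i <- r * x_{i-1}
        x = np.tanh(ring + W_in @ u[t])
        states[t] = x
    return states

def deep_scr(u, depth=3, **kw):
    """Hierarchical stacking: each SCR layer reads the state sequence of the previous one,
    and the per-layer states are concatenated as multi-scale features for a linear readout."""
    feats, h = [], u
    for _ in range(depth):
        h = scr_layer(h, **kw)
        feats.append(h)
    return np.concatenate(feats, axis=1)
```

In this style of pipeline, only the linear readout on top of the concatenated features is trained (e.g., by ridge regression), which is what keeps training inexpensive relative to fully trained recurrent networks.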
To comprehensively assess the efficiency of the proposed approach, we conduct a detailed computational complexity analysis, covering MAC operations, parameter count (distinguishing between real-valued and sign-based weights), and memory usage. Furthermore, a unified Figure of Merit (FoM) is introduced to enable fair comparisons across different hierarchical RC architectures, using a traditional shallow ESN configuration as the baseline reference.
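The abstract does not state the FoM formula itself, so the sketch below is only one plausible way such an analysis could be organized: per-step MAC counts for a dense ESN versus an SCR, and a geometric-mean resource ratio against a shallow-ESN baseline. All function names and the specific accounting conventions are assumptions for illustration.

```python
def macs_per_step_dense_esn(n, d_in):
    """Dense random reservoir: full N x N recurrence plus dense input weights."""
    return n * n + n * d_in

def macs_per_step_scr(n, d_in):
    """Simple Cycle Reservoir: one ring multiply per neuron; sign-based input weights
    reduce to additions plus a single scaling by v, counted here as n MACs
    (the exact counting convention is an assumption)."""
    return n + n

def fom(macs, params, mem, macs0, params0, mem0):
    """Illustrative unified Figure of Merit: geometric mean of resource ratios relative
    to a shallow dense-ESN baseline (lower is better). The thesis's actual FoM
    definition may weight these terms differently or include accuracy."""
    return ((macs / macs0) * (params / params0) * (mem / mem0)) ** (1.0 / 3.0)
```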
Extensive empirical evaluations are performed across both regression and classification tasks. For regression, we employ challenging benchmarks including NARMA10, NARMA20, Mackey-Glass (MG-84), Santa Fe Laser time series, and the Sunspot dataset. For classification, we evaluate performance on diverse time-series classification benchmarks: Adiac, ECG5000, FordA, FordB, and Lightning2.
Our hierarchical SCR-based networks consistently match or outperform conventional deep ESNs in both regression and classification tasks, while achieving substantial reductions in computational complexity, memory footprint, and potential energy consumption. Importantly, the reduced-precision pipeline demonstrates that performance degradation under 16-bit arithmetic remains minimal, confirming the robustness of the proposed framework under constrained numerical precision.
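As a rough illustration of the reduced-precision inference step, the sketch below quantizes trained floating-point readout weights to signed 16-bit fixed-point and dequantizes them for a simulated low-precision forward pass. The Q3.12 format (12 fractional bits) and the helper names are assumptions; the thesis's actual pipeline may use a different fixed-point split or hardware-level arithmetic.

```python
import numpy as np

def to_fixed_point(w, frac_bits=12, total_bits=16):
    """Post-training quantization: round float weights to signed 16-bit fixed-point
    with `frac_bits` fractional bits (Q3.12 here, an illustrative format choice)."""
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), qmin, qmax).astype(np.int16)

def from_fixed_point(q, frac_bits=12):
    """Dequantize for a simulated reduced-precision inference pass."""
    return q.astype(np.float32) / (2 ** frac_bits)
```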
Overall, this study offers a principled and practical pathway toward scalable and energy-efficient reservoir computing, contributing to the development of lightweight, hardware-friendly temporal models suitable for future edge AI and embedded intelligence applications.
Recommended Citation
Song, Shuai, "Efficient Hierarchical Design in Reservoir Computing" (2025). Electronic Theses and Dissertations. 3393.
https://egrove.olemiss.edu/etd/3393