Neuro-PID vs Neuro-FOPID: Comparative Analysis Report

Generated: 20-Apr-2026 19:52
Plant: G(s) = 8.29×10⁵ / (s² + 5s)
Simulation: 3 s, unit step at t = 0


Table of Contents

  1. Executive Summary
  2. System Architecture
  3. Step Response Analysis
  4. Integral Performance Indices
  5. Transient Behaviour Detail
  6. Fractional Orders Analysis
  7. Controller Complexity
  8. Discussion
  9. Conclusions

1. Executive Summary

This report compares two neural-network adaptive controllers on the same DC motor plant. The Neuro-PID outputs three classical gains (Kp, Ki, Kd). The Neuro-FOPID extends this to five outputs, additionally scheduling the fractional integration order λ and differentiation order μ.

Metric               Neuro-PID   Neuro-FOPID   Winner
Overshoot (%)        0.1874      0.1611        FOPID
Rise Time (s)        0.0301      0.0228        FOPID
Settling Time (s)    0.0555      0.2609        PID
Steady-State Error   0.001381    0.000210      FOPID
ISE                  0.014382    0.179732      PID
IAE                  0.025554    0.220184      PID
ITAE                 0.006798    0.005530      FOPID

2. System Architecture

[Figure: Network architecture (fig6_architecture.png)]

2.1 Plant

Type-1 system: pole at origin plus stable pole at s = −5 (DC motor model). The integrating character makes Ki scheduling critical — both networks must learn to avoid windup while maintaining zero steady-state error.
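The type-1 character can be verified with a quick state-space sketch (a forward-Euler simulation in Python; the state names and step size are illustrative choices, not taken from the report's Simulink model). A constant input drives the velocity state toward K/5 while the position ramps without bound:

```python
# Minimal Euler sketch of the plant G(s) = 8.29e5 / (s^2 + 5 s).
# States: x1 = y (position), x2 = dy/dt (velocity) -- hypothetical naming.
K = 8.29e5

def simulate_plant(u, T=2.0, dt=1e-5):
    """Integrate the plant under a constant input u with forward Euler."""
    x1 = x2 = 0.0
    for _ in range(int(T / dt)):
        dx1 = x2
        dx2 = -5.0 * x2 + K * u   # pole at s = -5 acts on the velocity state
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

y, ydot = simulate_plant(1.0)
# Velocity settles at K/5 (pole at -5); position integrates it indefinitely.
```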

2.2 Common Network Topology: 3 → 64 → 64 → 32 → n

Both networks receive the same three error-derived inputs:

Port   Symbol   Description
1      e        Tracking error r(t) − y(t)
2      de/dt    Derivative of error
3      ∫e dt    Integral of error

Inputs are z-scored using training statistics. Hidden layers use ReLU. Output layer is linear (regression). Both controllers are online adaptive — the NN runs every simulation timestep.
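The shared topology and forward pass can be sketched as follows (Python/NumPy; the weights are random stand-ins, not the trained values; only the layer sizes and activations come from this section). The parameter count reproduces the totals quoted in 2.3 and 2.4:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 64, 64, 32, 3]   # n = 3 (Neuro-PID); set the head to 5 for Neuro-FOPID

# Random stand-in weights -- the trained values are not given in the report.
Ws = [rng.standard_normal((m, k)) * 0.1 for k, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.maximum(W @ x + b, 0.0)   # hidden layers: ReLU
    return Ws[-1] @ x + bs[-1]           # output layer: linear (regression)

gains = forward(np.array([0.2, -1.5, 0.03]))            # [e, de/dt, int(e) dt]
n_params = sum(W.size + b.size for W, b in zip(Ws, bs))  # 6595 for the 3-output head
```

With the 5-output head the same count gives 6661, matching Section 2.4.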

2.3 Neuro-PID

[e, de, ie] → [3→64→64→32→3] → [Kp, Ki, Kd]
                                         ↓
                              u = Kp·e + Ki·∫e + Kd·de/dt
  • Outputs: 3 | Parameters: 6595 | Orders: fixed (λ=1, μ=1)
  • Labels: MATLAB pidtune() (analytical; seconds per plant condition)
  • Label normalisation: not required
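A minimal discrete-time sketch of this control law, with the three gains supplied by the network each timestep (rectangular integration and a backward-difference derivative are assumed; the report does not state its discretisation):

```python
def pid_step(e, e_prev, e_int, dt, Kp, Ki, Kd):
    """One PID update: returns the control signal and the updated error integral."""
    e_int = e_int + e * dt             # rectangular integration of the error
    de = (e - e_prev) / dt             # backward-difference derivative
    u = Kp * e + Ki * e_int + Kd * de  # u = Kp*e + Ki*int(e) + Kd*de/dt
    return u, e_int
```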

2.4 Neuro-FOPID

[e, de, ie] → [3→64→64→32→5] → [Kp, Ki, Kd, μ, λ]
                                           ↓
                   u = Kp·e + Ki·s⁻λ·e + Kd·s^μ·e
  • Outputs: 5 | Parameters: 6661 | Orders: adaptive λ,μ ∈ [0.6, 1.4]
  • Fractional operators: Oustaloup rational approximation (N=5, ω_b=10⁻⁴, ω_h=10⁴)
  • Labels: fmincon + ISE/overshoot/settling cost, ~44 min on 6 workers
  • Label normalisation: z-score mandatory (Kp/Ki occupy different decades from μ/λ)
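The Oustaloup operator cited above can be reproduced in a standard textbook form with N zero/pole pairs (some variants use 2N+1; this is an illustrative re-derivation, not the report's oustapid code). By pole/zero symmetry, the magnitude at the geometric band centre (√(ω_b·ω_h) = 1 here) is exactly ω^α = 1, and the phase ripples about α·π/2:

```python
import math

def oustaloup_zpk(alpha, N=5, w_b=1e-4, w_h=1e4):
    """Zeros, poles and gain of a rational approximation of s**alpha, |alpha| < 1."""
    r = w_h / w_b
    zeros = [w_b * r ** ((2 * k - 1 - alpha) / (2 * N)) for k in range(1, N + 1)]
    poles = [w_b * r ** ((2 * k - 1 + alpha) / (2 * N)) for k in range(1, N + 1)]
    return zeros, poles, w_h ** alpha

def freq_resp(zeros, poles, gain, w):
    """Evaluate the approximation at s = j*w."""
    s = complex(0.0, w)
    h = complex(gain)
    for z, p in zip(zeros, poles):
        h *= (s + z) / (s + p)
    return h

# Sanity check for s^0.5 at the band centre w = 1:
z, p, k = oustaloup_zpk(0.5)
h = freq_resp(z, p, k, 1.0)
mag, phase = abs(h), math.atan2(h.imag, h.real)  # magnitude 1 by symmetry; phase near pi/4
```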

3. Step Response Analysis

[Figure: Full step response, 0–3 s (fig1_step_response.png)]

3.1 Metrics Table

Metric               Neuro-PID   Neuro-FOPID   Change
Overshoot (%)        0.1874      0.1611        −14.0%
Rise Time (s)        0.0301      0.0228        −24.4%
Settling Time (s)    0.0555      0.2609        +370.2%
Peak Value           1.00187     1.00161       −0.026%
Peak Time (s)        1.1392      1.2783        +12.2%
Steady-State Error   0.001381    0.000210      −84.8%

Negative change = FOPID is better.

3.2 Observations

Overshoot — Both stay below 0.2%. FOPID is marginally better (0.1611% vs 0.1874%). The fractional derivative order μ > 1 provides stronger phase lead, suppressing overshoot slightly more aggressively.

Rise Time — FOPID rises 24.4% faster. The fractional integrator s⁻λ with λ < 1 reduces low-frequency phase lag vs a pure integrator, enabling more aggressive initial proportional action without compromising stability.

Settling Time — PID settles 4.7× faster (0.0555 s vs 0.2609 s). The FOPID’s residual oscillation post-rise inflates settling time. Root cause: the fixed Oustaloup operator poles/zeros do not update when λ/μ change at runtime, creating filter mismatch during the settling phase.

Steady-State Error — FOPID achieves 6.6× lower SSE (0.000210 vs 0.001381). The fractional integrator with λ < 1 reduces windup risk while maintaining DC accuracy.
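The step metrics quoted above are typically extracted from the sampled response as sketched below (Python; a 10–90% rise time and a 2% settling band are assumed, since the report does not state its conventions). A first-order response provides a closed-form check:

```python
import numpy as np

def step_metrics(t, y, yf=1.0, band=0.02):
    """Overshoot (%), 10-90% rise time, 2%-band settling time, steady-state error."""
    overshoot = max(0.0, (float(y.max()) - yf) / yf * 100.0)
    rise = t[np.argmax(y >= 0.9 * yf)] - t[np.argmax(y >= 0.1 * yf)]
    idx = np.where(np.abs(y - yf) > band * yf)[0]          # samples outside the band
    settle = t[min(idx[-1] + 1, len(t) - 1)] if idx.size else t[0]
    sse = abs(yf - float(y[-1]))
    return overshoot, rise, settle, sse

# Check against y = 1 - exp(-t/tau): rise = tau*ln(9), settle = tau*ln(50).
t = np.linspace(0.0, 1.0, 10001)
os_, tr, ts, sse = step_metrics(t, 1.0 - np.exp(-t / 0.01))
```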


4. Integral Performance Indices

[Figure: Integral performance indices (fig4_perf_indices.png)]

Index   Formula       Neuro-PID   Neuro-FOPID   Winner
ISE     ∫e² dt        0.014382    0.179732      PID
IAE     ∫|e| dt       0.025554    0.220184      PID
ITAE    ∫t·|e| dt     0.006798    0.005530      FOPID
ITSE    ∫t·e² dt      0.000064    0.000153      PID

ISE/IAE: PID wins — it accumulates less total error because it settles faster. The FOPID’s slow convergence inflates its integral despite better final accuracy.

ITAE: FOPID wins by 18.6%. ITAE penalises late errors heavily — the FOPID’s superior steady-state precision outweighs its slower gross settling in this time-weighted metric.

Guidance: Use ISE/IAE when convergence speed is paramount. Use ITAE/SSE when long-term accuracy matters most.
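All four indices follow from a sampled error trace with trapezoidal integration, e.g. (a generic sketch, not the report's script; the analytic values for e(t) = e^(−t) provide a sanity check: ISE → 1/2, IAE → 1, ITAE → 1, ITSE → 1/4):

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal integral of samples y over the (possibly non-uniform) grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def performance_indices(t, e):
    return (trapz(e**2, t),           # ISE : integral of e^2
            trapz(np.abs(e), t),      # IAE : integral of |e|
            trapz(t * np.abs(e), t),  # ITAE: time-weighted |e|
            trapz(t * e**2, t))       # ITSE: time-weighted e^2

t = np.linspace(0.0, 10.0, 10001)
ise, iae, itae, itse = performance_indices(t, np.exp(-t))
```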


5. Transient Behaviour Detail

[Figure: Transient detail, 0–0.5 s (fig5_transient_zoom.png)]

[Figure: Tracking error e(t) = r − y (fig2_error.png)]

The 0–0.5 s zoom shows the FOPID reaching the setpoint earlier but with residual oscillatory decay. The PID crosses the setpoint with a very small single overshoot then converges monotonically. The error plot confirms the PID achieves clean exponential-like decay while the FOPID error shows slow oscillatory convergence toward zero.

This is consistent with the Oustaloup approximation introducing additional poles that interact with the closed-loop dynamics as the operating point shifts. In a fully dynamic implementation (bilinear Oustaloup recomputed each step), these artefacts should be largely eliminated.


6. Fractional Orders Analysis

Parameter              Range        Training Mean   Std     Role
λ (integrator order)   [0.6, 1.4]   0.982           0.124   Integral phase; λ < 1 reduces lag
μ (derivative order)   [0.6, 1.4]   1.284           0.043   Derivative bandwidth; μ > 1 = super-diff

λ ≈ 0.98 (near-integer): The optimiser found near-classical integration optimal for this plant. However λ varies dynamically during the transient — the NN can reduce λ early (less aggressive integration, preventing windup) and increase it late (tighter steady-state accuracy). This dynamic scheduling is impossible in a fixed-parameter FOPID.

μ ≈ 1.28 (super-unitary): Stronger-than-classical differentiation provides enhanced phase lead for this double-integrator-like plant. The low Std (0.043) indicates the NN learned a nearly constant optimal μ across operating conditions — making μ the most “fixed” of the five outputs in practice.


7. Controller Complexity

Aspect                       Neuro-PID             Neuro-FOPID
Network outputs              3                     5
Parameters                   6595                  6661
Label generation             pidtune() (seconds)   fmincon + oustapid (~44 min)
Label normalisation          Not required          Z-score all 5 outputs
Additional Simulink blocks   None                  2× Oustaloup State-Space
Filter states                0                     10 (2 × N=5 operators)
Compute per timestep         Low                   Medium

8. Discussion

8.1 Where Neuro-FOPID Wins

  1. Precision: Lower overshoot (0.1611% vs 0.1874%), faster rise, better SSE (0.000210 vs 0.001381), better ITAE.
  2. Richer gain space: The 5D parameter space lets the NN simultaneously shape transient and steady-state behaviour with more degrees of freedom.
  3. Fractional plant affinity: On plants with inherent fractional dynamics (electrochemical cells, viscoelastic structures), adaptive μ/λ provide exact-order matching unavailable to integer PID.
  4. Phase robustness: Fractional-order controllers are known to provide flat phase response over a frequency band (iso-damping property), offering inherent robustness to gain uncertainty.

8.2 Where Neuro-PID Wins

  1. Settling speed: 4.7× faster (0.0555 s vs 0.2609 s) — decisive for high-throughput or real-time applications.
  2. ISE/IAE: Lower total accumulated error (ISE: 0.014382 vs 0.179732).
  3. Simplicity: No fractional operator blocks, no z-score normalisation, ~10× shorter training time.
  4. No approximation artefacts: Clean monotonic settling without Oustaloup mismatch oscillation.

8.3 The Settling Time Paradox

The FOPID’s slower settling is architectural, not fundamental. Because the Oustaloup SS matrices are frozen at nominal λ/μ, runtime changes in the NN’s fractional order outputs create a gain-phase mismatch between intent and delivery. Three paths to resolution:

  • Variable-order bilinear Oustaloup (MATLAB Function block, 11 sections, recomputed each Ts): eliminates mismatch, adds compute overhead
  • Scheduled LUT: pre-compute Oustaloup coefficients on a (λ,μ) grid, interpolate at runtime
  • Implicit FOPID via NN: absorb the full control law (including fractional integration) into the NN, outputting u directly
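The scheduled-LUT option is particularly cheap for Oustaloup filters, because log ω_k is linear in the order α, so log-linear interpolation between grid entries reproduces the exact zero/pole frequencies. A Python sketch (the grid covers the fractional part of the order, since the rational form assumes |α| < 1; orders above 1 are conventionally realised as s·s^(α−1)):

```python
import numpy as np

N, W_B, W_H = 5, 1e-4, 1e4   # operator settings from Section 2.4
R = W_H / W_B

def oustaloup_freqs(alpha):
    """Exact zero/pole frequencies of the s**alpha approximation (|alpha| < 1)."""
    k = np.arange(1, N + 1)
    zeros = W_B * R ** ((2 * k - 1 - alpha) / (2 * N))
    poles = W_B * R ** ((2 * k - 1 + alpha) / (2 * N))
    return zeros, poles

# Pre-computed grid over the fractional part of the order.
grid = np.linspace(-0.9, 0.9, 19)
lut = [oustaloup_freqs(a) for a in grid]

def lookup(alpha):
    """Log-linear interpolation between grid entries -- exact for this filter,
    since log(w_k) is linear in alpha."""
    i = max(min(np.searchsorted(grid, alpha) - 1, len(grid) - 2), 0)
    w = (alpha - grid[i]) / (grid[i + 1] - grid[i])
    z = np.exp((1 - w) * np.log(lut[i][0]) + w * np.log(lut[i + 1][0]))
    p = np.exp((1 - w) * np.log(lut[i][1]) + w * np.log(lut[i + 1][1]))
    return z, p
```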

9. Conclusions

  1. Neuro-FOPID is more precise — lower overshoot, faster rise, superior SSE and ITAE.

  2. Neuro-PID settles faster — 4.7× lower settling time and better ISE/IAE over the full horizon.

  3. The FOPID settling limitation is architectural — a variable-order implementation resolves it.

  4. Training cost is 10–20× higher for FOPID — fmincon+oustapid vs analytical pidtune.

  5. Neither controller is universally better — the optimal choice depends on application:

    Application                              Recommended
    Fast servo, robotics                     Neuro-PID
    Precision positioning, process control   Neuro-FOPID
    Fractional-order plant                   Neuro-FOPID
    Embedded / resource-constrained          Neuro-PID
    Noise-contaminated measurements          Neuro-FOPID (μ < 1)

Appendix: All Figures

Fig   File                       Description
1     fig1_step_response.png     Full step response, 0–3 s
2     fig2_error.png             Tracking error e(t) = r − y
3     fig3_metrics_bar.png       Step metrics bar chart (log scale)
4     fig4_perf_indices.png      Integral performance indices
5     fig5_transient_zoom.png    Transient detail 0–0.5 s
6     fig6_architecture.png      Network architecture

Appendix: Raw Data

Neuro-PID  : OS=0.1874%  Tr=0.0301s  Ts=0.0555s  SSE=0.001381  ISE=0.014382  IAE=0.025554  ITAE=0.006798  ITSE=0.000064
Neuro-FOPID: OS=0.1611%  Tr=0.0228s  Ts=0.2609s  SSE=0.000210  ISE=0.179732  IAE=0.220184  ITAE=0.005530  ITSE=0.000153

Report auto-generated by MATLAB — 20-Apr-2026 19:52:13