CONVERGENCE IN NEURAL NETS.
In designing a neural net, whether for biological modeling, cognitive simulation, or numerical computation, it is usually of prime importance to know that the corresponding dynamical system is convergent, meaning that every trajectory converges to a stationary state (which can depend on the initial state of the trajectory). A weaker condition, but almost as useful in practice, is for the trajectory of almost every initial state (in the sense of Lebesgue measure) to converge; such a system is called almost convergent. Another useful but slightly weaker property is for a system to be quasiconvergent, meaning that every trajectory approaches asymptotically a bounded set of equilibrium points (such a set is necessarily connected); an individual trajectory with this property will also be called quasiconvergent. Finally, there is almost-quasiconvergence, defined analogously: almost every trajectory is quasiconvergent. The author reviews several ways to guarantee these desirable convergence-like properties for certain kinds of systems of differential equations that can be used for neural nets. It is noted that many of these methods were originally motivated by biological models.
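As a concrete illustration of the convergent case, the following sketch (not taken from the review; the additive Hopfield-type model and all parameter values are assumptions for illustration) simulates a network of the form dx/dt = -x + W tanh(x) + b with a symmetric weight matrix W. Symmetry makes the dynamics gradient-like, so trajectories approach a stationary state, which we detect by the vanishing of the vector field along the trajectory.

```python
import numpy as np

# Hypothetical additive network:  dx/dt = -x + W @ tanh(x) + b.
# A symmetric W yields an energy (Lyapunov) function, so every
# trajectory converges to an equilibrium -- the "convergent" property.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                 # symmetric weights
b = rng.standard_normal(n)

def f(x):
    """Right-hand side of the network ODE."""
    return -x + W @ np.tanh(x) + b

# Forward-Euler integration from a random initial state.
x = rng.standard_normal(n)
dt = 0.01
for _ in range(20000):
    x = x + dt * f(x)

# At a stationary state the vector field vanishes.
residual = np.linalg.norm(f(x))
print(residual < 1e-3)
```

An asymmetric W, by contrast, can produce sustained oscillations, so none of the convergence properties above would hold; the symmetric case is the classical setting in which convergence can be guaranteed.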