eScholarship
Open Access Publications from the University of California

UC Berkeley Previously Published Works

Convergence in Neural Nets

Abstract

In designing a neural net, whether for biological modeling, cognitive simulation, or numerical computation, it is usually of prime importance to know that the corresponding dynamical system is convergent, meaning that every trajectory converges to a stationary state (which can depend on the initial state of the trajectory). A weaker condition, but nearly as useful in practice, is for the trajectory of almost every initial state (in the sense of Lebesgue measure) to converge; such a system is called almost convergent. Another useful but slightly weaker property is for a system to be quasiconvergent, meaning that every trajectory asymptotically approaches a bounded set of equilibrium points (such a set is necessarily connected); an individual trajectory with this property will also be called quasiconvergent. Finally, there is almost-quasiconvergence: almost every trajectory, again in the sense of Lebesgue measure on initial states, is quasiconvergent. The author reviews several ways to guarantee these desirable convergence-like properties for certain kinds of systems of differential equations that can be used for neural nets. It is noted that many of these methods were originally motivated by biological models.
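The convergence property described above can be illustrated with a minimal numerical sketch. The example below is an assumption for illustration only, not a construction from this paper: it simulates a standard continuous-time additive net, du_i/dt = -u_i + sum_j W_ij tanh(u_j) + b_i, with a symmetric weight matrix. Symmetric weights are a classical hypothesis (Hopfield/Cohen-Grossberg-type results) under which every trajectory converges to an equilibrium; the script integrates one trajectory by Euler's method and checks that it settles at a stationary state.

```python
# Illustration only (assumed model, not the paper's): additive net
#   du_i/dt = -u_i + sum_j W_ij * tanh(u_j) + b_i
# with symmetric W, a classical example of a convergent system.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
W = (A + A.T) / 2          # symmetric weights: the key convergence hypothesis
b = rng.normal(size=n)     # constant external input

def step(u, dt=0.01):
    """One Euler step of the dynamics."""
    return u + dt * (-u + W @ np.tanh(u) + b)

u = rng.normal(size=n)     # an arbitrary initial state
for _ in range(20000):     # integrate up to time t = 200
    u = step(u)

# At a stationary state the right-hand side vanishes, so this residual
# should be small once the trajectory has converged.
residual = np.linalg.norm(-u + W @ np.tanh(u) + b)
print(residual)
```

Starting from a different initial state may reach a different equilibrium; convergence means only that each trajectory settles somewhere, not that the limit is unique.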

