There are still many challenging issues to be addressed, such as how to judge which synapses should be pruned, how synapses are formed, how neuronal networks evolve toward optimal energy efficiency, and how hub neurons in neuronal networks appear. Nonetheless, there is a potential way to study the structural changes of neuronal networks by integrating these factors into a computational model. Here, a neuronal network model is developed to simulate the dynamic structural evolution process of neuronal networks.
In this model, the change in synaptic weights follows a synaptic learning rule, and the formation and elimination of synaptic connections can occur at the same time.
In the following, we use this model to (1) study the change of network connectivity during the decline of the synaptic density, (2) reveal the relationship between network efficiency and the decline of the synaptic density, and (3) investigate the degree distribution of the neuronal network model. Finally, we discuss the simulation results of this paper in combination with previous studies. Network structure generally serves as a critical driver of complex functions in a broad class of systems across many research fields, such as gene regulatory networks (Hasty et al.).
Consider a neuronal network of N neurons, in which the leaky integrate-and-fire model is used to simulate the dynamics of the neuron membrane potential (Dayan and Abbott; Izhikevich). The values of e_ij and e_kj can only be 1 or 0, indicating the presence or absence of the corresponding connection, respectively. M denotes the number of external signal sources. Like neurons, the external signal sources generate spikes at a given frequency, and their output signals can be calculated from Equation 3 according to the firing times of their latest spikes.
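For readers who want to experiment with the dynamics, the sketch below shows a generic leaky integrate-and-fire update in Python. It is not the paper's exact formulation: the membrane time constant and the resting, threshold, and reset potentials (tau_m, v_rest, v_thresh, v_reset) are illustrative assumptions, and the input current is assumed to be the summed synaptic and external drive already weighted by e_ij and e_kj.

```python
import numpy as np

def lif_step(v, input_current, dt=0.1, tau_m=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """One Euler step of a generic leaky integrate-and-fire neuron.

    v             : membrane potentials of all neurons (mV), NumPy array
    input_current : summed synaptic and external drive per neuron
    Returns the updated potentials and a boolean spike mask.
    """
    # Leak toward the resting potential plus the summed input drive.
    v = v + (-(v - v_rest) + input_current) * (dt / tau_m)
    # Neurons crossing the threshold emit a spike and are reset.
    spikes = v >= v_thresh
    v[spikes] = v_reset
    return v, spikes

# Example: five neurons receiving a constant drive for 100 steps.
v = np.full(5, -65.0)
for _ in range(100):
    v, spikes = lif_step(v, np.full(5, 20.0))
```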
Obviously, one neuron can simultaneously receive signals from other neurons within the network as well as signals from outside the network, and its membrane potential changes over time. At the microcircuit level, numerous studies have shown that neuronal networks in the brain are not organized randomly (Song et al.); in particular, the probability that two neurons are connected depends on the distance between them. To reflect this distance dependence, neurons in the neuronal network are uniformly placed on the surface of a unit sphere, as shown in Figure 1A, and are connected to each other according to a distance-dependent connection probability function defined later.
According to experimental findings (Markram et al.), the network contains both excitatory and inhibitory neurons. For simplicity, inhibitory neurons are used to balance the activity of the entire network (Hattori et al.). In the neuronal network, excitatory neurons can be connected to any other neurons, but inhibitory neurons can only be connected to excitatory neurons (Salinas and Sejnowski). That is, there are three types of synaptic connections in the neuronal network: those from excitatory to excitatory neurons, those from excitatory to inhibitory neurons, and those from inhibitory to excitatory neurons, as shown in Figure 1B.
Note that signal transmission via synaptic connections is unidirectional, but two neurons may also be linked by a pair of synaptic connections with opposite directions. Here, to match the developmental characteristics of neuronal networks in the brain, the neurons are initially connected to each other randomly but densely, as shown in Figure 1C. During network evolution, unimportant synapses can be selectively pruned to optimize the network structure according to the evolution rule proposed later.
Figure 1. Structural organization of the neuronal network. (A) Excitatory and inhibitory neurons are placed uniformly on the surface of a unit sphere. (B) In the network, excitatory neurons can be connected to any neurons, but inhibitory neurons can only be connected to excitatory neurons.
(C) Before the evolution of the network, the neurons are randomly connected to each other to build a large number of connection paths, which can be selectively pruned during the subsequent evolution. Obviously, the neuronal network is directed. For simplicity, only a portion of the neurons and synaptic connections are drawn in the figure.
Neurons in the brain interact with each other through synaptic connections, and brain functions, such as the formation of memories and the learning of actions, are closely related to changes in synaptic connections (Martin et al.). Researchers have proposed various theories to interpret these changes. The most influential theory is spike-timing-dependent plasticity (STDP), which describes changes in synaptic connections as a function of presynaptic and postsynaptic neuronal activities (Hebb; Caporale and Dan). Nowadays, an increasing number of studies indicate that in addition to neuronal activities, other factors can also modulate the changes of synaptic connections, such as neuromodulators and glia (Seol et al.).
In our model, an energy-modulated STDP, which serves as a synaptic learning rule, is introduced to simulate the changes of synaptic weights (Yuan et al.). E_single^trans is the metabolic energy expended per action potential in synaptic transmission, and E_single^integ is the metabolic energy expended per action potential in dendritic integration. N_j^trans is the number of action potentials propagating to the j-th neuron before time t, and N_j^integ is the number of action potentials generated by the j-th neuron before time t.
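As a rough illustration of how these quantities can be tracked in simulation, the sketch below accumulates the per-neuron energy terms defined above and applies a hypothetical energy-dependent scaling to a plain STDP increment. The specific energy values, the target energy level, and the tanh-shaped modulation are illustrative assumptions, not the form used by Yuan et al.

```python
import numpy as np

# Assumed per-spike energy costs (arbitrary units); the paper takes its
# values from the cited experimental literature.
E_SINGLE_TRANS = 1.0   # energy per action potential in synaptic transmission
E_SINGLE_INTEG = 2.0   # energy per action potential in dendritic integration

def energy_metabolism(n_trans, n_integ, window):
    """Average metabolic energy per unit time for a neuron.

    n_trans : action potentials propagating to the neuron (N_j^trans)
    n_integ : action potentials generated by the neuron (N_j^integ)
    window  : length of the counting window (s)
    """
    return (E_SINGLE_TRANS * n_trans + E_SINGLE_INTEG * n_integ) / window

def energy_modulated_stdp(dw_stdp, energy, target_energy, c=0.5):
    """Hypothetical energy modulation of a plain STDP increment.

    The STDP increment dw_stdp is scaled down when the neuron's energy
    consumption exceeds an assumed target level, and scaled up when it
    falls below it, pushing the network back toward that level.
    """
    modulation = 1.0 - c * np.tanh((energy - target_energy) / target_energy)
    return dw_stdp * modulation
```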
According to experimental findings, the constant c is fixed at 0. In this case, the increment or decrement in synaptic weights depends only on the electrical activities of the presynaptic and postsynaptic neurons. The evolution of neuronal networks in the brain endows humans with a powerful ability to learn complex skills and adapt to changing environments. During this evolution, the elimination and formation of synaptic connections occur at the same time.
The elimination of synaptic connections does not occur randomly but relies on neuronal electrical activity (Tessier and Broadie; Stoneham et al.). Activity-dependent pruning mechanisms ensure that appropriate synaptic connections are conserved, while inappropriate ones are eliminated (Cody and Freeman). However, since the cellular and molecular mechanisms of activity-dependent synaptic pruning have not been fully revealed, it is somewhat difficult to determine directly which synaptic connections are appropriate or inappropriate.
Nowadays, it is well known that in many kinds of networks, frequently used connections are generally conserved, while infrequently used connections are deleted (Navlakha et al.). Thus, it is possible to indirectly determine whether synaptic connections are appropriate according to their usage frequency. Because of the synaptic learning rule mentioned above, the weights of frequently used synaptic connections can increase, while those of infrequently used synaptic connections can decrease.
Therefore, the elimination of synaptic connections is implemented as follows: during the evolution, the synaptic weights throughout the neuronal network are sorted in ascending order, and the first n synaptic connections, i.e., those with the smallest weights, are then eliminated at regular intervals.
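A minimal sketch of this elimination step is shown below, assuming the connections are stored in a dictionary keyed by (presynaptic, postsynaptic) neuron indices; this data structure is an assumption for illustration.

```python
def prune_weakest(weights, n):
    """Eliminate the n synaptic connections with the smallest weights.

    weights : dict mapping (pre, post) neuron indices to synaptic weight,
              containing only currently existing connections
    n       : number of connections to eliminate in this pruning step
    """
    # Sort existing connections by weight in ascending order and delete
    # the first n, mirroring the elimination rule described in the text.
    ranked = sorted(weights.items(), key=lambda item: item[1])
    for (pre, post), _w in ranked[:n]:
        del weights[(pre, post)]
    return weights

# Example: prune the two weakest of four connections.
w = {(0, 1): 0.8, (1, 2): 0.1, (2, 0): 0.05, (0, 2): 0.4}
prune_weakest(w, 2)   # leaves {(0, 1): 0.8, (0, 2): 0.4}
```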
Similar to the elimination of synaptic connections, their formation is also not random but exhibits a form of preferential attachment and distance dependence (Boccaletti et al.).
In many continuously growing real-world networks, a node's probability of receiving a new edge is proportional to the intensity of its activity, which was termed preferential attachment by Barabási and Albert. This mechanism also exists in neuronal networks in the brain, i.e., neurons with stronger activity are more likely to receive new synaptic connections.
Preferential attachment may result in the emergence of hub neurons, which makes neuronal networks scale-free (Johnson et al.). It should be noted, however, that preferential attachment is not the only mechanism affecting the formation of synaptic connections. Otherwise, the neuron with the most synaptic connections would gradually connect to all other neurons in the network, which is obviously not in accord with experimental findings (Sporns and Betzel). Studies have further indicated that the connection probability between two neurons is related not only to the activity intensity of each neuron but also to the distance between them, and it decays exponentially with increasing distance (Ercsey-Ravasz et al.).
Even for neurons with a large number of synaptic connections, the probability of forming a synaptic connection between two faraway neurons is still very low, despite preferential attachment. Therefore, the distance-dependent connection probability P_ij from the i-th to the j-th neuron in the neuronal network is first defined as a function that decays exponentially with the distance between the two neurons. Here, for simplicity, the formation of a new synaptic connection is assumed to be an independent and identically distributed event.
Therefore, the product of their respective probabilities of owning a new synaptic connection, i.e., the joint probability that a new connection forms between the two neurons, can be obtained. Based on previous studies of synaptic density during the development of neuronal networks in the brain (Johnson et al.), the numbers of connections eliminated and formed at each interval, n and m, are specified.
According to the connection probability defined above, the formation of synaptic connections can be induced in the network in a biologically reasonable way. At intervals, the connection probabilities between any two neurons in the network are calculated, and then the first m pairs of neurons in descending order of probability are connected to each other. From the above evolution rule for the elimination and formation of synaptic connections, it can be seen that n synaptic connections are eliminated and m synaptic connections are formed at each interval.
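The sketch below illustrates one plausible way to implement this formation step. The use of each neuron's current connection count as a proxy for its activity, the exponential kernel exp(-lam*d), and the decay constant lam are illustrative assumptions; the paper defines its own probability function.

```python
import numpy as np
from itertools import permutations

def form_connections(positions, degrees, existing, m, lam=1.0):
    """Form m new directed connections by combining preferential attachment
    with exponential distance decay (a hypothetical scoring function).

    positions : (N, 3) array of neuron coordinates on the unit sphere
    degrees   : per-neuron connection counts, used here as an activity proxy
    existing  : set of (pre, post) pairs that already exist
    m         : number of new connections to add in this step
    lam       : assumed spatial decay constant of the exponential kernel
    """
    scores = []
    for i, j in permutations(range(len(positions)), 2):
        if (i, j) in existing:
            continue
        # Euclidean (chord) distance between the two neurons on the sphere.
        d = np.linalg.norm(positions[i] - positions[j])
        # Product of the two neurons' attachment propensities, weighted by
        # an exponentially decaying distance term.
        p = degrees[i] * degrees[j] * np.exp(-lam * d)
        scores.append((p, (i, j)))
    # Connect the m candidate pairs with the highest scores.
    scores.sort(key=lambda s: s[0], reverse=True)
    for _p, pair in scores[:m]:
        existing.add(pair)
    return existing
```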
This means that the net change in the synaptic density of the neuronal network at each interval is determined by the difference between n and m. During the simulation, the parameters are set to the above values unless otherwise specified. In addition, the initial values of all connection weights w_ij and b_kj in the network are generated randomly from a uniform distribution.
The numerical simulation of the above model involves changes in the network structure, the calculation of energy metabolism, and the updating of synaptic weights and neuron membrane potentials. It should be noted that the time scales of these processes are different. In the numerical simulation, the synaptic weights and neuron membrane potentials are updated at each time step.
The energies consumed by synaptic transmission and dendritic integration are calculated by counting the total number of spikes within a period of time; if this period is too short, the calculated result is not accurate enough and fluctuates sharply, so the energy metabolism is evaluated over a longer window than the per-step updates. As for the change of the network structure, its time scale is related to the parameters of the synaptic learning rule. After the network structure changes, the synaptic weights are adjusted automatically according to the synaptic learning rule, and when the synaptic weights become stable, the next change of the network structure begins.
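To make the separation of time scales concrete, here is a minimal scheduling skeleton in Python. The callback structure and the window and interval lengths are illustrative assumptions; only the ordering of the three time scales follows the text.

```python
def simulate(n_windows, step_fn, energy_fn, structure_fn,
             energy_window_steps=1000, structure_interval=50):
    """Run the three nested time scales described in the text.

    step_fn      : updates membrane potentials and synaptic weights (every step)
    energy_fn    : turns spike counts from the last window into energy values
    structure_fn : prunes and forms synaptic connections (slowest time scale)
    """
    for window in range(1, n_windows + 1):
        for _ in range(energy_window_steps):
            step_fn()          # fastest: potentials and weights every time step
        energy_fn()            # slower: energy from spike counts once per window
        if window % structure_interval == 0:
            # slowest: structural evolution at fixed intervals here
            # (the paper instead waits for the weights to stabilize)
            structure_fn()

# Example with no-op callbacks, just to show the call pattern.
simulate(100, lambda: None, lambda: None, lambda: None)
```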
During the development of neuronal networks, synapses are overproduced and then pruned back over time, whereas in engineered networks, connections are initially sparse and are then added over time (Navlakha et al.).
These differences in construction strategy give neuronal networks some unique characteristics during development. Here, we therefore investigate the synaptic weights in the neuronal network, which directly reflect the tightness of network connectivity, together with the synaptic density and energy metabolism.
We assume that the continuously generated external signals obey a normal distribution N(40, 10), and neurons in the network receive these signals at random. It can be seen from Figure 2A that, before network evolution, the average synaptic weight already changes due to the synaptic learning rule (Yuan et al.). The distributions of the synaptic weights at different moments are also shown in Figures 2B–E. Initial values of the synaptic weights are assigned randomly according to a uniform distribution, as shown in Figure 2B.
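As an illustration of how such external inputs could be generated, the sketch below draws each source's firing rate from N(40, 10), interpreted here, as an assumption, as a mean of 40 Hz with a standard deviation of 10 Hz, and emits spikes as a Poisson process at that rate; the paper specifies only the rate distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def external_spike_trains(n_sources, duration, dt=0.001):
    """Spike trains for external sources with rates drawn from N(40, 10).

    Each source is modeled, as a simplifying assumption, as a Poisson
    process at its drawn rate (in Hz); negative draws are clipped to zero.
    """
    rates = np.clip(rng.normal(loc=40.0, scale=10.0, size=n_sources), 0.0, None)
    n_steps = int(duration / dt)
    # spikes[s, t] is True when source s fires in time bin t.
    return rng.random((n_sources, n_steps)) < rates[:, None] * dt

spikes = external_spike_trains(n_sources=10, duration=1.0)
```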
As the average synaptic weight stabilizes, the synaptic weights exhibit an approximately normal distribution, as shown in Figure 2C. After long-term stabilization of the average synaptic weight, its distribution is shown in Figure 2D.
Obviously, compared with Figure 2C, the distribution of the synaptic weights deviates further from a normal distribution, which indicates that the two distributions still differ slightly. In other words, even though the average synaptic weight has stabilized, individual synaptic weights continue to regulate themselves slightly.
Figure 2. Changes in several characteristics of the neuronal network during evolution. (A) Changes in the average synaptic weight, synaptic density, and average energy metabolism. (B–E) The distributions and cumulative frequency curves of the synaptic weights at 0, , 2,, and 6, s.
In our numerical simulation, the energy metabolism of the entire network is characterized by the average energy metabolism of all neurons. Initially, the average energy metabolism follows a trend similar to that of the average synaptic weight. It can then be inferred that the dramatic change in the synaptic weights quickly brings the average energy metabolism to the biologically reasonable energy level regulated by the synaptic learning rule.
When the average energy metabolism becomes stable, the slight change in the synaptic weights should be derived from the interactions between neurons in the network. Once the distribution of the synaptic weights becomes substantially stable, the network starts evolving according to the given evolution rule.
From Figure 2A, it can also be found that although the formation and elimination of synaptic connections can occur simultaneously during evolution, the synaptic density of the network still decreases, which is similar to neuronal networks in the brain (Goyal and Raichle; Navlakha et al.).
Furthermore, the average synaptic weight increases gradually as the synaptic density decreases. Once the average synaptic weight stabilizes again, the distribution of the synaptic weights is as shown in Figure 2E. Compared with the distribution before evolution, the synaptic weights increase significantly after evolution, showing an irregular distribution.
Although the average synaptic weight and synaptic density change dramatically during evolution, the average energy metabolism remains almost unchanged. It is therefore worth examining whether the average energy metabolism stays constant throughout the entire decline of the synaptic density.
Figure 2F shows that the average synaptic weight increases as the synaptic density decreases, but the average energy metabolism is not always stable. When the synaptic density drops to a very low level, the average energy metabolism begins to decline. The lower the synaptic density, the lower the average energy metabolism, and the average synaptic weight tends to the upper limit set by the network model.
It is well-known that synaptic transmission and dendritic integration are two main metabolically expensive processes in information processing in neuronal networks (Howarth et al.). During development, the connectivity of neuronal networks must ensure that the energy metabolism of each neuron is at a normal level.
In our proposed network, the synaptic weights and the synaptic density, which describe the connectivity from different perspectives, determine the amount of information received by neurons, thus affecting synaptic transmission and dendritic integration. It can be concluded from the above simulation results that although neuronal network construction is determined by the genes of the organism (Thompson et al.), its connectivity is further shaped during development by the constraint of keeping each neuron's energy metabolism at a normal level.
The synaptic weights and the synaptic density counterbalance each other so that the energy metabolism of each neuron is at a normal level. In addition, the synaptic weights increase during evolution as the synaptic density decreases, suggesting that the efficiency of frequently used synaptic connections in the network has been improved.