The human brain is constructed of a vast network of interconnected entities. These entities function together to enable us to learn and to perform a diverse array of tasks. The neuron is responsible for this learning process, and it is made up of three main parts: the dendrites, the soma, and the axon. The dendrites form the input network, a set of branches that connect to tens of thousands of other neurons. This interconnectedness is what underlies much of our adaptability and creativity. The next element of the neuron is the soma, the processing element that determines the threshold at which the neuron will respond. The soma integrates the constant flow of incoming chemical signals, and once the integrated signal reaches a certain threshold value, an output is stimulated. When a signal is finally generated, it is conducted down the axon and continues on to the dendrites of other neurons or to muscle cells.
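A rough way to picture this integrate-to-threshold behavior is sketched below; the numbers, the running sum, and the function name are illustrative assumptions, not a biological model.

```python
# Illustrative sketch of integrate-to-threshold firing (not a biological model).

def soma_response(stimulation, threshold=1.0):
    """Integrate incoming chemical stimulation; fire once the threshold is reached."""
    integrated = 0.0
    for signal in stimulation:
        integrated += signal
        if integrated >= threshold:
            return "signal conducted down the axon"
    return "no output generated"

print(soma_response([0.2, 0.3, 0.4]))        # stays below the threshold
print(soma_response([0.4, 0.4, 0.3, 0.2]))   # reaches the threshold and fires
```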
During the learning process, adjustments are made to these response thresholds, causing the soma to become more excitable. This allows the neuron to generate an output signal at lower integrated levels of chemical stimulation. As we become more familiar with certain tasks, a lower level of excitation is needed, which is why such tasks come to feel like “second nature” and require less effort to perform.
Our knowledge of how we learn is somewhat limited, but the principles found in biological neural networks can be applied to artificial networks as well. As in their biological counterparts, artificial networks have processing elements analogous to neurons, with interconnected input pathways similar to dendrites. Each processing element performs a summation, or integration, of its inputs, which is compared against the element's threshold to determine its output. The elements are arranged in layers, with each element connected to numerous others both within its own layer and in other layers. This arrangement forms a highly interconnected array of artificial neurons, which increases the flexibility of the system and allows it to more closely mimic the capabilities of its biological counterpart.
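A minimal sketch of such a processing element and a small layered arrangement might look like the following; the layer sizes, random weights, and simple step-style threshold are illustrative assumptions, not a specific published architecture.

```python
import random

def element_output(inputs, weights, threshold):
    """One processing element: a weighted summation compared against a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def layer_output(inputs, layer_weights, threshold=0.5):
    """A layer of elements, each connected to every output of the previous layer."""
    return [element_output(inputs, w, threshold) for w in layer_weights]

# Two layers: 3 inputs -> 4 hidden elements -> 2 output elements.
random.seed(0)
hidden_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output_weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

inputs = [0.9, 0.1, 0.4]
hidden = layer_output(inputs, hidden_weights)
print(layer_output(hidden, output_weights))
```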
Artificial networks learn in one of two basic ways. In the supervised method, the system is told what the output should be for a given set of input values. The system then processes the inputs and tries to reproduce that output within an acceptable error margin. If the desired output is not produced, the system adjusts the interconnection weights of its processing elements until the error becomes acceptable. These adjustments are made to the integrating equations that determine the excitability of each element within the network. Supervised learning is used to build networks that can generate models when there are a vast number of input variables to evaluate. The other method, unsupervised learning, allows the system to make adjustments without a desired output to compare against; the system adjusts itself so as to discover patterns and interrelations within the input data.
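As an illustration of the supervised idea, the sketch below uses a simple perceptron-style update rule, one common choice rather than a specific method endorsed here: the weights are adjusted whenever the produced output differs from the desired one, until the error is acceptable.

```python
# Supervised learning sketch using a perceptron-style weight update (illustrative choice).

def predict(inputs, weights, threshold=0.5):
    """Weighted summation with a fixed threshold deciding the element's output."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def train(examples, n_inputs, rate=0.1, max_epochs=100):
    weights = [0.0] * n_inputs
    for _ in range(max_epochs):
        total_error = 0.0
        for inputs, desired in examples:
            error = desired - predict(inputs, weights)
            total_error += abs(error)
            # Adjust the interconnection weights in proportion to the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        if total_error == 0:  # error is acceptable; stop adjusting
            break
    return weights

# Learn a simple AND-like relation between two inputs and one desired output.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train(data, n_inputs=2)
print([predict(x, w) for x, _ in data])
```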
Many large corporations have begun to use neural networks in a wide range of applications. Medical institutions have started investigating their benefits in complicated areas such as patient diagnosis: given a set of symptoms as input, the computer could search the vast number of possible afflictions, greatly assisting in identifying illnesses and assessing their severity. Financial applications include stock market forecasting, fraud detection, and foreign market trend analysis. Research is also being done on using neural network software for optical character recognition of cursive handwriting.
The potential uses of neural network technology form a widely diversified market with many possibilities. I believe that as our understanding of biological networks and learning increases, artificial neural networks will continue to advance. Unlike typical computers, these systems are not confined to explicitly programmed algorithms, so they are not limited to a narrow range of applications. In my opinion, the intricate nature of the nonlinear mathematics involved and the growing complexity of ever-larger numbers of interconnected processing elements will make the evolution of artificial networks difficult, but the resulting systems extremely capable.