Neural Networks and Applications
Neural networks are computational systems loosely modeled on the brain's architecture. Built from layers of interconnected nodes, they learn to recognize patterns, classify data, and approximate complex functions by adjusting the strength of their connections through processes like backpropagation. Variants such as recurrent networks, radial basis function networks, and self-organizing maps each offer different inductive biases, suited to tasks ranging from time-series prediction to unsupervised clustering, while deep learning has scaled these ideas to image recognition, language modeling, and scientific discovery at a level that was impractical just a decade ago. Despite impressive empirical results, researchers are still working to understand why deep networks generalize so well from limited data, how to make their reasoning interpretable, and whether current architectures can be extended to match the sample efficiency and adaptability of biological intelligence.
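The core learning mechanism mentioned above, adjusting connection strengths via gradient descent, can be sketched on a single neuron. This is a minimal illustrative example, not drawn from any paper listed on this page; the function names, data, and hyperparameters are hypothetical, and real networks would use a library such as PyTorch.

```python
import math

def sigmoid(x):
    # Squashing nonlinearity used by the neuron.
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, lr=0.5, epochs=200):
    """Fit weight w and bias b so the neuron maps each input x
    toward its target y, using gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)        # forward pass: prediction
            grad = (p - y) * p * (1 - p)  # backprop: dLoss/dz through sigmoid
            w -= lr * grad * x            # adjust connection strength
            b -= lr * grad
    return w, b

# Hypothetical task: learn a threshold so inputs above 0 output ~1.
samples = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_neuron(samples)
print(sigmoid(w * 2 + b) > 0.5, sigmoid(w * -2 + b) < 0.5)
```

The same chain-rule logic, applied layer by layer through matrices of weights, is what backpropagation performs in the deep architectures discussed in this topic's top papers.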
- Works: 251,481
- Total citations: 4,419,660
- Keywords: Neural Networks, Self-Organizing Maps, Backpropagation Learning, Radial Basis Function Networks, Deep Learning, Artificial Neural Networks
Top papers in Neural Networks and Applications
Ordered by total citation count.
- Random Forests: 123,154 citations (OA)
- Long Short-Term Memory: 96,303 citations
- Deep learning: 80,188 citations (OA)
- Gradient-based learning applied to document recognition: 57,535 citations (OA)
- Particle swarm optimization: 47,084 citations
- A Threshold Selection Method from Gray-Level Histograms: 42,811 citations
- Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach: 42,166 citations (OA)
- LIBSVM: 41,266 citations
- Support-vector networks: 40,177 citations (OA)
- The Nature of Statistical Learning Theory: 39,205 citations
- Dropout: a simple way to prevent neural networks from overfitting: 34,246 citations
- Support-Vector Networks: 32,486 citations (OA)
Active researchers
Top authors in this area, ranked by h-index.