Complexity and Robustness Trade-Off for Traditional and Deep Models
1 National University of Computer and Emerging Sciences, Islamabad, Pakistan
2 Innopolis University, Innopolis, Russia
3 University of Messina, Messina, Italy
Description
Conventional and deep learning models have attained impressive results in many real-world applications, but these results depend on the availability of large quantities of high-quality labelled training examples. Acquiring reliable labelled training examples, however, remains a major challenge for the research community, not only for conventional models but even more so for deep models, which require large numbers of labelled examples to learn effectively.
The labelling process for real-world applications is complex, time-intensive, and expensive. There is therefore a need for interactive frameworks that help practitioners acquire reliable, informative, and heterogeneous labelled training examples through machine-machine interaction, without the help of a supervisor. Moreover, a large number of labelled examples leads to more complex models. Complex models are hard to interpret, difficult to reproduce, and more prone to overfitting, which ultimately produces biased results.
This Special Issue therefore aims to present a collection of new trends in learning strategies that limit model complexity and enhance generalization performance. Original research and review articles are welcome.
Potential topics include but are not limited to the following:
- Complexity and robustness
- Learning strategies
- Multi-level and multi-sensor imaging
- Traditional/multispectral/hyperspectral imaging
- IoT and security
- Domain adaptation and randomization
- Statistical learning
- Fuzzy logic for interaction