Computational Intelligence and Neuroscience

Sparse Representation for Machine Learning


Publishing date: 01 Jan 2022
Status: Published
Submission deadline: 03 Sep 2021

Lead Editor

1. Southwest University, Chongqing, China

2. Universidad de Buenos Aires, Buenos Aires, Argentina

3. Open University of Hong Kong, Hong Kong



Description

Sparse representation has attracted great attention because it can significantly reduce computing costs and reveal the characteristics of data in a low-dimensional space. It is therefore widely applied in engineering fields such as dictionary learning, signal reconstruction, image clustering, and feature selection and extraction.
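To make the idea concrete: the classic sparse coding problem seeks a coefficient vector x minimizing ½‖y − Dx‖² + λ‖x‖₁ for a signal y and a dictionary D, so that y is explained by only a few dictionary atoms. A minimal sketch using the iterative shrinkage-thresholding algorithm (ISTA) is shown below; the dictionary size, regularization weight, and toy signal are illustrative assumptions, not taken from this call.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=500):
    """Minimal ISTA for sparse coding: min_x 0.5||y - D x||^2 + lam ||x||_1.
    D is a fixed dictionary with one atom per column (illustrative sketch)."""
    x = np.zeros(D.shape[1])
    # step size 1/L, with L the Lipschitz constant of the smooth part's gradient
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)          # gradient of the least-squares term
        z = x - step * grad               # gradient step
        # soft-thresholding: proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# toy example (assumed data): a 20-dim signal built from 2 of 50 unit-norm atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # normalize atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = ista(D, y, lam=0.01)
```

The recovered `x_hat` is sparse: only a handful of coefficients are far from zero, and `D @ x_hat` reconstructs `y` closely, which is exactly the low-dimensional structure sparse representation exploits.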

As real-world data become more diverse and complex, commonly used approaches struggle to fully reveal their intrinsic structure. This has led to the exploration of more practical representation models and more efficient optimization approaches. New formulations such as deep sparse representation, graph-based sparse representation, geometry-guided sparse representation, and group sparse representation have achieved remarkable success, motivating researchers to apply recently developed mathematical techniques and tools to sparse representation problems.
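Of the formulations above, group sparse representation is easy to illustrate: it replaces the per-coefficient l1 penalty with a sum of per-group l2 norms, so entire blocks of coefficients are kept or discarded together. Its proximal operator (group soft-thresholding) can be sketched as follows; the group partition and threshold below are illustrative assumptions.

```python
import numpy as np

def prox_group_l21(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (the group-lasso penalty).
    Each group's l2 norm is shrunk by lam; groups below the threshold are
    zeroed out entirely, giving group-level sparsity."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]   # shrink the whole group
        # else: leave the group at zero
    return out

# toy example (assumed data): two groups of two coefficients each
x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
shrunk = prox_group_l21(x, groups, lam=1.0)
# group [0,1] has norm 5 and survives (scaled by 0.8);
# group [2,3] has norm ~0.14 < 1 and is zeroed as a block
```

Used inside a proximal-gradient loop such as ISTA, this operator turns the plain sparse coding problem into a group sparse representation model.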

This Special Issue will accept original research and review articles on the theory and applications of sparse representation. We especially welcome novel sparse formulations and optimization strategies.

Potential topics include but are not limited to the following:

  • Supervised and unsupervised learning with sparse coding
  • Interpretable artificial intelligence based on sparse representations
  • Sparse tensor representations
  • Sparse representation model design, analysis, and interpretability
  • Optimization algorithm design and analysis
  • Strategies for regularization parameter selection
  • Sparse representation of non-traditional data (e.g., multichannel signals)
  • Deep sparse representation-based classification
  • Sparse Bayesian learning
  • Object tracking via multitask sparse representation
  • Feature engineering and feature extraction
  • Matrix factorization and completion
  • Applications in signal processing, pattern recognition, multimedia, bioinformatics, etc.
