A Multilayer Bipedal Neural Network Based on Cutout and Zinc Scanning Systems
Traditional deep learning approaches often cast the underlying task as a quadratic programming (QP) problem and learn by solving the corresponding quadratic optimization. This works well when the QP can be solved efficiently, yielding good results at a reasonable computation time. For large problems, however, the computation budget required by standard quadratic solvers becomes prohibitive. In this work, we propose a new method for solving the QP based on a multi-stage gradient descent algorithm, which is more efficient and converges in less time. Moreover, we propose a variant for the case where the quadratic objective is not the best choice; the resulting algorithm remains fast and is guaranteed to converge to the optimal solution. Experimental results show that the proposed method performs favorably compared with existing multi-stage gradient descent algorithms.
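As an illustration of the general idea, the sketch below runs plain gradient descent on a small strongly convex quadratic objective and shrinks the step size between stages. The staging rule (geometric step-size decay), function names, and hyperparameters are our own assumptions for illustration, not the algorithm proposed in the abstract.

```python
import numpy as np

def multistage_gd_qp(Q, c, x0, stages=3, iters_per_stage=200, decay=0.5):
    """Minimize 0.5 * x^T Q x + c^T x with a stage-wise shrinking step size.

    Illustrative sketch only: the multi-stage schedule used here is an
    assumption, not the specific scheme described in the paper.
    """
    L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
    lr = 1.0 / L                      # safe initial step size
    x = np.array(x0, dtype=float)
    for _ in range(stages):
        for _ in range(iters_per_stage):
            grad = Q @ x + c          # gradient of the quadratic objective
            x -= lr * grad            # plain gradient step
        lr *= decay                   # shrink the step size at each stage boundary
    return x

# Toy usage: a small positive-definite QP whose minimizer is -Q^{-1} c.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A.T @ A + 5.0 * np.eye(5)         # symmetric positive definite
c = rng.standard_normal(5)
x_hat = multistage_gd_qp(Q, c, np.zeros(5))
print(np.allclose(x_hat, -np.linalg.solve(Q, c), atol=1e-5))
```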
Efficient Sparse Subspace Clustering via Matrix Completion
While convolutional neural networks (CNNs) have become the most widely explored and powerful tools for supervised learning on image data, comparatively little attention has been paid to learning sparse representations with them. In this paper, we investigate sparse representation learning from high-dimensional data using deep CNNs. We exploit the observation that the embedding space of a CNN carries sparse information rather than the full underlying image representation. We propose an efficient method for learning sparse representations within a deep CNN architecture, and we study the nonlinearity of the embedding space and the resulting learning problem. We derive a novel deep learning method that significantly improves performance compared to conventional CNN-based approaches.
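For concreteness, here is a minimal sketch of one common way to encourage a sparse CNN embedding, namely an L1 penalty on the embedding activations. The architecture, module names, and penalty weight are assumptions made for illustration and do not reproduce the specific method described in the abstract.

```python
import torch
import torch.nn as nn

class SparseEmbeddingCNN(nn.Module):
    """Small CNN whose embedding is encouraged to be sparse via an L1 penalty.

    Illustrative sketch: the architecture and the L1 regularizer stand in for
    the (unspecified) sparse-representation method of the paper.
    """
    def __init__(self, embed_dim=128, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        z = self.embed(self.features(x).flatten(1))   # embedding vector
        return self.classifier(torch.relu(z)), z

# One training step: cross-entropy loss plus an L1 term on the embedding.
model = SparseEmbeddingCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits, z = model(images)
loss = nn.functional.cross_entropy(logits, labels) + 1e-3 * z.abs().mean()
loss.backward()
opt.step()
```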