Efficient AI
Efficient neural modeling of visual information
Designing neural network architectures is one of the most fundamental research topics in AI: it involves crafting the structure and operations of a model so that it represents specific problems and data both effectively and efficiently. For visual information, the convolutional neural network is the most popular architecture. It excels at learning local information but has difficulty modeling global information. Although simply stacking more layers can achieve the desired performance by enlarging the receptive field, it leads to unnecessarily large networks. We have therefore designed neural architectures with attention mechanisms that exploit both local and global information in images while remaining efficient in terms of both the number of model parameters and the computational complexity (Kim & Lee, 2018; Kim et al., 2020; Kim et al., 2022; Kim et al., 2024); an illustrative sketch of such an attention-augmented block is given below. We have also designed efficient neural networks with recursive or multi-exit structures (Choi et al., 2021; Jeon et al., 2020). In addition, we have explored network pruning for image compression models (Kim et al., 2020).
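As a rough illustration of the idea of combining local convolutions with a cheap form of global information, the sketch below shows a residual block whose local 3x3 convolutions are modulated by squeeze-and-excitation-style channel attention computed from global average pooling. This is a minimal example under assumed settings (channel width, reduction ratio, module names are all illustrative), not a description of the published architectures.

```python
# Minimal sketch (assumed design, not the published models): a residual block that
# augments local 3x3 convolutions with lightweight channel attention, so global
# statistics obtained by global average pooling reweight local features at small
# parameter and compute cost.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: global average pooling
    summarizes the whole spatial extent, and a small bottleneck MLP produces
    per-channel weights."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context: one value per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # reweight local features with global statistics

class AttentiveResidualBlock(nn.Module):
    """Local 3x3 convolutions followed by channel attention, wrapped in a
    residual connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

if __name__ == "__main__":
    block = AttentiveResidualBlock(64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

The attention branch adds only a small bottleneck MLP per block, so the global context comes at a fraction of the cost of stacking additional convolutional layers to enlarge the receptive field.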
References
2020
- LarvaNet: Hierarchical super-resolution via multi-exit architecture, European Conference on Computer Vision Workshops, 2020
- Efficient deep learning-based lossy image compression via asymmetric autoencoder and pruning, IEEE International Conference on Acoustics, Speech and Signal Processing, 2020