Robust AI
Exploring vulnerability & Advancing robustness
To develop efficient AI, rigorous evaluation is essential. If the evaluation is flawed, a model's efficiency may be assessed inaccurately, which can impede real-world deployment. Models equipped with genuine visual intelligence should be robust to small deviations of input images from the training domain, whether these arise naturally (e.g., through successive image compression) or intentionally (e.g., through adversarial attacks). However, focusing exclusively on enhancing model efficiency risks overlooking critical aspects such as robustness. We have therefore investigated the vulnerabilities of trained models and developed more robust ones (Choi et al., 2019; Choi et al., 2020; Choi et al., 2022; Hwang et al., 2021; Kim et al., 2020; Kim et al., 2022). We have also explored fair evaluation methods for generated images (Choi et al., 2020; Lee et al., 2023).
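As a toy illustration of the successive-compression instability mentioned above (not the learned codecs studied in the cited papers), the sketch below applies a simple non-idempotent lossy transform to a synthetic 1-D signal repeatedly and tracks how the reconstruction error accumulates with each pass. The transform, signal, and error metric are all hypothetical stand-ins chosen for clarity.

```python
# Toy sketch: a lossy "compression" step that is not idempotent, so
# distortion accumulates when the step is applied successively.
# All names here are illustrative, not from the cited papers.

def lossy_step(pixels, strength=0.25):
    """One round of a toy lossy transform: blend each pixel toward the
    average of its neighbors, discarding high-frequency detail."""
    out = []
    for i, p in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        out.append((1 - strength) * p + strength * (left + right) / 2)
    return out

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

original = [float(v % 7) for v in range(32)]  # synthetic sawtooth "image"
current = original
errors = []
for _ in range(10):
    current = lossy_step(current)          # re-"compress" the previous output
    errors.append(mse(original, current))  # drift from the original grows

print(errors)
```

Each pass starts from the previous pass's output, mirroring a pipeline that repeatedly decodes and re-encodes an image; because the step is lossy and not idempotent, the error relative to the original keeps growing rather than saturating after the first round.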
References
2022
- Successive learned image compression: Comprehensive analysis of instability, Neurocomputing, 2022
2020
- Adversarially robust deep image super-resolution using entropy regularization, Asian Conference on Computer Vision, 2020
- Instability of successive deep image compression, ACM International Conference on Multimedia, 2020