Showing posts from May, 2020

Semantic Segmentation

Adapted from https://www.jeremyjordan.me/semantic-segmentation/. The goal of semantic image segmentation is to label each pixel of an image with a corresponding class of what is being represented. Because we're predicting for every pixel in the image, this task is commonly referred to as dense prediction. Earlier layers tend to learn low-level concepts, while later layers develop more high-level feature mappings.

Drozdzal et al. swap out the basic stacked convolution blocks in favor of residual blocks. This residual block introduces short skip connections alongside the existing long skip connections between the corresponding feature maps of the encoder and decoder modules found in the standard U-Net structure. They report that the short skip connections allow for faster convergence when training and allow deeper models to be trained. Expanding on this, Jegou et al. proposed the use of dense blocks, still following a U-Net structure, arguing that the "characte...
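To make the two kinds of skip connections concrete, here is a minimal PyTorch sketch (module names and layer sizes are illustrative, not from the post or paper): a residual block whose *short* skip adds the block input back to its output, and a decoder stage whose *long* skip concatenates the corresponding encoder feature map, as in U-Net.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convs with a short skip connection around them."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # short skip: add the block's input to its output
        return torch.relu(x + self.body(x))

class DecoderStage(nn.Module):
    """One U-Net-style decoder stage with a long skip from the encoder."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, 2, stride=2)
        # concatenating the encoder feature map doubles the channel count
        self.reduce = nn.Conv2d(2 * channels, channels, 1)
        self.block = ResidualBlock(channels)

    def forward(self, x, encoder_feat):
        x = self.up(x)
        # long skip: concatenate the matching-resolution encoder feature map
        x = torch.cat([x, encoder_feat], dim=1)
        return self.block(self.reduce(x))
```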

Few-Shot Class-Incremental Learning

Paper from: https://arxiv.org/pdf/2004.10956.pdf

The few-shot class-incremental learning (FSCIL) problem requires CNN models to incrementally learn new classes from very few labelled samples. The paper proposes a topology-preserving knowledge incrementer (TOPIC) framework.

To mitigate forgetting, most class-incremental learning (CIL) works use the knowledge distillation technique, which maintains the network's output logits corresponding to old classes. They usually store a set of old-class exemplars and apply the distillation loss to the network's output. Problems: 1) the class-imbalance problem; 2) the performance trade-off between old and new classes.

TOPIC uses a neural gas (NG) network to model the topology of the feature space. When learning the new classes, NG grows to adapt to the change of the feature space. On this basis, the authors formulate FSCIL as an optimization problem with two objectives: 1) to avoid catastrophic forgetting, TOPIC preserves the old...
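As a reference point for the distillation approach that TOPIC contrasts itself with, here is a minimal sketch (hypothetical names, not the paper's code) of the standard CIL distillation loss: the new model's logits for the *old* classes are pushed toward the frozen previous model's logits on stored exemplars.

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """new_logits: [B, n_old + n_new] from the current model;
    old_logits: [B, n_old] from the frozen previous model.
    Only the old-class columns of the new model are distilled."""
    n_old = old_logits.size(1)
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    # soft cross-entropy between old and new predictions; the T^2 factor
    # is a common convention that keeps gradient magnitudes comparable
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T
```

Applied only on a small exemplar buffer, this loss exhibits exactly the problems listed above: the buffer is heavily imbalanced toward new-class data, and the distillation term trades off old-class retention against new-class accuracy.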