Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data. New results show that such methods can also excel when learning in sparse or weakly labeled settings across modalities and domains. I'll present our recent long-term recurrent network model, which can learn cross-modal translation and provide open-domain video-to-text transcription. I'll also describe state-of-the-art fully convolutional models for pixel-dense segmentation from weakly labeled input, and finally discuss new methods for adapting deep recognition models to new domains with few or no target labels for the categories of interest.
Trevor Darrell (UC Berkeley)