Lilian Weng 12/5/2021

Learning with not Enough Data Part 1: Semi-Supervised Learning

This article opens a series on learning with limited labeled data, focusing on Part 1: Semi-Supervised Learning. It explains the core idea of training on labeled and unlabeled data jointly, presents the common combined loss L = L_s + μ(t)·L_u, where L_s is the supervised loss on labeled data, L_u the unsupervised loss on unlabeled data, and μ(t) a weighting term that is typically ramped up over training, and contrasts the prevalence of semi-supervised learning in vision tasks with the pre-training paradigm dominant in NLP. It also defines the key notation and sets the stage for the detailed method discussions that follow.
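As a rough illustration of that combined objective, here is a minimal PyTorch sketch. The specific choices below are assumptions for the example, not the article's prescribed method: cross-entropy for the supervised term, an MSE consistency penalty between two perturbed predictions for the unsupervised term, and a sigmoid-shaped ramp-up schedule for μ(t).

```python
import math

import torch
import torch.nn.functional as F


def mu(t: int, ramp_length: int = 80, max_weight: float = 1.0) -> float:
    """Sigmoid-shaped ramp-up for the unsupervised weight mu(t).

    The schedule is one common choice, assumed here for illustration.
    """
    if t >= ramp_length:
        return max_weight
    phase = 1.0 - t / ramp_length
    return max_weight * math.exp(-5.0 * phase * phase)


def semi_supervised_loss(logits_labeled, labels,
                         logits_unlabeled, logits_unlabeled_perturbed, t):
    # Supervised term L_s: cross-entropy on the labeled batch.
    l_s = F.cross_entropy(logits_labeled, labels)
    # Unsupervised term L_u: consistency (MSE) between predictions on the same
    # unlabeled inputs under two different perturbations (one common choice).
    l_u = F.mse_loss(torch.softmax(logits_unlabeled, dim=-1),
                     torch.softmax(logits_unlabeled_perturbed, dim=-1))
    # Combined objective: L = L_s + mu(t) * L_u.
    return l_s + mu(t) * l_u


# Toy usage with random tensors standing in for model outputs.
logits_l = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
logits_u1 = torch.randn(32, 10)
logits_u2 = logits_u1 + 0.1 * torch.randn(32, 10)
loss = semi_supervised_loss(logits_l, labels, logits_u1, logits_u2, t=10)
```

The key design point this sketch captures is that unlabeled data only contributes through L_u, and μ(t) controls how strongly that signal is trusted as training progresses.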
