This talk will outline the challenges of developing efficient NLP models with limited data and computational resources. It will explore strategies for maximising data and model efficiency, noting that state-of-the-art large models typically demand significant compute and are predominantly trained on English data.
It will also discuss techniques like pre-filtering, online methods, data augmentation, and curriculum learning, as well as parameter-efficient training methods like adapters, prompt tuning, and prefix tuning to enhance model performance without extensive data requirements.
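To give a concrete flavour of the parameter-efficient methods mentioned above, the following is a minimal PyTorch sketch of a bottleneck adapter: a small trainable module inserted after a frozen pretrained layer, so that only a tiny fraction of parameters is updated. All names, dimensions, and the toy "pretrained" layer are invented for illustration, not taken from the talk.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection around the whole module."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class AdaptedLayer(nn.Module):
    """Wraps a (frozen) pretrained layer with a small trainable adapter."""
    def __init__(self, pretrained_layer: nn.Module, hidden_dim: int):
        super().__init__()
        self.layer = pretrained_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # freeze the large pretrained weights
        self.adapter = Adapter(hidden_dim)

    def forward(self, x):
        return self.adapter(self.layer(x))

# Toy stand-in for a pretrained transformer sub-layer.
hidden = 64
layer = AdaptedLayer(nn.Linear(hidden, hidden), hidden)
x = torch.randn(2, hidden)
out = layer(x)

trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(out.shape, trainable, total)
```

Only the adapter's parameters are trainable here (roughly a third of this toy layer; in a real transformer the fraction is far smaller), which is what makes such methods attractive when data and compute are limited.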
More information: https://www.helmholtz-hida.de/en/events/hida-lecture-efficient-natural-language-processing/