Large-scale pretrained transformers learn from corpora containing oceans of factual knowledge, and are surprisingly good at recalling this knowledge without any fine-tuning. In a new paper, a team from Microsoft Research and Peking University peers into pretrained transformers, proposing a method to identify the "knowledge neurons" responsible for storing this knowledge and showing how these neurons can be used to edit, update, and even erase relational facts.
— Read on syncedreview.com/2021/04/27/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-6/



Categories: Data