About OpenBMB
OpenBMB is short for Open Lab for Big Model Base. The goal of OpenBMB is to build the model base and toolkit for large-scale pre-trained language models. We aim to accelerate training, tuning, and inference for big models (those with more than 10 billion parameters) and to lower the barriers to using them. On this basis, we further aim to build an open-source community together with developers worldwide, promote the open-source ecosystem of pre-trained language models, and ultimately make big models standardized, popular, and practical, bringing big models to everyone.
OpenBMB is founded and supported by the Natural Language Processing Laboratory of Tsinghua University (THUNLP) and ModelBest Inc. The team has a deep research foundation in natural language processing and pre-trained models, and in recent years has published a number of papers on model pretraining, prompt tuning, and model compression at top international conferences. Some highlights are as follows:
1. The team was the first to propose a knowledge-guided pretrained model (ERNIE); the corresponding paper was published at ACL 2019 and has been cited more than 600 times.
2. Supported by BAAI, the team has released well-known large-scale pre-trained language models such as Wudao Wenyuan, CPM-1, CPM-2, and CPM-3. The largest of these has 198 billion parameters and achieves promising performance on downstream tasks.
3. Focusing on the biomedical domain, the team's paper on the pretrained model KV-PLM was published in Nature Communications and selected as an Editors' Highlight.
4. The team also has rich experience in the open-source community, having released a series of well-known toolkits including OpenKE, OpenNRE, and OpenNE. These packages have earned a total of 58k stars on GitHub, ranking the team 148th among global institutions.
5. In January 2023, the multilingual CPM-Bee model, developed by OpenBMB in collaboration with ModelBest Inc., topped the ZeroCLUE benchmark.