The “engine” for big model training. BMTrain performs efficient pre-training and tuning for big models. Compared with toolkits such as DeepSpeed, BMTrain can save 90% of the cost of the training process.
BMTrain performs amazingly compared to popular frameworks
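For a sense of how BMTrain is used, here is a minimal training-loop sketch. The names used (init_distributed, BMTrainModelWrapper, AdamOffloadOptimizer, OptimManager) follow the BMTrain README as we understand it and may differ across versions; treat this as a sketch, not a definitive recipe.

```python
# Hedged sketch: init_distributed, BMTrainModelWrapper, AdamOffloadOptimizer and
# OptimManager are assumed from the BMTrain README and may differ across versions.
import torch
import bmtrain as bmt

bmt.init_distributed(seed=0)                 # set up ZeRO-style data parallelism

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda()
model = bmt.BMTrainModelWrapper(model)       # shard parameters across GPUs

optimizer = bmt.optim.AdamOffloadOptimizer(model.parameters(), lr=1e-4)
manager = bmt.optim.OptimManager(loss_scale=1024)
manager.add_optimizer(optimizer)

for _ in range(10):
    x = torch.randn(8, 1024, device="cuda")
    loss = model(x).pow(2).mean()            # dummy loss on a dummy batch
    manager.zero_grad()
    manager.backward(loss)                   # loss-scaled backward for mixed precision
    manager.step()
```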
BMCook
The toolkit for big model “slimming”. BMCook performs efficient compression for big models to improve operating efficiency. By combining algorithms such as quantization, pruning, distillation, and MoEfication, it can retain over 90% of the original model's performance while accelerating inference by 10 times.
Combination in Any Way
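BMCook itself is configuration-driven and its exact API is not shown here. The sketch below is plain PyTorch and only illustrates the underlying idea of combining two of the techniques BMCook names: knowledge distillation against a teacher model plus quantization-aware training of the student (all function and variable names are hypothetical).

```python
# Not BMCook's API: a plain-PyTorch illustration of combining distillation
# with quantization-aware training of a small student model.
import torch
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    # symmetric int-N fake quantization with a straight-through estimator:
    # the forward pass sees quantized weights, the backward pass updates the
    # full-precision weights as if quantization were the identity
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    return w + (q - w).detach()

teacher = torch.nn.Linear(256, 256).eval()   # stands in for the original big model
student = torch.nn.Linear(256, 256)          # compressed model being trained
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(32, 256)
    with torch.no_grad():
        target = teacher(x)                   # teacher outputs serve as soft targets
    out = F.linear(x, fake_quantize(student.weight), student.bias)
    loss = F.mse_loss(out, target)            # distillation loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```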
BMInf
Perform big model inference on a thousand-yuan GPU. BMInf performs low-cost and high-efficiency inference for big models: it can run models with more than 10 billion parameters on a single thousand-yuan GPU (e.g., a GTX 1060).
10B Model Decoding Speed: BMInf vs. PyTorch
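A minimal usage sketch, assuming the bminf.wrapper entry point described in the BMInf 2.x README and using a Hugging Face GPT-2 purely as a stand-in model; exact arguments and behaviour may differ by version.

```python
# Hedged sketch: bminf.wrapper() is assumed from the BMInf 2.x README;
# GPT-2 is only a small stand-in for a larger model.
import torch
import bminf
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

with torch.cuda.device(0):
    # replace the large layers with quantized/offloaded versions so the model
    # fits into a small GPU's memory
    model = bminf.wrapper(model)

# the wrapped model is then used for generation as usual, e.g. model.generate(...),
# with BMInf handling parameter placement behind the scenes
```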
OpenPrompt
A “sharp knife” for big model prompt learning. OpenPrompt provides a prompt learning template language with a unified interface. Its compositionality and modularity let you easily combine and deploy prompt learning algorithms on big models.
Architecture
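The sketch below shows how the pieces compose, following the pattern in OpenPrompt's tutorials (a manual template and verbalizer around a BERT backbone); exact class signatures may vary between releases.

```python
# Sketch following OpenPrompt's tutorial pattern; signatures may vary by version.
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt.data_utils import InputExample
from openprompt import PromptForClassification, PromptDataLoader

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

# template: where the input text and the mask token go
template = ManualTemplate(
    text='{"placeholder":"text_a"} It was {"mask"}.',
    tokenizer=tokenizer,
)
# verbalizer: which words at the mask position map to which class
verbalizer = ManualVerbalizer(
    classes=["negative", "positive"],
    label_words={"negative": ["terrible"], "positive": ["great"]},
    tokenizer=tokenizer,
)
model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

dataset = [InputExample(guid=0, text_a="A touching, well-acted film.", label=1)]
loader = PromptDataLoader(dataset=dataset, template=template, tokenizer=tokenizer,
                          tokenizer_wrapper_class=WrapperClass)
for batch in loader:
    logits = model(batch)   # class scores derived from the verbalized mask logits
```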
OpenDelta
Tiny parameters leverage big models. OpenDelta performs parameter-efficient tuning for big models. By updating only a very small set of parameters (less than 5%), its algorithms can match the performance of full-parameter fine-tuning.
Tool Collaboration
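A minimal sketch of attaching a delta (here LoRA) to a Hugging Face backbone, assuming the LoraModel, freeze_module, and log names from the OpenDelta documentation; the module names to modify depend on the backbone architecture.

```python
# Sketch assuming OpenDelta's LoraModel / freeze_module / log API; the
# modified_modules names depend on the backbone architecture.
from transformers import AutoModelForSequenceClassification
from opendelta import LoraModel

backbone = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# attach low-rank (LoRA) deltas to the attention query/value projections
delta = LoraModel(backbone_model=backbone, modified_modules=["query", "value"])

# freeze everything except the deltas and the classification head
delta.freeze_module(exclude=["deltas", "classifier"], set_state_dict=True)
delta.log()   # prints trainable vs. frozen parameter counts

# the backbone is then trained as usual; only the small delta parameters update
trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
print(trainable, "trainable parameters")
```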
ModelCenter
The warehouse of big models. ModelCenter implements pre-trained language models (PLMs) on top of the BMTrain backend, supporting efficient, low-resource, and extendable model usage and distributed training.
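A minimal loading sketch, assuming the model_center.model and model_center.tokenizer entry points shown in the ModelCenter README (BMTrain must be initialized first); names may vary between versions.

```python
# Sketch assuming the entry points shown in the ModelCenter README; names may
# differ between versions.
import bmtrain as bmt
from model_center.model import Bert
from model_center.tokenizer import BertTokenizer

bmt.init_distributed(seed=0)      # ModelCenter models run on the BMTrain backend

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = Bert.from_pretrained("bert-base-uncased")   # parameters sharded across GPUs

# from here the model is trained or evaluated with BMTrain's distributed
# optimizers, exactly as a hand-written BMTrain model would be
```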