Sentence Transformers
https://www.sbert.net/
https://github.com/UKPLab/sentence-transformers
Sentence Transformers Documentation
Sentence Transformers (a.k.a. SBERT) is the go-to Python module for accessing, using, and training state-of-the-art embedding and reranker models. It can be used to compute embeddings using Sentence Transformer models (quickstart), to calculate similarity scores using Cross-Encoder (a.k.a. reranker) models (quickstart), or to generate sparse embeddings using Sparse Encoder models (quickstart). This unlocks a wide range of applications, including semantic search, semantic textual similarity, and paraphrase mining.
Over 10,000 pre-trained Sentence Transformers models are available for immediate use on 🤗 Hugging Face, including many of the state-of-the-art models from the Massive Text Embedding Benchmark (MTEB) leaderboard. Additionally, it is easy to train or finetune your own embedding, reranker, or sparse encoder models with Sentence Transformers, enabling you to create custom models for your specific use cases.
Sentence Transformers was created by UKPLab and is being maintained by 🤗 Hugging Face. Don’t hesitate to open an issue on the Sentence Transformers repository if something is broken or if you have further questions.