mlfoundations/open_clip: Source Download and Deployment Tutorial

Project Overview

An open-source implementation of CLIP (Contrastive Language-Image Pre-training).

Repository

https://github.com/mlfoundations/open_clip

Key Metrics

  • Stars: 13,236
  • Primary language: Python
  • License: Other
  • Last updated: 2025-11-04T16:10:46Z
  • Default branch: main

Installation and Deployment Highlights (selected from the README)

Usage

pip install open_clip_torch

import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
model.eval()  # model in train mode by default, impacts some models with BatchNorm or stochastic depth active
tokenizer = open_clip.get_tokenizer('ViT-B-32')

image = preprocess(Image.open("docs/CLIP.png")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad(), torch.autocast("cuda"):
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1., 0., 0.]]

If the model uses timm image encoders (ConvNeXt, SigLIP, EVA, etc.), ensure the latest timm is installed. Upgrade timm if you see 'Unknown model' errors for the image encoder.

If the model uses transformers tokenizers, ensure transformers is installed.
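
A quick way to check both optional dependencies from Python before loading a model; this sketch uses only the standard library, and the package names match the pip distributions mentioned above:

from importlib.metadata import PackageNotFoundError, version

# timm backs several image encoders; transformers backs some tokenizers.
for pkg in ("timm", "transformers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed -- try `pip install -U {pkg}`")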

See also this [Clip Colab].

To compute billions of embeddings efficiently, you can use clip-retrieval, which has openclip support.
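
For smaller jobs you can batch embeddings with open_clip directly before reaching for clip-retrieval; a minimal sketch reusing only the API calls shown above (the image file names are placeholders):

import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k')
model.eval()

paths = ["img0.jpg", "img1.jpg"]  # placeholder image paths
batch = torch.stack([preprocess(Image.open(p)) for p in paths])

with torch.no_grad():
    feats = model.encode_image(batch)
    feats /= feats.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity

print(feats.shape)  # (num_images, embedding_dim)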

Install

We advise you to first create a virtual environment with:

python3 -m venv .env
source .env/bin/activate
pip install -U pip

You can then install openclip for training with pip install 'open_clip_torch[training]'.
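
After installing, a quick sanity check is to list the model / pretrained-tag combinations the library knows about; open_clip.list_pretrained() is part of its public API:

import open_clip

pairs = open_clip.list_pretrained()  # [(model_name, pretrained_tag), ...]
print(len(pairs), "pretrained combinations, e.g.:", pairs[:3])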

Sample single-process running code:

python -m open_clip_train.main \
    --save-frequency 1 \
    --zeroshot-frequency 1 \
    --report-to tensorboard \
    --train-data="/path/to/train_data.csv"  \
    --val-data="/path/to/validation_data.csv"  \
    --csv-img-key filepath \
    --csv-caption-key title \
    --imagenet-val=/path/to/imagenet/root/val/ \
    --warmup 10000 \
    --batch-size=128 \
    --lr=1e-3 \
    --wd=0.1 \
    --epochs=30 \
    --workers=8 \
    --model RN50

Note: imagenet-val is the path to the validation set of ImageNet for zero-shot evaluation, not the training set!
You can remove this argument if you do not want to perform zero-shot evaluation on ImageNet throughout training. Note that the val folder should contain one subfolder per class; if it does not, please use this script.
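
A small, hypothetical helper (not part of open_clip) to confirm that layout before launching a long run; the path mirrors the --imagenet-val argument above:

from pathlib import Path

val_root = Path("/path/to/imagenet/root/val/")  # same value as --imagenet-val
class_dirs = [d for d in val_root.iterdir() if d.is_dir()]
if not class_dirs:
    print("No class subfolders found -- restructure val/ before zero-shot eval.")
else:
    print(f"{len(class_dirs)} class subfolders (ImageNet-1k expects 1000).")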

Common Commands (extracted from the README)

pip install open_clip_torch

python3 -m venv .env
source .env/bin/activate
pip install -U pip

python tests/util_test.py --model RN50 RN101 --save_model_list models.txt --git_revision 9d31b2ec4df6d8228f370ff20c8267ec6ba39383

General Deployment Notes (applicable to most projects)

  1. Download the source code and read the README
  2. Install dependencies (pip/npm/yarn, etc.)
  3. Configure environment variables (API keys, model paths, database, etc.)
  4. Start the service and test access (a minimal sketch follows this list)
  5. For production: Nginx reverse proxy + HTTPS + a process supervisor (systemd / pm2)
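
As a concrete instance of steps 4 and 5, here is a minimal health-endpoint sketch around a loaded model, using only the Python standard library. open_clip ships no server of its own, so the port, route, and handler below are illustrative assumptions; in production you would put Nginx with HTTPS in front and run the process under systemd.

from http.server import BaseHTTPRequestHandler, HTTPServer

import open_clip

# Load the model once at startup; the endpoint only reports readiness.
model, _, _ = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k')
model.eval()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 means the process is up and the model is loaded.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Bind to localhost; expose it via the Nginx reverse proxy, not directly.
    HTTPServer(("127.0.0.1", 8000), HealthHandler).serve_forever()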

Disclaimer and Copyright

This article only indexes open-source projects and tutorials; the source code copyright belongs to the original authors. Please use it in compliance with the corresponding license.
