Training YOLOX on the COCO Dataset

 

Note: if the GPU runs out of memory during training, reduce the batch size with the -b flag (for example -b 8 instead of the -b 16 used below). YOLOX derives the learning rate from basic_lr_per_img multiplied by the batch size (see the config table in the log), so the schedule adjusts automatically.

(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$ ls
LICENSE      README.md      assets  checkpoints  demo  exps        requirements.txt  setup.py  tools  yolox
MANIFEST.in  YOLOX_outputs  build   datasets     docs  hubconf.py  setup.cfg         tests     venv   yolox.egg-info

(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$
(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$ python -m yolox.tools.train -n yolox-s -d 1 -b 16 --fp16
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:130 - args: Namespace(batch_size=16, cache=None, ckpt=None, devices=1, dist_backend='nccl', dist_url=None, exp_file=None, experiment_name='yolox_s', fp16=True, logger='tensorboard', machine_rank=0, name='yolox-s', num_machines=1, occupy=False, opts=[], resume=False, start_epoch=None)
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:131 - exp value:
╒═══════════════════╤════════════════════════════╕
│ keys              │ values                     │
╞═══════════════════╪════════════════════════════╡
│ seed              │ None                       │
├───────────────────┼────────────────────────────┤
│ output_dir        │ './YOLOX_outputs'          │
├───────────────────┼────────────────────────────┤
│ print_interval    │ 10                         │
├───────────────────┼────────────────────────────┤
│ eval_interval     │ 10                         │
├───────────────────┼────────────────────────────┤
│ dataset           │ None                       │
├───────────────────┼────────────────────────────┤
│ num_classes       │ 80                         │
├───────────────────┼────────────────────────────┤
│ depth             │ 0.33                       │
├───────────────────┼────────────────────────────┤
│ width             │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ act               │ 'silu'                     │
├───────────────────┼────────────────────────────┤
│ data_num_workers  │ 4                          │
├───────────────────┼────────────────────────────┤
│ input_size        │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ multiscale_range  │ 5                          │
├───────────────────┼────────────────────────────┤
│ data_dir          │ None                       │
├───────────────────┼────────────────────────────┤
│ train_ann         │ 'instances_train2017.json' │
├───────────────────┼────────────────────────────┤
│ val_ann           │ 'instances_val2017.json'   │
├───────────────────┼────────────────────────────┤
│ test_ann          │ 'instances_test2017.json'  │
├───────────────────┼────────────────────────────┤
│ mosaic_prob       │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ mixup_prob        │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ hsv_prob          │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ flip_prob         │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ degrees           │ 10.0                       │
├───────────────────┼────────────────────────────┤
│ translate         │ 0.1                        │
├───────────────────┼────────────────────────────┤
│ mosaic_scale      │ (0.1, 2)                   │
├───────────────────┼────────────────────────────┤
│ enable_mixup      │ True                       │
├───────────────────┼────────────────────────────┤
│ mixup_scale       │ (0.5, 1.5)                 │
├───────────────────┼────────────────────────────┤
│ shear             │ 2.0                        │
├───────────────────┼────────────────────────────┤
│ warmup_epochs     │ 5                          │
├───────────────────┼────────────────────────────┤
│ max_epoch         │ 300                        │
├───────────────────┼────────────────────────────┤
│ warmup_lr         │ 0                          │
├───────────────────┼────────────────────────────┤
│ min_lr_ratio      │ 0.05                       │
├───────────────────┼────────────────────────────┤
│ basic_lr_per_img  │ 0.00015625                 │
├───────────────────┼────────────────────────────┤
│ scheduler         │ 'yoloxwarmcos'             │
├───────────────────┼────────────────────────────┤
│ no_aug_epochs     │ 15                         │
├───────────────────┼────────────────────────────┤
│ ema               │ True                       │
├───────────────────┼────────────────────────────┤
│ weight_decay      │ 0.0005                     │
├───────────────────┼────────────────────────────┤
│ momentum          │ 0.9                        │
├───────────────────┼────────────────────────────┤
│ save_history_ckpt │ True                       │
├───────────────────┼────────────────────────────┤
│ exp_name          │ 'yolox_s'                  │
├───────────────────┼────────────────────────────┤
│ test_size         │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ test_conf         │ 0.01                       │
├───────────────────┼────────────────────────────┤
│ nmsthre           │ 0.65                       │
╘═══════════════════╧════════════════════════════╛
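Every value in the table above comes from the default experiment class yolox.exp.Exp; the built-in yolox-s preset essentially just sets depth=0.33 and width=0.50. To train with different settings (a local data_dir, a shorter schedule, a different eval interval, and so on), the usual route is to subclass Exp in your own exp file and pass it to the trainer with -f instead of -n. A minimal sketch, assuming the standard YOLOX package layout; the file name my_yolox_s.py and the overridden values are illustrative, not taken from the run above:

# my_yolox_s.py : a hypothetical custom experiment file (illustrative, not part of the original log)
import os
from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super().__init__()
        # same scaling factors as the built-in yolox-s preset shown in the table above
        self.depth = 0.33
        self.width = 0.50
        # name the experiment after this file, as the built-in presets do
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

        # examples of fields you might override; the values here are placeholders
        self.data_dir = "datasets/COCO"   # root containing train2017/, val2017/, annotations/
        self.max_epoch = 100              # shorter schedule than the default 300
        self.eval_interval = 5            # evaluate every 5 epochs instead of 10

Training would then be launched with something like: python -m yolox.tools.train -f my_yolox_s.py -d 1 -b 16 --fp16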
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:137 - Model Summary: Params: 8.97M, Gflops: 26.93
2024-02-02 08:36:28 | INFO     | yolox.data.datasets.coco:63 - loading annotations into memory...
2024-02-02 08:36:38 | INFO     | yolox.data.datasets.coco:63 - Done (t=10.09s)
2024-02-02 08:36:38 | INFO     | pycocotools.coco:86 - creating index...
2024-02-02 08:36:39 | INFO     | pycocotools.coco:86 - index created!
2024-02-02 08:36:55 | INFO     | yolox.core.trainer:155 - init prefetcher, this might take one minute or less...
2024-02-02 08:36:58 | INFO     | yolox.data.datasets.coco:63 - loading annotations into memory...
2024-02-02 08:36:59 | INFO     | yolox.data.datasets.coco:63 - Done (t=0.58s)
2024-02-02 08:36:59 | INFO     | pycocotools.coco:86 - creating index...
2024-02-02 08:36:59 | INFO     | pycocotools.coco:86 - index created!
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:191 - Training start...
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:192 -
YOLOX(
  (backbone): YOLOPAFPN(
    (backbone): CSPDarknet(
      (stem): Focus(
        (conv): BaseConv(
          (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (dark2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark3): Sequential(
        (0): BaseConv(
          (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark4): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark5): Sequential(
        (0): BaseConv(
          (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): SPPBottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): ModuleList(
            (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
            (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
            (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
          )
          (conv2): BaseConv(
            (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
    )
    (upsample): Upsample(scale_factor=2.0, mode=nearest)
    (lateral_conv0): BaseConv(
      (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (reduce_conv1): BaseConv(
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv2): BaseConv(
      (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv1): BaseConv(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
  )
  (head): YOLOXHead(
    (cls_convs): ModuleList(
      (0): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (1): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (reg_convs): ModuleList(
      (0): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (1): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (cls_preds): ModuleList(
      (0): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
    )
    (reg_preds): ModuleList(
      (0): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
    )
    (obj_preds): ModuleList(
      (0): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
    )
    (stems): ModuleList(
      (0): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
    )
    (l1_loss): L1Loss()
    (bcewithlog_loss): BCEWithLogitsLoss()
    (iou_loss): IOUloss()
  )
)
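The module tree printed above is the yolox-s network itself: a CSPDarknet backbone, a PAFPN neck, and a decoupled YOLOXHead whose three detection scales each end in 80 class channels, 4 box channels and 1 objectness channel. The same model can be built and inspected outside the trainer. A sketch, assuming yolox.exp.get_exp and yolox.utils.get_model_info behave as they do inside yolox.core.trainer:

# building the yolox-s model outside the trainer for inspection: a sketch
import torch
from yolox.exp import get_exp
from yolox.utils import get_model_info

exp = get_exp(exp_file=None, exp_name="yolox-s")   # same preset selected by `-n yolox-s`
model = exp.get_model()
model.eval()

# should report roughly "Params: 8.97M, Gflops: 26.93", matching the Model Summary line in the log
print(get_model_info(model, exp.test_size))

# dummy forward pass at the 640x640 input size; in eval mode the head returns decoded
# predictions shaped (batch, num_anchors, 4 box + 1 obj + 80 cls)
with torch.no_grad():
    preds = model(torch.zeros(1, 3, *exp.input_size))
print(preds.shape)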
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:203 - ---> start train epoch1
2024-02-02 08:37:07 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 10/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.751s, data_time: 0.008s, total_loss: 20.3, iou_loss: 4.8, l1_loss: 0.0, conf_loss: 13.7, cls_loss: 1.9, lr: 1.830e-10, size: 640, ETA: 19 days, 6:56:56
2024-02-02 08:37:12 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 20/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.467s, data_time: 0.001s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.0, cls_loss: 2.1, lr: 7.318e-10, size: 640, ETA: 15 days, 15:17:49
2024-02-02 08:37:18 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 30/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.616s, data_time: 0.002s, total_loss: 16.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 10.0, cls_loss: 2.1, lr: 1.647e-09, size: 576, ETA: 15 days, 16:38:41
2024-02-02 08:37:30 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 40/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 1.207s, data_time: 0.002s, total_loss: 17.8, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.1, cls_loss: 2.0, lr: 2.927e-09, size: 800, ETA: 19 days, 12:21:38
2024-02-02 08:37:36 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 50/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 0.630s, data_time: 0.002s, total_loss: 15.8, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 8.9, cls_loss: 2.2, lr: 4.574e-09, size: 544, ETA: 18 days, 20:17:13
2024-02-02 08:37:45 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 60/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 0.916s, data_time: 0.002s, total_loss: 17.5, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 10.9, cls_loss: 1.9, lr: 6.587e-09, size: 736, ETA: 19 days, 14:59:49
2024-02-02 08:37:55 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 70/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.994s, data_time: 0.002s, total_loss: 18.5, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 11.8, cls_loss: 2.1, lr: 8.965e-09, size: 800, ETA: 20 days, 11:11:27
2024-02-02 08:38:03 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 80/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.816s, data_time: 0.003s, total_loss: 20.7, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 13.8, cls_loss: 2.2, lr: 1.171e-08, size: 800, ETA: 20 days, 12:36:00
2024-02-02 08:38:10 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 90/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.631s, data_time: 0.002s, total_loss: 18.8, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 12.3, cls_loss: 1.9, lr: 1.482e-08, size: 640, ETA: 20 days, 1:04:39
2024-02-02 08:38:15 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 100/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.504s, data_time: 0.002s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.1, cls_loss: 2.0, lr: 1.830e-08, size: 480, ETA: 19 days, 8:00:53
2024-02-02 08:38:22 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 110/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.746s, data_time: 0.002s, total_loss: 18.6, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.8, cls_loss: 2.1, lr: 2.214e-08, size: 672, ETA: 19 days, 7:36:25
2024-02-02 08:38:27 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 120/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.519s, data_time: 0.002s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.0, cls_loss: 2.0, lr: 2.635e-08, size: 512, ETA: 18 days, 19:38:05
2024-02-02 08:38:32 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 130/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.463s, data_time: 0.001s, total_loss: 17.6, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.1, cls_loss: 1.9, lr: 3.092e-08, size: 640, ETA: 18 days, 6:51:07

^C2024-02-02 08:38:32 | INFO     | yolox.core.trainer:196 - Training of experiment is done and the best AP is 0.00
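
The ^C above stops training after about 130 iterations, which is why the final line reports a best AP of 0.00: the first COCO evaluation is only scheduled every eval_interval (10) epochs. Once at least one epoch has completed, the trainer writes checkpoints under output_dir, here ./YOLOX_outputs/yolox_s/, and training can be continued by adding --resume (optionally with -c pointing at a specific checkpoint and --start_epoch), as suggested by the args Namespace at the top of the log. A saved checkpoint can also be loaded back into the model directly; a sketch, with the file name latest_ckpt.pth assumed rather than taken from this run:

# loading a saved checkpoint back into the yolox-s model: a sketch;
# the path below is an assumption, check YOLOX_outputs/yolox_s/ for the actual file name
import torch
from yolox.exp import get_exp

exp = get_exp(exp_file=None, exp_name="yolox-s")
model = exp.get_model()

ckpt = torch.load("YOLOX_outputs/yolox_s/latest_ckpt.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])   # YOLOX checkpoints keep the weights under the "model" key
model.eval()                           # ready for evaluation or inference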
