
RuntimeError: (NotFound) Cannot open file C:\Users\Abc/.cache/paddle/infer_weights\STGCN, please confirm whether the file is normal.[Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at ..\paddle\fluid\inference\api\analysis_predictor.cc:2808) #9202

DazzlingGalaxy opened this issue Nov 4, 2024 · 6 comments
### Search before asking

- [X] I have searched the issues and found no similar bug report.

### Bug Component

Inference

### Describe the Bug

Fresh PaddleDetection environment, installed from a zip download of the develop branch. I wanted to try fight detection and have a local mp4 file ready. Following the usage in the docs, I ran the command below and got an error:
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=z:/cam1_9.mp4 --device=gpu

  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1306, in main
    pipeline = Pipeline(FLAGS, cfg)
  File "deploy/pipeline/pipeline.py", line 97, in __init__
    self.predictor = PipePredictor(args, cfg, self.is_video)
  File "deploy/pipeline/pipeline.py", line 439, in __init__
    self.skeleton_action_predictor = SkeletonActionRecognizer.init_with_cfg(
  File "Z:\PaddleDetection-develop\deploy\pipeline\pphuman\action_infer.py", line 92, in init_with_cfg
    return cls(model_dir=cfg['model_dir'],
  File "Z:\PaddleDetection-develop\deploy\pipeline\pphuman\action_infer.py", line 75, in __init__
    super(SkeletonActionRecognizer, self).__init__(
  File "Z:\PaddleDetection-develop\deploy\python\infer.py", line 108, in __init__
    self.predictor, self.config = load_predictor(
  File "Z:\PaddleDetection-develop\deploy\python\infer.py", line 1098, in load_predictor
    predictor = create_predictor(config)
RuntimeError: (NotFound) Cannot open file C:\Users\Abc/.cache/paddle/infer_weights\STGCN, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at ..\paddle\fluid\inference\api\analysis_predictor.cc:2808)

The `STGCN` folder exists and has files in it:
![1](https://github.com/user-attachments/assets/962ef414-0f1d-4298-94e3-d3510da9d898)
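A quick way to double-check the folder contents from Python (the expected file names model.pdmodel / model.pdiparams / infer_cfg.yml are an assumption based on other exported PaddleDetection models):

```python
import os

# List the cached model folder and check for the files the predictor usually
# expects (names assumed; adjust if the export uses different ones).
model_dir = r"C:\Users\Abc\.cache\paddle\infer_weights\STGCN"
print(os.listdir(model_dir))
for name in ("model.pdmodel", "model.pdiparams", "infer_cfg.yml"):
    path = os.path.join(model_dir, name)
    print(name, "->", "found" if os.path.exists(path) else "MISSING")
```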

This problem was reported before but never resolved: #8287

While I'm at it, another previously asked but unresolved question: someone there said the files can be downloaded manually, but I don't know where to download them from, or in which file I can find the download URLs it needs: #9192

### Environment

win11

anyio                    4.5.2
astor                    0.8.1
autocommand              2.2.2
babel                    2.16.0
backports.tarfile        1.2.0
bce-python-sdk           0.9.23
blinker                  1.8.2
certifi                  2024.8.30
charset-normalizer       3.4.0
click                    8.1.7
colorama                 0.4.6
contourpy                1.1.1
cycler                   0.12.1
Cython                   3.0.11
decorator                5.1.1
exceptiongroup           1.2.2
Flask                    3.0.3
flask-babel              4.0.0
fonttools                4.54.1
future                   1.0.0
h11                      0.14.0
httpcore                 1.0.6
httpx                    0.27.2
idna                     3.10
imageio                  2.35.1
imgaug                   0.4.0
importlib_metadata       8.5.0
importlib_resources      6.4.5
inflect                  7.3.1
itsdangerous             2.2.0
jaraco.collections       5.1.0
jaraco.context           5.3.0
jaraco.functools         4.0.1
jaraco.text              3.12.1
Jinja2                   3.1.4
joblib                   1.4.2
kiwisolver               1.4.7
lapx                     0.5.11
lazy_loader              0.4
MarkupSafe               2.1.5
matplotlib               3.7.5
more-itertools           10.3.0
motmetrics               1.4.0
networkx                 3.1
numpy                    1.24.4
nvidia-cublas-cu11       11.11.3.6
nvidia-cuda-nvrtc-cu11   11.8.89
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cudnn-cu11        8.9.4.19
nvidia-cufft-cu11        10.9.0.58
nvidia-curand-cu11       10.3.0.86
nvidia-cusolver-cu11     11.4.1.48
nvidia-cusparse-cu11     11.7.5.86
opencv-python            4.5.5.64
opt-einsum               3.3.0
packaging                24.1
paddledet                0.0.0
paddlepaddle-gpu         3.0.0b1
pandas                   2.0.3
pillow                   10.4.0
pip                      24.2
platformdirs             4.2.2
protobuf                 5.28.3
psutil                   6.1.0
pyclipper                1.3.0.post6
pycocotools              2.0.7
pycryptodome             3.21.0
pyparsing                3.1.4
python-dateutil          2.9.0.post0
pytz                     2024.2
PyWavelets               1.4.1
PyYAML                   6.0.2
rarfile                  4.2
requests                 2.32.3
scikit-image             0.21.0
scikit-learn             1.3.2
scipy                    1.10.1
setuptools               75.1.0
shapely                  2.0.6
six                      1.16.0
sklearn                  0.0
sniffio                  1.3.1
terminaltables           3.1.10
threadpoolctl            3.5.0
tifffile                 2023.7.10
tomli                    2.0.1
tqdm                     4.66.6
typeguard                4.4.0
typing_extensions        4.12.2
tzdata                   2024.2
urllib3                  2.2.3
visualdl                 2.5.3
Werkzeug                 3.0.6
wheel                    0.44.0
xmltodict                0.14.2
zipp                     3.20.2

### Bug description confirmation

- [X] I confirm that the bug replication steps, code change instructions, and environment information have been provided, and the problem can be reproduced.


### Are you willing to submit a PR?

- [ ] I'd like to help by submitting a PR!
@TingquanGao TingquanGao self-assigned this Nov 4, 2024
@DazzlingGalaxy (Author)

@TingquanGao, an update: the folder in yesterday's error was paddle/infer_weights/STGCN, which I now realize is actually the falling-detection model from the docs. Yesterday I tried both falling detection and fight detection. I probably tried falling detection first, setting enable: True under SKELETON_ACTION in infer_cfg_pphuman.yml; after it errored, I set VIDEO_ACTION (fight detection) to True without first setting SKELETON_ACTION back to False. That is why yesterday I said I wanted fight detection but got the STGCN folder error.

Today I ran the following tests:
1. Skeleton-based action recognition (falling detection): only SKELETON_ACTION enabled, everything else False — error;
2. Image-classification-based action recognition: only ID_BASED_CLSACTION enabled, everything else False — error;
3. Detection-based action recognition: only ID_BASED_DETACTION enabled, everything else False — error;
4. Video-classification-based action recognition (fight detection): only VIDEO_ACTION enabled, everything else False — works. Given a video file as input, it produces an output video and marks the confidence at the moments where fighting occurs.
The first three errors are similar to yesterday's STGCN folder error; they all complain about a paddle/infer_weights/xxx folder. (The switches I toggled are sketched below.)
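For reference, the relevant switches in deploy/pipeline/config/infer_cfg_pphuman.yml, abridged to just the enable flags (this reflects test 4 above, where only VIDEO_ACTION is on):

```yaml
# Abridged infer_cfg_pphuman.yml - only the enable switches are shown here.
SKELETON_ACTION:     # skeleton-based falling detection: errors out when enabled
  enable: false
ID_BASED_CLSACTION:  # image-classification-based action recognition: errors out
  enable: false
ID_BASED_DETACTION:  # detection-based action recognition: errors out
  enable: false
VIDEO_ACTION:        # video-classification-based fight detection: works
  enable: true
```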

A few questions:
1. How can the first three errors be fixed?
2. Does fight detection only accept a video file as input? I tried a single image and got this error:

  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1308, in main
    pipeline.run_multithreads()
  File "deploy/pipeline/pipeline.py", line 179, in run_multithreads
    self.predictor.run(self.input)
  File "deploy/pipeline/pipeline.py", line 533, in run
    self.predict_video(input, thread_idx=thread_idx)
  File "deploy/pipeline/pipeline.py", line 986, in predict_video
    if frame_id % sample_freq == 0:
ZeroDivisionError: integer division or modulo by zero

3. After fight detection finishes, besides manually opening the generated video file, how can I tell whether the video contains fighting?
4. My actual requirement: I have some cameras. Either I pull the stream or frames (images) directly from the cameras, or someone else pulls from the cameras and I fetch the data from them; then I need to detect whether there is fighting in the footage. Can PaddleDetection work on camera data directly (whether I pull it myself or get it from someone else)? If so, is there an example? If not, should I keep saving camera frames as images, stitch them into short clips (say 5 minutes), and feed those to PaddleDetection? That feels rather laggy. Or is there a better way?

@TingquanGao (Collaborator)

  1. You can try manually downloading the model (see the docs) and modifying the config file; please refer to the documentation (a sketch follows this list).
  2. The action recognition modules only support video input;
  3. If you want to save the results, you will need to modify the code;
  4. Please refer to the documentation.
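For example, a minimal sketch of what the config change could look like, assuming STGCN.zip has been downloaded and unzipped by hand (the local path below is just a placeholder):

```yaml
# infer_cfg_pphuman.yml (sketch): replace the download URL with a local folder
SKELETON_ACTION:
  enable: true
  model_dir: D:/models/STGCN   # assumed path to the manually unzipped model
```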


DazzlingGalaxy commented Nov 5, 2024

> 1. You can try manually downloading the model (see the docs) and modifying the config file; please refer to the documentation.
> 2. The action recognition modules only support video input;
> 3. If you want to save the results, you will need to modify the code;
> 4. Please refer to the documentation.

Re 1: The model files are actually there — the folder has files in it (I also posted a screenshot yesterday, but it didn't render; only the link showed). They were downloaded automatically, yet it still errors. For example:
[screenshot 1]
[screenshot 2]

Re 3①: How should the code be modified? At the moment I use subprocess.run() to run deploy/pipeline/pipeline.py, capture the printed output, and check whether it contains video_action_res: {'class': 1, 'score': 0.7190255}; if it does, there was fighting.
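Roughly like this (a minimal sketch of that approach, not my exact script; the command and the match string are the same ones shown above):

```python
import subprocess

# Run pipeline.py, capture its stdout, and treat any "video_action_res" line
# reporting class 1 as "fighting detected".
cmd = [
    "python", "deploy/pipeline/pipeline.py",
    "--config", "deploy/pipeline/config/infer_cfg_pphuman.yml",
    "--video_file=z:/cam1_9.mp4",
    "--device=gpu",
]
result = subprocess.run(cmd, capture_output=True, text=True)
fight_detected = any(
    "video_action_res" in line and "'class': 1" in line
    for line in result.stdout.splitlines()
)
print("fight detected:", fight_detected)
```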
Re 3②: Running deploy/pipeline/pipeline.py prints output like the following — does it mean fighting occurred between frame ids 40-50 and 100-110?

video fps: 30, frame_count: 160
Thread: 0; frame id: 0
Thread: 0; frame id: 10
Thread: 0; frame id: 20
Thread: 0; frame id: 30
Thread: 0; frame id: 40
W1105 09:04:42.407253 12872 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.8
W1105 09:04:42.407253 12872 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
I1105 09:04:42.407253 12872 program_interpreter.cc:243] New Executor is Running.
video_action_res: {'class': 1, 'score': 0.5839483}
Thread: 0; frame id: 50
Thread: 0; frame id: 60
Thread: 0; frame id: 70
Thread: 0; frame id: 80
Thread: 0; frame id: 90
Thread: 0; frame id: 100
video_action_res: {'class': 1, 'score': 0.7190255}
Thread: 0; frame id: 110
Thread: 0; frame id: 120
Thread: 0; frame id: 130
Thread: 0; frame id: 140
Thread: 0; frame id: 150
save result to output\cam1_9.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 150.2, img_num: 109
video_action time(ms): 60.1; per frame average time(ms): 0.5513761467889908
average latency time(ms): 1.38, QPS: 725.699068

Re 3③: For python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=z:/cam1_9.mp4 --device=gpu, the complete output is below. Does my run look normal? For instance, it requests 8 GB of GPU memory by default and I only have 6 GB — will that cause problems? Can the requested GPU memory size be set manually?

-----------  Running Arguments -----------
ATTR:
  batch_size: 8
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip
DET:
  batch_size: 1
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
ID_BASED_CLSACTION:
  batch_size: 8
  display_frames: 80
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip
  skip_frame_num: 2
  threshold: 0.8
ID_BASED_DETACTION:
  batch_size: 8
  display_frames: 80
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip
  skip_frame_num: 2
  threshold: 0.6
KPT:
  batch_size: 8
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip
MOT:
  batch_size: 1
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
  skip_frame_num: -1
  tracker_config: deploy/pipeline/config/tracker_config.yml
REID:
  batch_size: 16
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip
SKELETON_ACTION:
  batch_size: 1
  coord_size:
  - 384
  - 512
  display_frames: 80
  enable: false
  max_frames: 50
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip
VIDEO_ACTION:
  batch_size: 1
  enable: true
  frame_len: 8
  model_dir: https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip
  sample_freq: 7
  short_size: 340
  target_size: 320
attr_thresh: 0.5
crop_thresh: 0.5
kpt_thresh: 0.2
visual: true
warmup_frame: 50

------------------------------------------
VideoAction Recognition enabled
DET  model dir:  C:\Users\Abc/.cache/paddle/infer_weights\mot_ppyoloe_l_36e_pipeline
mot_model_dir model_dir:  C:\Users\Abc/.cache/paddle/infer_weights\mot_ppyoloe_l_36e_pipeline
KPT  model dir:  C:\Users\Abc/.cache/paddle/infer_weights\dark_hrnet_w32_256x192
VIDEO_ACTION  model dir:  C:\Users\Abc/.cache/paddle/infer_weights\ppTSM
E1105 11:49:36.710119 23412 analysis_predictor.cc:2137] Allocate too much memory for the GPU memory pool, assigned 8000 MB
E1105 11:49:36.710119 23412 analysis_predictor.cc:2140] Try to shrink the value by setting AnalysisConfig::EnableUseGpu(...)
--- Running analysis [ir_graph_build_pass]
I1105 11:49:36.738330 23412 executor.cc:184] Old Executor is Running.
--- Running analysis [ir_analysis_pass]
--- Running IR pass [map_op_to_another_pass]
I1105 11:49:36.824437 23412 fuse_pass_base.cc:59] ---  detected 55 subgraphs
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [delete_quant_dequant_linear_op_pass]
--- Running IR pass [delete_weight_dequant_linear_op_pass]
--- Running IR pass [constant_folding_pass]
--- Running IR pass [silu_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I1105 11:49:36.929447 23412 fuse_pass_base.cc:59] ---  detected 55 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [vit_attention_fuse_pass]
--- Running IR pass [fused_multi_transformer_encoder_pass]
--- Running IR pass [fused_multi_transformer_decoder_pass]
--- Running IR pass [fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [fuse_multi_transformer_layer_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I1105 11:49:38.095309 23412 fuse_pass_base.cc:59] ---  detected 1 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I1105 11:49:38.129885 23412 fuse_pass_base.cc:59] ---  detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I1105 11:49:38.204499 23412 fuse_pass_base.cc:59] ---  detected 55 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [transfer_layout_pass]
--- Running IR pass [transfer_layout_elim_pass]
--- Running IR pass [auto_mixed_precision_pass]
--- Running IR pass [identity_op_clean_pass]
--- Running IR pass [inplace_op_var_pass]
I1105 11:49:38.212951 23412 fuse_pass_base.cc:59] ---  detected 2 subgraphs
--- Running analysis [ir_params_sync_among_devices_pass]
I1105 11:49:38.214633 23412 ir_params_sync_among_devices_pass.cc:51] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [save_optimized_model_pass]
--- Running analysis [ir_graph_to_program_pass]
I1105 11:49:38.334097 23412 analysis_predictor.cc:2080] ======= ir optimization completed =======
I1105 11:49:38.334097 23412 naive_executor.cc:200] ---  skip [feed], feed -> data_batch_0
I1105 11:49:38.335599 23412 naive_executor.cc:200] ---  skip [linear_2.tmp_1], fetch -> fetch
video fps: 30, frame_count: 160
Thread: 0; frame id: 0
Thread: 0; frame id: 10
Thread: 0; frame id: 20
Thread: 0; frame id: 30
Thread: 0; frame id: 40
W1105 11:49:40.078011 23412 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.8
W1105 11:49:40.078011 23412 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
I1105 11:49:40.078011 23412 program_interpreter.cc:243] New Executor is Running.
video_action_res: {'class': 1, 'score': 0.5839483}
Thread: 0; frame id: 50
Thread: 0; frame id: 60
Thread: 0; frame id: 70
Thread: 0; frame id: 80
Thread: 0; frame id: 90
Thread: 0; frame id: 100
video_action_res: {'class': 1, 'score': 0.7190255}
Thread: 0; frame id: 110
Thread: 0; frame id: 120
Thread: 0; frame id: 130
Thread: 0; frame id: 140
Thread: 0; frame id: 150
save result to output\cam1_9.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 216.6, img_num: 109
video_action time(ms): 80.10000000000001; per frame average time(ms): 0.734862385321101
average latency time(ms): 1.99, QPS: 503.231764

Re 4: I had actually seen the RTSP option before but forgot about it when filing the issue. I just tried it and got an error (connecting to the RTSP stream with OpenCV alone works and I can get camera frames; a minimal sketch of that check is at the end of this comment). I did not modify anything in deploy/pipeline/config/examples/infer_cfg_human_attr.yml. Command: python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://abc:[email protected]:554/Streaming/Channels/101 --device=gpu

Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1303, in main
    cfg = merge_cfg(FLAGS)  # use command params to update config
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 212, in merge_cfg
    pred_config = merge_opt(pred_config, args_dict)
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 202, in merge_opt
    for sub_k, sub_v in value.items():
AttributeError: 'bool' object has no attribute 'items'

I also tried python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o visual=False --rtsp rtsp://abc:[email protected]:554/Streaming/Channels/101 --device=gpu, which also errors. In deploy/pipeline/config/infer_cfg_pphuman.yml only VIDEO_ACTION is True; everything else is False.

Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1303, in main
    cfg = merge_cfg(FLAGS)  # use command params to update config
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 212, in merge_cfg
    pred_config = merge_opt(pred_config, args_dict)
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 202, in merge_opt
    for sub_k, sub_v in value.items():
AttributeError: 'bool' object has no attribute 'items'

I also tried python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=rtsp://abc:[email protected]:554/Streaming/Channels/101 --device=gpu, since the docs say the video address after video_file can be replaced directly with an RTSP stream address. This does not error, but why does it only process frame ids 0-30? After that nothing more happens, and the program does not exit either.

video fps: 20, frame_count: -2049638230412172
Thread: 0; frame id: 0
Thread: 0; frame id: 10
Thread: 0; frame id: 20
Thread: 0; frame id: 30
save result to output\101_t00_rtsp.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 0.0, img_num: 0
average latency time(ms): 0.00, QPS: 0.000000
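(For reference, the standalone OpenCV check mentioned above is roughly this minimal sketch; it only verifies that frames can be read from the stream:)

```python
import cv2

# Open the RTSP stream directly with OpenCV and confirm a frame can be read.
url = "rtsp://abc:[email protected]:554/Streaming/Channels/101"
cap = cv2.VideoCapture(url)
ok, frame = cap.read()
print("stream opened:", cap.isOpened(), "first frame read:", ok)
cap.release()
```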

@DazzlingGalaxy (Author)

@TingquanGao Could you take a look at the follow-up questions I posted above?


TingquanGao commented Nov 12, 2024

  1. Does it work if you modify the config file as described in the docs and pass in a local model path?
  2. For the saving logic, take a look at the code; it appears to be here.
  3. The printed video_action_res: {'class': 1, 'score': 0.7190255} does not mean fighting was detected in that interval; fighting is recognized from the most recent 50 frames of video, and the result is printed whenever it is recognized. See the relevant code.
  4. "It requests 8 GB of GPU memory by default and I only have 6 GB": I do not see an 8 GB GPU memory setting in the config you provided (see the note after this list);
  5. For the video stream issue I will have to ask someone else.
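(Side note on the memory pool message: the Paddle Inference Config API does let the caller choose the initial GPU memory pool size via enable_use_gpu; a minimal standalone sketch is below, with placeholder model file names — where exactly to apply it inside deploy/python/infer.py is not shown here:)

```python
from paddle.inference import Config, create_predictor

# Sketch only: the first argument of enable_use_gpu is the initial GPU memory
# pool size in MB, so a value smaller than 8000 can be requested explicitly.
config = Config("model.pdmodel", "model.pdiparams")  # placeholder file names
config.enable_use_gpu(2000, 0)  # 2000 MB initial pool on GPU 0
predictor = create_predictor(config)
```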

@DazzlingGalaxy (Author)

> 1. Does it work if you modify the config file as described in the docs and pass in a local model path?
> 2. For the saving logic, take a look at the code; it appears to be here.
> 3. The printed video_action_res: {'class': 1, 'score': 0.7190255} does not mean fighting was detected in that interval; fighting is recognized from the most recent 50 frames of video, and the result is printed whenever it is recognized. See the relevant code.
> 4. "It requests 8 GB of GPU memory by default and I only have 6 GB": I do not see an 8 GB GPU memory setting in the config you provided;
> 5. For the video stream issue I will have to ask someone else.

I have been away from home recently; I will try again when I have time, thanks. The 8 GB of GPU memory comes from the log I pasted earlier: Allocate too much memory for the GPU memory pool, assigned 8000 MB.
