Yes, YOLOv5 .NET does launch CUDA-enabled and does work with BI. Hmm - I'll have to learn about this, as most of my experience with BI is YOLOv5 6.2 with the ipcam-combined model and specific object confirmations within BI.
But this use case is simple: an on-demand run when I leave home, for a few indoor...
Recent PC build with Linux Mint. Initially got CP 2.9.5 / YOLOv5 6.2 working with an old 1070 Ti.
(For anyone struggling on Linux Mint - the modules get installed with a path bug. The modules/runtime get installed under a "linuxmint" path (no space), but the modules look for "linux mint" (with a space)...
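A rough sketch of one possible workaround, assuming a symlink is enough to bridge the two spellings - the base path below is a placeholder, so point it at wherever your install actually put the runtime:

```python
# Hypothetical workaround sketch: create a "linux mint" symlink pointing at
# the "linuxmint" directory so lookups that expect the space still resolve.
from pathlib import Path

BASE = Path("/opt/codeproject/ai/runtimes")  # placeholder - use your actual install path
src = BASE / "linuxmint"    # where the runtime actually landed
dst = BASE / "linux mint"   # where the modules are looking

if src.is_dir() and not dst.exists():
    dst.symlink_to(src, target_is_directory=True)
    print(f"linked {dst} -> {src}")
```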
Installed the A2000 into a PowerEdge server with dual E5-2680v3. ALPR CUDA is working. CPU mode worked too, before I resolved the cuDNN issues.
(Used CUDA 12.3 + cuDNN 9. Adding the cuDNN bin folder to PATH didn't work, so I duplicated cudnn64_9.dll as a new cudnn64_8.dll and put all the cuDNN files into C:\Program Files\NVIDIA...
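For reference, the workaround amounts to something like the sketch below. The two paths are assumptions based on typical cuDNN 9 / CUDA 12.3 default install locations, so adjust them to match your machine:

```python
# Sketch of the DLL workaround described above. Both paths are assumed defaults.
import shutil
from pathlib import Path

CUDNN_BIN = Path(r"C:\Program Files\NVIDIA\CUDNN\v9.0\bin")  # assumed cuDNN 9 location
CUDA_BIN = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin")  # assumed CUDA location

# Duplicate the v9 DLL under the v8 name the module's loader is searching for.
shutil.copy2(CUDNN_BIN / "cudnn64_9.dll", CUDNN_BIN / "cudnn64_8.dll")

# Mirror every cuDNN DLL alongside the CUDA binaries so no PATH entry is needed.
for dll in CUDNN_BIN.glob("*.dll"):
    shutil.copy2(dll, CUDA_BIN / dll.name)
```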
Hmm, interesting - the Nehalem CPU generation of both desktops and servers appears to be impacted.
Ya. One test I tried was CP 2.1.8; I forget which ALPR version it defaulted to installing. It did start ALPR in CPU mode and didn't crash, but no CUDA there either.
Perhaps it's due to newer versions of Paddle...
Ya, quite familiar with the R710 GPU situation. If I remember correctly, even the PCIe risers will only give you 25W rather than the standard 75W, so you can't even use low-power cards like a 1650 or A2000.
The Xeon 56xx in the R710 is actually the same CPU generation as the i7 920 though, so it very well may...
:-(
Ya. Well there is a pattern here on this i7 920.
I'm likely going to be installing this A2000 into its permanent server this coming weekend: a PowerEdge R730xd with dual Xeon E5-2680v3, running ESXi 7 with a Server 2022 VM.
My hunch is ALPR will probably work with these Xeons and this Python / Paddle /...
Thanks for confirming. Same same.
Good luck. I've tried a fresh CP install many times, on CUDA 11.8, 12.2, and 12.3.
Well, that shows the issue spans both NVIDIA Turing and Ampere cards. Yet I have another Ampere card, a 3050, running ALPR CUDA just fine on a Ryzen 5900X.
What exact old CPU are you using? My...
Thanks for sharing.
So you get the same ALPR "failed to start", and the Windows Application event log has this error?
Application Error - Event ID 1000
Faulting application name: python.exe, version: 3.9.6150.1013, time stamp: 0x60d9eb23
Faulting module name: common.dll, version: 0.0.0.0, time stamp: 0x6585a281
Exception code...
No. Sorry, I tried to be brief explaining the goal of this new A2000, but perhaps I was confusing.
The A2000 is actually going into a PowerEdge VMware server which presently has a GTX 1650 installed in it. This GTX 1650 is only doing BI stuff.
Technically the A2000 is going to replace this GTX 1650...
No, misunderstanding - I have ALPR 3.0.2 working on an entirely separate CP deployment with a different card, an RTX 3050 8GB. This A2000 I'm testing on a bench is to replace it.
I cannot get ALPR 3.0.3 on the A2000 to stop crashing, regardless of trying three different CUDA versions: 11.8, 12.2, 12.3.
And on all 3...
I see. Perhaps something that could be considered for future releases: building archive repos into the CP web UI for old modules. I'm sure that's much easier said than done though.
Well, I was going to explore ALPR 2.9.1 to see if it behaves any differently on the A2000. If it's a bunch of work for ya, don't...
I've identified the modulesettings.json; this is a useful compatibility matrix.
But it's still not clear to me how to target older module versions within the CP web UI.
Based on this matrix, I should be able to run CP 2.5.6 and install License Plate Reader 2.5.0-RC6 or newer.
But how does one do...
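In the meantime, here's a rough sketch of pulling the compatibility entries out of a module's modulesettings.json. The path and the key names ("Modules", "ModuleReleases", "ModuleVersion", "ServerVersionRange") are assumptions, so verify them against your copy of the file:

```python
# Hedged sketch: print each module release and the server versions it claims
# to support. Path and key names are assumptions - check your own file.
import json
from pathlib import Path

settings = Path(r"C:\Program Files\CodeProject\AI\modules\ALPR\modulesettings.json")  # assumed location
data = json.loads(settings.read_text(encoding="utf-8"))

for module_id, module in data.get("Modules", {}).items():
    for release in module.get("ModuleReleases", []):
        print(module_id, release.get("ModuleVersion"), release.get("ServerVersionRange"))
```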
Thanks for investigating. I'll inquire over at CP.
I am, however, quite curious how one goes about specifying the module version to install.
Is the CP version hard-locking the available module versions, or is there an XML/config file that can be edited? I'd like to try a previous License Plate Reader...
Nothing in the CP log, even at trace level.
The Windows event log, though:
This is a fresh Win10 22H2 install. Basically nothing installed but CP, drivers, and GPU-Z.
It's an old i7 920 platform I had available. Just trying to test the new A2000 + CP before installing the A2000 into a PowerEdge server.