Posts by ProDigit

1) Questions and Answers : Unix/Linux : GPU update (Message 731)
Posted 28 Oct 2020 by ProDigit
Post:
I've got good news and bad news.

The good news is that I have (finally) received the first test GPU WU served up by the project, and it's crunching now.

The bad news is that it uses up to 2GB of RAM and 2GB of disk space for DS3 WUs, which is well beyond the limits CPU crunching requires, so if anyone else gets a GPU WU, it will immediately error out with an "exceeded disk space" error. This leads to a complicated problem on my end....

Do I a) raise the limits on all existing WUs so they can run on either a GPU or a CPU, meaning pure CPU crunchers will suffer because the client will refuse to run unless it has 2GB free per WU (despite actually using only ~300MB when not using a GPU), making things like an RPi3 with 1GB of RAM impossible to schedule; or b) somehow create two sets of WUs, one for GPUs and one for CPUs, and figure out how to tell the BOINC server which is which (hint: that ain't easy).

Option b) is really the only answer, but will require a bit more thought.

Perhaps it would be best to create a separate "mlds-gpu" application with a separate WU pool. Hmm.

Option b. Most of my MLC units run on Atom boards that only have 2GB of RAM (roughly 1.5GB of it available) shared between all 4 cores, and 8GB of remaining disk space.
CUDA WUs also shouldn't need to be this big.
I was hoping you'd start with Intel, as most Intel iGPUs sit unused (or are doing Collatz).
If GPU acceleration is slow, it would make no sense to have big, heavy GPUs do the job; focus on the smaller ones instead.
On the other hand, if you can improve performance on big GPUs, getting them 90-100% utilized, then a separate pool is necessary, as those GPU systems usually do meet the RAM requirements.
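For what it's worth, on the server side a separate "mlds-gpu" application would just be another app entry in the project's project.xml (read by bin/xadd). A rough sketch; the names are taken from the quoted suggestion, and the details are obviously the admins' call:

<boinc>
    <app>
        <name>mlds</name>                 <!-- existing CPU application -->
        <user_friendly_name>MLDS (CPU)</user_friendly_name>
    </app>
    <app>
        <name>mlds-gpu</name>             <!-- hypothetical GPU application with its own WU pool -->
        <user_friendly_name>MLDS (GPU)</user_friendly_name>
    </app>
</boinc>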
2) Questions and Answers : Unix/Linux : Pi4 (Message 730)
Posted 28 Oct 2020 by ProDigit
Post:
FWIW, I have 5x Raspberry Pi 4 machines crunching quite happily. Right now they are working on WCG, but will come back around here sooner or later. They are OC'd to 2GHz and have coolers. But then again, x86/x64 machines have coolers too. OTOH, they use less energy combined than a dim light bulb.

True, but their Cortex-A72 CPUs are also much slower than comparable x86 CPUs.
I went with an Atomic Pi. Its 1.6-1.7GHz CPU can crunch data about as fast as 4x CM4 modules.
On top of that, it uses the Beignet Intel OpenCL driver, and I run projects on the iGPU as well.
All the while, each Atomic Pi unit pulls about 12.5W (at 100% CPU and 100% GPU load).
That's measured at the wall, including PSU inefficiencies and the fan.
It's very comparable to 4x Raspberry Pi 4Bs, but with GPU crunching added.
The Intel CPU is built on 14nm, versus 28nm for the Pi.

Arm is about as good as Atom in terms of efficiency.

And 14nm Atom CPUs are about equal to Ryzen 9 3950X CPUs in price, efficiency, and (as long as you have enough units) aggregate speed.
That being said, a Ryzen 3900X or 3950X pulls only 150W at the wall (with water cooling) at its stock 3.5GHz.
To get that kind of performance out of Raspberry Pis, you'd have to pair 48 to 64 units together.
The cost of such a server would be far more than even a Threadripper or EPYC server.
The power draw would be higher too; I'd estimate 500-750W.
If you compare an 8-unit Pi cluster, it would perform about the same as a 35W dual-core Celeron laptop.
3) Questions and Answers : Issue Discussion : Rpi4 now erroring new tasks. (Message 687)
Posted 25 Oct 2020 by ProDigit
Post:
Maybe unrelated, but if you're running the Pi from a slow SD card, it may queue up write tasks and error out.
The best thing you can do is find a way to boot it off an SSD (via a USB-to-SATA adapter).
It's not only faster, but more reliable too.
4) Questions and Answers : Unix/Linux : Pi4 (Message 686)
Posted 25 Oct 2020 by ProDigit
Post:
My personal tip for an ARM-based SBC to be used in this project would be the Nvidia Jetson Nano (quad-core Cortex-A57 @ 1430 MHz stock), coupled with a good cooling fan so it can be overclocked to 2000 MHz.

You can augment the Jetson Nano with a Coral USB Accelerator, so you can use not only the TensorFlow, PyTorch, Caffe and MXNet AI frameworks available on the Jetson Nano, but also have a TPU coprocessor in the form of the Coral USB stick. That stick can of course also be used with other ARM SBCs such as the Raspberry Pi 4 or the Odroid-N2+.


Apparently all Cortex-A50-series CPUs are very, very slow, and not worth doing compute on.
That includes any of the Nvidia developer boards.
The only reason they may be somewhat worthwhile is their GPU capability.
But even those are equivalent to a GT 730, which is half the speed of a GT 1030, which is half the speed of a GTX 1050, which is half the speed of a GTX 1660, which is half the speed of an RTX 2060, which is about half the speed of an RTX 3080.
So if you're going to do GPU computations, a 3080 is about 40 to 60x faster than a GT 730.

For ARM CPUs, you'll have to rely on Cortex-A70-series cores.
The A72 is mostly found only in quad-core configurations.
The higher-end cores (e.g. the A77) are mostly found in big.LITTLE configurations, which Android restricts from program access because the device would overheat. Meaning, if you want to crunch data on a big.LITTLE chip under Android, the tasks will be shifted to the little cores.
Unless you can find a Linux image for the device, and make sure it has sufficient cooling.

At this point, I'd say the only ARM CPUs worth investing time in are the A70-series cores with NEON instructions, as well as Neoverse (which should have come out this year already, but hasn't).
These chips also can't be in cellphones, tablets, or anything with a battery.
They have to be set-top boxes, or built into a server or desktop where sufficient cooling is present.
Think Pi 4, which throttles unless there's either active cooling or a large heat sink on the CPU.
And that's only a Cortex-A72.
The higher-end models are built on 12 and 10nm, and can reach speeds over 3GHz.
But no one makes them yet.
5) Questions and Answers : Unix/Linux : GPU update (Message 685)
Posted 25 Oct 2020 by ProDigit
Post:
Hi everyone,
I recently installed BOINC on a Raspberry Pi 4 and was pleased to see it was contributing spare cycles to projects. Then I saw an Nvidia Jetson TK1 new on eBay and thought that might be a useful device to BOINC with, given its parallel processing capabilities. However, I see the ARM architecture isn't recognised and errors are reported. I'm writing this post on the system now, though I'm certain CUDA isn't active yet. I would be grateful if anyone knows of any active forum or links relevant to enabling this type of embedded board for project use.
I am reading through https://boinc.berkeley.edu/wiki/Client_configuration#Options
just looking for additional help.
Many Thanks, Risque.

Nvidia developer boards are nothing but low-power ARM cores (A55, A53, A52 or lower) with a low-power GPU that sits somewhere around a GT 730.
They're not the right boards to do compute calculations with.
Their CPU compute numbers in particular are very, very slow!
They're expensive, and for the $99 a Jetson Nano costs, you'd probably rather buy a GT 1030 (which is about two to three times as fast).

The Pi 4B uses A72 cores; they're slightly better for compute loads.
6) Questions and Answers : Unix/Linux : Computation Error Orange Pi [ARM] (Message 684)
Posted 25 Oct 2020 by ProDigit
Post:
We need to be better about listing minimum requirements for each task, since we can't link statically. :/

If ARM support is going to get more popular, we should look at lowering that glibc requirement. Thanks for trying it out.

I wonder if issues like these can be sent to the BOINC Manager's Notices window?
I often see notices there when a project has problems.
7) Questions and Answers : Unix/Linux : Less and less score (Message 683)
Posted 25 Oct 2020 by ProDigit
Post:
I've noticed MLC was getting less and less score on my PCs.
I wonder if you guys have changed the project address (from HTTP to HTTPS or something?),
or if there's just a lack of WUs.
8) Questions and Answers : Unix/Linux : GPU update (Message 682)
Posted 25 Oct 2020 by ProDigit
Post:
For tiny computations, please add Intel GPUs first.
If the Vega 54 is only getting a very small speed boost, it's probably not worth investing in big GPUs like the RTX 3000 series.
There are plenty of Linux and Windows PCs whose Intel iGPUs only run Collatz (and maybe Einstein).
Intel iGPUs would be the right GPUs for anything with a speedup of 60% or less over the CPU.
If you're comparing CPU vs GPU, a 2080 is about 200x faster than most quad-core CPUs, but those aren't the GPUs you should focus on.
Intel 11th-gen CPUs have pretty powerful iGPUs (1 TFLOPS and more).
But even most of the rest, like Celerons, can handle 100% extra load (200 GFLOPS CPU and 200 GFLOPS iGPU).

Big GPUs usually run multiple WUs in parallel.
It'll depend on how many double-precision calculations you have to do.
Sometimes hogging a powerful GPU, because MLC maxes out the GPU's DP units, isn't the best solution.
I've always found 1 CPU core + 1 GPU (CPU for 32- and 64-bit computations, GPU for 32-bit and lower) to be optimal, especially for small WUs on a small iGPU.
9) Message boards : Science : INT 8 support?? (Message 499)
Posted 19 Sep 2020 by ProDigit
Post:
I read in a news article that INT8 instructions are supported.
Do know that a single RTX 3090 can push around 250-300 TOPS on that kind of data.
They're very well optimized for INT8, and for full or half precision.
If you ever wish to include GPUs in the data crunching, the RTX 3000 series will do in a day what would take normal PCs months or even years!
10) Message boards : News : New client released with ARM support! (Message 423)
Posted 28 Aug 2020 by ProDigit
Post:
The MLC project for ARM isn't viewable in BOINC on Android 10 yet.
11) Questions and Answers : Issue Discussion : Too many WUs (Message 288)
Posted 31 Jul 2020 by ProDigit
Post:
Come on, you can't seriously compare FAH to BOINC. The functionality of the FAH client is very limited. That's not negative, it does what it's designed to do. But what is that really? Get a task, finish it, return it, get another one. No decisions to make. And you take that as an example of being smart? BOINC is much more complex than that. Different projects, different applications, work cache, deadlines, resource shares. People use all that to their liking. And then they're annoyed when a machine can't do it on its own or makes decisions they don't approve of. Set up BOINC as simple as FAH. Eliminate the cache, run a single project, and leave it alone. Then you are a big step closer to what FAH does and BOINC will do it well enough. You can carefully add more complexity but remember YOU are responsible for providing the right conditions.

Yes, and that is exactly the reason why BOINC isn't working very well:
the scientists need some sort of standard to work towards, as with FAH.
BOINC Manager won't be able to learn anything from FAH in terms of WU queuing, but it can learn from how well FAH's WUs are tailored to the hardware they run on.
In my opinion, scientists need a standardized baseline, and from that baseline each client can adjust individually.

The way things currently are, one project aims for very small WUs that don't work well on fast hardware, while others run the GPU at 70 or 80% load, and yet others use the full 100%.
It's a topic better suited to the BOINC forums, but the whole stack of BOINC statistics, BOINC setup, and BAM is a spaghetti of systems working against one another...
Instead of editing the cc_config and app_config files, there should at least be some adjustment for these things within the GUI.
Like: use only 1 GPU; don't use the CPU; or set CPU and/or GPU usage values to, for instance, 0.5 or 0.33 (along the lines of the app_config sketch below).
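Today the only way I know to get that per-app control is by hand, with something like this in app_config.xml (a minimal sketch; I'm assuming a project app named mlds that has a GPU version, and the fractions are just examples):

<app_config>
    <app>
        <name>mlds</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>   <!-- each GPU task claims half a GPU, so two run per GPU -->
            <cpu_usage>0.33</cpu_usage>  <!-- each GPU task budgets a third of a CPU core -->
        </gpu_versions>
    </app>
</app_config>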
12) Questions and Answers : Issue Discussion : Too many WUs (Message 285)
Posted 30 Jul 2020 by ProDigit
Post:
BOINC is one of the worst programs in terms of running independently.
It often runs WUs inefficiently: it either downloads too many WUs, runs WUs with fewer CPU cores than optimal due to RAM shortage, or simply misconfigures multi-core WUs. I see it all the time, one WU hogging multiple CPU cores, and these are public, non-beta WUs!
There have also been numerous complaints about BOINC's WU priority algorithm (the one that decides which WUs get processed first); specifically, some WUs are nearly 98% done and sit pending while new WUs are started.

Another thing BOINC does extremely poorly is that the end user doesn't really have control.
BOINC depends on a whole bunch of algorithms and dependencies that are meant to make life easier, but they actually do the opposite.
You can, for instance, set certain configurations in BOINC Manager. However, those configurations may only work for one system.
BOINC Manager offers different profiles, but the number of distinct setups is limited to the number of profiles you have.
If you want a different setting for each client, you can't really use the BAM account manager. I mean, you could, but if you have more than, say, 6 systems with different configurations, BAM is not the way to go.

And if you do use it, and one system has few CPU cores with 2 Nvidia GPUs, while another has many CPU cores and an Intel iGPU, setting up for those differences in configuration is a real pain in BAM.

Full control isn't given to, or prioritized for, the client.
You can, for instance, set something in the client, only to have BAM revert it to a global setting.
And it's infuriating to see how BAM constantly reverts things: for instance, a project set to GPU only can be reset by BAM to GPU + CPU when no GPU work is available.

Yes, you can alter the cc_config and app_config files, but that job is tedious, and it changes whenever a project brings out new WUs or anything in existing projects changes (e.g. the RAM usage of WUs, or WU intensity (e.g. atom or thread count)).

I could really go on and on about the damn annoying issues BOINC has; but the fact of the matter is that it's a really lousy program.
It could learn a lot from client software developed in the late 1990s, like Napster or Azureus/Vuze, which at least got the scheduling of downloads right, and from FAH, which can tailor certain WUs to fit certain hardware.
If the FAH client tells the server it has a high-end RTX GPU (like a 2080), the server will assign and send more of the higher-intensity WUs (high atom count, highly parallel).
If the client tells the server it has a low-end GPU, or only a CPU available, the server will assign more of the lower-intensity WUs.
In BOINC, the end user has to adjust for that, and there's no standard.
For instance: if a CPU runs a WU in X amount of time, and a GPU runs the WU in Y amount of time, make that a reference for all WUs, from all projects.
Based on that reference, it would be so much easier for end users to say "Oh, my GPU is 3x as fast as the reference, so I'd like to tune my system's WUs for that performance".
No, instead BOINC has no standard, doesn't care what your hardware is, and often runs WUs at less than optimal settings.
In some cases, GPU WUs run at a third of my 2080 Ti's capability (even when running three at a time).

Sorry for the rant, but no, BOINC definitely does NOT work best when you just leave it alone. Its "AI" is virtually zero!

As for the number of WUs: I was forced to abort around 200 WUs today, because they would never have finished on time.
And this is with me being nice and offering two days solely to MLC, putting other projects on hold.
13) Questions and Answers : Issue Discussion : No WUs: "This computer has reached a limit on tasks in progress" (Message 284)
Posted 30 Jul 2020 by ProDigit
Post:
I had to abort over 200 WUs, because for some reason they were downloaded but I would never make the deadline on them.
I may abort even more, depending on the situation.
I think it benefits the project more to send those units out to others than to let them blow past the deadline on my machines.
14) Questions and Answers : Issue Discussion : No WUs: "This computer has reached a limit on tasks in progress" (Message 279)
Posted 29 Jul 2020 by ProDigit
Post:
I didn't really download lots of WUs.
The system would download a few, finish them, and then wait.
On a quad-core machine I should at least be able to download 4 WUs, with perhaps 1 or 2 extra, no?

Anyway, this morning I had a whole list of them downloaded (more than I can process before the deadline).
15) Questions and Answers : Unix/Linux : Pi4 (Message 278)
Posted 29 Jul 2020 by ProDigit
Post:
The Pi 4 will be rather slow compared to an x86/64 CPU.
A quad-core Pi 4 overclocked to 2GHz roughly equals the performance of a quad-core Atom processor running at 1.66GHz.
x86 instructions are performance oriented.
ARM instructions are power optimized.

The Pi 4 isn't particularly power efficient either, due to being built on 28nm.
You might have better results from an AMLOGIC S905X3 TV box, found online for about $30. It uses a 12nm process and runs at about 3W (vs. almost 8W for the Pi 4).

Concerning ARM, recent Snapdragons replace the 4 big cores with 2, and the 4 small cores with 6.
This is a significant increase in performance compared to previous designs.
Even AMLOGIC's S922X TV boxes (sold for around $80) now incorporate a 2/6-core design, where the little cores actually run faster than the big cores!

So if you succeed in running it on a Pi, perhaps the next phase will be Snapdragon CPUs (like those found in the Pixel 3a and the Samsung Galaxy A and S series) and AMLOGIC TV boxes.
They're very underestimated, and perform pretty well for the price!
16) Questions and Answers : Issue Discussion : MLC@home WUs using 2 CPUs (Message 277)
Posted 29 Jul 2020 by ProDigit
Post:
You may also want to set <project_max_concurrent>1</project_max_concurrent> in app_config.xml, in addition to <max_concurrent>1</max_concurrent>.

https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration

It is possible to set some project-specific preferences that will act as defaults for the client, but it will take a bit of setup, and it's low on my priority list right now since users can configure it manually on the client side if they want that much control.


It depends.
If MLC's new WUs will require the same amount of RAM or more, setting project_max_concurrent to 1 is a better choice than max_concurrent.
However, if MLC releases new WUs that require substantially less RAM, setting only max_concurrent to 1 (on the current app) would still allow more than 1 MLC WU to be loaded at once.
I don't mind running 4 MLC instances per machine, as long as the WUs fit in the available RAM.
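For reference, both limits can live in the same app_config.xml; a minimal sketch (assuming the app name stays mlds):

<app_config>
    <app>
        <name>mlds</name>
        <max_concurrent>1</max_concurrent>             <!-- at most 1 running WU of this app -->
    </app>
    <project_max_concurrent>1</project_max_concurrent> <!-- at most 1 running WU across all MLC apps -->
</app_config>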
17) Questions and Answers : Issue Discussion : MLC@home WUs using 2 CPUs (Message 225)
Posted 23 Jul 2020 by ProDigit
Post:
With other projects, this would be found on the 'preferences' page.
But I couldn't find it on https://www.mlcathome.org/mlcathome/prefs.php?subset=project

Where should I look?
18) Questions and Answers : Issue Discussion : No WUs: "This computer has reached a limit on tasks in progress" (Message 224)
Posted 23 Jul 2020 by ProDigit
Post:
Where can I change the limitation?
Below is my BOINC log:
7/22/2020 1:57:49 PM |  | cc_config.xml not found - using defaults
7/22/2020 1:57:49 PM |  | Starting BOINC client version 7.16.7 for windows_x86_64
7/22/2020 1:57:49 PM |  | Libraries: libcurl/7.47.1 OpenSSL/1.0.2s zlib/1.2.8
7/22/2020 1:57:49 PM |  | Data directory: X:\ProgramData\BOINC
7/22/2020 1:57:49 PM |  | Running under account creat
7/22/2020 1:57:49 PM |  | OpenCL: Intel GPU 0: Intel(R) UHD Graphics 605 (driver version 25.20.100.6577, device version OpenCL 1.2 NEO, 3277MB, 3277MB available, 108 GFLOPS peak)
7/22/2020 1:57:49 PM |  | OpenCL CPU: Intel(R) Pentium(R) Silver N5000 CPU @ 1.10GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 7.6.0.1125, device version OpenCL 1.2 (Build 0))
7/22/2020 1:57:49 PM |  | Windows processor group 0: 4 processors
7/22/2020 1:57:49 PM |  | Host name: LAPTOP-M1HKAT00
7/22/2020 1:57:49 PM |  | Processor: 4 GenuineIntel Intel(R) Pentium(R) Silver N5000 CPU @ 1.10GHz [Family 6 Model 122 Stepping 1]
7/22/2020 1:57:49 PM |  | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 cx16 sse4_1 sse4_2 movebe popcnt aes rdrandsyscall nx lm vmx tm2 pbe fsgsbase smep
7/22/2020 1:57:49 PM |  | OS: Microsoft Windows 10: Core x64 Edition, (10.00.19041.00)
7/22/2020 1:57:49 PM |  | Memory: 15.77 GB physical, 15.77 GB virtual
7/22/2020 1:57:49 PM |  | Disk: 119.08 GB total, 32.19 GB free
7/22/2020 1:57:49 PM |  | Local time is UTC -4 hours
7/22/2020 1:57:49 PM |  | No WSL found.
7/22/2020 1:57:49 PM | MLC@Home | Found app_config.xml
7/22/2020 1:57:49 PM | MLC@Home | mlds: Max 1 concurrent jobs
7/22/2020 1:57:50 PM |  | Reading preferences override file
7/22/2020 1:57:50 PM |  | Preferences:
7/22/2020 1:57:50 PM |  | max memory usage when active: 8073.10 MB
7/22/2020 1:57:50 PM |  | max memory usage when idle: 14531.58 MB
7/22/2020 1:57:50 PM |  | max disk usage: 31.42 GB
7/22/2020 1:57:50 PM |  | max CPUs used: 2
7/22/2020 1:57:50 PM |  | (to change preferences, visit a project web site or select Preferences in the Manager)
7/22/2020 1:57:50 PM |  | Setting up project and slot directories
7/22/2020 1:57:50 PM |  | Checking active tasks
7/22/2020 1:57:50 PM | MLC@Home | URL https://www.mlcathome.org/mlcathome/; Computer ID 1326; resource share 2500
7/22/2020 1:57:50 PM |  | Setting up GUI RPC socket
7/22/2020 1:57:50 PM |  | Checking presence of 1099 project files
7/22/2020 4:10:39 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 6:29:46 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:04:07 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:28:48 PM | MLC@Home | task EightBitModified-1594569243-3668-3_1 resumed by user
7/22/2020 9:28:51 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:30:59 PM |  | Reading preferences override file
7/22/2020 9:30:59 PM |  | Preferences:
7/22/2020 9:30:59 PM |  | max memory usage when active: 8073.10 MB
7/22/2020 9:30:59 PM |  | max memory usage when idle: 14531.58 MB
7/22/2020 9:30:59 PM |  | max disk usage: 31.43 GB
7/22/2020 9:30:59 PM |  | max CPUs used: 2
7/22/2020 9:30:59 PM |  | (to change preferences, visit a project web site or select Preferences in the Manager)
7/22/2020 9:31:32 PM |  | Reading preferences override file
7/22/2020 9:31:32 PM |  | Preferences:
7/22/2020 9:31:32 PM |  | max memory usage when active: 8073.10 MB
7/22/2020 9:31:32 PM |  | max memory usage when idle: 14531.58 MB
7/22/2020 9:31:32 PM |  | max disk usage: 31.43 GB
7/22/2020 9:31:32 PM |  | max CPUs used: 2
7/22/2020 9:31:32 PM |  | (to change preferences, visit a project web site or select Preferences in the Manager)
7/22/2020 9:31:47 PM | MLC@Home | update requested by user
7/22/2020 9:32:04 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:35:31 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:38:57 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:42:25 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:45:52 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:49:21 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:52:47 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 9:56:11 PM | MLC@Home | No tasks sent
7/22/2020 9:56:11 PM | MLC@Home | This computer has reached a limit on tasks in progress
7/22/2020 9:56:11 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 10:03:36 PM | MLC@Home | No tasks sent
7/22/2020 10:03:36 PM | MLC@Home | This computer has reached a limit on tasks in progress
7/22/2020 10:03:36 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 10:15:24 PM | MLC@Home | update requested by user
7/22/2020 10:15:25 PM | MLC@Home | update requested by user
7/22/2020 10:15:44 PM | MLC@Home | Project requested delay of 202 seconds
7/22/2020 10:19:11 PM | MLC@Home | No tasks sent
7/22/2020 10:19:11 PM | MLC@Home | This computer has reached a limit on tasks in progress
7/22/2020 10:19:11 PM | MLC@Home | Project requested delay of 202 seconds


Also, what's the 202-second delay for?
I thought my laptop was synced with the atomic clock over the internet?
19) Questions and Answers : Issue Discussion : MLC@home WUs using 2 CPUs (Message 204)
Posted 22 Jul 2020 by ProDigit
Post:
Since the machine is shared with other projects, I think I found the correct app_config.xml settings for me:

<app_config>
    <app>
        <name>mlds</name>
        <max_concurrent>1</max_concurrent>
    </app>
</app_config>


The name is 'mlds'.

By running only 1 MLC WU, I can still share the remaining 2 to 3 threads with other projects (depending on memory availability).

Is there a possibility the WUs can be trimmed to use more like 500MB?
That would work out better with my servers. Even 50-100MB less memory would be appreciated.
I have to reconfigure 20 units to accommodate MLC, and later an additional 20 servers.
It would be nice if there were some sort of setting in each person's account on the webpage (https://www.mlcathome.org/mlcathome/prefs.php?subset=project) to set the number of threads.
I presume it's using either Docker or a VM to end up with this high RAM usage?

Also, is the RAM data compressible? If so, I'm thinking about installing zram on these units.
They don't have much eMMC space either.

*edit: MLC is also the first project that completely crashes my units if 3 or 4 MLC WUs are loaded at once. As soon as they load, the unit crashes, so I have to be quick and pause BOINC before it starts, and configure it correctly before resuming.
It's only a one-time config change though...
I'm not sure if there's something that can be done about this from your end?
It appears some projects' WUs, when there isn't enough memory, log a 'mem error' and wait. MLC on my units doesn't seem to do that.
20) Questions and Answers : Issue Discussion : MLC@home WUs using 2 CPUs (Message 201)
Posted 21 Jul 2020 by ProDigit
Post:
OK, I ran into the 60-minute timeout for editing the previous post.
The real issue is that I'm getting low-memory errors.
It's hard to see in boinctui what the exact reason is, but I found out it's memory related.

I'm running on a 2-core/4-thread machine with 2GB of RAM. The OS uses up about 150MB, so there's about 1.86GB of RAM left, taken up by 2x MLC WUs.
2 CPU threads are waiting for memory.

What are my settings and options?

<app_config>
    <app>
        <name>mldg</name>
        <gpu_versions>
            <cpu_usage>1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
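(What ended up working, as noted in Message 204 above, was limiting concurrency rather than touching gpu_versions; a minimal sketch, using the app name mlds rather than mldg:)

<app_config>
    <app>
        <name>mlds</name>                    <!-- the correct app name, per Message 204 -->
        <max_concurrent>1</max_concurrent>   <!-- run only one MLC WU at a time on this 2GB box -->
    </app>
</app_config>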

