Message boards :
News :
[TWIM Notes] Sep 14 2020
Joined: 30 Jun 20 Posts: 462 Credit: 21,406,548 RAC: 0
This Week in MLC@Home

Notes for Sep 14 2020. A weekly summary of news and notes for MLC@Home.

We're nearing the 2,000-hosts-with-recent-credit mark! Thanks to all who are contributing!

We are really, really excited that we're nearing the end of Datasets 1 and 2. Those pesky Parity* and EightBit* networks are starting to finish up, meaning that massive undertaking is nearing an end and will soon be ready for release.

New this week is the launch of the MLDStest project for beta-testing new client releases and new Dataset 3 WUs. Users have the option of opting out of running test units (under account preferences), but the number of test WUs is expected to be very small, and it would be a big help to run them if you can, so please consider remaining a tester.

This week we fixed most of the behind-the-scenes issues with Dataset 3 and have already sent out test WUs; we're hoping to queue the bulk of the Dataset 3 WUs this week. Preparations for Dataset 4 (MNIST and TrojAI) are underway. Paper writing continues for a conference deadline at the end of the month.

News:
SingleDirectMachine 10002/10004
EightBitMachine 9994/10006
SingleInvertMachine 10001/10003
SimpleXORMachine 10000/10002
ParityMachine 602/10005
ParityModified 118/10005
EightBitModified 4554/10006
SimpleXORModified 10005/10005
SingleDirectModified 10004/10004
SingleInvertModified 10002/10002

Last week's TWIM Notes: Sep 8 2020

Thanks again to all our volunteers!
-- The MLC@Home Admins
Joined: 1 Jul 20 Posts: 32 Credit: 22,436,564 RAC: 0
Any estimate for GPU testing? I have 16 (NVIDIA) GPUs I would gladly volunteer for testing, and I'm sure there will be many others. Keep up the good work, and thanks again for keeping us informed. Cheers!
Joined: 30 Jun 20 Posts: 462 Credit: 21,406,548 RAC: 0
Now that we have a testing project, it's easier to try more experimental things like GPU support without breaking the main app. But mainly we'll see this with Dataset 4 as it rolls out, which may not be for a few more weeks.
Joined: 3 Aug 20 Posts: 8 Credit: 7,650,164 RAC: 0
What could we expect in terms of supported GPUs? I read someone mentioning a 2080 or higher as a possibility, but I have a 2070 Super. Thank you for your efforts :>
Joined: 12 Aug 20 Posts: 21 Credit: 53,001,945 RAC: 0
I read someone mentioning 2080 or higher as a possibility. but i have a 2070 super. Most of us don't have anything even close to a 20xx! The two usable GPUs that I have are a GTX 960 and a GTX 750 Ti. Those are of course not the most modern, but they still keep me going fairly strong in projects like Einstein@Home and GPUGRID. I would reckon the GTX 960 is at least about average among the GPUs actually running out there, and the 750 Ti is a real classic. Under Linux they support both CUDA and OpenCL apps, and I have also tested them successfully on projects like Asteroids@home, Milkyway, and Collatz. I would immediately start running MLC with both of them if a GPU app comes along that supports them! :-) //Gunnar
Joined: 9 Jul 20 Posts: 142 Credit: 11,536,204 RAC: 3
Awesome news! Thanks for keeping us up to date, and congrats on reaching the 2,000-host milestone. Still very excited for what's to come, especially as I'm currently thinking of upgrading my GPU horsepower by switching from an old GTX 750 Ti to a 1650 Super. Very glad that you separated beta testing into a distinct project; that should come in handy for more complex use cases in the future. Keep it up!
Joined: 30 Jun 20 Posts: 462 Credit: 21,406,548 RAC: 0
For the record, I don't know where the rumor of a minimum 2xxx-series card started, but it's not true. We'll be bound by the minimum CUDA version that PyTorch supports; for PyTorch 1.6 I think that's CUDA 9.2, so any card supported by CUDA 9.2 should be supported for this. A high-end card shouldn't be necessary, especially since (at the moment) the goal isn't to train state-of-the-art over-parameterized networks on gigabytes of data. A simple 970 or 1050 should be plenty. I also want to reiterate that, at the moment, the networks in Datasets 1 and 2 (and probably 3, though not actually tested) actually train slower on GPUs than on CPUs (a substantial client rewrite might improve that). Sometimes the networks are so small that the overhead of transferring data to/from the GPU dwarfs the speedup gained in the actual matrix calculations.
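The overhead point above can be sketched with a simple cost model: each training step pays a fixed kernel-launch/transfer cost plus the (accelerated) compute time. The numbers below are hypothetical, chosen only to illustrate the trade-off, not measured from the MLC client.

```python
# Toy cost model: step time = fixed GPU launch/transfer overhead + compute/speedup.
# All constants are illustrative assumptions, not MLC@Home measurements.

def step_time(compute_s, overhead_s, speedup):
    """Seconds per training step under a fixed-overhead + accelerated-compute model."""
    return overhead_s + compute_s / speedup

OVERHEAD = 200e-6  # assumed per-step GPU transfer/launch cost: 200 microseconds
SPEEDUP = 20.0     # assumed GPU speedup on the raw matrix math

# Tiny MLC-style network: ~50 microseconds of CPU math per step.
tiny_cpu = step_time(50e-6, 0.0, 1.0)
tiny_gpu = step_time(50e-6, OVERHEAD, SPEEDUP)

# Large network: ~100 milliseconds of CPU math per step.
big_cpu = step_time(0.1, 0.0, 1.0)
big_gpu = step_time(0.1, OVERHEAD, SPEEDUP)

print(f"tiny net:  CPU {tiny_cpu * 1e6:.1f} us/step, GPU {tiny_gpu * 1e6:.1f} us/step")
print(f"large net: CPU {big_cpu * 1e3:.1f} ms/step, GPU {big_gpu * 1e3:.1f} ms/step")
```

With these assumed constants the tiny network is several times slower on the GPU (the fixed overhead dominates), while the large network is roughly 20x faster, which is why GPU support matters more for the bigger Dataset 4-style workloads.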
Joined: 3 Aug 20 Posts: 8 Credit: 7,650,164 RAC: 0
For the record, I don't know where the rumor of a minimum 2xxx series card started, but its not true. Thank you for clearing things up! I think the rumor mostly has to do with the 2xxx and 3xxx series having tensor cores, and people assumed those would be required; it's what I was assuming to some degree. But it would definitely be much nicer to be able to use my extra GPUs that aren't 2xxx series for this. I use my 2070 Super for gaming right now, though of course I'd still dedicate some of its resources when it's not in use.
©2024 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)