[TWIM Notes] Sep 14 2020

Message boards : News : [TWIM Notes] Sep 14 2020
pianoman [MLC@Home Admin]
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Jun 20
Posts: 462
Credit: 21,406,548
RAC: 0
Message 477 - Posted: 14 Sep 2020, 20:36:35 UTC

This Week in MLC@Home
Notes for Sep 14 2020
A weekly summary of news and notes for MLC@Home

We're nearing the 2000-hosts-with-recent-credit mark! Thanks to all who are contributing!

We are really, really excited to be nearing the end of Datasets 1 and 2. Those pesky Parity* and EightBit* networks are starting to finish up, meaning that massive undertaking is nearing completion and will soon be ready for release.

New this week is the launch of the MLDStest application for beta-testing new client releases and new Dataset 3 WUs. Users can opt out of running test units if they wish (under account preferences), but the number of test WUs is expected to be very small and running them would be a big help, so please consider remaining a tester.

We fixed most of the behind-the-scenes issues with Dataset 3 and have already sent out test WUs; this week we hope to queue the bulk of the Dataset 3 WUs. Preparations for Dataset 4 (MNIST and TrojAI) are underway.

Paper writing continues for a conference deadline at the end of the month.

News:

  • New "MLDStest" application added to the site.
  • New project preferences turned on, including the ability to limit which apps you run, and limits on CPU cores to use at once.
  • Client v9.60 in testing, no issues so far, will release to the regular MLDS app later this week.
  • Client changes for Dataset 4 underway.
  • A technical discussion of Dataset 3 is underway in the forums for those interested in the science behind it.
  • We'll do an official release of a preliminary Dataset (1+2) once we have at least 1000 examples of each machine type, and we're getting closer!
  • New server has not shipped from the vendor yet, but should be "any day now".
  • We haven't forgotten about badges! We're just focused on the paper and new WU generation at the moment. That said, if volunteers would like to offer potential badge designs, head on over to the forums and join the discussion.



Project status snapshot:

Tasks
Tasks ready to send 12847
Tasks in progress 22234
Users
With credit 690
Registered in past 24 hours 25
Hosts
With recent credit 1960
Registered in past 24 hours 42
Current GigaFLOPS 31675.59

Dataset 1 and 2 progress:

SingleDirectMachine      10002/10004
EightBitMachine           9994/10006
SingleInvertMachine      10001/10003
SimpleXORMachine         10000/10002
ParityMachine              602/10005
ParityModified             118/10005
EightBitModified          4554/10006
SimpleXORModified        10005/10005
SingleDirectModified     10004/10004
SingleInvertModified     10002/10002 


Last week's TWIM Notes: Sep 8 2020

Thanks again to all our volunteers!

-- The MLC@Home Admins
Dataman

Joined: 1 Jul 20
Posts: 32
Credit: 22,436,564
RAC: 0
Message 480 - Posted: 15 Sep 2020, 15:24:14 UTC

Any estimate for GPU testing? I have 16 (NVIDIA) GPUs I would gladly volunteer for testing, and I'm sure there will be many others. Keep up the good work, and thanks again for keeping us informed. Cheers!

pianoman [MLC@Home Admin]
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Jun 20
Posts: 462
Credit: 21,406,548
RAC: 0
Message 481 - Posted: 15 Sep 2020, 20:21:59 UTC - in response to Message 480.  

Now that we have a testing project, it's easier to try more experimental things like GPU support without breaking the main app. But mostly we'll see this with Dataset 4 as it rolls out, which may not be for a few more weeks.
An0ma1y

Joined: 3 Aug 20
Posts: 8
Credit: 7,650,164
RAC: 0
Message 483 - Posted: 16 Sep 2020, 20:59:38 UTC - in response to Message 481.  

What could we expect in terms of supported GPUs?

I read someone mentioning a 2080 or higher as a possibility, but I have a 2070 Super.

Thank You for your efforts :>
Gunnar Hjern

Joined: 12 Aug 20
Posts: 21
Credit: 53,001,945
RAC: 0
Message 484 - Posted: 17 Sep 2020, 9:47:06 UTC - in response to Message 483.  
Last modified: 17 Sep 2020, 9:49:50 UTC

I read someone mentioning a 2080 or higher as a possibility, but I have a 2070 Super.

Most of us don't have anything even close to a 20xx!!!

The two usable GPUs that I have are a GTX 960 and a GTX 750 Ti.
They are of course not the most modern, but they still keep me going fairly strong in projects like Einstein@Home and GPUGRID.
I would reckon that the GTX 960 is at least about average among the GPUs actually running out there, and the 750 Ti is a real classic.
Under Linux they support both CUDA and OpenCL apps, and I have also tested them successfully on projects like Asteroids@home, Milkyway, and Collatz.

I would immediately start running MLC with both of them if a GPU app that supports them comes along! :-)

//Gunnar
bozz4science

Joined: 9 Jul 20
Posts: 142
Credit: 11,536,204
RAC: 3
Message 486 - Posted: 17 Sep 2020, 15:41:07 UTC - in response to Message 484.  
Last modified: 17 Sep 2020, 15:41:55 UTC

Awesome news! Thanks for keeping us up to date, and congrats on reaching the 2000-host milestone. I'm still very excited for what's to come, especially as I'm currently thinking of upgrading my GPU horsepower by switching from an old GTX 750 Ti to a 1650 Super.

Very glad that you separated beta testing into a distinct app; that should come in handy for more complex use cases in the future.

Keep it up!

pianoman [MLC@Home Admin]
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Jun 20
Posts: 462
Credit: 21,406,548
RAC: 0
Message 491 - Posted: 17 Sep 2020, 18:24:43 UTC

For the record, I don't know where the rumor of a minimum 2xxx-series card started, but it's not true.

We'll be bound by the minimum version of CUDA that PyTorch supports. For PyTorch 1.6, I think this is 9.2, so any card that's supported by CUDA 9.2 should be supported for this. A high-end card shouldn't be necessary, especially since (at the moment) the goal isn't to train state-of-the-art over-parameterized networks on gigabytes of data. A simple 970 or 1050 should be plenty.
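[Editor's note: as a back-of-the-envelope illustration (not project code), the compatibility cutoff above amounts to a compute-capability check. The 3.0 floor assumed for CUDA 9.x and the per-card capability values below are taken from NVIDIA's published tables; treat the exact numbers as assumptions, not project policy.]

```python
# Sketch: which cards mentioned in this thread would clear a CUDA 9.2-era
# PyTorch build? CUDA 9.x dropped Fermi, so compute capability >= 3.0 is
# the assumed floor here.
MIN_COMPUTE_CAPABILITY = (3, 0)

# Published NVIDIA compute capabilities for a few cards from this thread.
CARDS = {
    "GTX 750 Ti":     (5, 0),  # Maxwell
    "GTX 960":        (5, 2),  # Maxwell
    "GTX 970":        (5, 2),  # Maxwell
    "GTX 1050":       (6, 1),  # Pascal
    "RTX 2070 Super": (7, 5),  # Turing
}

def supported(capability, floor=MIN_COMPUTE_CAPABILITY):
    """True if a card's compute capability meets the assumed minimum."""
    return capability >= floor

for name, cc in CARDS.items():
    print(f"{name}: {'ok' if supported(cc) else 'too old'}")
```

By this check every card discussed in the thread clears the bar comfortably; only pre-Kepler (compute capability < 3.0) hardware would be excluded.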

I also want to reiterate that, at the moment, the networks in Datasets 1 and 2 (and probably 3, though not actually tested) actually train slower on GPUs than on CPUs (maybe a substantial client rewrite could improve that). Sometimes the networks are so small that the overhead of transferring data to and from the GPU dwarfs the speedup gained in the actual matrix calculations.
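[Editor's note: the transfer-overhead point can be illustrated with a toy cost model. All numbers below are invented for illustration, not measurements from the MLC client; only the shape of the tradeoff matters: a fixed per-step host/device transfer cost versus a proportional compute speedup.]

```python
# Toy cost model for the CPU-vs-GPU tradeoff on small networks.
# Illustrative throughputs: 10 GFLOP/s CPU, 1 TFLOP/s GPU, and a fixed
# 1 ms host<->device transfer cost paid on every training step.

def step_time_cpu(flops, cpu_flops_per_s=1e10):
    """Seconds per training step on the CPU (pure compute)."""
    return flops / cpu_flops_per_s

def step_time_gpu(flops, gpu_flops_per_s=1e12, transfer_overhead_s=1e-3):
    """Seconds per training step on the GPU: fixed transfer + fast compute."""
    return transfer_overhead_s + flops / gpu_flops_per_s

for flops in (1e5, 1e7, 1e9, 1e11):
    cpu, gpu = step_time_cpu(flops), step_time_gpu(flops)
    winner = "GPU" if gpu < cpu else "CPU"
    print(f"{flops:>9.0e} FLOPs/step -> CPU {cpu:.2e}s, GPU {gpu:.2e}s ({winner})")
```

For tiny networks (the 1e5-FLOP case) the fixed transfer cost alone exceeds the entire CPU step time, so the CPU wins; the GPU only pulls ahead once the per-step compute is large enough to amortize the overhead, which matches the behavior described above.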
An0ma1y

Joined: 3 Aug 20
Posts: 8
Credit: 7,650,164
RAC: 0
Message 496 - Posted: 17 Sep 2020, 20:41:27 UTC - in response to Message 491.  

For the record, I don't know where the rumor of a minimum 2xxx-series card started, but it's not true.

We'll be bound by the minimum version of CUDA that PyTorch supports. For PyTorch 1.6, I think this is 9.2, so any card that's supported by CUDA 9.2 should be supported for this. A high-end card shouldn't be necessary, especially since (at the moment) the goal isn't to train state-of-the-art over-parameterized networks on gigabytes of data. A simple 970 or 1050 should be plenty.

I also want to reiterate that, at the moment, the networks in Datasets 1 and 2 (and probably 3, though not actually tested) actually train slower on GPUs than on CPUs (maybe a substantial client rewrite could improve that). Sometimes the networks are so small that the overhead of transferring data to and from the GPU dwarfs the speedup gained in the actual matrix calculations.


Thank You for clearing things up!

I think it mostly has to do with the 2xxx and 3xxx series having tensor cores; the assumption was probably based on that. It's what I was assuming, to some degree.

But it would definitely be much nicer to be able to use my extra GPUs that aren't 2xxx-series for this. I use my 2070 Super for gaming right now, though of course I'd dedicate some of its resources when it's not in use.


©2024 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)