| Field | Value |
| --- | --- |
| Name | ParityModified-1647043985-21789-3-0_0 |
| Workunit | 11639139 |
| Created | 13 Apr 2022, 18:23:13 UTC |
| Sent | 13 Apr 2022, 18:23:20 UTC |
| Report deadline | 21 Apr 2022, 18:23:20 UTC |
| Received | 23 Apr 2022, 13:28:44 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 13966 |
| Run time | 2 hours 39 min 6 sec |
| CPU time | 2 hours 34 min 17 sec |
| Validate state | Task was reported too late to validate |
| Credit | 0.00 |
| Device peak FLOPS | 10,956.98 GFLOPS |
| Application version | Machine Learning Dataset Generator (GPU) v9.75 (cuda10200) windows_x86_64 |
| Peak working set size | 2.08 GB |
| Peak swap size | 4.37 GB |
| Peak disk usage | 1.54 GB |
<core_client_version>7.16.20</core_client_version> <![CDATA[ <stderr_txt> 2022-04-23 21:07:35 main:574] : INFO : Epoch 1757 | loss: 0.031155 | val_loss: 0.0311687 | Time: 4731.28 ms [2022-04-23 21:07:40 main:574] : INFO : Epoch 1758 | loss: 0.0311518 | val_loss: 0.0311647 | Time: 4711.71 ms [2022-04-23 21:07:45 main:574] : INFO : Epoch 1759 | loss: 0.0311508 | val_loss: 0.0311638 | Time: 4693.97 ms [2022-04-23 21:07:49 main:574] : INFO : Epoch 1760 | loss: 0.0311493 | val_loss: 0.0311659 | Time: 4644.1 ms [2022-04-23 21:07:54 main:574] : INFO : Epoch 1761 | loss: 0.0311478 | val_loss: 0.0311661 | Time: 4642.22 ms [2022-04-23 21:07:58 main:574] : INFO : Epoch 1762 | loss: 0.0311476 | val_loss: 0.0311677 | Time: 4600.53 ms [2022-04-23 21:08:03 main:574] : INFO : Epoch 1763 | loss: 0.0311472 | val_loss: 0.0311652 | Time: 4647.13 ms [2022-04-23 21:08:08 main:574] : INFO : Epoch 1764 | loss: 0.0311474 | val_loss: 0.0311684 | Time: 4621.52 ms [2022-04-23 21:08:12 main:574] : INFO : Epoch 1765 | loss: 0.0311496 | val_loss: 0.0311664 | Time: 4693.89 ms [2022-04-23 21:08:17 main:574] : INFO : Epoch 1766 | loss: 0.0311466 | val_loss: 0.0311661 | Time: 4739.58 ms [2022-04-23 21:08:22 main:574] : INFO : Epoch 1767 | loss: 0.0311479 | val_loss: 0.0311667 | Time: 4772.44 ms [2022-04-23 21:08:27 main:574] : INFO : Epoch 1768 | loss: 0.0311529 | val_loss: 0.0311663 | Time: 4642.92 ms [2022-04-23 21:08:32 main:574] : INFO : Epoch 1769 | loss: 0.0311529 | val_loss: 0.0311658 | Time: 4851.45 ms [2022-04-23 21:08:36 main:574] : INFO : Epoch 1770 | loss: 0.0311534 | val_loss: 0.0311635 | Time: 4546.5 ms [2022-04-23 21:08:41 main:574] : INFO : Epoch 1771 | loss: 0.0311516 | val_loss: 0.0311621 | Time: 4628.72 ms [2022-04-23 21:08:46 main:574] : INFO : Epoch 1772 | loss: 0.0311521 | val_loss: 0.0311649 | Time: 4672.06 ms [2022-04-23 21:08:50 main:574] : INFO : Epoch 1773 | loss: 0.0311502 | val_loss: 0.0311645 | Time: 4625.36 ms [2022-04-23 21:08:55 main:574] : INFO : Epoch 1774 | loss: 0.0311492 | val_loss: 0.031164 | Time: 4648.32 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 21:16:26 main:435] : INFO : Set logging level to 1 [2022-04-23 21:16:26 main:441] : INFO : Running in BOINC Client mode [2022-04-23 21:16:26 main:444] : INFO : Resolving all filenames [2022-04-23 21:16:26 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 21:16:26 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 21:16:26 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 21:16:26 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 21:16:26 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 21:16:26 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 21:16:26 main:474] : INFO : Configuration: [2022-04-23 21:16:26 main:475] : INFO : Model type: GRU [2022-04-23 21:16:26 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 21:16:26 main:477] : INFO : Max Epochs: 2048 [2022-04-23 21:16:26 main:478] : INFO : Batch Size: 128 [2022-04-23 21:16:26 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 21:16:26 main:480] : INFO : Patience: 10 [2022-04-23 21:16:26 main:481] : INFO : Hidden Width: 12 [2022-04-23 21:16:26 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 21:16:26 main:483] : INFO : # Backend Layers: 4 [2022-04-23 21:16:26 main:484] : INFO : # Threads: 1 [2022-04-23 21:16:26 
main:486] : INFO : Preparing Dataset [2022-04-23 21:16:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 21:16:27 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 21:16:28 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 21:16:28 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 21:16:28 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 21:16:29 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 21:16:29 main:494] : INFO : Creating Model [2022-04-23 21:16:29 main:507] : INFO : Preparing config file [2022-04-23 21:16:29 main:511] : INFO : Found checkpoint, attempting to load... [2022-04-23 21:16:29 main:512] : INFO : Loading config [2022-04-23 21:16:29 main:514] : INFO : Loading state [2022-04-23 21:16:30 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 21:16:30 main:562] : INFO : Starting Training [2022-04-23 21:16:35 main:574] : INFO : Epoch 1768 | loss: 0.0311846 | val_loss: 0.0311716 | Time: 4994.97 ms [2022-04-23 21:16:39 main:574] : INFO : Epoch 1769 | loss: 0.0311534 | val_loss: 0.0311693 | Time: 4708.45 ms [2022-04-23 21:16:44 main:574] : INFO : Epoch 1770 | loss: 0.031149 | val_loss: 0.0311628 | Time: 4632.47 ms [2022-04-23 21:16:49 main:574] : INFO : Epoch 1771 | loss: 0.0311464 | val_loss: 0.0311653 | Time: 4817.04 ms [2022-04-23 21:16:54 main:574] : INFO : Epoch 1772 | loss: 0.0311449 | val_loss: 0.0311637 | Time: 4749.38 ms [2022-04-23 21:16:58 main:574] : INFO : Epoch 1773 | loss: 0.0311436 | val_loss: 0.0311623 | Time: 4698.79 ms [2022-04-23 21:17:03 main:574] : INFO : Epoch 1774 | loss: 0.0311432 | val_loss: 0.0311647 | Time: 4775 ms [2022-04-23 21:17:08 main:574] : INFO : Epoch 1775 | loss: 0.0311456 | val_loss: 0.031163 | Time: 4884.01 ms [2022-04-23 21:17:13 main:574] : INFO : Epoch 1776 | loss: 0.0311451 | val_loss: 0.0311638 | Time: 4680.4 ms [2022-04-23 21:17:18 main:574] : INFO : Epoch 1777 | loss: 0.0311458 | val_loss: 0.0311675 | Time: 4742.81 ms [2022-04-23 21:17:22 main:574] : INFO : Epoch 1778 | loss: 0.0311459 | val_loss: 0.031166 | Time: 4797.44 ms [2022-04-23 21:17:27 main:574] : INFO : Epoch 1779 | loss: 0.0311453 | val_loss: 0.0311634 | Time: 4802.74 ms [2022-04-23 21:17:32 main:574] : INFO : Epoch 1780 | loss: 0.0311466 | val_loss: 0.0311647 | Time: 4714.92 ms [2022-04-23 21:17:37 main:574] : INFO : Epoch 1781 | loss: 0.0311467 | val_loss: 0.0311616 | Time: 4630.88 ms [2022-04-23 21:17:41 main:574] : INFO : Epoch 1782 | loss: 0.0311453 | val_loss: 0.0311642 | Time: 4672.62 ms [2022-04-23 21:17:46 main:574] : INFO : Epoch 1783 | loss: 0.0311455 | val_loss: 0.031166 | Time: 4660.97 ms [2022-04-23 21:17:51 main:574] : INFO : Epoch 1784 | loss: 0.0311466 | val_loss: 0.0311681 | Time: 4689.11 ms [2022-04-23 21:17:55 main:574] : INFO : Epoch 1785 | loss: 0.0311471 | val_loss: 0.0311665 | Time: 4702.31 ms [2022-04-23 21:18:00 main:574] : INFO : Epoch 1786 | loss: 0.0311477 | val_loss: 0.0311704 | Time: 4647.91 ms [2022-04-23 21:18:05 main:574] : INFO : Epoch 1787 | loss: 0.0311449 | val_loss: 0.0311665 | Time: 4692.4 ms [2022-04-23 21:18:09 main:574] : INFO : Epoch 1788 | loss: 0.0311432 | val_loss: 0.0311642 | Time: 4865.67 ms [2022-04-23 21:18:14 main:574] : INFO : Epoch 1789 | loss: 0.0311435 | val_loss: 0.031165 | Time: 4663.76 ms [2022-04-23 21:18:19 main:574] : INFO : 
Epoch 1790 | loss: 0.0311443 | val_loss: 0.0311646 | Time: 4677.59 ms [2022-04-23 21:18:24 main:574] : INFO : Epoch 1791 | loss: 0.0311459 | val_loss: 0.0311677 | Time: 5071.26 ms [2022-04-23 21:18:29 main:574] : INFO : Epoch 1792 | loss: 0.0311451 | val_loss: 0.0311674 | Time: 4680.49 ms [2022-04-23 21:18:33 main:574] : INFO : Epoch 1793 | loss: 0.0311474 | val_loss: 0.0311684 | Time: 4625.52 ms [2022-04-23 21:18:38 main:574] : INFO : Epoch 1794 | loss: 0.0311464 | val_loss: 0.0311673 | Time: 4600.34 ms [2022-04-23 21:18:42 main:574] : INFO : Epoch 1795 | loss: 0.0311483 | val_loss: 0.0311722 | Time: 4582.46 ms [2022-04-23 21:18:47 main:574] : INFO : Epoch 1796 | loss: 0.031149 | val_loss: 0.0311738 | Time: 4617.18 ms [2022-04-23 21:18:52 main:574] : INFO : Epoch 1797 | loss: 0.031145 | val_loss: 0.0311666 | Time: 4576.34 ms [2022-04-23 21:18:56 main:574] : INFO : Epoch 1798 | loss: 0.0311467 | val_loss: 0.031168 | Time: 4548.79 ms [2022-04-23 21:19:01 main:574] : INFO : Epoch 1799 | loss: 0.0311443 | val_loss: 0.0311652 | Time: 4631.65 ms [2022-04-23 21:19:06 main:574] : INFO : Epoch 1800 | loss: 0.0311445 | val_loss: 0.0311666 | Time: 4693.1 ms [2022-04-23 21:19:10 main:574] : INFO : Epoch 1801 | loss: 0.0311431 | val_loss: 0.0311648 | Time: 4579.76 ms [2022-04-23 21:19:15 main:574] : INFO : Epoch 1802 | loss: 0.0311523 | val_loss: 0.031165 | Time: 4646.33 ms [2022-04-23 21:19:19 main:574] : INFO : Epoch 1803 | loss: 0.0311505 | val_loss: 0.0311661 | Time: 4683.09 ms [2022-04-23 21:19:24 main:574] : INFO : Epoch 1804 | loss: 0.0311488 | val_loss: 0.0311654 | Time: 4626.94 ms [2022-04-23 21:19:29 main:574] : INFO : Epoch 1805 | loss: 0.0311479 | val_loss: 0.0311694 | Time: 4672.39 ms [2022-04-23 21:19:33 main:574] : INFO : Epoch 1806 | loss: 0.0311485 | val_loss: 0.0311659 | Time: 4558.88 ms [2022-04-23 21:19:38 main:574] : INFO : Epoch 1807 | loss: 0.0311481 | val_loss: 0.0311635 | Time: 4580.58 ms [2022-04-23 21:19:42 main:574] : INFO : Epoch 1808 | loss: 0.0311486 | val_loss: 0.0311629 | Time: 4524.8 ms [2022-04-23 21:19:47 main:574] : INFO : Epoch 1809 | loss: 0.0311474 | val_loss: 0.0311641 | Time: 4721.37 ms [2022-04-23 21:19:52 main:574] : INFO : Epoch 1810 | loss: 0.0311448 | val_loss: 0.0311651 | Time: 4564.28 ms [2022-04-23 21:19:56 main:574] : INFO : Epoch 1811 | loss: 0.0311424 | val_loss: 0.0311684 | Time: 4600.26 ms [2022-04-23 21:20:01 main:574] : INFO : Epoch 1812 | loss: 0.0311479 | val_loss: 0.0311693 | Time: 4614.01 ms [2022-04-23 21:20:06 main:574] : INFO : Epoch 1813 | loss: 0.0311505 | val_loss: 0.0311676 | Time: 4649.97 ms [2022-04-23 21:20:10 main:574] : INFO : Epoch 1814 | loss: 0.0311503 | val_loss: 0.0311671 | Time: 4666.73 ms [2022-04-23 21:20:15 main:574] : INFO : Epoch 1815 | loss: 0.0311509 | val_loss: 0.0311687 | Time: 4592.81 ms [2022-04-23 21:20:20 main:574] : INFO : Epoch 1816 | loss: 0.0311464 | val_loss: 0.031168 | Time: 4624.9 ms [2022-04-23 21:20:24 main:574] : INFO : Epoch 1817 | loss: 0.0311467 | val_loss: 0.0311651 | Time: 4567.62 ms [2022-04-23 21:20:29 main:574] : INFO : Epoch 1818 | loss: 0.0311463 | val_loss: 0.0311684 | Time: 4641.96 ms [2022-04-23 21:20:33 main:574] : INFO : Epoch 1819 | loss: 0.0311503 | val_loss: 0.0311642 | Time: 4534.35 ms [2022-04-23 21:20:38 main:574] : INFO : Epoch 1820 | loss: 0.0311477 | val_loss: 0.0311657 | Time: 4727.43 ms [2022-04-23 21:20:43 main:574] : INFO : Epoch 1821 | loss: 0.0311433 | val_loss: 0.031161 | Time: 4530.13 ms [2022-04-23 21:20:47 main:574] : INFO : Epoch 1822 | loss: 0.0311418 | val_loss: 
0.0311632 | Time: 4672.42 ms [2022-04-23 21:20:52 main:574] : INFO : Epoch 1823 | loss: 0.0311412 | val_loss: 0.031164 | Time: 4482.68 ms [2022-04-23 21:20:56 main:574] : INFO : Epoch 1824 | loss: 0.0311411 | val_loss: 0.0311646 | Time: 4556.73 ms [2022-04-23 21:21:01 main:574] : INFO : Epoch 1825 | loss: 0.0311409 | val_loss: 0.0311662 | Time: 4591.38 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 21:30:26 main:435] : INFO : Set logging level to 1 [2022-04-23 21:30:26 main:441] : INFO : Running in BOINC Client mode [2022-04-23 21:30:26 main:444] : INFO : Resolving all filenames [2022-04-23 21:30:26 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 21:30:26 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 21:30:26 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 21:30:26 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 21:30:26 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 21:30:26 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 21:30:26 main:474] : INFO : Configuration: [2022-04-23 21:30:26 main:475] : INFO : Model type: GRU [2022-04-23 21:30:26 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 21:30:26 main:477] : INFO : Max Epochs: 2048 [2022-04-23 21:30:26 main:478] : INFO : Batch Size: 128 [2022-04-23 21:30:26 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 21:30:26 main:480] : INFO : Patience: 10 [2022-04-23 21:30:26 main:481] : INFO : Hidden Width: 12 [2022-04-23 21:30:26 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 21:30:26 main:483] : INFO : # Backend Layers: 4 [2022-04-23 21:30:26 main:484] : INFO : # Threads: 1 [2022-04-23 21:30:26 main:486] : INFO : Preparing Dataset [2022-04-23 21:30:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 21:30:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 21:30:28 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 21:30:28 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 21:30:28 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 21:30:28 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 21:30:28 main:494] : INFO : Creating Model [2022-04-23 21:30:28 main:507] : INFO : Preparing config file [2022-04-23 21:30:28 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-04-23 21:30:28 main:512] : INFO : Loading config [2022-04-23 21:30:28 main:514] : INFO : Loading state [2022-04-23 21:30:29 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 21:30:29 main:562] : INFO : Starting Training [2022-04-23 21:30:34 main:574] : INFO : Epoch 1825 | loss: 0.0311824 | val_loss: 0.0311685 | Time: 4948.98 ms [2022-04-23 21:30:39 main:574] : INFO : Epoch 1826 | loss: 0.0311466 | val_loss: 0.0311635 | Time: 4975.79 ms [2022-04-23 21:30:44 main:574] : INFO : Epoch 1827 | loss: 0.0311431 | val_loss: 0.0311644 | Time: 4765.23 ms [2022-04-23 21:30:49 main:574] : INFO : Epoch 1828 | loss: 0.0311447 | val_loss: 0.0311673 | Time: 4632.46 ms [2022-04-23 21:30:54 main:574] : INFO : Epoch 1829 | loss: 0.0311455 | val_loss: 0.0311648 | Time: 4788.22 ms [2022-04-23 21:30:58 main:574] : INFO : Epoch 1830 | loss: 0.0311438 | val_loss: 0.0311667 | Time: 4595.31 ms [2022-04-23 21:31:03 main:574] : INFO : Epoch 1831 | loss: 0.0311419 | val_loss: 0.031164 | Time: 4656.97 ms [2022-04-23 21:31:08 main:574] : INFO : Epoch 1832 | loss: 0.0311414 | val_loss: 0.0311642 | Time: 4647.34 ms [2022-04-23 21:31:12 main:574] : INFO : Epoch 1833 | loss: 0.0311427 | val_loss: 0.0311637 | Time: 4764.28 ms [2022-04-23 21:31:17 main:574] : INFO : Epoch 1834 | loss: 0.0311414 | val_loss: 0.031165 | Time: 4615.29 ms [2022-04-23 21:31:22 main:574] : INFO : Epoch 1835 | loss: 0.0311428 | val_loss: 0.0311634 | Time: 4623.53 ms [2022-04-23 21:31:26 main:574] : INFO : Epoch 1836 | loss: 0.0311437 | val_loss: 0.0311659 | Time: 4684.86 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 21:34:30 main:435] : INFO : Set logging level to 1 [2022-04-23 21:34:30 main:441] : INFO : Running in BOINC Client mode [2022-04-23 21:34:30 main:444] : INFO : Resolving all filenames [2022-04-23 21:34:30 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 21:34:30 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 21:34:30 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 21:34:30 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 21:34:30 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 21:34:30 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 21:34:30 main:474] : INFO : Configuration: [2022-04-23 21:34:30 main:475] : INFO : Model type: GRU [2022-04-23 21:34:30 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 21:34:30 main:477] : INFO : Max Epochs: 2048 [2022-04-23 21:34:30 main:478] : INFO : Batch Size: 128 [2022-04-23 21:34:30 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 21:34:30 main:480] : INFO : Patience: 10 [2022-04-23 21:34:30 main:481] : INFO : Hidden Width: 12 [2022-04-23 21:34:30 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 21:34:30 main:483] : INFO : # Backend Layers: 4 [2022-04-23 21:34:30 main:484] : INFO : # Threads: 1 [2022-04-23 21:34:30 main:486] : INFO : Preparing Dataset [2022-04-23 21:34:30 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 21:34:30 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 21:34:32 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. 
[2022-04-23 21:34:32 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 21:34:32 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 21:34:32 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 21:34:32 main:494] : INFO : Creating Model [2022-04-23 21:34:32 main:507] : INFO : Preparing config file [2022-04-23 21:34:32 main:511] : INFO : Found checkpoint, attempting to load... [2022-04-23 21:34:32 main:512] : INFO : Loading config [2022-04-23 21:34:32 main:514] : INFO : Loading state [2022-04-23 21:34:33 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 21:34:33 main:562] : INFO : Starting Training [2022-04-23 21:34:38 main:574] : INFO : Epoch 1825 | loss: 0.0311789 | val_loss: 0.031167 | Time: 4963.38 ms [2022-04-23 21:34:43 main:574] : INFO : Epoch 1826 | loss: 0.0311483 | val_loss: 0.0311654 | Time: 4740.18 ms [2022-04-23 21:34:48 main:574] : INFO : Epoch 1827 | loss: 0.0311429 | val_loss: 0.0311629 | Time: 4640.03 ms [2022-04-23 21:34:52 main:574] : INFO : Epoch 1828 | loss: 0.031142 | val_loss: 0.0311652 | Time: 4702.78 ms [2022-04-23 21:34:57 main:574] : INFO : Epoch 1829 | loss: 0.0311423 | val_loss: 0.0311692 | Time: 4610.94 ms [2022-04-23 21:35:02 main:574] : INFO : Epoch 1830 | loss: 0.0311449 | val_loss: 0.0311643 | Time: 4698.19 ms [2022-04-23 21:35:06 main:574] : INFO : Epoch 1831 | loss: 0.0311442 | val_loss: 0.031166 | Time: 4660.49 ms [2022-04-23 21:35:11 main:574] : INFO : Epoch 1832 | loss: 0.0311429 | val_loss: 0.0311652 | Time: 4681.81 ms [2022-04-23 21:35:16 main:574] : INFO : Epoch 1833 | loss: 0.0311421 | val_loss: 0.0311649 | Time: 4744.09 ms [2022-04-23 21:35:21 main:574] : INFO : Epoch 1834 | loss: 0.0311427 | val_loss: 0.0311634 | Time: 4875.66 ms [2022-04-23 21:35:25 main:574] : INFO : Epoch 1835 | loss: 0.0311437 | val_loss: 0.0311628 | Time: 4589.83 ms [2022-04-23 21:35:30 main:574] : INFO : Epoch 1836 | loss: 0.0311467 | val_loss: 0.0311712 | Time: 4623.76 ms [2022-04-23 21:35:35 main:574] : INFO : Epoch 1837 | loss: 0.0311495 | val_loss: 0.0311647 | Time: 4985.35 ms [2022-04-23 21:35:40 main:574] : INFO : Epoch 1838 | loss: 0.0311452 | val_loss: 0.0311674 | Time: 4665.02 ms [2022-04-23 21:35:44 main:574] : INFO : Epoch 1839 | loss: 0.0311432 | val_loss: 0.0311664 | Time: 4653.61 ms [2022-04-23 21:35:49 main:574] : INFO : Epoch 1840 | loss: 0.0311441 | val_loss: 0.0311663 | Time: 4579.45 ms [2022-04-23 21:35:54 main:574] : INFO : Epoch 1841 | loss: 0.0311421 | val_loss: 0.0311662 | Time: 4688.5 ms [2022-04-23 21:35:58 main:574] : INFO : Epoch 1842 | loss: 0.0311437 | val_loss: 0.031165 | Time: 4707.33 ms [2022-04-23 21:36:03 main:574] : INFO : Epoch 1843 | loss: 0.031143 | val_loss: 0.0311617 | Time: 4618.89 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 21:39:05 main:435] : INFO : Set logging level to 1 [2022-04-23 21:39:05 main:441] : INFO : Running in BOINC Client mode [2022-04-23 21:39:05 main:444] : INFO : Resolving all filenames [2022-04-23 21:39:05 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 21:39:05 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 21:39:05 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 21:39:05 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 21:39:05 main:452] : INFO : 
Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 21:39:05 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 21:39:05 main:474] : INFO : Configuration: [2022-04-23 21:39:05 main:475] : INFO : Model type: GRU [2022-04-23 21:39:05 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 21:39:05 main:477] : INFO : Max Epochs: 2048 [2022-04-23 21:39:05 main:478] : INFO : Batch Size: 128 [2022-04-23 21:39:05 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 21:39:05 main:480] : INFO : Patience: 10 [2022-04-23 21:39:05 main:481] : INFO : Hidden Width: 12 [2022-04-23 21:39:05 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 21:39:05 main:483] : INFO : # Backend Layers: 4 [2022-04-23 21:39:05 main:484] : INFO : # Threads: 1 [2022-04-23 21:39:05 main:486] : INFO : Preparing Dataset [2022-04-23 21:39:05 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 21:39:06 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 21:39:07 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 21:39:07 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 21:39:07 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 21:39:07 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 21:39:07 main:494] : INFO : Creating Model [2022-04-23 21:39:07 main:507] : INFO : Preparing config file [2022-04-23 21:39:07 main:511] : INFO : Found checkpoint, attempting to load... [2022-04-23 21:39:07 main:512] : INFO : Loading config [2022-04-23 21:39:07 main:514] : INFO : Loading state [2022-04-23 21:39:08 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 21:39:09 main:562] : INFO : Starting Training [2022-04-23 21:39:13 main:574] : INFO : Epoch 1839 | loss: 0.0311948 | val_loss: 0.0311789 | Time: 4906.86 ms [2022-04-23 21:39:18 main:574] : INFO : Epoch 1840 | loss: 0.0311507 | val_loss: 0.0311689 | Time: 4713.84 ms [2022-04-23 21:39:23 main:574] : INFO : Epoch 1841 | loss: 0.0311456 | val_loss: 0.0311643 | Time: 4749.37 ms [2022-04-23 21:39:28 main:574] : INFO : Epoch 1842 | loss: 0.0311439 | val_loss: 0.0311635 | Time: 4733.59 ms [2022-04-23 21:39:32 main:574] : INFO : Epoch 1843 | loss: 0.0311435 | val_loss: 0.0311657 | Time: 4739.13 ms [2022-04-23 21:39:37 main:574] : INFO : Epoch 1844 | loss: 0.0311489 | val_loss: 0.0311681 | Time: 4681.93 ms [2022-04-23 21:39:42 main:574] : INFO : Epoch 1845 | loss: 0.0311432 | val_loss: 0.031166 | Time: 4751.13 ms [2022-04-23 21:39:47 main:574] : INFO : Epoch 1846 | loss: 0.031144 | val_loss: 0.0311638 | Time: 4851.69 ms [2022-04-23 21:39:52 main:574] : INFO : Epoch 1847 | loss: 0.0311421 | val_loss: 0.0311642 | Time: 4868.79 ms [2022-04-23 21:39:56 main:574] : INFO : Epoch 1848 | loss: 0.0311468 | val_loss: 0.0311712 | Time: 4681.23 ms [2022-04-23 21:40:01 main:574] : INFO : Epoch 1849 | loss: 0.0311415 | val_loss: 0.0311656 | Time: 4601.59 ms [2022-04-23 21:40:06 main:574] : INFO : Epoch 1850 | loss: 0.0311421 | val_loss: 0.0311645 | Time: 4760.78 ms [2022-04-23 21:40:10 main:574] : INFO : Epoch 1851 | loss: 0.031143 | val_loss: 0.0311663 | Time: 4670.87 ms [2022-04-23 21:40:15 main:574] : INFO : Epoch 1852 | loss: 0.0311412 | val_loss: 0.0311658 | Time: 4934.8 ms [2022-04-23 21:40:20 main:574] : INFO : Epoch 1853 | loss: 0.0311431 | val_loss: 0.0311614 | Time: 4764.66 ms [2022-04-23 21:40:25 
main:574] : INFO : Epoch 1854 | loss: 0.0311451 | val_loss: 0.031164 | Time: 4634.87 ms [2022-04-23 21:40:29 main:574] : INFO : Epoch 1855 | loss: 0.0311431 | val_loss: 0.0311623 | Time: 4761.1 ms [2022-04-23 21:40:35 main:574] : INFO : Epoch 1856 | loss: 0.0311414 | val_loss: 0.0311618 | Time: 5182.96 ms [2022-04-23 21:40:39 main:574] : INFO : Epoch 1857 | loss: 0.0311409 | val_loss: 0.031164 | Time: 4766.12 ms [2022-04-23 21:40:44 main:574] : INFO : Epoch 1858 | loss: 0.0311433 | val_loss: 0.0311642 | Time: 4792.15 ms [2022-04-23 21:40:49 main:574] : INFO : Epoch 1859 | loss: 0.0311452 | val_loss: 0.0311666 | Time: 4970.25 ms [2022-04-23 21:40:54 main:574] : INFO : Epoch 1860 | loss: 0.0311428 | val_loss: 0.03116 | Time: 4895.16 ms [2022-04-23 21:40:59 main:574] : INFO : Epoch 1861 | loss: 0.0311444 | val_loss: 0.031163 | Time: 4648.43 ms [2022-04-23 21:41:04 main:574] : INFO : Epoch 1862 | loss: 0.0311454 | val_loss: 0.0311621 | Time: 5042.16 ms [2022-04-23 21:41:08 main:574] : INFO : Epoch 1863 | loss: 0.0311418 | val_loss: 0.0311624 | Time: 4667.41 ms [2022-04-23 21:41:13 main:574] : INFO : Epoch 1864 | loss: 0.0311406 | val_loss: 0.0311666 | Time: 4593.06 ms [2022-04-23 21:41:18 main:574] : INFO : Epoch 1865 | loss: 0.0311424 | val_loss: 0.0311619 | Time: 4576.96 ms [2022-04-23 21:41:22 main:574] : INFO : Epoch 1866 | loss: 0.0311404 | val_loss: 0.0311657 | Time: 4570.21 ms [2022-04-23 21:41:27 main:574] : INFO : Epoch 1867 | loss: 0.0311405 | val_loss: 0.0311622 | Time: 5032.14 ms [2022-04-23 21:41:32 main:574] : INFO : Epoch 1868 | loss: 0.0311384 | val_loss: 0.0311638 | Time: 4623.12 ms [2022-04-23 21:41:37 main:574] : INFO : Epoch 1869 | loss: 0.0311381 | val_loss: 0.0311615 | Time: 4612.53 ms [2022-04-23 21:41:41 main:574] : INFO : Epoch 1870 | loss: 0.0311391 | val_loss: 0.0311631 | Time: 4793.31 ms [2022-04-23 21:41:46 main:574] : INFO : Epoch 1871 | loss: 0.0311402 | val_loss: 0.031167 | Time: 4561.87 ms [2022-04-23 21:41:51 main:574] : INFO : Epoch 1872 | loss: 0.0311424 | val_loss: 0.0311662 | Time: 4616.84 ms [2022-04-23 21:41:55 main:574] : INFO : Epoch 1873 | loss: 0.0311424 | val_loss: 0.0311675 | Time: 4651.78 ms [2022-04-23 21:42:00 main:574] : INFO : Epoch 1874 | loss: 0.0311427 | val_loss: 0.031169 | Time: 4643.97 ms [2022-04-23 21:42:05 main:574] : INFO : Epoch 1875 | loss: 0.0311393 | val_loss: 0.031165 | Time: 4615.35 ms [2022-04-23 21:42:09 main:574] : INFO : Epoch 1876 | loss: 0.0311405 | val_loss: 0.0311638 | Time: 4717.58 ms [2022-04-23 21:42:14 main:574] : INFO : Epoch 1877 | loss: 0.0311419 | val_loss: 0.0311684 | Time: 4774.18 ms [2022-04-23 21:42:19 main:574] : INFO : Epoch 1878 | loss: 0.0311421 | val_loss: 0.031169 | Time: 4797.78 ms [2022-04-23 21:42:23 main:574] : INFO : Epoch 1879 | loss: 0.0311426 | val_loss: 0.0311697 | Time: 4569.92 ms [2022-04-23 21:42:28 main:574] : INFO : Epoch 1880 | loss: 0.0311391 | val_loss: 0.0311661 | Time: 4506.36 ms [2022-04-23 21:42:33 main:574] : INFO : Epoch 1881 | loss: 0.031138 | val_loss: 0.031166 | Time: 4612.71 ms [2022-04-23 21:42:37 main:574] : INFO : Epoch 1882 | loss: 0.0311368 | val_loss: 0.0311676 | Time: 4544.49 ms [2022-04-23 21:42:42 main:574] : INFO : Epoch 1883 | loss: 0.0311374 | val_loss: 0.0311671 | Time: 4544.28 ms [2022-04-23 21:42:46 main:574] : INFO : Epoch 1884 | loss: 0.0311352 | val_loss: 0.031169 | Time: 4638.64 ms [2022-04-23 21:42:51 main:574] : INFO : Epoch 1885 | loss: 0.0311373 | val_loss: 0.0311683 | Time: 4513.44 ms [2022-04-23 21:42:56 main:574] : INFO : Epoch 1886 | loss: 0.0311391 
| val_loss: 0.0311711 | Time: 4732.11 ms [2022-04-23 21:43:00 main:574] : INFO : Epoch 1887 | loss: 0.0311389 | val_loss: 0.0311684 | Time: 4624.08 ms [2022-04-23 21:43:05 main:574] : INFO : Epoch 1888 | loss: 0.0311377 | val_loss: 0.0311656 | Time: 4587.37 ms [2022-04-23 21:43:09 main:574] : INFO : Epoch 1889 | loss: 0.0311378 | val_loss: 0.0311677 | Time: 4563.42 ms [2022-04-23 21:43:14 main:574] : INFO : Epoch 1890 | loss: 0.0311424 | val_loss: 0.0311759 | Time: 4493.26 ms [2022-04-23 21:43:19 main:574] : INFO : Epoch 1891 | loss: 0.0311416 | val_loss: 0.0311648 | Time: 4853.19 ms [2022-04-23 21:43:23 main:574] : INFO : Epoch 1892 | loss: 0.0311422 | val_loss: 0.031167 | Time: 4589.16 ms [2022-04-23 21:43:28 main:574] : INFO : Epoch 1893 | loss: 0.0311528 | val_loss: 0.0311678 | Time: 4628.02 ms [2022-04-23 21:43:32 main:574] : INFO : Epoch 1894 | loss: 0.0311516 | val_loss: 0.0311693 | Time: 4533.81 ms [2022-04-23 21:43:37 main:574] : INFO : Epoch 1895 | loss: 0.0311505 | val_loss: 0.031175 | Time: 4605.69 ms [2022-04-23 21:43:42 main:574] : INFO : Epoch 1896 | loss: 0.03115 | val_loss: 0.0311724 | Time: 4563.06 ms [2022-04-23 21:43:46 main:574] : INFO : Epoch 1897 | loss: 0.0311512 | val_loss: 0.0311718 | Time: 4541.97 ms [2022-04-23 21:43:51 main:574] : INFO : Epoch 1898 | loss: 0.0311565 | val_loss: 0.0311808 | Time: 4484.79 ms [2022-04-23 21:43:56 main:574] : INFO : Epoch 1899 | loss: 0.0311597 | val_loss: 0.0311733 | Time: 4903.16 ms [2022-04-23 21:44:00 main:574] : INFO : Epoch 1900 | loss: 0.0311556 | val_loss: 0.0311724 | Time: 4657.59 ms [2022-04-23 21:44:05 main:574] : INFO : Epoch 1901 | loss: 0.0311573 | val_loss: 0.0311758 | Time: 4526.68 ms [2022-04-23 21:44:09 main:574] : INFO : Epoch 1902 | loss: 0.0311612 | val_loss: 0.0311712 | Time: 4565.72 ms [2022-04-23 21:44:14 main:574] : INFO : Epoch 1903 | loss: 0.0311613 | val_loss: 0.0311734 | Time: 4516.89 ms [2022-04-23 21:44:19 main:574] : INFO : Epoch 1904 | loss: 0.0311608 | val_loss: 0.0311727 | Time: 4600.16 ms [2022-04-23 21:44:23 main:574] : INFO : Epoch 1905 | loss: 0.0311592 | val_loss: 0.031169 | Time: 4831.01 ms [2022-04-23 21:44:28 main:574] : INFO : Epoch 1906 | loss: 0.0311606 | val_loss: 0.0311676 | Time: 4756.69 ms [2022-04-23 21:44:33 main:574] : INFO : Epoch 1907 | loss: 0.0311571 | val_loss: 0.0311682 | Time: 4616.89 ms [2022-04-23 21:44:37 main:574] : INFO : Epoch 1908 | loss: 0.0311558 | val_loss: 0.0311663 | Time: 4588.09 ms [2022-04-23 21:44:42 main:574] : INFO : Epoch 1909 | loss: 0.0311548 | val_loss: 0.0311674 | Time: 4545.65 ms [2022-04-23 21:44:46 main:574] : INFO : Epoch 1910 | loss: 0.0311592 | val_loss: 0.031167 | Time: 4508.69 ms [2022-04-23 21:44:51 main:574] : INFO : Epoch 1911 | loss: 0.0311566 | val_loss: 0.0311654 | Time: 4815.68 ms [2022-04-23 21:44:56 main:574] : INFO : Epoch 1912 | loss: 0.0311569 | val_loss: 0.0311652 | Time: 4655.73 ms [2022-04-23 21:45:00 main:574] : INFO : Epoch 1913 | loss: 0.0311548 | val_loss: 0.0311667 | Time: 4565.19 ms [2022-04-23 21:45:05 main:574] : INFO : Epoch 1914 | loss: 0.0311526 | val_loss: 0.0311671 | Time: 4574.78 ms [2022-04-23 21:45:10 main:574] : INFO : Epoch 1915 | loss: 0.0311523 | val_loss: 0.0311658 | Time: 4606.43 ms [2022-04-23 21:45:14 main:574] : INFO : Epoch 1916 | loss: 0.0311525 | val_loss: 0.0311684 | Time: 4625.94 ms [2022-04-23 21:45:19 main:574] : INFO : Epoch 1917 | loss: 0.0311508 | val_loss: 0.0311672 | Time: 4528.5 ms [2022-04-23 21:45:23 main:574] : INFO : Epoch 1918 | loss: 0.0311509 | val_loss: 0.0311648 | Time: 4583.14 ms 
[2022-04-23 21:45:28 main:574] : INFO : Epoch 1919 | loss: 0.0311508 | val_loss: 0.0311658 | Time: 4509.43 ms [2022-04-23 21:45:32 main:574] : INFO : Epoch 1920 | loss: 0.0311522 | val_loss: 0.0311664 | Time: 4523.43 ms [2022-04-23 21:45:37 main:574] : INFO : Epoch 1921 | loss: 0.0311496 | val_loss: 0.0311641 | Time: 4671.48 ms [2022-04-23 21:45:42 main:574] : INFO : Epoch 1922 | loss: 0.0311488 | val_loss: 0.0311682 | Time: 4725.13 ms [2022-04-23 21:45:47 main:574] : INFO : Epoch 1923 | loss: 0.0311489 | val_loss: 0.0311649 | Time: 4698.88 ms [2022-04-23 21:45:51 main:574] : INFO : Epoch 1924 | loss: 0.0311498 | val_loss: 0.0311655 | Time: 4484.01 ms [2022-04-23 21:45:56 main:574] : INFO : Epoch 1925 | loss: 0.0311477 | val_loss: 0.0311654 | Time: 4630.72 ms [2022-04-23 21:46:00 main:574] : INFO : Epoch 1926 | loss: 0.031147 | val_loss: 0.0311672 | Time: 4640.07 ms [2022-04-23 21:46:05 main:574] : INFO : Epoch 1927 | loss: 0.0311471 | val_loss: 0.0311659 | Time: 4481.18 ms [2022-04-23 21:46:09 main:574] : INFO : Epoch 1928 | loss: 0.0311486 | val_loss: 0.0311693 | Time: 4584.66 ms [2022-04-23 21:46:14 main:574] : INFO : Epoch 1929 | loss: 0.0311472 | val_loss: 0.0311676 | Time: 4445.53 ms [2022-04-23 21:46:19 main:574] : INFO : Epoch 1930 | loss: 0.0311468 | val_loss: 0.0311652 | Time: 4645.33 ms [2022-04-23 21:46:23 main:574] : INFO : Epoch 1931 | loss: 0.0311467 | val_loss: 0.0311678 | Time: 4541.8 ms [2022-04-23 21:46:28 main:574] : INFO : Epoch 1932 | loss: 0.0311471 | val_loss: 0.0311638 | Time: 4645.98 ms [2022-04-23 21:46:32 main:574] : INFO : Epoch 1933 | loss: 0.0311457 | val_loss: 0.0311677 | Time: 4675.22 ms [2022-04-23 21:46:37 main:574] : INFO : Epoch 1934 | loss: 0.0311453 | val_loss: 0.0311677 | Time: 4555.28 ms [2022-04-23 21:46:42 main:574] : INFO : Epoch 1935 | loss: 0.0311453 | val_loss: 0.0311659 | Time: 4732.38 ms [2022-04-23 21:46:46 main:574] : INFO : Epoch 1936 | loss: 0.0311456 | val_loss: 0.0311649 | Time: 4602.61 ms [2022-04-23 21:46:51 main:574] : INFO : Epoch 1937 | loss: 0.0311456 | val_loss: 0.0311665 | Time: 4620.04 ms [2022-04-23 21:46:55 main:574] : INFO : Epoch 1938 | loss: 0.0311439 | val_loss: 0.0311721 | Time: 4532.62 ms [2022-04-23 21:47:00 main:574] : INFO : Epoch 1939 | loss: 0.0311459 | val_loss: 0.0311679 | Time: 4540.36 ms [2022-04-23 21:47:05 main:574] : INFO : Epoch 1940 | loss: 0.0311442 | val_loss: 0.0311665 | Time: 4559.57 ms [2022-04-23 21:47:09 main:574] : INFO : Epoch 1941 | loss: 0.0311424 | val_loss: 0.0311668 | Time: 4553.24 ms [2022-04-23 21:47:14 main:574] : INFO : Epoch 1942 | loss: 0.0311422 | val_loss: 0.0311659 | Time: 4591.08 ms [2022-04-23 21:47:18 main:574] : INFO : Epoch 1943 | loss: 0.031143 | val_loss: 0.0311729 | Time: 4579.98 ms [2022-04-23 21:47:23 main:574] : INFO : Epoch 1944 | loss: 0.0311449 | val_loss: 0.0311664 | Time: 4612.99 ms [2022-04-23 21:47:28 main:574] : INFO : Epoch 1945 | loss: 0.0311462 | val_loss: 0.0311709 | Time: 4571.8 ms [2022-04-23 21:47:32 main:574] : INFO : Epoch 1946 | loss: 0.0311481 | val_loss: 0.0311795 | Time: 4572.19 ms [2022-04-23 21:47:37 main:574] : INFO : Epoch 1947 | loss: 0.0311493 | val_loss: 0.0311731 | Time: 4502.36 ms [2022-04-23 21:47:41 main:574] : INFO : Epoch 1948 | loss: 0.0311445 | val_loss: 0.0311704 | Time: 4682.67 ms [2022-04-23 21:47:46 main:574] : INFO : Epoch 1949 | loss: 0.031146 | val_loss: 0.0311677 | Time: 4521.11 ms [2022-04-23 21:47:51 main:574] : INFO : Epoch 1950 | loss: 0.0311437 | val_loss: 0.0311715 | Time: 4648.27 ms [2022-04-23 21:47:55 main:574] : INFO : 
Epoch 1951 | loss: 0.0311448 | val_loss: 0.031169 | Time: 4630.59 ms [2022-04-23 21:48:00 main:574] : INFO : Epoch 1952 | loss: 0.0311436 | val_loss: 0.0311656 | Time: 4597.68 ms [2022-04-23 21:48:04 main:574] : INFO : Epoch 1953 | loss: 0.0311453 | val_loss: 0.0311676 | Time: 4614.84 ms [2022-04-23 21:48:09 main:574] : INFO : Epoch 1954 | loss: 0.0311491 | val_loss: 0.0311642 | Time: 4539.06 ms [2022-04-23 21:48:14 main:574] : INFO : Epoch 1955 | loss: 0.0311504 | val_loss: 0.0311636 | Time: 4628.57 ms [2022-04-23 21:48:18 main:574] : INFO : Epoch 1956 | loss: 0.0311492 | val_loss: 0.0311706 | Time: 4521.51 ms [2022-04-23 21:48:23 main:574] : INFO : Epoch 1957 | loss: 0.031149 | val_loss: 0.0311696 | Time: 4516.02 ms [2022-04-23 21:48:27 main:574] : INFO : Epoch 1958 | loss: 0.0311487 | val_loss: 0.0311632 | Time: 4607.64 ms [2022-04-23 21:48:32 main:574] : INFO : Epoch 1959 | loss: 0.0311464 | val_loss: 0.0311653 | Time: 4632.08 ms [2022-04-23 21:48:36 main:574] : INFO : Epoch 1960 | loss: 0.0311483 | val_loss: 0.0311659 | Time: 4630.48 ms [2022-04-23 21:48:41 main:574] : INFO : Epoch 1961 | loss: 0.031146 | val_loss: 0.0311679 | Time: 4774.91 ms [2022-04-23 21:48:46 main:574] : INFO : Epoch 1962 | loss: 0.0311455 | val_loss: 0.031166 | Time: 4587.98 ms [2022-04-23 21:48:51 main:574] : INFO : Epoch 1963 | loss: 0.0311438 | val_loss: 0.0311662 | Time: 4611.51 ms [2022-04-23 21:48:55 main:574] : INFO : Epoch 1964 | loss: 0.0311433 | val_loss: 0.0311659 | Time: 4741.08 ms [2022-04-23 21:49:00 main:574] : INFO : Epoch 1965 | loss: 0.0311458 | val_loss: 0.0311646 | Time: 4605.92 ms [2022-04-23 21:49:04 main:574] : INFO : Epoch 1966 | loss: 0.0311514 | val_loss: 0.0311672 | Time: 4619.92 ms [2022-04-23 21:49:09 main:574] : INFO : Epoch 1967 | loss: 0.0311485 | val_loss: 0.0311613 | Time: 4634.34 ms [2022-04-23 21:49:14 main:574] : INFO : Epoch 1968 | loss: 0.0311475 | val_loss: 0.0311658 | Time: 4617.45 ms [2022-04-23 21:49:18 main:574] : INFO : Epoch 1969 | loss: 0.0311488 | val_loss: 0.0311644 | Time: 4592.7 ms [2022-04-23 21:49:23 main:574] : INFO : Epoch 1970 | loss: 0.0311474 | val_loss: 0.0311623 | Time: 4767.52 ms [2022-04-23 21:49:28 main:574] : INFO : Epoch 1971 | loss: 0.0311478 | val_loss: 0.0311629 | Time: 4563.35 ms [2022-04-23 21:49:32 main:574] : INFO : Epoch 1972 | loss: 0.0311483 | val_loss: 0.0311595 | Time: 4578.13 ms [2022-04-23 21:49:37 main:574] : INFO : Epoch 1973 | loss: 0.031148 | val_loss: 0.0311656 | Time: 4557.49 ms [2022-04-23 21:49:41 main:574] : INFO : Epoch 1974 | loss: 0.0311488 | val_loss: 0.0311619 | Time: 4593.06 ms [2022-04-23 21:49:46 main:574] : INFO : Epoch 1975 | loss: 0.0311466 | val_loss: 0.0311601 | Time: 4586.91 ms [2022-04-23 21:49:51 main:574] : INFO : Epoch 1976 | loss: 0.0311467 | val_loss: 0.0311645 | Time: 4539.88 ms [2022-04-23 21:49:55 main:574] : INFO : Epoch 1977 | loss: 0.0311441 | val_loss: 0.0311612 | Time: 4580.72 ms [2022-04-23 21:50:00 main:574] : INFO : Epoch 1978 | loss: 0.0311453 | val_loss: 0.031162 | Time: 4613.93 ms [2022-04-23 21:50:04 main:574] : INFO : Epoch 1979 | loss: 0.0311447 | val_loss: 0.031166 | Time: 4563.74 ms [2022-04-23 21:50:09 main:574] : INFO : Epoch 1980 | loss: 0.0311453 | val_loss: 0.0311595 | Time: 4595.85 ms [2022-04-23 21:50:14 main:574] : INFO : Epoch 1981 | loss: 0.0311507 | val_loss: 0.031173 | Time: 4612.29 ms [2022-04-23 21:50:18 main:574] : INFO : Epoch 1982 | loss: 0.0311617 | val_loss: 0.0311661 | Time: 4556.32 ms [2022-04-23 21:50:23 main:574] : INFO : Epoch 1983 | loss: 0.0311607 | val_loss: 
0.0311666 | Time: 4614.17 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 23:02:42 main:435] : INFO : Set logging level to 1 [2022-04-23 23:02:42 main:441] : INFO : Running in BOINC Client mode [2022-04-23 23:02:42 main:444] : INFO : Resolving all filenames [2022-04-23 23:02:42 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 23:02:42 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 23:02:42 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 23:02:42 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 23:02:42 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 23:02:42 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 23:02:42 main:474] : INFO : Configuration: [2022-04-23 23:02:42 main:475] : INFO : Model type: GRU [2022-04-23 23:02:42 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 23:02:42 main:477] : INFO : Max Epochs: 2048 [2022-04-23 23:02:42 main:478] : INFO : Batch Size: 128 [2022-04-23 23:02:42 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 23:02:42 main:480] : INFO : Patience: 10 [2022-04-23 23:02:42 main:481] : INFO : Hidden Width: 12 [2022-04-23 23:02:42 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 23:02:42 main:483] : INFO : # Backend Layers: 4 [2022-04-23 23:02:42 main:484] : INFO : # Threads: 1 [2022-04-23 23:02:42 main:486] : INFO : Preparing Dataset [2022-04-23 23:02:42 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 23:02:42 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 23:02:44 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 23:02:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 23:02:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 23:02:44 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 23:02:44 main:494] : INFO : Creating Model [2022-04-23 23:02:44 main:507] : INFO : Preparing config file [2022-04-23 23:02:44 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-04-23 23:02:44 main:512] : INFO : Loading config [2022-04-23 23:02:44 main:514] : INFO : Loading state [2022-04-23 23:02:45 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 23:02:45 main:562] : INFO : Starting Training [2022-04-23 23:02:50 main:574] : INFO : Epoch 1982 | loss: 0.0311808 | val_loss: 0.0311652 | Time: 5525.31 ms [2022-04-23 23:02:56 main:574] : INFO : Epoch 1983 | loss: 0.0311533 | val_loss: 0.0311686 | Time: 5357.69 ms [2022-04-23 23:03:01 main:574] : INFO : Epoch 1984 | loss: 0.0311505 | val_loss: 0.0311621 | Time: 4811.53 ms [2022-04-23 23:03:06 main:574] : INFO : Epoch 1985 | loss: 0.0311503 | val_loss: 0.0311675 | Time: 5486.86 ms [2022-04-23 23:03:11 main:574] : INFO : Epoch 1986 | loss: 0.031149 | val_loss: 0.0311636 | Time: 5269.01 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 23:10:37 main:435] : INFO : Set logging level to 1 [2022-04-23 23:10:37 main:441] : INFO : Running in BOINC Client mode [2022-04-23 23:10:37 main:444] : INFO : Resolving all filenames [2022-04-23 23:10:37 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 23:10:37 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 23:10:37 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 23:10:37 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 23:10:37 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 23:10:37 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 23:10:37 main:474] : INFO : Configuration: [2022-04-23 23:10:37 main:475] : INFO : Model type: GRU [2022-04-23 23:10:37 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 23:10:37 main:477] : INFO : Max Epochs: 2048 [2022-04-23 23:10:37 main:478] : INFO : Batch Size: 128 [2022-04-23 23:10:37 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 23:10:37 main:480] : INFO : Patience: 10 [2022-04-23 23:10:37 main:481] : INFO : Hidden Width: 12 [2022-04-23 23:10:37 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 23:10:37 main:483] : INFO : # Backend Layers: 4 [2022-04-23 23:10:37 main:484] : INFO : # Threads: 1 [2022-04-23 23:10:37 main:486] : INFO : Preparing Dataset [2022-04-23 23:10:37 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 23:10:37 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 23:10:39 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 23:10:39 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 23:10:39 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 23:10:39 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 23:10:39 main:494] : INFO : Creating Model [2022-04-23 23:10:39 main:507] : INFO : Preparing config file [2022-04-23 23:10:39 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-04-23 23:10:39 main:512] : INFO : Loading config [2022-04-23 23:10:39 main:514] : INFO : Loading state [2022-04-23 23:10:40 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 23:10:40 main:562] : INFO : Starting Training [2022-04-23 23:10:45 main:574] : INFO : Epoch 1982 | loss: 0.031179 | val_loss: 0.0311652 | Time: 5066.1 ms [2022-04-23 23:10:50 main:574] : INFO : Epoch 1983 | loss: 0.031153 | val_loss: 0.0311667 | Time: 4549.16 ms [2022-04-23 23:10:54 main:574] : INFO : Epoch 1984 | loss: 0.0311506 | val_loss: 0.0311695 | Time: 4532.71 ms [2022-04-23 23:10:59 main:574] : INFO : Epoch 1985 | loss: 0.0311491 | val_loss: 0.0311645 | Time: 4510.26 ms [2022-04-23 23:11:03 main:574] : INFO : Epoch 1986 | loss: 0.0311483 | val_loss: 0.0311624 | Time: 4525.64 ms [2022-04-23 23:11:08 main:574] : INFO : Epoch 1987 | loss: 0.0311482 | val_loss: 0.0311642 | Time: 4536.43 ms [2022-04-23 23:11:13 main:574] : INFO : Epoch 1988 | loss: 0.0311479 | val_loss: 0.0311605 | Time: 4518.52 ms [2022-04-23 23:11:17 main:574] : INFO : Epoch 1989 | loss: 0.0311457 | val_loss: 0.0311629 | Time: 4560.47 ms [2022-04-23 23:11:22 main:574] : INFO : Epoch 1990 | loss: 0.0311438 | val_loss: 0.0311588 | Time: 4478.15 ms [2022-04-23 23:11:26 main:574] : INFO : Epoch 1991 | loss: 0.0311431 | val_loss: 0.0311623 | Time: 4488.81 ms [2022-04-23 23:11:31 main:574] : INFO : Epoch 1992 | loss: 0.0311439 | val_loss: 0.0311625 | Time: 4564.27 ms [2022-04-23 23:11:35 main:574] : INFO : Epoch 1993 | loss: 0.0311469 | val_loss: 0.0311631 | Time: 4566.01 ms [2022-04-23 23:11:40 main:574] : INFO : Epoch 1994 | loss: 0.0311451 | val_loss: 0.031162 | Time: 4687.4 ms [2022-04-23 23:11:45 main:574] : INFO : Epoch 1995 | loss: 0.0311428 | val_loss: 0.0311663 | Time: 4715.63 ms [2022-04-23 23:11:49 main:574] : INFO : Epoch 1996 | loss: 0.0311418 | val_loss: 0.031164 | Time: 4530.7 ms [2022-04-23 23:11:54 main:574] : INFO : Epoch 1997 | loss: 0.0311403 | val_loss: 0.0311623 | Time: 4557.22 ms [2022-04-23 23:11:58 main:574] : INFO : Epoch 1998 | loss: 0.0311413 | val_loss: 0.0311632 | Time: 4583.32 ms [2022-04-23 23:12:04 main:574] : INFO : Epoch 1999 | loss: 0.0311405 | val_loss: 0.0311619 | Time: 5597.36 ms [2022-04-23 23:12:09 main:574] : INFO : Epoch 2000 | loss: 0.0311396 | val_loss: 0.0311638 | Time: 4913.53 ms [2022-04-23 23:12:14 main:574] : INFO : Epoch 2001 | loss: 0.0311417 | val_loss: 0.0311646 | Time: 4627.77 ms [2022-04-23 23:12:19 main:574] : INFO : Epoch 2002 | loss: 0.0311422 | val_loss: 0.0311748 | Time: 5513.19 ms [2022-04-23 23:12:24 main:574] : INFO : Epoch 2003 | loss: 0.0311431 | val_loss: 0.0311684 | Time: 4560.55 ms [2022-04-23 23:12:28 main:574] : INFO : Epoch 2004 | loss: 0.0311432 | val_loss: 0.0311676 | Time: 4587.33 ms [2022-04-23 23:12:33 main:574] : INFO : Epoch 2005 | loss: 0.0311419 | val_loss: 0.0311672 | Time: 4547.23 ms [2022-04-23 23:12:37 main:574] : INFO : Epoch 2006 | loss: 0.0311424 | val_loss: 0.0311679 | Time: 4573.3 ms [2022-04-23 23:12:42 main:574] : INFO : Epoch 2007 | loss: 0.0311421 | val_loss: 0.0311674 | Time: 4563.7 ms [2022-04-23 23:12:46 main:574] : INFO : Epoch 2008 | loss: 0.0311424 | val_loss: 0.0311693 | Time: 4375.36 ms [2022-04-23 23:12:51 main:574] : INFO : Epoch 2009 | loss: 0.0311422 | val_loss: 0.0311651 | Time: 4458.67 ms [2022-04-23 23:12:55 main:574] : INFO : Epoch 2010 | loss: 0.0311405 | val_loss: 0.0311663 | Time: 4343.91 ms [2022-04-23 23:13:00 main:574] : INFO : Epoch 2011 | loss: 0.0311431 | val_loss: 0.0311679 | Time: 4426.15 ms [2022-04-23 23:13:04 
main:574] : INFO : Epoch 2012 | loss: 0.0311461 | val_loss: 0.0311687 | Time: 4426.7 ms [2022-04-23 23:13:08 main:574] : INFO : Epoch 2013 | loss: 0.0311437 | val_loss: 0.0311682 | Time: 4457.55 ms [2022-04-23 23:13:13 main:574] : INFO : Epoch 2014 | loss: 0.0311444 | val_loss: 0.0311691 | Time: 4429.17 ms [2022-04-23 23:13:17 main:574] : INFO : Epoch 2015 | loss: 0.0311466 | val_loss: 0.0311697 | Time: 4442.79 ms [2022-04-23 23:13:22 main:574] : INFO : Epoch 2016 | loss: 0.0311462 | val_loss: 0.0311668 | Time: 4376.8 ms [2022-04-23 23:13:26 main:574] : INFO : Epoch 2017 | loss: 0.0311439 | val_loss: 0.0311634 | Time: 4418.3 ms [2022-04-23 23:13:30 main:574] : INFO : Epoch 2018 | loss: 0.0311442 | val_loss: 0.0311606 | Time: 4381.89 ms [2022-04-23 23:13:35 main:574] : INFO : Epoch 2019 | loss: 0.0311431 | val_loss: 0.0311607 | Time: 4399.25 ms [2022-04-23 23:13:39 main:574] : INFO : Epoch 2020 | loss: 0.0311418 | val_loss: 0.0311666 | Time: 4427.97 ms [2022-04-23 23:13:44 main:574] : INFO : Epoch 2021 | loss: 0.0311433 | val_loss: 0.0311673 | Time: 4554.39 ms [2022-04-23 23:13:48 main:574] : INFO : Epoch 2022 | loss: 0.0311442 | val_loss: 0.0311673 | Time: 4485.09 ms [2022-04-23 23:13:53 main:574] : INFO : Epoch 2023 | loss: 0.0311453 | val_loss: 0.0311679 | Time: 4404.39 ms [2022-04-23 23:13:57 main:574] : INFO : Epoch 2024 | loss: 0.0311443 | val_loss: 0.031166 | Time: 4508.51 ms [2022-04-23 23:14:02 main:574] : INFO : Epoch 2025 | loss: 0.0311445 | val_loss: 0.0311632 | Time: 4462.56 ms [2022-04-23 23:14:06 main:574] : INFO : Epoch 2026 | loss: 0.0311469 | val_loss: 0.0311642 | Time: 4459.47 ms [2022-04-23 23:14:11 main:574] : INFO : Epoch 2027 | loss: 0.0311438 | val_loss: 0.0311641 | Time: 4384.21 ms [2022-04-23 23:14:15 main:574] : INFO : Epoch 2028 | loss: 0.0311431 | val_loss: 0.0311625 | Time: 4457.13 ms [2022-04-23 23:14:19 main:574] : INFO : Epoch 2029 | loss: 0.0311435 | val_loss: 0.0311643 | Time: 4373.4 ms [2022-04-23 23:14:24 main:574] : INFO : Epoch 2030 | loss: 0.0311462 | val_loss: 0.0311658 | Time: 4477.27 ms [2022-04-23 23:14:29 main:574] : INFO : Epoch 2031 | loss: 0.0311482 | val_loss: 0.0311667 | Time: 4586.36 ms [2022-04-23 23:14:33 main:574] : INFO : Epoch 2032 | loss: 0.0311491 | val_loss: 0.0311611 | Time: 4414.32 ms [2022-04-23 23:14:37 main:574] : INFO : Epoch 2033 | loss: 0.0311452 | val_loss: 0.031162 | Time: 4410.87 ms [2022-04-23 23:14:42 main:574] : INFO : Epoch 2034 | loss: 0.0311448 | val_loss: 0.0311662 | Time: 4499.94 ms [2022-04-23 23:14:46 main:574] : INFO : Epoch 2035 | loss: 0.0311447 | val_loss: 0.0311657 | Time: 4554.55 ms [2022-04-23 23:14:51 main:574] : INFO : Epoch 2036 | loss: 0.031143 | val_loss: 0.0311619 | Time: 4616.32 ms [2022-04-23 23:14:56 main:574] : INFO : Epoch 2037 | loss: 0.0311432 | val_loss: 0.0311651 | Time: 4425.24 ms [2022-04-23 23:15:00 main:574] : INFO : Epoch 2038 | loss: 0.0311413 | val_loss: 0.0311652 | Time: 4355.91 ms [2022-04-23 23:15:04 main:574] : INFO : Epoch 2039 | loss: 0.0311409 | val_loss: 0.0311628 | Time: 4407.05 ms [2022-04-23 23:15:09 main:574] : INFO : Epoch 2040 | loss: 0.0311422 | val_loss: 0.0311606 | Time: 4433.98 ms [2022-04-23 23:15:13 main:574] : INFO : Epoch 2041 | loss: 0.0311427 | val_loss: 0.0311733 | Time: 4359.76 ms [2022-04-23 23:15:17 main:574] : INFO : Epoch 2042 | loss: 0.0311441 | val_loss: 0.0311633 | Time: 4360.66 ms [2022-04-23 23:15:22 main:574] : INFO : Epoch 2043 | loss: 0.0311443 | val_loss: 0.0311652 | Time: 4415.74 ms [2022-04-23 23:15:26 main:574] : INFO : Epoch 2044 | loss: 
0.0311409 | val_loss: 0.0311643 | Time: 4414.58 ms [2022-04-23 23:15:31 main:574] : INFO : Epoch 2045 | loss: 0.0311395 | val_loss: 0.0311643 | Time: 4462.5 ms [2022-04-23 23:15:35 main:574] : INFO : Epoch 2046 | loss: 0.0311406 | val_loss: 0.0311689 | Time: 4416.6 ms [2022-04-23 23:15:40 main:574] : INFO : Epoch 2047 | loss: 0.0311459 | val_loss: 0.0311648 | Time: 4445.55 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce RTX 2080) [2022-04-23 23:27:46 main:435] : INFO : Set logging level to 1 [2022-04-23 23:27:46 main:441] : INFO : Running in BOINC Client mode [2022-04-23 23:27:46 main:444] : INFO : Resolving all filenames [2022-04-23 23:27:46 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-04-23 23:27:46 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-04-23 23:27:46 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-04-23 23:27:46 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-04-23 23:27:46 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-04-23 23:27:46 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-04-23 23:27:46 main:474] : INFO : Configuration: [2022-04-23 23:27:46 main:475] : INFO : Model type: GRU [2022-04-23 23:27:46 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-04-23 23:27:46 main:477] : INFO : Max Epochs: 2048 [2022-04-23 23:27:46 main:478] : INFO : Batch Size: 128 [2022-04-23 23:27:46 main:479] : INFO : Learning Rate: 0.01 [2022-04-23 23:27:46 main:480] : INFO : Patience: 10 [2022-04-23 23:27:46 main:481] : INFO : Hidden Width: 12 [2022-04-23 23:27:46 main:482] : INFO : # Recurrent Layers: 4 [2022-04-23 23:27:46 main:483] : INFO : # Backend Layers: 4 [2022-04-23 23:27:46 main:484] : INFO : # Threads: 1 [2022-04-23 23:27:46 main:486] : INFO : Preparing Dataset [2022-04-23 23:27:46 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-04-23 23:27:47 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-04-23 23:27:48 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-04-23 23:27:48 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-04-23 23:27:48 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-04-23 23:27:48 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-04-23 23:27:49 main:494] : INFO : Creating Model [2022-04-23 23:27:49 main:507] : INFO : Preparing config file [2022-04-23 23:27:49 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-04-23 23:27:49 main:512] : INFO : Loading config [2022-04-23 23:27:49 main:514] : INFO : Loading state [2022-04-23 23:27:50 main:559] : INFO : Loading DataLoader into Memory [2022-04-23 23:27:50 main:562] : INFO : Starting Training [2022-04-23 23:27:54 main:574] : INFO : Epoch 2040 | loss: 0.0311721 | val_loss: 0.0311685 | Time: 4853.48 ms [2022-04-23 23:27:59 main:574] : INFO : Epoch 2041 | loss: 0.0311473 | val_loss: 0.0311634 | Time: 4435.25 ms [2022-04-23 23:28:03 main:574] : INFO : Epoch 2042 | loss: 0.0311436 | val_loss: 0.0311598 | Time: 4542.74 ms [2022-04-23 23:28:08 main:574] : INFO : Epoch 2043 | loss: 0.0311429 | val_loss: 0.0311593 | Time: 4508.18 ms [2022-04-23 23:28:12 main:574] : INFO : Epoch 2044 | loss: 0.0311403 | val_loss: 0.0311591 | Time: 4510.97 ms [2022-04-23 23:28:17 main:574] : INFO : Epoch 2045 | loss: 0.03114 | val_loss: 0.0311627 | Time: 4453.27 ms [2022-04-23 23:28:21 main:574] : INFO : Epoch 2046 | loss: 0.0311393 | val_loss: 0.0311632 | Time: 4431.77 ms [2022-04-23 23:28:26 main:574] : INFO : Epoch 2047 | loss: 0.0311398 | val_loss: 0.0311631 | Time: 4486.98 ms [2022-04-23 23:28:30 main:574] : INFO : Epoch 2048 | loss: 0.0311394 | val_loss: 0.0311623 | Time: 4441.38 ms [2022-04-23 23:28:30 main:597] : INFO : Saving trained model to model-final.pt, val_loss 0.0311623 [2022-04-23 23:28:30 main:603] : INFO : Saving end state to config to file [2022-04-23 23:28:30 main:608] : INFO : Success, exiting.. 23:28:30 (132860): called boinc_finish(0) </stderr_txt> ]]>
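The stderr above is produced by the MLC@Home "Machine Learning Dataset Generator" application, a C++/libTorch (release/1.6) binary. For readers who want to see roughly what the "Preparing Dataset" step does, here is a minimal Python/PyTorch sketch, not the project's actual code: the HDF5 file holds training data under `/Xt` and `/Yt` (2048 examples in this task) and validation data under `/Xv` and `/Yv` (512 examples), all loaded fully into memory. The function name `load_hdf5_split` and the use of `h5py` are assumptions made for this sketch.

```python
# Illustrative sketch only: the real application reads the same HDF5 layout
# (/Xt, /Yt for training; /Xv, /Yv for validation) in C++/libTorch.
import h5py
import torch
from torch.utils.data import TensorDataset, DataLoader

def load_hdf5_split(path: str, x_key: str, y_key: str) -> TensorDataset:
    """Load one X/Y dataset pair from the HDF5 file fully into memory."""
    with h5py.File(path, "r") as f:
        x = torch.from_numpy(f[x_key][...]).float()
        y = torch.from_numpy(f[y_key][...]).float()
    return TensorDataset(x, y)

train_ds = load_hdf5_split("dataset.hdf5", "/Xt", "/Yt")  # 2048 examples in this task
val_ds   = load_hdf5_split("dataset.hdf5", "/Xv", "/Yv")  # 512 examples

train_loader = DataLoader(train_ds, batch_size=128, shuffle=True)   # "Batch Size: 128"
val_loader   = DataLoader(val_ds, batch_size=128, shuffle=False)
```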
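The configuration block repeated at each restart (GRU, hidden width 12, 4 recurrent layers, 4 backend layers, batch size 128, learning rate 0.01, patience 10, validation-loss threshold 0.0001, max 2048 epochs) together with the "Found checkpoint, attempting to load..." lines describes a resumable training loop: each time the BOINC client stops and restarts the task, training continues from the last saved snapshot.pt, which is why the epoch counter steps back after some restarts (for example, a run that ended at epoch 1836 resumes at epoch 1825). The sketch below, again in Python/PyTorch rather than the project's C++/libTorch, illustrates such a loop under stated assumptions: the optimiser (Adam), the loss (MSE), the interpretation of "backend layers" as a fully connected head, and the exact early-stopping rule are all guesses, not the project's implementation.

```python
# A minimal, hypothetical sketch of a resumable GRU training loop matching the
# logged configuration. Class and function names are invented for illustration.
import os
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    def __init__(self, n_features: int, n_outputs: int,
                 hidden: int = 12, recurrent_layers: int = 4, backend_layers: int = 4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=recurrent_layers, batch_first=True)
        # "Backend Layers" interpreted here as a small fully connected head (assumption).
        head = []
        for _ in range(backend_layers - 1):
            head += [nn.Linear(hidden, hidden), nn.ReLU()]
        head += [nn.Linear(hidden, n_outputs)]
        self.head = nn.Sequential(*head)

    def forward(self, x):
        out, _ = self.gru(x)          # x: (batch, seq, features)
        return self.head(out[:, -1])  # predict from the last time step

def train(model, train_loader, val_loader, device,
          max_epochs=2048, lr=0.01, patience=10, val_threshold=1e-4,
          snapshot="snapshot.pt", final="model-final.pt"):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # optimiser choice is an assumption
    loss_fn = nn.MSELoss()                               # loss choice is an assumption
    start_epoch, best_val, stale = 0, float("inf"), 0

    # Resume if a checkpoint exists ("Found checkpoint, attempting to load...").
    if os.path.exists(snapshot):
        state = torch.load(snapshot, map_location=device)
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        start_epoch, best_val = state["epoch"], state["best_val"]

    for epoch in range(start_epoch, max_epochs):
        model.train()
        for xb, yb in train_loader:
            xb, yb = xb.to(device), yb.to(device)
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(xb.to(device)), yb.to(device)).item()
                           for xb, yb in val_loader) / len(val_loader)

        # Save a snapshot (every epoch here for simplicity; the real application
        # appears to snapshot less often, which is why restarts in the log above
        # repeat a few epochs).
        torch.save({"model": model.state_dict(), "opt": opt.state_dict(),
                    "epoch": epoch + 1, "best_val": min(best_val, val_loss)}, snapshot)

        # Early stopping: threshold or patience (the project's exact rule is unknown).
        stale = 0 if val_loss < best_val else stale + 1
        best_val = min(best_val, val_loss)
        if val_loss < val_threshold or stale > patience:
            break

    torch.save(model.state_dict(), final)  # "Saving trained model to model-final.pt"
```

With the loaders from the previous sketch, one would size the model from the dataset shapes, e.g. `model = GRURegressor(train_ds.tensors[0].shape[-1], train_ds.tensors[1].shape[-1])`, and call `train(model, train_loader, val_loader, torch.device("cuda" if torch.cuda.is_available() else "cpu"))`; in this task the loop reached the 2048-epoch limit with val_loss still around 0.0312, well above the 0.0001 threshold, and then wrote model-final.pt and exited with status 0.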
©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)