| Name | ParityModified-1639960724-32379-3-0_0 |
| Workunit | 8625717 |
| Created | 6 Feb 2022, 5:57:12 UTC |
| Sent | 18 Feb 2022, 17:21:46 UTC |
| Report deadline | 26 Feb 2022, 17:21:46 UTC |
| Received | 6 Mar 2022, 14:34:06 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 12272 |
| Run time | 1 hour 10 min 39 sec |
| CPU time | 46 min 21 sec |
| Validate state | Task was reported too late to validate |
| Credit | 0.00 |
| Device peak FLOPS | 4,174.67 GFLOPS |
| Application version | Machine Learning Dataset Generator (GPU) v9.75 (cuda10200) windows_x86_64 |
| Peak working set size | 1.54 GB |
| Peak swap size | 3.48 GB |
| Peak disk usage | 1.54 GB |
<core_client_version>7.16.11</core_client_version>
<![CDATA[
<stderr_txt>
och 1739 | loss: 0.0311561 | val_loss: 0.0311652 | Time: 1751.62 ms
[2022-03-05 10:58:33 main:574] : INFO : Epoch 1740 | loss: 0.0311575 | val_loss: 0.0311692 | Time: 291871 ms
[2022-03-05 10:58:35 main:574] : INFO : Epoch 1741 | loss: 0.0311621 | val_loss: 0.0311659 | Time: 1723.89 ms
[2022-03-05 10:58:37 main:574] : INFO : Epoch 1742 | loss: 0.0311602 | val_loss: 0.0311682 | Time: 1747.98 ms
[2022-03-05 10:58:38 main:574] : INFO : Epoch 1743 | loss: 0.0311585 | val_loss: 0.0311667 | Time: 1709.5 ms
[2022-03-05 10:58:40 main:574] : INFO : Epoch 1744 | loss: 0.0311581 | val_loss: 0.0311668 | Time: 1730.7 ms
[2022-03-05 11:03:32 main:574] : INFO : Epoch 1745 | loss: 0.0311561 | val_loss: 0.0311647 | Time: 291920 ms
[2022-03-05 11:03:34 main:574] : INFO : Epoch 1746 | loss: 0.0311544 | val_loss: 0.0311665 | Time: 1721.37 ms
[2022-03-05 11:03:36 main:574] : INFO : Epoch 1747 | loss: 0.031156 | val_loss: 0.0311709 | Time: 1704.59 ms
[2022-03-05 11:03:37 main:574] : INFO : Epoch 1748 | loss: 0.0311555 | val_loss: 0.0311711 | Time: 1747.69 ms
[2022-03-05 11:03:39 main:574] : INFO : Epoch 1749 | loss: 0.0311559 | val_loss: 0.031169 | Time: 1710.9 ms
[2022-03-05 11:03:41 main:574] : INFO : Epoch 1750 | loss: 0.0311551 | val_loss: 0.0311695 | Time: 1703.53 ms
[2022-03-05 11:04:23 main:574] : INFO : Epoch 1751 | loss: 0.0311544 | val_loss: 0.0311664 | Time: 41719.8 ms
[2022-03-05 11:04:24 main:574] : INFO : Epoch 1752 | loss: 0.0311536 | val_loss: 0.0311678 | Time: 1715.86 ms
[2022-03-05 11:04:26 main:574] : INFO : Epoch 1753 | loss: 0.0311528 | val_loss: 0.0311667 | Time: 1734.76 ms
[2022-03-05 11:04:28 main:574] : INFO : Epoch 1754 | loss: 0.0311526 | val_loss: 0.0311736 | Time: 1707.27 ms
[2022-03-05 11:04:30 main:574] : INFO : Epoch 1755 | loss: 0.0311561 | val_loss: 0.0311677 | Time: 1741.65 ms
[2022-03-05 11:04:31 main:574] : INFO : Epoch 1756 | loss: 0.0311536 | val_loss: 0.0311643 | Time: 1712.83 ms
[2022-03-05 11:08:23 main:574] : INFO : Epoch 1757 | loss: 0.0311511 | val_loss: 0.0311637 | Time: 232057 ms
[2022-03-05 11:08:25 main:574] : INFO : Epoch 1758 | loss: 0.0311553 | val_loss: 0.0311659 | Time: 1729.19 ms
[2022-03-05 11:08:27 main:574] : INFO : Epoch 1759 | loss: 0.0311593 | val_loss: 0.0311669 | Time: 1747.94 ms
[2022-03-05 11:08:29 main:574] : INFO : Epoch 1760 | loss: 0.0311594 | val_loss: 0.0311692 | Time: 1769.15 ms
[2022-03-05 11:08:31 main:574] : INFO : Epoch 1761 | loss: 0.0311567 | val_loss: 0.0311677 | Time: 1743.35 ms
[2022-03-05 11:11:02 main:574] : INFO : Epoch 1762 | loss: 0.031156 | val_loss: 0.0311662 | Time: 151746 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970)
[2022-03-05 13:12:38 main:435] : INFO : Set logging level to 1
[2022-03-05 13:12:38 main:441] : INFO : Running in BOINC Client mode
[2022-03-05 13:12:38 main:444] : INFO : Resolving all filenames
[2022-03-05 13:12:38 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-05 13:12:38 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-05 13:12:38 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-05 13:12:38 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-05 13:12:38 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-05 13:12:38 main:472] : INFO : Dataset filename: dataset.hdf5
[2022-03-05 13:12:38 main:474] : INFO : Configuration:
[2022-03-05 13:12:38 main:475] : INFO : Model type: GRU
[2022-03-05 13:12:38 main:476] : INFO : Validation Loss Threshold: 0.0001
[2022-03-05 13:12:38 main:477] : INFO : Max Epochs: 2048
[2022-03-05 13:12:38 main:478] : INFO : Batch Size: 128
[2022-03-05 13:12:38 main:479] : INFO : Learning Rate: 0.01
[2022-03-05 13:12:38 main:480] : INFO : Patience: 10
[2022-03-05 13:12:38 main:481] : INFO : Hidden Width: 12
[2022-03-05 13:12:38 main:482] : INFO : # Recurrent Layers: 4
[2022-03-05 13:12:38 main:483] : INFO : # Backend Layers: 4
[2022-03-05 13:12:38 main:484] : INFO : # Threads: 1
[2022-03-05 13:12:38 main:486] : INFO : Preparing Dataset
[2022-03-05 13:12:38 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-05 13:12:38 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-05 13:12:41 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory.
[2022-03-05 13:12:41 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-05 13:12:41 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-05 13:12:41 load:106] : INFO : Successfully loaded dataset of 512 examples into memory.
[2022-03-05 13:12:41 main:494] : INFO : Creating Model
[2022-03-05 13:12:41 main:507] : INFO : Preparing config file
[2022-03-05 13:12:41 main:511] : INFO : Found checkpoint, attempting to load...
[2022-03-05 13:12:41 main:512] : INFO : Loading config
[2022-03-05 13:12:41 main:514] : INFO : Loading state
[2022-03-05 13:12:42 main:559] : INFO : Loading DataLoader into Memory
[2022-03-05 13:12:42 main:562] : INFO : Starting Training
[2022-03-05 13:12:44 main:574] : INFO : Epoch 1763 | loss: 0.0311883 | val_loss: 0.0311717 | Time: 2122.94 ms
[2022-03-05 13:12:46 main:574] : INFO : Epoch 1764 | loss: 0.0311574 | val_loss: 0.0311656 | Time: 1724.9 ms
[2022-03-05 13:28:28 main:574] : INFO : Epoch 1765 | loss: 0.031155 | val_loss: 0.0311651 | Time: 942343 ms
[2022-03-05 13:28:30 main:574] : INFO : Epoch 1766 | loss: 0.0311548 | val_loss: 0.0311657 | Time: 1723.82 ms
[2022-03-05 13:28:32 main:574] : INFO : Epoch 1767 | loss: 0.0311533 | val_loss: 0.0311663 | Time: 1738.86 ms
[2022-03-05 13:28:34 main:574] : INFO : Epoch 1768 | loss: 0.0311525 | val_loss: 0.031166 | Time: 1721.1 ms
[2022-03-05 13:28:36 main:574] : INFO : Epoch 1769 | loss: 0.0311522 | val_loss: 0.0311634 | Time: 1716.61 ms
[2022-03-05 13:28:37 main:574] : INFO : Epoch 1770 | loss: 0.0311508 | val_loss: 0.0311624 | Time: 1738.09 ms
[2022-03-05 13:33:29 main:574] : INFO : Epoch 1771 | loss: 0.0311504 | val_loss: 0.0311627 | Time: 291892 ms
[2022-03-05 13:33:31 main:574] : INFO : Epoch 1772 | loss: 0.0311516 | val_loss: 0.0311653 | Time: 1724.3 ms
[2022-03-05 13:33:33 main:574] : INFO : Epoch 1773 | loss: 0.0311535 | val_loss: 0.0311652 | Time: 1695.72 ms
[2022-03-05 13:33:34 main:574] : INFO : Epoch 1774 | loss: 0.031153 | val_loss: 0.0311662 | Time: 1724.02 ms
[2022-03-05 13:33:36 main:574] : INFO : Epoch 1775 | loss: 0.0311555 | val_loss: 0.0311665 | Time: 1740.58 ms
[2022-03-05 13:33:38 main:574] : INFO : Epoch 1776 | loss: 0.0311541 | val_loss: 0.0311655 | Time: 1714.99 ms
[2022-03-05 13:38:30 main:574] : INFO : Epoch 1777 | loss: 0.0311521 | val_loss: 0.031165 | Time: 291965 ms
[2022-03-05 13:38:32 main:574] : INFO : Epoch 1778 | loss: 0.0311521 | val_loss: 0.0311674 | Time: 1693.78 ms
[2022-03-05 13:38:33 main:574] : INFO : Epoch 1779 | loss: 0.0311528 | val_loss: 0.0311674 | Time: 1727.11 ms
[2022-03-05 13:38:35 main:574] : INFO : Epoch 1780 | loss: 0.0311515 | val_loss: 0.0311677 | Time: 1738.69 ms
[2022-03-05 13:38:37 main:574] : INFO : Epoch 1781 | loss: 0.0311538 | val_loss: 0.0311701 | Time: 1788.68 ms
[2022-03-05 13:41:19 main:574] : INFO : Epoch 1782 | loss: 0.0311551 | val_loss: 0.0311686 | Time: 161902 ms
[2022-03-05 13:41:21 main:574] : INFO : Epoch 1783 | loss: 0.0311593 | val_loss: 0.0311707 | Time: 1730.7 ms
[2022-03-05 13:41:22 main:574] : INFO : Epoch 1784 | loss: 0.031155 | val_loss: 0.0311642 | Time: 1707.98 ms
[2022-03-05 13:41:24 main:574] : INFO : Epoch 1785 | loss: 0.0311518 | val_loss: 0.0311654 | Time: 1717.71 ms
[2022-03-05 13:41:26 main:574] : INFO : Epoch 1786 | loss: 0.031151 | val_loss: 0.0311652 | Time: 1727.97 ms
[2022-03-05 13:41:28 main:574] : INFO : Epoch 1787 | loss: 0.0311533 | val_loss: 0.0311671 | Time: 1742.11 ms
[2022-03-05 13:43:30 main:574] : INFO : Epoch 1788 | loss: 0.031152 | val_loss: 0.0311676 | Time: 121883 ms
[2022-03-05 13:43:31 main:574] : INFO : Epoch 1789 | loss: 0.0311507 | val_loss: 0.0311675 | Time: 1701.1 ms
[2022-03-05 13:43:33 main:574] : INFO : Epoch 1790 | loss: 0.0311496 | val_loss: 0.0311651 | Time: 1696.35 ms
[2022-03-05 13:43:35 main:574] : INFO : Epoch 1791 | loss: 0.0311507 | val_loss: 0.0311677 | Time: 1712.58 ms
[2022-03-05 13:43:37 main:574] : INFO : Epoch 1792 | loss: 0.0311568 | val_loss: 0.0311689 | Time: 1732.24 ms
[2022-03-05 13:43:38 main:574] : INFO : Epoch 1793 | loss: 0.0311558 | val_loss: 0.0311673 | Time: 1724.94 ms
[2022-03-05 13:48:30 main:574] : INFO : Epoch 1794 | loss: 0.0311547 | val_loss: 0.0311673 | Time: 291925 ms
[2022-03-05 13:48:32 main:574] : INFO : Epoch 1795 | loss: 0.0311533 | val_loss: 0.031168 | Time: 1704.51 ms
[2022-03-05 13:48:34 main:574] : INFO : Epoch 1796 | loss: 0.0311528 | val_loss: 0.031166 | Time: 1707.66 ms
[2022-03-05 13:48:36 main:574] : INFO : Epoch 1797 | loss: 0.0311509 | val_loss: 0.0311653 | Time: 1717.01 ms
[2022-03-05 13:48:37 main:574] : INFO : Epoch 1798 | loss: 0.0311494 | val_loss: 0.0311651 | Time: 1719.55 ms
[2022-03-05 13:48:39 main:574] : INFO : Epoch 1799 | loss: 0.0311488 | val_loss: 0.031164 | Time: 1709.3 ms
[2022-03-05 13:53:31 main:574] : INFO : Epoch 1800 | loss: 0.0311486 | val_loss: 0.031166 | Time: 291906 ms
[2022-03-05 13:53:33 main:574] : INFO : Epoch 1801 | loss: 0.0311506 | val_loss: 0.0311675 | Time: 1710.8 ms
[2022-03-05 13:53:35 main:574] : INFO : Epoch 1802 | loss: 0.0311512 | val_loss: 0.0311676 | Time: 1732.65 ms
[2022-03-05 13:53:36 main:574] : INFO : Epoch 1803 | loss: 0.0311519 | val_loss: 0.0311648 | Time: 1700.99 ms
[2022-03-05 13:53:38 main:574] : INFO : Epoch 1804 | loss: 0.0311483 | val_loss: 0.0311716 | Time: 1708.94 ms
[2022-03-05 14:02:40 main:574] : INFO : Epoch 1805 | loss: 0.0311494 | val_loss: 0.0311651 | Time: 542016 ms
[2022-03-05 14:02:42 main:574] : INFO : Epoch 1806 | loss: 0.0311489 | val_loss: 0.0311653 | Time: 1744.26 ms
[2022-03-05 14:02:44 main:574] : INFO : Epoch 1807 | loss: 0.0311472 | val_loss: 0.0311665 | Time: 1713.41 ms
[2022-03-05 14:02:45 main:574] : INFO : Epoch 1808 | loss: 0.0311475 | val_loss: 0.0311655 | Time: 1754.3 ms
[2022-03-05 14:02:47 main:574] : INFO : Epoch 1809 | loss: 0.0311449 | val_loss: 0.0311638 | Time: 1719.95 ms
[2022-03-05 14:02:49 main:574] : INFO : Epoch 1810 | loss: 0.0311458 | val_loss: 0.0311685 | Time: 1837.01 ms
[2022-03-05 14:03:11 main:574] : INFO : Epoch 1811 | loss: 0.0311474 | val_loss: 0.0311715 | Time: 21688 ms
[2022-03-05 14:03:12 main:574] : INFO : Epoch 1812 | loss: 0.0311466 | val_loss: 0.0311681 | Time: 1698.95 ms
[2022-03-05 14:03:14 main:574] : INFO : Epoch 1813 | loss: 0.0311468 | val_loss: 0.0311738 | Time: 1714.53 ms
[2022-03-05 14:03:16 main:574] : INFO : Epoch 1814 | loss: 0.0311479 | val_loss: 0.0311703 | Time: 1697.47 ms
[2022-03-05 14:03:18 main:574] : INFO : Epoch 1815 | loss: 0.0311473 | val_loss: 0.0311649 | Time: 1745.06 ms
[2022-03-05 14:03:19 main:574] : INFO : Epoch 1816 | loss: 0.0311475 | val_loss: 0.0311659 | Time: 1736.04 ms
[2022-03-05 14:08:31 main:574] : INFO : Epoch 1817 | loss: 0.0311479 | val_loss: 0.0311664 | Time: 311925 ms
[2022-03-05 14:08:33 main:574] : INFO : Epoch 1818 | loss: 0.0311469 | val_loss: 0.0311621 | Time: 1708.29 ms
[2022-03-05 14:08:35 main:574] : INFO : Epoch 1819 | loss: 0.031147 | val_loss: 0.031165 | Time: 1694.02 ms
[2022-03-05 14:08:37 main:574] : INFO : Epoch 1820 | loss: 0.0311454 | val_loss: 0.0311658 | Time: 1787.61 ms
[2022-03-05 14:08:39 main:574] : INFO : Epoch 1821 | loss: 0.0311482 | val_loss: 0.0311667 | Time: 1758.3 ms
[2022-03-05 14:13:31 main:574] : INFO : Epoch 1822 | loss: 0.0311458 | val_loss: 0.0311655 | Time: 291932 ms
[2022-03-05 14:13:32 main:574] : INFO : Epoch 1823 | loss: 0.0311445 | val_loss: 0.0311667 | Time: 1719.74 ms
[2022-03-05 14:13:34 main:574] : INFO : Epoch 1824 | loss: 0.0311464 | val_loss: 0.0311677 | Time: 1746.03 ms
[2022-03-05 14:13:36 main:574] : INFO : Epoch 1825 | loss: 0.0311455 | val_loss: 0.0311678 | Time: 1694.93 ms
[2022-03-05 14:13:38 main:574] : INFO : Epoch 1826 | loss: 0.0311458 | val_loss: 0.0311687 | Time: 1737.06 ms
[2022-03-05 14:13:39 main:574] : INFO : Epoch 1827 | loss: 0.0311464 | val_loss: 0.0311776 | Time: 1715.78 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970)
[2022-03-05 14:23:31 main:435] : INFO : Set logging level to 1
[2022-03-05 14:23:31 main:441] : INFO : Running in BOINC Client mode
[2022-03-05 14:23:31 main:444] : INFO : Resolving all filenames
[2022-03-05 14:23:31 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-05 14:23:31 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-05 14:23:31 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-05 14:23:31 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-05 14:23:31 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-05 14:23:31 main:472] : INFO : Dataset filename: dataset.hdf5
[2022-03-05 14:23:31 main:474] : INFO : Configuration:
[2022-03-05 14:23:31 main:475] : INFO : Model type: GRU
[2022-03-05 14:23:31 main:476] : INFO : Validation Loss Threshold: 0.0001
[2022-03-05 14:23:31 main:477] : INFO : Max Epochs: 2048
[2022-03-05 14:23:31 main:478] : INFO : Batch Size: 128
[2022-03-05 14:23:31 main:479] : INFO : Learning Rate: 0.01
[2022-03-05 14:23:31 main:480] : INFO : Patience: 10
[2022-03-05 14:23:31 main:481] : INFO : Hidden Width: 12
[2022-03-05 14:23:31 main:482] : INFO : # Recurrent Layers: 4
[2022-03-05 14:23:31 main:483] : INFO : # Backend Layers: 4
[2022-03-05 14:23:31 main:484] : INFO : # Threads: 1
[2022-03-05 14:23:31 main:486] : INFO : Preparing Dataset
[2022-03-05 14:23:31 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-05 14:23:31 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-05 14:23:34 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory.
[2022-03-05 14:23:34 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-05 14:23:34 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-05 14:23:34 load:106] : INFO : Successfully loaded dataset of 512 examples into memory.
[2022-03-05 14:23:34 main:494] : INFO : Creating Model
[2022-03-05 14:23:34 main:507] : INFO : Preparing config file
[2022-03-05 14:23:34 main:511] : INFO : Found checkpoint, attempting to load...
[2022-03-05 14:23:34 main:512] : INFO : Loading config
[2022-03-05 14:23:34 main:514] : INFO : Loading state
[2022-03-05 14:23:35 main:559] : INFO : Loading DataLoader into Memory
[2022-03-05 14:23:35 main:562] : INFO : Starting Training
[2022-03-05 14:23:37 main:574] : INFO : Epoch 1823 | loss: 0.0311863 | val_loss: 0.0311675 | Time: 2134.27 ms
[2022-03-05 14:23:39 main:574] : INFO : Epoch 1824 | loss: 0.0311518 | val_loss: 0.031167 | Time: 1758.95 ms
[2022-03-05 14:23:41 main:574] : INFO : Epoch 1825 | loss: 0.0311476 | val_loss: 0.0311663 | Time: 1735.59 ms
[2022-03-05 14:28:33 main:574] : INFO : Epoch 1826 | loss: 0.0311463 | val_loss: 0.0311687 | Time: 291974 ms
[2022-03-05 14:28:34 main:574] : INFO : Epoch 1827 | loss: 0.0311498 | val_loss: 0.0311662 | Time: 1739.52 ms
[2022-03-05 14:28:36 main:574] : INFO : Epoch 1828 | loss: 0.0311494 | val_loss: 0.0311663 | Time: 1723.73 ms
[2022-03-05 14:28:38 main:574] : INFO : Epoch 1829 | loss: 0.0311467 | val_loss: 0.031166 | Time: 1744.41 ms
[2022-03-05 14:28:40 main:574] : INFO : Epoch 1830 | loss: 0.0311464 | val_loss: 0.0311681 | Time: 1718.27 ms
[2022-03-05 14:33:32 main:574] : INFO : Epoch 1831 | loss: 0.0311454 | val_loss: 0.0311678 | Time: 292068 ms
[2022-03-05 14:33:34 main:574] : INFO : Epoch 1832 | loss: 0.0311478 | val_loss: 0.0311718 | Time: 1767.15 ms
[2022-03-05 14:33:35 main:574] : INFO : Epoch 1833 | loss: 0.0311463 | val_loss: 0.0311699 | Time: 1726.27 ms
[2022-03-05 14:33:37 main:574] : INFO : Epoch 1834 | loss: 0.031148 | val_loss: 0.0311749 | Time: 1725.41 ms
[2022-03-05 14:33:39 main:574] : INFO : Epoch 1835 | loss: 0.0311464 | val_loss: 0.0311694 | Time: 1718.9 ms
[2022-03-05 14:33:41 main:574] : INFO : Epoch 1836 | loss: 0.0311441 | val_loss: 0.0311685 | Time: 1715.23 ms
[2022-03-05 14:38:33 main:574] : INFO : Epoch 1837 | loss: 0.031143 | val_loss: 0.0311674 | Time: 292002 ms
[2022-03-05 14:38:34 main:574] : INFO : Epoch 1838 | loss: 0.0311428 | val_loss: 0.0311708 | Time: 1728.22 ms
[2022-03-05 14:38:36 main:574] : INFO : Epoch 1839 | loss: 0.0311426 | val_loss: 0.031172 | Time: 1735.53 ms
[2022-03-05 14:38:38 main:574] : INFO : Epoch 1840 | loss: 0.0311445 | val_loss: 0.0311696 | Time: 1776.99 ms
[2022-03-05 14:38:40 main:574] : INFO : Epoch 1841 | loss: 0.0311447 | val_loss: 0.0311692 | Time: 1721.5 ms
[2022-03-05 14:43:32 main:574] : INFO : Epoch 1842 | loss: 0.031145 | val_loss: 0.0311693 | Time: 292202 ms
[2022-03-05 14:43:34 main:574] : INFO : Epoch 1843 | loss: 0.0311449 | val_loss: 0.0311715 | Time: 1717.77 ms
[2022-03-05 14:43:36 main:574] : INFO : Epoch 1844 | loss: 0.0311442 | val_loss: 0.0311718 | Time: 1715.62 ms
[2022-03-05 14:43:37 main:574] : INFO : Epoch 1845 | loss: 0.0311468 | val_loss: 0.0311676 | Time: 1751.8 ms
[2022-03-05 14:43:39 main:574] : INFO : Epoch 1846 | loss: 0.0311439 | val_loss: 0.0311693 | Time: 1717.53 ms
[2022-03-05 14:43:41 main:574] : INFO : Epoch 1847 | loss: 0.0311428 | val_loss: 0.03117 | Time: 1708.59 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970)
[2022-03-05 15:18:45 main:435] : INFO : Set logging level to 1
[2022-03-05 15:18:45 main:441] : INFO : Running in BOINC Client mode
[2022-03-05 15:18:45 main:444] : INFO : Resolving all filenames
[2022-03-05 15:18:45 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-05 15:18:45 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-05 15:18:45 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-05 15:18:45 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-05 15:18:45 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-05 15:18:45 main:472] : INFO : Dataset filename: dataset.hdf5
[2022-03-05 15:18:45 main:474] : INFO : Configuration:
[2022-03-05 15:18:45 main:475] : INFO : Model type: GRU
[2022-03-05 15:18:45 main:476] : INFO : Validation Loss Threshold: 0.0001
[2022-03-05 15:18:45 main:477] : INFO : Max Epochs: 2048
[2022-03-05 15:18:45 main:478] : INFO : Batch Size: 128
[2022-03-05 15:18:45 main:479] : INFO : Learning Rate: 0.01
[2022-03-05 15:18:45 main:480] : INFO : Patience: 10
[2022-03-05 15:18:45 main:481] : INFO : Hidden Width: 12
[2022-03-05 15:18:45 main:482] : INFO : # Recurrent Layers: 4
[2022-03-05 15:18:45 main:483] : INFO : # Backend Layers: 4
[2022-03-05 15:18:45 main:484] : INFO : # Threads: 1
[2022-03-05 15:18:45 main:486] : INFO : Preparing Dataset
[2022-03-05 15:18:45 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-05 15:18:45 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-05 15:18:48 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory.
[2022-03-05 15:18:48 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-05 15:18:49 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-05 15:18:49 load:106] : INFO : Successfully loaded dataset of 512 examples into memory.
[2022-03-05 15:18:49 main:494] : INFO : Creating Model
[2022-03-05 15:18:49 main:507] : INFO : Preparing config file
[2022-03-05 15:18:49 main:511] : INFO : Found checkpoint, attempting to load...
[2022-03-05 15:18:49 main:512] : INFO : Loading config
[2022-03-05 15:18:49 main:514] : INFO : Loading state
[2022-03-05 15:18:50 main:559] : INFO : Loading DataLoader into Memory
[2022-03-05 15:18:50 main:562] : INFO : Starting Training
[2022-03-05 15:18:52 main:574] : INFO : Epoch 1843 | loss: 0.0312169 | val_loss: 0.0311902 | Time: 2220.78 ms
[2022-03-05 15:18:54 main:574] : INFO : Epoch 1844 | loss: 0.031158 | val_loss: 0.0311673 | Time: 1800.78 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970)
[2022-03-05 15:37:05 main:435] : INFO : Set logging level to 1
[2022-03-05 15:37:05 main:441] : INFO : Running in BOINC Client mode
[2022-03-05 15:37:05 main:444] : INFO : Resolving all filenames
[2022-03-05 15:37:05 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-05 15:37:05 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-05 15:37:05 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-05 15:37:05 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-05 15:37:05 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-05 15:37:05 main:472] : INFO : Dataset filename: dataset.hdf5
[2022-03-05 15:37:05 main:474] : INFO : Configuration:
[2022-03-05 15:37:05 main:475] : INFO : Model type: GRU
[2022-03-05 15:37:05 main:476] : INFO : Validation Loss Threshold: 0.0001
[2022-03-05 15:37:05 main:477] : INFO : Max Epochs: 2048
[2022-03-05 15:37:05 main:478] : INFO : Batch Size: 128
[2022-03-05 15:37:05 main:479] : INFO : Learning Rate: 0.01
[2022-03-05 15:37:05 main:480] : INFO : Patience: 10
[2022-03-05 15:37:05 main:481] : INFO : Hidden Width: 12
[2022-03-05 15:37:05 main:482] : INFO : # Recurrent Layers: 4
[2022-03-05 15:37:05 main:483] : INFO : # Backend Layers: 4
[2022-03-05 15:37:05 main:484] : INFO : # Threads: 1
[2022-03-05 15:37:05 main:486] : INFO : Preparing Dataset
[2022-03-05 15:37:05 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-05 15:37:06 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-05 15:37:08 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory.
[2022-03-05 15:37:08 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-05 15:37:08 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-05 15:37:08 load:106] : INFO : Successfully loaded dataset of 512 examples into memory.
[2022-03-05 15:37:08 main:494] : INFO : Creating Model
[2022-03-05 15:37:08 main:507] : INFO : Preparing config file
[2022-03-05 15:37:08 main:511] : INFO : Found checkpoint, attempting to load...
[2022-03-05 15:37:08 main:512] : INFO : Loading config
[2022-03-05 15:37:08 main:514] : INFO : Loading state
[2022-03-05 15:37:09 main:559] : INFO : Loading DataLoader into Memory
[2022-03-05 15:37:09 main:562] : INFO : Starting Training
[2022-03-05 15:37:11 main:574] : INFO : Epoch 1843 | loss: 0.0311912 | val_loss: 0.0311711 | Time: 2112.29 ms
[2022-03-05 15:37:13 main:574] : INFO : Epoch 1844 | loss: 0.0311515 | val_loss: 0.0311762 | Time: 1708.84 ms
[2022-03-05 15:37:15 main:574] : INFO : Epoch 1845 | loss: 0.0311502 | val_loss: 0.0311679 | Time: 1721.99 ms
[2022-03-05 15:38:06 main:574] : INFO : Epoch 1846 | loss: 0.0311487 | val_loss: 0.0311678 | Time: 51670.6 ms
[2022-03-05 15:38:08 main:574] : INFO : Epoch 1847 | loss: 0.031149 | val_loss: 0.0311702 | Time: 1742.4 ms
[2022-03-05 15:38:10 main:574] : INFO : Epoch 1848 | loss: 0.0311497 | val_loss: 0.0311689 | Time: 1711.1 ms
[2022-03-05 15:38:12 main:574] : INFO : Epoch 1849 | loss: 0.0311456 | val_loss: 0.0311672 | Time: 1722.24 ms
[2022-03-05 15:38:13 main:574] : INFO : Epoch 1850 | loss: 0.0311457 | val_loss: 0.0311668 | Time: 1696.49 ms
[2022-03-05 15:46:15 main:574] : INFO : Epoch 1851 | loss: 0.0311456 | val_loss: 0.0311662 | Time: 482088 ms
[2022-03-05 15:46:17 main:574] : INFO : Epoch 1852 | loss: 0.031144 | val_loss: 0.0311702 | Time: 1713.57 ms
[2022-03-05 15:46:19 main:574] : INFO : Epoch 1853 | loss: 0.0311441 | val_loss: 0.0311711 | Time: 1723.54 ms
[2022-03-05 15:46:21 main:574] : INFO : Epoch 1854 | loss: 0.031143 | val_loss: 0.0311738 | Time: 1703.25 ms
[2022-03-05 15:46:22 main:574] : INFO : Epoch 1855 | loss: 0.0311452 | val_loss: 0.0311699 | Time: 1688.63 ms
[2022-03-05 15:46:24 main:574] : INFO : Epoch 1856 | loss: 0.0311456 | val_loss: 0.03117 | Time: 1695.67 ms
[2022-03-05 15:47:26 main:574] : INFO : Epoch 1857 | loss: 0.0311422 | val_loss: 0.0311681 | Time: 61853 ms
[2022-03-05 15:47:28 main:574] : INFO : Epoch 1858 | loss: 0.031141 | val_loss: 0.0311689 | Time: 1699.17 ms
[2022-03-05 15:47:29 main:574] : INFO : Epoch 1859 | loss: 0.0311412 | val_loss: 0.0311719 | Time: 1714.84 ms
[2022-03-05 15:47:31 main:574] : INFO : Epoch 1860 | loss: 0.031142 | val_loss: 0.0311706 | Time: 1707.58 ms
[2022-03-05 15:47:33 main:574] : INFO : Epoch 1861 | loss: 0.031141 | val_loss: 0.0311757 | Time: 1745.43 ms
[2022-03-05 15:47:35 main:574] : INFO : Epoch 1862 | loss: 0.0311413 | val_loss: 0.0311728 | Time: 1755.68 ms
[2022-03-05 16:13:48 main:574] : INFO : Epoch 1863 | loss: 0.0311404 | val_loss: 0.0311697 | Time: 1.57291e+06 ms
[2022-03-05 16:13:49 main:574] : INFO : Epoch 1864 | loss: 0.0311428 | val_loss: 0.0311718 | Time: 1718.76 ms
[2022-03-05 16:13:51 main:574] : INFO : Epoch 1865 | loss: 0.0311451 | val_loss: 0.0311681 | Time: 1723.94 ms
[2022-03-05 16:13:53 main:574] : INFO : Epoch 1866 | loss: 0.0311482 | val_loss: 0.0311683 | Time: 1732.36 ms
[2022-03-05 16:13:55 main:574] : INFO : Epoch 1867 | loss: 0.0311481 | val_loss: 0.0311661 | Time: 1705.32 ms
[2022-03-05 16:18:47 main:574] : INFO : Epoch 1868 | loss: 0.0311478 | val_loss: 0.0311687 | Time: 291911 ms
[2022-03-05 16:18:48 main:574] : INFO : Epoch 1869 | loss: 0.0311458 | val_loss: 0.0311664 | Time: 1729.96 ms
[2022-03-05 16:18:50 main:574] : INFO : Epoch 1870 | loss: 0.0311459 | val_loss: 0.0311702 | Time: 1756.55 ms
[2022-03-05 16:18:52 main:574] : INFO : Epoch 1871 | loss: 0.0311463 | val_loss: 0.0311661 | Time: 1698.91 ms
[2022-03-05 16:18:54 main:574] : INFO : Epoch 1872 | loss: 0.0311462 | val_loss: 0.0311666 | Time: 1708.2 ms
[2022-03-05 16:18:55 main:574] : INFO : Epoch 1873 | loss: 0.0311426 | val_loss: 0.0311704 | Time: 1717.43 ms
[2022-03-05 16:29:27 main:574] : INFO : Epoch 1874 | loss: 0.0311435 | val_loss: 0.0311684 | Time: 632159 ms
[2022-03-05 16:29:29 main:574] : INFO : Epoch 1875 | loss: 0.0311427 | val_loss: 0.0311637 | Time: 1712.87 ms
[2022-03-05 16:29:31 main:574] : INFO : Epoch 1876 | loss: 0.0311454 | val_loss: 0.0311669 | Time: 1688.05 ms
[2022-03-05 16:29:33 main:574] : INFO : Epoch 1877 | loss: 0.0311453 | val_loss: 0.0311656 | Time: 1681.36 ms
[2022-03-05 16:29:34 main:574] : INFO : Epoch 1878 | loss: 0.0311453 | val_loss: 0.0311649 | Time: 1680 ms
[2022-03-05 16:29:36 main:574] : INFO : Epoch 1879 | loss: 0.0311485 | val_loss: 0.0311597 | Time: 1696.3 ms
[2022-03-05 16:33:48 main:574] : INFO : Epoch 1880 | loss: 0.0311509 | val_loss: 0.031161 | Time: 251900 ms
[2022-03-05 16:33:50 main:574] : INFO : Epoch 1881 | loss: 0.0311479 | val_loss: 0.0311631 | Time: 1678.82 ms
[2022-03-05 16:33:51 main:574] : INFO : Epoch 1882 | loss: 0.0311477 | val_loss: 0.0311616 | Time: 1708.27 ms
[2022-03-05 16:33:53 main:574] : INFO : Epoch 1883 | loss: 0.0311462 | val_loss: 0.0311636 | Time: 1702.68 ms
[2022-03-05 16:33:55 main:574] : INFO : Epoch 1884 | loss: 0.0311447 | val_loss: 0.0311615 | Time: 1701.18 ms
[2022-03-05 16:33:57 main:574] : INFO : Epoch 1885 | loss: 0.0311448 | val_loss: 0.0311626 | Time: 1751.41 ms
[2022-03-05 16:38:49 main:574] : INFO : Epoch 1886 | loss: 0.0311442 | val_loss: 0.0311614 | Time: 291909 ms
[2022-03-05 16:38:50 main:574] : INFO : Epoch 1887 | loss: 0.0311464 | val_loss: 0.0311647 | Time: 1709.04 ms
[2022-03-05 16:38:52 main:574] : INFO : Epoch 1888 | loss: 0.0311431 | val_loss: 0.0311622 | Time: 1701.58 ms
[2022-03-05 16:38:54 main:574] : INFO : Epoch 1889 | loss: 0.0311434 | val_loss: 0.0311642 | Time: 1700.4 ms
[2022-03-05 16:38:56 main:574] : INFO : Epoch 1890 | loss: 0.0311431 | val_loss: 0.0311657 | Time: 1693.36 ms
[2022-03-05 16:43:47 main:574] : INFO : Epoch 1891 | loss: 0.0311412 | val_loss: 0.0311663 | Time: 291952 ms
[2022-03-05 16:43:49 main:574] : INFO : Epoch 1892 | loss: 0.0311436 | val_loss: 0.0311664 | Time: 1754.96 ms
[2022-03-05 16:43:51 main:574] : INFO : Epoch 1893 | loss: 0.0311458 | val_loss: 0.0311651 | Time: 1714.14 ms
[2022-03-05 16:43:53 main:574] : INFO : Epoch 1894 | loss: 0.0311463 | val_loss: 0.031165 | Time: 1698.92 ms
[2022-03-05 16:43:55 main:574] : INFO : Epoch 1895 | loss: 0.0311477 | val_loss: 0.0311641 | Time: 1736.27 ms
[2022-03-05 16:43:56 main:574] : INFO : Epoch 1896 | loss: 0.0311456 | val_loss: 0.0311639 | Time: 1704.77 ms
[2022-03-05 16:48:38 main:574] : INFO : Epoch 1897 | loss: 0.0311442 | val_loss: 0.0311673 | Time: 281970 ms
[2022-03-05 16:48:40 main:574] : INFO : Epoch 1898 | loss: 0.03115 | val_loss: 0.0311676 | Time: 1713.6 ms
[2022-03-05 16:48:42 main:574] : INFO : Epoch 1899 | loss: 0.0311479 | val_loss: 0.031165 | Time: 1706.95 ms
[2022-03-05 16:48:44 main:574] : INFO : Epoch 1900 | loss: 0.0311454 | val_loss: 0.0311707 | Time: 1703.12 ms
[2022-03-05 16:48:45 main:574] : INFO : Epoch 1901 | loss: 0.0311452 | val_loss: 0.0311684 | Time: 1701.72 ms
[2022-03-05 16:48:47 main:574] : INFO : Epoch 1902 | loss: 0.0311459 | val_loss: 0.0311658 | Time: 1741.17 ms
[2022-03-05 16:53:39 main:574] : INFO : Epoch 1903 | loss: 0.0311528 | val_loss: 0.0311672 | Time: 292066 ms
[2022-03-05 16:53:41 main:574] : INFO : Epoch 1904 | loss: 0.0311528 | val_loss: 0.0311678 | Time: 1710.39 ms
[2022-03-05 16:53:43 main:574] : INFO : Epoch 1905 | loss: 0.0311506 | val_loss: 0.0311674 | Time: 1706.83 ms
[2022-03-05 16:53:45 main:574] : INFO : Epoch 1906 | loss: 0.0311472 | val_loss: 0.0311696 | Time: 1712.1 ms
[2022-03-05 16:53:46 main:574] : INFO : Epoch 1907 | loss: 0.0311489 | val_loss: 0.0311624 | Time: 1735.04 ms
[2022-03-05 16:58:38 main:574] : INFO : Epoch 1908 | loss: 0.031148 | val_loss: 0.0311619 | Time: 292048 ms
[2022-03-05 16:58:40 main:574] : INFO : Epoch 1909 | loss: 0.0311455 | val_loss: 0.0311634 | Time: 1698.15 ms
[2022-03-05 16:58:42 main:574] : INFO : Epoch 1910 | loss: 0.0311436 | val_loss: 0.0311633 | Time: 1706.99 ms
[2022-03-05 16:58:44 main:574] : INFO : Epoch 1911 | loss: 0.0311432 | val_loss: 0.0311651 | Time: 1751.57 ms
[2022-03-05 16:58:45 main:574] : INFO : Epoch 1912 | loss: 0.0311431 | val_loss: 0.0311641 | Time: 1689.15 ms
[2022-03-05 16:58:47 main:574] : INFO : Epoch 1913 | loss: 0.0311472 | val_loss: 0.0311638 | Time: 1718.47 ms
[2022-03-05 17:08:39 main:574] : INFO : Epoch 1914 | loss: 0.031146 | val_loss: 0.0311672 | Time: 592160 ms
[2022-03-05 17:08:41 main:574] : INFO : Epoch 1915 | loss: 0.031146 | val_loss: 0.0311643 | Time: 1714.1 ms
[2022-03-05 17:08:43 main:574] : INFO : Epoch 1916 | loss: 0.0311433 | val_loss: 0.031163 | Time: 1719.5 ms
[2022-03-05 17:08:45 main:574] : INFO : Epoch 1917 | loss: 0.0311449 | val_loss: 0.031164 | Time: 1691.89 ms
[2022-03-05 17:08:46 main:574] : INFO : Epoch 1918 | loss: 0.0311433 | val_loss: 0.0311661 | Time: 1695.05 ms
[2022-03-05 17:08:48 main:574] : INFO : Epoch 1919 | loss: 0.0311442 | val_loss: 0.0311622 | Time: 1707.34 ms
[2022-03-05 17:11:00 main:574] : INFO : Epoch 1920 | loss: 0.0311441 | val_loss: 0.0311671 | Time: 131860 ms
[2022-03-05 17:11:02 main:574] : INFO : Epoch 1921 | loss: 0.0311428 | val_loss: 0.0311646 | Time: 1700.02 ms
[2022-03-05 17:11:03 main:574] : INFO : Epoch 1922 | loss: 0.0311447 | val_loss: 0.0311619 | Time: 1696.94 ms
[2022-03-05 17:11:05 main:574] : INFO : Epoch 1923 | loss: 0.0311452 | val_loss: 0.0311643 | Time: 1695.46 ms
[2022-03-05 17:11:07 main:574] : INFO : Epoch 1924 | loss: 0.0311451 | val_loss: 0.0311641 | Time: 1751.6 ms
[2022-03-05 17:12:49 main:574] : INFO : Epoch 1925 | loss: 0.031146 | val_loss: 0.0311616 | Time: 101705 ms
[2022-03-05 17:12:50 main:574] : INFO : Epoch 1926 | loss: 0.0311477 | val_loss: 0.031162 | Time: 1712.22 ms
[2022-03-05 17:12:52 main:574] : INFO : Epoch 1927 | loss: 0.0311483 | val_loss: 0.0311661 | Time: 1768.74 ms
[2022-03-05 17:12:54 main:574] : INFO : Epoch 1928 | loss: 0.0311464 | val_loss: 0.0311653 | Time: 1682.49 ms
[2022-03-05 17:12:56 main:574] : INFO : Epoch 1929 | loss: 0.0311452 | val_loss: 0.0311633 | Time: 1696.33 ms
[2022-03-05 17:12:57 main:574] : INFO : Epoch 1930 | loss: 0.0311437 | val_loss: 0.0311641 | Time: 1701.93 ms
[2022-03-05 17:13:29 main:574] : INFO : Epoch 1931 | loss: 0.0311442 | val_loss: 0.0311639 | Time: 31778.5 ms
[2022-03-05 17:13:31 main:574] : INFO : Epoch 1932 | loss: 0.0311449 | val_loss: 0.0311607 | Time: 1733.83 ms
[2022-03-05 17:13:33 main:574] : INFO : Epoch 1933 | loss: 0.0311467 | val_loss: 0.0311652 | Time: 1706.33 ms
[2022-03-05 17:13:34 main:574] : INFO : Epoch 1934 | loss: 0.0311476 | val_loss: 0.0311674 | Time: 1702.43 ms
[2022-03-05 17:13:36 main:574] : INFO : Epoch 1935 | loss: 0.0311486 | val_loss: 0.0311611 | Time: 1740.86 ms
[2022-03-05 17:13:38 main:574] : INFO : Epoch 1936 | loss: 0.0311442 | val_loss: 0.0311628 | Time: 1695.25 ms
[2022-03-05 17:13:40 main:574] : INFO : Epoch 1937 | loss: 0.0311444 | val_loss: 0.0311615 | Time: 1719.05 ms
[2022-03-05 17:13:41 main:574] : INFO : Epoch 1938 | loss: 0.0311429 | val_loss: 0.0311638 | Time: 1691.85 ms
[2022-03-05 17:13:43 main:574] : INFO : Epoch 1939 | loss: 0.0311428 | val_loss: 0.0311651 | Time: 1712.15 ms
[2022-03-05 17:13:45 main:574] : INFO : Epoch 1940 | loss: 0.0311436 | val_loss: 0.0311618 | Time: 1696.88 ms
[2022-03-05 17:13:46 main:574] : INFO : Epoch 1941 | loss: 0.0311418 | val_loss: 0.031163 | Time: 1707.14 ms
[2022-03-05 17:13:48 main:574] : INFO : Epoch 1942 | loss: 0.0311408 | val_loss: 0.031163 | Time: 1702.44 ms
[2022-03-05 17:18:40 main:574] : INFO : Epoch 1943 | loss: 0.0311424 | val_loss: 0.0311628 | Time: 291950 ms
[2022-03-05 17:18:42 main:574] : INFO : Epoch 1944 | loss: 0.0311421 | val_loss: 0.0311624 | Time: 1703.83 ms
[2022-03-05 17:18:44 main:574] : INFO : Epoch 1945 | loss: 0.0311403 | val_loss: 0.0311633 | Time: 1694.57 ms
[2022-03-05 17:18:45 main:574] : INFO : Epoch 1946 | loss: 0.0311406 | val_loss: 0.0311648 | Time: 1685.37 ms
[2022-03-05 17:18:47 main:574] : INFO : Epoch 1947 | loss: 0.031142 | val_loss: 0.0311624 | Time: 1699.81 ms
[2022-03-05 17:18:49 main:574] : INFO : Epoch 1948 | loss: 0.0311412 | val_loss: 0.0311667 | Time: 1712.86 ms
[2022-03-05 17:23:41 main:574] : INFO : Epoch 1949 | loss: 0.03114 | val_loss: 0.0311644 | Time: 291960 ms
[2022-03-05 17:23:43 main:574] : INFO : Epoch 1950 | loss: 0.0311401 | val_loss: 0.0311633 | Time: 1712.31 ms
[2022-03-05 17:23:44 main:574] : INFO : Epoch 1951 | loss: 0.0311398 | val_loss: 0.0311651 | Time: 1679.05 ms
[2022-03-05 17:23:46 main:574] : INFO : Epoch 1952 | loss: 0.0311408 | val_loss: 0.0311711 | Time: 1696.8 ms
[2022-03-05 17:23:48 main:574] : INFO : Epoch 1953 | loss: 0.0311414 | val_loss: 0.0311675 | Time: 1754.6 ms
[2022-03-05 17:28:40 main:574] : INFO : Epoch 1954 | loss: 0.0311398 | val_loss: 0.0311672 | Time: 291945 ms
[2022-03-05 17:28:41 main:574] : INFO : Epoch 1955 | loss: 0.0311423 | val_loss: 0.0311675 | Time: 1703.64 ms
[2022-03-05 17:28:43 main:574] : INFO : Epoch 1956 | loss: 0.0311425 | val_loss: 0.0311664 | Time: 1684.2 ms
[2022-03-05 17:28:45 main:574] : INFO : Epoch 1957 | loss: 0.0311426 | val_loss: 0.0311704 | Time: 1756.41 ms
[2022-03-05 17:28:47 main:574] : INFO : Epoch 1958 | loss: 0.0311453 | val_loss: 0.0311714 | Time: 1689.79 ms
[2022-03-05 17:28:48 main:574] : INFO : Epoch 1959 | loss: 0.0311428 | val_loss: 0.0311681 | Time: 1697.99 ms
[2022-03-05 17:29:30 main:574] : INFO : Epoch 1960 | loss: 0.0311419 | val_loss: 
0.0311654 | Time: 41768.2 ms [2022-03-05 17:29:32 main:574] : INFO : Epoch 1961 | loss: 0.031142 | val_loss: 0.0311677 | Time: 1714.32 ms [2022-03-05 17:29:34 main:574] : INFO : Epoch 1962 | loss: 0.0311393 | val_loss: 0.0311664 | Time: 1754.19 ms [2022-03-05 17:29:35 main:574] : INFO : Epoch 1963 | loss: 0.0311399 | val_loss: 0.0311643 | Time: 1697.99 ms [2022-03-05 17:29:37 main:574] : INFO : Epoch 1964 | loss: 0.0311403 | val_loss: 0.0311694 | Time: 1705.29 ms [2022-03-05 17:29:39 main:574] : INFO : Epoch 1965 | loss: 0.0311395 | val_loss: 0.0311667 | Time: 1701.1 ms [2022-03-05 17:33:41 main:574] : INFO : Epoch 1966 | loss: 0.0311392 | val_loss: 0.031166 | Time: 241869 ms [2022-03-05 17:33:43 main:574] : INFO : Epoch 1967 | loss: 0.0311392 | val_loss: 0.0311654 | Time: 1734.21 ms [2022-03-05 17:33:44 main:574] : INFO : Epoch 1968 | loss: 0.0311381 | val_loss: 0.0311691 | Time: 1699.13 ms [2022-03-05 17:33:46 main:574] : INFO : Epoch 1969 | loss: 0.0311373 | val_loss: 0.0311683 | Time: 1689.59 ms [2022-03-05 17:33:48 main:574] : INFO : Epoch 1970 | loss: 0.0311421 | val_loss: 0.0311678 | Time: 1718.64 ms [2022-03-05 17:43:30 main:574] : INFO : Epoch 1971 | loss: 0.0311542 | val_loss: 0.0311728 | Time: 582024 ms [2022-03-05 17:43:32 main:574] : INFO : Epoch 1972 | loss: 0.0311545 | val_loss: 0.0311732 | Time: 1694.33 ms [2022-03-05 17:43:33 main:574] : INFO : Epoch 1973 | loss: 0.031153 | val_loss: 0.0311733 | Time: 1715.96 ms [2022-03-05 17:43:35 main:574] : INFO : Epoch 1974 | loss: 0.0311486 | val_loss: 0.0311681 | Time: 1690.8 ms [2022-03-05 17:43:37 main:574] : INFO : Epoch 1975 | loss: 0.0311517 | val_loss: 0.0311697 | Time: 1680.74 ms [2022-03-05 17:43:39 main:574] : INFO : Epoch 1976 | loss: 0.0311512 | val_loss: 0.0311672 | Time: 1696.34 ms [2022-03-05 17:44:20 main:574] : INFO : Epoch 1977 | loss: 0.031151 | val_loss: 0.0311697 | Time: 41740.3 ms [2022-03-05 17:44:22 main:574] : INFO : Epoch 1978 | loss: 0.0311504 | val_loss: 0.0311699 | Time: 1710.84 
ms [2022-03-05 17:44:24 main:574] : INFO : Epoch 1979 | loss: 0.0311468 | val_loss: 0.0311671 | Time: 1741.62 ms [2022-03-05 17:44:26 main:574] : INFO : Epoch 1980 | loss: 0.0311454 | val_loss: 0.0311701 | Time: 1728.39 ms [2022-03-05 17:44:27 main:574] : INFO : Epoch 1981 | loss: 0.031149 | val_loss: 0.0311697 | Time: 1731.55 ms [2022-03-05 17:44:29 main:574] : INFO : Epoch 1982 | loss: 0.031148 | val_loss: 0.0311704 | Time: 1707.03 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970) [2022-03-05 17:58:41 main:435] : INFO : Set logging level to 1 [2022-03-05 17:58:41 main:441] : INFO : Running in BOINC Client mode [2022-03-05 17:58:41 main:444] : INFO : Resolving all filenames [2022-03-05 17:58:41 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-03-05 17:58:41 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-03-05 17:58:41 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-03-05 17:58:41 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-03-05 17:58:41 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-03-05 17:58:41 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-03-05 17:58:41 main:474] : INFO : Configuration: [2022-03-05 17:58:41 main:475] : INFO : Model type: GRU [2022-03-05 17:58:41 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-03-05 17:58:41 main:477] : INFO : Max Epochs: 2048 [2022-03-05 17:58:41 main:478] : INFO : Batch Size: 128 [2022-03-05 17:58:41 main:479] : INFO : Learning Rate: 0.01 [2022-03-05 17:58:41 main:480] : INFO : Patience: 10 [2022-03-05 17:58:41 main:481] : INFO : Hidden Width: 12 [2022-03-05 17:58:41 main:482] : INFO : # Recurrent Layers: 4 [2022-03-05 17:58:41 main:483] : INFO : # Backend Layers: 4 [2022-03-05 17:58:41 main:484] : INFO : # Threads: 1 [2022-03-05 17:58:41 main:486] : INFO : Preparing Dataset [2022-03-05 17:58:41 
load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-03-05 17:58:41 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-03-05 17:58:44 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-03-05 17:58:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-03-05 17:58:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-03-05 17:58:44 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-03-05 17:58:44 main:494] : INFO : Creating Model [2022-03-05 17:58:44 main:507] : INFO : Preparing config file [2022-03-05 17:58:44 main:511] : INFO : Found checkpoint, attempting to load... [2022-03-05 17:58:44 main:512] : INFO : Loading config [2022-03-05 17:58:44 main:514] : INFO : Loading state [2022-03-05 17:58:45 main:559] : INFO : Loading DataLoader into Memory [2022-03-05 17:58:45 main:562] : INFO : Starting Training [2022-03-05 17:58:47 main:574] : INFO : Epoch 1972 | loss: 0.0311829 | val_loss: 0.0311734 | Time: 2134.79 ms [2022-03-05 17:58:49 main:574] : INFO : Epoch 1973 | loss: 0.0311534 | val_loss: 0.0311654 | Time: 1698.56 ms [2022-03-05 18:04:41 main:574] : INFO : Epoch 1974 | loss: 0.0311469 | val_loss: 0.0311708 | Time: 352161 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970) [2022-03-06 08:49:44 main:435] : INFO : Set logging level to 1 [2022-03-06 08:49:44 main:441] : INFO : Running in BOINC Client mode [2022-03-06 08:49:44 main:444] : INFO : Resolving all filenames [2022-03-06 08:49:44 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-03-06 08:49:44 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-03-06 08:49:44 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-03-06 08:49:44 main:452] : INFO : 
Resolved: model-input.pt => model-input.pt (exists = 1) [2022-03-06 08:49:44 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-03-06 08:49:44 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-03-06 08:49:44 main:474] : INFO : Configuration: [2022-03-06 08:49:44 main:475] : INFO : Model type: GRU [2022-03-06 08:49:44 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-03-06 08:49:44 main:477] : INFO : Max Epochs: 2048 [2022-03-06 08:49:44 main:478] : INFO : Batch Size: 128 [2022-03-06 08:49:44 main:479] : INFO : Learning Rate: 0.01 [2022-03-06 08:49:44 main:480] : INFO : Patience: 10 [2022-03-06 08:49:44 main:481] : INFO : Hidden Width: 12 [2022-03-06 08:49:44 main:482] : INFO : # Recurrent Layers: 4 [2022-03-06 08:49:44 main:483] : INFO : # Backend Layers: 4 [2022-03-06 08:49:44 main:484] : INFO : # Threads: 1 [2022-03-06 08:49:44 main:486] : INFO : Preparing Dataset [2022-03-06 08:49:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-03-06 08:49:44 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-03-06 08:49:47 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-03-06 08:49:47 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-03-06 08:49:47 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-03-06 08:49:47 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-03-06 08:49:47 main:494] : INFO : Creating Model [2022-03-06 08:49:47 main:507] : INFO : Preparing config file [2022-03-06 08:49:47 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-03-06 08:49:47 main:512] : INFO : Loading config [2022-03-06 08:49:47 main:514] : INFO : Loading state [2022-03-06 08:49:48 main:559] : INFO : Loading DataLoader into Memory [2022-03-06 08:49:49 main:562] : INFO : Starting Training [2022-03-06 08:50:41 main:574] : INFO : Epoch 1975 | loss: 0.0311822 | val_loss: 0.0311709 | Time: 52239.3 ms [2022-03-06 08:50:42 main:574] : INFO : Epoch 1976 | loss: 0.0311492 | val_loss: 0.0311664 | Time: 1693.77 ms [2022-03-06 08:50:44 main:574] : INFO : Epoch 1977 | loss: 0.0311442 | val_loss: 0.0311685 | Time: 1695.33 ms [2022-03-06 08:50:46 main:574] : INFO : Epoch 1978 | loss: 0.0311442 | val_loss: 0.0311708 | Time: 1681.17 ms [2022-03-06 08:50:48 main:574] : INFO : Epoch 1979 | loss: 0.0311425 | val_loss: 0.0311723 | Time: 1714.87 ms [2022-03-06 08:50:49 main:574] : INFO : Epoch 1980 | loss: 0.0311426 | val_loss: 0.0311654 | Time: 1685.34 ms [2022-03-06 08:52:21 main:574] : INFO : Epoch 1981 | loss: 0.0311414 | val_loss: 0.0311681 | Time: 91780.5 ms [2022-03-06 08:52:23 main:574] : INFO : Epoch 1982 | loss: 0.0311422 | val_loss: 0.0311688 | Time: 1688.43 ms [2022-03-06 08:52:25 main:574] : INFO : Epoch 1983 | loss: 0.0311441 | val_loss: 0.0311706 | Time: 1757.7 ms [2022-03-06 08:52:26 main:574] : INFO : Epoch 1984 | loss: 0.0311408 | val_loss: 0.0311652 | Time: 1700.51 ms [2022-03-06 08:52:28 main:574] : INFO : Epoch 1985 | loss: 0.0311398 | val_loss: 0.0311686 | Time: 1693.23 ms [2022-03-06 08:52:30 main:574] : INFO : Epoch 1986 | loss: 0.0311387 | val_loss: 0.0311685 | Time: 1710.59 ms [2022-03-06 08:52:42 main:574] : INFO : Epoch 1987 | loss: 0.0311401 | val_loss: 0.0311673 | Time: 11708.3 ms [2022-03-06 08:52:43 main:574] : INFO : Epoch 1988 | loss: 0.0311385 | val_loss: 0.0311764 | Time: 1706.44 ms [2022-03-06 08:52:45 main:574] : INFO : Epoch 1989 | loss: 0.0311399 | val_loss: 0.0311646 | Time: 1688.6 ms [2022-03-06 08:52:47 main:574] : INFO : Epoch 1990 | loss: 0.0311384 | val_loss: 0.0311711 | Time: 1673.61 ms 
[2022-03-06 08:52:48 main:574] : INFO : Epoch 1991 | loss: 0.0311384 | val_loss: 0.0311724 | Time: 1703.1 ms [2022-03-06 08:52:50 main:574] : INFO : Epoch 1992 | loss: 0.0311404 | val_loss: 0.031169 | Time: 1711.68 ms [2022-03-06 08:53:42 main:574] : INFO : Epoch 1993 | loss: 0.0311384 | val_loss: 0.03118 | Time: 51914.2 ms [2022-03-06 08:53:44 main:574] : INFO : Epoch 1994 | loss: 0.0311434 | val_loss: 0.0311666 | Time: 1724.42 ms [2022-03-06 08:53:46 main:574] : INFO : Epoch 1995 | loss: 0.0311394 | val_loss: 0.0311703 | Time: 1741.85 ms [2022-03-06 08:53:47 main:574] : INFO : Epoch 1996 | loss: 0.0311373 | val_loss: 0.0311685 | Time: 1697.03 ms [2022-03-06 08:53:49 main:574] : INFO : Epoch 1997 | loss: 0.0311382 | val_loss: 0.0311734 | Time: 1682.13 ms [2022-03-06 08:59:01 main:574] : INFO : Epoch 1998 | loss: 0.0311371 | val_loss: 0.0311713 | Time: 311877 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GTX 970) [2022-03-06 15:17:15 main:435] : INFO : Set logging level to 1 [2022-03-06 15:17:15 main:441] : INFO : Running in BOINC Client mode [2022-03-06 15:17:15 main:444] : INFO : Resolving all filenames [2022-03-06 15:17:15 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-03-06 15:17:15 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-03-06 15:17:15 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-03-06 15:17:15 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 1) [2022-03-06 15:17:15 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-03-06 15:17:15 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-03-06 15:17:15 main:474] : INFO : Configuration: [2022-03-06 15:17:15 main:475] : INFO : Model type: GRU [2022-03-06 15:17:15 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-03-06 15:17:15 main:477] : INFO : Max Epochs: 2048 [2022-03-06 15:17:15 main:478] : INFO : Batch 
Size: 128 [2022-03-06 15:17:15 main:479] : INFO : Learning Rate: 0.01 [2022-03-06 15:17:15 main:480] : INFO : Patience: 10 [2022-03-06 15:17:15 main:481] : INFO : Hidden Width: 12 [2022-03-06 15:17:15 main:482] : INFO : # Recurrent Layers: 4 [2022-03-06 15:17:15 main:483] : INFO : # Backend Layers: 4 [2022-03-06 15:17:15 main:484] : INFO : # Threads: 1 [2022-03-06 15:17:15 main:486] : INFO : Preparing Dataset [2022-03-06 15:17:15 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-03-06 15:17:15 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-03-06 15:17:18 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-03-06 15:17:18 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-03-06 15:17:18 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-03-06 15:17:18 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-03-06 15:17:18 main:494] : INFO : Creating Model [2022-03-06 15:17:18 main:507] : INFO : Preparing config file [2022-03-06 15:17:18 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-03-06 15:17:18 main:512] : INFO : Loading config [2022-03-06 15:17:18 main:514] : INFO : Loading state [2022-03-06 15:17:20 main:559] : INFO : Loading DataLoader into Memory [2022-03-06 15:17:20 main:562] : INFO : Starting Training [2022-03-06 15:17:22 main:574] : INFO : Epoch 1999 | loss: 0.0311889 | val_loss: 0.0311767 | Time: 2290.28 ms [2022-03-06 15:17:24 main:574] : INFO : Epoch 2000 | loss: 0.0311433 | val_loss: 0.0311726 | Time: 1708.86 ms [2022-03-06 15:17:36 main:574] : INFO : Epoch 2001 | loss: 0.0311379 | val_loss: 0.0311733 | Time: 11822.8 ms [2022-03-06 15:17:37 main:574] : INFO : Epoch 2002 | loss: 0.0311381 | val_loss: 0.0311701 | Time: 1671.01 ms [2022-03-06 15:17:39 main:574] : INFO : Epoch 2003 | loss: 0.0311364 | val_loss: 0.0311735 | Time: 1752.91 ms [2022-03-06 15:17:41 main:574] : INFO : Epoch 2004 | loss: 0.0311389 | val_loss: 0.0311744 | Time: 1694.54 ms [2022-03-06 15:17:43 main:574] : INFO : Epoch 2005 | loss: 0.0311381 | val_loss: 0.0311704 | Time: 1710.58 ms [2022-03-06 15:17:44 main:574] : INFO : Epoch 2006 | loss: 0.031138 | val_loss: 0.0311672 | Time: 1700.56 ms [2022-03-06 15:18:06 main:574] : INFO : Epoch 2007 | loss: 0.0311394 | val_loss: 0.0311694 | Time: 22012.7 ms [2022-03-06 15:18:08 main:574] : INFO : Epoch 2008 | loss: 0.0311392 | val_loss: 0.0311651 | Time: 2017.22 ms [2022-03-06 15:18:10 main:574] : INFO : Epoch 2009 | loss: 0.0311382 | val_loss: 0.0311723 | Time: 1852.91 ms [2022-03-06 15:18:13 main:574] : INFO : Epoch 2010 | loss: 0.0311401 | val_loss: 0.0311708 | Time: 2362.27 ms [2022-03-06 15:19:15 main:574] : INFO : Epoch 2011 | loss: 0.0311374 | val_loss: 0.031166 | Time: 62453.7 ms [2022-03-06 15:19:17 main:574] : INFO : Epoch 2012 | loss: 0.0311363 | val_loss: 0.031167 | Time: 1702.33 ms [2022-03-06 15:19:19 main:574] : INFO : Epoch 2013 | loss: 0.0311385 | val_loss: 0.0311657 | Time: 1689.4 ms [2022-03-06 15:19:20 main:574] : INFO : Epoch 2014 | loss: 0.0311419 | val_loss: 0.031171 | Time: 1752.71 ms 
[2022-03-06 15:19:22 main:574] : INFO : Epoch 2015 | loss: 0.0311446 | val_loss: 0.0311701 | Time: 1689.74 ms [2022-03-06 15:19:24 main:574] : INFO : Epoch 2016 | loss: 0.0311425 | val_loss: 0.0311662 | Time: 1689.51 ms [2022-03-06 15:19:26 main:574] : INFO : Epoch 2017 | loss: 0.0311398 | val_loss: 0.031169 | Time: 1692.46 ms [2022-03-06 15:19:27 main:574] : INFO : Epoch 2018 | loss: 0.0311433 | val_loss: 0.0311707 | Time: 1682.1 ms [2022-03-06 15:19:29 main:574] : INFO : Epoch 2019 | loss: 0.0311422 | val_loss: 0.0311684 | Time: 1810.85 ms [2022-03-06 15:19:31 main:574] : INFO : Epoch 2020 | loss: 0.0311388 | val_loss: 0.0311676 | Time: 1722.77 ms [2022-03-06 15:19:33 main:574] : INFO : Epoch 2021 | loss: 0.0311402 | val_loss: 0.031167 | Time: 1707.27 ms [2022-03-06 15:24:25 main:574] : INFO : Epoch 2022 | loss: 0.0311407 | val_loss: 0.0311669 | Time: 291969 ms [2022-03-06 15:24:27 main:574] : INFO : Epoch 2023 | loss: 0.0311383 | val_loss: 0.0311736 | Time: 1733.49 ms [2022-03-06 15:24:28 main:574] : INFO : Epoch 2024 | loss: 0.0311369 | val_loss: 0.0311733 | Time: 1752.05 ms [2022-03-06 15:24:30 main:574] : INFO : Epoch 2025 | loss: 0.0311388 | val_loss: 0.0311775 | Time: 1731.71 ms [2022-03-06 15:24:32 main:574] : INFO : Epoch 2026 | loss: 0.0311388 | val_loss: 0.0311687 | Time: 1683.43 ms [2022-03-06 15:24:34 main:574] : INFO : Epoch 2027 | loss: 0.031137 | val_loss: 0.031169 | Time: 1700.25 ms [2022-03-06 15:28:55 main:574] : INFO : Epoch 2028 | loss: 0.0311353 | val_loss: 0.0311715 | Time: 261865 ms [2022-03-06 15:28:57 main:574] : INFO : Epoch 2029 | loss: 0.0311349 | val_loss: 0.0311731 | Time: 1707.05 ms [2022-03-06 15:28:59 main:574] : INFO : Epoch 2030 | loss: 0.0311349 | val_loss: 0.0311719 | Time: 1702.68 ms [2022-03-06 15:29:01 main:574] : INFO : Epoch 2031 | loss: 0.0311355 | val_loss: 0.0311654 | Time: 1677.81 ms [2022-03-06 15:29:02 main:574] : INFO : Epoch 2032 | loss: 0.0311403 | val_loss: 0.0311683 | Time: 1694.1 ms [2022-03-06 15:29:04 
main:574] : INFO : Epoch 2033 | loss: 0.0311405 | val_loss: 0.0311688 | Time: 1684.7 ms [2022-03-06 15:29:26 main:574] : INFO : Epoch 2034 | loss: 0.0311371 | val_loss: 0.0311675 | Time: 21704.6 ms [2022-03-06 15:29:28 main:574] : INFO : Epoch 2035 | loss: 0.0311386 | val_loss: 0.0311654 | Time: 1740.57 ms [2022-03-06 15:29:29 main:574] : INFO : Epoch 2036 | loss: 0.031138 | val_loss: 0.0311641 | Time: 1689.31 ms [2022-03-06 15:29:31 main:574] : INFO : Epoch 2037 | loss: 0.031138 | val_loss: 0.0311657 | Time: 1699.08 ms [2022-03-06 15:29:33 main:574] : INFO : Epoch 2038 | loss: 0.0311376 | val_loss: 0.0311661 | Time: 1713.23 ms [2022-03-06 15:29:35 main:574] : INFO : Epoch 2039 | loss: 0.0311371 | val_loss: 0.0311678 | Time: 1719.61 ms [2022-03-06 15:32:06 main:574] : INFO : Epoch 2040 | loss: 0.0311375 | val_loss: 0.0311654 | Time: 151942 ms [2022-03-06 15:32:08 main:574] : INFO : Epoch 2041 | loss: 0.0311383 | val_loss: 0.0311634 | Time: 1702.93 ms [2022-03-06 15:32:10 main:574] : INFO : Epoch 2042 | loss: 0.0311447 | val_loss: 0.0311677 | Time: 1716.44 ms [2022-03-06 15:32:12 main:574] : INFO : Epoch 2043 | loss: 0.0311422 | val_loss: 0.0311681 | Time: 1692.97 ms [2022-03-06 15:32:13 main:574] : INFO : Epoch 2044 | loss: 0.0311402 | val_loss: 0.0311737 | Time: 1738.85 ms [2022-03-06 15:32:25 main:574] : INFO : Epoch 2045 | loss: 0.0311432 | val_loss: 0.0311671 | Time: 11772.3 ms [2022-03-06 15:32:27 main:574] : INFO : Epoch 2046 | loss: 0.0311394 | val_loss: 0.0311652 | Time: 1739.11 ms [2022-03-06 15:32:29 main:574] : INFO : Epoch 2047 | loss: 0.03114 | val_loss: 0.0311659 | Time: 1702.83 ms [2022-03-06 15:32:31 main:574] : INFO : Epoch 2048 | loss: 0.0311382 | val_loss: 0.031167 | Time: 1702.54 ms [2022-03-06 15:32:31 main:597] : INFO : Saving trained model to model-final.pt, val_loss 0.031167 [2022-03-06 15:32:31 main:603] : INFO : Saving end state to config to file [2022-03-06 15:32:31 main:608] : INFO : Success, exiting.. 
15:32:31 (15256): called boinc_finish(0)
</stderr_txt>
]]>
©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)