| Field | Value |
| --- | --- |
| Name | ParityModified-1638631199-31532-0_0 |
| Workunit | 6833904 |
| Created | 4 Dec 2021, 15:20:02 UTC |
| Sent | 15 Dec 2021, 13:10:06 UTC |
| Report deadline | 23 Dec 2021, 13:10:06 UTC |
| Received | 1 Feb 2022, 18:44:27 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 16717 |
| Run time | 3 hours 36 min 17 sec |
| CPU time | 3 hours 15 min 19 sec |
| Validate state | Task was reported too late to validate |
| Credit | 0.00 |
| Device peak FLOPS | 2,142.45 GFLOPS |
| Application version | Machine Learning Dataset Generator (GPU) v9.75 (cuda10200) windows_x86_64 |
| Peak working set size | 1.64 GB |
| Peak swap size | 3.60 GB |
| Peak disk usage | 1.54 GB |
<core_client_version>7.16.20</core_client_version> <![CDATA[ <stderr_txt> NFO : Epoch 1750 | loss: 0.0310907 | val_loss: 0.0311726 | Time: 4647.07 ms [2021-12-16 14:25:20 main:574] : INFO : Epoch 1751 | loss: 0.0310907 | val_loss: 0.0311819 | Time: 4518.06 ms [2021-12-16 14:25:25 main:574] : INFO : Epoch 1752 | loss: 0.0310928 | val_loss: 0.0311783 | Time: 4555.7 ms [2021-12-16 14:25:29 main:574] : INFO : Epoch 1753 | loss: 0.0310906 | val_loss: 0.0311793 | Time: 4668.85 ms [2021-12-16 14:25:34 main:574] : INFO : Epoch 1754 | loss: 0.0310873 | val_loss: 0.0311892 | Time: 4596.12 ms [2021-12-16 14:25:39 main:574] : INFO : Epoch 1755 | loss: 0.0310922 | val_loss: 0.0311759 | Time: 4621.38 ms [2021-12-16 14:25:43 main:574] : INFO : Epoch 1756 | loss: 0.0310888 | val_loss: 0.0311835 | Time: 4577.17 ms [2021-12-16 14:25:48 main:574] : INFO : Epoch 1757 | loss: 0.0310873 | val_loss: 0.031178 | Time: 4579.61 ms [2021-12-16 14:25:52 main:574] : INFO : Epoch 1758 | loss: 0.0311122 | val_loss: 0.0311624 | Time: 4279.13 ms [2021-12-16 14:25:57 main:574] : INFO : Epoch 1759 | loss: 0.0311213 | val_loss: 0.0311761 | Time: 4527.58 ms [2021-12-16 14:26:01 main:574] : INFO : Epoch 1760 | loss: 0.0311151 | val_loss: 0.0311642 | Time: 4588.51 ms [2021-12-16 14:26:06 main:574] : INFO : Epoch 1761 | loss: 0.0311137 | val_loss: 0.0311748 | Time: 4561.78 ms [2021-12-16 14:26:10 main:574] : INFO : Epoch 1762 | loss: 0.0311096 | val_loss: 0.0311801 | Time: 4649.93 ms [2021-12-16 14:26:15 main:574] : INFO : Epoch 1763 | loss: 0.0311104 | val_loss: 0.0311697 | Time: 4467.76 ms [2021-12-16 14:26:20 main:574] : INFO : Epoch 1764 | loss: 0.031109 | val_loss: 0.0311798 | Time: 4754.37 ms [2021-12-16 14:26:24 main:574] : INFO : Epoch 1765 | loss: 0.0311047 | val_loss: 0.0311827 | Time: 4688.39 ms [2021-12-16 14:26:29 main:574] : INFO : Epoch 1766 | loss: 0.0311001 | val_loss: 0.03118 | Time: 4549.81 ms [2021-12-16 14:26:34 main:574] : INFO : Epoch 1767 | loss: 0.0311003 | val_loss: 0.0311762 | Time: 4793.41 ms [2021-12-16 14:26:38 main:574] : INFO : Epoch 1768 | loss: 0.0311045 | val_loss: 0.0311756 | Time: 4427.43 ms [2021-12-16 14:26:43 main:574] : INFO : Epoch 1769 | loss: 0.0310999 | val_loss: 0.0311737 | Time: 4714.52 ms [2021-12-16 14:26:48 main:574] : INFO : Epoch 1770 | loss: 0.0311 | val_loss: 0.0311829 | Time: 4716.77 ms [2021-12-16 14:26:52 main:574] : INFO : Epoch 1771 | loss: 0.0311037 | val_loss: 0.0311887 | Time: 4599.61 ms [2021-12-16 14:26:57 main:574] : INFO : Epoch 1772 | loss: 0.0311067 | val_loss: 0.0311734 | Time: 4706.95 ms [2021-12-16 14:27:01 main:574] : INFO : Epoch 1773 | loss: 0.0311016 | val_loss: 0.0311854 | Time: 4563.76 ms [2021-12-16 14:27:06 main:574] : INFO : Epoch 1774 | loss: 0.0310976 | val_loss: 0.0311789 | Time: 4759.05 ms [2021-12-16 14:27:11 main:574] : INFO : Epoch 1775 | loss: 0.0311047 | val_loss: 0.0311702 | Time: 4627.12 ms [2021-12-16 14:27:16 main:574] : INFO : Epoch 1776 | loss: 0.0311076 | val_loss: 0.031174 | Time: 4577.61 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2021-12-16 17:42:59 main:435] : INFO : Set logging level to 1 [2021-12-16 17:42:59 main:441] : INFO : Running in BOINC Client mode [2021-12-16 17:42:59 main:444] : INFO : Resolving all filenames [2021-12-16 17:42:59 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2021-12-16 17:42:59 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2021-12-16 17:42:59 main:452] : INFO : Resolved: model-final.pt => 
model-final.pt (exists = 0) [2021-12-16 17:42:59 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2021-12-16 17:42:59 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2021-12-16 17:42:59 main:472] : INFO : Dataset filename: dataset.hdf5 [2021-12-16 17:42:59 main:474] : INFO : Configuration: [2021-12-16 17:42:59 main:475] : INFO : Model type: GRU [2021-12-16 17:42:59 main:476] : INFO : Validation Loss Threshold: 0.0001 [2021-12-16 17:42:59 main:477] : INFO : Max Epochs: 2048 [2021-12-16 17:42:59 main:478] : INFO : Batch Size: 128 [2021-12-16 17:42:59 main:479] : INFO : Learning Rate: 0.01 [2021-12-16 17:42:59 main:480] : INFO : Patience: 10 [2021-12-16 17:42:59 main:481] : INFO : Hidden Width: 12 [2021-12-16 17:42:59 main:482] : INFO : # Recurrent Layers: 4 [2021-12-16 17:42:59 main:483] : INFO : # Backend Layers: 4 [2021-12-16 17:42:59 main:484] : INFO : # Threads: 1 [2021-12-16 17:42:59 main:486] : INFO : Preparing Dataset [2021-12-16 17:42:59 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2021-12-16 17:42:59 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2021-12-16 17:47:04 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2021-12-16 17:47:04 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2021-12-16 17:47:05 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2021-12-16 17:47:05 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2021-12-16 17:47:05 main:494] : INFO : Creating Model [2021-12-16 17:47:05 main:507] : INFO : Preparing config file [2021-12-16 17:47:05 main:511] : INFO : Found checkpoint, attempting to load... 
[2021-12-16 17:47:05 main:512] : INFO : Loading config [2021-12-16 17:47:05 main:514] : INFO : Loading state [2021-12-16 17:47:07 main:559] : INFO : Loading DataLoader into Memory [2021-12-16 17:47:07 main:562] : INFO : Starting Training [2021-12-16 17:47:12 main:574] : INFO : Epoch 1775 | loss: 0.0311327 | val_loss: 0.0311859 | Time: 5739.5 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2022-01-31 20:19:09 main:435] : INFO : Set logging level to 1 [2022-01-31 20:19:09 main:441] : INFO : Running in BOINC Client mode [2022-01-31 20:19:09 main:444] : INFO : Resolving all filenames [2022-01-31 20:19:09 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-01-31 20:19:09 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-01-31 20:19:09 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-01-31 20:19:09 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-01-31 20:19:09 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-01-31 20:19:09 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-01-31 20:19:09 main:474] : INFO : Configuration: [2022-01-31 20:19:09 main:475] : INFO : Model type: GRU [2022-01-31 20:19:09 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-01-31 20:19:09 main:477] : INFO : Max Epochs: 2048 [2022-01-31 20:19:09 main:478] : INFO : Batch Size: 128 [2022-01-31 20:19:09 main:479] : INFO : Learning Rate: 0.01 [2022-01-31 20:19:09 main:480] : INFO : Patience: 10 [2022-01-31 20:19:09 main:481] : INFO : Hidden Width: 12 [2022-01-31 20:19:09 main:482] : INFO : # Recurrent Layers: 4 [2022-01-31 20:19:09 main:483] : INFO : # Backend Layers: 4 [2022-01-31 20:19:09 main:484] : INFO : # Threads: 1 [2022-01-31 20:19:09 main:486] : INFO : Preparing Dataset [2022-01-31 20:19:09 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-01-31 20:19:12 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-01-31 20:19:16 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-01-31 20:19:16 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-01-31 20:19:16 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-01-31 20:19:17 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-01-31 20:19:17 main:494] : INFO : Creating Model [2022-01-31 20:19:17 main:507] : INFO : Preparing config file [2022-01-31 20:19:17 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-01-31 20:19:17 main:512] : INFO : Loading config [2022-01-31 20:19:17 main:514] : INFO : Loading state [2022-01-31 20:23:09 main:559] : INFO : Loading DataLoader into Memory [2022-01-31 20:23:09 main:562] : INFO : Starting Training [2022-01-31 20:23:15 main:574] : INFO : Epoch 1776 | loss: 0.0311488 | val_loss: 0.0311869 | Time: 6204.87 ms [2022-01-31 20:23:44 main:574] : INFO : Epoch 1777 | loss: 0.0311072 | val_loss: 0.0311673 | Time: 29002.4 ms [2022-01-31 20:27:34 main:574] : INFO : Epoch 1778 | loss: 0.0311046 | val_loss: 0.031173 | Time: 230298 ms [2022-01-31 20:32:53 main:574] : INFO : Epoch 1779 | loss: 0.0310989 | val_loss: 0.0311803 | Time: 318798 ms [2022-01-31 20:33:00 main:574] : INFO : Epoch 1780 | loss: 0.0310958 | val_loss: 0.0311799 | Time: 6216.39 ms [2022-01-31 20:33:16 main:574] : INFO : Epoch 1781 | loss: 0.0311002 | val_loss: 0.0311694 | Time: 16323.1 ms [2022-01-31 20:33:22 main:574] : INFO : Epoch 1782 | loss: 0.0311019 | val_loss: 0.0311834 | Time: 5882.06 ms [2022-01-31 20:33:28 main:574] : INFO : Epoch 1783 | loss: 0.031098 | val_loss: 0.0311745 | Time: 5733.27 ms [2022-01-31 20:33:40 main:574] : INFO : Epoch 1784 | loss: 0.0310951 | val_loss: 0.03118 | Time: 12441.5 ms [2022-01-31 20:35:58 main:574] : INFO : Epoch 1785 | loss: 0.031095 | val_loss: 0.0311757 | Time: 137730 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2022-01-31 20:56:19 main:435] : INFO : Set logging level to 1 [2022-01-31 20:56:19 main:441] : INFO : Running in BOINC Client mode [2022-01-31 20:56:19 main:444] : INFO : Resolving all filenames [2022-01-31 20:56:19 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-01-31 20:56:19 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-01-31 20:56:19 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-01-31 20:56:19 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-01-31 20:56:19 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-01-31 20:56:19 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-01-31 20:56:19 main:474] : INFO : Configuration: [2022-01-31 20:56:19 main:475] : INFO : Model type: GRU [2022-01-31 20:56:19 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-01-31 20:56:19 main:477] : INFO : Max Epochs: 2048 [2022-01-31 20:56:19 main:478] : INFO : Batch Size: 128 [2022-01-31 20:56:19 main:479] : INFO : Learning Rate: 0.01 [2022-01-31 20:56:19 main:480] : INFO : Patience: 10 [2022-01-31 20:56:19 main:481] : INFO : Hidden Width: 12 [2022-01-31 20:56:19 main:482] : INFO : # Recurrent Layers: 4 [2022-01-31 20:56:19 main:483] : INFO : # Backend Layers: 4 [2022-01-31 20:56:19 main:484] : INFO : # Threads: 1 [2022-01-31 20:56:19 main:486] : INFO : Preparing Dataset [2022-01-31 20:56:19 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-01-31 20:56:20 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-01-31 20:56:23 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-01-31 20:56:23 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-01-31 20:56:23 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-01-31 20:56:23 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. 
[2022-01-31 20:56:23 main:494] : INFO : Creating Model [2022-01-31 20:56:24 main:507] : INFO : Preparing config file [2022-01-31 20:56:24 main:511] : INFO : Found checkpoint, attempting to load... [2022-01-31 20:56:24 main:512] : INFO : Loading config [2022-01-31 20:56:24 main:514] : INFO : Loading state [2022-01-31 20:56:25 main:559] : INFO : Loading DataLoader into Memory [2022-01-31 20:56:25 main:562] : INFO : Starting Training [2022-01-31 20:57:01 main:574] : INFO : Epoch 1780 | loss: 0.0311391 | val_loss: 0.0311724 | Time: 35583.7 ms [2022-01-31 20:57:27 main:574] : INFO : Epoch 1781 | loss: 0.0311085 | val_loss: 0.0311703 | Time: 26253 ms [2022-01-31 21:07:00 main:574] : INFO : Epoch 1782 | loss: 0.0311066 | val_loss: 0.0311654 | Time: 572201 ms [2022-01-31 21:07:06 main:574] : INFO : Epoch 1783 | loss: 0.0311031 | val_loss: 0.0311754 | Time: 6673.94 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2022-01-31 21:19:31 main:435] : INFO : Set logging level to 1 [2022-01-31 21:19:31 main:441] : INFO : Running in BOINC Client mode [2022-01-31 21:19:31 main:444] : INFO : Resolving all filenames [2022-01-31 21:19:31 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-01-31 21:19:31 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-01-31 21:19:31 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-01-31 21:19:31 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-01-31 21:19:31 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-01-31 21:19:31 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-01-31 21:19:31 main:474] : INFO : Configuration: [2022-01-31 21:19:31 main:475] : INFO : Model type: GRU [2022-01-31 21:19:31 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-01-31 21:19:31 main:477] : INFO : Max Epochs: 2048 [2022-01-31 21:19:31 main:478] : INFO : Batch Size: 128 [2022-01-31 21:19:31 main:479] : INFO : Learning Rate: 0.01 [2022-01-31 21:19:31 main:480] : INFO : Patience: 10 [2022-01-31 21:19:31 main:481] : INFO : Hidden Width: 12 [2022-01-31 21:19:31 main:482] : INFO : # Recurrent Layers: 4 [2022-01-31 21:19:31 main:483] : INFO : # Backend Layers: 4 [2022-01-31 21:19:31 main:484] : INFO : # Threads: 1 [2022-01-31 21:19:31 main:486] : INFO : Preparing Dataset [2022-01-31 21:19:31 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-01-31 21:19:31 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-01-31 21:19:35 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-01-31 21:19:35 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-01-31 21:19:35 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-01-31 21:19:35 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-01-31 21:19:35 main:494] : INFO : Creating Model [2022-01-31 21:19:35 main:507] : INFO : Preparing config file [2022-01-31 21:19:35 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-01-31 21:19:35 main:512] : INFO : Loading config [2022-01-31 21:19:35 main:514] : INFO : Loading state [2022-01-31 21:19:37 main:559] : INFO : Loading DataLoader into Memory [2022-01-31 21:19:37 main:562] : INFO : Starting Training [2022-01-31 21:24:36 main:574] : INFO : Epoch 1783 | loss: 0.0311527 | val_loss: 0.0311918 | Time: 299152 ms [2022-01-31 21:25:15 main:574] : INFO : Epoch 1784 | loss: 0.0311071 | val_loss: 0.0311711 | Time: 38335.5 ms [2022-01-31 21:26:54 main:574] : INFO : Epoch 1785 | loss: 0.0311023 | val_loss: 0.031166 | Time: 99053.3 ms [2022-01-31 21:27:02 main:574] : INFO : Epoch 1786 | loss: 0.0310992 | val_loss: 0.0311652 | Time: 8339.64 ms [2022-01-31 21:27:20 main:574] : INFO : Epoch 1787 | loss: 0.0311006 | val_loss: 0.0311706 | Time: 18267.8 ms [2022-01-31 21:27:39 main:574] : INFO : Epoch 1788 | loss: 0.0310982 | val_loss: 0.0311752 | Time: 18206.9 ms [2022-01-31 21:29:38 main:574] : INFO : Epoch 1789 | loss: 0.0310987 | val_loss: 0.0311673 | Time: 119176 ms [2022-01-31 21:30:06 main:574] : INFO : Epoch 1790 | loss: 0.0310952 | val_loss: 0.0311776 | Time: 28228.9 ms [2022-01-31 21:30:14 main:574] : INFO : Epoch 1791 | loss: 0.0310942 | val_loss: 0.0311788 | Time: 8165.95 ms [2022-01-31 21:32:04 main:574] : INFO : Epoch 1792 | loss: 0.0310938 | val_loss: 0.0311744 | Time: 109309 ms [2022-01-31 21:32:42 main:574] : INFO : Epoch 1793 | loss: 0.0310961 | val_loss: 0.0311742 | Time: 38261.9 ms [2022-01-31 21:33:00 main:574] : INFO : Epoch 1794 | loss: 0.0311002 | val_loss: 0.0311675 | Time: 18468.7 ms [2022-01-31 21:33:29 main:574] : INFO : Epoch 1795 | loss: 0.0311049 | val_loss: 0.0311832 | Time: 28439.1 ms [2022-01-31 21:33:37 main:574] : INFO : Epoch 1796 | loss: 0.031097 | val_loss: 0.0311813 | Time: 8129.1 ms [2022-01-31 21:33:56 main:574] : INFO : Epoch 1797 | loss: 0.0310971 | val_loss: 0.0311785 | Time: 18268.6 ms [2022-01-31 21:34:14 main:574] : INFO : Epoch 1798 | loss: 0.0310954 | val_loss: 0.0311753 | Time: 18131.9 ms [2022-01-31 21:34:32 main:574] : INFO : Epoch 1799 | loss: 0.031092 | val_loss: 0.0311826 | Time: 18462.3 ms [2022-01-31 21:34:51 main:574] : INFO : Epoch 1800 | loss: 0.0310898 | val_loss: 0.0311726 | Time: 18380.4 ms [2022-01-31 21:35:19 main:574] : INFO : Epoch 1801 | loss: 0.031091 | val_loss: 0.031182 | Time: 28360.2 ms [2022-01-31 21:35:27 main:574] : INFO : Epoch 1802 | loss: 0.0310882 | val_loss: 0.0311808 | Time: 8413.25 ms [2022-01-31 21:36:16 main:574] : INFO : Epoch 1803 | loss: 0.0310853 | val_loss: 0.0311801 | Time: 48745.9 ms [2022-01-31 21:36:45 main:574] : INFO : Epoch 1804 | loss: 0.0310872 | val_loss: 0.031185 | Time: 28535.7 ms [2022-01-31 21:37:13 main:574] : INFO : Epoch 1805 | loss: 0.0310911 | val_loss: 0.0311764 | Time: 27984.3 ms [2022-01-31 21:38:02 main:574] : INFO : Epoch 1806 | loss: 0.0310918 | val_loss: 0.0311825 | Time: 48819.6 ms [2022-01-31 21:38:10 main:574] : INFO : Epoch 1807 | loss: 0.0310919 | val_loss: 0.0311777 | Time: 8303.14 ms [2022-01-31 21:38:28 main:574] : INFO : Epoch 1808 | loss: 0.031091 | val_loss: 0.0311848 | Time: 17977.9 ms [2022-01-31 21:39:06 main:574] : INFO : Epoch 1809 | loss: 0.0310892 | val_loss: 0.031186 | Time: 38293 ms [2022-01-31 21:39:24 main:574] : INFO : Epoch 1810 | loss: 0.0310916 | val_loss: 0.0311729 | Time: 17538 ms [2022-01-31 21:39:42 main:574] : INFO : Epoch 1811 | loss: 0.0310971 | val_loss: 0.0311821 | Time: 18140 ms [2022-01-31 21:39:50 main:574] : INFO : Epoch 1812 | loss: 0.0310925 | val_loss: 0.0311785 | Time: 8117.15 ms [2022-01-31 21:40:19 main:574] : 
INFO : Epoch 1813 | loss: 0.0310943 | val_loss: 0.0311808 | Time: 28602.4 ms [2022-01-31 21:40:37 main:574] : INFO : Epoch 1814 | loss: 0.0310932 | val_loss: 0.0311804 | Time: 18147.9 ms [2022-01-31 21:41:36 main:574] : INFO : Epoch 1815 | loss: 0.0310927 | val_loss: 0.0311819 | Time: 58774.6 ms [2022-01-31 21:41:54 main:574] : INFO : Epoch 1816 | loss: 0.03109 | val_loss: 0.03118 | Time: 18418.7 ms [2022-01-31 21:42:03 main:574] : INFO : Epoch 1817 | loss: 0.0310892 | val_loss: 0.0311845 | Time: 8351.25 ms [2022-01-31 21:42:51 main:574] : INFO : Epoch 1818 | loss: 0.031087 | val_loss: 0.0311842 | Time: 48774.1 ms [2022-01-31 21:43:10 main:574] : INFO : Epoch 1819 | loss: 0.0310863 | val_loss: 0.0311788 | Time: 18616.9 ms [2022-01-31 21:43:48 main:574] : INFO : Epoch 1820 | loss: 0.0310842 | val_loss: 0.0311807 | Time: 38425.3 ms [2022-01-31 21:44:58 main:574] : INFO : Epoch 1821 | loss: 0.0310819 | val_loss: 0.0311778 | Time: 69427.4 ms [2022-01-31 21:45:46 main:574] : INFO : Epoch 1822 | loss: 0.0310836 | val_loss: 0.0311791 | Time: 47523.4 ms [2022-01-31 21:45:50 main:574] : INFO : Epoch 1823 | loss: 0.031089 | val_loss: 0.0311819 | Time: 4687.04 ms [2022-01-31 21:45:55 main:574] : INFO : Epoch 1824 | loss: 0.0310859 | val_loss: 0.0311801 | Time: 4691.93 ms [2022-01-31 21:46:00 main:574] : INFO : Epoch 1825 | loss: 0.0310861 | val_loss: 0.0311821 | Time: 4773.29 ms [2022-01-31 21:46:05 main:574] : INFO : Epoch 1826 | loss: 0.0310882 | val_loss: 0.031175 | Time: 5237.84 ms [2022-01-31 21:46:20 main:574] : INFO : Epoch 1827 | loss: 0.0310893 | val_loss: 0.03118 | Time: 14832.2 ms [2022-01-31 21:46:25 main:574] : INFO : Epoch 1828 | loss: 0.0310902 | val_loss: 0.0311874 | Time: 4670.25 ms [2022-01-31 21:46:29 main:574] : INFO : Epoch 1829 | loss: 0.0310901 | val_loss: 0.0311822 | Time: 4721.78 ms [2022-01-31 21:46:34 main:574] : INFO : Epoch 1830 | loss: 0.031093 | val_loss: 0.0311864 | Time: 4657.47 ms [2022-01-31 21:46:39 main:574] : INFO : Epoch 1831 | loss: 0.0310908 | val_loss: 0.0311768 | Time: 4902.33 ms [2022-01-31 21:46:44 main:574] : INFO : Epoch 1832 | loss: 0.0310881 | val_loss: 0.031171 | Time: 4934.81 ms [2022-01-31 21:46:48 main:574] : INFO : Epoch 1833 | loss: 0.0310873 | val_loss: 0.0311864 | Time: 4699.22 ms [2022-01-31 21:46:53 main:574] : INFO : Epoch 1834 | loss: 0.0310846 | val_loss: 0.0311797 | Time: 4781.59 ms [2022-01-31 21:46:58 main:574] : INFO : Epoch 1835 | loss: 0.0310846 | val_loss: 0.031184 | Time: 4973.04 ms [2022-01-31 21:47:05 main:574] : INFO : Epoch 1836 | loss: 0.0310837 | val_loss: 0.0311819 | Time: 6782.02 ms [2022-01-31 21:47:30 main:574] : INFO : Epoch 1837 | loss: 0.0310865 | val_loss: 0.0311888 | Time: 24852.6 ms [2022-01-31 21:47:35 main:574] : INFO : Epoch 1838 | loss: 0.0310854 | val_loss: 0.0311795 | Time: 4612.22 ms [2022-01-31 21:47:39 main:574] : INFO : Epoch 1839 | loss: 0.0310897 | val_loss: 0.0311782 | Time: 4555.45 ms [2022-01-31 21:47:44 main:574] : INFO : Epoch 1840 | loss: 0.0310883 | val_loss: 0.0311772 | Time: 4590.1 ms [2022-01-31 21:47:49 main:574] : INFO : Epoch 1841 | loss: 0.0310908 | val_loss: 0.0311814 | Time: 4925.72 ms [2022-01-31 21:47:53 main:574] : INFO : Epoch 1842 | loss: 0.0310899 | val_loss: 0.0311765 | Time: 4592.84 ms [2022-01-31 21:47:58 main:574] : INFO : Epoch 1843 | loss: 0.0310935 | val_loss: 0.0311874 | Time: 4619.04 ms [2022-01-31 21:48:03 main:574] : INFO : Epoch 1844 | loss: 0.0311007 | val_loss: 0.031171 | Time: 4637.73 ms [2022-01-31 21:48:07 main:574] : INFO : Epoch 1845 | loss: 0.0311253 | val_loss: 
0.0311712 | Time: 4359.99 ms [2022-01-31 21:48:11 main:574] : INFO : Epoch 1846 | loss: 0.03112 | val_loss: 0.0311851 | Time: 4168.46 ms [2022-01-31 21:48:16 main:574] : INFO : Epoch 1847 | loss: 0.031114 | val_loss: 0.0311785 | Time: 4582.72 ms [2022-01-31 21:48:20 main:574] : INFO : Epoch 1848 | loss: 0.0311103 | val_loss: 0.0311892 | Time: 4648.62 ms [2022-01-31 21:48:25 main:574] : INFO : Epoch 1849 | loss: 0.031103 | val_loss: 0.0311765 | Time: 4658.52 ms [2022-01-31 21:48:30 main:574] : INFO : Epoch 1850 | loss: 0.0311005 | val_loss: 0.0311838 | Time: 4565.31 ms [2022-01-31 21:48:34 main:574] : INFO : Epoch 1851 | loss: 0.0310992 | val_loss: 0.0311769 | Time: 4866.6 ms [2022-01-31 21:48:39 main:574] : INFO : Epoch 1852 | loss: 0.0310989 | val_loss: 0.0311755 | Time: 4548.62 ms [2022-01-31 21:48:44 main:574] : INFO : Epoch 1853 | loss: 0.0310989 | val_loss: 0.0311803 | Time: 4562.42 ms [2022-01-31 21:48:48 main:574] : INFO : Epoch 1854 | loss: 0.0310945 | val_loss: 0.0311749 | Time: 4682.01 ms [2022-01-31 21:48:53 main:574] : INFO : Epoch 1855 | loss: 0.0310939 | val_loss: 0.0311827 | Time: 4660.89 ms [2022-01-31 21:48:58 main:574] : INFO : Epoch 1856 | loss: 0.0310942 | val_loss: 0.0311842 | Time: 4625.85 ms [2022-01-31 21:49:02 main:574] : INFO : Epoch 1857 | loss: 0.0310944 | val_loss: 0.0311823 | Time: 4628.1 ms [2022-01-31 21:49:07 main:574] : INFO : Epoch 1858 | loss: 0.0310896 | val_loss: 0.0311854 | Time: 4558.56 ms [2022-01-31 21:49:12 main:574] : INFO : Epoch 1859 | loss: 0.0310904 | val_loss: 0.0311832 | Time: 4674.85 ms [2022-01-31 21:49:17 main:574] : INFO : Epoch 1860 | loss: 0.0310954 | val_loss: 0.0311883 | Time: 5021.54 ms [2022-01-31 21:49:21 main:574] : INFO : Epoch 1861 | loss: 0.0310916 | val_loss: 0.0311724 | Time: 4736.37 ms [2022-01-31 21:49:26 main:574] : INFO : Epoch 1862 | loss: 0.0310894 | val_loss: 0.0311745 | Time: 4590.86 ms [2022-01-31 21:49:31 main:574] : INFO : Epoch 1863 | loss: 0.031086 | val_loss: 0.0311831 | Time: 4654.52 ms [2022-01-31 21:49:35 main:574] : INFO : Epoch 1864 | loss: 0.0310873 | val_loss: 0.0311806 | Time: 4680.36 ms [2022-01-31 21:49:40 main:574] : INFO : Epoch 1865 | loss: 0.0310904 | val_loss: 0.0311784 | Time: 4713.22 ms [2022-01-31 21:49:45 main:574] : INFO : Epoch 1866 | loss: 0.0310884 | val_loss: 0.0311812 | Time: 4526.62 ms [2022-01-31 21:49:49 main:574] : INFO : Epoch 1867 | loss: 0.0310926 | val_loss: 0.0311686 | Time: 4651.32 ms [2022-01-31 21:49:54 main:574] : INFO : Epoch 1868 | loss: 0.0310933 | val_loss: 0.0311752 | Time: 4728.25 ms [2022-01-31 21:49:59 main:574] : INFO : Epoch 1869 | loss: 0.0310901 | val_loss: 0.0311715 | Time: 4638.35 ms [2022-01-31 21:50:04 main:574] : INFO : Epoch 1870 | loss: 0.0310931 | val_loss: 0.0311784 | Time: 4987.15 ms [2022-01-31 21:50:08 main:574] : INFO : Epoch 1871 | loss: 0.0310997 | val_loss: 0.031172 | Time: 4664.9 ms [2022-01-31 21:50:13 main:574] : INFO : Epoch 1872 | loss: 0.0311039 | val_loss: 0.0311717 | Time: 4575.25 ms [2022-01-31 21:50:18 main:574] : INFO : Epoch 1873 | loss: 0.0311017 | val_loss: 0.0311685 | Time: 4696.47 ms [2022-01-31 21:50:22 main:574] : INFO : Epoch 1874 | loss: 0.031105 | val_loss: 0.0311709 | Time: 4732.43 ms [2022-01-31 21:50:27 main:574] : INFO : Epoch 1875 | loss: 0.0311008 | val_loss: 0.0311789 | Time: 4606.57 ms [2022-01-31 21:50:32 main:574] : INFO : Epoch 1876 | loss: 0.0310976 | val_loss: 0.0311793 | Time: 4616.33 ms [2022-01-31 21:50:36 main:574] : INFO : Epoch 1877 | loss: 0.0310976 | val_loss: 0.0311693 | Time: 4653.67 ms [2022-01-31 
21:50:41 main:574] : INFO : Epoch 1878 | loss: 0.0310962 | val_loss: 0.0311682 | Time: 4576.67 ms [2022-01-31 21:50:45 main:574] : INFO : Epoch 1879 | loss: 0.0310971 | val_loss: 0.0311723 | Time: 4469.09 ms [2022-01-31 21:50:50 main:574] : INFO : Epoch 1880 | loss: 0.0310927 | val_loss: 0.0311761 | Time: 4551.17 ms [2022-01-31 21:50:54 main:574] : INFO : Epoch 1881 | loss: 0.0310933 | val_loss: 0.0311828 | Time: 4453.5 ms [2022-01-31 21:50:59 main:574] : INFO : Epoch 1882 | loss: 0.0310945 | val_loss: 0.0311858 | Time: 4611.91 ms [2022-01-31 21:51:03 main:574] : INFO : Epoch 1883 | loss: 0.0310892 | val_loss: 0.0311776 | Time: 4410.08 ms [2022-01-31 21:51:08 main:574] : INFO : Epoch 1884 | loss: 0.0310931 | val_loss: 0.0311738 | Time: 4396.87 ms [2022-01-31 21:51:12 main:574] : INFO : Epoch 1885 | loss: 0.0310964 | val_loss: 0.0311713 | Time: 4368.67 ms [2022-01-31 21:51:17 main:574] : INFO : Epoch 1886 | loss: 0.0310898 | val_loss: 0.0311805 | Time: 4482.62 ms [2022-01-31 21:51:21 main:574] : INFO : Epoch 1887 | loss: 0.0310873 | val_loss: 0.0311854 | Time: 4536.43 ms [2022-01-31 21:51:26 main:574] : INFO : Epoch 1888 | loss: 0.0310899 | val_loss: 0.0311874 | Time: 4414.33 ms [2022-01-31 21:51:30 main:574] : INFO : Epoch 1889 | loss: 0.0310865 | val_loss: 0.0311818 | Time: 4574.75 ms [2022-01-31 21:51:35 main:574] : INFO : Epoch 1890 | loss: 0.0310858 | val_loss: 0.0311844 | Time: 4789.61 ms [2022-01-31 21:51:40 main:574] : INFO : Epoch 1891 | loss: 0.031086 | val_loss: 0.0311721 | Time: 4927.78 ms [2022-01-31 21:51:45 main:574] : INFO : Epoch 1892 | loss: 0.031085 | val_loss: 0.0311873 | Time: 4713.34 ms [2022-01-31 21:51:49 main:574] : INFO : Epoch 1893 | loss: 0.0310842 | val_loss: 0.0311876 | Time: 4655.76 ms [2022-01-31 21:51:54 main:574] : INFO : Epoch 1894 | loss: 0.0310806 | val_loss: 0.031183 | Time: 4634.92 ms [2022-01-31 21:51:59 main:574] : INFO : Epoch 1895 | loss: 0.0310787 | val_loss: 0.031187 | Time: 4661.58 ms [2022-01-31 21:52:03 main:574] : INFO : Epoch 1896 | loss: 0.0310793 | val_loss: 0.031192 | Time: 4782.84 ms [2022-01-31 21:52:08 main:574] : INFO : Epoch 1897 | loss: 0.0310794 | val_loss: 0.0311849 | Time: 4816.75 ms [2022-01-31 21:52:13 main:574] : INFO : Epoch 1898 | loss: 0.0310789 | val_loss: 0.0311756 | Time: 4712.37 ms [2022-01-31 21:52:18 main:574] : INFO : Epoch 1899 | loss: 0.0310768 | val_loss: 0.0311832 | Time: 4599.91 ms [2022-01-31 21:52:22 main:574] : INFO : Epoch 1900 | loss: 0.0310817 | val_loss: 0.0311889 | Time: 4707.43 ms [2022-01-31 21:52:27 main:574] : INFO : Epoch 1901 | loss: 0.031088 | val_loss: 0.0311861 | Time: 4355.86 ms [2022-01-31 21:52:31 main:574] : INFO : Epoch 1902 | loss: 0.0310868 | val_loss: 0.0311774 | Time: 4411.02 ms [2022-01-31 21:52:35 main:574] : INFO : Epoch 1903 | loss: 0.0310847 | val_loss: 0.0311803 | Time: 4321.9 ms [2022-01-31 21:52:40 main:574] : INFO : Epoch 1904 | loss: 0.0310828 | val_loss: 0.0311791 | Time: 4293.22 ms [2022-01-31 21:52:44 main:574] : INFO : Epoch 1905 | loss: 0.0310814 | val_loss: 0.0311834 | Time: 4295.98 ms [2022-01-31 21:52:48 main:574] : INFO : Epoch 1906 | loss: 0.0310882 | val_loss: 0.0311869 | Time: 4371.03 ms [2022-01-31 21:52:53 main:574] : INFO : Epoch 1907 | loss: 0.0310982 | val_loss: 0.0311851 | Time: 4379.54 ms [2022-01-31 21:52:57 main:574] : INFO : Epoch 1908 | loss: 0.0311112 | val_loss: 0.0311735 | Time: 4379.78 ms [2022-01-31 21:53:02 main:574] : INFO : Epoch 1909 | loss: 0.0310994 | val_loss: 0.031175 | Time: 4377.31 ms [2022-01-31 21:53:06 main:574] : INFO : Epoch 1910 | 
loss: 0.0310982 | val_loss: 0.0311795 | Time: 4198.96 ms [2022-01-31 21:53:10 main:574] : INFO : Epoch 1911 | loss: 0.0310908 | val_loss: 0.0311833 | Time: 4239.17 ms [2022-01-31 21:53:15 main:574] : INFO : Epoch 1912 | loss: 0.031089 | val_loss: 0.0311828 | Time: 4625.3 ms [2022-01-31 21:53:19 main:574] : INFO : Epoch 1913 | loss: 0.0310853 | val_loss: 0.0311931 | Time: 4530.98 ms [2022-01-31 21:53:24 main:574] : INFO : Epoch 1914 | loss: 0.031084 | val_loss: 0.0311943 | Time: 4597.96 ms [2022-01-31 21:53:29 main:574] : INFO : Epoch 1915 | loss: 0.0310816 | val_loss: 0.0311818 | Time: 4684.52 ms [2022-01-31 21:53:33 main:574] : INFO : Epoch 1916 | loss: 0.03108 | val_loss: 0.0311786 | Time: 4534.99 ms [2022-01-31 21:53:38 main:574] : INFO : Epoch 1917 | loss: 0.0310838 | val_loss: 0.031189 | Time: 4550.13 ms [2022-01-31 21:53:42 main:574] : INFO : Epoch 1918 | loss: 0.0310791 | val_loss: 0.0311813 | Time: 4596.68 ms [2022-01-31 21:53:48 main:574] : INFO : Epoch 1919 | loss: 0.0310812 | val_loss: 0.0311965 | Time: 5475.73 ms [2022-01-31 21:53:54 main:574] : INFO : Epoch 1920 | loss: 0.0310796 | val_loss: 0.0311821 | Time: 6090.91 ms [2022-01-31 21:54:01 main:574] : INFO : Epoch 1921 | loss: 0.0310856 | val_loss: 0.0311843 | Time: 6584.79 ms [2022-01-31 21:54:18 main:574] : INFO : Epoch 1922 | loss: 0.0310867 | val_loss: 0.0311865 | Time: 17237.8 ms [2022-01-31 21:54:36 main:574] : INFO : Epoch 1923 | loss: 0.031086 | val_loss: 0.03118 | Time: 18353.5 ms [2022-01-31 21:54:55 main:574] : INFO : Epoch 1924 | loss: 0.0310842 | val_loss: 0.0311784 | Time: 18437.4 ms [2022-01-31 21:55:13 main:574] : INFO : Epoch 1925 | loss: 0.0310846 | val_loss: 0.0311931 | Time: 18234.1 ms [2022-01-31 21:55:21 main:574] : INFO : Epoch 1926 | loss: 0.0310809 | val_loss: 0.0311723 | Time: 8251.19 ms [2022-01-31 21:55:40 main:574] : INFO : Epoch 1927 | loss: 0.0310832 | val_loss: 0.0311875 | Time: 18443.4 ms [2022-01-31 21:55:58 main:574] : INFO : Epoch 1928 | loss: 0.0310837 | val_loss: 0.031186 | Time: 18577.8 ms [2022-01-31 21:56:17 main:574] : INFO : Epoch 1929 | loss: 0.0310896 | val_loss: 0.0311744 | Time: 18425.1 ms [2022-01-31 21:56:35 main:574] : INFO : Epoch 1930 | loss: 0.0310893 | val_loss: 0.0311809 | Time: 18312.1 ms [2022-01-31 21:56:54 main:574] : INFO : Epoch 1931 | loss: 0.0310881 | val_loss: 0.0311835 | Time: 18467 ms [2022-01-31 21:57:01 main:574] : INFO : Epoch 1932 | loss: 0.0310829 | val_loss: 0.0311853 | Time: 7901.7 ms [2022-01-31 21:57:19 main:574] : INFO : Epoch 1933 | loss: 0.031085 | val_loss: 0.0311907 | Time: 17993.8 ms [2022-01-31 21:57:38 main:574] : INFO : Epoch 1934 | loss: 0.0310843 | val_loss: 0.0311823 | Time: 18149.9 ms [2022-01-31 21:57:56 main:574] : INFO : Epoch 1935 | loss: 0.0310798 | val_loss: 0.0311868 | Time: 18389.1 ms [2022-01-31 21:58:15 main:574] : INFO : Epoch 1936 | loss: 0.0310793 | val_loss: 0.0311856 | Time: 18428.1 ms [2022-01-31 21:58:24 main:574] : INFO : Epoch 1937 | loss: 0.0310783 | val_loss: 0.0311792 | Time: 8916.99 ms [2022-01-31 21:58:42 main:574] : INFO : Epoch 1938 | loss: 0.0310834 | val_loss: 0.0311726 | Time: 18456.1 ms [2022-01-31 21:59:00 main:574] : INFO : Epoch 1939 | loss: 0.0310879 | val_loss: 0.0311845 | Time: 18262.4 ms [2022-01-31 21:59:18 main:574] : INFO : Epoch 1940 | loss: 0.0310804 | val_loss: 0.0311809 | Time: 18184.3 ms [2022-01-31 21:59:36 main:574] : INFO : Epoch 1941 | loss: 0.031084 | val_loss: 0.0311864 | Time: 17993.4 ms [2022-01-31 21:59:45 main:574] : INFO : Epoch 1942 | loss: 0.0310873 | val_loss: 0.0311872 | Time: 
8264.76 ms [2022-01-31 22:00:03 main:574] : INFO : Epoch 1943 | loss: 0.0310878 | val_loss: 0.0311739 | Time: 18641.3 ms [2022-01-31 22:00:32 main:574] : INFO : Epoch 1944 | loss: 0.0310876 | val_loss: 0.0311763 | Time: 28444.7 ms [2022-01-31 22:00:50 main:574] : INFO : Epoch 1945 | loss: 0.0310966 | val_loss: 0.0311749 | Time: 17814.1 ms [2022-01-31 22:01:08 main:574] : INFO : Epoch 1946 | loss: 0.0310933 | val_loss: 0.031181 | Time: 17869.6 ms [2022-01-31 22:01:16 main:574] : INFO : Epoch 1947 | loss: 0.0310996 | val_loss: 0.0311693 | Time: 7877.07 ms [2022-01-31 22:01:33 main:574] : INFO : Epoch 1948 | loss: 0.0311004 | val_loss: 0.0311799 | Time: 17786.2 ms [2022-01-31 22:01:51 main:574] : INFO : Epoch 1949 | loss: 0.0310932 | val_loss: 0.0311809 | Time: 17676.5 ms [2022-01-31 22:02:09 main:574] : INFO : Epoch 1950 | loss: 0.0311035 | val_loss: 0.0311813 | Time: 17972.5 ms [2022-01-31 22:02:27 main:574] : INFO : Epoch 1951 | loss: 0.0311007 | val_loss: 0.0311669 | Time: 18247.9 ms [2022-01-31 22:02:35 main:574] : INFO : Epoch 1952 | loss: 0.0310955 | val_loss: 0.0311744 | Time: 7722.04 ms [2022-01-31 22:02:53 main:574] : INFO : Epoch 1953 | loss: 0.0310902 | val_loss: 0.031168 | Time: 17427.5 ms [2022-01-31 22:03:21 main:574] : INFO : Epoch 1954 | loss: 0.031092 | val_loss: 0.0311796 | Time: 28168.3 ms [2022-01-31 22:03:39 main:574] : INFO : Epoch 1955 | loss: 0.031092 | val_loss: 0.0311756 | Time: 17981.7 ms [2022-01-31 22:03:46 main:574] : INFO : Epoch 1956 | loss: 0.0310847 | val_loss: 0.0311824 | Time: 7732.64 ms [2022-01-31 22:04:04 main:574] : INFO : Epoch 1957 | loss: 0.0310844 | val_loss: 0.031176 | Time: 18003.6 ms [2022-01-31 22:04:23 main:574] : INFO : Epoch 1958 | loss: 0.0310855 | val_loss: 0.0311825 | Time: 18368.9 ms [2022-01-31 22:04:41 main:574] : INFO : Epoch 1959 | loss: 0.0310841 | val_loss: 0.0311852 | Time: 18040.8 ms [2022-01-31 22:04:48 main:574] : INFO : Epoch 1960 | loss: 0.0310832 | val_loss: 0.0311825 | Time: 7429.01 ms [2022-01-31 22:05:06 main:574] : INFO : Epoch 1961 | loss: 0.0310813 | val_loss: 0.0311851 | Time: 17364.3 ms [2022-01-31 22:05:23 main:574] : INFO : Epoch 1962 | loss: 0.0310809 | val_loss: 0.0311805 | Time: 17167.2 ms [2022-01-31 22:05:40 main:574] : INFO : Epoch 1963 | loss: 0.0310778 | val_loss: 0.0311842 | Time: 17258.8 ms [2022-01-31 22:05:47 main:574] : INFO : Epoch 1964 | loss: 0.0310799 | val_loss: 0.0311821 | Time: 6864.93 ms [2022-01-31 22:06:04 main:574] : INFO : Epoch 1965 | loss: 0.0310786 | val_loss: 0.0311891 | Time: 16774.7 ms [2022-01-31 22:06:21 main:574] : INFO : Epoch 1966 | loss: 0.031079 | val_loss: 0.03118 | Time: 17331.7 ms [2022-01-31 22:06:29 main:574] : INFO : Epoch 1967 | loss: 0.0310764 | val_loss: 0.0311756 | Time: 7141.23 ms [2022-01-31 22:06:46 main:574] : INFO : Epoch 1968 | loss: 0.0310794 | val_loss: 0.0311847 | Time: 17429.6 ms [2022-01-31 22:07:03 main:574] : INFO : Epoch 1969 | loss: 0.0310774 | val_loss: 0.0311902 | Time: 17115.1 ms [2022-01-31 22:07:10 main:574] : INFO : Epoch 1970 | loss: 0.0310789 | val_loss: 0.0311817 | Time: 7059.37 ms [2022-01-31 22:07:27 main:574] : INFO : Epoch 1971 | loss: 0.0310855 | val_loss: 0.0311884 | Time: 7078.22 ms [2022-01-31 22:07:45 main:574] : INFO : Epoch 1972 | loss: 0.0310937 | val_loss: 0.0311765 | Time: 17170.8 ms [2022-01-31 22:08:02 main:574] : INFO : Epoch 1973 | loss: 0.031092 | val_loss: 0.0311774 | Time: 16995.6 ms [2022-01-31 22:08:09 main:574] : INFO : Epoch 1974 | loss: 0.0310885 | val_loss: 0.0311804 | Time: 7525.3 ms [2022-01-31 22:08:27 main:574] : 
INFO : Epoch 1975 | loss: 0.0310858 | val_loss: 0.031178 | Time: 17760.8 ms [2022-01-31 22:08:45 main:574] : INFO : Epoch 1976 | loss: 0.0310887 | val_loss: 0.0311801 | Time: 17415.1 ms [2022-01-31 22:09:02 main:574] : INFO : Epoch 1977 | loss: 0.0310832 | val_loss: 0.0311768 | Time: 17097.8 ms [2022-01-31 22:09:09 main:574] : INFO : Epoch 1978 | loss: 0.0310834 | val_loss: 0.0311836 | Time: 7375.53 ms [2022-01-31 22:09:26 main:574] : INFO : Epoch 1979 | loss: 0.0310802 | val_loss: 0.0311823 | Time: 16997.1 ms [2022-01-31 22:09:43 main:574] : INFO : Epoch 1980 | loss: 0.0310768 | val_loss: 0.031182 | Time: 17051.3 ms [2022-01-31 22:09:51 main:574] : INFO : Epoch 1981 | loss: 0.0310778 | val_loss: 0.0311902 | Time: 7604.88 ms [2022-01-31 22:10:08 main:574] : INFO : Epoch 1982 | loss: 0.0310815 | val_loss: 0.0311811 | Time: 17382.1 ms [2022-01-31 22:10:25 main:574] : INFO : Epoch 1983 | loss: 0.0310828 | val_loss: 0.0311858 | Time: 17321.6 ms [2022-01-31 22:10:43 main:574] : INFO : Epoch 1984 | loss: 0.0310813 | val_loss: 0.0311791 | Time: 17318.9 ms [2022-01-31 22:10:51 main:574] : INFO : Epoch 1985 | loss: 0.0310808 | val_loss: 0.0311812 | Time: 8282.26 ms [2022-01-31 22:11:39 main:574] : INFO : Epoch 1986 | loss: 0.031077 | val_loss: 0.0311862 | Time: 47800.1 ms [2022-01-31 22:12:07 main:574] : INFO : Epoch 1987 | loss: 0.0310794 | val_loss: 0.0311806 | Time: 27710.4 ms [2022-01-31 22:12:24 main:574] : INFO : Epoch 1988 | loss: 0.0310827 | val_loss: 0.0311746 | Time: 17551.8 ms [2022-01-31 22:12:31 main:574] : INFO : Epoch 1989 | loss: 0.0310864 | val_loss: 0.031192 | Time: 7195.08 ms [2022-01-31 22:12:48 main:574] : INFO : Epoch 1990 | loss: 0.0310843 | val_loss: 0.0311862 | Time: 16813.4 ms [2022-01-31 22:13:06 main:574] : INFO : Epoch 1991 | loss: 0.0310817 | val_loss: 0.0311846 | Time: 17557.3 ms [2022-01-31 22:13:14 main:574] : INFO : Epoch 1992 | loss: 0.0310825 | val_loss: 0.0311853 | Time: 8127.32 ms [2022-01-31 22:13:42 main:574] : INFO : Epoch 1993 | loss: 0.0310833 | val_loss: 0.0311855 | Time: 28100.6 ms [2022-01-31 22:14:00 main:574] : INFO : Epoch 1994 | loss: 0.0310827 | val_loss: 0.03118 | Time: 17903.4 ms [2022-01-31 22:14:18 main:574] : INFO : Epoch 1995 | loss: 0.0310823 | val_loss: 0.0311717 | Time: 17885.9 ms [2022-01-31 22:14:36 main:574] : INFO : Epoch 1996 | loss: 0.0310845 | val_loss: 0.0311889 | Time: 17981 ms [2022-01-31 22:14:44 main:574] : INFO : Epoch 1997 | loss: 0.0310867 | val_loss: 0.0311728 | Time: 8212.49 ms [2022-01-31 22:15:02 main:574] : INFO : Epoch 1998 | loss: 0.0310899 | val_loss: 0.0311791 | Time: 17611 ms [2022-01-31 22:15:19 main:574] : INFO : Epoch 1999 | loss: 0.0310861 | val_loss: 0.0311864 | Time: 17025.1 ms [2022-01-31 22:15:36 main:574] : INFO : Epoch 2000 | loss: 0.0310878 | val_loss: 0.0311762 | Time: 17072.7 ms [2022-01-31 22:15:43 main:574] : INFO : Epoch 2001 | loss: 0.0311059 | val_loss: 0.0311785 | Time: 6896.71 ms [2022-01-31 22:16:00 main:574] : INFO : Epoch 2002 | loss: 0.0311194 | val_loss: 0.0311661 | Time: 16705.9 ms [2022-01-31 22:16:26 main:574] : INFO : Epoch 2003 | loss: 0.0311082 | val_loss: 0.0311708 | Time: 26696.8 ms [2022-01-31 22:16:33 main:574] : INFO : Epoch 2004 | loss: 0.0310994 | val_loss: 0.0311706 | Time: 6699.49 ms [2022-01-31 22:16:50 main:574] : INFO : Epoch 2005 | loss: 0.0311057 | val_loss: 0.0311769 | Time: 16882.2 ms [2022-01-31 22:17:17 main:574] : INFO : Epoch 2006 | loss: 0.0310993 | val_loss: 0.0311783 | Time: 27504.9 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 
GPU: GeForce GTX 1050 Ti) [2022-02-01 17:42:26 main:435] : INFO : Set logging level to 1 [2022-02-01 17:42:26 main:441] : INFO : Running in BOINC Client mode [2022-02-01 17:42:26 main:444] : INFO : Resolving all filenames [2022-02-01 17:42:26 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-02-01 17:42:26 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-02-01 17:42:26 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-02-01 17:42:26 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-02-01 17:42:26 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-02-01 17:42:26 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-02-01 17:42:26 main:474] : INFO : Configuration: [2022-02-01 17:42:26 main:475] : INFO : Model type: GRU [2022-02-01 17:42:26 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-02-01 17:42:26 main:477] : INFO : Max Epochs: 2048 [2022-02-01 17:42:26 main:478] : INFO : Batch Size: 128 [2022-02-01 17:42:26 main:479] : INFO : Learning Rate: 0.01 [2022-02-01 17:42:26 main:480] : INFO : Patience: 10 [2022-02-01 17:42:26 main:481] : INFO : Hidden Width: 12 [2022-02-01 17:42:26 main:482] : INFO : # Recurrent Layers: 4 [2022-02-01 17:42:26 main:483] : INFO : # Backend Layers: 4 [2022-02-01 17:42:26 main:484] : INFO : # Threads: 1 [2022-02-01 17:42:26 main:486] : INFO : Preparing Dataset [2022-02-01 17:42:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-02-01 17:42:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-02-01 17:42:29 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-02-01 17:42:29 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-02-01 17:42:29 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-02-01 17:42:29 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-02-01 17:42:29 main:494] : INFO : Creating Model [2022-02-01 17:42:29 main:507] : INFO : Preparing config file [2022-02-01 17:42:29 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-02-01 17:42:29 main:512] : INFO : Loading config [2022-02-01 17:42:29 main:514] : INFO : Loading state [2022-02-01 17:42:31 main:559] : INFO : Loading DataLoader into Memory [2022-02-01 17:42:31 main:562] : INFO : Starting Training [2022-02-01 17:42:36 main:574] : INFO : Epoch 1999 | loss: 0.0311168 | val_loss: 0.0311666 | Time: 4549.27 ms [2022-02-01 17:42:45 main:574] : INFO : Epoch 2000 | loss: 0.0310903 | val_loss: 0.0311785 | Time: 9289.52 ms [2022-02-01 17:43:12 main:574] : INFO : Epoch 2001 | loss: 0.0310875 | val_loss: 0.0311743 | Time: 26726.6 ms [2022-02-01 17:43:28 main:574] : INFO : Epoch 2002 | loss: 0.031084 | val_loss: 0.031186 | Time: 16218.7 ms [2022-02-01 17:43:34 main:574] : INFO : Epoch 2003 | loss: 0.0310827 | val_loss: 0.0311751 | Time: 6538.46 ms [2022-02-01 17:43:41 main:574] : INFO : Epoch 2004 | loss: 0.0310847 | val_loss: 0.0311832 | Time: 6660.99 ms [2022-02-01 17:43:48 main:574] : INFO : Epoch 2005 | loss: 0.0310866 | val_loss: 0.0311702 | Time: 7244.35 ms [2022-02-01 17:43:55 main:574] : INFO : Epoch 2006 | loss: 0.0310885 | val_loss: 0.031192 | Time: 6410.88 ms [2022-02-01 17:44:02 main:574] : INFO : Epoch 2007 | loss: 0.0310822 | val_loss: 0.0311778 | Time: 6777.52 ms [2022-02-01 17:44:19 main:574] : INFO : Epoch 2008 | loss: 0.0310814 | val_loss: 0.031185 | Time: 17099.7 ms [2022-02-01 17:44:36 main:574] : INFO : Epoch 2009 | loss: 0.031083 | val_loss: 0.0311865 | Time: 17682 ms [2022-02-01 17:44:43 main:574] : INFO : Epoch 2010 | loss: 0.0310885 | val_loss: 0.0311768 | Time: 6829.69 ms [2022-02-01 17:45:20 main:574] : INFO : Epoch 2011 | loss: 0.0310959 | val_loss: 0.0311935 | Time: 36651.7 ms [2022-02-01 17:45:26 main:574] : INFO : Epoch 2012 | loss: 0.0310903 | val_loss: 0.0311835 | Time: 6082.99 ms [2022-02-01 17:45:32 main:574] : INFO : Epoch 2013 | loss: 0.0310845 | val_loss: 0.0311876 | Time: 6226.15 ms [2022-02-01 17:45:39 main:574] : INFO : Epoch 2014 | loss: 0.0310797 | val_loss: 0.0311831 | Time: 6533.93 ms [2022-02-01 17:45:46 main:574] : INFO : Epoch 2015 | loss: 0.0310813 | val_loss: 0.0311769 | Time: 7364.16 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2022-02-01 18:15:26 main:435] : INFO : Set logging level to 1 [2022-02-01 18:15:26 main:441] : INFO : Running in BOINC Client mode [2022-02-01 18:15:26 main:444] : INFO : Resolving all filenames [2022-02-01 18:15:26 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-02-01 18:15:26 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-02-01 18:15:26 main:452] : INFO : Resolved: model-final.pt => model-final.pt (exists = 0) [2022-02-01 18:15:26 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-02-01 18:15:26 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-02-01 18:15:26 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-02-01 18:15:26 main:474] : INFO : Configuration: [2022-02-01 18:15:26 main:475] : INFO : Model type: GRU [2022-02-01 18:15:26 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-02-01 18:15:26 main:477] : INFO : Max Epochs: 2048 [2022-02-01 18:15:26 main:478] : INFO : Batch Size: 128 [2022-02-01 18:15:26 main:479] : INFO : Learning Rate: 0.01 [2022-02-01 18:15:26 main:480] : INFO : Patience: 10 [2022-02-01 18:15:26 main:481] : INFO : Hidden Width: 12 [2022-02-01 18:15:26 main:482] : INFO : # Recurrent Layers: 4 [2022-02-01 18:15:26 main:483] : INFO : # Backend Layers: 4 [2022-02-01 18:15:26 main:484] : 
INFO : # Threads: 1 [2022-02-01 18:15:26 main:486] : INFO : Preparing Dataset [2022-02-01 18:15:26 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-02-01 18:15:27 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-02-01 18:15:29 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-02-01 18:15:29 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-02-01 18:15:29 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-02-01 18:15:29 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-02-01 18:15:29 main:494] : INFO : Creating Model [2022-02-01 18:15:29 main:507] : INFO : Preparing config file [2022-02-01 18:15:29 main:511] : INFO : Found checkpoint, attempting to load... [2022-02-01 18:15:29 main:512] : INFO : Loading config [2022-02-01 18:15:29 main:514] : INFO : Loading state [2022-02-01 18:15:31 main:559] : INFO : Loading DataLoader into Memory [2022-02-01 18:15:31 main:562] : INFO : Starting Training [2022-02-01 18:15:56 main:574] : INFO : Epoch 2016 | loss: 0.0311137 | val_loss: 0.0311994 | Time: 24784.4 ms [2022-02-01 18:16:00 main:574] : INFO : Epoch 2017 | loss: 0.0310887 | val_loss: 0.0311804 | Time: 4338.63 ms [2022-02-01 18:16:07 main:574] : INFO : Epoch 2018 | loss: 0.0310818 | val_loss: 0.0311889 | Time: 6257.98 ms [2022-02-01 18:16:13 main:574] : INFO : Epoch 2019 | loss: 0.0310911 | val_loss: 0.0311786 | Time: 6075.7 ms [2022-02-01 18:16:19 main:574] : INFO : Epoch 2020 | loss: 0.0310931 | val_loss: 0.0311802 | Time: 6198.76 ms [2022-02-01 18:16:25 main:574] : INFO : Epoch 2021 | loss: 0.0310857 | val_loss: 0.031181 | Time: 6158.19 ms [2022-02-01 18:16:31 main:574] : INFO : Epoch 2022 | loss: 0.0310822 | val_loss: 0.0311796 | Time: 6349.02 ms [2022-02-01 18:16:37 main:574] : INFO : Epoch 2023 | loss: 0.0310786 | val_loss: 0.0311785 | Time: 5954.79 ms [2022-02-01 18:16:44 main:574] : INFO : Epoch 2024 | loss: 0.0310743 | val_loss: 0.0311798 | Time: 6261.12 ms [2022-02-01 18:16:50 main:574] : INFO : Epoch 2025 | loss: 0.0310729 | val_loss: 0.031181 | Time: 6099.76 ms [2022-02-01 18:16:56 main:574] : INFO : Epoch 2026 | loss: 0.0310744 | val_loss: 0.0311819 | Time: 6316.01 ms [2022-02-01 18:17:02 main:574] : INFO : Epoch 2027 | loss: 0.0310863 | val_loss: 0.0311814 | Time: 6032.6 ms [2022-02-01 18:17:09 main:574] : INFO : Epoch 2028 | loss: 0.0310935 | val_loss: 0.0311724 | Time: 6533.68 ms [2022-02-01 18:17:14 main:574] : INFO : Epoch 2029 | loss: 0.0310881 | val_loss: 0.031183 | Time: 5827.78 ms [2022-02-01 18:17:21 main:574] : INFO : Epoch 2030 | loss: 0.0310829 | val_loss: 0.0311807 | Time: 6237.06 ms [2022-02-01 18:17:27 main:574] : INFO : Epoch 2031 | loss: 0.0310809 | val_loss: 0.0311768 | Time: 6260.37 ms [2022-02-01 18:17:33 main:574] : INFO : Epoch 2032 | loss: 0.0310789 | val_loss: 0.0311812 | Time: 6022.06 ms Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: GeForce GTX 1050 Ti) [2022-02-01 18:20:40 main:435] : INFO : Set logging level to 1 [2022-02-01 18:20:40 main:441] : INFO : Running in BOINC Client mode [2022-02-01 18:20:40 main:444] : INFO : Resolving all filenames [2022-02-01 18:20:40 main:452] : INFO : Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1) [2022-02-01 18:20:40 main:452] : INFO : Resolved: model.cfg => model.cfg (exists = 1) [2022-02-01 18:20:40 main:452] : INFO : Resolved: model-final.pt => 
model-final.pt (exists = 0) [2022-02-01 18:20:40 main:452] : INFO : Resolved: model-input.pt => model-input.pt (exists = 0) [2022-02-01 18:20:41 main:452] : INFO : Resolved: snapshot.pt => snapshot.pt (exists = 1) [2022-02-01 18:20:41 main:472] : INFO : Dataset filename: dataset.hdf5 [2022-02-01 18:20:41 main:474] : INFO : Configuration: [2022-02-01 18:20:41 main:475] : INFO : Model type: GRU [2022-02-01 18:20:41 main:476] : INFO : Validation Loss Threshold: 0.0001 [2022-02-01 18:20:41 main:477] : INFO : Max Epochs: 2048 [2022-02-01 18:20:41 main:478] : INFO : Batch Size: 128 [2022-02-01 18:20:41 main:479] : INFO : Learning Rate: 0.01 [2022-02-01 18:20:41 main:480] : INFO : Patience: 10 [2022-02-01 18:20:41 main:481] : INFO : Hidden Width: 12 [2022-02-01 18:20:41 main:482] : INFO : # Recurrent Layers: 4 [2022-02-01 18:20:41 main:483] : INFO : # Backend Layers: 4 [2022-02-01 18:20:41 main:484] : INFO : # Threads: 1 [2022-02-01 18:20:41 main:486] : INFO : Preparing Dataset [2022-02-01 18:20:41 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xt from dataset.hdf5 into memory [2022-02-01 18:20:41 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yt from dataset.hdf5 into memory [2022-02-01 18:20:43 load:106] : INFO : Successfully loaded dataset of 2048 examples into memory. [2022-02-01 18:20:43 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Xv from dataset.hdf5 into memory [2022-02-01 18:20:43 load_hdf5_ds_into_tensor:28] : INFO : Loading Dataset /Yv from dataset.hdf5 into memory [2022-02-01 18:20:43 load:106] : INFO : Successfully loaded dataset of 512 examples into memory. [2022-02-01 18:20:43 main:494] : INFO : Creating Model [2022-02-01 18:20:43 main:507] : INFO : Preparing config file [2022-02-01 18:20:43 main:511] : INFO : Found checkpoint, attempting to load... 
[2022-02-01 18:20:43 main:512] : INFO : Loading config [2022-02-01 18:20:43 main:514] : INFO : Loading state [2022-02-01 18:20:44 main:559] : INFO : Loading DataLoader into Memory [2022-02-01 18:20:45 main:562] : INFO : Starting Training [2022-02-01 18:20:49 main:574] : INFO : Epoch 2016 | loss: 0.0311198 | val_loss: 0.0311858 | Time: 4464.76 ms [2022-02-01 18:20:53 main:574] : INFO : Epoch 2017 | loss: 0.0310888 | val_loss: 0.0311671 | Time: 4286.26 ms [2022-02-01 18:20:59 main:574] : INFO : Epoch 2018 | loss: 0.0310827 | val_loss: 0.0311763 | Time: 5135.32 ms [2022-02-01 18:21:05 main:574] : INFO : Epoch 2019 | loss: 0.0310828 | val_loss: 0.0311965 | Time: 6280.35 ms [2022-02-01 18:21:11 main:574] : INFO : Epoch 2020 | loss: 0.0310846 | val_loss: 0.0311786 | Time: 5819.49 ms [2022-02-01 18:21:17 main:574] : INFO : Epoch 2021 | loss: 0.031085 | val_loss: 0.0311863 | Time: 6210.73 ms [2022-02-01 18:21:23 main:574] : INFO : Epoch 2022 | loss: 0.0310815 | val_loss: 0.0311835 | Time: 5892.3 ms [2022-02-01 18:21:29 main:574] : INFO : Epoch 2023 | loss: 0.0310784 | val_loss: 0.0311871 | Time: 5967.81 ms [2022-02-01 18:21:35 main:574] : INFO : Epoch 2024 | loss: 0.0310804 | val_loss: 0.031184 | Time: 6108.73 ms [2022-02-01 18:21:41 main:574] : INFO : Epoch 2025 | loss: 0.0310816 | val_loss: 0.0311877 | Time: 6238.45 ms [2022-02-01 18:21:47 main:574] : INFO : Epoch 2026 | loss: 0.0310817 | val_loss: 0.0311855 | Time: 5483.31 ms [2022-02-01 18:21:52 main:574] : INFO : Epoch 2027 | loss: 0.0310956 | val_loss: 0.0311803 | Time: 5323.22 ms [2022-02-01 18:21:58 main:574] : INFO : Epoch 2028 | loss: 0.0311072 | val_loss: 0.0311693 | Time: 6078.4 ms [2022-02-01 18:22:04 main:574] : INFO : Epoch 2029 | loss: 0.031104 | val_loss: 0.0311726 | Time: 6206.32 ms [2022-02-01 18:22:13 main:574] : INFO : Epoch 2030 | loss: 0.031105 | val_loss: 0.0311802 | Time: 8309.09 ms [2022-02-01 18:23:53 main:574] : INFO : Epoch 2031 | loss: 0.0311019 | val_loss: 0.0311695 | Time: 100386 ms [2022-02-01 18:31:42 main:574] : INFO : Epoch 2032 | loss: 0.0311006 | val_loss: 0.0311766 | Time: 469284 ms [2022-02-01 18:31:49 main:574] : INFO : Epoch 2033 | loss: 0.031097 | val_loss: 0.031174 | Time: 6148.05 ms [2022-02-01 18:31:55 main:574] : INFO : Epoch 2034 | loss: 0.0310932 | val_loss: 0.0311819 | Time: 6431.18 ms [2022-02-01 18:37:51 main:574] : INFO : Epoch 2035 | loss: 0.0310893 | val_loss: 0.0311761 | Time: 355719 ms [2022-02-01 18:37:57 main:574] : INFO : Epoch 2036 | loss: 0.0310889 | val_loss: 0.0311838 | Time: 6427.3 ms [2022-02-01 18:38:04 main:574] : INFO : Epoch 2037 | loss: 0.0310871 | val_loss: 0.0311745 | Time: 6408 ms [2022-02-01 18:38:10 main:574] : INFO : Epoch 2038 | loss: 0.0310845 | val_loss: 0.0311895 | Time: 6485.26 ms [2022-02-01 18:38:17 main:574] : INFO : Epoch 2039 | loss: 0.0310864 | val_loss: 0.031182 | Time: 6393.84 ms [2022-02-01 18:38:23 main:574] : INFO : Epoch 2040 | loss: 0.0310822 | val_loss: 0.0311815 | Time: 6294.25 ms [2022-02-01 18:38:29 main:574] : INFO : Epoch 2041 | loss: 0.031078 | val_loss: 0.0311902 | Time: 6411.08 ms [2022-02-01 18:38:36 main:574] : INFO : Epoch 2042 | loss: 0.0310797 | val_loss: 0.0311842 | Time: 6313.34 ms [2022-02-01 18:38:43 main:574] : INFO : Epoch 2043 | loss: 0.0310791 | val_loss: 0.0311792 | Time: 6718.94 ms [2022-02-01 18:38:49 main:574] : INFO : Epoch 2044 | loss: 0.0310778 | val_loss: 0.0311882 | Time: 6526.19 ms [2022-02-01 18:38:56 main:574] : INFO : Epoch 2045 | loss: 0.0310785 | val_loss: 0.0311836 | Time: 6609.52 ms [2022-02-01 18:43:43 main:574] : 
INFO : Epoch 2046 | loss: 0.0310866 | val_loss: 0.0311847 | Time: 287456 ms [2022-02-01 18:43:51 main:574] : INFO : Epoch 2047 | loss: 0.0310865 | val_loss: 0.031178 | Time: 7936.6 ms [2022-02-01 18:44:10 main:574] : INFO : Epoch 2048 | loss: 0.0310836 | val_loss: 0.0311833 | Time: 18205.1 ms [2022-02-01 18:44:10 main:597] : INFO : Saving trained model to model-final.pt, val_loss 0.0311833 [2022-02-01 18:44:10 main:603] : INFO : Saving end state to config to file [2022-02-01 18:44:10 main:608] : INFO : Success, exiting.. 18:44:10 (2488): called boinc_finish(0) </stderr_txt> ]]>
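The "Configuration" block in the log above describes the network being trained: a GRU with hidden width 12, 4 recurrent layers and 4 backend layers, trained with batch size 128 and learning rate 0.01 for up to 2048 epochs. The application itself is a libTorch (C++) binary, so the following is only a rough PyTorch sketch of what such a configuration could look like; the input and output widths, the interpretation of the "backend" layers, the optimizer and the loss function are not stated in the log and are assumptions here.

```python
# Illustrative PyTorch sketch of the configuration logged above.
# The real application is a libTorch (C++) binary; input/output widths, the
# "backend" layer type, optimizer and loss are assumptions, not log values.
import torch
import torch.nn as nn

HIDDEN_WIDTH = 12      # "Hidden Width: 12"
N_RECURRENT  = 4       # "# Recurrent Layers: 4"
N_BACKEND    = 4       # "# Backend Layers: 4"
INPUT_SIZE   = 8       # assumed; not reported in the log
OUTPUT_SIZE  = 1       # assumed; not reported in the log

class GRUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(INPUT_SIZE, HIDDEN_WIDTH,
                          num_layers=N_RECURRENT, batch_first=True)
        # "Backend" layers interpreted here as a small fully connected stack.
        backend = []
        for _ in range(N_BACKEND - 1):
            backend += [nn.Linear(HIDDEN_WIDTH, HIDDEN_WIDTH), nn.ReLU()]
        backend.append(nn.Linear(HIDDEN_WIDTH, OUTPUT_SIZE))
        self.backend = nn.Sequential(*backend)

    def forward(self, x):
        out, _ = self.gru(x)             # x: (batch, seq_len, INPUT_SIZE)
        return self.backend(out[:, -1])  # predict from the last time step

model = GRUModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # "Learning Rate: 0.01"
criterion = nn.MSELoss()  # loss function not named in the log; MSE assumed
```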
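The log also shows the splits being read from dataset.hdf5 straight into memory: /Xt and /Yt with 2048 training examples, and /Xv and /Yv with 512 validation examples. A minimal sketch of that loading step, using h5py in place of the application's own C++ load_hdf5_ds_into_tensor routine, might look like this (array shapes and dtypes are assumptions):

```python
# Minimal sketch of loading the HDF5 splits named in the log with h5py.
# The real application uses its own C++ load_hdf5_ds_into_tensor(); the
# shapes and dtypes below are assumptions.
import h5py
import torch

def load_split(path, x_name, y_name):
    """Read one split (e.g. /Xt and /Yt) fully into memory as float tensors."""
    with h5py.File(path, "r") as f:
        x = torch.from_numpy(f[x_name][...]).float()
        y = torch.from_numpy(f[y_name][...]).float()
    return x, y

x_train, y_train = load_split("dataset.hdf5", "/Xt", "/Yt")  # 2048 examples
x_val,   y_val   = load_split("dataset.hdf5", "/Xv", "/Yv")  # 512 examples

train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x_train, y_train),
    batch_size=128, shuffle=True)   # "Batch Size: 128"
```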
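Each "Found checkpoint, attempting to load..." banner marks a restart of the task, and the epoch counter stepping back after a restart (for example from epoch 2032 back to 2016) suggests the run resumes from a periodic snapshot.pt rather than from the very last epoch completed. This run went all the way to the Max Epochs limit of 2048 before writing model-final.pt; the exact role of the "Patience: 10" setting is not visible in the log, so it is left out of the sketch below, which continues from the two sketches above and uses an assumed snapshot interval and checkpoint layout.

```python
# Hedged sketch of the snapshot/resume and stopping behaviour suggested by the
# log; snapshot interval, checkpoint keys and helper functions are assumptions.
# Continues from the model/optimizer/criterion and data sketches above.
import os
import torch

SNAPSHOT = "snapshot.pt"
MAX_EPOCHS = 2048          # "Max Epochs: 2048"
THRESHOLD = 0.0001         # "Validation Loss Threshold: 0.0001"
SNAPSHOT_EVERY = 16        # assumed; explains epochs replayed after restarts

def train_one_epoch(model, optimizer, criterion, loader):
    model.train()
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)   # target shape assumed to match output
        loss.backward()
        optimizer.step()
        total += loss.item() * len(xb)
    return total / len(loader.dataset)

@torch.no_grad()
def evaluate(model, criterion, x, y):
    model.eval()
    return criterion(model(x), y).item()

start_epoch = 1
if os.path.exists(SNAPSHOT):                      # "Found checkpoint, attempting to load..."
    state = torch.load(SNAPSHOT)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1              # resume from the last saved epoch

for epoch in range(start_epoch, MAX_EPOCHS + 1):
    train_loss = train_one_epoch(model, optimizer, criterion, train_loader)
    val_loss = evaluate(model, criterion, x_val, y_val)
    print(f"Epoch {epoch} | loss: {train_loss:.7g} | val_loss: {val_loss:.7g}")

    if val_loss < THRESHOLD:                      # target reached early
        break
    if epoch % SNAPSHOT_EVERY == 0:               # periodic checkpoint for restarts
        torch.save({"epoch": epoch,
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict()}, SNAPSHOT)

torch.save(model.state_dict(), "model-final.pt")  # "Saving trained model to model-final.pt"
```

On a BOINC host, this kind of periodic snapshot is what lets a suspended or preempted task resume without repeating the whole work unit, at the cost of re-running the epochs trained since the last snapshot, which is what the repeated epoch numbers in the log reflect.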
©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)