Task 12628563

Name ParityModified-1645996554-7574-2-0_0
Workunit 9782602
Created 7 Mar 2022, 3:26:08 UTC
Sent 8 Mar 2022, 20:00:34 UTC
Report deadline 16 Mar 2022, 20:00:34 UTC
Received 11 Mar 2022, 12:39:19 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x00000000)
Computer ID 6180
Run time 3 hours 44 min 12 sec
CPU time 3 hours 18 min 56 sec
Validate state Valid
Credit 4,160.00
Device peak FLOPS 884.73 GFLOPS
Application version Machine Learning Dataset Generator (GPU) v9.75 (cuda10200) windows_x86_64
Peak working set size 1.54 GB
Peak swap size 3.44 GB
Peak disk usage 1.54 GB

Stderr output

<core_client_version>7.16.20</core_client_version>
<![CDATA[
<stderr_txt>
[2022-03-10 22:44:20	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-10 22:44:20	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-10 22:44:20	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-10 22:44:20	                main:480]	:	INFO	:	    Patience: 10
[2022-03-10 22:44:20	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-10 22:44:20	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-10 22:44:20	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-10 22:44:20	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-10 22:44:20	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-10 22:44:20	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-10 22:44:20	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-10 22:44:25	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-10 22:44:25	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-10 22:44:25	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-10 22:44:25	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-10 22:44:25	                main:494]	:	INFO	:	Creating Model
[2022-03-10 22:44:25	                main:507]	:	INFO	:	Preparing config file
[2022-03-10 22:44:25	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-10 22:44:25	                main:512]	:	INFO	:	Loading config
[2022-03-10 22:44:25	                main:514]	:	INFO	:	Loading state
[2022-03-10 22:44:27	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-10 22:44:28	                main:562]	:	INFO	:	Starting Training
[2022-03-10 22:44:35	                main:574]	:	INFO	:	Epoch 1747 | loss: 0.0311844 | val_loss: 0.0311362 | Time: 6998.18 ms
[2022-03-10 22:44:41	                main:574]	:	INFO	:	Epoch 1748 | loss: 0.0311251 | val_loss: 0.0311044 | Time: 6171.81 ms
[2022-03-10 22:44:47	                main:574]	:	INFO	:	Epoch 1749 | loss: 0.0310939 | val_loss: 0.0310772 | Time: 6258.41 ms
[2022-03-10 22:44:53	                main:574]	:	INFO	:	Epoch 1750 | loss: 0.03107 | val_loss: 0.0310665 | Time: 6287.51 ms
[2022-03-10 22:45:00	                main:574]	:	INFO	:	Epoch 1751 | loss: 0.031063 | val_loss: 0.0310634 | Time: 6294.48 ms
[2022-03-10 22:45:06	                main:574]	:	INFO	:	Epoch 1752 | loss: 0.031061 | val_loss: 0.0310623 | Time: 6312.13 ms
[2022-03-10 22:45:12	                main:574]	:	INFO	:	Epoch 1753 | loss: 0.0310505 | val_loss: 0.0310408 | Time: 6269.32 ms
[2022-03-10 22:45:18	                main:574]	:	INFO	:	Epoch 1754 | loss: 0.0310401 | val_loss: 0.0310343 | Time: 6277.78 ms
[2022-03-10 22:45:25	                main:574]	:	INFO	:	Epoch 1755 | loss: 0.0310333 | val_loss: 0.0310249 | Time: 6271.13 ms
[2022-03-10 22:45:31	                main:574]	:	INFO	:	Epoch 1756 | loss: 0.0310219 | val_loss: 0.0310236 | Time: 6279.29 ms
[2022-03-10 22:45:37	                main:574]	:	INFO	:	Epoch 1757 | loss: 0.0310163 | val_loss: 0.031054 | Time: 6308.27 ms
[2022-03-10 22:45:44	                main:574]	:	INFO	:	Epoch 1758 | loss: 0.0310474 | val_loss: 0.0310463 | Time: 6303.97 ms
[2022-03-10 22:45:50	                main:574]	:	INFO	:	Epoch 1759 | loss: 0.0310362 | val_loss: 0.0310257 | Time: 6715.17 ms
[2022-03-10 22:45:58	                main:574]	:	INFO	:	Epoch 1760 | loss: 0.0310154 | val_loss: 0.0310559 | Time: 7250.71 ms
[2022-03-10 22:46:25	                main:574]	:	INFO	:	Epoch 1761 | loss: 0.0310177 | val_loss: 0.0310198 | Time: 27030.9 ms
[2022-03-10 22:46:31	                main:574]	:	INFO	:	Epoch 1762 | loss: 0.0310118 | val_loss: 0.0310239 | Time: 6242.65 ms
[2022-03-10 22:46:37	                main:574]	:	INFO	:	Epoch 1763 | loss: 0.0310128 | val_loss: 0.0310244 | Time: 6279.81 ms
[2022-03-10 22:46:44	                main:574]	:	INFO	:	Epoch 1764 | loss: 0.0310009 | val_loss: 0.0310114 | Time: 6310.28 ms
[2022-03-10 22:46:50	                main:574]	:	INFO	:	Epoch 1765 | loss: 0.0309991 | val_loss: 0.0310081 | Time: 6301.75 ms
[2022-03-10 22:46:56	                main:574]	:	INFO	:	Epoch 1766 | loss: 0.0309948 | val_loss: 0.0309984 | Time: 6470.93 ms
[2022-03-10 22:47:03	                main:574]	:	INFO	:	Epoch 1767 | loss: 0.0309847 | val_loss: 0.0310016 | Time: 6273.95 ms
[2022-03-10 22:47:09	                main:574]	:	INFO	:	Epoch 1768 | loss: 0.0309773 | val_loss: 0.0309923 | Time: 6303.05 ms
[2022-03-10 22:47:15	                main:574]	:	INFO	:	Epoch 1769 | loss: 0.0309838 | val_loss: 0.0309962 | Time: 6304.16 ms
[2022-03-10 22:47:22	                main:574]	:	INFO	:	Epoch 1770 | loss: 0.030983 | val_loss: 0.0309987 | Time: 6275.17 ms
[2022-03-10 22:47:28	                main:574]	:	INFO	:	Epoch 1771 | loss: 0.0309918 | val_loss: 0.0309908 | Time: 6280.92 ms
[2022-03-10 22:47:34	                main:574]	:	INFO	:	Epoch 1772 | loss: 0.0309963 | val_loss: 0.0310163 | Time: 6271.53 ms
[2022-03-10 22:47:40	                main:574]	:	INFO	:	Epoch 1773 | loss: 0.0310038 | val_loss: 0.0309982 | Time: 6283.59 ms
[2022-03-10 22:47:47	                main:574]	:	INFO	:	Epoch 1774 | loss: 0.0309868 | val_loss: 0.030987 | Time: 6285.89 ms
[2022-03-10 22:47:53	                main:574]	:	INFO	:	Epoch 1775 | loss: 0.0309879 | val_loss: 0.0309978 | Time: 6276.15 ms
[2022-03-10 22:47:59	                main:574]	:	INFO	:	Epoch 1776 | loss: 0.0309779 | val_loss: 0.0309784 | Time: 6279.23 ms
[2022-03-10 22:48:06	                main:574]	:	INFO	:	Epoch 1777 | loss: 0.0309742 | val_loss: 0.0309844 | Time: 6270.73 ms
[2022-03-10 22:48:12	                main:574]	:	INFO	:	Epoch 1778 | loss: 0.0309711 | val_loss: 0.0309827 | Time: 6259.31 ms
[2022-03-10 22:48:18	                main:574]	:	INFO	:	Epoch 1779 | loss: 0.0309625 | val_loss: 0.0309762 | Time: 6279.79 ms
[2022-03-10 22:48:24	                main:574]	:	INFO	:	Epoch 1780 | loss: 0.0309578 | val_loss: 0.0309811 | Time: 6258.13 ms
[2022-03-10 22:48:31	                main:574]	:	INFO	:	Epoch 1781 | loss: 0.0309735 | val_loss: 0.0309957 | Time: 6269.8 ms
[2022-03-10 22:48:37	                main:574]	:	INFO	:	Epoch 1782 | loss: 0.0309748 | val_loss: 0.030969 | Time: 6254.86 ms
[2022-03-10 22:48:43	                main:574]	:	INFO	:	Epoch 1783 | loss: 0.0309661 | val_loss: 0.0309764 | Time: 6291.36 ms
[2022-03-10 22:48:50	                main:574]	:	INFO	:	Epoch 1784 | loss: 0.0309719 | val_loss: 0.0309777 | Time: 6291.78 ms
[2022-03-10 22:48:56	                main:574]	:	INFO	:	Epoch 1785 | loss: 0.0309642 | val_loss: 0.030967 | Time: 6338.89 ms
[2022-03-10 22:49:02	                main:574]	:	INFO	:	Epoch 1786 | loss: 0.0309655 | val_loss: 0.0309842 | Time: 6278.31 ms
[2022-03-10 22:49:08	                main:574]	:	INFO	:	Epoch 1787 | loss: 0.0309989 | val_loss: 0.0309807 | Time: 6263.82 ms
[2022-03-10 22:49:15	                main:574]	:	INFO	:	Epoch 1788 | loss: 0.0309889 | val_loss: 0.0309961 | Time: 6280.78 ms
[2022-03-10 22:49:21	                main:574]	:	INFO	:	Epoch 1789 | loss: 0.0310255 | val_loss: 0.031034 | Time: 6264.03 ms
[2022-03-10 22:49:27	                main:574]	:	INFO	:	Epoch 1790 | loss: 0.0310095 | val_loss: 0.0309907 | Time: 6266.56 ms
[2022-03-10 22:49:34	                main:574]	:	INFO	:	Epoch 1791 | loss: 0.030974 | val_loss: 0.0309611 | Time: 6281.47 ms
[2022-03-10 22:49:40	                main:574]	:	INFO	:	Epoch 1792 | loss: 0.0309492 | val_loss: 0.0309481 | Time: 6282.11 ms
[2022-03-10 22:49:46	                main:574]	:	INFO	:	Epoch 1793 | loss: 0.0309365 | val_loss: 0.0309525 | Time: 6279.46 ms
[2022-03-10 22:49:52	                main:574]	:	INFO	:	Epoch 1794 | loss: 0.0309318 | val_loss: 0.0309458 | Time: 6261.21 ms
[2022-03-10 22:49:59	                main:574]	:	INFO	:	Epoch 1795 | loss: 0.0309402 | val_loss: 0.0309541 | Time: 6892.64 ms
[2022-03-10 22:50:06	                main:574]	:	INFO	:	Epoch 1796 | loss: 0.030944 | val_loss: 0.0309569 | Time: 6278.35 ms
[2022-03-10 22:50:12	                main:574]	:	INFO	:	Epoch 1797 | loss: 0.0309364 | val_loss: 0.0309383 | Time: 6256.86 ms
[2022-03-10 22:50:18	                main:574]	:	INFO	:	Epoch 1798 | loss: 0.0309257 | val_loss: 0.0309341 | Time: 6261.67 ms
[2022-03-10 22:50:24	                main:574]	:	INFO	:	Epoch 1799 | loss: 0.030926 | val_loss: 0.0309418 | Time: 6255.62 ms
[2022-03-10 22:50:31	                main:574]	:	INFO	:	Epoch 1800 | loss: 0.0309312 | val_loss: 0.0309436 | Time: 6253.96 ms
[2022-03-10 22:50:37	                main:574]	:	INFO	:	Epoch 1801 | loss: 0.0309223 | val_loss: 0.0309179 | Time: 6260.79 ms
[2022-03-10 22:50:43	                main:574]	:	INFO	:	Epoch 1802 | loss: 0.0309101 | val_loss: 0.0309158 | Time: 6302.34 ms
[2022-03-10 22:50:49	                main:574]	:	INFO	:	Epoch 1803 | loss: 0.0309055 | val_loss: 0.0309066 | Time: 6276.02 ms
[2022-03-10 22:50:56	                main:574]	:	INFO	:	Epoch 1804 | loss: 0.0308986 | val_loss: 0.0309183 | Time: 6646.41 ms
[2022-03-10 22:51:02	                main:574]	:	INFO	:	Epoch 1805 | loss: 0.0309058 | val_loss: 0.0309288 | Time: 6269.37 ms
[2022-03-10 22:51:09	                main:574]	:	INFO	:	Epoch 1806 | loss: 0.0309925 | val_loss: 0.030963 | Time: 6252.81 ms
[2022-03-10 22:51:15	                main:574]	:	INFO	:	Epoch 1807 | loss: 0.0309378 | val_loss: 0.0309382 | Time: 6262.71 ms
[2022-03-10 22:51:21	                main:574]	:	INFO	:	Epoch 1808 | loss: 0.030915 | val_loss: 0.0309139 | Time: 6258.76 ms
[2022-03-10 22:51:28	                main:574]	:	INFO	:	Epoch 1809 | loss: 0.0308893 | val_loss: 0.0308902 | Time: 6324.75 ms
[2022-03-10 22:51:34	                main:574]	:	INFO	:	Epoch 1810 | loss: 0.0309467 | val_loss: 0.0309796 | Time: 6306.51 ms
[2022-03-10 22:51:40	                main:574]	:	INFO	:	Epoch 1811 | loss: 0.0309504 | val_loss: 0.0309373 | Time: 6282.66 ms
[2022-03-10 22:51:46	                main:574]	:	INFO	:	Epoch 1812 | loss: 0.0309355 | val_loss: 0.0309733 | Time: 6270.08 ms
[2022-03-10 22:51:53	                main:574]	:	INFO	:	Epoch 1813 | loss: 0.0309697 | val_loss: 0.03097 | Time: 6268.32 ms
[2022-03-10 22:51:59	                main:574]	:	INFO	:	Epoch 1814 | loss: 0.0309467 | val_loss: 0.0309321 | Time: 6267.45 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:03:45	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:03:45	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:03:45	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:03:45	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:03:45	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:03:46	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:03:46	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:03:46	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:03:46	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:03:46	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:03:46	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:03:46	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:03:46	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:03:46	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:03:46	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:03:46	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:03:46	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:03:46	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:03:46	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:03:46	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:03:46	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:03:46	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:03:46	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:05:55	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:05:55	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:05:55	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:05:55	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:05:55	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:05:55	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:05:55	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:05:55	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:05:55	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:05:55	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:05:55	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:05:55	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:05:55	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:05:55	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:05:55	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:05:55	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:05:55	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:05:55	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:05:55	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:05:55	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:05:55	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:05:55	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:05:55	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:05:59	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:05:59	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:05:59	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:05:59	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:05:59	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:05:59	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:05:59	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:05:59	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:05:59	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:06:02	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:06:02	                main:562]	:	INFO	:	Starting Training
[2022-03-11 12:06:09	                main:574]	:	INFO	:	Epoch 1813 | loss: 0.0309689 | val_loss: 0.0309267 | Time: 7113.36 ms
[2022-03-11 12:06:15	                main:574]	:	INFO	:	Epoch 1814 | loss: 0.0308985 | val_loss: 0.0309041 | Time: 6181.56 ms
[2022-03-11 12:06:21	                main:574]	:	INFO	:	Epoch 1815 | loss: 0.0308799 | val_loss: 0.0308898 | Time: 6236.54 ms
[2022-03-11 12:06:27	                main:574]	:	INFO	:	Epoch 1816 | loss: 0.0308921 | val_loss: 0.0309078 | Time: 6267.98 ms
[2022-03-11 12:06:34	                main:574]	:	INFO	:	Epoch 1817 | loss: 0.0308893 | val_loss: 0.0308925 | Time: 6256.92 ms
[2022-03-11 12:06:40	                main:574]	:	INFO	:	Epoch 1818 | loss: 0.0309611 | val_loss: 0.0309796 | Time: 6263.55 ms
[2022-03-11 12:06:46	                main:574]	:	INFO	:	Epoch 1819 | loss: 0.0309303 | val_loss: 0.030915 | Time: 6352.92 ms
[2022-03-11 12:06:53	                main:574]	:	INFO	:	Epoch 1820 | loss: 0.0308948 | val_loss: 0.0308904 | Time: 6263.83 ms
[2022-03-11 12:06:59	                main:574]	:	INFO	:	Epoch 1821 | loss: 0.0308825 | val_loss: 0.0308757 | Time: 6283.41 ms
[2022-03-11 12:07:05	                main:574]	:	INFO	:	Epoch 1822 | loss: 0.0308751 | val_loss: 0.0308909 | Time: 6279.92 ms
[2022-03-11 12:07:12	                main:574]	:	INFO	:	Epoch 1823 | loss: 0.0308778 | val_loss: 0.0308995 | Time: 6262.55 ms
[2022-03-11 12:07:18	                main:574]	:	INFO	:	Epoch 1824 | loss: 0.0308717 | val_loss: 0.0308837 | Time: 6272.29 ms
[2022-03-11 12:07:24	                main:574]	:	INFO	:	Epoch 1825 | loss: 0.0308609 | val_loss: 0.0308588 | Time: 6263.48 ms
[2022-03-11 12:07:30	                main:574]	:	INFO	:	Epoch 1826 | loss: 0.0308453 | val_loss: 0.0308616 | Time: 6261.75 ms
[2022-03-11 12:07:37	                main:574]	:	INFO	:	Epoch 1827 | loss: 0.0308498 | val_loss: 0.0308561 | Time: 6265.84 ms
[2022-03-11 12:07:43	                main:574]	:	INFO	:	Epoch 1828 | loss: 0.0308428 | val_loss: 0.0308555 | Time: 6339.36 ms
[2022-03-11 12:07:49	                main:574]	:	INFO	:	Epoch 1829 | loss: 0.0308335 | val_loss: 0.0308478 | Time: 6264.71 ms
[2022-03-11 12:07:55	                main:574]	:	INFO	:	Epoch 1830 | loss: 0.0308332 | val_loss: 0.0308505 | Time: 6263.09 ms
[2022-03-11 12:08:02	                main:574]	:	INFO	:	Epoch 1831 | loss: 0.0308333 | val_loss: 0.0308481 | Time: 6253.17 ms
[2022-03-11 12:08:08	                main:574]	:	INFO	:	Epoch 1832 | loss: 0.030843 | val_loss: 0.0308485 | Time: 6255.12 ms
[2022-03-11 12:08:14	                main:574]	:	INFO	:	Epoch 1833 | loss: 0.0308238 | val_loss: 0.0308446 | Time: 6261.35 ms
[2022-03-11 12:08:21	                main:574]	:	INFO	:	Epoch 1834 | loss: 0.0308199 | val_loss: 0.0308495 | Time: 6266.46 ms
[2022-03-11 12:08:27	                main:574]	:	INFO	:	Epoch 1835 | loss: 0.030839 | val_loss: 0.0308584 | Time: 6263.44 ms
[2022-03-11 12:08:33	                main:574]	:	INFO	:	Epoch 1836 | loss: 0.0308341 | val_loss: 0.0308399 | Time: 6274.1 ms
[2022-03-11 12:08:39	                main:574]	:	INFO	:	Epoch 1837 | loss: 0.0308293 | val_loss: 0.0308377 | Time: 6266.07 ms
[2022-03-11 12:08:46	                main:574]	:	INFO	:	Epoch 1838 | loss: 0.0308348 | val_loss: 0.0308849 | Time: 6316.99 ms
[2022-03-11 12:08:52	                main:574]	:	INFO	:	Epoch 1839 | loss: 0.0308685 | val_loss: 0.0308851 | Time: 6432.82 ms
[2022-03-11 12:08:59	                main:574]	:	INFO	:	Epoch 1840 | loss: 0.0308417 | val_loss: 0.0308584 | Time: 6920.96 ms
[2022-03-11 12:09:05	                main:574]	:	INFO	:	Epoch 1841 | loss: 0.0308436 | val_loss: 0.0308765 | Time: 6308.06 ms
[2022-03-11 12:09:12	                main:574]	:	INFO	:	Epoch 1842 | loss: 0.0308392 | val_loss: 0.0308477 | Time: 6279.66 ms
[2022-03-11 12:09:18	                main:574]	:	INFO	:	Epoch 1843 | loss: 0.0308212 | val_loss: 0.0308401 | Time: 6262.22 ms
[2022-03-11 12:09:24	                main:574]	:	INFO	:	Epoch 1844 | loss: 0.0308185 | val_loss: 0.0308377 | Time: 6256.83 ms
[2022-03-11 12:09:30	                main:574]	:	INFO	:	Epoch 1845 | loss: 0.030826 | val_loss: 0.0308315 | Time: 6265.46 ms
[2022-03-11 12:09:37	                main:574]	:	INFO	:	Epoch 1846 | loss: 0.0308153 | val_loss: 0.0308318 | Time: 6258.01 ms
[2022-03-11 12:09:43	                main:574]	:	INFO	:	Epoch 1847 | loss: 0.0308111 | val_loss: 0.0308541 | Time: 6255.22 ms
[2022-03-11 12:09:49	                main:574]	:	INFO	:	Epoch 1848 | loss: 0.030859 | val_loss: 0.0308549 | Time: 6263.78 ms
[2022-03-11 12:09:56	                main:574]	:	INFO	:	Epoch 1849 | loss: 0.030835 | val_loss: 0.030853 | Time: 6270.14 ms
[2022-03-11 12:10:02	                main:574]	:	INFO	:	Epoch 1850 | loss: 0.0308208 | val_loss: 0.0308394 | Time: 6269.18 ms
[2022-03-11 12:10:08	                main:574]	:	INFO	:	Epoch 1851 | loss: 0.0308155 | val_loss: 0.0308338 | Time: 6256.45 ms
[2022-03-11 12:10:14	                main:574]	:	INFO	:	Epoch 1852 | loss: 0.0308044 | val_loss: 0.0308339 | Time: 6293.86 ms
[2022-03-11 12:10:21	                main:574]	:	INFO	:	Epoch 1853 | loss: 0.030804 | val_loss: 0.0308254 | Time: 6255.61 ms
[2022-03-11 12:10:27	                main:574]	:	INFO	:	Epoch 1854 | loss: 0.0308026 | val_loss: 0.0308166 | Time: 6273.71 ms
[2022-03-11 12:10:33	                main:574]	:	INFO	:	Epoch 1855 | loss: 0.030813 | val_loss: 0.0308351 | Time: 6270.25 ms
[2022-03-11 12:10:39	                main:574]	:	INFO	:	Epoch 1856 | loss: 0.030834 | val_loss: 0.030862 | Time: 6260.93 ms
[2022-03-11 12:10:46	                main:574]	:	INFO	:	Epoch 1857 | loss: 0.0308345 | val_loss: 0.030863 | Time: 6269.95 ms
[2022-03-11 12:10:52	                main:574]	:	INFO	:	Epoch 1858 | loss: 0.0308226 | val_loss: 0.0308397 | Time: 6260.01 ms
[2022-03-11 12:10:58	                main:574]	:	INFO	:	Epoch 1859 | loss: 0.0308156 | val_loss: 0.0308297 | Time: 6253.41 ms
[2022-03-11 12:11:04	                main:574]	:	INFO	:	Epoch 1860 | loss: 0.030806 | val_loss: 0.0308325 | Time: 6253.68 ms
[2022-03-11 12:11:11	                main:574]	:	INFO	:	Epoch 1861 | loss: 0.0308163 | val_loss: 0.0308515 | Time: 6262.46 ms
[2022-03-11 12:11:17	                main:574]	:	INFO	:	Epoch 1862 | loss: 0.0308082 | val_loss: 0.030832 | Time: 6273.32 ms
[2022-03-11 12:11:23	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0308146 | val_loss: 0.0308674 | Time: 6260.23 ms
[2022-03-11 12:11:30	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0308256 | val_loss: 0.0308408 | Time: 6260.98 ms
[2022-03-11 12:11:36	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.0308354 | val_loss: 0.0308557 | Time: 6261.71 ms
[2022-03-11 12:11:42	                main:574]	:	INFO	:	Epoch 1866 | loss: 0.0308282 | val_loss: 0.030839 | Time: 6255.79 ms
[2022-03-11 12:11:48	                main:574]	:	INFO	:	Epoch 1867 | loss: 0.0308158 | val_loss: 0.0308446 | Time: 6256.87 ms
[2022-03-11 12:11:55	                main:574]	:	INFO	:	Epoch 1868 | loss: 0.0308838 | val_loss: 0.0309158 | Time: 6263.51 ms
[2022-03-11 12:12:01	                main:574]	:	INFO	:	Epoch 1869 | loss: 0.0308854 | val_loss: 0.0308698 | Time: 6255.82 ms
[2022-03-11 12:12:07	                main:574]	:	INFO	:	Epoch 1870 | loss: 0.0308416 | val_loss: 0.0308484 | Time: 6278.17 ms
[2022-03-11 12:12:13	                main:574]	:	INFO	:	Epoch 1871 | loss: 0.030824 | val_loss: 0.0308356 | Time: 6267.41 ms
[2022-03-11 12:12:20	                main:574]	:	INFO	:	Epoch 1872 | loss: 0.030819 | val_loss: 0.0308259 | Time: 6270.18 ms
[2022-03-11 12:12:26	                main:574]	:	INFO	:	Epoch 1873 | loss: 0.0308219 | val_loss: 0.0308257 | Time: 6267.03 ms
[2022-03-11 12:12:32	                main:574]	:	INFO	:	Epoch 1874 | loss: 0.0308205 | val_loss: 0.0308492 | Time: 6255.29 ms
[2022-03-11 12:12:39	                main:574]	:	INFO	:	Epoch 1875 | loss: 0.0308699 | val_loss: 0.0308611 | Time: 6259.69 ms
[2022-03-11 12:12:45	                main:574]	:	INFO	:	Epoch 1876 | loss: 0.0308387 | val_loss: 0.0308293 | Time: 6268.81 ms
[2022-03-11 12:12:51	                main:574]	:	INFO	:	Epoch 1877 | loss: 0.0308453 | val_loss: 0.0309344 | Time: 6261.92 ms
[2022-03-11 12:12:57	                main:574]	:	INFO	:	Epoch 1878 | loss: 0.0310272 | val_loss: 0.0310353 | Time: 6270 ms
[2022-03-11 12:13:04	                main:574]	:	INFO	:	Epoch 1879 | loss: 0.031013 | val_loss: 0.0309779 | Time: 6268.14 ms
[2022-03-11 12:13:10	                main:574]	:	INFO	:	Epoch 1880 | loss: 0.0309578 | val_loss: 0.0309178 | Time: 6262.94 ms
[2022-03-11 12:13:16	                main:574]	:	INFO	:	Epoch 1881 | loss: 0.0309147 | val_loss: 0.0308901 | Time: 6258.22 ms
[2022-03-11 12:13:22	                main:574]	:	INFO	:	Epoch 1882 | loss: 0.0308893 | val_loss: 0.0308825 | Time: 6263.83 ms
[2022-03-11 12:13:29	                main:574]	:	INFO	:	Epoch 1883 | loss: 0.0308885 | val_loss: 0.0309453 | Time: 6262.57 ms
[2022-03-11 12:13:35	                main:574]	:	INFO	:	Epoch 1884 | loss: 0.0309531 | val_loss: 0.0309489 | Time: 6273.05 ms
[2022-03-11 12:13:41	                main:574]	:	INFO	:	Epoch 1885 | loss: 0.0309081 | val_loss: 0.0309287 | Time: 6277.31 ms
[2022-03-11 12:13:47	                main:574]	:	INFO	:	Epoch 1886 | loss: 0.0308778 | val_loss: 0.030887 | Time: 6264.67 ms
[2022-03-11 12:13:54	                main:574]	:	INFO	:	Epoch 1887 | loss: 0.0308614 | val_loss: 0.030962 | Time: 6264.1 ms
[2022-03-11 12:14:00	                main:574]	:	INFO	:	Epoch 1888 | loss: 0.031094 | val_loss: 0.0311198 | Time: 6261.56 ms
[2022-03-11 12:14:06	                main:574]	:	INFO	:	Epoch 1889 | loss: 0.0310786 | val_loss: 0.0310541 | Time: 6266.41 ms
[2022-03-11 12:14:13	                main:574]	:	INFO	:	Epoch 1890 | loss: 0.0310369 | val_loss: 0.0310319 | Time: 6263.09 ms
[2022-03-11 12:14:19	                main:574]	:	INFO	:	Epoch 1891 | loss: 0.031022 | val_loss: 0.0309927 | Time: 6254.08 ms
[2022-03-11 12:14:25	                main:574]	:	INFO	:	Epoch 1892 | loss: 0.0309752 | val_loss: 0.0309417 | Time: 6258.07 ms
[2022-03-11 12:14:31	                main:574]	:	INFO	:	Epoch 1893 | loss: 0.030916 | val_loss: 0.0309197 | Time: 6268.01 ms
[2022-03-11 12:14:38	                main:574]	:	INFO	:	Epoch 1894 | loss: 0.0308927 | val_loss: 0.0308965 | Time: 6257.73 ms
[2022-03-11 12:14:44	                main:574]	:	INFO	:	Epoch 1895 | loss: 0.0308822 | val_loss: 0.0308897 | Time: 6249.86 ms
[2022-03-11 12:14:50	                main:574]	:	INFO	:	Epoch 1896 | loss: 0.0308616 | val_loss: 0.0308752 | Time: 6261.59 ms
[2022-03-11 12:14:56	                main:574]	:	INFO	:	Epoch 1897 | loss: 0.0308607 | val_loss: 0.0308728 | Time: 6256.71 ms
[2022-03-11 12:15:03	                main:574]	:	INFO	:	Epoch 1898 | loss: 0.0308919 | val_loss: 0.0310069 | Time: 6269.63 ms
[2022-03-11 12:15:09	                main:574]	:	INFO	:	Epoch 1899 | loss: 0.0310491 | val_loss: 0.0310439 | Time: 6257.22 ms
[2022-03-11 12:15:15	                main:574]	:	INFO	:	Epoch 1900 | loss: 0.0310379 | val_loss: 0.0309749 | Time: 6267.14 ms
[2022-03-11 12:15:21	                main:574]	:	INFO	:	Epoch 1901 | loss: 0.0309558 | val_loss: 0.0308855 | Time: 6268.5 ms
[2022-03-11 12:15:28	                main:574]	:	INFO	:	Epoch 1902 | loss: 0.0309153 | val_loss: 0.0309136 | Time: 6269.16 ms
[2022-03-11 12:15:34	                main:574]	:	INFO	:	Epoch 1903 | loss: 0.0309034 | val_loss: 0.0308737 | Time: 6265.65 ms
[2022-03-11 12:15:40	                main:574]	:	INFO	:	Epoch 1904 | loss: 0.0308756 | val_loss: 0.030862 | Time: 6257.97 ms
[2022-03-11 12:15:47	                main:574]	:	INFO	:	Epoch 1905 | loss: 0.0308644 | val_loss: 0.0308543 | Time: 6258.67 ms
[2022-03-11 12:15:53	                main:574]	:	INFO	:	Epoch 1906 | loss: 0.0308812 | val_loss: 0.0308867 | Time: 6264.41 ms
[2022-03-11 12:15:59	                main:574]	:	INFO	:	Epoch 1907 | loss: 0.0308682 | val_loss: 0.0308578 | Time: 6271.07 ms
[2022-03-11 12:16:05	                main:574]	:	INFO	:	Epoch 1908 | loss: 0.0308859 | val_loss: 0.030871 | Time: 6302.63 ms
[2022-03-11 12:16:12	                main:574]	:	INFO	:	Epoch 1909 | loss: 0.0308864 | val_loss: 0.0308794 | Time: 6336.04 ms
[2022-03-11 12:16:18	                main:574]	:	INFO	:	Epoch 1910 | loss: 0.0309011 | val_loss: 0.0308785 | Time: 6302.35 ms
[2022-03-11 12:16:24	                main:574]	:	INFO	:	Epoch 1911 | loss: 0.0308759 | val_loss: 0.0308796 | Time: 6266.58 ms
[2022-03-11 12:16:31	                main:574]	:	INFO	:	Epoch 1912 | loss: 0.0308627 | val_loss: 0.0308572 | Time: 6262.33 ms
[2022-03-11 12:16:37	                main:574]	:	INFO	:	Epoch 1913 | loss: 0.0308482 | val_loss: 0.0308494 | Time: 6265.68 ms
[2022-03-11 12:16:43	                main:574]	:	INFO	:	Epoch 1914 | loss: 0.0308409 | val_loss: 0.0308444 | Time: 6264 ms
[2022-03-11 12:16:49	                main:574]	:	INFO	:	Epoch 1915 | loss: 0.0308346 | val_loss: 0.0308507 | Time: 6264.38 ms
[2022-03-11 12:16:56	                main:574]	:	INFO	:	Epoch 1916 | loss: 0.0308872 | val_loss: 0.0309989 | Time: 6255.72 ms
[2022-03-11 12:17:02	                main:574]	:	INFO	:	Epoch 1917 | loss: 0.0309384 | val_loss: 0.0308781 | Time: 6272.61 ms
[2022-03-11 12:17:08	                main:574]	:	INFO	:	Epoch 1918 | loss: 0.0308703 | val_loss: 0.0308532 | Time: 6264.29 ms
[2022-03-11 12:17:14	                main:574]	:	INFO	:	Epoch 1919 | loss: 0.0308502 | val_loss: 0.0308524 | Time: 6260 ms
[2022-03-11 12:17:21	                main:574]	:	INFO	:	Epoch 1920 | loss: 0.030856 | val_loss: 0.0308742 | Time: 6257.55 ms
[2022-03-11 12:17:27	                main:574]	:	INFO	:	Epoch 1921 | loss: 0.0308622 | val_loss: 0.0308497 | Time: 6263.92 ms
[2022-03-11 12:17:33	                main:574]	:	INFO	:	Epoch 1922 | loss: 0.0308432 | val_loss: 0.0308508 | Time: 6259.79 ms
[2022-03-11 12:17:40	                main:574]	:	INFO	:	Epoch 1923 | loss: 0.0308499 | val_loss: 0.03084 | Time: 6271.18 ms
[2022-03-11 12:17:46	                main:574]	:	INFO	:	Epoch 1924 | loss: 0.0308329 | val_loss: 0.0308359 | Time: 6272.87 ms
[2022-03-11 12:17:52	                main:574]	:	INFO	:	Epoch 1925 | loss: 0.0308239 | val_loss: 0.0308264 | Time: 6280.28 ms
[2022-03-11 12:17:58	                main:574]	:	INFO	:	Epoch 1926 | loss: 0.0308251 | val_loss: 0.0308198 | Time: 6256.02 ms
[2022-03-11 12:18:05	                main:574]	:	INFO	:	Epoch 1927 | loss: 0.030826 | val_loss: 0.0308262 | Time: 6263.12 ms
[2022-03-11 12:18:11	                main:574]	:	INFO	:	Epoch 1928 | loss: 0.0308211 | val_loss: 0.0308209 | Time: 6262.59 ms
[2022-03-11 12:18:17	                main:574]	:	INFO	:	Epoch 1929 | loss: 0.0308188 | val_loss: 0.0308287 | Time: 6261.14 ms
[2022-03-11 12:18:23	                main:574]	:	INFO	:	Epoch 1930 | loss: 0.030832 | val_loss: 0.03083 | Time: 6254.67 ms
[2022-03-11 12:18:30	                main:574]	:	INFO	:	Epoch 1931 | loss: 0.0308532 | val_loss: 0.0309112 | Time: 6267.12 ms
[2022-03-11 12:18:36	                main:574]	:	INFO	:	Epoch 1932 | loss: 0.0308786 | val_loss: 0.0308735 | Time: 6267.08 ms
[2022-03-11 12:18:42	                main:574]	:	INFO	:	Epoch 1933 | loss: 0.0308721 | val_loss: 0.0308555 | Time: 6251.05 ms
[2022-03-11 12:18:49	                main:574]	:	INFO	:	Epoch 1934 | loss: 0.0309327 | val_loss: 0.0309653 | Time: 6286.57 ms
[2022-03-11 12:18:55	                main:574]	:	INFO	:	Epoch 1935 | loss: 0.0309365 | val_loss: 0.0309017 | Time: 6366.58 ms
[2022-03-11 12:19:01	                main:574]	:	INFO	:	Epoch 1936 | loss: 0.0309012 | val_loss: 0.0308783 | Time: 6288.65 ms
[2022-03-11 12:19:07	                main:574]	:	INFO	:	Epoch 1937 | loss: 0.0308747 | val_loss: 0.030872 | Time: 6282.87 ms
[2022-03-11 12:19:14	                main:574]	:	INFO	:	Epoch 1938 | loss: 0.0308648 | val_loss: 0.0308674 | Time: 6268.98 ms
[2022-03-11 12:19:20	                main:574]	:	INFO	:	Epoch 1939 | loss: 0.0308445 | val_loss: 0.0308539 | Time: 6256.71 ms
[2022-03-11 12:19:26	                main:574]	:	INFO	:	Epoch 1940 | loss: 0.0308383 | val_loss: 0.0308614 | Time: 6251.93 ms
[2022-03-11 12:19:33	                main:574]	:	INFO	:	Epoch 1941 | loss: 0.0308473 | val_loss: 0.0308403 | Time: 6247.65 ms
[2022-03-11 12:19:39	                main:574]	:	INFO	:	Epoch 1942 | loss: 0.0308322 | val_loss: 0.0308343 | Time: 6266.36 ms
[2022-03-11 12:19:45	                main:574]	:	INFO	:	Epoch 1943 | loss: 0.0308289 | val_loss: 0.0308226 | Time: 6251.97 ms
[2022-03-11 12:19:51	                main:574]	:	INFO	:	Epoch 1944 | loss: 0.0308228 | val_loss: 0.0308411 | Time: 6255.9 ms
[2022-03-11 12:19:58	                main:574]	:	INFO	:	Epoch 1945 | loss: 0.0308259 | val_loss: 0.0308312 | Time: 6523.15 ms
[2022-03-11 12:20:04	                main:574]	:	INFO	:	Epoch 1946 | loss: 0.0308194 | val_loss: 0.0308405 | Time: 6262.73 ms
[2022-03-11 12:20:10	                main:574]	:	INFO	:	Epoch 1947 | loss: 0.0308134 | val_loss: 0.0308281 | Time: 6252.04 ms
[2022-03-11 12:20:17	                main:574]	:	INFO	:	Epoch 1948 | loss: 0.0308109 | val_loss: 0.030844 | Time: 6251.47 ms
[2022-03-11 12:20:23	                main:574]	:	INFO	:	Epoch 1949 | loss: 0.0310048 | val_loss: 0.0310148 | Time: 6270.93 ms
[2022-03-11 12:20:29	                main:574]	:	INFO	:	Epoch 1950 | loss: 0.0308889 | val_loss: 0.0308563 | Time: 6500.91 ms
[2022-03-11 12:20:36	                main:574]	:	INFO	:	Epoch 1951 | loss: 0.0310306 | val_loss: 0.0311625 | Time: 6339.37 ms
[2022-03-11 12:20:42	                main:574]	:	INFO	:	Epoch 1952 | loss: 0.0311532 | val_loss: 0.0311311 | Time: 6280.33 ms
[2022-03-11 12:20:48	                main:574]	:	INFO	:	Epoch 1953 | loss: 0.0311089 | val_loss: 0.0310893 | Time: 6257.97 ms
[2022-03-11 12:20:55	                main:574]	:	INFO	:	Epoch 1954 | loss: 0.0310659 | val_loss: 0.0310339 | Time: 6266.21 ms
[2022-03-11 12:21:01	                main:574]	:	INFO	:	Epoch 1955 | loss: 0.0310188 | val_loss: 0.0309869 | Time: 6264.9 ms
[2022-03-11 12:21:07	                main:574]	:	INFO	:	Epoch 1956 | loss: 0.030978 | val_loss: 0.0309531 | Time: 6270.07 ms
[2022-03-11 12:21:13	                main:574]	:	INFO	:	Epoch 1957 | loss: 0.0309501 | val_loss: 0.0309248 | Time: 6264.11 ms
[2022-03-11 12:21:20	                main:574]	:	INFO	:	Epoch 1958 | loss: 0.0309247 | val_loss: 0.0309225 | Time: 6268.78 ms
[2022-03-11 12:21:26	                main:574]	:	INFO	:	Epoch 1959 | loss: 0.0309262 | val_loss: 0.0309017 | Time: 6264.6 ms
[2022-03-11 12:21:32	                main:574]	:	INFO	:	Epoch 1960 | loss: 0.030949 | val_loss: 0.0309863 | Time: 6272.35 ms
[2022-03-11 12:21:38	                main:574]	:	INFO	:	Epoch 1961 | loss: 0.0310333 | val_loss: 0.0311024 | Time: 6268.34 ms
[2022-03-11 12:21:45	                main:574]	:	INFO	:	Epoch 1962 | loss: 0.031084 | val_loss: 0.0310502 | Time: 6264.7 ms
[2022-03-11 12:21:51	                main:574]	:	INFO	:	Epoch 1963 | loss: 0.0310377 | val_loss: 0.0310135 | Time: 6266.29 ms
[2022-03-11 12:21:57	                main:574]	:	INFO	:	Epoch 1964 | loss: 0.0309915 | val_loss: 0.0309593 | Time: 6267.85 ms
[2022-03-11 12:22:04	                main:574]	:	INFO	:	Epoch 1965 | loss: 0.0309484 | val_loss: 0.0309389 | Time: 6261.53 ms
[2022-03-11 12:22:10	                main:574]	:	INFO	:	Epoch 1966 | loss: 0.0309317 | val_loss: 0.0309257 | Time: 6266.93 ms
[2022-03-11 12:22:16	                main:574]	:	INFO	:	Epoch 1967 | loss: 0.0309093 | val_loss: 0.0309024 | Time: 6266.47 ms
[2022-03-11 12:22:22	                main:574]	:	INFO	:	Epoch 1968 | loss: 0.0308935 | val_loss: 0.0308813 | Time: 6265.17 ms
[2022-03-11 12:22:29	                main:574]	:	INFO	:	Epoch 1969 | loss: 0.0308777 | val_loss: 0.030877 | Time: 6274.69 ms
[2022-03-11 12:22:35	                main:574]	:	INFO	:	Epoch 1970 | loss: 0.0308701 | val_loss: 0.0308624 | Time: 6252.68 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:23:37	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:23:37	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:23:37	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:23:37	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:23:37	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:23:37	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:23:37	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:23:37	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:23:37	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:23:37	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:23:37	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:23:37	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:23:37	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:23:37	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:23:37	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:23:37	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:23:37	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:23:37	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:23:37	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:23:37	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:23:37	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:23:37	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:23:38	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:23:40	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:23:40	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:23:40	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:23:40	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:23:40	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:23:40	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:23:40	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:23:40	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:23:40	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:23:41	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:23:41	                main:562]	:	INFO	:	Starting Training
[2022-03-11 12:23:48	                main:574]	:	INFO	:	Epoch 1962 | loss: 0.0310649 | val_loss: 0.0309443 | Time: 6254.96 ms
[2022-03-11 12:23:54	                main:574]	:	INFO	:	Epoch 1963 | loss: 0.0309066 | val_loss: 0.0308839 | Time: 6087.02 ms
[2022-03-11 12:24:00	                main:574]	:	INFO	:	Epoch 1964 | loss: 0.0308707 | val_loss: 0.030877 | Time: 6195.27 ms
[2022-03-11 12:24:06	                main:574]	:	INFO	:	Epoch 1965 | loss: 0.0308585 | val_loss: 0.0308616 | Time: 6230.32 ms
[2022-03-11 12:24:12	                main:574]	:	INFO	:	Epoch 1966 | loss: 0.0308884 | val_loss: 0.0308924 | Time: 6250.89 ms
[2022-03-11 12:24:19	                main:574]	:	INFO	:	Epoch 1967 | loss: 0.0308806 | val_loss: 0.0308818 | Time: 6268.86 ms
[2022-03-11 12:24:25	                main:574]	:	INFO	:	Epoch 1968 | loss: 0.0308708 | val_loss: 0.0308693 | Time: 6263.61 ms
[2022-03-11 12:24:31	                main:574]	:	INFO	:	Epoch 1969 | loss: 0.0308418 | val_loss: 0.0308505 | Time: 6253.96 ms
[2022-03-11 12:24:37	                main:574]	:	INFO	:	Epoch 1970 | loss: 0.0308301 | val_loss: 0.03086 | Time: 6252.57 ms
[2022-03-11 12:24:44	                main:574]	:	INFO	:	Epoch 1971 | loss: 0.0308582 | val_loss: 0.0308694 | Time: 6255.31 ms
[2022-03-11 12:24:50	                main:574]	:	INFO	:	Epoch 1972 | loss: 0.0308547 | val_loss: 0.0308468 | Time: 6263.25 ms
[2022-03-11 12:24:56	                main:574]	:	INFO	:	Epoch 1973 | loss: 0.0308249 | val_loss: 0.0308239 | Time: 6253.73 ms
[2022-03-11 12:25:02	                main:574]	:	INFO	:	Epoch 1974 | loss: 0.0308109 | val_loss: 0.0308175 | Time: 6247.97 ms
[2022-03-11 12:25:09	                main:574]	:	INFO	:	Epoch 1975 | loss: 0.0308154 | val_loss: 0.0308656 | Time: 6250.79 ms
[2022-03-11 12:25:15	                main:574]	:	INFO	:	Epoch 1976 | loss: 0.0308738 | val_loss: 0.0308649 | Time: 6257.29 ms
[2022-03-11 12:25:21	                main:574]	:	INFO	:	Epoch 1977 | loss: 0.0308493 | val_loss: 0.0308487 | Time: 6256.43 ms
[2022-03-11 12:25:27	                main:574]	:	INFO	:	Epoch 1978 | loss: 0.0308162 | val_loss: 0.0308441 | Time: 6256.21 ms
[2022-03-11 12:25:34	                main:574]	:	INFO	:	Epoch 1979 | loss: 0.0308865 | val_loss: 0.0309292 | Time: 6263.06 ms
[2022-03-11 12:25:40	                main:574]	:	INFO	:	Epoch 1980 | loss: 0.0309183 | val_loss: 0.0308999 | Time: 6254.48 ms
[2022-03-11 12:25:46	                main:574]	:	INFO	:	Epoch 1981 | loss: 0.030876 | val_loss: 0.0308536 | Time: 6252.78 ms
[2022-03-11 12:25:53	                main:574]	:	INFO	:	Epoch 1982 | loss: 0.0308353 | val_loss: 0.0308558 | Time: 6251.64 ms
[2022-03-11 12:25:59	                main:574]	:	INFO	:	Epoch 1983 | loss: 0.030834 | val_loss: 0.0308167 | Time: 6253.99 ms
[2022-03-11 12:26:05	                main:574]	:	INFO	:	Epoch 1984 | loss: 0.0308079 | val_loss: 0.0308181 | Time: 6271.52 ms
[2022-03-11 12:26:12	                main:574]	:	INFO	:	Epoch 1985 | loss: 0.0308193 | val_loss: 0.0308652 | Time: 6512.24 ms
[2022-03-11 12:26:18	                main:574]	:	INFO	:	Epoch 1986 | loss: 0.0308338 | val_loss: 0.0308366 | Time: 6303.08 ms
[2022-03-11 12:26:24	                main:574]	:	INFO	:	Epoch 1987 | loss: 0.0308076 | val_loss: 0.0308439 | Time: 6278.95 ms
[2022-03-11 12:26:30	                main:574]	:	INFO	:	Epoch 1988 | loss: 0.0308187 | val_loss: 0.0308259 | Time: 6268.4 ms
[2022-03-11 12:26:37	                main:574]	:	INFO	:	Epoch 1989 | loss: 0.0308069 | val_loss: 0.0308286 | Time: 6273.02 ms
[2022-03-11 12:26:43	                main:574]	:	INFO	:	Epoch 1990 | loss: 0.0308016 | val_loss: 0.0308151 | Time: 6257.31 ms
[2022-03-11 12:26:49	                main:574]	:	INFO	:	Epoch 1991 | loss: 0.03079 | val_loss: 0.0308147 | Time: 6252.15 ms
[2022-03-11 12:26:55	                main:574]	:	INFO	:	Epoch 1992 | loss: 0.0308068 | val_loss: 0.0308234 | Time: 6260.65 ms
[2022-03-11 12:27:02	                main:574]	:	INFO	:	Epoch 1993 | loss: 0.0308275 | val_loss: 0.0308495 | Time: 6257.47 ms
[2022-03-11 12:27:08	                main:574]	:	INFO	:	Epoch 1994 | loss: 0.0308165 | val_loss: 0.0308096 | Time: 6261.84 ms
[2022-03-11 12:27:14	                main:574]	:	INFO	:	Epoch 1995 | loss: 0.0307958 | val_loss: 0.030797 | Time: 6247.62 ms
[2022-03-11 12:27:21	                main:574]	:	INFO	:	Epoch 1996 | loss: 0.0307875 | val_loss: 0.0308002 | Time: 6260.88 ms
[2022-03-11 12:27:27	                main:574]	:	INFO	:	Epoch 1997 | loss: 0.0307841 | val_loss: 0.0307931 | Time: 6259.34 ms
[2022-03-11 12:27:33	                main:574]	:	INFO	:	Epoch 1998 | loss: 0.0307872 | val_loss: 0.0307964 | Time: 6249.03 ms
[2022-03-11 12:27:39	                main:574]	:	INFO	:	Epoch 1999 | loss: 0.0308201 | val_loss: 0.030918 | Time: 6248.51 ms
[2022-03-11 12:27:46	                main:574]	:	INFO	:	Epoch 2000 | loss: 0.030879 | val_loss: 0.03087 | Time: 6264.07 ms
[2022-03-11 12:27:52	                main:574]	:	INFO	:	Epoch 2001 | loss: 0.0308647 | val_loss: 0.0308564 | Time: 6258.78 ms
[2022-03-11 12:27:58	                main:574]	:	INFO	:	Epoch 2002 | loss: 0.0308049 | val_loss: 0.0307973 | Time: 6257.06 ms
[2022-03-11 12:28:04	                main:574]	:	INFO	:	Epoch 2003 | loss: 0.0307911 | val_loss: 0.0307966 | Time: 6262.01 ms
[2022-03-11 12:28:11	                main:574]	:	INFO	:	Epoch 2004 | loss: 0.0308194 | val_loss: 0.0309028 | Time: 6247.94 ms
[2022-03-11 12:28:17	                main:574]	:	INFO	:	Epoch 2005 | loss: 0.0309326 | val_loss: 0.0309289 | Time: 6264.21 ms
[2022-03-11 12:28:23	                main:574]	:	INFO	:	Epoch 2006 | loss: 0.0309891 | val_loss: 0.030959 | Time: 6248.6 ms
[2022-03-11 12:28:29	                main:574]	:	INFO	:	Epoch 2007 | loss: 0.0309186 | val_loss: 0.0309115 | Time: 6255.95 ms
[2022-03-11 12:28:36	                main:574]	:	INFO	:	Epoch 2008 | loss: 0.0308816 | val_loss: 0.0308616 | Time: 6250.71 ms
[2022-03-11 12:28:42	                main:574]	:	INFO	:	Epoch 2009 | loss: 0.0308385 | val_loss: 0.0308498 | Time: 6262.18 ms
[2022-03-11 12:28:48	                main:574]	:	INFO	:	Epoch 2010 | loss: 0.0308286 | val_loss: 0.0308325 | Time: 6312.28 ms
[2022-03-11 12:28:55	                main:574]	:	INFO	:	Epoch 2011 | loss: 0.0308193 | val_loss: 0.0308466 | Time: 6436.82 ms
[2022-03-11 12:29:01	                main:574]	:	INFO	:	Epoch 2012 | loss: 0.0308421 | val_loss: 0.0308333 | Time: 6317.54 ms
[2022-03-11 12:29:07	                main:574]	:	INFO	:	Epoch 2013 | loss: 0.0308174 | val_loss: 0.0308147 | Time: 6279.02 ms
[2022-03-11 12:29:14	                main:574]	:	INFO	:	Epoch 2014 | loss: 0.0308036 | val_loss: 0.0308213 | Time: 6277.94 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:30:19	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:30:19	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:30:19	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:30:19	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:30:20	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:30:20	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:30:20	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:30:20	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:30:20	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:30:20	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:30:20	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:30:20	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:30:20	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:30:20	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:30:20	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:30:20	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:30:20	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:30:20	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:30:20	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:30:20	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:30:20	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:30:20	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:30:26	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:30:29	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:30:29	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:30:30	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:30:30	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:30:30	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:30:30	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:30:30	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:30:30	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:30:30	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:30:31	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:30:31	                main:562]	:	INFO	:	Starting Training
[2022-03-11 12:30:40	                main:574]	:	INFO	:	Epoch 2011 | loss: 0.0309055 | val_loss: 0.0308569 | Time: 8751.44 ms
[2022-03-11 12:30:46	                main:574]	:	INFO	:	Epoch 2012 | loss: 0.0308157 | val_loss: 0.0308198 | Time: 6339.24 ms
[2022-03-11 12:30:53	                main:574]	:	INFO	:	Epoch 2013 | loss: 0.0308187 | val_loss: 0.0308362 | Time: 6410.36 ms
[2022-03-11 12:30:59	                main:574]	:	INFO	:	Epoch 2014 | loss: 0.0307985 | val_loss: 0.0308015 | Time: 6441.05 ms
[2022-03-11 12:31:05	                main:574]	:	INFO	:	Epoch 2015 | loss: 0.0307868 | val_loss: 0.0308036 | Time: 6408.18 ms
[2022-03-11 12:31:12	                main:574]	:	INFO	:	Epoch 2016 | loss: 0.0307848 | val_loss: 0.0308054 | Time: 6348.88 ms
[2022-03-11 12:31:18	                main:574]	:	INFO	:	Epoch 2017 | loss: 0.0307787 | val_loss: 0.030787 | Time: 6306.78 ms
[2022-03-11 12:31:24	                main:574]	:	INFO	:	Epoch 2018 | loss: 0.0307776 | val_loss: 0.0307938 | Time: 6291.35 ms
[2022-03-11 12:31:31	                main:574]	:	INFO	:	Epoch 2019 | loss: 0.0307845 | val_loss: 0.0308448 | Time: 6299.59 ms
[2022-03-11 12:31:37	                main:574]	:	INFO	:	Epoch 2020 | loss: 0.0308383 | val_loss: 0.0308404 | Time: 6760.96 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:32:44	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:32:44	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:32:44	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:32:44	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:32:44	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:32:44	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:32:44	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:32:44	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:32:44	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:32:44	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:32:44	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:32:44	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:32:44	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:32:44	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:32:44	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:32:44	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:32:44	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:32:44	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:32:44	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:32:44	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:32:44	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:32:44	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:32:45	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:32:47	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:32:47	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:32:47	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:32:47	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:32:47	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:32:47	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:32:47	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:32:47	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:32:47	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:32:48	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:32:48	                main:562]	:	INFO	:	Starting Training
[2022-03-11 12:32:55	                main:574]	:	INFO	:	Epoch 2019 | loss: 0.0308683 | val_loss: 0.0308024 | Time: 6330.73 ms
[2022-03-11 12:33:01	                main:574]	:	INFO	:	Epoch 2020 | loss: 0.0307875 | val_loss: 0.0307981 | Time: 6118.11 ms
[2022-03-11 12:33:07	                main:574]	:	INFO	:	Epoch 2021 | loss: 0.0307757 | val_loss: 0.0307894 | Time: 6195.7 ms
[2022-03-11 12:33:13	                main:574]	:	INFO	:	Epoch 2022 | loss: 0.0307998 | val_loss: 0.0308042 | Time: 6236.24 ms
[2022-03-11 12:33:20	                main:574]	:	INFO	:	Epoch 2023 | loss: 0.0307829 | val_loss: 0.0307991 | Time: 6267.23 ms
[2022-03-11 12:33:26	                main:574]	:	INFO	:	Epoch 2024 | loss: 0.0307759 | val_loss: 0.0308076 | Time: 6271.17 ms
[2022-03-11 12:33:32	                main:574]	:	INFO	:	Epoch 2025 | loss: 0.0307818 | val_loss: 0.0308092 | Time: 6318.58 ms
[2022-03-11 12:33:38	                main:574]	:	INFO	:	Epoch 2026 | loss: 0.0307898 | val_loss: 0.0308258 | Time: 6255.52 ms
[2022-03-11 12:33:45	                main:574]	:	INFO	:	Epoch 2027 | loss: 0.0308155 | val_loss: 0.0308457 | Time: 6259.01 ms
[2022-03-11 12:33:51	                main:574]	:	INFO	:	Epoch 2028 | loss: 0.0308085 | val_loss: 0.0308185 | Time: 6256.82 ms
[2022-03-11 12:33:57	                main:574]	:	INFO	:	Epoch 2029 | loss: 0.0308103 | val_loss: 0.0308083 | Time: 6263.09 ms
[2022-03-11 12:34:03	                main:574]	:	INFO	:	Epoch 2030 | loss: 0.0307823 | val_loss: 0.0307975 | Time: 6251.9 ms
[2022-03-11 12:34:10	                main:574]	:	INFO	:	Epoch 2031 | loss: 0.0307743 | val_loss: 0.0307997 | Time: 6267.13 ms
[2022-03-11 12:34:16	                main:574]	:	INFO	:	Epoch 2032 | loss: 0.0307772 | val_loss: 0.0307839 | Time: 6261.09 ms
[2022-03-11 12:34:22	                main:574]	:	INFO	:	Epoch 2033 | loss: 0.0307591 | val_loss: 0.0307837 | Time: 6258.38 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:35:29	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:35:29	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:35:29	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:35:29	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:35:29	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:35:29	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:35:29	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:35:29	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:35:29	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:35:29	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:35:29	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:35:29	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:35:29	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:35:29	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:35:29	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:35:29	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:35:29	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:35:29	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:35:29	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:35:29	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:35:29	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:35:29	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:35:30	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:35:32	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:35:32	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:35:32	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:35:32	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:35:32	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:35:32	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:35:32	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:35:32	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:35:32	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:35:33	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:35:33	                main:562]	:	INFO	:	Starting Training
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-11 12:36:45	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-11 12:36:45	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-11 12:36:45	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-11 12:36:45	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-11 12:36:45	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-11 12:36:45	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-11 12:36:45	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-11 12:36:46	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-11 12:36:46	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-11 12:36:46	                main:474]	:	INFO	:	Configuration: 
[2022-03-11 12:36:46	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-11 12:36:46	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-11 12:36:46	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-11 12:36:46	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-11 12:36:46	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-11 12:36:46	                main:480]	:	INFO	:	    Patience: 10
[2022-03-11 12:36:46	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-11 12:36:46	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-11 12:36:46	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-11 12:36:46	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-11 12:36:46	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-11 12:36:46	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-11 12:36:46	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-11 12:36:48	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-11 12:36:48	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-11 12:36:49	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-11 12:36:49	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-11 12:36:49	                main:494]	:	INFO	:	Creating Model
[2022-03-11 12:36:49	                main:507]	:	INFO	:	Preparing config file
[2022-03-11 12:36:49	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-11 12:36:49	                main:512]	:	INFO	:	Loading config
[2022-03-11 12:36:49	                main:514]	:	INFO	:	Loading state
[2022-03-11 12:36:50	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-11 12:36:50	                main:562]	:	INFO	:	Starting Training
[2022-03-11 12:36:56	                main:574]	:	INFO	:	Epoch 2028 | loss: 0.030859 | val_loss: 0.030824 | Time: 6339.26 ms
[2022-03-11 12:37:02	                main:574]	:	INFO	:	Epoch 2029 | loss: 0.0307866 | val_loss: 0.0308106 | Time: 6128.14 ms
[2022-03-11 12:37:08	                main:574]	:	INFO	:	Epoch 2030 | loss: 0.0307915 | val_loss: 0.0308012 | Time: 6206.82 ms
[2022-03-11 12:37:15	                main:574]	:	INFO	:	Epoch 2031 | loss: 0.0308026 | val_loss: 0.0308006 | Time: 6251.99 ms
[2022-03-11 12:37:21	                main:574]	:	INFO	:	Epoch 2032 | loss: 0.0308714 | val_loss: 0.0309283 | Time: 6260.42 ms
[2022-03-11 12:37:27	                main:574]	:	INFO	:	Epoch 2033 | loss: 0.0309122 | val_loss: 0.0308619 | Time: 6266.58 ms
[2022-03-11 12:37:33	                main:574]	:	INFO	:	Epoch 2034 | loss: 0.0308407 | val_loss: 0.0308379 | Time: 6254.87 ms
[2022-03-11 12:37:40	                main:574]	:	INFO	:	Epoch 2035 | loss: 0.030815 | val_loss: 0.0308153 | Time: 6254.93 ms
[2022-03-11 12:37:46	                main:574]	:	INFO	:	Epoch 2036 | loss: 0.0307902 | val_loss: 0.0307869 | Time: 6266.54 ms
[2022-03-11 12:37:52	                main:574]	:	INFO	:	Epoch 2037 | loss: 0.0308103 | val_loss: 0.0308178 | Time: 6269.41 ms
[2022-03-11 12:37:58	                main:574]	:	INFO	:	Epoch 2038 | loss: 0.0307901 | val_loss: 0.0308019 | Time: 6261.44 ms
[2022-03-11 12:38:05	                main:574]	:	INFO	:	Epoch 2039 | loss: 0.0307703 | val_loss: 0.0307968 | Time: 6256.61 ms
[2022-03-11 12:38:11	                main:574]	:	INFO	:	Epoch 2040 | loss: 0.0307767 | val_loss: 0.0308469 | Time: 6264.49 ms
[2022-03-11 12:38:17	                main:574]	:	INFO	:	Epoch 2041 | loss: 0.0308918 | val_loss: 0.0308981 | Time: 6254.56 ms
[2022-03-11 12:38:23	                main:574]	:	INFO	:	Epoch 2042 | loss: 0.0308536 | val_loss: 0.0308459 | Time: 6248.11 ms
[2022-03-11 12:38:30	                main:574]	:	INFO	:	Epoch 2043 | loss: 0.0308463 | val_loss: 0.0309121 | Time: 6261.98 ms
[2022-03-11 12:38:36	                main:574]	:	INFO	:	Epoch 2044 | loss: 0.0308413 | val_loss: 0.0308224 | Time: 6261.42 ms
[2022-03-11 12:38:42	                main:574]	:	INFO	:	Epoch 2045 | loss: 0.0308193 | val_loss: 0.0308124 | Time: 6260.72 ms
[2022-03-11 12:38:49	                main:574]	:	INFO	:	Epoch 2046 | loss: 0.0307941 | val_loss: 0.030809 | Time: 6285.09 ms
[2022-03-11 12:38:55	                main:574]	:	INFO	:	Epoch 2047 | loss: 0.0307849 | val_loss: 0.0307996 | Time: 6267.89 ms
[2022-03-11 12:39:01	                main:574]	:	INFO	:	Epoch 2048 | loss: 0.0307737 | val_loss: 0.0308018 | Time: 6262.28 ms
[2022-03-11 12:39:01	                main:597]	:	INFO	:	Saving trained model to model-final.pt, val_loss 0.0308018
[2022-03-11 12:39:01	                main:603]	:	INFO	:	Saving end state to config to file
[2022-03-11 12:39:01	                main:608]	:	INFO	:	Success, exiting..
12:39:01 (2020): called boinc_finish(0)

</stderr_txt>
]]>
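
The repeated startup banners in the log above mark the BOINC client stopping and restarting the science app. On each restart the app finds `snapshot.pt`, restores its state ("Found checkpoint, attempting to load..."), and resumes several epochs *below* the last epoch it had logged, because the snapshot lags the live training loop. A minimal sketch of that save/resume pattern, using Python's `pickle` in place of the app's libTorch serialization (the file name is borrowed from the log; the dictionary keys are illustrative, not the app's actual format):

```python
import os
import pickle

SNAPSHOT = "snapshot.pt"  # name taken from the log; contents here are illustrative

def save_snapshot(epoch, state):
    # Write to a temp file and rename, so a mid-write shutdown
    # (common under BOINC) cannot corrupt the checkpoint.
    tmp = SNAPSHOT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"epoch": epoch, "state": state}, f)
    os.replace(tmp, SNAPSHOT)

def load_snapshot():
    # Mirrors "Found checkpoint, attempting to load..." in the log.
    if not os.path.exists(SNAPSHOT):
        return None
    with open(SNAPSHOT, "rb") as f:
        return pickle.load(f)

start_epoch = 1
snap = load_snapshot()
if snap is not None:
    start_epoch = snap["epoch"] + 1  # resume from the epoch after the snapshot
```

If snapshots are only written every few epochs, a restart resumes from the last snapshot rather than the last logged epoch — which is why, for example, the run above logs epoch 1970 before a restart and then resumes at epoch 1962.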


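The run ends by hitting the `Max Epochs: 2048` cap rather than the `Validation Loss Threshold: 0.0001`: val_loss plateaus around 0.0308, far above the target. The log does not show how the app uses its `Patience: 10` setting; one common reading is patience-based early stopping, sketched here as an assumption rather than the app's confirmed behavior:

```python
def train_with_patience(val_losses, patience=10, threshold=1e-4):
    """Stop when val_loss reaches `threshold`, or when it fails to
    improve for `patience` consecutive epochs; else run to the cap.
    Returns (epochs_run, reason)."""
    best = float("inf")
    stale = 0
    for epoch, val_loss in enumerate(val_losses, start=1):
        if val_loss <= threshold:
            return epoch, "threshold"
        if val_loss < best:
            best, stale = val_loss, 0  # improvement resets the counter
        else:
            stale += 1
            if stale >= patience:
                return epoch, "patience"
    return len(val_losses), "max_epochs"
```

Under this reading the outcome above is consistent: the val_loss oscillations around 0.0308 keep producing marginal new bests (e.g. 0.0307837 at epoch 2033), resetting the patience counter, so neither stopping condition fires and training runs to epoch 2048.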
©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)