Task 11520710

Name ParityModified-1639958860-4484-2-0_1
Workunit 8356463
Created 13 Feb 2022, 3:24:29 UTC
Sent 13 Feb 2022, 5:51:57 UTC
Report deadline 21 Feb 2022, 5:51:57 UTC
Received 21 Feb 2022, 0:08:08 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x00000000)
Computer ID 16732
Run time 2 hours 52 min 42 sec
CPU time 2 hours 48 min 20 sec
Validate state Valid
Credit 4,160.00
Device peak FLOPS 1,132.04 GFLOPS
Application version Machine Learning Dataset Generator (GPU) v9.75 (cuda10200)
windows_x86_64
Peak working set size 1.62 GB
Peak swap size 3.57 GB
Peak disk usage 1.54 GB

Stderr output

<core_client_version>7.16.20</core_client_version>
<![CDATA[
<stderr_txt>
84]	:	INFO	:	    # Threads: 1
[2022-02-20 11:24:08	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 11:24:08	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 11:24:08	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 11:24:10	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 11:24:10	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 11:24:10	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 11:24:10	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
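The worker itself is a C++/libTorch binary, but the load_hdf5_ds_into_tensor / load steps above follow a simple pattern: read each named HDF5 dataset whole into a host-memory tensor (/Xt and /Yt hold the 2048 training examples, /Xv and /Yv the 512 validation examples). A minimal Python sketch of that idea, assuming h5py and PyTorch; the function names mirror the log but are illustrative, not the project's actual code:

    # Illustrative only: mirrors the logged "load whole HDF5 dataset into memory" step.
    import h5py
    import torch

    def load_hdf5_ds_into_tensor(f, name):
        # Read the entire dataset into host memory and wrap it as a float tensor.
        return torch.as_tensor(f[name][...], dtype=torch.float32)

    def load(path="dataset.hdf5"):
        with h5py.File(path, "r") as f:
            xt = load_hdf5_ds_into_tensor(f, "/Xt")   # training inputs
            yt = load_hdf5_ds_into_tensor(f, "/Yt")   # training targets
            xv = load_hdf5_ds_into_tensor(f, "/Xv")   # validation inputs
            yv = load_hdf5_ds_into_tensor(f, "/Yv")   # validation targets
        print(f"Successfully loaded dataset of {len(xt)} examples into memory.")
        print(f"Successfully loaded dataset of {len(xv)} examples into memory.")
        return (xt, yt), (xv, yv)

Loading everything up front keeps each epoch purely compute-bound, which is consistent with the steady ~5 s epoch times later in the log.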
[2022-02-20 11:24:10	                main:494]	:	INFO	:	Creating Model
[2022-02-20 11:24:10	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 11:24:10	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 11:24:10	                main:512]	:	INFO	:	Loading config
[2022-02-20 11:24:10	                main:514]	:	INFO	:	Loading state
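The checkpoint lines above are why this segment starts at epoch 1785 rather than 0: the task finds snapshot.pt, reloads its configuration and model state, and carries on. A hedged Python/PyTorch sketch of such a resume step; the keys stored in the snapshot are assumptions made for illustration, not the application's actual file layout:

    # Sketch of resuming from a snapshot; the dictionary keys are hypothetical.
    import os
    import torch

    def try_resume(model, optimizer, snapshot_path="snapshot.pt"):
        if not os.path.exists(snapshot_path):
            return 0                                   # no checkpoint: start from epoch 0
        state = torch.load(snapshot_path, map_location="cpu")
        model.load_state_dict(state["model"])          # "Loading state"
        optimizer.load_state_dict(state["optimizer"])
        return state["epoch"] + 1                      # resume where the last run stopped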
[2022-02-20 11:24:10	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 11:24:11	                main:562]	:	INFO	:	Starting Training
[2022-02-20 11:24:15	                main:574]	:	INFO	:	Epoch 1785 | loss: 0.0311567 | val_loss: 0.0312056 | Time: 4256.8 ms
[2022-02-20 11:24:20	                main:574]	:	INFO	:	Epoch 1786 | loss: 0.0311149 | val_loss: 0.0312197 | Time: 4709.57 ms
[2022-02-20 11:24:25	                main:574]	:	INFO	:	Epoch 1787 | loss: 0.0311127 | val_loss: 0.031221 | Time: 5068.19 ms
[2022-02-20 11:24:30	                main:574]	:	INFO	:	Epoch 1788 | loss: 0.031108 | val_loss: 0.0312087 | Time: 5218.47 ms
[2022-02-20 11:24:35	                main:574]	:	INFO	:	Epoch 1789 | loss: 0.0311068 | val_loss: 0.0312085 | Time: 5041.16 ms
[2022-02-20 11:24:40	                main:574]	:	INFO	:	Epoch 1790 | loss: 0.0311232 | val_loss: 0.0312034 | Time: 5072.17 ms
[2022-02-20 11:24:45	                main:574]	:	INFO	:	Epoch 1791 | loss: 0.0311155 | val_loss: 0.0312097 | Time: 5497.88 ms
[2022-02-20 11:24:51	                main:574]	:	INFO	:	Epoch 1792 | loss: 0.031111 | val_loss: 0.0312132 | Time: 5148.44 ms
[2022-02-20 11:24:56	                main:574]	:	INFO	:	Epoch 1793 | loss: 0.0311095 | val_loss: 0.0312211 | Time: 5461.08 ms
[2022-02-20 11:25:01	                main:574]	:	INFO	:	Epoch 1794 | loss: 0.0311061 | val_loss: 0.0312129 | Time: 5220.93 ms
[2022-02-20 11:25:06	                main:574]	:	INFO	:	Epoch 1795 | loss: 0.0311045 | val_loss: 0.0312248 | Time: 5072.67 ms
[2022-02-20 11:25:12	                main:574]	:	INFO	:	Epoch 1796 | loss: 0.0311055 | val_loss: 0.0312058 | Time: 5187.86 ms
[2022-02-20 11:25:17	                main:574]	:	INFO	:	Epoch 1797 | loss: 0.0311053 | val_loss: 0.0312136 | Time: 5120.85 ms
[2022-02-20 11:25:22	                main:574]	:	INFO	:	Epoch 1798 | loss: 0.0311016 | val_loss: 0.0312144 | Time: 5209.08 ms
[2022-02-20 11:25:27	                main:574]	:	INFO	:	Epoch 1799 | loss: 0.0311005 | val_loss: 0.0312178 | Time: 5090.37 ms
[2022-02-20 11:25:32	                main:574]	:	INFO	:	Epoch 1800 | loss: 0.0311148 | val_loss: 0.0312052 | Time: 4965.39 ms
[2022-02-20 11:25:37	                main:574]	:	INFO	:	Epoch 1801 | loss: 0.031111 | val_loss: 0.0312169 | Time: 5237.35 ms
[2022-02-20 11:25:42	                main:574]	:	INFO	:	Epoch 1802 | loss: 0.0311061 | val_loss: 0.0312199 | Time: 4833.31 ms
[2022-02-20 11:25:47	                main:574]	:	INFO	:	Epoch 1803 | loss: 0.0311036 | val_loss: 0.0312196 | Time: 5249.9 ms
[2022-02-20 11:25:53	                main:574]	:	INFO	:	Epoch 1804 | loss: 0.0311006 | val_loss: 0.0312273 | Time: 5347.44 ms
[2022-02-20 11:25:58	                main:574]	:	INFO	:	Epoch 1805 | loss: 0.0311035 | val_loss: 0.031209 | Time: 4915.79 ms
[2022-02-20 11:26:03	                main:574]	:	INFO	:	Epoch 1806 | loss: 0.0311067 | val_loss: 0.0312136 | Time: 5344.18 ms
[2022-02-20 11:26:08	                main:574]	:	INFO	:	Epoch 1807 | loss: 0.0311069 | val_loss: 0.0312098 | Time: 4970.17 ms
[2022-02-20 11:26:13	                main:574]	:	INFO	:	Epoch 1808 | loss: 0.0310999 | val_loss: 0.0312283 | Time: 5229.08 ms
[2022-02-20 11:26:18	                main:574]	:	INFO	:	Epoch 1809 | loss: 0.0310993 | val_loss: 0.0312104 | Time: 5195.03 ms
[2022-02-20 11:26:23	                main:574]	:	INFO	:	Epoch 1810 | loss: 0.0310957 | val_loss: 0.0312211 | Time: 5146.13 ms
[2022-02-20 11:26:28	                main:574]	:	INFO	:	Epoch 1811 | loss: 0.0311004 | val_loss: 0.0312205 | Time: 5055.44 ms
[2022-02-20 11:26:34	                main:574]	:	INFO	:	Epoch 1812 | loss: 0.0311033 | val_loss: 0.0312212 | Time: 5213.51 ms
[2022-02-20 11:26:39	                main:574]	:	INFO	:	Epoch 1813 | loss: 0.0311019 | val_loss: 0.0312174 | Time: 4965.63 ms
[2022-02-20 11:26:44	                main:574]	:	INFO	:	Epoch 1814 | loss: 0.0311004 | val_loss: 0.0312192 | Time: 5334.42 ms
[2022-02-20 11:26:49	                main:574]	:	INFO	:	Epoch 1815 | loss: 0.0311039 | val_loss: 0.0312282 | Time: 5037.5 ms
[2022-02-20 11:26:54	                main:574]	:	INFO	:	Epoch 1816 | loss: 0.0310973 | val_loss: 0.0312291 | Time: 5057.38 ms
[2022-02-20 11:26:59	                main:574]	:	INFO	:	Epoch 1817 | loss: 0.0311102 | val_loss: 0.0312174 | Time: 5147.94 ms
[2022-02-20 11:27:04	                main:574]	:	INFO	:	Epoch 1818 | loss: 0.0311062 | val_loss: 0.0312177 | Time: 5159.77 ms
[2022-02-20 11:27:10	                main:574]	:	INFO	:	Epoch 1819 | loss: 0.0311023 | val_loss: 0.0312127 | Time: 5502.8 ms
[2022-02-20 11:27:15	                main:574]	:	INFO	:	Epoch 1820 | loss: 0.0310999 | val_loss: 0.0312153 | Time: 4846.75 ms
[2022-02-20 11:27:20	                main:574]	:	INFO	:	Epoch 1821 | loss: 0.0310962 | val_loss: 0.0312218 | Time: 5231.52 ms
[2022-02-20 11:27:26	                main:574]	:	INFO	:	Epoch 1822 | loss: 0.0311036 | val_loss: 0.0312118 | Time: 5524.53 ms
[2022-02-20 11:27:30	                main:574]	:	INFO	:	Epoch 1823 | loss: 0.0311036 | val_loss: 0.0312272 | Time: 4904.19 ms
[2022-02-20 11:27:36	                main:574]	:	INFO	:	Epoch 1824 | loss: 0.0310973 | val_loss: 0.0312182 | Time: 5184.77 ms
[2022-02-20 11:27:41	                main:574]	:	INFO	:	Epoch 1825 | loss: 0.0310975 | val_loss: 0.0312082 | Time: 5074.25 ms
[2022-02-20 11:27:46	                main:574]	:	INFO	:	Epoch 1826 | loss: 0.0310973 | val_loss: 0.0312275 | Time: 5121.14 ms
[2022-02-20 11:27:51	                main:574]	:	INFO	:	Epoch 1827 | loss: 0.031108 | val_loss: 0.0312181 | Time: 5030.12 ms
[2022-02-20 11:27:56	                main:574]	:	INFO	:	Epoch 1828 | loss: 0.0311189 | val_loss: 0.0312254 | Time: 5010.15 ms
[2022-02-20 11:28:01	                main:574]	:	INFO	:	Epoch 1829 | loss: 0.0311218 | val_loss: 0.0312055 | Time: 5010.18 ms
[2022-02-20 11:28:06	                main:574]	:	INFO	:	Epoch 1830 | loss: 0.0311136 | val_loss: 0.0312025 | Time: 4901.23 ms
[2022-02-20 11:28:11	                main:574]	:	INFO	:	Epoch 1831 | loss: 0.0311074 | val_loss: 0.031211 | Time: 5447.54 ms
[2022-02-20 11:28:17	                main:574]	:	INFO	:	Epoch 1832 | loss: 0.0311019 | val_loss: 0.0312174 | Time: 5303.74 ms
[2022-02-20 11:28:22	                main:574]	:	INFO	:	Epoch 1833 | loss: 0.0311033 | val_loss: 0.031216 | Time: 5123.49 ms
[2022-02-20 11:28:27	                main:574]	:	INFO	:	Epoch 1834 | loss: 0.0311237 | val_loss: 0.0312281 | Time: 5209.76 ms
[2022-02-20 11:28:32	                main:574]	:	INFO	:	Epoch 1835 | loss: 0.0311215 | val_loss: 0.0312349 | Time: 5227.41 ms
[2022-02-20 11:28:37	                main:574]	:	INFO	:	Epoch 1836 | loss: 0.0311136 | val_loss: 0.0312274 | Time: 5183.01 ms
[2022-02-20 11:28:43	                main:574]	:	INFO	:	Epoch 1837 | loss: 0.0311082 | val_loss: 0.0312042 | Time: 5193.14 ms
[2022-02-20 11:28:48	                main:574]	:	INFO	:	Epoch 1838 | loss: 0.0311132 | val_loss: 0.0312066 | Time: 5179.45 ms
[2022-02-20 11:28:53	                main:574]	:	INFO	:	Epoch 1839 | loss: 0.0311143 | val_loss: 0.031207 | Time: 5156.46 ms
[2022-02-20 11:28:58	                main:574]	:	INFO	:	Epoch 1840 | loss: 0.0311038 | val_loss: 0.0312148 | Time: 5380.77 ms
[2022-02-20 11:29:03	                main:574]	:	INFO	:	Epoch 1841 | loss: 0.0311001 | val_loss: 0.0312287 | Time: 5064.11 ms
[2022-02-20 11:29:09	                main:574]	:	INFO	:	Epoch 1842 | loss: 0.0310976 | val_loss: 0.0312178 | Time: 5273.79 ms
[2022-02-20 11:29:14	                main:574]	:	INFO	:	Epoch 1843 | loss: 0.0311204 | val_loss: 0.0311953 | Time: 4941.28 ms
[2022-02-20 11:29:19	                main:574]	:	INFO	:	Epoch 1844 | loss: 0.0311483 | val_loss: 0.0311926 | Time: 5491.05 ms
[2022-02-20 11:29:24	                main:574]	:	INFO	:	Epoch 1845 | loss: 0.0311372 | val_loss: 0.0311985 | Time: 5030.48 ms
[2022-02-20 11:29:29	                main:574]	:	INFO	:	Epoch 1846 | loss: 0.0311297 | val_loss: 0.0312017 | Time: 5106.77 ms
[2022-02-20 11:29:34	                main:574]	:	INFO	:	Epoch 1847 | loss: 0.0311208 | val_loss: 0.0312113 | Time: 5329.13 ms
[2022-02-20 11:29:40	                main:574]	:	INFO	:	Epoch 1848 | loss: 0.0311187 | val_loss: 0.0312083 | Time: 5173.11 ms
[2022-02-20 11:29:45	                main:574]	:	INFO	:	Epoch 1849 | loss: 0.0311164 | val_loss: 0.031222 | Time: 4855.52 ms
[2022-02-20 11:29:50	                main:574]	:	INFO	:	Epoch 1850 | loss: 0.0311083 | val_loss: 0.0312117 | Time: 5040.2 ms
[2022-02-20 11:29:55	                main:574]	:	INFO	:	Epoch 1851 | loss: 0.0311029 | val_loss: 0.0312275 | Time: 5080.54 ms
[2022-02-20 11:30:00	                main:574]	:	INFO	:	Epoch 1852 | loss: 0.0311056 | val_loss: 0.0312189 | Time: 5306.8 ms
[2022-02-20 11:30:05	                main:574]	:	INFO	:	Epoch 1853 | loss: 0.0311076 | val_loss: 0.0312104 | Time: 5142.72 ms
[2022-02-20 11:30:10	                main:574]	:	INFO	:	Epoch 1854 | loss: 0.0311086 | val_loss: 0.0312052 | Time: 4992.56 ms
[2022-02-20 11:30:15	                main:574]	:	INFO	:	Epoch 1855 | loss: 0.0311063 | val_loss: 0.0312273 | Time: 5178.93 ms
[2022-02-20 11:30:20	                main:574]	:	INFO	:	Epoch 1856 | loss: 0.0311243 | val_loss: 0.0312028 | Time: 5149.38 ms
[2022-02-20 11:30:26	                main:574]	:	INFO	:	Epoch 1857 | loss: 0.0311609 | val_loss: 0.0311927 | Time: 5156.39 ms
[2022-02-20 11:30:31	                main:574]	:	INFO	:	Epoch 1858 | loss: 0.0311475 | val_loss: 0.031195 | Time: 5076.14 ms
[2022-02-20 11:30:36	                main:574]	:	INFO	:	Epoch 1859 | loss: 0.0311367 | val_loss: 0.0311891 | Time: 4976.1 ms
[2022-02-20 11:30:41	                main:574]	:	INFO	:	Epoch 1860 | loss: 0.0311248 | val_loss: 0.0311931 | Time: 5166.08 ms
[2022-02-20 11:30:46	                main:574]	:	INFO	:	Epoch 1861 | loss: 0.0311175 | val_loss: 0.0312045 | Time: 5017.02 ms
[2022-02-20 11:30:51	                main:574]	:	INFO	:	Epoch 1862 | loss: 0.0311077 | val_loss: 0.0312047 | Time: 5356.45 ms
[2022-02-20 11:30:57	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.031102 | val_loss: 0.0312117 | Time: 5426.25 ms
[2022-02-20 11:31:02	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0310996 | val_loss: 0.0312133 | Time: 5145.07 ms
[2022-02-20 11:31:07	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.0310992 | val_loss: 0.0312202 | Time: 5482.15 ms
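Each epoch line reports the training loss, validation loss, and wall time (about 5 s per epoch on this GT 1030). Below is a minimal sketch of a loop that would produce lines in that shape and write a snapshot as it goes, which fits the way every later restart in this log resumes at a higher epoch number; Python/PyTorch is used for illustration, not the project's C++ source:

    # Illustrative loop emitting "Epoch N | loss | val_loss | Time" lines and
    # checkpointing each epoch so an interrupted task can resume.
    import time
    import torch

    def train(model, optimizer, loss_fn, train_loader, val_loader,
              start_epoch, max_epochs=2048, snapshot_path="snapshot.pt"):
        for epoch in range(start_epoch, max_epochs):
            t0 = time.time()
            model.train()
            total = 0.0
            for x, y in train_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
                total += loss.item() * len(x)
            train_loss = total / len(train_loader.dataset)

            model.eval()
            with torch.no_grad():
                val_loss = sum(loss_fn(model(x), y).item() * len(x)
                               for x, y in val_loader) / len(val_loader.dataset)

            torch.save({"model": model.state_dict(),
                        "optimizer": optimizer.state_dict(),
                        "epoch": epoch}, snapshot_path)
            print(f"Epoch {epoch} | loss: {train_loss:g} | val_loss: {val_loss:g} "
                  f"| Time: {(time.time() - t0) * 1000:.2f} ms")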
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 12:14:34	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 12:14:34	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 12:14:34	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 12:14:34	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 12:14:34	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 12:14:34	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 12:14:34	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 12:14:34	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 12:14:34	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 12:14:34	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 12:14:34	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 12:14:34	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 12:14:34	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 12:14:34	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 12:14:34	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 12:14:34	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 12:14:34	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 12:14:34	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 12:14:34	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 12:14:34	                main:484]	:	INFO	:	    # Threads: 1
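This configuration dump fixes the architecture and training settings: a GRU with hidden width 12 and 4 recurrent layers, followed by 4 "backend" layers, trained with batch size 128 and learning rate 0.01 for at most 2048 epochs. A rough Python/PyTorch sketch of a model matching those numbers follows; treating the backend layers as a fully connected head, the input/output sizes, and the choice of Adam are all assumptions, since the log only records the values shown above:

    # Rough sketch matching the logged hyperparameters; "backend layers" are
    # assumed to be a fully connected head, and the optimiser type is assumed.
    import torch
    import torch.nn as nn

    class GRUModel(nn.Module):
        def __init__(self, input_size, output_size,
                     hidden_width=12, recurrent_layers=4, backend_layers=4):
            super().__init__()
            self.gru = nn.GRU(input_size, hidden_width,
                              num_layers=recurrent_layers, batch_first=True)
            head = []
            for _ in range(backend_layers - 1):
                head += [nn.Linear(hidden_width, hidden_width), nn.ReLU()]
            head.append(nn.Linear(hidden_width, output_size))
            self.head = nn.Sequential(*head)

        def forward(self, x):                      # x: (batch, seq_len, input_size)
            out, _ = self.gru(x)
            return self.head(out[:, -1])           # predict from the last time step

    model = GRUModel(input_size=8, output_size=1)  # placeholder sizes
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)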
[2022-02-20 12:14:34	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 12:14:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 12:28:34	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 12:28:34	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 12:28:34	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 12:28:34	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 12:28:34	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 12:28:34	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 12:28:34	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 12:28:34	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 12:28:34	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 12:28:34	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 12:28:34	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 12:28:34	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 12:28:34	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 12:28:34	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 12:28:34	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 12:28:34	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 12:28:34	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 12:28:34	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 12:28:34	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 12:28:34	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 12:28:34	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 12:28:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 12:28:35	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 12:28:36	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 12:28:36	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 12:28:36	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 12:28:36	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-02-20 12:28:36	                main:494]	:	INFO	:	Creating Model
[2022-02-20 12:28:36	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 12:28:36	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 12:28:36	                main:512]	:	INFO	:	Loading config
[2022-02-20 12:28:36	                main:514]	:	INFO	:	Loading state
[2022-02-20 12:28:37	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 12:28:37	                main:562]	:	INFO	:	Starting Training
[2022-02-20 12:28:41	                main:574]	:	INFO	:	Epoch 1856 | loss: 0.0311358 | val_loss: 0.0312145 | Time: 4273.22 ms
[2022-02-20 12:28:45	                main:574]	:	INFO	:	Epoch 1857 | loss: 0.0311136 | val_loss: 0.0312117 | Time: 4527.53 ms
[2022-02-20 12:28:50	                main:574]	:	INFO	:	Epoch 1858 | loss: 0.0311087 | val_loss: 0.0312197 | Time: 5017.45 ms
[2022-02-20 12:28:56	                main:574]	:	INFO	:	Epoch 1859 | loss: 0.0311132 | val_loss: 0.0312046 | Time: 5196.29 ms
[2022-02-20 12:29:00	                main:574]	:	INFO	:	Epoch 1860 | loss: 0.0311146 | val_loss: 0.0312019 | Time: 4814.51 ms
[2022-02-20 12:29:06	                main:574]	:	INFO	:	Epoch 1861 | loss: 0.031114 | val_loss: 0.031215 | Time: 5203.18 ms
[2022-02-20 12:29:11	                main:574]	:	INFO	:	Epoch 1862 | loss: 0.0311061 | val_loss: 0.031226 | Time: 4991.55 ms
[2022-02-20 12:29:16	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311043 | val_loss: 0.0312165 | Time: 4936.58 ms
[2022-02-20 12:29:21	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0311089 | val_loss: 0.031217 | Time: 5198.41 ms
[2022-02-20 12:29:26	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.0311039 | val_loss: 0.0312224 | Time: 5070.95 ms
[2022-02-20 12:29:31	                main:574]	:	INFO	:	Epoch 1866 | loss: 0.0311056 | val_loss: 0.0312117 | Time: 5110.7 ms
[2022-02-20 12:29:36	                main:574]	:	INFO	:	Epoch 1867 | loss: 0.0311099 | val_loss: 0.0312077 | Time: 5300.86 ms
[2022-02-20 12:29:42	                main:574]	:	INFO	:	Epoch 1868 | loss: 0.0311016 | val_loss: 0.0312122 | Time: 5181.58 ms
[2022-02-20 12:29:49	                main:574]	:	INFO	:	Epoch 1869 | loss: 0.0311063 | val_loss: 0.0312058 | Time: 7052.06 ms
[2022-02-20 12:29:54	                main:574]	:	INFO	:	Epoch 1870 | loss: 0.0311033 | val_loss: 0.0312109 | Time: 5802.12 ms
[2022-02-20 12:30:00	                main:574]	:	INFO	:	Epoch 1871 | loss: 0.0311013 | val_loss: 0.0312199 | Time: 5368.25 ms
[2022-02-20 12:30:05	                main:574]	:	INFO	:	Epoch 1872 | loss: 0.0310987 | val_loss: 0.031219 | Time: 5385.21 ms
[2022-02-20 12:30:10	                main:574]	:	INFO	:	Epoch 1873 | loss: 0.0310986 | val_loss: 0.031233 | Time: 5336.38 ms
[2022-02-20 12:30:16	                main:574]	:	INFO	:	Epoch 1874 | loss: 0.0310994 | val_loss: 0.0312226 | Time: 5178.94 ms
[2022-02-20 12:30:21	                main:574]	:	INFO	:	Epoch 1875 | loss: 0.0310964 | val_loss: 0.0312274 | Time: 5589.46 ms
[2022-02-20 12:30:26	                main:574]	:	INFO	:	Epoch 1876 | loss: 0.0310996 | val_loss: 0.031224 | Time: 4958.51 ms
[2022-02-20 12:30:32	                main:574]	:	INFO	:	Epoch 1877 | loss: 0.0311091 | val_loss: 0.0312126 | Time: 5490.5 ms
[2022-02-20 12:30:37	                main:574]	:	INFO	:	Epoch 1878 | loss: 0.0311334 | val_loss: 0.0312026 | Time: 5095.96 ms
[2022-02-20 12:30:42	                main:574]	:	INFO	:	Epoch 1879 | loss: 0.031124 | val_loss: 0.0312089 | Time: 5067.96 ms
[2022-02-20 12:30:47	                main:574]	:	INFO	:	Epoch 1880 | loss: 0.0311121 | val_loss: 0.031203 | Time: 5282.44 ms
[2022-02-20 12:30:52	                main:574]	:	INFO	:	Epoch 1881 | loss: 0.0311149 | val_loss: 0.0312123 | Time: 4969.96 ms
[2022-02-20 12:30:57	                main:574]	:	INFO	:	Epoch 1882 | loss: 0.03111 | val_loss: 0.0312062 | Time: 5200.14 ms
[2022-02-20 12:31:03	                main:574]	:	INFO	:	Epoch 1883 | loss: 0.0311083 | val_loss: 0.0312129 | Time: 5422.48 ms
[2022-02-20 12:31:08	                main:574]	:	INFO	:	Epoch 1884 | loss: 0.031109 | val_loss: 0.0312108 | Time: 5167.42 ms
[2022-02-20 12:31:13	                main:574]	:	INFO	:	Epoch 1885 | loss: 0.03111 | val_loss: 0.0312263 | Time: 5212.71 ms
[2022-02-20 12:31:18	                main:574]	:	INFO	:	Epoch 1886 | loss: 0.0311028 | val_loss: 0.0312085 | Time: 4957.84 ms
[2022-02-20 12:31:23	                main:574]	:	INFO	:	Epoch 1887 | loss: 0.0311024 | val_loss: 0.0312171 | Time: 5168.22 ms
[2022-02-20 12:31:29	                main:574]	:	INFO	:	Epoch 1888 | loss: 0.0311168 | val_loss: 0.0312119 | Time: 5212.71 ms
[2022-02-20 12:31:33	                main:574]	:	INFO	:	Epoch 1889 | loss: 0.0311348 | val_loss: 0.0312049 | Time: 4863.95 ms
[2022-02-20 12:31:39	                main:574]	:	INFO	:	Epoch 1890 | loss: 0.0311256 | val_loss: 0.0312041 | Time: 5178.5 ms
[2022-02-20 12:31:44	                main:574]	:	INFO	:	Epoch 1891 | loss: 0.0311305 | val_loss: 0.0311994 | Time: 5005.93 ms
[2022-02-20 12:31:49	                main:574]	:	INFO	:	Epoch 1892 | loss: 0.0311258 | val_loss: 0.0312125 | Time: 5065.13 ms
[2022-02-20 12:31:54	                main:574]	:	INFO	:	Epoch 1893 | loss: 0.0311374 | val_loss: 0.0312037 | Time: 5172.96 ms
[2022-02-20 12:31:59	                main:574]	:	INFO	:	Epoch 1894 | loss: 0.0311277 | val_loss: 0.0312149 | Time: 4981.93 ms
[2022-02-20 12:32:04	                main:574]	:	INFO	:	Epoch 1895 | loss: 0.031123 | val_loss: 0.0312177 | Time: 5252.68 ms
[2022-02-20 12:32:09	                main:574]	:	INFO	:	Epoch 1896 | loss: 0.0311177 | val_loss: 0.0312148 | Time: 5001.61 ms
[2022-02-20 12:32:14	                main:574]	:	INFO	:	Epoch 1897 | loss: 0.0311138 | val_loss: 0.0312197 | Time: 5180.48 ms
[2022-02-20 12:32:19	                main:574]	:	INFO	:	Epoch 1898 | loss: 0.0311075 | val_loss: 0.0312146 | Time: 5186.05 ms
[2022-02-20 12:32:24	                main:574]	:	INFO	:	Epoch 1899 | loss: 0.0311119 | val_loss: 0.0312158 | Time: 4980.01 ms
[2022-02-20 12:32:29	                main:574]	:	INFO	:	Epoch 1900 | loss: 0.0311049 | val_loss: 0.0312252 | Time: 5076.46 ms
[2022-02-20 12:32:35	                main:574]	:	INFO	:	Epoch 1901 | loss: 0.0310992 | val_loss: 0.0312215 | Time: 5275.44 ms
[2022-02-20 12:32:40	                main:574]	:	INFO	:	Epoch 1902 | loss: 0.0311004 | val_loss: 0.0312212 | Time: 5192.72 ms
[2022-02-20 12:32:45	                main:574]	:	INFO	:	Epoch 1903 | loss: 0.0310978 | val_loss: 0.0312425 | Time: 5319.85 ms
[2022-02-20 12:32:50	                main:574]	:	INFO	:	Epoch 1904 | loss: 0.031104 | val_loss: 0.0312319 | Time: 4983.1 ms
[2022-02-20 12:32:55	                main:574]	:	INFO	:	Epoch 1905 | loss: 0.0311192 | val_loss: 0.0312153 | Time: 5208.37 ms
[2022-02-20 12:33:01	                main:574]	:	INFO	:	Epoch 1906 | loss: 0.0311093 | val_loss: 0.0312262 | Time: 5045.15 ms
[2022-02-20 12:33:05	                main:574]	:	INFO	:	Epoch 1907 | loss: 0.0311026 | val_loss: 0.0312239 | Time: 4898.2 ms
[2022-02-20 12:33:11	                main:574]	:	INFO	:	Epoch 1908 | loss: 0.0311024 | val_loss: 0.0312128 | Time: 5339.25 ms
[2022-02-20 12:33:16	                main:574]	:	INFO	:	Epoch 1909 | loss: 0.0311135 | val_loss: 0.031209 | Time: 5018.25 ms
[2022-02-20 12:33:21	                main:574]	:	INFO	:	Epoch 1910 | loss: 0.0311086 | val_loss: 0.0312112 | Time: 5138.32 ms
[2022-02-20 12:33:26	                main:574]	:	INFO	:	Epoch 1911 | loss: 0.0311015 | val_loss: 0.0312189 | Time: 5140.68 ms
[2022-02-20 12:33:31	                main:574]	:	INFO	:	Epoch 1912 | loss: 0.0311088 | val_loss: 0.0312145 | Time: 5139.75 ms
[2022-02-20 12:33:36	                main:574]	:	INFO	:	Epoch 1913 | loss: 0.0311144 | val_loss: 0.0312156 | Time: 5236.49 ms
[2022-02-20 12:33:41	                main:574]	:	INFO	:	Epoch 1914 | loss: 0.0311238 | val_loss: 0.0312153 | Time: 4822.02 ms
[2022-02-20 12:33:47	                main:574]	:	INFO	:	Epoch 1915 | loss: 0.0311347 | val_loss: 0.0312016 | Time: 5206.56 ms
[2022-02-20 12:33:52	                main:574]	:	INFO	:	Epoch 1916 | loss: 0.0311262 | val_loss: 0.0312049 | Time: 5202.57 ms
[2022-02-20 12:33:57	                main:574]	:	INFO	:	Epoch 1917 | loss: 0.0311195 | val_loss: 0.0312115 | Time: 4980.72 ms
[2022-02-20 12:34:02	                main:574]	:	INFO	:	Epoch 1918 | loss: 0.03111 | val_loss: 0.0312238 | Time: 5193.94 ms
[2022-02-20 12:34:07	                main:574]	:	INFO	:	Epoch 1919 | loss: 0.0311064 | val_loss: 0.0312127 | Time: 4856.79 ms
[2022-02-20 12:34:12	                main:574]	:	INFO	:	Epoch 1920 | loss: 0.0311147 | val_loss: 0.03121 | Time: 5340.48 ms
[2022-02-20 12:34:17	                main:574]	:	INFO	:	Epoch 1921 | loss: 0.0311109 | val_loss: 0.0312145 | Time: 5298.32 ms
[2022-02-20 12:34:22	                main:574]	:	INFO	:	Epoch 1922 | loss: 0.0311403 | val_loss: 0.0311976 | Time: 4962.66 ms
[2022-02-20 12:34:27	                main:574]	:	INFO	:	Epoch 1923 | loss: 0.0311583 | val_loss: 0.0311881 | Time: 5070.93 ms
[2022-02-20 12:34:32	                main:574]	:	INFO	:	Epoch 1924 | loss: 0.0311506 | val_loss: 0.0311889 | Time: 4839.18 ms
[2022-02-20 12:34:38	                main:574]	:	INFO	:	Epoch 1925 | loss: 0.0311449 | val_loss: 0.0311925 | Time: 5212.31 ms
[2022-02-20 12:34:43	                main:574]	:	INFO	:	Epoch 1926 | loss: 0.0311382 | val_loss: 0.0311998 | Time: 5166.85 ms
[2022-02-20 12:34:48	                main:574]	:	INFO	:	Epoch 1927 | loss: 0.0311299 | val_loss: 0.0312113 | Time: 4990.18 ms
[2022-02-20 12:34:53	                main:574]	:	INFO	:	Epoch 1928 | loss: 0.031127 | val_loss: 0.031208 | Time: 5345.75 ms
[2022-02-20 12:34:58	                main:574]	:	INFO	:	Epoch 1929 | loss: 0.0311273 | val_loss: 0.0312043 | Time: 4818.01 ms
[2022-02-20 12:35:03	                main:574]	:	INFO	:	Epoch 1930 | loss: 0.0311221 | val_loss: 0.0312004 | Time: 5253.2 ms
[2022-02-20 12:35:08	                main:574]	:	INFO	:	Epoch 1931 | loss: 0.0311183 | val_loss: 0.031213 | Time: 5218.55 ms
[2022-02-20 12:35:13	                main:574]	:	INFO	:	Epoch 1932 | loss: 0.0311133 | val_loss: 0.031217 | Time: 5074.69 ms
[2022-02-20 12:35:19	                main:574]	:	INFO	:	Epoch 1933 | loss: 0.0311137 | val_loss: 0.0312193 | Time: 5192.65 ms
[2022-02-20 12:35:24	                main:574]	:	INFO	:	Epoch 1934 | loss: 0.0311108 | val_loss: 0.0312217 | Time: 5017.84 ms
[2022-02-20 12:35:29	                main:574]	:	INFO	:	Epoch 1935 | loss: 0.0311149 | val_loss: 0.0312124 | Time: 5067.06 ms
[2022-02-20 12:35:34	                main:574]	:	INFO	:	Epoch 1936 | loss: 0.0311157 | val_loss: 0.0312066 | Time: 5154.2 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 14:25:50	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 14:25:50	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 14:25:50	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 14:25:50	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 14:25:50	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 14:25:50	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 14:25:50	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 14:25:50	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 14:25:50	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 14:25:50	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 14:25:50	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 14:25:50	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 14:25:50	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 14:25:50	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 14:25:50	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 14:25:50	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 14:25:50	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 14:25:50	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 14:25:50	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 14:25:50	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 14:25:50	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 14:25:50	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 15:47:09	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 15:47:09	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 15:47:09	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 15:47:09	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 15:47:09	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 15:47:09	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 15:47:09	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 15:47:09	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 15:47:09	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 15:47:10	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 15:47:10	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 15:47:10	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 15:47:10	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 15:47:10	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 15:47:10	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 15:47:10	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 15:47:10	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 15:47:10	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 15:47:10	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 15:47:10	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 15:47:10	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 15:47:10	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 15:47:10	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 15:47:11	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 15:47:11	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 15:47:11	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 15:47:11	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-02-20 15:47:11	                main:494]	:	INFO	:	Creating Model
[2022-02-20 15:47:11	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 15:47:11	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 15:47:11	                main:512]	:	INFO	:	Loading config
[2022-02-20 15:47:11	                main:514]	:	INFO	:	Loading state
[2022-02-20 15:47:12	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 15:47:12	                main:562]	:	INFO	:	Starting Training
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 15:50:58	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 15:50:58	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 15:50:58	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 15:50:58	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 15:50:58	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 15:50:58	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 15:50:58	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 15:50:58	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 15:50:58	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 15:50:58	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 15:50:58	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 15:50:58	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 15:50:58	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 15:50:58	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 15:50:58	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 15:50:58	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 15:50:58	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 15:50:58	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 15:50:58	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 15:50:58	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 15:50:58	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 15:50:58	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 15:50:58	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 15:50:59	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 15:50:59	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 15:50:59	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 15:51:00	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-02-20 15:51:00	                main:494]	:	INFO	:	Creating Model
[2022-02-20 15:51:00	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 15:51:00	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 15:51:00	                main:512]	:	INFO	:	Loading config
[2022-02-20 15:51:00	                main:514]	:	INFO	:	Loading state
[2022-02-20 15:51:00	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 15:51:00	                main:562]	:	INFO	:	Starting Training
[2022-02-20 15:51:05	                main:574]	:	INFO	:	Epoch 1927 | loss: 0.0311726 | val_loss: 0.0312121 | Time: 4657.95 ms
[2022-02-20 15:51:10	                main:574]	:	INFO	:	Epoch 1928 | loss: 0.0311324 | val_loss: 0.0312051 | Time: 5191.24 ms
[2022-02-20 15:51:16	                main:574]	:	INFO	:	Epoch 1929 | loss: 0.0311231 | val_loss: 0.0311986 | Time: 5769.94 ms
[2022-02-20 15:51:22	                main:574]	:	INFO	:	Epoch 1930 | loss: 0.0311168 | val_loss: 0.0312072 | Time: 5821.34 ms
[2022-02-20 15:51:27	                main:574]	:	INFO	:	Epoch 1931 | loss: 0.0311153 | val_loss: 0.0311991 | Time: 5368.02 ms
[2022-02-20 15:51:32	                main:574]	:	INFO	:	Epoch 1932 | loss: 0.0311127 | val_loss: 0.0312236 | Time: 5047.8 ms
[2022-02-20 15:51:37	                main:574]	:	INFO	:	Epoch 1933 | loss: 0.0311119 | val_loss: 0.031205 | Time: 5364.63 ms
[2022-02-20 15:51:43	                main:574]	:	INFO	:	Epoch 1934 | loss: 0.0311102 | val_loss: 0.0312127 | Time: 5155.64 ms
[2022-02-20 15:51:48	                main:574]	:	INFO	:	Epoch 1935 | loss: 0.0311067 | val_loss: 0.0312153 | Time: 5041.31 ms
[2022-02-20 15:51:53	                main:574]	:	INFO	:	Epoch 1936 | loss: 0.0311003 | val_loss: 0.0312136 | Time: 5188.3 ms
[2022-02-20 15:51:58	                main:574]	:	INFO	:	Epoch 1937 | loss: 0.0311143 | val_loss: 0.0312083 | Time: 4938.92 ms
[2022-02-20 15:52:03	                main:574]	:	INFO	:	Epoch 1938 | loss: 0.0311373 | val_loss: 0.0311944 | Time: 5350.3 ms
[2022-02-20 15:52:08	                main:574]	:	INFO	:	Epoch 1939 | loss: 0.0311314 | val_loss: 0.0311939 | Time: 5050.56 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 17:07:41	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 17:07:41	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 17:07:41	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 17:07:41	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 17:07:41	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 17:07:41	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 17:07:41	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 17:07:41	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 17:07:41	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 17:07:41	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 17:07:41	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 17:07:41	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 17:07:41	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 17:07:41	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 17:07:41	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 17:07:41	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 17:07:41	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 17:07:41	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 17:07:41	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 17:07:41	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 17:07:41	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 17:07:41	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 17:07:41	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 17:07:42	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 17:07:42	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 17:07:42	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 17:07:42	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-02-20 17:07:42	                main:494]	:	INFO	:	Creating Model
[2022-02-20 17:07:42	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 17:07:42	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 17:07:42	                main:512]	:	INFO	:	Loading config
[2022-02-20 17:07:42	                main:514]	:	INFO	:	Loading state
[2022-02-20 17:07:43	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 17:07:43	                main:562]	:	INFO	:	Starting Training
[2022-02-20 17:07:47	                main:574]	:	INFO	:	Epoch 1927 | loss: 0.0311792 | val_loss: 0.0312037 | Time: 4374.25 ms
[2022-02-20 17:07:52	                main:574]	:	INFO	:	Epoch 1928 | loss: 0.0311382 | val_loss: 0.0312017 | Time: 4571.13 ms
[2022-02-20 17:07:57	                main:574]	:	INFO	:	Epoch 1929 | loss: 0.0311285 | val_loss: 0.0311998 | Time: 5023.4 ms
[2022-02-20 17:08:02	                main:574]	:	INFO	:	Epoch 1930 | loss: 0.0311224 | val_loss: 0.0311979 | Time: 5209.77 ms
[2022-02-20 17:08:07	                main:574]	:	INFO	:	Epoch 1931 | loss: 0.0311386 | val_loss: 0.0311922 | Time: 4907.25 ms
[2022-02-20 17:08:12	                main:574]	:	INFO	:	Epoch 1932 | loss: 0.0311637 | val_loss: 0.0311892 | Time: 5553.96 ms
[2022-02-20 17:08:18	                main:574]	:	INFO	:	Epoch 1933 | loss: 0.0311666 | val_loss: 0.031191 | Time: 5154.88 ms
[2022-02-20 17:08:23	                main:574]	:	INFO	:	Epoch 1934 | loss: 0.0311629 | val_loss: 0.0311874 | Time: 4931.35 ms
[2022-02-20 17:08:28	                main:574]	:	INFO	:	Epoch 1935 | loss: 0.0311578 | val_loss: 0.0311888 | Time: 5230.08 ms
[2022-02-20 17:08:33	                main:574]	:	INFO	:	Epoch 1936 | loss: 0.0311551 | val_loss: 0.0311902 | Time: 5170.47 ms
[2022-02-20 17:08:38	                main:574]	:	INFO	:	Epoch 1937 | loss: 0.0311751 | val_loss: 0.0311913 | Time: 5328.07 ms
[2022-02-20 17:08:45	                main:574]	:	INFO	:	Epoch 1938 | loss: 0.031193 | val_loss: 0.0311993 | Time: 7139.33 ms
[2022-02-20 17:09:00	                main:574]	:	INFO	:	Epoch 1939 | loss: 0.0311984 | val_loss: 0.0311997 | Time: 15064.3 ms
[2022-02-20 17:09:06	                main:574]	:	INFO	:	Epoch 1940 | loss: 0.0311973 | val_loss: 0.0311989 | Time: 5215.19 ms
[2022-02-20 17:09:11	                main:574]	:	INFO	:	Epoch 1941 | loss: 0.0311958 | val_loss: 0.031192 | Time: 5395.41 ms
[2022-02-20 17:09:16	                main:574]	:	INFO	:	Epoch 1942 | loss: 0.0311909 | val_loss: 0.0311916 | Time: 4838.79 ms
[2022-02-20 17:09:21	                main:574]	:	INFO	:	Epoch 1943 | loss: 0.0311901 | val_loss: 0.0311915 | Time: 5403.41 ms
[2022-02-20 17:09:27	                main:574]	:	INFO	:	Epoch 1944 | loss: 0.0311897 | val_loss: 0.0311922 | Time: 5294.66 ms
[2022-02-20 17:09:32	                main:574]	:	INFO	:	Epoch 1945 | loss: 0.0311919 | val_loss: 0.0311913 | Time: 5037.98 ms
[2022-02-20 17:09:37	                main:574]	:	INFO	:	Epoch 1946 | loss: 0.0311896 | val_loss: 0.0311964 | Time: 5396.19 ms
[2022-02-20 17:09:42	                main:574]	:	INFO	:	Epoch 1947 | loss: 0.0311945 | val_loss: 0.0312012 | Time: 5367.22 ms
[2022-02-20 17:09:48	                main:574]	:	INFO	:	Epoch 1948 | loss: 0.0312022 | val_loss: 0.0312038 | Time: 5413.2 ms
[2022-02-20 17:09:53	                main:574]	:	INFO	:	Epoch 1949 | loss: 0.031201 | val_loss: 0.0312009 | Time: 5282.05 ms
[2022-02-20 17:09:58	                main:574]	:	INFO	:	Epoch 1950 | loss: 0.0311973 | val_loss: 0.0311983 | Time: 5193.52 ms
[2022-02-20 17:10:04	                main:574]	:	INFO	:	Epoch 1951 | loss: 0.0311953 | val_loss: 0.031198 | Time: 5341.63 ms
[2022-02-20 17:10:09	                main:574]	:	INFO	:	Epoch 1952 | loss: 0.0311944 | val_loss: 0.0311983 | Time: 5087.81 ms
[2022-02-20 17:10:14	                main:574]	:	INFO	:	Epoch 1953 | loss: 0.0311948 | val_loss: 0.0311969 | Time: 5095.29 ms
[2022-02-20 17:10:19	                main:574]	:	INFO	:	Epoch 1954 | loss: 0.0311935 | val_loss: 0.0311967 | Time: 5304.03 ms
[2022-02-20 17:10:24	                main:574]	:	INFO	:	Epoch 1955 | loss: 0.0311956 | val_loss: 0.0312 | Time: 5183.84 ms
[2022-02-20 17:10:30	                main:574]	:	INFO	:	Epoch 1956 | loss: 0.0311921 | val_loss: 0.0311942 | Time: 5292.1 ms
[2022-02-20 17:10:35	                main:574]	:	INFO	:	Epoch 1957 | loss: 0.0311906 | val_loss: 0.0311939 | Time: 5256.71 ms
[2022-02-20 17:10:40	                main:574]	:	INFO	:	Epoch 1958 | loss: 0.0311916 | val_loss: 0.0311924 | Time: 5055.12 ms
[2022-02-20 17:10:45	                main:574]	:	INFO	:	Epoch 1959 | loss: 0.0311889 | val_loss: 0.0311918 | Time: 5157.93 ms
[2022-02-20 17:10:50	                main:574]	:	INFO	:	Epoch 1960 | loss: 0.0311883 | val_loss: 0.0311919 | Time: 5312.87 ms
[2022-02-20 17:10:56	                main:574]	:	INFO	:	Epoch 1961 | loss: 0.0311884 | val_loss: 0.0311908 | Time: 5204.38 ms
[2022-02-20 17:11:01	                main:574]	:	INFO	:	Epoch 1962 | loss: 0.0311879 | val_loss: 0.0311911 | Time: 5233.17 ms
[2022-02-20 17:11:06	                main:574]	:	INFO	:	Epoch 1963 | loss: 0.0311876 | val_loss: 0.0311919 | Time: 5270.54 ms
[2022-02-20 17:11:12	                main:574]	:	INFO	:	Epoch 1964 | loss: 0.031189 | val_loss: 0.0311923 | Time: 5309.65 ms
[2022-02-20 17:11:16	                main:574]	:	INFO	:	Epoch 1965 | loss: 0.0311903 | val_loss: 0.0311948 | Time: 4866.07 ms
[2022-02-20 17:11:22	                main:574]	:	INFO	:	Epoch 1966 | loss: 0.0311883 | val_loss: 0.0311921 | Time: 5395.62 ms
[2022-02-20 17:11:27	                main:574]	:	INFO	:	Epoch 1967 | loss: 0.0311863 | val_loss: 0.0311928 | Time: 5229.76 ms
[2022-02-20 17:11:32	                main:574]	:	INFO	:	Epoch 1968 | loss: 0.0311871 | val_loss: 0.0311955 | Time: 5005.17 ms
[2022-02-20 17:11:37	                main:574]	:	INFO	:	Epoch 1969 | loss: 0.0311879 | val_loss: 0.031193 | Time: 5178.61 ms
[2022-02-20 17:11:42	                main:574]	:	INFO	:	Epoch 1970 | loss: 0.0311873 | val_loss: 0.0311902 | Time: 5214 ms
[2022-02-20 17:11:48	                main:574]	:	INFO	:	Epoch 1971 | loss: 0.0311851 | val_loss: 0.0311918 | Time: 5375.73 ms
[2022-02-20 17:11:53	                main:574]	:	INFO	:	Epoch 1972 | loss: 0.0311864 | val_loss: 0.0311928 | Time: 5221.57 ms
[2022-02-20 17:11:58	                main:574]	:	INFO	:	Epoch 1973 | loss: 0.0311837 | val_loss: 0.0311905 | Time: 5154.19 ms
[2022-02-20 17:12:04	                main:574]	:	INFO	:	Epoch 1974 | loss: 0.0311859 | val_loss: 0.0311896 | Time: 5394.9 ms
[2022-02-20 17:12:09	                main:574]	:	INFO	:	Epoch 1975 | loss: 0.0311925 | val_loss: 0.0311921 | Time: 5192.43 ms
[2022-02-20 17:12:14	                main:574]	:	INFO	:	Epoch 1976 | loss: 0.0311927 | val_loss: 0.031192 | Time: 4995.3 ms
[2022-02-20 17:12:19	                main:574]	:	INFO	:	Epoch 1977 | loss: 0.0311909 | val_loss: 0.0311898 | Time: 5271.06 ms
[2022-02-20 17:12:24	                main:574]	:	INFO	:	Epoch 1978 | loss: 0.0311895 | val_loss: 0.0311884 | Time: 5032.47 ms
[2022-02-20 17:12:29	                main:574]	:	INFO	:	Epoch 1979 | loss: 0.0311885 | val_loss: 0.0311891 | Time: 5180.86 ms
[2022-02-20 17:12:35	                main:574]	:	INFO	:	Epoch 1980 | loss: 0.0311889 | val_loss: 0.031187 | Time: 5293.35 ms
[2022-02-20 17:12:39	                main:574]	:	INFO	:	Epoch 1981 | loss: 0.0311859 | val_loss: 0.0311861 | Time: 4916.15 ms
[2022-02-20 17:12:45	                main:574]	:	INFO	:	Epoch 1982 | loss: 0.0311859 | val_loss: 0.0311903 | Time: 5135.43 ms
[2022-02-20 17:12:50	                main:574]	:	INFO	:	Epoch 1983 | loss: 0.0311865 | val_loss: 0.0311881 | Time: 5097.39 ms
[2022-02-20 17:12:55	                main:574]	:	INFO	:	Epoch 1984 | loss: 0.0311874 | val_loss: 0.0311854 | Time: 5297.49 ms
[2022-02-20 17:13:01	                main:574]	:	INFO	:	Epoch 1985 | loss: 0.0311862 | val_loss: 0.0311845 | Time: 5508.82 ms
[2022-02-20 17:13:05	                main:574]	:	INFO	:	Epoch 1986 | loss: 0.0311884 | val_loss: 0.0311893 | Time: 4880.6 ms
[2022-02-20 17:13:11	                main:574]	:	INFO	:	Epoch 1987 | loss: 0.031188 | val_loss: 0.0311881 | Time: 5499.19 ms
[2022-02-20 17:13:16	                main:574]	:	INFO	:	Epoch 1988 | loss: 0.0311864 | val_loss: 0.0311851 | Time: 5379.71 ms
[2022-02-20 17:13:21	                main:574]	:	INFO	:	Epoch 1989 | loss: 0.0311866 | val_loss: 0.0311828 | Time: 5041.78 ms
[2022-02-20 17:13:27	                main:574]	:	INFO	:	Epoch 1990 | loss: 0.0311846 | val_loss: 0.0311843 | Time: 5211.47 ms
[2022-02-20 17:13:32	                main:574]	:	INFO	:	Epoch 1991 | loss: 0.0311857 | val_loss: 0.0311822 | Time: 5164.84 ms
[2022-02-20 17:13:37	                main:574]	:	INFO	:	Epoch 1992 | loss: 0.0311847 | val_loss: 0.031183 | Time: 5458.58 ms
[2022-02-20 17:13:43	                main:574]	:	INFO	:	Epoch 1993 | loss: 0.0311837 | val_loss: 0.0311822 | Time: 5346.15 ms
[2022-02-20 17:13:48	                main:574]	:	INFO	:	Epoch 1994 | loss: 0.0311832 | val_loss: 0.0311821 | Time: 5212.59 ms
[2022-02-20 17:13:53	                main:574]	:	INFO	:	Epoch 1995 | loss: 0.0311826 | val_loss: 0.0311821 | Time: 5118.09 ms
[2022-02-20 17:13:58	                main:574]	:	INFO	:	Epoch 1996 | loss: 0.0311824 | val_loss: 0.0311842 | Time: 5291.39 ms
[2022-02-20 17:14:03	                main:574]	:	INFO	:	Epoch 1997 | loss: 0.0311821 | val_loss: 0.0311816 | Time: 5145.39 ms
[2022-02-20 17:14:08	                main:574]	:	INFO	:	Epoch 1998 | loss: 0.0311814 | val_loss: 0.0311821 | Time: 5045.87 ms
[2022-02-20 17:14:13	                main:574]	:	INFO	:	Epoch 1999 | loss: 0.0311852 | val_loss: 0.0311879 | Time: 5006.04 ms
[2022-02-20 17:14:19	                main:574]	:	INFO	:	Epoch 2000 | loss: 0.031182 | val_loss: 0.0311847 | Time: 5326.75 ms
[2022-02-20 17:14:24	                main:574]	:	INFO	:	Epoch 2001 | loss: 0.0311841 | val_loss: 0.0311806 | Time: 5398.72 ms
[2022-02-20 17:14:29	                main:574]	:	INFO	:	Epoch 2002 | loss: 0.031186 | val_loss: 0.0311919 | Time: 4892.91 ms
[2022-02-20 17:14:34	                main:574]	:	INFO	:	Epoch 2003 | loss: 0.0312022 | val_loss: 0.0312085 | Time: 5429.95 ms
[2022-02-20 17:14:39	                main:574]	:	INFO	:	Epoch 2004 | loss: 0.0312089 | val_loss: 0.0312049 | Time: 4942.39 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 17:32:20	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 17:32:20	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 17:32:20	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 17:32:20	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 17:32:20	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 17:32:20	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 17:32:20	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 17:32:20	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 17:32:20	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 17:32:20	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 17:32:20	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 17:32:20	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 17:32:20	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 17:32:20	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 17:32:20	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 17:32:20	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 17:32:20	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 17:32:20	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 17:32:20	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 17:32:20	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 17:32:20	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 17:32:20	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 17:36:40	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 17:36:40	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 17:36:40	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 17:36:40	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 17:36:40	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 17:36:40	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 17:36:40	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 17:36:40	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 17:36:40	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 17:36:40	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 17:36:40	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 17:36:40	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 17:36:40	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 17:36:40	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 17:36:40	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 17:36:40	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 17:36:40	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 17:36:40	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 17:36:40	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 17:36:40	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 17:36:40	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 17:36:40	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce GT 1030)
[2022-02-20 18:03:03	                main:435]	:	INFO	:	Set logging level to 1
[2022-02-20 18:03:03	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-02-20 18:03:03	                main:444]	:	INFO	:	Resolving all filenames
[2022-02-20 18:03:03	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-02-20 18:03:03	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-02-20 18:03:03	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-02-20 18:03:03	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-02-20 18:03:03	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-02-20 18:03:03	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-02-20 18:03:03	                main:474]	:	INFO	:	Configuration: 
[2022-02-20 18:03:03	                main:475]	:	INFO	:	    Model type: GRU
[2022-02-20 18:03:03	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-02-20 18:03:03	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-02-20 18:03:03	                main:478]	:	INFO	:	    Batch Size: 128
[2022-02-20 18:03:03	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-02-20 18:03:03	                main:480]	:	INFO	:	    Patience: 10
[2022-02-20 18:03:03	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-02-20 18:03:03	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-02-20 18:03:03	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-02-20 18:03:03	                main:484]	:	INFO	:	    # Threads: 1
[2022-02-20 18:03:03	                main:486]	:	INFO	:	Preparing Dataset
[2022-02-20 18:03:03	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-02-20 18:03:04	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-02-20 18:03:05	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-02-20 18:03:05	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-02-20 18:03:05	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-02-20 18:03:05	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-02-20 18:03:05	                main:494]	:	INFO	:	Creating Model
[2022-02-20 18:03:05	                main:507]	:	INFO	:	Preparing config file
[2022-02-20 18:03:05	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-02-20 18:03:05	                main:512]	:	INFO	:	Loading config
[2022-02-20 18:03:05	                main:514]	:	INFO	:	Loading state
[2022-02-20 18:03:06	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-02-20 18:03:06	                main:562]	:	INFO	:	Starting Training
[2022-02-20 18:03:10	                main:574]	:	INFO	:	Epoch 1995 | loss: 0.0312106 | val_loss: 0.0311882 | Time: 4285.39 ms
[2022-02-20 18:03:15	                main:574]	:	INFO	:	Epoch 1996 | loss: 0.0311856 | val_loss: 0.0311829 | Time: 4706.17 ms
[2022-02-20 18:03:20	                main:574]	:	INFO	:	Epoch 1997 | loss: 0.0311818 | val_loss: 0.0311831 | Time: 5081.44 ms
[2022-02-20 18:03:25	                main:574]	:	INFO	:	Epoch 1998 | loss: 0.031178 | val_loss: 0.0311806 | Time: 5256.9 ms
[2022-02-20 18:03:30	                main:574]	:	INFO	:	Epoch 1999 | loss: 0.0311766 | val_loss: 0.0311816 | Time: 4946.21 ms
[2022-02-20 18:03:35	                main:574]	:	INFO	:	Epoch 2000 | loss: 0.0311756 | val_loss: 0.0311789 | Time: 5350.99 ms
[2022-02-20 18:03:41	                main:574]	:	INFO	:	Epoch 2001 | loss: 0.0311753 | val_loss: 0.0311789 | Time: 5398.16 ms
[2022-02-20 18:03:46	                main:574]	:	INFO	:	Epoch 2002 | loss: 0.0311755 | val_loss: 0.0311813 | Time: 4991.07 ms
[2022-02-20 18:03:51	                main:574]	:	INFO	:	Epoch 2003 | loss: 0.0311803 | val_loss: 0.0311838 | Time: 5216.7 ms
[2022-02-20 18:03:56	                main:574]	:	INFO	:	Epoch 2004 | loss: 0.0311844 | val_loss: 0.0311906 | Time: 4937.48 ms
[2022-02-20 18:04:01	                main:574]	:	INFO	:	Epoch 2005 | loss: 0.031195 | val_loss: 0.0312024 | Time: 5399.79 ms
[2022-02-20 18:04:07	                main:574]	:	INFO	:	Epoch 2006 | loss: 0.0312016 | val_loss: 0.0312004 | Time: 5835.06 ms
[2022-02-20 18:04:12	                main:574]	:	INFO	:	Epoch 2007 | loss: 0.0311997 | val_loss: 0.0311969 | Time: 5151.55 ms
[2022-02-20 18:04:18	                main:574]	:	INFO	:	Epoch 2008 | loss: 0.0311965 | val_loss: 0.0311959 | Time: 5281.93 ms
[2022-02-20 18:04:23	                main:574]	:	INFO	:	Epoch 2009 | loss: 0.0311947 | val_loss: 0.0312027 | Time: 5311.05 ms
[2022-02-20 18:04:28	                main:574]	:	INFO	:	Epoch 2010 | loss: 0.0312004 | val_loss: 0.0312049 | Time: 4827.96 ms
[2022-02-20 18:04:33	                main:574]	:	INFO	:	Epoch 2011 | loss: 0.0311997 | val_loss: 0.0312043 | Time: 5179.12 ms
[2022-02-20 18:04:38	                main:574]	:	INFO	:	Epoch 2012 | loss: 0.0312026 | val_loss: 0.0312057 | Time: 5015.24 ms
[2022-02-20 18:04:43	                main:574]	:	INFO	:	Epoch 2013 | loss: 0.031202 | val_loss: 0.0312078 | Time: 5413.95 ms
[2022-02-20 18:04:49	                main:574]	:	INFO	:	Epoch 2014 | loss: 0.0312027 | val_loss: 0.0312052 | Time: 5422.34 ms
[2022-02-20 18:04:54	                main:574]	:	INFO	:	Epoch 2015 | loss: 0.0312016 | val_loss: 0.0312045 | Time: 5080.97 ms
[2022-02-20 18:04:59	                main:574]	:	INFO	:	Epoch 2016 | loss: 0.0312009 | val_loss: 0.0312048 | Time: 5178.06 ms
[2022-02-20 18:05:04	                main:574]	:	INFO	:	Epoch 2017 | loss: 0.0312008 | val_loss: 0.0312035 | Time: 5149.48 ms
[2022-02-20 18:05:10	                main:574]	:	INFO	:	Epoch 2018 | loss: 0.0312004 | val_loss: 0.0312031 | Time: 5380.38 ms
[2022-02-20 18:05:15	                main:574]	:	INFO	:	Epoch 2019 | loss: 0.0311993 | val_loss: 0.0312035 | Time: 5435.11 ms
[2022-02-20 18:05:20	                main:574]	:	INFO	:	Epoch 2020 | loss: 0.0311984 | val_loss: 0.0312023 | Time: 5018.7 ms
[2022-02-20 18:05:25	                main:574]	:	INFO	:	Epoch 2021 | loss: 0.0311978 | val_loss: 0.0312034 | Time: 5330.3 ms
[2022-02-20 18:05:30	                main:574]	:	INFO	:	Epoch 2022 | loss: 0.0311973 | val_loss: 0.0312035 | Time: 5014.52 ms
[2022-02-20 18:05:36	                main:574]	:	INFO	:	Epoch 2023 | loss: 0.0311985 | val_loss: 0.031202 | Time: 5354.73 ms
[2022-02-20 18:05:41	                main:574]	:	INFO	:	Epoch 2024 | loss: 0.0311985 | val_loss: 0.0312023 | Time: 5354.15 ms
[2022-02-20 18:05:46	                main:574]	:	INFO	:	Epoch 2025 | loss: 0.0311984 | val_loss: 0.031203 | Time: 5162.6 ms
[2022-02-20 18:05:51	                main:574]	:	INFO	:	Epoch 2026 | loss: 0.0311969 | val_loss: 0.0312044 | Time: 5155.92 ms
[2022-02-20 18:05:57	                main:574]	:	INFO	:	Epoch 2027 | loss: 0.0311964 | val_loss: 0.0312022 | Time: 5332.79 ms
[2022-02-20 18:06:02	                main:574]	:	INFO	:	Epoch 2028 | loss: 0.031196 | val_loss: 0.0312024 | Time: 4937.83 ms
[2022-02-20 18:06:07	                main:574]	:	INFO	:	Epoch 2029 | loss: 0.0311965 | val_loss: 0.0312017 | Time: 5467.47 ms
[2022-02-20 18:06:12	                main:574]	:	INFO	:	Epoch 2030 | loss: 0.0311964 | val_loss: 0.0312033 | Time: 4999.55 ms
[2022-02-20 18:06:17	                main:574]	:	INFO	:	Epoch 2031 | loss: 0.0311955 | val_loss: 0.0312008 | Time: 5136.11 ms
[2022-02-20 18:06:23	                main:574]	:	INFO	:	Epoch 2032 | loss: 0.0311951 | val_loss: 0.0312018 | Time: 5327.2 ms
[2022-02-20 18:06:28	                main:574]	:	INFO	:	Epoch 2033 | loss: 0.0311963 | val_loss: 0.0312013 | Time: 5098.35 ms
[2022-02-20 18:06:33	                main:574]	:	INFO	:	Epoch 2034 | loss: 0.0311951 | val_loss: 0.0312013 | Time: 5281.63 ms
[2022-02-20 18:06:38	                main:574]	:	INFO	:	Epoch 2035 | loss: 0.031196 | val_loss: 0.0312006 | Time: 4888.49 ms
[2022-02-20 18:06:43	                main:574]	:	INFO	:	Epoch 2036 | loss: 0.0311947 | val_loss: 0.0312001 | Time: 5288.15 ms
[2022-02-20 18:06:49	                main:574]	:	INFO	:	Epoch 2037 | loss: 0.0311941 | val_loss: 0.0312026 | Time: 5312.06 ms
[2022-02-20 18:06:54	                main:574]	:	INFO	:	Epoch 2038 | loss: 0.0311941 | val_loss: 0.0312015 | Time: 5321.35 ms
[2022-02-20 18:06:59	                main:574]	:	INFO	:	Epoch 2039 | loss: 0.0311949 | val_loss: 0.0312015 | Time: 5482.59 ms
[2022-02-20 18:07:04	                main:574]	:	INFO	:	Epoch 2040 | loss: 0.0311946 | val_loss: 0.0312028 | Time: 5054.51 ms
[2022-02-20 18:07:10	                main:574]	:	INFO	:	Epoch 2041 | loss: 0.0311969 | val_loss: 0.0312029 | Time: 5413.09 ms
[2022-02-20 18:07:15	                main:574]	:	INFO	:	Epoch 2042 | loss: 0.0311981 | val_loss: 0.0312013 | Time: 5421.04 ms
[2022-02-20 18:07:20	                main:574]	:	INFO	:	Epoch 2043 | loss: 0.031198 | val_loss: 0.0312023 | Time: 4984.32 ms
[2022-02-20 18:07:25	                main:574]	:	INFO	:	Epoch 2044 | loss: 0.0311961 | val_loss: 0.0312044 | Time: 5122.6 ms
[2022-02-20 18:07:30	                main:574]	:	INFO	:	Epoch 2045 | loss: 0.031197 | val_loss: 0.0312012 | Time: 4982.29 ms
[2022-02-20 18:07:35	                main:574]	:	INFO	:	Epoch 2046 | loss: 0.0311966 | val_loss: 0.0312023 | Time: 5084.41 ms
[2022-02-20 18:07:41	                main:574]	:	INFO	:	Epoch 2047 | loss: 0.0311962 | val_loss: 0.0312029 | Time: 5425.9 ms
[2022-02-20 18:07:46	                main:574]	:	INFO	:	Epoch 2048 | loss: 0.031195 | val_loss: 0.0312016 | Time: 4930.6 ms
[2022-02-20 18:07:46	                main:597]	:	INFO	:	Saving trained model to model-final.pt, val_loss 0.0312016
[2022-02-20 18:07:46	                main:603]	:	INFO	:	Saving end state to config to file
[2022-02-20 18:07:46	                main:608]	:	INFO	:	Success, exiting..
18:07:46 (22380): called boinc_finish(0)

</stderr_txt>
]]>
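
Note on the logged configuration: this result page does not include the client's model source, but the echoed parameters (GRU, hidden width 12, 4 recurrent layers, 4 backend layers, batch size 128, learning rate 0.01) pin down the network's rough shape. Below is a minimal PyTorch sketch of a model with that shape. It is an illustration only: the real client is C++/libTorch, and the input/output widths and the reading of "backend layers" as a fully connected head after the GRU stack are assumptions, not project code.

    # Hypothetical sketch of a network matching the logged configuration.
    # Values marked ASSUMED do not appear in the log.
    import torch
    import torch.nn as nn

    class GRUNet(nn.Module):
        def __init__(self, n_inputs=8, n_outputs=8,   # ASSUMED widths
                     hidden_width=12,                  # "Hidden Width: 12"
                     recurrent_layers=4,               # "# Recurrent Layers: 4"
                     backend_layers=4):                # "# Backend Layers: 4"
            super().__init__()
            self.gru = nn.GRU(n_inputs, hidden_width,
                              num_layers=recurrent_layers, batch_first=True)
            # "Backend" interpreted here as a small feed-forward head.
            head = []
            for _ in range(backend_layers - 1):
                head += [nn.Linear(hidden_width, hidden_width), nn.ReLU()]
            head.append(nn.Linear(hidden_width, n_outputs))
            self.backend = nn.Sequential(*head)

        def forward(self, x):              # x: (batch, seq_len, n_inputs)
            out, _ = self.gru(x)
            return self.backend(out)       # per-time-step outputs (ASSUMED)

    model = GRUNet()
    opt = torch.optim.Adam(model.parameters(), lr=0.01)  # "Learning Rate: 0.01"
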
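The load_hdf5_ds_into_tensor lines report the training split (/Xt, /Yt, 2048 examples) and validation split (/Xv, /Yv, 512 examples) being read from dataset.hdf5 into memory before the DataLoader is built. A rough Python equivalent is sketched below, assuming h5py-readable float datasets; the project's actual loader is not shown on this page.

    # Illustrative loader for the /Xt, /Yt, /Xv, /Yv datasets named in the log.
    # Mirrors only the "load fully into memory" behaviour.
    import h5py
    import torch

    def load_hdf5_ds_into_tensor(path, name):
        with h5py.File(path, "r") as f:
            return torch.as_tensor(f[name][...])   # read the whole dataset

    Xt = load_hdf5_ds_into_tensor("dataset.hdf5", "/Xt")  # 2048 training examples
    Yt = load_hdf5_ds_into_tensor("dataset.hdf5", "/Yt")
    Xv = load_hdf5_ds_into_tensor("dataset.hdf5", "/Xv")  # 512 validation examples
    Yv = load_hdf5_ds_into_tensor("dataset.hdf5", "/Yv")

    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(Xt, Yt),
        batch_size=128, shuffle=True)               # "Batch Size: 128"
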
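The log also shows the task being interrupted shortly after 17:36 while loading /Xt, restarting at 18:03, and resuming from snapshot.pt at epoch 1995. Training then stops at epoch 2048 because "Max Epochs: 2048" is reached: the validation loss plateaus near 0.0312 and never falls below the 0.0001 threshold, so the early-exit condition never fires, yet model-final.pt is still written at the end. A schematic resume-and-train loop consistent with that behaviour follows, continuing the two sketches above (model, opt, train_loader, Xv, Yv). The checkpoint field names and the MSE loss are invented for illustration, and the configured "Patience: 10" is omitted because its exact role (stopping vs. learning-rate scheduling) is not visible in the log.

    # Schematic checkpoint-resume loop reflecting the control flow in the log;
    # not the project's implementation.
    import os
    import torch

    MAX_EPOCHS = 2048          # "Max Epochs: 2048"
    VAL_LOSS_THRESHOLD = 1e-4  # "Validation Loss Threshold: 0.0001"

    start_epoch = 0
    if os.path.exists("snapshot.pt"):                  # "Found checkpoint..."
        ckpt = torch.load("snapshot.pt")               # field names ASSUMED
        model.load_state_dict(ckpt["model"])
        opt.load_state_dict(ckpt["optimizer"])
        start_epoch = ckpt["epoch"]                    # 1995 in this run

    loss_fn = torch.nn.MSELoss()                       # ASSUMED loss function

    for epoch in range(start_epoch, MAX_EPOCHS):
        model.train()
        for xb, yb in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model(Xv), Yv).item()

        torch.save({"model": model.state_dict(),
                    "optimizer": opt.state_dict(),
                    "epoch": epoch + 1}, "snapshot.pt")  # enables later restarts

        if val_loss < VAL_LOSS_THRESHOLD:                # never true in this run
            break

    torch.save(model.state_dict(), "model-final.pt")     # "Saving trained model..."
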


©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)