Task 13950371

Name ParityModified-1647048878-4928-1-0_1
Workunit 10611414
Created 28 Mar 2022, 18:00:05 UTC
Sent 28 Mar 2022, 18:23:04 UTC
Report deadline 5 Apr 2022, 18:23:04 UTC
Received 31 Mar 2022, 8:24:38 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x00000000)
Computer ID 6180
Run time 3 hours 41 min 3 sec
CPU time 3 hours 7 min 6 sec
Validate state Valid
Credit 4,160.00
Device peak FLOPS 884.73 GFLOPS
Application version Machine Learning Dataset Generator (GPU) v9.75 (cuda10200) windows_x86_64
Peak working set size 1.54 GB
Peak swap size 3.44 GB
Peak disk usage 1.54 GB

Stderr output

<core_client_version>7.16.20</core_client_version>
<![CDATA[
<stderr_txt>
659 | val_loss: 0.031165 | Time: 6277.97 ms
[2022-03-30 20:22:44	                main:574]	:	INFO	:	Epoch 1787 | loss: 0.0311653 | val_loss: 0.0311633 | Time: 6263.98 ms
[2022-03-30 20:22:51	                main:574]	:	INFO	:	Epoch 1788 | loss: 0.0311639 | val_loss: 0.0311647 | Time: 6287.32 ms
[2022-03-30 20:22:57	                main:574]	:	INFO	:	Epoch 1789 | loss: 0.0311663 | val_loss: 0.0311692 | Time: 6250.12 ms
[2022-03-30 20:23:03	                main:574]	:	INFO	:	Epoch 1790 | loss: 0.0311688 | val_loss: 0.0311678 | Time: 6287.64 ms
[2022-03-30 20:23:09	                main:574]	:	INFO	:	Epoch 1791 | loss: 0.0311678 | val_loss: 0.0311671 | Time: 6270.14 ms
[2022-03-30 20:23:16	                main:574]	:	INFO	:	Epoch 1792 | loss: 0.0311661 | val_loss: 0.0311668 | Time: 6281.78 ms
[2022-03-30 20:23:22	                main:574]	:	INFO	:	Epoch 1793 | loss: 0.0311672 | val_loss: 0.0311664 | Time: 6318.84 ms
[2022-03-30 20:23:28	                main:574]	:	INFO	:	Epoch 1794 | loss: 0.0311656 | val_loss: 0.031166 | Time: 6279.76 ms
[2022-03-30 20:23:35	                main:574]	:	INFO	:	Epoch 1795 | loss: 0.0311633 | val_loss: 0.0311653 | Time: 6287.85 ms
[2022-03-30 20:23:41	                main:574]	:	INFO	:	Epoch 1796 | loss: 0.0311629 | val_loss: 0.0311632 | Time: 6287.01 ms
[2022-03-30 20:23:47	                main:574]	:	INFO	:	Epoch 1797 | loss: 0.0311629 | val_loss: 0.0311627 | Time: 6280.6 ms
[2022-03-30 20:23:53	                main:574]	:	INFO	:	Epoch 1798 | loss: 0.0311639 | val_loss: 0.0311628 | Time: 6277.64 ms
[2022-03-30 20:24:00	                main:574]	:	INFO	:	Epoch 1799 | loss: 0.031163 | val_loss: 0.0311614 | Time: 6259.52 ms
[2022-03-30 20:24:06	                main:574]	:	INFO	:	Epoch 1800 | loss: 0.0311607 | val_loss: 0.031161 | Time: 6276.73 ms
[2022-03-30 20:24:12	                main:574]	:	INFO	:	Epoch 1801 | loss: 0.0311598 | val_loss: 0.0311608 | Time: 6285.92 ms
[2022-03-30 20:24:19	                main:574]	:	INFO	:	Epoch 1802 | loss: 0.0311596 | val_loss: 0.031163 | Time: 6284.09 ms
[2022-03-30 20:24:25	                main:574]	:	INFO	:	Epoch 1803 | loss: 0.0311591 | val_loss: 0.0311608 | Time: 6313.98 ms
[2022-03-30 20:24:31	                main:574]	:	INFO	:	Epoch 1804 | loss: 0.0311601 | val_loss: 0.0311609 | Time: 6293.44 ms
[2022-03-30 20:24:37	                main:574]	:	INFO	:	Epoch 1805 | loss: 0.0311595 | val_loss: 0.0311604 | Time: 6262.77 ms
[2022-03-30 20:24:44	                main:574]	:	INFO	:	Epoch 1806 | loss: 0.0311594 | val_loss: 0.03116 | Time: 6333.86 ms
[2022-03-30 20:24:50	                main:574]	:	INFO	:	Epoch 1807 | loss: 0.0311588 | val_loss: 0.0311582 | Time: 6283.56 ms
[2022-03-30 20:24:56	                main:574]	:	INFO	:	Epoch 1808 | loss: 0.0311595 | val_loss: 0.0311583 | Time: 6270.6 ms
[2022-03-30 20:25:03	                main:574]	:	INFO	:	Epoch 1809 | loss: 0.0311607 | val_loss: 0.0311582 | Time: 6292.99 ms
[2022-03-30 20:25:09	                main:574]	:	INFO	:	Epoch 1810 | loss: 0.0311587 | val_loss: 0.0311627 | Time: 6299.82 ms
[2022-03-30 20:25:15	                main:574]	:	INFO	:	Epoch 1811 | loss: 0.0311586 | val_loss: 0.0311553 | Time: 6320.51 ms
[2022-03-30 20:25:22	                main:574]	:	INFO	:	Epoch 1812 | loss: 0.0311576 | val_loss: 0.0311579 | Time: 6384.57 ms
[2022-03-30 20:25:28	                main:574]	:	INFO	:	Epoch 1813 | loss: 0.0311575 | val_loss: 0.0311564 | Time: 6344.6 ms
[2022-03-30 20:25:34	                main:574]	:	INFO	:	Epoch 1814 | loss: 0.0311562 | val_loss: 0.031153 | Time: 6281.75 ms
[2022-03-30 20:25:41	                main:574]	:	INFO	:	Epoch 1815 | loss: 0.0311558 | val_loss: 0.031153 | Time: 6259.07 ms
[2022-03-30 20:25:47	                main:574]	:	INFO	:	Epoch 1816 | loss: 0.0311549 | val_loss: 0.0311541 | Time: 6327.59 ms
[2022-03-30 20:25:53	                main:574]	:	INFO	:	Epoch 1817 | loss: 0.0311545 | val_loss: 0.0311547 | Time: 6330.29 ms
[2022-03-30 20:26:00	                main:574]	:	INFO	:	Epoch 1818 | loss: 0.0311549 | val_loss: 0.0311557 | Time: 6444.88 ms
[2022-03-30 20:26:06	                main:574]	:	INFO	:	Epoch 1819 | loss: 0.0311537 | val_loss: 0.0311544 | Time: 6402.82 ms
[2022-03-30 20:26:12	                main:574]	:	INFO	:	Epoch 1820 | loss: 0.0311549 | val_loss: 0.0311562 | Time: 6335.32 ms
[2022-03-30 20:26:19	                main:574]	:	INFO	:	Epoch 1821 | loss: 0.0311549 | val_loss: 0.0311545 | Time: 6286.37 ms
[2022-03-30 20:26:25	                main:574]	:	INFO	:	Epoch 1822 | loss: 0.0311534 | val_loss: 0.0311536 | Time: 6307.87 ms
[2022-03-30 20:26:31	                main:574]	:	INFO	:	Epoch 1823 | loss: 0.0311543 | val_loss: 0.0311543 | Time: 6259.32 ms
[2022-03-30 20:26:38	                main:574]	:	INFO	:	Epoch 1824 | loss: 0.0311546 | val_loss: 0.0311515 | Time: 6275.48 ms
[2022-03-30 20:26:44	                main:574]	:	INFO	:	Epoch 1825 | loss: 0.0311529 | val_loss: 0.0311524 | Time: 6286.64 ms
[2022-03-30 20:26:50	                main:574]	:	INFO	:	Epoch 1826 | loss: 0.0311522 | val_loss: 0.0311532 | Time: 6260.12 ms
[2022-03-30 20:26:56	                main:574]	:	INFO	:	Epoch 1827 | loss: 0.0311516 | val_loss: 0.0311545 | Time: 6271.93 ms
[2022-03-30 20:27:03	                main:574]	:	INFO	:	Epoch 1828 | loss: 0.0311517 | val_loss: 0.0311557 | Time: 6271.56 ms
[2022-03-30 20:27:09	                main:574]	:	INFO	:	Epoch 1829 | loss: 0.0311515 | val_loss: 0.0311542 | Time: 6275.42 ms
[2022-03-30 20:27:15	                main:574]	:	INFO	:	Epoch 1830 | loss: 0.0311546 | val_loss: 0.0311562 | Time: 6267.64 ms
[2022-03-30 20:27:22	                main:574]	:	INFO	:	Epoch 1831 | loss: 0.0311531 | val_loss: 0.0311556 | Time: 6280.02 ms
[2022-03-30 20:27:28	                main:574]	:	INFO	:	Epoch 1832 | loss: 0.0311519 | val_loss: 0.0311538 | Time: 6283.52 ms
[2022-03-30 20:27:34	                main:574]	:	INFO	:	Epoch 1833 | loss: 0.0311505 | val_loss: 0.0311552 | Time: 6274.3 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 00:42:06	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 00:42:06	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 00:42:06	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 00:42:06	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 00:42:06	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 00:42:06	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 00:42:06	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 00:42:06	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 00:42:06	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 00:42:06	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 00:42:06	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 00:42:06	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 00:42:06	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 00:42:06	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 00:42:06	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 00:42:06	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 00:42:06	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 00:42:06	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 00:42:06	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 00:42:06	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 00:42:06	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 00:42:06	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 00:42:07	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 00:42:11	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 00:42:11	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 00:42:11	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 00:42:12	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 00:42:12	                main:494]	:	INFO	:	Creating Model
[2022-03-31 00:42:12	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 00:42:12	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 00:42:12	                main:512]	:	INFO	:	Loading config
[2022-03-31 00:42:12	                main:514]	:	INFO	:	Loading state
[2022-03-31 00:42:13	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 00:42:13	                main:562]	:	INFO	:	Starting Training
[2022-03-31 00:42:21	                main:574]	:	INFO	:	Epoch 1825 | loss: 0.0311849 | val_loss: 0.0311658 | Time: 7073.74 ms
[2022-03-31 00:42:27	                main:574]	:	INFO	:	Epoch 1826 | loss: 0.0311571 | val_loss: 0.0311572 | Time: 6255.96 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 00:44:36	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 00:44:36	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 00:44:36	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 00:44:36	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 00:44:37	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 00:44:37	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 00:44:37	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 00:44:37	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 00:44:37	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 00:44:37	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 00:44:37	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 00:44:37	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 00:44:37	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 00:44:37	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 00:44:37	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 00:44:37	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 00:44:37	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 00:44:37	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 00:44:37	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 00:44:37	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 00:44:37	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 00:44:37	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 00:44:38	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 00:44:41	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 00:44:41	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 00:44:41	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 00:44:41	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 00:44:41	                main:494]	:	INFO	:	Creating Model
[2022-03-31 00:44:41	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 00:44:41	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 00:44:41	                main:512]	:	INFO	:	Loading config
[2022-03-31 00:44:41	                main:514]	:	INFO	:	Loading state
[2022-03-31 00:44:42	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 00:44:42	                main:562]	:	INFO	:	Starting Training
[2022-03-31 00:44:49	                main:574]	:	INFO	:	Epoch 1825 | loss: 0.0311793 | val_loss: 0.031165 | Time: 7014.07 ms
[2022-03-31 00:44:56	                main:574]	:	INFO	:	Epoch 1826 | loss: 0.0311575 | val_loss: 0.0311576 | Time: 6718.19 ms
[2022-03-31 00:45:02	                main:574]	:	INFO	:	Epoch 1827 | loss: 0.0311555 | val_loss: 0.031155 | Time: 6283.6 ms
[2022-03-31 00:45:09	                main:574]	:	INFO	:	Epoch 1828 | loss: 0.0311532 | val_loss: 0.0311557 | Time: 6288.6 ms
[2022-03-31 00:45:15	                main:574]	:	INFO	:	Epoch 1829 | loss: 0.0311565 | val_loss: 0.0311597 | Time: 6309.66 ms
[2022-03-31 00:45:21	                main:574]	:	INFO	:	Epoch 1830 | loss: 0.0311514 | val_loss: 0.031163 | Time: 6311.89 ms
[2022-03-31 00:45:28	                main:574]	:	INFO	:	Epoch 1831 | loss: 0.0311504 | val_loss: 0.031155 | Time: 6297.28 ms
[2022-03-31 00:45:34	                main:574]	:	INFO	:	Epoch 1832 | loss: 0.0311495 | val_loss: 0.0311555 | Time: 6321.29 ms
[2022-03-31 00:45:40	                main:574]	:	INFO	:	Epoch 1833 | loss: 0.0311498 | val_loss: 0.0311546 | Time: 6390.01 ms
[2022-03-31 00:45:47	                main:574]	:	INFO	:	Epoch 1834 | loss: 0.0311528 | val_loss: 0.0311548 | Time: 6265.58 ms
[2022-03-31 00:45:53	                main:574]	:	INFO	:	Epoch 1835 | loss: 0.0311532 | val_loss: 0.0311553 | Time: 6296.57 ms
[2022-03-31 00:45:59	                main:574]	:	INFO	:	Epoch 1836 | loss: 0.0311538 | val_loss: 0.031155 | Time: 6289.2 ms
[2022-03-31 00:46:06	                main:574]	:	INFO	:	Epoch 1837 | loss: 0.031154 | val_loss: 0.0311543 | Time: 6321.51 ms
[2022-03-31 00:46:12	                main:574]	:	INFO	:	Epoch 1838 | loss: 0.0311532 | val_loss: 0.0311579 | Time: 6284.52 ms
[2022-03-31 00:46:18	                main:574]	:	INFO	:	Epoch 1839 | loss: 0.0311521 | val_loss: 0.0311518 | Time: 6258.21 ms
[2022-03-31 00:46:24	                main:574]	:	INFO	:	Epoch 1840 | loss: 0.0311485 | val_loss: 0.0311506 | Time: 6263.17 ms
[2022-03-31 00:46:31	                main:574]	:	INFO	:	Epoch 1841 | loss: 0.0311492 | val_loss: 0.0311541 | Time: 6274.44 ms
[2022-03-31 00:46:37	                main:574]	:	INFO	:	Epoch 1842 | loss: 0.0311481 | val_loss: 0.0311502 | Time: 6524.55 ms
[2022-03-31 00:46:44	                main:574]	:	INFO	:	Epoch 1843 | loss: 0.0311471 | val_loss: 0.0311514 | Time: 6372.33 ms
[2022-03-31 00:46:50	                main:574]	:	INFO	:	Epoch 1844 | loss: 0.0311459 | val_loss: 0.031151 | Time: 6360.92 ms
[2022-03-31 00:46:56	                main:574]	:	INFO	:	Epoch 1845 | loss: 0.0311446 | val_loss: 0.0311516 | Time: 6285.64 ms
[2022-03-31 00:47:03	                main:574]	:	INFO	:	Epoch 1846 | loss: 0.0311446 | val_loss: 0.0311508 | Time: 6261.28 ms
[2022-03-31 00:47:09	                main:574]	:	INFO	:	Epoch 1847 | loss: 0.0311414 | val_loss: 0.0311502 | Time: 6276.35 ms
[2022-03-31 00:47:15	                main:574]	:	INFO	:	Epoch 1848 | loss: 0.0311402 | val_loss: 0.0311512 | Time: 6306.32 ms
[2022-03-31 00:47:21	                main:574]	:	INFO	:	Epoch 1849 | loss: 0.031145 | val_loss: 0.0311683 | Time: 6317.11 ms
[2022-03-31 00:47:28	                main:574]	:	INFO	:	Epoch 1850 | loss: 0.0311516 | val_loss: 0.0311622 | Time: 6304.05 ms
[2022-03-31 00:47:34	                main:574]	:	INFO	:	Epoch 1851 | loss: 0.0311459 | val_loss: 0.031155 | Time: 6272.93 ms
[2022-03-31 00:47:40	                main:574]	:	INFO	:	Epoch 1852 | loss: 0.0311503 | val_loss: 0.0311621 | Time: 6256.45 ms
[2022-03-31 00:47:47	                main:574]	:	INFO	:	Epoch 1853 | loss: 0.0311501 | val_loss: 0.0311558 | Time: 6260.98 ms
[2022-03-31 00:47:53	                main:574]	:	INFO	:	Epoch 1854 | loss: 0.03115 | val_loss: 0.0311536 | Time: 6271.05 ms
[2022-03-31 00:47:59	                main:574]	:	INFO	:	Epoch 1855 | loss: 0.0311475 | val_loss: 0.0311526 | Time: 6270.62 ms
[2022-03-31 00:48:05	                main:574]	:	INFO	:	Epoch 1856 | loss: 0.0311467 | val_loss: 0.0311495 | Time: 6295.99 ms
[2022-03-31 00:48:12	                main:574]	:	INFO	:	Epoch 1857 | loss: 0.0311452 | val_loss: 0.0311476 | Time: 6305.68 ms
[2022-03-31 00:48:18	                main:574]	:	INFO	:	Epoch 1858 | loss: 0.0311449 | val_loss: 0.0311479 | Time: 6276.57 ms
[2022-03-31 00:48:24	                main:574]	:	INFO	:	Epoch 1859 | loss: 0.0311434 | val_loss: 0.0311439 | Time: 6266.16 ms
[2022-03-31 00:48:31	                main:574]	:	INFO	:	Epoch 1860 | loss: 0.0311428 | val_loss: 0.0311473 | Time: 6278.52 ms
[2022-03-31 00:48:37	                main:574]	:	INFO	:	Epoch 1861 | loss: 0.031144 | val_loss: 0.0311462 | Time: 6284.12 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 00:49:44	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 00:49:44	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 00:49:44	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 00:49:44	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 00:49:44	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 00:49:44	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 00:49:44	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 00:49:44	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 00:49:44	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 00:49:44	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 00:49:44	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 00:49:44	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 00:49:44	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 00:49:44	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 00:49:44	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 00:49:44	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 00:49:44	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 00:49:44	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 00:49:44	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 00:49:44	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 00:49:45	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 00:49:45	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 00:49:46	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 00:49:49	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 00:49:49	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 00:49:49	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 00:49:49	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 00:49:49	                main:494]	:	INFO	:	Creating Model
[2022-03-31 00:49:49	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 00:49:49	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 00:49:49	                main:512]	:	INFO	:	Loading config
[2022-03-31 00:49:49	                main:514]	:	INFO	:	Loading state
[2022-03-31 00:49:51	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 00:49:51	                main:562]	:	INFO	:	Starting Training
[2022-03-31 00:49:57	                main:574]	:	INFO	:	Epoch 1854 | loss: 0.0311668 | val_loss: 0.0311503 | Time: 6853.44 ms
[2022-03-31 00:50:04	                main:574]	:	INFO	:	Epoch 1855 | loss: 0.0311439 | val_loss: 0.0311502 | Time: 6268.44 ms
[2022-03-31 00:50:10	                main:574]	:	INFO	:	Epoch 1856 | loss: 0.0311405 | val_loss: 0.0311459 | Time: 6286.64 ms
[2022-03-31 00:50:16	                main:574]	:	INFO	:	Epoch 1857 | loss: 0.0311407 | val_loss: 0.0311488 | Time: 6258.13 ms
[2022-03-31 00:50:23	                main:574]	:	INFO	:	Epoch 1858 | loss: 0.0311395 | val_loss: 0.0311482 | Time: 6263.91 ms
[2022-03-31 00:50:29	                main:574]	:	INFO	:	Epoch 1859 | loss: 0.0311405 | val_loss: 0.031147 | Time: 6268.05 ms
[2022-03-31 00:50:35	                main:574]	:	INFO	:	Epoch 1860 | loss: 0.0311379 | val_loss: 0.0311454 | Time: 6310.33 ms
[2022-03-31 00:50:41	                main:574]	:	INFO	:	Epoch 1861 | loss: 0.0311359 | val_loss: 0.031145 | Time: 6295.43 ms
[2022-03-31 00:50:48	                main:574]	:	INFO	:	Epoch 1862 | loss: 0.0311353 | val_loss: 0.0311463 | Time: 6325.41 ms
[2022-03-31 00:50:54	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311362 | val_loss: 0.0311456 | Time: 6288.61 ms
[2022-03-31 00:51:00	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0311387 | val_loss: 0.031145 | Time: 6272.55 ms
[2022-03-31 00:51:07	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.0311377 | val_loss: 0.0311477 | Time: 6321.77 ms
[2022-03-31 00:51:13	                main:574]	:	INFO	:	Epoch 1866 | loss: 0.0311378 | val_loss: 0.0311457 | Time: 6502.62 ms
[2022-03-31 00:51:19	                main:574]	:	INFO	:	Epoch 1867 | loss: 0.0311357 | val_loss: 0.0311441 | Time: 6276.45 ms
[2022-03-31 00:51:26	                main:574]	:	INFO	:	Epoch 1868 | loss: 0.0311398 | val_loss: 0.0311469 | Time: 6300.23 ms
[2022-03-31 00:51:32	                main:574]	:	INFO	:	Epoch 1869 | loss: 0.0311406 | val_loss: 0.0311464 | Time: 6274.52 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 01:08:30	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 01:08:30	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 01:08:30	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 01:08:30	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 01:08:30	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 01:08:30	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 01:08:30	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 01:08:30	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 01:08:30	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 01:08:30	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 01:08:30	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 01:08:30	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 01:08:30	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 01:08:30	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 01:08:30	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 01:08:30	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 01:08:30	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 01:08:31	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 01:08:31	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 01:08:31	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 01:08:31	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 01:08:31	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 01:08:31	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 01:08:34	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 01:08:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 01:08:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 01:08:34	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 01:08:34	                main:494]	:	INFO	:	Creating Model
[2022-03-31 01:08:34	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 01:08:34	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 01:08:34	                main:512]	:	INFO	:	Loading config
[2022-03-31 01:08:34	                main:514]	:	INFO	:	Loading state
[2022-03-31 01:08:35	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 01:08:35	                main:562]	:	INFO	:	Starting Training
[2022-03-31 01:08:42	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311644 | val_loss: 0.0311569 | Time: 6609.82 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 01:10:11	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 01:10:11	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 01:10:11	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 01:10:11	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 01:10:11	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 01:10:11	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 01:10:11	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 01:10:11	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 01:10:12	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 01:10:12	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 01:10:12	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 01:10:12	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 01:10:12	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 01:10:12	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 01:10:12	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 01:10:12	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 01:10:12	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 01:10:12	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 01:10:12	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 01:10:12	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 01:10:12	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 01:10:12	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 01:10:12	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 01:10:14	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 01:10:14	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 01:10:15	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 01:10:15	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 01:10:15	                main:494]	:	INFO	:	Creating Model
[2022-03-31 01:10:15	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 01:10:15	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 01:10:15	                main:512]	:	INFO	:	Loading config
[2022-03-31 01:10:15	                main:514]	:	INFO	:	Loading state
[2022-03-31 01:10:16	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 01:10:16	                main:562]	:	INFO	:	Starting Training
[2022-03-31 01:10:22	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311624 | val_loss: 0.0311481 | Time: 6464.88 ms
[2022-03-31 01:10:28	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0311401 | val_loss: 0.0311446 | Time: 6248.55 ms
[2022-03-31 01:10:35	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.0311378 | val_loss: 0.0311428 | Time: 6261.12 ms
[2022-03-31 01:10:41	                main:574]	:	INFO	:	Epoch 1866 | loss: 0.0311359 | val_loss: 0.0311418 | Time: 6271.8 ms
[2022-03-31 01:10:47	                main:574]	:	INFO	:	Epoch 1867 | loss: 0.0311351 | val_loss: 0.0311439 | Time: 6288.09 ms
[2022-03-31 01:10:54	                main:574]	:	INFO	:	Epoch 1868 | loss: 0.031136 | val_loss: 0.031149 | Time: 6285.1 ms
[2022-03-31 01:11:00	                main:574]	:	INFO	:	Epoch 1869 | loss: 0.0311381 | val_loss: 0.0311495 | Time: 6303.62 ms
[2022-03-31 01:11:06	                main:574]	:	INFO	:	Epoch 1870 | loss: 0.0311407 | val_loss: 0.0311523 | Time: 6315.32 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 01:12:13	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 01:12:13	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 01:12:13	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 01:12:13	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 01:12:13	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 01:12:13	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 01:12:13	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 01:12:13	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 01:12:13	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 01:12:13	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 01:12:13	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 01:12:13	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 01:12:13	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 01:12:13	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 01:12:13	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 01:12:13	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 01:12:13	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 01:12:13	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 01:12:13	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 01:12:13	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 01:12:13	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 01:12:13	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 01:12:13	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 01:12:16	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 01:12:16	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 01:12:16	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 01:12:16	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 01:12:16	                main:494]	:	INFO	:	Creating Model
[2022-03-31 01:12:16	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 01:12:16	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 01:12:16	                main:512]	:	INFO	:	Loading config
[2022-03-31 01:12:16	                main:514]	:	INFO	:	Loading state
[2022-03-31 01:12:17	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 01:12:17	                main:562]	:	INFO	:	Starting Training
[2022-03-31 01:12:23	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311621 | val_loss: 0.0311518 | Time: 6473.94 ms
[2022-03-31 01:12:30	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0311385 | val_loss: 0.0311417 | Time: 6242.82 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 01:13:31	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 01:13:31	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 01:13:31	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 01:13:32	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 01:13:32	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 01:13:32	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 01:13:32	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 01:13:32	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 01:13:32	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 01:13:32	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 01:13:32	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 01:13:32	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 01:13:32	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 01:13:32	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 01:13:32	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 01:13:32	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 01:13:32	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 01:13:32	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 01:13:32	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 01:13:32	                main:484]	:	INFO	:	    # Threads: 1
[2022-03-31 01:13:32	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 01:13:32	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 01:13:32	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 01:13:34	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 01:13:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 01:13:35	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 01:13:35	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
[2022-03-31 01:13:35	                main:494]	:	INFO	:	Creating Model
[2022-03-31 01:13:35	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 01:13:35	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 01:13:35	                main:512]	:	INFO	:	Loading config
[2022-03-31 01:13:35	                main:514]	:	INFO	:	Loading state
[2022-03-31 01:13:36	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 01:13:36	                main:562]	:	INFO	:	Starting Training
[2022-03-31 01:13:42	                main:574]	:	INFO	:	Epoch 1863 | loss: 0.0311598 | val_loss: 0.0311504 | Time: 6478.92 ms
[2022-03-31 01:13:49	                main:574]	:	INFO	:	Epoch 1864 | loss: 0.0311384 | val_loss: 0.0311462 | Time: 6254.14 ms
[2022-03-31 01:13:55	                main:574]	:	INFO	:	Epoch 1865 | loss: 0.031138 | val_loss: 0.0311463 | Time: 6322.01 ms
[2022-03-31 01:14:01	                main:574]	:	INFO	:	Epoch 1866 | loss: 0.0311366 | val_loss: 0.0311461 | Time: 6296.02 ms
[2022-03-31 01:14:07	                main:574]	:	INFO	:	Epoch 1867 | loss: 0.0311338 | val_loss: 0.031147 | Time: 6311.97 ms
[2022-03-31 01:14:14	                main:574]	:	INFO	:	Epoch 1868 | loss: 0.0311418 | val_loss: 0.0311511 | Time: 6291.07 ms
[2022-03-31 01:14:20	                main:574]	:	INFO	:	Epoch 1869 | loss: 0.0311422 | val_loss: 0.0311474 | Time: 6300.09 ms
[2022-03-31 01:14:26	                main:574]	:	INFO	:	Epoch 1870 | loss: 0.0311403 | val_loss: 0.0311429 | Time: 6254.3 ms
[2022-03-31 01:14:33	                main:574]	:	INFO	:	Epoch 1871 | loss: 0.031136 | val_loss: 0.0311411 | Time: 6256.78 ms
[2022-03-31 01:14:39	                main:574]	:	INFO	:	Epoch 1872 | loss: 0.0311346 | val_loss: 0.0311407 | Time: 6264.38 ms
[2022-03-31 01:14:45	                main:574]	:	INFO	:	Epoch 1873 | loss: 0.0311339 | val_loss: 0.0311434 | Time: 6300.67 ms
[2022-03-31 01:14:51	                main:574]	:	INFO	:	Epoch 1874 | loss: 0.0311338 | val_loss: 0.0311407 | Time: 6274.53 ms
[2022-03-31 01:14:58	                main:574]	:	INFO	:	Epoch 1875 | loss: 0.0311385 | val_loss: 0.0311515 | Time: 6266.13 ms
[2022-03-31 01:15:04	                main:574]	:	INFO	:	Epoch 1876 | loss: 0.0311455 | val_loss: 0.0311494 | Time: 6265.46 ms
[2022-03-31 01:15:10	                main:574]	:	INFO	:	Epoch 1877 | loss: 0.0311381 | val_loss: 0.0311467 | Time: 6277.57 ms
[2022-03-31 01:15:17	                main:574]	:	INFO	:	Epoch 1878 | loss: 0.0311351 | val_loss: 0.0311447 | Time: 6259.73 ms
[2022-03-31 01:15:23	                main:574]	:	INFO	:	Epoch 1879 | loss: 0.0311366 | val_loss: 0.031145 | Time: 6247.18 ms
[2022-03-31 01:15:29	                main:574]	:	INFO	:	Epoch 1880 | loss: 0.0311373 | val_loss: 0.0311468 | Time: 6252.76 ms
[2022-03-31 01:15:35	                main:574]	:	INFO	:	Epoch 1881 | loss: 0.0311388 | val_loss: 0.0311423 | Time: 6284.5 ms
[2022-03-31 01:15:42	                main:574]	:	INFO	:	Epoch 1882 | loss: 0.0311363 | val_loss: 0.0311422 | Time: 6277.24 ms
[2022-03-31 01:15:48	                main:574]	:	INFO	:	Epoch 1883 | loss: 0.0311357 | val_loss: 0.031142 | Time: 6276.37 ms
[2022-03-31 01:15:54	                main:574]	:	INFO	:	Epoch 1884 | loss: 0.031135 | val_loss: 0.0311428 | Time: 6274.07 ms
[2022-03-31 01:16:00	                main:574]	:	INFO	:	Epoch 1885 | loss: 0.0311336 | val_loss: 0.031142 | Time: 6263.46 ms
[2022-03-31 01:16:07	                main:574]	:	INFO	:	Epoch 1886 | loss: 0.0311339 | val_loss: 0.0311453 | Time: 6283.43 ms
[2022-03-31 01:16:13	                main:574]	:	INFO	:	Epoch 1887 | loss: 0.0311347 | val_loss: 0.0311423 | Time: 6268.25 ms
[2022-03-31 01:16:19	                main:574]	:	INFO	:	Epoch 1888 | loss: 0.0311358 | val_loss: 0.0311465 | Time: 6273.48 ms
[2022-03-31 01:16:26	                main:574]	:	INFO	:	Epoch 1889 | loss: 0.0311395 | val_loss: 0.0311519 | Time: 6255.42 ms
[2022-03-31 01:16:32	                main:574]	:	INFO	:	Epoch 1890 | loss: 0.0311399 | val_loss: 0.0311469 | Time: 6256.97 ms
[2022-03-31 01:16:38	                main:574]	:	INFO	:	Epoch 1891 | loss: 0.0311374 | val_loss: 0.0311438 | Time: 6276.62 ms
[2022-03-31 01:16:44	                main:574]	:	INFO	:	Epoch 1892 | loss: 0.0311357 | val_loss: 0.0311432 | Time: 6244.69 ms
[2022-03-31 01:16:51	                main:574]	:	INFO	:	Epoch 1893 | loss: 0.0311332 | val_loss: 0.0311426 | Time: 6279.52 ms
[2022-03-31 01:16:57	                main:574]	:	INFO	:	Epoch 1894 | loss: 0.0311352 | val_loss: 0.0311502 | Time: 6279.1 ms
[2022-03-31 01:17:04	                main:574]	:	INFO	:	Epoch 1895 | loss: 0.0311411 | val_loss: 0.0311495 | Time: 6634.47 ms
[2022-03-31 01:17:10	                main:574]	:	INFO	:	Epoch 1896 | loss: 0.031141 | val_loss: 0.0311529 | Time: 6265.11 ms
[2022-03-31 01:17:16	                main:574]	:	INFO	:	Epoch 1897 | loss: 0.0311415 | val_loss: 0.0311503 | Time: 6262.84 ms
[2022-03-31 01:17:22	                main:574]	:	INFO	:	Epoch 1898 | loss: 0.0311398 | val_loss: 0.0311516 | Time: 6265.46 ms
[2022-03-31 01:17:29	                main:574]	:	INFO	:	Epoch 1899 | loss: 0.0311382 | val_loss: 0.0311477 | Time: 6259.29 ms
[2022-03-31 01:17:35	                main:574]	:	INFO	:	Epoch 1900 | loss: 0.0311369 | val_loss: 0.0311485 | Time: 6271.66 ms
[2022-03-31 01:17:41	                main:574]	:	INFO	:	Epoch 1901 | loss: 0.031135 | val_loss: 0.0311434 | Time: 6310.72 ms
[2022-03-31 01:17:47	                main:574]	:	INFO	:	Epoch 1902 | loss: 0.0311355 | val_loss: 0.031145 | Time: 6277.17 ms
[2022-03-31 01:17:54	                main:574]	:	INFO	:	Epoch 1903 | loss: 0.0311353 | val_loss: 0.0311446 | Time: 6304.59 ms
[2022-03-31 01:18:00	                main:574]	:	INFO	:	Epoch 1904 | loss: 0.0311323 | val_loss: 0.0311448 | Time: 6288.04 ms
[2022-03-31 01:18:06	                main:574]	:	INFO	:	Epoch 1905 | loss: 0.0311317 | val_loss: 0.0311403 | Time: 6275.93 ms
[2022-03-31 01:18:13	                main:574]	:	INFO	:	Epoch 1906 | loss: 0.0311313 | val_loss: 0.0311432 | Time: 6246.96 ms
[2022-03-31 01:18:19	                main:574]	:	INFO	:	Epoch 1907 | loss: 0.0311334 | val_loss: 0.0311419 | Time: 6284.98 ms
[2022-03-31 01:18:25	                main:574]	:	INFO	:	Epoch 1908 | loss: 0.0311386 | val_loss: 0.0311606 | Time: 6292.78 ms
[2022-03-31 01:18:31	                main:574]	:	INFO	:	Epoch 1909 | loss: 0.0311582 | val_loss: 0.0311656 | Time: 6280.06 ms
[2022-03-31 01:18:38	                main:574]	:	INFO	:	Epoch 1910 | loss: 0.0311556 | val_loss: 0.0311621 | Time: 6264.38 ms
[2022-03-31 01:18:44	                main:574]	:	INFO	:	Epoch 1911 | loss: 0.0311504 | val_loss: 0.0311553 | Time: 6259.11 ms
[2022-03-31 01:18:50	                main:574]	:	INFO	:	Epoch 1912 | loss: 0.0311462 | val_loss: 0.0311567 | Time: 6280.31 ms
[2022-03-31 01:18:57	                main:574]	:	INFO	:	Epoch 1913 | loss: 0.0311438 | val_loss: 0.0311509 | Time: 6287.1 ms
[2022-03-31 01:19:03	                main:574]	:	INFO	:	Epoch 1914 | loss: 0.0311405 | val_loss: 0.0311495 | Time: 6313.09 ms
[2022-03-31 01:19:09	                main:574]	:	INFO	:	Epoch 1915 | loss: 0.0311393 | val_loss: 0.031149 | Time: 6302.16 ms
[2022-03-31 01:19:16	                main:574]	:	INFO	:	Epoch 1916 | loss: 0.031138 | val_loss: 0.0311464 | Time: 6304.97 ms
[2022-03-31 01:19:22	                main:574]	:	INFO	:	Epoch 1917 | loss: 0.0311368 | val_loss: 0.0311491 | Time: 6256.65 ms
[2022-03-31 01:19:28	                main:574]	:	INFO	:	Epoch 1918 | loss: 0.0311391 | val_loss: 0.03115 | Time: 6288.94 ms
[2022-03-31 01:19:34	                main:574]	:	INFO	:	Epoch 1919 | loss: 0.0311389 | val_loss: 0.0311541 | Time: 6261.13 ms
[2022-03-31 01:19:41	                main:574]	:	INFO	:	Epoch 1920 | loss: 0.0311511 | val_loss: 0.031156 | Time: 6270.49 ms
[2022-03-31 01:19:47	                main:574]	:	INFO	:	Epoch 1921 | loss: 0.0311528 | val_loss: 0.0311576 | Time: 6283.45 ms
[2022-03-31 01:19:53	                main:574]	:	INFO	:	Epoch 1922 | loss: 0.031151 | val_loss: 0.0311565 | Time: 6292.47 ms
[2022-03-31 01:19:59	                main:574]	:	INFO	:	Epoch 1923 | loss: 0.0311493 | val_loss: 0.0311534 | Time: 6271.45 ms
[2022-03-31 01:20:06	                main:574]	:	INFO	:	Epoch 1924 | loss: 0.0311472 | val_loss: 0.0311512 | Time: 6273.66 ms
[2022-03-31 01:20:12	                main:574]	:	INFO	:	Epoch 1925 | loss: 0.0311456 | val_loss: 0.0311505 | Time: 6259.13 ms
[2022-03-31 01:20:18	                main:574]	:	INFO	:	Epoch 1926 | loss: 0.0311457 | val_loss: 0.0311488 | Time: 6258.87 ms
[2022-03-31 01:20:25	                main:574]	:	INFO	:	Epoch 1927 | loss: 0.0311443 | val_loss: 0.0311483 | Time: 6276.19 ms
[2022-03-31 01:20:31	                main:574]	:	INFO	:	Epoch 1928 | loss: 0.0311447 | val_loss: 0.0311491 | Time: 6283.14 ms
[2022-03-31 01:20:37	                main:574]	:	INFO	:	Epoch 1929 | loss: 0.0311434 | val_loss: 0.0311497 | Time: 6259.72 ms
[2022-03-31 01:20:43	                main:574]	:	INFO	:	Epoch 1930 | loss: 0.0311421 | val_loss: 0.0311481 | Time: 6276.14 ms
[2022-03-31 01:20:50	                main:574]	:	INFO	:	Epoch 1931 | loss: 0.0311431 | val_loss: 0.0311492 | Time: 6289.94 ms
[2022-03-31 01:20:56	                main:574]	:	INFO	:	Epoch 1932 | loss: 0.031144 | val_loss: 0.0311498 | Time: 6297.08 ms
[2022-03-31 01:21:02	                main:574]	:	INFO	:	Epoch 1933 | loss: 0.0311427 | val_loss: 0.0311502 | Time: 6298.13 ms
[2022-03-31 01:21:09	                main:574]	:	INFO	:	Epoch 1934 | loss: 0.0311424 | val_loss: 0.0311492 | Time: 6288.98 ms
[2022-03-31 01:21:15	                main:574]	:	INFO	:	Epoch 1935 | loss: 0.0311417 | val_loss: 0.0311477 | Time: 6260.68 ms
[2022-03-31 01:21:21	                main:574]	:	INFO	:	Epoch 1936 | loss: 0.0311397 | val_loss: 0.031147 | Time: 6250.08 ms
[2022-03-31 01:21:27	                main:574]	:	INFO	:	Epoch 1937 | loss: 0.0311386 | val_loss: 0.0311452 | Time: 6306.97 ms
[2022-03-31 01:21:34	                main:574]	:	INFO	:	Epoch 1938 | loss: 0.0311381 | val_loss: 0.0311445 | Time: 6284.98 ms
[2022-03-31 01:21:40	                main:574]	:	INFO	:	Epoch 1939 | loss: 0.0311367 | val_loss: 0.0311448 | Time: 6265.16 ms
[2022-03-31 01:21:46	                main:574]	:	INFO	:	Epoch 1940 | loss: 0.0311367 | val_loss: 0.0311476 | Time: 6305.3 ms
[2022-03-31 01:21:53	                main:574]	:	INFO	:	Epoch 1941 | loss: 0.0311351 | val_loss: 0.0311416 | Time: 6308.47 ms
[2022-03-31 01:21:59	                main:574]	:	INFO	:	Epoch 1942 | loss: 0.0311347 | val_loss: 0.031142 | Time: 6268.08 ms
[2022-03-31 01:22:05	                main:574]	:	INFO	:	Epoch 1943 | loss: 0.0311357 | val_loss: 0.031145 | Time: 6253.94 ms
[2022-03-31 01:22:11	                main:574]	:	INFO	:	Epoch 1944 | loss: 0.0311351 | val_loss: 0.0311411 | Time: 6245.67 ms
[2022-03-31 01:22:18	                main:574]	:	INFO	:	Epoch 1945 | loss: 0.0311335 | val_loss: 0.031141 | Time: 6269.32 ms
[2022-03-31 01:22:24	                main:574]	:	INFO	:	Epoch 1946 | loss: 0.0311342 | val_loss: 0.0311408 | Time: 6298.16 ms
[2022-03-31 01:22:30	                main:574]	:	INFO	:	Epoch 1947 | loss: 0.0311331 | val_loss: 0.0311404 | Time: 6300.2 ms
[2022-03-31 01:22:37	                main:574]	:	INFO	:	Epoch 1948 | loss: 0.0311323 | val_loss: 0.0311415 | Time: 6285.15 ms
[2022-03-31 01:22:43	                main:574]	:	INFO	:	Epoch 1949 | loss: 0.0311311 | val_loss: 0.0311412 | Time: 6260.85 ms
[2022-03-31 01:22:49	                main:574]	:	INFO	:	Epoch 1950 | loss: 0.0311311 | val_loss: 0.0311429 | Time: 6246.75 ms
[2022-03-31 01:22:55	                main:574]	:	INFO	:	Epoch 1951 | loss: 0.0311314 | val_loss: 0.031143 | Time: 6298.36 ms
[2022-03-31 01:23:02	                main:574]	:	INFO	:	Epoch 1952 | loss: 0.0311307 | val_loss: 0.0311413 | Time: 6272.29 ms
[2022-03-31 01:23:08	                main:574]	:	INFO	:	Epoch 1953 | loss: 0.0311301 | val_loss: 0.0311412 | Time: 6267 ms
[2022-03-31 01:23:14	                main:574]	:	INFO	:	Epoch 1954 | loss: 0.0311295 | val_loss: 0.0311413 | Time: 6247.22 ms
[2022-03-31 01:23:20	                main:574]	:	INFO	:	Epoch 1955 | loss: 0.03113 | val_loss: 0.0311438 | Time: 6283.01 ms
[2022-03-31 01:23:27	                main:574]	:	INFO	:	Epoch 1956 | loss: 0.0311354 | val_loss: 0.0311474 | Time: 6274.74 ms
[2022-03-31 01:23:33	                main:574]	:	INFO	:	Epoch 1957 | loss: 0.0311371 | val_loss: 0.0311443 | Time: 6330.26 ms
[2022-03-31 01:23:39	                main:574]	:	INFO	:	Epoch 1958 | loss: 0.0311344 | val_loss: 0.0311407 | Time: 6285.65 ms
[2022-03-31 01:23:46	                main:574]	:	INFO	:	Epoch 1959 | loss: 0.031134 | val_loss: 0.0311425 | Time: 6270.51 ms
[2022-03-31 01:23:52	                main:574]	:	INFO	:	Epoch 1960 | loss: 0.031132 | val_loss: 0.0311399 | Time: 6285.98 ms
[2022-03-31 01:23:58	                main:574]	:	INFO	:	Epoch 1961 | loss: 0.0311318 | val_loss: 0.0311432 | Time: 6240.87 ms
[2022-03-31 01:24:04	                main:574]	:	INFO	:	Epoch 1962 | loss: 0.0311308 | val_loss: 0.0311408 | Time: 6270.9 ms
[2022-03-31 01:24:11	                main:574]	:	INFO	:	Epoch 1963 | loss: 0.0311314 | val_loss: 0.0311424 | Time: 6302.74 ms
[2022-03-31 01:24:17	                main:574]	:	INFO	:	Epoch 1964 | loss: 0.0311304 | val_loss: 0.0311401 | Time: 6261.67 ms
[2022-03-31 01:24:23	                main:574]	:	INFO	:	Epoch 1965 | loss: 0.0311316 | val_loss: 0.0311423 | Time: 6278.82 ms
[2022-03-31 01:24:30	                main:574]	:	INFO	:	Epoch 1966 | loss: 0.031133 | val_loss: 0.031142 | Time: 6291.37 ms
[2022-03-31 01:24:36	                main:574]	:	INFO	:	Epoch 1967 | loss: 0.0311296 | val_loss: 0.0311404 | Time: 6271.07 ms
[2022-03-31 01:24:42	                main:574]	:	INFO	:	Epoch 1968 | loss: 0.0311317 | val_loss: 0.0311415 | Time: 6320.28 ms
[2022-03-31 01:24:49	                main:574]	:	INFO	:	Epoch 1969 | loss: 0.0311324 | val_loss: 0.0311504 | Time: 6278.79 ms
[2022-03-31 01:24:55	                main:574]	:	INFO	:	Epoch 1970 | loss: 0.0311326 | val_loss: 0.0311407 | Time: 6273.99 ms
[2022-03-31 01:25:01	                main:574]	:	INFO	:	Epoch 1971 | loss: 0.0311282 | val_loss: 0.0311403 | Time: 6301.23 ms
[2022-03-31 01:25:07	                main:574]	:	INFO	:	Epoch 1972 | loss: 0.0311289 | val_loss: 0.0311467 | Time: 6297.19 ms
[2022-03-31 01:25:14	                main:574]	:	INFO	:	Epoch 1973 | loss: 0.0311298 | val_loss: 0.0311408 | Time: 6261.78 ms
[2022-03-31 01:25:20	                main:574]	:	INFO	:	Epoch 1974 | loss: 0.0311269 | val_loss: 0.0311411 | Time: 6301.96 ms
[2022-03-31 01:25:26	                main:574]	:	INFO	:	Epoch 1975 | loss: 0.0311292 | val_loss: 0.0311425 | Time: 6301.68 ms
[2022-03-31 01:25:33	                main:574]	:	INFO	:	Epoch 1976 | loss: 0.0311306 | val_loss: 0.0311401 | Time: 6304.8 ms
[2022-03-31 01:25:39	                main:574]	:	INFO	:	Epoch 1977 | loss: 0.0311292 | val_loss: 0.0311429 | Time: 6275.91 ms
[2022-03-31 01:25:45	                main:574]	:	INFO	:	Epoch 1978 | loss: 0.0311318 | val_loss: 0.031143 | Time: 6285.45 ms
[2022-03-31 01:25:51	                main:574]	:	INFO	:	Epoch 1979 | loss: 0.0311345 | val_loss: 0.0311431 | Time: 6249.23 ms
[2022-03-31 01:25:58	                main:574]	:	INFO	:	Epoch 1980 | loss: 0.0311337 | val_loss: 0.0311393 | Time: 6234.39 ms
[2022-03-31 01:26:04	                main:574]	:	INFO	:	Epoch 1981 | loss: 0.0311304 | val_loss: 0.0311417 | Time: 6303.46 ms
[2022-03-31 01:26:10	                main:574]	:	INFO	:	Epoch 1982 | loss: 0.0311306 | val_loss: 0.0311427 | Time: 6281.7 ms
[2022-03-31 01:26:17	                main:574]	:	INFO	:	Epoch 1983 | loss: 0.0311293 | val_loss: 0.0311406 | Time: 6275.63 ms
[2022-03-31 01:26:23	                main:574]	:	INFO	:	Epoch 1984 | loss: 0.0311287 | val_loss: 0.0311415 | Time: 6339.14 ms
[2022-03-31 01:26:29	                main:574]	:	INFO	:	Epoch 1985 | loss: 0.0311274 | val_loss: 0.0311401 | Time: 6273.99 ms
[2022-03-31 01:26:35	                main:574]	:	INFO	:	Epoch 1986 | loss: 0.0311266 | val_loss: 0.0311395 | Time: 6265.37 ms
[2022-03-31 01:26:42	                main:574]	:	INFO	:	Epoch 1987 | loss: 0.0311283 | val_loss: 0.0311389 | Time: 6265.08 ms
[2022-03-31 01:26:48	                main:574]	:	INFO	:	Epoch 1988 | loss: 0.031127 | val_loss: 0.0311405 | Time: 6241.9 ms
[2022-03-31 01:26:54	                main:574]	:	INFO	:	Epoch 1989 | loss: 0.0311259 | val_loss: 0.0311396 | Time: 6276.3 ms
[2022-03-31 01:27:01	                main:574]	:	INFO	:	Epoch 1990 | loss: 0.0311261 | val_loss: 0.0311396 | Time: 6291.47 ms
[2022-03-31 01:27:07	                main:574]	:	INFO	:	Epoch 1991 | loss: 0.0311281 | val_loss: 0.0311422 | Time: 6296.93 ms
[2022-03-31 01:27:13	                main:574]	:	INFO	:	Epoch 1992 | loss: 0.0311285 | val_loss: 0.0311413 | Time: 6320.09 ms
[2022-03-31 01:27:19	                main:574]	:	INFO	:	Epoch 1993 | loss: 0.0311308 | val_loss: 0.0311438 | Time: 6270.85 ms
[2022-03-31 01:27:26	                main:574]	:	INFO	:	Epoch 1994 | loss: 0.0311307 | val_loss: 0.0311413 | Time: 6258.63 ms
[2022-03-31 01:27:32	                main:574]	:	INFO	:	Epoch 1995 | loss: 0.0311289 | val_loss: 0.0311423 | Time: 6263.35 ms
[2022-03-31 01:27:38	                main:574]	:	INFO	:	Epoch 1996 | loss: 0.0311378 | val_loss: 0.0311501 | Time: 6273.57 ms
[2022-03-31 01:27:44	                main:574]	:	INFO	:	Epoch 1997 | loss: 0.031144 | val_loss: 0.0311491 | Time: 6253.88 ms
[2022-03-31 01:27:51	                main:574]	:	INFO	:	Epoch 1998 | loss: 0.0311484 | val_loss: 0.0311512 | Time: 6263.63 ms
[2022-03-31 01:27:57	                main:574]	:	INFO	:	Epoch 1999 | loss: 0.0311475 | val_loss: 0.0311552 | Time: 6315.65 ms
[2022-03-31 01:28:03	                main:574]	:	INFO	:	Epoch 2000 | loss: 0.0311581 | val_loss: 0.0311833 | Time: 6285.57 ms
[2022-03-31 01:28:10	                main:574]	:	INFO	:	Epoch 2001 | loss: 0.031191 | val_loss: 0.0311927 | Time: 6291.2 ms
[2022-03-31 01:28:16	                main:574]	:	INFO	:	Epoch 2002 | loss: 0.0311893 | val_loss: 0.0311857 | Time: 6268.26 ms
[2022-03-31 01:28:22	                main:574]	:	INFO	:	Epoch 2003 | loss: 0.0311841 | val_loss: 0.0311811 | Time: 6282.93 ms
[2022-03-31 01:28:28	                main:574]	:	INFO	:	Epoch 2004 | loss: 0.0311794 | val_loss: 0.0311767 | Time: 6253.27 ms
[2022-03-31 01:28:35	                main:574]	:	INFO	:	Epoch 2005 | loss: 0.0311765 | val_loss: 0.031179 | Time: 6256.57 ms
[2022-03-31 01:28:41	                main:574]	:	INFO	:	Epoch 2006 | loss: 0.0311727 | val_loss: 0.0311728 | Time: 6272.44 ms
[2022-03-31 01:28:47	                main:574]	:	INFO	:	Epoch 2007 | loss: 0.0311699 | val_loss: 0.0311715 | Time: 6272.37 ms
[2022-03-31 01:28:55	                main:574]	:	INFO	:	Epoch 2008 | loss: 0.0311694 | val_loss: 0.0311722 | Time: 8166.48 ms
[2022-03-31 01:29:02	                main:574]	:	INFO	:	Epoch 2009 | loss: 0.0311679 | val_loss: 0.0311721 | Time: 6308.06 ms
[2022-03-31 09:18:35	                main:574]	:	INFO	:	Epoch 2010 | loss: 0.0311659 | val_loss: 0.0311684 | Time: 2.81806e+07 ms
[2022-03-31 09:18:46	                main:574]	:	INFO	:	Epoch 2011 | loss: 0.031164 | val_loss: 0.0311684 | Time: 10259.1 ms
[2022-03-31 09:18:52	                main:574]	:	INFO	:	Epoch 2012 | loss: 0.0311624 | val_loss: 0.0311663 | Time: 6373.27 ms
Machine Learning Dataset Generator v9.75 (Windows/x64) (libTorch: release/1.6 GPU: NVIDIA GeForce 940MX)
[2022-03-31 09:19:33	                main:435]	:	INFO	:	Set logging level to 1
[2022-03-31 09:19:33	                main:441]	:	INFO	:	Running in BOINC Client mode
[2022-03-31 09:19:33	                main:444]	:	INFO	:	Resolving all filenames
[2022-03-31 09:19:33	                main:452]	:	INFO	:	Resolved: dataset.hdf5 => dataset.hdf5 (exists = 1)
[2022-03-31 09:19:33	                main:452]	:	INFO	:	Resolved: model.cfg => model.cfg (exists = 1)
[2022-03-31 09:19:33	                main:452]	:	INFO	:	Resolved: model-final.pt => model-final.pt (exists = 0)
[2022-03-31 09:19:33	                main:452]	:	INFO	:	Resolved: model-input.pt => model-input.pt (exists = 1)
[2022-03-31 09:19:33	                main:452]	:	INFO	:	Resolved: snapshot.pt => snapshot.pt (exists = 1)
[2022-03-31 09:19:33	                main:472]	:	INFO	:	Dataset filename: dataset.hdf5
[2022-03-31 09:19:33	                main:474]	:	INFO	:	Configuration: 
[2022-03-31 09:19:33	                main:475]	:	INFO	:	    Model type: GRU
[2022-03-31 09:19:33	                main:476]	:	INFO	:	    Validation Loss Threshold: 0.0001
[2022-03-31 09:19:33	                main:477]	:	INFO	:	    Max Epochs: 2048
[2022-03-31 09:19:33	                main:478]	:	INFO	:	    Batch Size: 128
[2022-03-31 09:19:33	                main:479]	:	INFO	:	    Learning Rate: 0.01
[2022-03-31 09:19:33	                main:480]	:	INFO	:	    Patience: 10
[2022-03-31 09:19:33	                main:481]	:	INFO	:	    Hidden Width: 12
[2022-03-31 09:19:33	                main:482]	:	INFO	:	    # Recurrent Layers: 4
[2022-03-31 09:19:33	                main:483]	:	INFO	:	    # Backend Layers: 4
[2022-03-31 09:19:33	                main:484]	:	INFO	:	    # Threads: 1
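Read together, the configuration block describes a small recurrent model: a GRU with 4 recurrent layers of hidden width 12, followed by 4 "backend" layers, trained with batch size 128, learning rate 0.01, a patience of 10, and a hard cap of 2048 epochs. The following is a hypothetical libTorch (C++) sketch of a network with that shape; the class name, the use of fully connected ReLU layers for the backend, and the input/output widths are assumptions, since the actual model code is not part of this output.

    // Hypothetical libTorch sketch of a model matching the logged settings:
    // GRU, hidden width 12, 4 recurrent layers, 4 backend layers. The backend
    // is assumed to be fully connected layers with ReLU; the input/output
    // widths (in_width, out_width) are placeholders.
    #include <torch/torch.h>

    struct GRUNetImpl : torch::nn::Module {
        torch::nn::GRU gru{nullptr};
        torch::nn::Sequential backend;

        GRUNetImpl(int64_t in_width, int64_t hidden, int64_t rec_layers,
                   int64_t backend_layers, int64_t out_width) {
            gru = register_module("gru",
                torch::nn::GRU(torch::nn::GRUOptions(in_width, hidden)
                                   .num_layers(rec_layers)
                                   .batch_first(true)));
            for (int64_t i = 0; i + 1 < backend_layers; ++i) {
                backend->push_back(torch::nn::Linear(hidden, hidden));
                backend->push_back(torch::nn::ReLU());
            }
            backend->push_back(torch::nn::Linear(hidden, out_width));
            register_module("backend", backend);
        }

        torch::Tensor forward(torch::Tensor x) {
            // x: [batch, seq_len, in_width]; feed the GRU output at the
            // final time step into the backend layers.
            auto seq_out = std::get<0>(gru->forward(x));
            auto last = seq_out.select(/*dim=*/1, seq_out.size(1) - 1);
            return backend->forward(last);
        }
    };
    TORCH_MODULE(GRUNet);   // defines the GRUNet module holder

A plausible construction for this workunit would then be something like GRUNet net(in_width, 12, 4, 4, out_width), with in_width and out_width determined by the dataset; those widths are not recorded in this log.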
[2022-03-31 09:19:33	                main:486]	:	INFO	:	Preparing Dataset
[2022-03-31 09:19:33	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xt from dataset.hdf5 into memory
[2022-03-31 09:19:34	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yt from dataset.hdf5 into memory
[2022-03-31 09:19:37	                load:106]	:	INFO	:	Successfully loaded dataset of 2048 examples into memory.
[2022-03-31 09:19:37	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Xv from dataset.hdf5 into memory
[2022-03-31 09:19:37	load_hdf5_ds_into_tensor:28]	:	INFO	:	Loading Dataset /Yv from dataset.hdf5 into memory
[2022-03-31 09:19:37	                load:106]	:	INFO	:	Successfully loaded dataset of 512 examples into memory.
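The load_hdf5_ds_into_tensor lines show four HDF5 datasets being read from dataset.hdf5: /Xt and /Yt (2048 training examples) and /Xv and /Yv (512 validation examples). A loader for that step might look roughly like the following, using the HDF5 C API together with libTorch; the helper name mirrors the log, but the float32 element type and the omission of error handling are assumptions.

    // Hypothetical reconstruction of the log's load_hdf5_ds_into_tensor step:
    // read one named dataset from an HDF5 file into an in-memory float tensor.
    // Assumes float32 data and omits error handling for brevity.
    #include <hdf5.h>
    #include <torch/torch.h>
    #include <string>
    #include <vector>

    torch::Tensor load_hdf5_ds_into_tensor(const std::string& file,
                                           const std::string& dataset) {
        hid_t f = H5Fopen(file.c_str(), H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t d = H5Dopen2(f, dataset.c_str(), H5P_DEFAULT);
        hid_t space = H5Dget_space(d);

        int ndims = H5Sget_simple_extent_ndims(space);
        std::vector<hsize_t> dims(ndims);
        H5Sget_simple_extent_dims(space, dims.data(), nullptr);

        std::vector<int64_t> shape(dims.begin(), dims.end());
        torch::Tensor t = torch::empty(shape, torch::kFloat32);
        H5Dread(d, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT,
                t.data_ptr<float>());

        H5Sclose(space);
        H5Dclose(d);
        H5Fclose(f);
        return t;
    }

    // Usage matching the log: Xt/Yt for training, Xv/Yv for validation, e.g.
    // auto Xt = load_hdf5_ds_into_tensor("dataset.hdf5", "/Xt");
    // auto Yt = load_hdf5_ds_into_tensor("dataset.hdf5", "/Yt");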
[2022-03-31 09:19:37	                main:494]	:	INFO	:	Creating Model
[2022-03-31 09:19:37	                main:507]	:	INFO	:	Preparing config file
[2022-03-31 09:19:37	                main:511]	:	INFO	:	Found checkpoint, attempting to load... 
[2022-03-31 09:19:37	                main:512]	:	INFO	:	Loading config
[2022-03-31 09:19:37	                main:514]	:	INFO	:	Loading state
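"Found checkpoint, attempting to load" indicates the task is resuming from the snapshot.pt written by an earlier run of this workunit (the first epoch after the restart, 2005, repeats epochs already logged before, so the snapshot evidently lags the log by a few epochs). A resume step of that kind could be sketched as follows; whether the snapshot also carries optimizer state is an assumption.

    // Hypothetical snapshot-resume sketch: restore model parameters from
    // snapshot.pt if it exists, otherwise start fresh. Reuses the GRUNet
    // holder from the earlier sketch; optimizer handling is a guess.
    #include <torch/torch.h>
    #include <fstream>
    #include <string>

    void maybe_resume(GRUNet& net, const std::string& snapshot_path) {
        std::ifstream probe(snapshot_path, std::ios::binary);
        if (!probe.good()) {
            return;                         // no checkpoint: fresh start
        }
        torch::load(net, snapshot_path);    // restore module parameters
        // If optimizer state were stored in a separate file (assumption):
        // torch::load(optimizer, "snapshot-optimizer.pt");
    }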
[2022-03-31 09:19:38	                main:559]	:	INFO	:	Loading DataLoader into Memory
[2022-03-31 09:19:38	                main:562]	:	INFO	:	Starting Training
[2022-03-31 09:19:44	                main:574]	:	INFO	:	Epoch 2005 | loss: 0.0311986 | val_loss: 0.0311725 | Time: 6369.19 ms
[2022-03-31 09:19:51	                main:574]	:	INFO	:	Epoch 2006 | loss: 0.031167 | val_loss: 0.0311647 | Time: 6204.05 ms
[2022-03-31 09:19:57	                main:574]	:	INFO	:	Epoch 2007 | loss: 0.0311607 | val_loss: 0.0311602 | Time: 6226.91 ms
[2022-03-31 09:20:03	                main:574]	:	INFO	:	Epoch 2008 | loss: 0.0311553 | val_loss: 0.0311591 | Time: 6259.33 ms
[2022-03-31 09:20:09	                main:574]	:	INFO	:	Epoch 2009 | loss: 0.0311526 | val_loss: 0.0311569 | Time: 6289.05 ms
[2022-03-31 09:20:16	                main:574]	:	INFO	:	Epoch 2010 | loss: 0.0311539 | val_loss: 0.0311578 | Time: 6286.85 ms
[2022-03-31 09:20:22	                main:574]	:	INFO	:	Epoch 2011 | loss: 0.0311631 | val_loss: 0.0311623 | Time: 6294.57 ms
[2022-03-31 09:20:28	                main:574]	:	INFO	:	Epoch 2012 | loss: 0.0311639 | val_loss: 0.0311614 | Time: 6291.51 ms
[2022-03-31 09:20:35	                main:574]	:	INFO	:	Epoch 2013 | loss: 0.0311565 | val_loss: 0.0311574 | Time: 6267.15 ms
[2022-03-31 09:20:41	                main:574]	:	INFO	:	Epoch 2014 | loss: 0.0311627 | val_loss: 0.031163 | Time: 6267.32 ms
[2022-03-31 09:20:47	                main:574]	:	INFO	:	Epoch 2015 | loss: 0.0311605 | val_loss: 0.0311607 | Time: 6266.87 ms
[2022-03-31 09:20:53	                main:574]	:	INFO	:	Epoch 2016 | loss: 0.0311569 | val_loss: 0.031159 | Time: 6262.14 ms
[2022-03-31 09:21:00	                main:574]	:	INFO	:	Epoch 2017 | loss: 0.0311599 | val_loss: 0.0311624 | Time: 6294.2 ms
[2022-03-31 09:21:06	                main:574]	:	INFO	:	Epoch 2018 | loss: 0.0311622 | val_loss: 0.031162 | Time: 6282.19 ms
[2022-03-31 09:21:12	                main:574]	:	INFO	:	Epoch 2019 | loss: 0.0311604 | val_loss: 0.0311603 | Time: 6266.93 ms
[2022-03-31 09:21:18	                main:574]	:	INFO	:	Epoch 2020 | loss: 0.0311575 | val_loss: 0.0311586 | Time: 6269.21 ms
[2022-03-31 09:21:25	                main:574]	:	INFO	:	Epoch 2021 | loss: 0.0311562 | val_loss: 0.0311573 | Time: 6261.2 ms
[2022-03-31 09:21:31	                main:574]	:	INFO	:	Epoch 2022 | loss: 0.0311557 | val_loss: 0.0311575 | Time: 6319.51 ms
[2022-03-31 09:21:37	                main:574]	:	INFO	:	Epoch 2023 | loss: 0.0311547 | val_loss: 0.031156 | Time: 6343.76 ms
[2022-03-31 09:21:44	                main:574]	:	INFO	:	Epoch 2024 | loss: 0.0311532 | val_loss: 0.031156 | Time: 6921.28 ms
[2022-03-31 09:21:52	                main:574]	:	INFO	:	Epoch 2025 | loss: 0.0311541 | val_loss: 0.0311592 | Time: 8031.86 ms
[2022-03-31 09:21:59	                main:574]	:	INFO	:	Epoch 2026 | loss: 0.0311548 | val_loss: 0.0311564 | Time: 6291.93 ms
[2022-03-31 09:22:05	                main:574]	:	INFO	:	Epoch 2027 | loss: 0.0311538 | val_loss: 0.0311556 | Time: 6278.1 ms
[2022-03-31 09:22:11	                main:574]	:	INFO	:	Epoch 2028 | loss: 0.031153 | val_loss: 0.0311554 | Time: 6283.49 ms
[2022-03-31 09:22:18	                main:574]	:	INFO	:	Epoch 2029 | loss: 0.0311512 | val_loss: 0.0311529 | Time: 6290.97 ms
[2022-03-31 09:22:24	                main:574]	:	INFO	:	Epoch 2030 | loss: 0.0311501 | val_loss: 0.0311542 | Time: 6262.09 ms
[2022-03-31 09:22:30	                main:574]	:	INFO	:	Epoch 2031 | loss: 0.0311497 | val_loss: 0.0311526 | Time: 6266.2 ms
[2022-03-31 09:22:36	                main:574]	:	INFO	:	Epoch 2032 | loss: 0.0311477 | val_loss: 0.0311524 | Time: 6258.69 ms
[2022-03-31 09:22:43	                main:574]	:	INFO	:	Epoch 2033 | loss: 0.0311472 | val_loss: 0.0311535 | Time: 6261.07 ms
[2022-03-31 09:22:49	                main:574]	:	INFO	:	Epoch 2034 | loss: 0.0311469 | val_loss: 0.0311523 | Time: 6258.68 ms
[2022-03-31 09:22:55	                main:574]	:	INFO	:	Epoch 2035 | loss: 0.031146 | val_loss: 0.0311515 | Time: 6270.01 ms
[2022-03-31 09:23:01	                main:574]	:	INFO	:	Epoch 2036 | loss: 0.0311455 | val_loss: 0.031151 | Time: 6277.75 ms
[2022-03-31 09:23:08	                main:574]	:	INFO	:	Epoch 2037 | loss: 0.031145 | val_loss: 0.0311508 | Time: 6266.62 ms
[2022-03-31 09:23:14	                main:574]	:	INFO	:	Epoch 2038 | loss: 0.0311433 | val_loss: 0.0311509 | Time: 6329.22 ms
[2022-03-31 09:23:20	                main:574]	:	INFO	:	Epoch 2039 | loss: 0.0311446 | val_loss: 0.0311522 | Time: 6289.96 ms
[2022-03-31 09:23:27	                main:574]	:	INFO	:	Epoch 2040 | loss: 0.0311441 | val_loss: 0.0311531 | Time: 6264.39 ms
[2022-03-31 09:23:33	                main:574]	:	INFO	:	Epoch 2041 | loss: 0.031143 | val_loss: 0.0311494 | Time: 6289.17 ms
[2022-03-31 09:23:39	                main:574]	:	INFO	:	Epoch 2042 | loss: 0.0311422 | val_loss: 0.0311493 | Time: 6255.54 ms
[2022-03-31 09:23:45	                main:574]	:	INFO	:	Epoch 2043 | loss: 0.0311415 | val_loss: 0.031149 | Time: 6268.06 ms
[2022-03-31 09:23:52	                main:574]	:	INFO	:	Epoch 2044 | loss: 0.0311411 | val_loss: 0.0311498 | Time: 6294.47 ms
[2022-03-31 09:23:58	                main:574]	:	INFO	:	Epoch 2045 | loss: 0.0311409 | val_loss: 0.031149 | Time: 6287.63 ms
[2022-03-31 09:24:04	                main:574]	:	INFO	:	Epoch 2046 | loss: 0.0311413 | val_loss: 0.0311501 | Time: 6285.26 ms
[2022-03-31 09:24:11	                main:574]	:	INFO	:	Epoch 2047 | loss: 0.0311415 | val_loss: 0.0311494 | Time: 6300.07 ms
[2022-03-31 09:24:17	                main:574]	:	INFO	:	Epoch 2048 | loss: 0.0311445 | val_loss: 0.0311525 | Time: 6247.81 ms
[2022-03-31 09:24:17	                main:597]	:	INFO	:	Saving trained model to model-final.pt, val_loss 0.0311525
[2022-03-31 09:24:17	                main:603]	:	INFO	:	Saving end state to config to file
[2022-03-31 09:24:17	                main:608]	:	INFO	:	Success, exiting..
09:24:17 (10404): called boinc_finish(0)
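Training then continues until the Max Epochs cap of 2048 is reached, so the run stops at the configured cap rather than via early stopping, after which the model is written to model-final.pt and the task exits through boinc_finish(0). A possible shape for that outer loop is sketched below; it reuses the GRUNet and loader sketches above, and the optimizer (Adam), the MSE loss, and the exact meaning of the threshold/patience pair are assumptions rather than facts from the log.

    // Hypothetical outer training loop consistent with the logged settings:
    // batch size 128, learning rate 0.01, patience 10, validation-loss
    // improvement threshold 1e-4, hard cap of 2048 epochs, and a final save
    // to model-final.pt. Optimizer, loss, and early-stopping rule are guesses.
    #include <torch/torch.h>
    #include <algorithm>
    #include <cstdio>
    #include <limits>

    void train(GRUNet& net, torch::Tensor Xt, torch::Tensor Yt,
               torch::Tensor Xv, torch::Tensor Yv) {
        const int64_t max_epochs = 2048, batch = 128, patience = 10;
        const double threshold = 1e-4;
        torch::optim::Adam opt(net->parameters(),
                               torch::optim::AdamOptions(0.01));

        double best_val = std::numeric_limits<double>::infinity();
        int64_t epochs_since_best = 0;

        for (int64_t epoch = 1; epoch <= max_epochs; ++epoch) {
            net->train();
            double loss_sum = 0.0;
            int64_t batches = 0;
            for (int64_t i = 0; i < Xt.size(0); i += batch, ++batches) {
                int64_t n = std::min(batch, Xt.size(0) - i);
                auto xb = Xt.narrow(0, i, n);
                auto yb = Yt.narrow(0, i, n);
                opt.zero_grad();
                auto loss = torch::mse_loss(net->forward(xb), yb);
                loss.backward();
                opt.step();
                loss_sum += loss.item<double>();
            }

            net->eval();
            torch::NoGradGuard no_grad;
            double val = torch::mse_loss(net->forward(Xv), Yv).item<double>();
            std::printf("Epoch %lld | loss: %g | val_loss: %g\n",
                        static_cast<long long>(epoch),
                        loss_sum / batches, val);

            // Assumed early-stopping rule: stop once val_loss has not improved
            // by at least `threshold` for `patience` consecutive epochs.
            if (best_val - val > threshold) {
                best_val = val;
                epochs_since_best = 0;
            } else if (++epochs_since_best >= patience) {
                break;
            }
        }
        torch::save(net, "model-final.pt");   // the log's final save step
    }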

</stderr_txt>
]]>


©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)