Message boards : Number crunching : Curious
If your work unit is marked invalid, is there any information that could show for sure that it was invalid?
ID: 2356
We use "redundancy" (two results from different computers must be exactly the same) in order to check if a workunit is "valid" (successful computation). If not a third copy of the workunit is sent to another computer. At the end, if two results are identical they are declared "OK" and all the others are marked "invalid". | |
ID: 2357 · Reply Quote | |
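As an illustration of the quorum idea described above, here is a minimal sketch in Python; the validate_workunit helper and the host names are made up for this example and are not the project's actual validator:

```python
# Minimal sketch of quorum-based validation (hypothetical, not TN-Grid's validator).
# Results are compared pairwise; once two outputs match exactly, that output becomes
# the canonical result, matching results are valid, and all others are invalid.

def validate_workunit(results):
    """results: list of (result_id, output_bytes) returned by different hosts."""
    for i in range(len(results)):
        for j in range(i + 1, len(results)):
            if results[i][1] == results[j][1]:            # two identical outputs: quorum reached
                canonical = results[i][1]
                valid = {rid for rid, out in results if out == canonical}
                invalid = {rid for rid, _ in results} - valid
                return canonical, valid, invalid
    return None, set(), set()                             # no quorum yet: send another copy

canonical, valid, invalid = validate_workunit([
    ("host_a", b"0.123 0.456"),
    ("host_b", b"0.123 0.456"),
    ("host_c", b"0.123 0.999"),
])
print(valid, invalid)   # valid: host_a, host_b; invalid: host_c
```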
I am interested to know whether or not the current project that is running Homo Sapiens (OneGenE - FANTOM-1) has an estimated completion date?
ID: 2612
From the science stats page, H. sapiens (α=0.05, FANTOM-1):

| # genes/isoforms | Queued | Executed       | Last 10 days |
|------------------|--------|----------------|--------------|
| 87554            | 61490  | 59986 (68.51%) | 88.80/day    |

This indicates there are (87554 - 59986) / 88.80 = 310.45 days remaining, which gives February 7th, 2023 as the estimated completion date at the current rate.
ID: 2613
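Spelling the same arithmetic out (the stats date of 3 April 2022 is an assumption inferred from the quoted figures; it is what makes the estimate land on 7 February 2023):

```python
# Recomputing the ETA from the science stats figures quoted above.
from datetime import date, timedelta

total_genes  = 87554            # genes/isoforms
executed     = 59986            # executed so far
rate_per_day = 88.80            # "Last 10 days" average

days_remaining = (total_genes - executed) / rate_per_day
print(round(days_remaining, 2))                            # 310.45

stats_date = date(2022, 4, 3)                              # assumed date the stats were read
print(stats_date + timedelta(days=round(days_remaining)))  # 2023-02-07
```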
Thank you. I was aware that was there; I just wasn't sure how to work out the end date.
ID: 2614
> I am interested to know whether or not the current project that is running Homo Sapiens (OneGenE - FANTOM-1) has an estimated completion date?

Valter just added an "ETA" column to the Science Status page :) https://gene.disi.unitn.it/test/gene_science.php
ID: 2632
Thank you, I am aware of this.
ID: 2634
Recently the "Last 10 Days" value, not sure what the units are, maybe genes/day, has dropped from 88 to 66. Server status indicates there's still a lot computers working here. Time to complete WUs is still around 4 hours. I've long thought it'd be nice to have a long term chart to understand the change in genes/day. | |
ID: 2694 · Reply Quote | |
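There is no such chart on the project site; a small script run once a day (for example from cron) could build the data for one by appending the "Last 10 days" figure to a CSV. The URL and the regular expression below are assumptions about the science status page layout, not a documented API:

```python
# Hypothetical daily logger for the "Last 10 days" rate on the science status page.
# The URL and regex are guesses about the page layout; adjust them before relying on it.
import csv
import re
import urllib.request
from datetime import date

URL = "https://gene.disi.unitn.it/test/gene_science.php"     # assumed stats page

html = urllib.request.urlopen(URL, timeout=30).read().decode("utf-8", "replace")
match = re.search(r"([\d.]+)\s*/\s*day", html)               # assumed "NN.NN/day" text
if match:
    with open("genes_per_day.csv", "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), match.group(1)])
```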
Maybe this link will help you: https://www.boincstats.com/stats/150/project/detail/overview
ID: 2695
I haven't had any in a while, but occasionally I will receive a task that runs for under 1 hour (Ryzen 9 3900X). I am guessing these are just shorter tasks. Has anybody else noticed the shorter tasks?
ID: 2696
Sure, I've seen a number that ran around an hour. Right now I have plenty that are running around 2.5 hours.
ID: 2697
> I haven't had any in a while, but occasionally I will receive a task that runs for under 1 hour (Ryzen 9 3900X). I am guessing these are just shorter tasks. Has anybody else noticed the shorter tasks?

The last workunit of every gene expansion batch is shorter than the others; it is the one containing "wu-294" in its name.
ID: 2698
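For anyone who wants to flag these short batch-tail tasks in a list of workunit names, a trivial name check is enough. The helper and the sample names below are made up for illustration, following the naming pattern seen later in this thread:

```python
# Flag the short final task of a gene expansion batch by its "wu-294" name component.
def is_short_batch_tail(wu_name: str) -> bool:
    return "_wu-294_" in wu_name or wu_name.endswith("wu-294")

print(is_short_batch_tail("208612_Hs_T059718-LRRC46_wu-294_1654480886442_1_0"))  # True
print(is_short_batch_tail("208612_Hs_T059718-LRRC46_wu-122_1654480886442_1_0"))  # False
```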
> Sure, I've seen a number that ran around an hour. Right now I have plenty that are running around 2.5 hours.

On the server status page there is a higher than usual number of "tasks in progress". I will check tomorrow, when back in the office, if there is something strange on the server.
ID: 2699
> I haven't had any in a while, but occasionally I will receive a task that runs for under 1 hour (Ryzen 9 3900X). I am guessing these are just shorter tasks. Has anybody else noticed the shorter tasks?

Bingo! I perused my list of running WUs and the one that's running now will finish in under an hour.
ID: 2700
> On the server status page there is a higher than usual number of "tasks in progress". I will check tomorrow, when back in the office, if there is something strange on the server.

That does seem high. Does it include the Ready To Start WUs as well? Every few days my Ready To Start WUs accumulate to almost 300 and I switch preferences to Resource Zero Mode. TN-GRID works really well in RZM and never seems to give me more than one extra WU waiting in the wings. But at Resource 100% it does not seem to honor the BOINC preference for how much work to buffer. All my computers are set to either 0.5 or 1.0 days, but you send more than that. I believe some projects limit the maximum number of WUs to twice the number of CPU threads, and GPUGrid limits it to twice the number of GPUs. It's been a good while since I've noticed the server running out of available WUs. Nice work tuning it up.
ID: 2701
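For reference, the stock BOINC server scheduler does offer per-host limits along these lines in the project's config.xml. The values below are illustrative only; this is not TN-Grid's actual configuration, and whether to enable it is up to the admins:

```xml
<!-- Sketch of the stock BOINC scheduler options that cap in-progress work per host.
     Illustrative values only; not TN-Grid's actual config.xml. -->
<config>
  <max_wus_in_progress>2</max_wus_in_progress>          <!-- roughly 2 CPU jobs per core in progress -->
  <max_wus_in_progress_gpu>2</max_wus_in_progress_gpu>  <!-- roughly 2 GPU jobs per GPU in progress -->
</config>
```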
I have two finished tasks that I haven't been able to get uploaded to the server all day for some reason.
ID: 2703
> I have two finished tasks that I haven't been able to get uploaded to the server all day for some reason.

I have one too. It is at 100% and then:

Mon 06 Jun 2022 15:29:49 BST | TN-Grid Platform | Temporarily failed upload of 208612_Hs_T059718-LRRC46_wu-122_1654480886442_1_0: transient HTTP error

Interestingly, my tasks page is showing only six tasks waiting for validation. The ones on there that were validated seem to have gone. Also, it is only showing one task in progress as opposed to the 8 that are actually running.
ID: 2704
OK, I stopped and restarted the server, checked the databases with mysqlcheck, did a consistency check on the file system, deleted a bunch of zero-byte files inside the upload directory, and looked for weird errors inside the logs.
ID: 2705
> OK, I stopped and restarted the server, checked the databases with mysqlcheck, did a consistency check on the file system, deleted a bunch of zero-byte files inside the upload directory, and looked for weird errors inside the logs.

I am still unable to upload my two stalled tasks. I set http_xfer_debug and got this:

Mon 06 Jun 2022 09:42:57 AM PDT | TN-Grid Platform | Temporarily failed upload of 208508_Hs_T004877-MARCH8_wu-86_1654367869573_1_0: transient HTTP error
Mon 06 Jun 2022 09:42:57 AM PDT | TN-Grid Platform | Backing off 05:14:00 on upload of 208508_Hs_T004877-MARCH8_wu-86_1654367869573_1_0
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 2415 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 2542 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 2808 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 3113 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 2888 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | [http_xfer] [ID#0] HTTP: wrote 1278 bytes
Mon 06 Jun 2022 09:42:58 AM PDT | | Internet access OK - project servers may be temporarily down.

And with http_debug I got this:

Mon 06 Jun 2022 09:47:59 AM PDT | TN-Grid Platform | [http] [ID#4397] Info: Recv failure: Connection reset by peer
Mon 06 Jun 2022 09:47:59 AM PDT | TN-Grid Platform | [http] [ID#4397] Info: Closing connection 9062
Mon 06 Jun 2022 09:47:59 AM PDT | TN-Grid Platform | [http] HTTP error: Failure when receiving data from the peer
Mon 06 Jun 2022 09:47:59 AM PDT | | Project communication failed: attempting access to reference site
Mon 06 Jun 2022 09:47:59 AM PDT | | [http] HTTP_OP::init_get(): https://www.google.com/
Mon 06 Jun 2022 09:47:59 AM PDT | TN-Grid Platform | Temporarily failed upload of 208508_Hs_T004877-MARCH8_wu-86_1654367869573_1_0: transient HTTP error
Mon 06 Jun 2022 09:47:59 AM PDT | TN-Grid Platform | Backing off 05:16:40 on upload of 208508_Hs_T004877-MARCH8_wu-86_1654367869573_1_0
ID: 2706
Server side, I got something like:

(apache cgi:error) (104)Connection reset by peer: [client x.x.x.x:40986] AH01225: Error reading request entity data

BTW, I also have one task that is unable to upload... Can't figure out why...
ID: 2707