[00:11:40] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:25:10] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.027 seconds
[00:45:59] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[00:45:59] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[00:45:59] PROBLEM - Puppet freshness on ms-be1009 is CRITICAL: Puppet has not run in the last 10 hours
[00:57:05] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:07:26] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 6.262 seconds
[01:38:11] PROBLEM - MySQL Replication Heartbeat on db1035 is CRITICAL: CRIT replication delay 186 seconds
[01:38:29] PROBLEM - MySQL Slave Delay on db1035 is CRITICAL: CRIT replication delay 188 seconds
[01:40:08] PROBLEM - MySQL Slave Delay on db1025 is CRITICAL: CRIT replication delay 296 seconds
[01:40:35] PROBLEM - MySQL Slave Delay on storage3 is CRITICAL: CRIT replication delay 229 seconds
[01:41:56] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:42:50] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 184 seconds
[01:42:59] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 184 seconds
[01:45:41] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 184 seconds
[01:46:08] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 183 seconds
[01:47:38] PROBLEM - Misc_Db_Lag on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 650s
[01:48:41] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 194 seconds
[01:48:59] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 197 seconds
[01:49:16] @replag plwiki
[01:49:16] saper: [plwiki: s2] db52: 0s, db53: 0s, db54: 0s, db57: 0s
[01:52:17] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 5.408 seconds
[01:54:50] RECOVERY - Misc_Db_Lag on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 1s
[01:54:50] RECOVERY - MySQL Slave Delay on db1025 is OK: OK replication delay 1 seconds
[01:55:08] RECOVERY - MySQL Slave Delay on storage3 is OK: OK replication delay 6 seconds
[01:56:12] PROBLEM - MySQL Slave Delay on db1020 is CRITICAL: CRIT replication delay 199 seconds
[01:56:20] PROBLEM - MySQL Replication Heartbeat on db1020 is CRITICAL: CRIT replication delay 203 seconds
[01:56:47] RECOVERY - MySQL Replication Heartbeat on db1035 is OK: OK replication delay 21 seconds
[01:57:41] RECOVERY - MySQL Slave Delay on db1035 is OK: OK replication delay 0 seconds
[02:26:04] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:37:37] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.030 seconds
[04:37:53] PROBLEM - check_job_queue on spence is CRITICAL: JOBQUEUE CRITICAL - the following wikis have more than 9,999 jobs: , zhwiki (10348)
[04:53:20] PROBLEM - check_job_queue on neon is CRITICAL: JOBQUEUE CRITICAL - the following wikis have more than 9,999 jobs: , zhwiki (10078)
[04:59:47] RECOVERY - MySQL Replication Heartbeat on db1020 is OK: OK replication delay 28 seconds
[04:59:56] RECOVERY - MySQL Slave Delay on db1020 is OK: OK replication delay 0 seconds
[05:03:50] RECOVERY - check_job_queue on spence is OK: JOBQUEUE OK - all job queues below 10,000
[05:08:47] RECOVERY - check_job_queue on neon is OK: JOBQUEUE OK - all job queues below 10,000
[05:45:32] PROBLEM - check_job_queue on spence is CRITICAL: JOBQUEUE CRITICAL - the following wikis have more than 9,999 jobs: , zhwiki (11449)
[05:48:14] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds
[05:48:41] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds
[05:50:29] PROBLEM - check_job_queue on neon is CRITICAL: JOBQUEUE CRITICAL - the following wikis have more than 9,999 jobs: , zhwiki (11360)
[06:46:11] PROBLEM - Puppet freshness on srv281 is CRITICAL: Puppet has not run in the last 10 hours
[07:17:42] PROBLEM - Puppet freshness on labstore1 is CRITICAL: Puppet has not run in the last 10 hours
[07:30:45] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[07:54:45] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[08:34:04] PROBLEM - Puppet freshness on ms-be1003 is CRITICAL: Puppet has not run in the last 10 hours
[08:49:04] PROBLEM - Puppet freshness on virt1001 is CRITICAL: Puppet has not run in the last 10 hours
[09:02:10] PROBLEM - Puppet freshness on virt1002 is CRITICAL: Puppet has not run in the last 10 hours
[09:17:28] PROBLEM - Puppet freshness on virt1003 is CRITICAL: Puppet has not run in the last 10 hours
[10:47:30] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[10:47:30] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[10:47:30] PROBLEM - Puppet freshness on ms-be1009 is CRITICAL: Puppet has not run in the last 10 hours
[12:35:26] PROBLEM - Puppet freshness on bayes is CRITICAL: Puppet has not run in the last 10 hours
[12:37:23] PROBLEM - Puppet freshness on niobium is CRITICAL: Puppet has not run in the last 10 hours
[12:37:23] PROBLEM - Puppet freshness on srv242 is CRITICAL: Puppet has not run in the last 10 hours
[12:38:26] PROBLEM - Puppet freshness on srv238 is CRITICAL: Puppet has not run in the last 10 hours
[12:38:26] PROBLEM - Puppet freshness on mw27 is CRITICAL: Puppet has not run in the last 10 hours
[12:38:26] PROBLEM - Puppet freshness on srv190 is CRITICAL: Puppet has not run in the last 10 hours
[15:21:38] Are there any guidelines for wiki logos? Idwiki changed theirs using CSS
[15:22:11] Now I want to move that back to InitialiseSettings.php so that the CSS can be cleaned up
[15:22:32] and used on other wikis (which isn't recommended, but used over here)
[16:29:02] New patchset: Alex Monk; "(bug 39054) Close suwikibooks" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/17698
[16:47:12] PROBLEM - Puppet freshness on srv281 is CRITICAL: Puppet has not run in the last 10 hours
[17:18:25] PROBLEM - Puppet freshness on labstore1 is CRITICAL: Puppet has not run in the last 10 hours
[17:31:19] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[17:55:19] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[17:58:28] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:02:40] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 2.279 seconds
[18:12:32] hoo|away: idk if there's a policy but i made a related change recently:
[18:12:36] !g Ide0cf4cb8238749be | hoo|away
[18:12:36] hoo|away: https://gerrit.wikimedia.org/r/#q,Ide0cf4cb8238749be,n,z
[18:13:57] jeremyb: mhm
[18:14:22] hoo: of course it's a little different when the old one is 404... ;P
[18:14:30] on idwiki they have uploaded the same file as on commons locally, to be able to change it at times (I guess they don't have a commons admin)
[18:14:40] Probably, yes :D
[18:15:20] hoo: so have them do a request to change to a local URL then?
[18:17:05] jeremyb: I'll do... do I have to open a bug then to fulfill the formalities or can I directly commit the change to gerrit?
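For context on the change hoo is describing: per-wiki logos are configured in wmf-config/InitialiseSettings.php as an override keyed by database name, so moving the idwiki logo out of MediaWiki:Common.css would amount to an entry roughly like the sketch below. Only the array layout is meant to be accurate here; the default path and the idwiki value are placeholders, not the actual production settings or the actual change under discussion.

    // Minimal sketch of an InitialiseSettings.php-style per-wiki override.
    // Paths are placeholders; the real change would point at the locally
    // uploaded idwiki logo file instead of swapping the image in via CSS.
    'wgLogo' => array(
        'default' => '/images/project-logo.png',   // placeholder default
        'idwiki'  => '/images/idwiki-logo.png',     // hypothetical per-wiki value
    ),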
[18:17:32] hoo: idk... first just get a consensus link ;)
[18:17:56] hoo: and link to the common.css diffs
[18:18:34] damn, I can't find the Village pump there :P
[18:18:39] i imagine that having no edit warring over common.css will be a good sign. but it also needs public announcement. some people maybe just didn't notice
[18:19:08] hoo: https://id.wikipedia.org/wiki/Wikipedia:Kedutaan
[18:19:39] Thanks ... :P
[18:20:16] hoo: let me know how it works out
[18:33:08] jeremyb: https://id.wikipedia.org/wiki/Pembicaraan_Wikipedia:Kedutaan#Logo_of_your_Wiki
[18:33:27] I'll keep an eye on it
[18:35:03] hoo: move=sysop?
[18:35:14] i guess most places filemover isn't so easy to get anyway
[18:35:18] PROBLEM - Puppet freshness on ms-be1003 is CRITICAL: Puppet has not run in the last 10 hours
[18:35:23] jeremyb: upload=sysop
[18:35:29] hoo: and move?
[18:35:30] isn't that what I wrote?!
[18:35:38] no, you wrote upload
[18:35:41] Only sysops can move files there, I guess
[18:35:42] i'm saying move too
[18:35:50] maybe, maybe not ;)
[18:36:32] Only sysops can... fixed anyway ;)
[18:37:15] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:46:06] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.032 seconds
[18:50:18] PROBLEM - Puppet freshness on virt1001 is CRITICAL: Puppet has not run in the last 10 hours
[19:03:21] PROBLEM - Puppet freshness on virt1002 is CRITICAL: Puppet has not run in the last 10 hours
[19:18:21] PROBLEM - Puppet freshness on virt1003 is CRITICAL: Puppet has not run in the last 10 hours
[19:18:21] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:30:12] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 8.491 seconds
[20:04:30] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:11:30] https://it.wikipedia.org/wiki/Speciale:Contributi/208.80.154.54
[20:11:36] spam from a Wikimedia IP!
[20:11:39] !dev
[20:12:13] ouch
[20:15:49] Vito_away, I don't think that's a standard stalkword...
[20:16:01] I think so
[20:16:03] You need to talk to a shell/root user anyway, not a developer
[20:16:10] anyway seems to be a serious problem
[20:16:25] I blocked the address globally for 3 days
[20:16:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.026 seconds
[20:16:38] but it must be investigated by *someone*
[20:16:52] um
[20:16:54] It's usually easily fixed
[20:16:56] let me check
[20:16:58] You blocked a Wikimedia address globally?
[20:17:04] From Wikimedia?
[20:17:25] yep
[20:17:27] Wikimedia totally doesn't need access to Wikimedia
[20:17:33] Usually?
[20:17:38] 54.154.80.208.in-addr.arpa name = cp1044.wikimedia.org.
[20:17:39] FFS
[20:17:48] ಠ_ಠ
[20:18:18] PROBLEM - SSH on amslvs1 is CRITICAL: Server answer:
[20:18:18] This is even worse
[20:18:23] That's listed in the XFF list
[20:18:29] I see
[20:19:57] RECOVERY - SSH on amslvs1 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1 (protocol 2.0)
[20:20:31] We don't have a way of searching IPs with wildcards, do we?
[20:20:56] You want to check contributions of a range?
[20:21:36] There's a gadget https://en.wikipedia.org/wiki/MediaWiki:Gadget-contribsrange.js
[20:21:58] Reedy: https://it.wikipedia.org/w/index.php?title=Special:Contributions/208.80.154.53
[20:22:11] https://it.wikipedia.org/wiki/Speciale:Contributi/208.80.154.52
[20:22:31] javascript:prefixContribsToggleDiv("cr-208.80.152.83")
[20:22:31] if( $cluster == 'pmtpa' ) {
[20:22:34] When was that added..
[20:22:42] https://it.wikipedia.org/w/index.php?title=Special:Contributions/208.80.152.83
[20:22:51] Reedy: You can git blame nowadays :P
[20:22:52] and other ones...
[20:22:55] Indeed
[20:23:02] Vito_away: you've verified what I wanted to check, thanks
[20:23:07] ie it wasn't just a one off
[20:23:28] yep, I see other edits from that subnet
[20:24:03] Hmm, Antoine changed that, but it was in May
[20:24:17] though, that one is july
[20:25:13] Certainly looks fishy
[20:26:11] :|
[20:26:16] https://en.wikipedia.org/w/index.php?title=User:Reedy&action=history
[20:26:17] Maybe the range could be globally blocked for now?
[20:26:28] I doubt there is a need of having squids edit
[20:27:29] Wonder why they're hitting eqiad squids though
[20:27:47] Presumably, but not certainly you'd expect them to be european based
[20:28:13] Reedy: XFFs on it.wiki showed it originated from Russia
[20:28:15] *was
[20:28:44] MaxSem: About? Do you hit esams?
[20:28:46] Vito_away: How can you see http headers? Checkuser?
[20:28:58] yep
[20:28:59] Reedy, how to check?
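A note on the wildcard question above: the contribsrange gadget linked there works by prefix-matching IP contributions, and the same kind of lookup can be run directly against the API with a user prefix. The query below is illustrative (parameter names from memory, limit chosen arbitrarily), not something taken from the gadget itself:

    https://it.wikipedia.org/w/api.php?action=query&list=usercontribs&ucuserprefix=208.80.154.&uclimit=50&format=json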
[20:29:12] MaxSem: ping en.wikipedia.org
[20:29:26] it should give you a wikipedia-lb.*.wikimedia.org resolution target
[20:29:27] Reedy: I'm about to gblock the whole /22 for ~1 week
[20:29:34] if you agree
[20:29:36] ah
[20:29:42] yes, it works for me
[20:29:46] Vito_away: If you do soft, it should be a billion percent fine
[20:29:56] and hard, probably still
[20:29:58] as a matter of fact, I've been editing WP when you pinged me
[20:30:22] heh
[20:30:29] done
[20:30:45] gotta run now, see you later
[20:31:23] I'll try and speak to Tim before I go offline
[20:44:38] mhm, I tried it now and was able to edit using a private IP: http://meta.wikimedia.org/wiki/Special:Contributions/10.64.0.169
[20:44:58] Reedy, Vito_away: ^
[20:45:37] Right
[20:45:45] But that happening isn't so unexpected
[20:45:51] That ip is NOT listed for that proxy
[20:46:41] Reedy: I used 208.80.154.54
[20:46:50] on Port 80
[20:46:56] indeed, but it registered as an internal ip
[20:47:02] some are listed, some aren't
[20:47:09] let me correct that for these eqiad proxies
[20:47:29] I think those internal ranges are whitelisted to not be subject to rate limits
[20:47:40] there's a crappy hack in MediaWiki to allow that
[20:48:09] PROBLEM - Puppet freshness on ms-be1009 is CRITICAL: Puppet has not run in the last 10 hours
[20:48:09] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[20:48:09] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[20:49:57] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:56:59] hoo: can you try again?
[20:57:14] Reedy: Sure
[20:57:41] {"edit":{"captcha":{"type":"image","mime":"image\/png","id":"1552608095","url":"\/w\/index.php?title=Special:Captcha\/image&wpCaptchaId=1552608095"},"result":"Failure"}}
[21:00:08] Reedy: Got it to edit my user page... showing my real IP :/
[21:00:17] good
[21:00:23] i just added more ips to the list ;)
[21:01:39] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.022 seconds
[21:34:44] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:41:38] New patchset: Reedy; "cp1042 is not a wm.o host, only eqiad.wmnet" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/17774
[21:42:21] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/17774
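What Reedy is correcting here is MediaWiki's trusted-proxy handling: the client address is only taken from X-Forwarded-For while each hop is on the configured trust list, which is why requests through an unlisted Varnish box end up attributed to the proxy's own public or internal address. The function below is a simplified sketch of that resolution logic under the assumption of a flat list of trusted addresses; it is not MediaWiki's actual WebRequest/ProxyTools code, and the function name is made up for illustration.

    <?php
    // Simplified illustration only. $trustedProxies stands in for the combined
    // $wgSquidServers / $wgSquidServersNoPurge lists.
    function clientIpFromXff( $remoteAddr, $xffHeader, array $trustedProxies ) {
        // Walk the X-Forwarded-For chain from right to left, starting from the
        // address that actually connected to the web server.
        $chain = array_map( 'trim', explode( ',', $xffHeader ) );
        $ip = $remoteAddr;
        for ( $i = count( $chain ) - 1; $i >= 0; $i-- ) {
            if ( !in_array( $ip, $trustedProxies, true ) ) {
                break; // first hop we do not trust is treated as the client
            }
            $ip = $chain[$i]; // trusted proxy: step one hop closer to the client
        }
        return $ip;
    }

With the eqiad mobile proxies missing from that list, the walk stops immediately at the proxy, and the proxy IP is what gets recorded as the editor, which matches the contributions pages linked above.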
[21:44:47] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.182 seconds
[21:57:11] hoo: 208.80.154.54 is a mobile proxy (apparently)
[21:57:22] same for .53
[21:57:30] Reedy: Yes, I noticed
[21:57:40] but still I could access the API
[21:59:06] mm
[21:59:41] Well, that's probably supposed to be that way... (blocking the API for mobile users would for sure break smth :P)
[21:59:50] heh, yeah
[22:00:18] based on the list of cache servers in puppet, the xff list is now properly up to date
[22:01:34] Though, thinking about it, aren't the mobile caches varnish, not squid?
[22:02:31] Reedy: Yes... 208.80.154.54 is Varnish 1.1
[22:03:15] though, doesn't answer why it only "started" recently for those IPs
[22:03:29] $ git grep -hF -A 7 'cp104(' manifests/site.pp | perl -pe 's/\n//;s/\t/ /g;s/(\s\s?)\s*/$1/g;END {print "\n";}' if $hostname =~ /^cp104(3|4)$/ { $ganglia_aggregator = "true" } interface_add_ip6_mapped { "main": } include role::cache::mobile}
[22:04:16] Reedy: mobile cache runs on varnish
[22:04:43] tfinc: it seems the mobile caches' IPs have been accumulating edits...
[22:04:56] Reedy: that would be a first
[22:04:57] https://it.wikipedia.org/w/index.php?title=Special:Contributions/208.80.154.53
[22:04:59] https://it.wikipedia.org/w/index.php?title=Special:Contributions/208.80.154.54
[22:05:01] Indeed
[22:05:05] mail ops and find out why
[22:05:29] tfinc: they've been globally blocked (steward level?) now i think
[22:05:31] I've just fixed up the xff/wgSquidServersNoPurge
[22:06:08] As it was missing some of the varnish internal ips
[22:06:08] ie http://meta.wikimedia.org/wiki/Special:Contributions/10.64.0.169
[22:18:32] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:21:44] hoo: lol
[22:21:45] https://en.wikipedia.org/wiki/Special:Contributions/208.80.154.53
[22:21:52] (show/hide) 12:04, 2 August 2012 Graeme Bartlett (Talk | contribs | block) blocked 208.80.154.53 (Talk) with an expiry time of 3 months (anonymous users only, account creation disabled) (spambot) (unblock | change block)
[22:22:15] tfinc: lol https://en.wikipedia.org/wiki/Special:Contributions/208.80.154.54
[22:22:54] Reedy: Yes... those Wikimedia guys are just too stupid to get their servers configured... they should probably get cut off the internet :D
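The "fixed up the xff/wgSquidServersNoPurge" step above boils down to listing every cache box, including its internal eqiad address, in the trusted-proxy configuration. $wgSquidServersNoPurge is the MediaWiki setting for proxies that should be trusted for X-Forwarded-For handling but not sent cache purges; the entries below only reuse addresses that appear in this log, and the grouping and comments are illustrative rather than the actual production list:

    <?php
    // Illustrative sketch only -- not the real wmf-config list.
    $wgSquidServersNoPurge = array(
        '208.80.154.53', // public side of a mobile Varnish (per the contributions links above)
        '208.80.154.54', // cp1044, per the reverse DNS lookup earlier in the log
        '10.64.0.169',   // internal eqiad address that briefly showed up as an editor
    );

Once the missing addresses were added, hoo's retest through the same proxy was attributed to his real IP, as the log shows at 21:00.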
[22:25:59] I've emailed opsen
[22:30:14] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.032 seconds
[22:35:56] PROBLEM - Puppet freshness on bayes is CRITICAL: Puppet has not run in the last 10 hours
[22:38:02] PROBLEM - Puppet freshness on niobium is CRITICAL: Puppet has not run in the last 10 hours
[22:38:02] PROBLEM - Puppet freshness on srv242 is CRITICAL: Puppet has not run in the last 10 hours
[22:38:56] PROBLEM - Puppet freshness on srv190 is CRITICAL: Puppet has not run in the last 10 hours
[22:38:56] PROBLEM - Puppet freshness on mw27 is CRITICAL: Puppet has not run in the last 10 hours
[22:38:56] PROBLEM - Puppet freshness on srv238 is CRITICAL: Puppet has not run in the last 10 hours
[23:02:38] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:12:50] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 7.167 seconds
[23:47:05] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:56:14] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 8.185 seconds