[00:00:19] (03CR) 10jerkins-bot: [V: 04-1] wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150 (https://phabricator.wikimedia.org/T165879) (owner: 10Dzahn)
[00:11:32] vi up+62
[00:11:46] ,wq
[00:12:01] oops, nevermind, it's my connection
[00:13:32] (03PS4) 10Dzahn: wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150
[00:14:31] (03CR) 10jerkins-bot: [V: 04-1] wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150 (owner: 10Dzahn)
[00:19:24] (03PS5) 10Dzahn: wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150
[00:20:23] (03CR) 10jerkins-bot: [V: 04-1] wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150 (owner: 10Dzahn)
[00:22:50] PROBLEM - Disk space on ms-be1008 is CRITICAL: DISK CRITICAL - /srv/swift-storage/sdb1 is not accessible: Input/output error
[00:23:39] (03PS6) 10Dzahn: wikistats: cron for XML dumps (WIP) [puppet] - 10https://gerrit.wikimedia.org/r/358150
[00:26:11] PROBLEM - puppet last run on ms-be1008 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[mountpoint-/srv/swift-storage/sdb1]
[01:01:50] RECOVERY - Disk space on ms-be1008 is OK: DISK OK
[01:28:30] RECOVERY - puppet last run on ms-be1008 is OK: OK: Puppet is currently enabled, last run 46 seconds ago with 0 failures
[01:51:17] (03PS1) 10Huji: Change AbuseFilter block duration for fawiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358156 (https://phabricator.wikimedia.org/T167562)
[01:56:57] 10Operations, 10DNS, 10Traffic: Redirect status.wikipedia.org to status.wikimedia.org - https://phabricator.wikimedia.org/T167239#3321697 (10Peachey88) I had a very quick and brief look but can't find it, But i believe previous consensus /or desire was not to add more subdomains on the wikipedia domain unles...
[02:00:16] (03CR) 10TTO: "Just noting that it would have been more appropriate to do this by taking the category names from the list above this one in InitialiseSet" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358007 (owner: 10Amire80)
[02:16:09] !log l10nupdate@tin scap sync-l10n completed (1.30.0-wmf.4) (duration: 05m 33s)
[02:16:19] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[02:22:22] !log l10nupdate@tin ResourceLoader cache refresh completed at Sat Jun 10 02:22:22 UTC 2017 (duration 6m 13s)
[02:22:30] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[02:47:20] PROBLEM - HHVM rendering on mw1200 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.073 second response time
[02:47:30] PROBLEM - nova instance creation test on labnet1001 is CRITICAL: PROCS CRITICAL: 0 processes with command name python, args nova-fullstack
[02:48:00] PROBLEM - HHVM rendering on mw1198 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:48:00] PROBLEM - Apache HTTP on mw1200 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1311 bytes in 1.010 second response time
[02:48:50] RECOVERY - HHVM rendering on mw1198 is OK: HTTP OK: HTTP/1.1 200 OK - 73604 bytes in 0.223 second response time
[02:49:00] RECOVERY - Apache HTTP on mw1200 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 613 bytes in 0.390 second response time
[02:49:20] RECOVERY - HHVM rendering on mw1200 is OK: HTTP OK: HTTP/1.1 200 OK - 73608 bytes in 2.808 second response time
[02:54:30] PROBLEM - Apache HTTP on mw1195 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.073 second response time
[02:55:00] PROBLEM - Apache HTTP on mw1201 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.074 second response time
[02:55:11] PROBLEM - Nginx local proxy to apache on mw1201 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.153 second response time
[02:55:20] PROBLEM - Apache HTTP on mw1180 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.073 second response time
[02:55:30] RECOVERY - Apache HTTP on mw1195 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 612 bytes in 0.174 second response time
[02:55:40] PROBLEM - HHVM rendering on mw1180 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.079 second response time
[02:56:00] RECOVERY - Apache HTTP on mw1201 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 613 bytes in 0.471 second response time
[02:56:11] RECOVERY - Nginx local proxy to apache on mw1201 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 613 bytes in 0.184 second response time
[02:56:20] RECOVERY - Apache HTTP on mw1180 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 612 bytes in 0.102 second response time
[02:56:40] RECOVERY - HHVM rendering on mw1180 is OK: HTTP OK: HTTP/1.1 200 OK - 73606 bytes in 0.325 second response time
[04:01:00] PROBLEM - HHVM rendering on mw1190 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:01:50] RECOVERY - HHVM rendering on mw1190 is OK: HTTP OK: HTTP/1.1 200 OK - 73526 bytes in 6.460 second response time
[04:10:10] PROBLEM - mailman I/O stats on fermium is CRITICAL: CRITICAL - I/O stats: Transfers/Sec=710.10 Read Requests/Sec=297.10 Write Requests/Sec=0.60 KBytes Read/Sec=37980.00 KBytes_Written/Sec=14.40
[04:20:10] RECOVERY - mailman I/O stats on fermium is OK: OK - I/O stats: Transfers/Sec=13.00 Read Requests/Sec=0.00 Write Requests/Sec=0.50 KBytes Read/Sec=0.00 KBytes_Written/Sec=9.20
[04:37:50] PROBLEM - dhclient process on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - swift-object-updater on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - salt-minion processes on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - swift-container-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - swift-object-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - swift-account-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:37:50] PROBLEM - swift-object-auditor on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:38:40] RECOVERY - dhclient process on ms-be1005 is OK: PROCS OK: 0 processes with command name dhclient
[04:38:40] RECOVERY - swift-object-auditor on ms-be1005 is OK: PROCS OK: 3 processes with regex args ^/usr/bin/python /usr/bin/swift-object-auditor
[04:38:40] RECOVERY - swift-object-updater on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-object-updater
[04:38:40] RECOVERY - swift-account-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-replicator
[04:38:40] RECOVERY - swift-container-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-replicator
[04:38:40] RECOVERY - swift-object-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-object-replicator
[04:38:40] RECOVERY - salt-minion processes on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/salt-minion
[06:07:30] PROBLEM - Check whether ferm is active by checking the default input chain on ms-be1019 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
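The PROBLEM/RECOVERY pairs above come from NRPE process checks in the `check_procs` style: count the processes whose full command line matches an anchored regex, then compare the count against expected bounds. A minimal self-contained sketch of that logic, with the process table simulated by a here-string so it runs anywhere (on a real host you would read `ps -eo args=` or call the check_procs plugin directly):

```shell
# Simulated process table; on a real host: ps_output=$(ps -eo args=)
ps_output='/usr/bin/python /usr/bin/swift-object-updater
/usr/bin/python /usr/bin/swift-object-auditor
/usr/bin/python /usr/bin/swift-object-auditor
/usr/bin/python /usr/bin/swift-object-auditor'

pattern='^/usr/bin/python /usr/bin/swift-object-updater'

# Count lines whose args match the anchored regex
count=$(printf '%s\n' "$ps_output" | grep -cE "$pattern")

# check_procs-style verdict: exactly one updater process expected
if [ "$count" -eq 1 ]; then
    echo "PROCS OK: $count process with regex args $pattern"
else
    echo "PROCS CRITICAL: $count processes with regex args $pattern"
fi
```

The "Socket timeout after 10 seconds" PROBLEMs above fire when the NRPE daemon on the host is too slow to answer at all (here, likely disk I/O stalls on the swift backend), not when the process count is actually wrong, which is why they recover within a minute.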
[06:08:20] RECOVERY - Check whether ferm is active by checking the default input chain on ms-be1019 is OK: OK ferm input default policy is set
[07:19:00] PROBLEM - HHVM rendering on mw1190 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:20:00] RECOVERY - HHVM rendering on mw1190 is OK: HTTP OK: HTTP/1.1 200 OK - 73538 bytes in 8.687 second response time
[08:29:35] PROBLEM - MariaDB disk space on pc1004 is CRITICAL: DISK CRITICAL - free space: /srv 133760 MB (5% inode=99%)
[08:46:06] PROBLEM - MariaDB disk space on pc2004 is CRITICAL: DISK CRITICAL - free space: /srv 134049 MB (5% inode=99%)
[09:54:35] RECOVERY - MariaDB disk space on pc1004 is OK: DISK OK
[09:55:02] someone did that? :)
[09:55:18] binlong on pc1004, marostegui was cleaning them
[09:55:22] *binlog
[09:55:23] ah
[09:55:26] yep
[09:56:05] RECOVERY - MariaDB disk space on pc2004 is OK: DISK OK
[09:56:08] ^ same
[09:56:27] marostegui: are you doing the other 2 couples too right?
[09:56:35] Yeah, moving there now
[09:57:19] thanks!
[09:58:29] Going to log it by the way
[09:58:42] !log Purge binary logs on pc1004-pc2004 and pc1005-pc2005
[09:58:52] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[10:00:06] !log Purge binary logs on pc1006-pc2006
[10:00:10] PROBLEM - puppet last run on cp3039 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[10:00:15] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[10:19:01] !log on terbium: running purgeParserCache.php prior to cron job due to observed disk space usage increase
[10:19:10] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[10:28:10] RECOVERY - puppet last run on cp3039 is OK: OK: Puppet is currently enabled, last run 18 seconds ago with 0 failures
[11:16:20] (03PS1) 10Volans: Parsercache: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567)
[11:42:56] (03PS2) 10Volans: Parsercache: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567)
[11:54:33] (03PS3) 10Volans: Parsercache: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567)
[11:54:37] !log cleared leaked instances out of the nova fullstack test. Six were up and running and reachable, one had a network failure.
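The "Purge binary logs" steps !logged above reclaim the /srv space that tripped the MariaDB disk-space alerts on the parsercache hosts. `PURGE BINARY LOGS` is standard MariaDB/MySQL syntax; the one-day retention window and running it via the `mysql` client below are illustrative assumptions, not details taken from the log:

```shell
# Build a PURGE BINARY LOGS statement with a UTC date cutoff.
# The 1-day window is illustrative; the actual retention used on
# pc1004-pc2006 is not recorded in this log.
cutoff=$(date -u -d '1 day ago' '+%Y-%m-%d %H:%M:%S')
sql="PURGE BINARY LOGS BEFORE '$cutoff'"

# On the host itself this would be executed as, e.g.:
#   mysql -e "SHOW BINARY LOGS"   # inspect what is on disk first
#   mysql -e "$sql"               # then purge logs older than the cutoff
echo "$sql"
```

Unlike deleting the binlog files by hand, `PURGE BINARY LOGS` keeps the server's binary log index consistent and refuses to remove logs still needed by a connected replica.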
[11:54:46] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[11:55:40] RECOVERY - nova instance creation test on labnet1001 is OK: PROCS OK: 1 process with command name python, args nova-fullstack
[11:58:05] Urbanecm: hey, problem
[11:58:41] https://pl.wikipedia.org/static/images/project-logos/plwiki-2x.png hasn't been used since 2010
[11:58:44] ouch
[11:59:12] And we've been displaying that as a logo since 12 January
[12:01:17] (That's commit 499686c5 by the way)
[12:04:38] (03PS4) 10Volans: MariaDB: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567)
[12:12:26] (03CR) 10Odder: "Please note that this commit introduced and
[12:12:26] (03CR) 10Volans: "I've opted for a quick and secure increase, it's true it changes the alarm for the other DBs too but it's only 1% difference." [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567) (owner: 10Volans)
[12:13:38] (03CR) 10Marostegui: [C: 031] MariaDB: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567) (owner: 10Volans)
[12:15:29] (03CR) 10Volans: [C: 032] MariaDB: temporarily increase limit for space alarm [puppet] - 10https://gerrit.wikimedia.org/r/358167 (https://phabricator.wikimedia.org/T167567) (owner: 10Volans)
[12:17:11] andrewbogott: there are uncommitted changes of yours
[12:17:15] what should I do?
[12:17:35] s/uncommitted/un-puppet-merged/
[12:20:05] andrewbogott: mine can be freely merged anytime, so I'll leave it unmerged given that I don't know if yours can be puppet-merged yet
[12:22:29] * volans lunch
[12:26:40] PROBLEM - Unmerged changes on repository puppet on puppetmaster1001 is CRITICAL: There are 2 unmerged changes in puppet (dir /var/lib/git/operations/puppet, ref HEAD..origin/production).
[12:45:54] (03PS1) 10Odder: Update pre-2010 high-density Wikipedia logos [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358170
[12:46:32] (03CR) 10Odder: "All logos have been optimised with optipng -o7 as usual." [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358170 (owner: 10Odder)
[12:52:48] (03CR) 10Dereckson: [C: 031] "Same issue than b79cd621bda4f7ac979f96c7514e77f71ce259ac / T165811." [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358170 (owner: 10Odder)
[14:10:02] (03CR) 10Krinkle: [C: 032] phpunit: replace deprecated strict=true [mediawiki-config] - 10https://gerrit.wikimedia.org/r/356349 (owner: 10Hashar)
[14:11:20] (03Merged) 10jenkins-bot: phpunit: replace deprecated strict=true [mediawiki-config] - 10https://gerrit.wikimedia.org/r/356349 (owner: 10Hashar)
[14:11:30] (03CR) 10jenkins-bot: phpunit: replace deprecated strict=true [mediawiki-config] - 10https://gerrit.wikimedia.org/r/356349 (owner: 10Hashar)
[14:26:38] (03PS1) 10Framawiki: robots.txt: Remove old and disabled archive.org_bot rule [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358171 (https://phabricator.wikimedia.org/T7582)
[14:39:10] (03CR) 10Nemo bis: robots.txt: Remove old and disabled archive.org_bot rule (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358171 (https://phabricator.wikimedia.org/T7582) (owner: 10Framawiki)
[14:46:24] (03CR) 10Framawiki: robots.txt: Remove old and disabled archive.org_bot rule (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358171 (https://phabricator.wikimedia.org/T7582) (owner: 10Framawiki)
[14:50:58] (03CR) 10Nemo bis: robots.txt: Remove old and disabled archive.org_bot rule (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358171 (https://phabricator.wikimedia.org/T7582) (owner: 10Framawiki)
[14:59:00] PROBLEM - Disk space on ms-be1002 is CRITICAL: DISK CRITICAL - /srv/swift-storage/sde1 is not accessible: Input/output error
[15:02:01] RECOVERY - Disk space on ms-be1002 is OK: DISK OK
[15:11:40] 10Operations, 10Performance-Team, 10Thumbor, 10MW-1.30-release-notes (WMF-deploy-2017-06-06_(1.30.0-wmf.4)), 10Patch-For-Review: Thumbor should reject thumbnail requests that are the same size as the original or bigger - https://phabricator.wikimedia.org/T150741#3251660 (10Krinkle) > The limit only works...
[15:42:26] 10Operations, 10DBA, 10Patch-For-Review: Migrate parsercache host to file per table - https://phabricator.wikimedia.org/T167567#3337276 (10Marostegui)
[15:44:23] 10Operations, 10DBA, 10Patch-For-Review: Migrate parsercache hosts to file per table - https://phabricator.wikimedia.org/T167567#3337005 (10Marostegui)
[16:03:43] volans: sorry! I'll look right now
[16:04:40] RECOVERY - Unmerged changes on repository puppet on puppetmaster1001 is OK: No changes to merge.
[16:04:41] ok, merged
[16:05:01] andrewbogott: thanks
[17:22:22] (03PS1) 10Odder: Update logo for the Norwegian Wikisource [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358175 (https://phabricator.wikimedia.org/T167192)
[17:23:22] (03CR) 10Odder: "All three logos have been optimised with optipng -o7 as usual." [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358175 (https://phabricator.wikimedia.org/T167192) (owner: 10Odder)
[17:36:47] (03PS1) 10Odder: Delete duplicate HD logos for the Punjabi Wikipedia [mediawiki-config] - 10https://gerrit.wikimedia.org/r/358176
[18:30:30] PROBLEM - swift-account-auditor on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
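Change 358171 discussed above removes a long-disabled crawler rule from MediaWiki's robots.txt configuration. For context, a per-crawler robots.txt rule takes the general form below; the exact text of the removed rule is not shown in this log, so this is an illustrative sketch only, not the actual removed content:

```text
# Illustrative robots.txt fragment - not the actual rule removed in 358171
User-agent: archive.org_bot
Disallow: /
```

A rule like this tells the named crawler (matched against its self-reported `User-agent`) not to fetch anything under the given path; removing a disabled rule is a no-op for crawler behaviour and simply cleans up the file.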
[18:30:31] PROBLEM - swift-object-server on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:30:40] PROBLEM - swift-container-auditor on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:30:43] PROBLEM - swift-account-server on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:30:43] PROBLEM - swift-container-updater on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:30:43] PROBLEM - swift-container-server on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:30:43] PROBLEM - swift-account-reaper on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - swift-object-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - swift-container-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - dhclient process on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - swift-object-auditor on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - swift-account-replicator on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - swift-object-updater on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:00] PROBLEM - salt-minion processes on ms-be1005 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:31:21] RECOVERY - swift-account-auditor on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-auditor
[18:31:21] RECOVERY - swift-object-server on ms-be1005 is OK: PROCS OK: 101 processes with regex args ^/usr/bin/python /usr/bin/swift-object-server
[18:31:30] RECOVERY - swift-account-reaper on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-reaper
[18:31:30] RECOVERY - swift-account-server on ms-be1005 is OK: PROCS OK: 13 processes with regex args ^/usr/bin/python /usr/bin/swift-account-server
[18:31:30] RECOVERY - swift-container-server on ms-be1005 is OK: PROCS OK: 13 processes with regex args ^/usr/bin/python /usr/bin/swift-container-server
[18:31:30] RECOVERY - swift-container-updater on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-updater
[18:31:30] RECOVERY - swift-container-auditor on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-auditor
[18:31:50] RECOVERY - swift-object-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-object-replicator
[18:31:52] RECOVERY - dhclient process on ms-be1005 is OK: PROCS OK: 0 processes with command name dhclient
[18:31:52] RECOVERY - swift-object-auditor on ms-be1005 is OK: PROCS OK: 3 processes with regex args ^/usr/bin/python /usr/bin/swift-object-auditor
[18:31:52] RECOVERY - swift-account-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-replicator
[18:31:52] RECOVERY - swift-container-replicator on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-replicator
[18:31:52] RECOVERY - swift-object-updater on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-object-updater
[18:31:52] RECOVERY - salt-minion processes on ms-be1005 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/salt-minion
[19:00:33] odder: hi, can you take a look at T125942 ?
[19:00:33] T125942: On beta metawiki, a mix of the beta enwiki and the production metawiki logos show - https://phabricator.wikimedia.org/T125942
[19:33:58] Sagan: I did, but I can't confirm the same results that that other person is talking about.
[19:34:08] I still see the production Meta-Wiki logo everywhere.
[19:35:00] odder: (I'm the other person). That's weird, I even see the "Wikimedia Beta Meta-Wiki" logo when not logged in
[19:35:52] Sagan: I was wondering if that's a cache issue I'm experiencing...
[21:39:12] Sagan: link me and ill see if i can reproduce t125942 maybe its a locale/cache thing
[22:00:13] I don't see a different logo on Special:Userlogin :P
[23:15:00] PROBLEM - cassandra-c CQL 10.192.48.51:9042 on restbase2006 is CRITICAL: connect to address 10.192.48.51 and port 9042: Connection refused
[23:16:50] PROBLEM - cassandra-c SSL 10.192.48.51:7001 on restbase2006 is CRITICAL: SSL CRITICAL - failed to connect or SSL handshake:Connection refused
[23:17:00] PROBLEM - cassandra-c service on restbase2006 is CRITICAL: CRITICAL - Expecting active but unit cassandra-c is failed
[23:17:10] PROBLEM - Check systemd state on restbase2006 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[23:35:10] RECOVERY - Check systemd state on restbase2006 is OK: OK - running: The system is fully operational
[23:36:00] RECOVERY - cassandra-c service on restbase2006 is OK: OK - cassandra-c is active
[23:36:50] RECOVERY - cassandra-c SSL 10.192.48.51:7001 on restbase2006 is OK: SSL OK - Certificate restbase2006-c valid until 2017-09-12 15:35:47 +0000 (expires in 93 days)
[23:37:00] RECOVERY - cassandra-c CQL 10.192.48.51:9042 on restbase2006 is OK: TCP OK - 0.004 second response time on 10.192.48.51 port 9042
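The "Check systemd state" alert on restbase2006 above reflects `systemctl is-system-running`, which reports `degraded` while any unit (here, cassandra-c) is in the failed state and `running` once everything is healthy again. A self-contained sketch of that check logic, with the systemctl output simulated so the sketch runs anywhere:

```shell
# Simulated output; on a real host: state=$(systemctl is-system-running)
state="degraded"

# Map the systemd state to an icinga-style verdict, mirroring the
# message format seen in the alerts above.
if [ "$state" = "running" ]; then
    msg="OK - running: The system is fully operational"
else
    msg="CRITICAL - $state: The system is operational but one or more units failed."
fi
echo "$msg"

# On the host, the failed unit would be inspected and cleared with, e.g.:
#   systemctl --failed
#   systemctl reset-failed cassandra-c
```

Note the ordering of the recoveries above: the systemd state check recovered first (the unit restarted), then the service, SSL, and CQL port checks followed as Cassandra finished starting up.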