
Last Thursday (10 Jan 2019), starting at 02:30, we experienced an issue that caused a full downtime of ~12 hours and intermittent issues for quite a while afterwards.

First of all, we are so, so sorry about this. And, as a summary, it was totally our fault.

Uptime Robot has been available since Jan 2010, and this is the first time we have had such a major problem.

We would like to share what happened and what we'll be doing to prevent it from happening again:

  • Our main DB server became unreachable. We first thought it was a network issue, then discovered that the server wasn't able to boot, and finally confirmed that its hard disk was failing.
  • We were OK as we had the replica DB server and decided to promote it to master. We couldn't connect to this server at first, performed a power reboot, then connected and made a huge personal mistake here. Before starting the (MySQL) DB server after the reboot, we had to change several of its settings so that it was ready for the live load. Besides a few my.ini changes, we removed the InnoDB log files so that they would be re-created with the right settings. We started the server, all good... and it stopped by itself. The MySQL error logs showed sync problems, with MySQL's log sequence number being in the future. The problem is that the power reboot had shut the DB server down unexpectedly, so we should have started it with the original settings first, stopped it cleanly, and only then made the changes (a sketch of that ordering follows this list). A simple yet huge mistake.
  • After lots of retries with different options (including forcing InnoDB recovery), some major tables still could not be recovered.
  • So, we decided to do a full restore from the backups. We take backups very regularly and have 2 types of data:
    • the account settings, monitors, alert contacts, etc. (backups taken directly to the backup server every hour)
    • and the logs (this data is pretty huge; backups are taken every day to the local server first so that they are faster, then automatically zipped and moved to the backup server)
      • The latest settings backup was taken ~23 minutes before the incident. We restored it.
      • The latest logs backup was taken ~7 hours before the incident. Yet, its zip file was corrupt, and so were several of the other recent backup files. The latest healthy logs backup had been taken 7 days earlier.
  • We tried to recover the contents of the corrupted backup files with several methods/tools but failed (this process took up most of those hours, as we wanted to re-enable the DB with the latest log backup). In the end, we restored the backup taken 7 days earlier (since that day, we have tried many more tools and suggestions, yet we are now convinced that those files are corrupt at their core).
  • We made the site live after the restore process but realized that there were many inconsistencies due to the date differences between the backup files used. We worked on a tool to remove those inconsistencies, paused the system for another 3 hours the next day, ran this tool to fix all the inconsistencies, and made the system live again.
  • After the event, looking at it calmly, the most logical explanation is that the hard disk had been having issues for several days before totally going down, corrupting the local backups we had taken on it (which we then moved to the backup server already corrupted).
  • And, as a result, we couldn't restore the log (up-down) data between 03 Jan and 10 Jan.
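
For the curious, the ordering we should have followed when re-creating the InnoDB log files looks roughly like the sketch below. This is only an illustrative outline, not our actual scripts: the data directory, the service name and the default ib_logfile* names are assumptions for the example.

```python
# Illustrative sketch only: the safe ordering for re-creating InnoDB log files
# after an unclean shutdown. Paths and the service name are assumptions.
import glob
import os
import subprocess

DATADIR = "/var/lib/mysql"   # assumed InnoDB data directory
SERVICE = "mysql"            # assumed systemd service name

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# 1. Start MySQL with the ORIGINAL settings so InnoDB can run crash recovery
#    against the existing log files (the step we skipped).
run("systemctl", "start", SERVICE)

# 2. Request a full (slow) shutdown so everything is flushed to the tablespaces
#    and the old log files can be safely discarded.
run("mysql", "-e", "SET GLOBAL innodb_fast_shutdown = 0;")
run("systemctl", "stop", SERVICE)

# 3. Only now apply the config changes for the live load and remove the old
#    log files so they are re-created with the new settings on startup.
for logfile in glob.glob(os.path.join(DATADIR, "ib_logfile*")):
    os.remove(logfile)

# 4. Start the server with the new settings.
run("systemctl", "start", SERVICE)
```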

This is a short summary of the issue we experienced. We made several mistakes:

  • Not using RAID (this was due to a negative experience we had with RAID in the past but, thinking twice, it would still have been better than relying on a single hard disk that could get corrupted).
  • Handling the promotion of the replica to master badly. We should have had more detailed internal documentation for this process.
  • Taking the larger backups locally first and only moving them to the backup server afterwards.
  • Also, we didn't have a communication tool in place for when the system is fully down and user data is unreachable, which is so wrong.

We are taking several actions to make sure that such a downtime never happens again and that any such issue is handled much better:

  • The backup scenarios have already been changed to include verification of each backup file (a rough sketch of that kind of check follows this list).
  • Getting ready to move all critical servers to RAID setups (we will share a scheduled maintenance window for this soon).
  • We have already updated our recovery documentation accordingly and will be documenting such cases in more detail from now on.
  • We are working on a better communication channel that is not tied to our own infrastructure.
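
As an example of what "verification of each backup file" can mean in practice, here is a minimal sketch in Python that tests a zipped backup and computes a checksum before the file leaves the local server. The file name is hypothetical and this is not our exact script, just the kind of check involved.

```python
# Minimal sketch: verify a zipped backup before (and after) moving it off-host.
import hashlib
import zipfile

def verify_zip(path: str) -> None:
    """Raise if the archive is corrupt (zipfile re-checks each member's CRC)."""
    with zipfile.ZipFile(path) as archive:
        bad_member = archive.testzip()
        if bad_member is not None:
            raise RuntimeError(f"{path}: corrupt member {bad_member}")

def sha256sum(path: str) -> str:
    """Checksum used to compare the local copy with the copy on the backup server."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    backup = "logs-backup.zip"        # hypothetical file name
    verify_zip(backup)                # fail loudly instead of shipping a corrupt file
    print(backup, sha256sum(backup))  # re-compare this hash after the transfer
```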

Again, we are very sorry for the trouble. We learned a lot from it, and we can't thank all Uptime Robot users enough for supporting and helping us during the issue.

Uptime Robot is already integrated with the major team communication apps, and here is another addition: Google Hangouts Chat.

If you already use Hangouts Chat (which is part of Google's G Suite), the integration can be set up with just these few steps:

  • Inside Hangouts Chat, create a new webhook URL under Room menu > Configure webhooks > Add new.
  • Inside Uptime Robot, create a new alert contact under My Settings > Alert Contacts > Add new > Google Hangouts Chat, using the previously created Hangouts Chat webhook URL.
  • Attach this new alert contact to the monitors of your choice from the add/edit monitor dialogs.
  • That is it.
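
If you would like to confirm that the webhook URL works before attaching the alert contact to your monitors, you can post a test message to it yourself. Hangouts Chat incoming webhooks accept a simple JSON payload; the snippet below is just an optional, standalone test (the URL is a placeholder), not something Uptime Robot requires.

```python
# Optional: send a test message to a Hangouts Chat incoming webhook.
import json
import urllib.request

# Placeholder: paste the webhook URL created via Room menu > Configure webhooks.
WEBHOOK_URL = "https://chat.googleapis.com/v1/spaces/.../messages?key=...&token=..."

payload = {"text": "Test: this room will receive Uptime Robot alerts."}
request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=UTF-8"},
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.reason)  # 200 means the message was delivered
```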