A wise man told me last week that I should be careful what I wished for.
I should have listened.
This week there were two major incidents.
The first happened on Wednesday.
We'd just got to the final stage of upgrading our app on the first test run and were preparing to run the final set of scripts.
Then the lights went out.
And the PCs on all the desks went off.
Two of the engineers and I went running into the computer room to check on the servers.
I was trying to explain that I needed to bring the production databases down cleanly, as quickly as possible, since we didn't have the auto-shutdown software working. With no power to the user PCs, nobody could access the systems anyway, so taking them down wouldn't be an issue.
Then someone issued the fatal line 'It's OK. We've probably got at least 30 minutes or so on the UPS boxes'.
The words had barely left his mouth when, yes, you've guessed it, the UPS boxes failed one after the other.
Turns out that some idiot in a JCB on the adjacent construction site had barged straight through all of the cables supplying the Business Park.
The power was out for almost two hours and it took us practically the rest of the afternoon to get everything back up and running as normal.
Luckily (touch wood, cross your fingers, whatever you do for luck), there was no serious damage done.
SO, Thursday we made another attempt to run the final set of scripts.
While we were preparing to do this, I had one of the engineers installing two brand new servers on which we were going to build what will become the new production system.
The idea being that this current round of testing is to ensure the process works, then we perform the same process again on the new servers.
At Go-Live, the new servers will become the new production system and the existing production system will initially continue to exist as an archive/contingency plan, before being relegated to a test system.
Anyway, we set the script away; it had been running for about 45 minutes of an estimated 1 hour 20 minutes.
Suddenly, my connection dropped.
I could still ping the server, but could no longer create a terminal services connection.
A quick jog round to the computer room to check on the server and I soon discovered what had happened.
There, displayed on the console of the server I was working on, was a message highlighting a problem with the IP address.
A chat with the engineer installing the servers revealed that of the two IP addresses he'd been allocated for the new servers, one was the IP address of the existing test system.
To top it all off, the script that had been running when the connection dropped had been supplied by the 3rd party who provided the application.
The script looked at a table X that contained both current and historic data.
It created 3 temporary tables, then inserted rows into them based on a select from X.
Once the insert statement completed, the rows were then deleted from table X.
The data in the temporary tables was then updated to reflect the new accounting structure and inserted back into table X and finally deleted from the temporary tables.
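The shape of that script can be sketched as below. This is a minimal sketch only, using SQLite and a single staging table rather than three, and the table name `X`, the `historic` flag, and the account-code remapping are all hypothetical stand-ins, not the vendor's actual schema.

```python
import sqlite3

# Hypothetical schema standing in for the vendor's table X.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE X (id INTEGER PRIMARY KEY, account TEXT, amount REAL, historic INTEGER)")
cur.executemany(
    "INSERT INTO X VALUES (?, ?, ?, ?)",
    [(1, "OLD-100", 10.0, 1), (2, "OLD-200", 20.0, 1), (3, "CUR-300", 30.0, 0)],
)

# The vendor script's shape: stage rows out of X, rework them, put them back.
# Doing the whole move inside one transaction means a dropped connection
# rolls everything back instead of stranding the rows mid-flight.
with conn:  # commits on success, rolls back on error
    cur.execute("CREATE TEMP TABLE staging AS SELECT * FROM X WHERE historic = 1")
    cur.execute("DELETE FROM X WHERE historic = 1")
    # Stand-in for 'reflect the new accounting structure': remap the codes.
    cur.execute("UPDATE staging SET account = REPLACE(account, 'OLD-', 'NEW-')")
    cur.execute("INSERT INTO X SELECT * FROM staging")
    cur.execute("DELETE FROM staging")

rows = cur.execute("SELECT account FROM X ORDER BY id").fetchall()
print(rows)  # historic rows are back in X with remapped codes
```

The point of the transaction wrapper is exactly the failure mode described here: if the delete from `X` commits before the insert back completes, an interruption in between leaves the data in neither table.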
After we got the IP issue sorted, we then had to work out at what point the script had stopped, and clean it all up.
There was no data in the temporary tables.
The data was no longer in table X.
We'd lost it.
Luckily, I'm very cautious and had exported the table before running the script, but we lost the rest of the afternoon waiting for it to import.
I may have to rename this blog 'The Disaster Diary'.