Re: waterwisp post# 72141

Friday, September 01, 2006 8:12:39 AM

Post# of 216878
I did try to get it ready for production for about 15 minutes or so after the data file had been copied over, but was unsuccessful, so I turned on the lights over here.

Later last night, anyone within a few miles of Boogerville heard a maniacal "It's aliiiive!", but I couldn't put it into production because it needs the data file copied back over again. Lots easier and faster to do that than to write the routines to move over just the changed/added data.

And the more I think about it, the more convinced I am that I didn't do the data move properly, as in making sure it would happen at gigabit speed. 35 gig took something like 68 minutes. That's about 515MB per minute, or roughly 8.6MB per second, which works out to somewhere around 69-86 megabits per second, depending on whether a byte counts as 8 or 10 bits on the wire. Wanna check me on that, Dave? If memory serves, the log file was 418MB and took 35 seconds.
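For anyone who wants to check the math, here's the same back-of-the-envelope calculation in Python (treating a "gig" as 1000MB, which is close enough for this purpose):

# Observed throughput on the 35 gig copy that took ~68 minutes.
data_mb = 35 * 1000
minutes = 68

mb_per_min = data_mb / minutes      # ~515 MB/min
mb_per_sec = mb_per_min / 60        # ~8.6 MB/s

# 8 bits/byte is the raw figure; ~10 bits/byte is a rough allowance for
# protocol/framing overhead on the wire.
print(f"{mb_per_min:.0f} MB/min, {mb_per_sec:.1f} MB/s")
print(f"~{mb_per_sec * 8:.0f} to ~{mb_per_sec * 10:.0f} megabits per second")

(The log file, by the same arithmetic, moved at about 12MB per second, so at least it went somewhat faster.)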

Seems to me it should've taken about 7 minutes, assuming the network was the slowest part of the equation. I have no idea what the throughput is on the old server's hard drives, but I do know the new server's drives are nearly double the speed of its NICs. I also have no idea how much overhead RAID5 adds by having to calculate and store the parity data, though with hard drives nearly twice as fast as the NICs, it surely shouldn't be a factor.
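Here's the rough sanity check behind the "about 7 minutes" figure, assuming a gigabit link is the only bottleneck; the exact number depends on how much real-world overhead you allow for:

# 35 gig over gigabit at a few assumed effective throughputs.
data_mb = 35 * 1000

for effective_mb_per_sec in (125, 100, 80):   # theoretical max down to a lossier real-world figure
    minutes = data_mb / effective_mb_per_sec / 60
    print(f"at {effective_mb_per_sec} MB/s: ~{minutes:.1f} minutes")

# ~4.7, ~5.8, ~7.3 minutes -- so roughly 7 minutes once overhead is allowed for.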

I also need to do some research today and see if maybe I should be using DTS instead of DOS or Windoze to move the data. It occurs to me that if I use DTS, the new database won't be fragmented like it currently is since it walks through the tables, importing just a few at a time. Better still, script the DTS so the Message table is the last one imported so all messages up to the import time will be in contiguous blocks.
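Not actual DTS, but here's a rough Python sketch of the table-ordering idea, just to show what I mean: walk the tables in small batches and save the Message table for last so its pages land in one contiguous run. Table names and connection strings here are made up, and pyodbc is only standing in for whatever the real transfer mechanism ends up being.

import pyodbc

SOURCE = "DRIVER={SQL Server};SERVER=oldserver;DATABASE=ihub;Trusted_Connection=yes"
DEST   = "DRIVER={SQL Server};SERVER=newserver;DATABASE=ihub;Trusted_Connection=yes"

tables = ["Boards", "Members", "Favorites", "Message"]   # hypothetical table list
tables.sort(key=lambda t: t == "Message")                # force Message to the end

src = pyodbc.connect(SOURCE)
dst = pyodbc.connect(DEST)

for table in tables:
    src_cur = src.cursor()
    src_cur.execute(f"SELECT * FROM {table}")
    dst_cur = dst.cursor()
    placeholders = None
    copied = 0
    while True:
        batch = src_cur.fetchmany(1000)          # a few rows at a time, not the whole table
        if not batch:
            break
        if placeholders is None:
            placeholders = ", ".join("?" for _ in batch[0])
        dst_cur.executemany(f"INSERT INTO {table} VALUES ({placeholders})",
                            [tuple(row) for row in batch])
        copied += len(batch)
    dst.commit()
    print(f"{table}: {copied} rows copied")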

Might be worth testing to see if I can use DTS once a month to defragment the database (the existing defrag tools, at least in the old SQL, only defrag the indexes). Then again, fragmentation just might not matter given the throughput, spindle speed, and access speed of the new drives. And with 64-bit everything, it'll actually use the full 8GB of memory in the db server, so caching should become a major performance-improver.
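If I want to measure it rather than guess, the SQL 2000-era way to look at fragmentation is DBCC SHOWCONTIG. A quick sketch (table name assumed) of pulling the numbers for the Message table so I can see whether it's even worth worrying about:

import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=newserver;DATABASE=ihub;Trusted_Connection=yes")
cur = conn.cursor()

# WITH TABLERESULTS returns the stats as a result set instead of text messages.
cur.execute("DBCC SHOWCONTIG ('Message') WITH TABLERESULTS")
row = cur.fetchone()
for column, value in zip([d[0] for d in cur.description], row):
    print(f"{column}: {value}")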

Anyway, the current plan is to shut the system down after the close again this afternoon, copy everything over, then bring it back up on the new webserver and db server. With the old webserver ready to step back in if needed. I don't anticipate any issues with the new database server, but there's a reason the new webserver has been gathering dust for 2 months or so and hopefully those issues will simply "go away" now. The issue seemed to be that it was opening too many connections on the db server.

In the db server's case, as long as it's functional, it has enough sheer grunt that it should be able to power through any performance-hurting issues while they're being addressed on the fly. And now I know how to make it functional. Required downloading and installing a driver I would've thought would've come with the new webserver's OS but didn't.

Once this migration is pulled off, then it'll be time to finish my work on the backend post-submission routine, which will be a LOT faster, less work for the webserver, and completely rid us of the problem of duplicate message numbers within boards.
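Not the actual routine, but for the curious, here's a minimal sketch of the standard trick for handing out per-board message numbers so duplicates can't happen: bump a per-board counter and read it back in the same statement, inside one transaction. Table and column names are hypothetical.

import pyodbc

def next_post_number(conn, board_id):
    # The UPDATE takes a row lock on the board's counter, so two simultaneous
    # posters can't be handed the same number; the assignment-in-UPDATE trick
    # captures the new value without a separate SELECT racing the update.
    cur = conn.cursor()
    cur.execute(
        "SET NOCOUNT ON; "
        "DECLARE @n int; "
        "UPDATE Boards SET @n = LastPostNumber = LastPostNumber + 1 "
        "WHERE BoardID = ?; "
        "SELECT @n;",
        board_id)
    number = cur.fetchone()[0]
    conn.commit()
    return number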
