is happily being the wheel rather than a rusty old spoke
I agree. But it's also completely out of my hands. They're the only game in town.
As Dave alluded to earlier, they're a company that was once the only game in the country and seems to be trying to head back in that direction, likely since VOIP provided by cable companies and the proliferation of cell phones means they wouldn't be a monopoly this time around.
What a hassle. I miss the old days when Bob listened to us.
Apparently you also missed the fact that Advanced Search was completely broken for a couple of weeks, prompting these changes.
I had to make some major changes to how the search tables and catalogs were handled. It was apparently a problem of the search catalog having reached a critical size beyond which it was taking minutes for searches to complete, resulting in timeouts.
Did it just happen? The site just slowed down to a crawl for a few minutes here.
There will probably be occasional slowdowns since I assume they're doing "things" with the connection.
You'll know when it happens, though. The sites (both iHub and SI) will suddenly go offline and when that happens, they're going to throw the machines into a truck and get them to KC as quickly as they can.
I thought they'd be able to do it quickly by just taking the whole cabinet up there, leaving the machines running on the UPS, but that won't be the case because the cheap cabinet has no floor or wheels (it was the best I could find of a large batch they'd bought cheap somewhere).
They're going to have to disconnect power cords and patch cables for all 6 machines, take the machines out of the cabinets (a couple of them weigh a good 100 lbs each), load them into a truck, drive them to KC, hook them back up, and cross their fingers while they power them up.
The outage could be as long as 2 hours or as little as 45 minutes. All depends on how quickly they can empty the cabinet when the lights go out on the connection.
I'm planning to stay out of the way for the move since I don't bring much to the table except a pair of hands, but I'm considering running up there anyway, not only to help out but to retrieve that crappy cabinet. We won't be using it, but I paid $800 for it, and I can use it at home for the recording studio.
I'll have to read up on just what an isolator actually does. I've got a sizable one still in the box that I bought several years ago to use in a Suburban I ended up trading in before I got the second battery added to it.
Is its function identical to that of a diode (current goes one way but not the other) but at really high amperages?
Interesting. Once I went to the site itself, the link started showing up in your post. Likely is from them not allowing remote linking (no idea how that's actually done) but it displays now because it's in my cache.
I think the solution involves wormholes into parallel universes since the "No Stopping" sign actually applies from the sign and back, and you have to stop at or before a stop sign, not after it.
The "at or before" saves us. If we had to stop before it, the second sign forbids that. If we use "at", and get some help from Dr. Hawking, we're fine.
The trick is to find a way to get the front-most edge of the car to the line of 0 width that's a legal place to stop. It exists, but it has zero (not null -- big difference) width. In this universe.
Edit: I just noticed that the "No Stopping" sign isn't mounted in the same plane as the Stop sign. I wonder what edge of the sign or arrow applies. If it's anything left of center of the sign, even Dr. Hawking can't save us.
Not a bad idea, but might not be worth the extra plumbing since if we take a random number like 4 minutes of enough pressure for the impact wrench when all tanks are online, it'd drop to 2 minutes if I use half the tanks.
However, there's the benefit that if I don't have enough pressure, it'd take half the time to get the pressure high enough.
Although I don't know how helpful it'll be with the slow charging rate I'm expecting from the 2-amp compressor.
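The trade-off above is just linear scaling, but a tiny sketch makes it concrete. This is purely illustrative (the function names and the 60-minute full-recharge figure are made up, not from the actual setup):

```python
# Hypothetical sketch of the tank-count trade-off described above.
# Assumes usable air scales linearly with total online tank volume
# over a fixed pressure window -- names and numbers are illustrative.

def runtime_minutes(full_runtime_min: float, tanks_online: int, tanks_total: int) -> float:
    """Usable run time scales with the fraction of tanks kept online."""
    return full_runtime_min * tanks_online / tanks_total

def recharge_minutes(full_recharge_min: float, tanks_online: int, tanks_total: int) -> float:
    """Recharge time shrinks by the same fraction: less volume to refill."""
    return full_recharge_min * tanks_online / tanks_total

# With all 18 tanks online: 4 minutes of impact-wrench air.
# Valve off half of them and both run time and recharge time halve.
print(runtime_minutes(4, 9, 18))    # 2.0 minutes of air
print(recharge_minutes(60, 9, 18))  # 30.0 minutes to recharge (assuming 60 for all tanks)
```

In other words, halving the online tanks buys nothing on the run-time side but does cut the recovery time in half, which is the only upside noted above.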
One thing I haven't done yet is really scrutinize the situation to see if I can somehow put both the diesel generator and the gas air compressor into the trailer. I actually might be able to since I won't be putting the air tanks above the batteries like I'd initially planned (not enough room). So there might be room to put the gas compressor there and fire it up when I need my air in a bigger hurry.
Oh, so you know, eh?
I just called them to congratulate them on a job well done since the sites were running when I got up.
They said they haven't been moved yet. He was talking over my head (and I quoted the Lloyd Bridges line from Hot Shots: "I don't have a clue what you're talking about. Not a ....... clue."), but the gist is apparently that the line to the KC facility isn't alive yet. The chronology is something like: the line will come up in KC and probably go dead in Lenexa at the same time, and then they're going to rush back to Lenexa, grab the computers, take them to KC, hook them up, and cross their fingers.
Unfortunately, this is going to happen sometime during the day today. Probably this morning.
Actually, upon further inspection of the source code and the database, I think I got all of that sideways, sorta.
Not sure.
But eventually I'll know how it works, will change it to work more efficiently and simply, and will let everyone know what I've come up with.
Filtering is likely to be occasionally inconsistent (more so than usual?) over the next couple of days as I work on it.
I'm simplifying it and squashing a couple of bugs in the process. Rather than having different "types" of filters (ones that block PM's and ones that hide public messages), setting a filter will accomplish both. You'll still be able to select whether you want to block replies to filtered people, though. Most people don't use that, but I put so much work into it that I'm not ready to throw it out the window yet.
One notable change that will possibly affect 444 people who use public-message filtering is that if you had someone's public message blocked but not their PM's, they won't be blocked anymore until you re-block them when I'm done. Actually, I'm pretty sure I'm far enough along that you can reblock them now (from a PM of theirs or their profile) and the change will survive the changes I'm making.
On the technical side of things, what's going on is that I've been using two separate tables that handle filter-storage completely differently for each type of filter. One of these methods/tables has a bug in that if you put a public filter (from their profile) on, say, user number 6748, it also treated user 8, user 48, and user 748 as being blocked. That bug didn't exist in the "PM Block" table, and since all filters will be based on that table now, the bug will go away.
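For illustration only (the real code is classic ASP against SQL Server tables, not Python), here's a sketch of why a substring-style membership test produces exactly that bug, and why one-row-per-ID exact matching fixes it:

```python
# Illustrative reconstruction of the filter bug described above.
# The buggy table apparently matched user IDs as substrings, so
# blocking user 6748 also caught users 8, 48, and 748.

def is_blocked_buggy(filter_list: str, user_id: int) -> bool:
    # Buggy approach: IDs kept in one string, tested by substring match.
    return str(user_id) in filter_list

def is_blocked_fixed(filter_ids: set, user_id: int) -> bool:
    # Fixed approach: one entry per blocked ID, exact match only.
    return user_id in filter_ids

blocked_string = "6748"
blocked_set = {6748}

# Substring matching flags every "tail" of the blocked ID:
print([u for u in (8, 48, 748, 6748) if is_blocked_buggy(blocked_string, u)])
# -> [8, 48, 748, 6748]

# Exact matching flags only the user actually blocked:
print([u for u in (8, 48, 748, 6748) if is_blocked_fixed(blocked_set, u)])
# -> [6748]
```

This is why basing everything on the "PM Block" table (which stored exact IDs) makes the bug go away.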
I'll also be making it so that if you've got someone blocked (regardless of the current status of your Filtering On/Off toggle), they'll still be able to write public messages to you, but those messages won't appear in your MailBox.
And yet one more bug that'll be addressed. When I implemented filter limits (5 for free members, 200 for premium members), I apparently only applied that to PM blocks. Pretty useless since free members can't PM anyway. Those limits will end up applied to the single form of filtering we're going to use.
So, roughly half an hour to increase the pressure in 180 gallons of tanks by 10 PSI.
If I remember correctly, I tested the system and found it could run 320 watts worth of lights for 24 hours before the batteries were too drained.
If I'm reading this correctly, and it would take about 5 hours of runtime to top off the tanks to 100 PSI from empty, this might work out just fine provided I don't find myself needing more pressure during the day because there's not enough to run the air impact. Or if it's quiet enough, set the compressor to come on at a relatively high pressure so it has a better chance of keeping up with demand.
Leaving 19 hours worth of lighting power in the batteries for a weekend is way more than enough. There really won't be a challenge until I start finding other ways to use the power (heating/cooling, electric tools, etc). And if push comes to shove at that point, I can look at putting a genny in the trailer and can also put a lot of batteries in the truck bed since the bed on this one is only used for holding my track fuel tank and the trailer hitch. Not enough room for hauling lumber, etc, and plenty of room and suspension for batteries.
If I dramatically increase the amount of juice my truck is sending to the trailer batteries (along with my energy requirements), it might be a worthwhile endeavor to rig up the truck to shut itself off when the batteries are charged (if it's in Park, of course, with a required switch enabled), and start it before leaving the track each evening. Assuming I haven't put a generator into the equation by then.
On a side note, I've been trying to find an electric motor setup to work the trailer jacks, but I get the feeling this is going to end up being a home-made setup. The only one I've found so far is specific to a set of fifth-wheel jacks made by the same company that makes the motor setup. Won't work on mine.
It's just a kind of "side" venture though. This time of year, the truck stays connected to the trailer for several months and is only used for hauling the trailer. If I were having to frequently get the trailer off the truck and back on again, it'd be more of an issue because working that manual jack really takes it out of you. Especially when you've got the weight of 8 marine batteries directly over the jack feet. In fact, they're heavy enough (as will be the additional things the front of the trailer will gain) that this year I'll be pulling both cars in backwards to move more of the payload weight rearward. And looking for a much smaller motorcycle to take with me. I really don't need an 800-lb Gold Wing for tooling around and getting back and forth between track and hotel. And that 800 lbs is being carried a good 15-20 feet ahead of the trailer's axles. There's no room for it behind the cars.
Edit: Tongue weight on the truck will definitely need to be addressed eventually. This truck is a dually with all the possible suspension upgrades and it's getting pushed down pretty hard.
I found a small, supposedly quiet 2 amp air compressor at either Northern Tool or Harbor Freight that claimed to have a 100% duty cycle. I'm going to start off with that one and 9 10-gallon tanks and just see how the setup works. There's room for 9 more tanks easily.
Some of the tanks I've found can take 200 PSI but the ones I'm going to get go to 125 and have popoff valves. I think the compressor I was looking at could only do 100-125 PSI anyway.
My inverter should be able to handle a 15-amp compressor since it's rated for 3Kw continuous and 6Kw peak. But there's no such thing as a quiet 15-amp air compressor unless I were willing to spend big bucks on the non-piston kind.
So I'll see how 9 tanks do with the little compressor and will probably add another 9 tanks whether I need them or not since the space won't be used for anything else. And it looks like I've got room for as many as 3 of the 2-amp compressors in the space I've got planned for air storage.
I think it'd probably be easier to just charge the tanks and run them through a typical weekend's worth of work than it would be to try to do the math using the rated CFM of a very old (and probably loose) impact wrench.
I do have a gas-powered compressor that I never use (bought it as part of a package deal with a couple of generators and a power washer) but the problem with it is that though I could probably sufficiently muffle the engine and damp the compressor, it'd take up a lot of room. I may still end up using it. I have space for either a generator or a bigger compressor and haven't decided which to put in there yet.
One of the challenges there is that though I've got a couple of generators I'm not using (including a diesel with electric start), I don't think I can really throw a lot of juice at the batteries with them.
I've got a lot of dead chargers, and when I was talking to the folks at the office next door (who specialize in alternative energy solutions for new homes), they just happened to mention that it's hard to find a charger that won't ruin itself on the non-sinewave output of the typical generator, and that's when it occurred to me that each of my bad chargers had been run off generators at some point.
My diesel generator would fit very nicely in the belly of the trailer, is fairly quiet (and I'm sure could easily be muffled to near inaudibility), and it does have 12V DC output. The problem is that the DC output is only 8.3 amps, which does me zero good.
There is the possibility, though, of using it for any electrical requirements of the trailer if the batteries get too drained. It's good for around 5Kw. And I suppose it could further insult the batteries by throwing them about 1 amp each anytime it has to be used. <g>
Okay, color me a little bit ticked.
http://www.off-road.com/ford/f350/intro.html
Judging by some of the comments on that page, the truck the guy is bragging about would be a 1999. He bought the Ford for exactly the same reasons I did: Chevy didn't have a respectable diesel engine and Dodge didn't have a Crew Cab.
Why am I ticked? Scroll down a little and on the left side you'll see he mentioned the 2-alternator setup being a $335 option. When I factory-ordered mine (which didn't arrive in time, so I had to take one from a dealer lot), they insisted that the 2nd alternator simply was not an option. Even after they supposedly made phone calls to people in the know. There was no code they could put on the order sheet that'd get the 2nd alternator included.
I was pretty ticked at the time because the literature for the truck *showed* two alternators on it. They ended up telling me that was a mistake in the literature because the 2nd alternator was only available for the ambulance chassis or the Excursion. So I ended up dismissing the matter, only to find out now that it WAS possible to get the 2nd alternator.
I'm getting started doing mods to the racing trailer and the two things I wanted to tackle first were adequate charging of the 8 deep-cycle batteries, and air supply. I'm not having any luck finding (online) the bracketry and alternator to add a second alternator to the truck and use it only for charging the trailer batteries via a separate heavy-gauge wire. I figure the 15+ amps per battery I'd get should be adequate. I guess I'll call the local Ford parts department tomorrow but I bet that'll be a major exercise in futility.
Another possibility is to simply run a lead from the existing alternator or one of the existing batteries back to the trailer and use a diode to make sure the trailer can't drain the truck batteries. Under most conditions, I'd think there's a lot of available amperage with the single alternator that isn't getting used. With this possibility in mind, I've also put out feelers for a 200+ amp alternator.
I've nearly figured out the air supply setup. I was planning to put 10-gallon air tanks in the belly right above the batteries, but there's not enough room. However, in the frontmost part of the trailer, which requires climbing to get to (meaning I won't be using it much otherwise), I can easily fit *18* 10-gallon tanks, and block them and the compressor off with a wall to keep the sound down. I've already figured out which retracting hose reel to get, too, which'll be accessible yet out of the way under the overhang part of the trailer.
What I haven't figured out yet is which compressor to get. I want a very quiet one so we can hear the intercom at the track without having to quickly turn off the compressor.
Any compressor would take forever to fill 180 gallons of tanks, and I need to figure out the math to determine how long it'd take a given compressor with known CFM@PSI to add 20 PSI to the tanks. It's possible I could get by with one of the little 2-amp ones advertised in Harbor Freight as "Super Quiet", but only doing about 1/2 CFM @ 90 PSI. I need to figure out, though, whether I'm better off biting the 15-amp bullet (which my inverter can handle) and ending up with a compressor that doesn't have to run anywhere near as long to top off the tanks.
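For what it's worth, that math can be roughed out with Boyle's law: the "free air" a compressor must deliver is the tank volume times the pressure rise, divided by atmospheric pressure. This sketch assumes an ideal gas at constant temperature and a constant delivery rate (real compressors deliver fewer CFM as back-pressure rises), so treat it as a best case:

```python
# Back-of-envelope estimate: minutes of compressor runtime needed to
# add a given PSI to a bank of tanks. Assumes ideal gas, constant
# temperature, and constant delivery rate -- a best-case floor.

GAL_PER_CUFT = 7.481   # US gallons per cubic foot
ATM_PSI = 14.7         # one atmosphere; converts gauge PSI to volumes of "free air"

def minutes_to_add_psi(tank_gallons: float, delta_psi: float, cfm_free_air: float) -> float:
    tank_cuft = tank_gallons / GAL_PER_CUFT
    free_air_needed = tank_cuft * delta_psi / ATM_PSI  # cubic feet at atmospheric pressure
    return free_air_needed / cfm_free_air

# 180 gallons of tanks, +20 PSI, with the little "1/2 CFM @ 90 PSI" unit:
print(round(minutes_to_add_psi(180, 20, 0.5)))  # about 65 minutes
```

Notably, the same formula predicts roughly 33 minutes for a 10 PSI rise at 1/2 CFM, which lines up nicely with the "roughly half an hour for 10 PSI" observation elsewhere in the thread.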
Of course, I'd top off the tanks from the shop compressor before leaving for the track, so with 180 gallons available, it's entirely possible I could make it through a weekend of inflations and tire changes (air impact wrench) without ever turning on the compressor.
Speaking of the shop, I just finished adding 6 more 400-watt metal halide lights, and the difference is amazing! If I turn on all the lights, then just turn off the new ones, I can't believe I worked so long with so little light!
Downside is the shop now uses a little over 4Kw *just for lights*.
If any adept searcher happens to run across a link to where I can get the 2nd alternator bracket for my 99 F350 Powerstroke, it'd be much appreciated.
It's just an unfortunate fact of life for programmers and people in other computer-related specialties that their knowledge becomes obsolete and they have to keep up with the changes if they want to stay in the game.
I'm old-school enough that my mind was completely blown when I was suddenly expected to deal with objects, properties, and methods. Steepest learning curve I ever encountered because I was so entrenched in just grunting through business logic sequentially.
I'm very sold on ASP.NET, though. And I'm really quite an old dog to be learning that particular trick. I'm turning 46 tomorrow, and I started teaching myself programming with GWBasic in about '85.
I am convinced that 90% of what I know about computers and programming is just wasting brain capacity. Wish I could selectively purge it, as it's feeling more full with each new trick I have to learn.
Assembler, GWBasic, dBase and all the dialects of "xBase". DOS, DesqView. The list could go on and on, but I'm sure many reading this have the same list.
I do have one serious advantage over the youngsters just now learning programming, though. Since I started out on very low-powered equipment using languages that made me interact very intimately with the hardware, I'm always very mindful of what my code is going to mean in terms of workload for a limited (although, to me, inconceivably enormous these days) amount of resources. So I tend to write stuff that's pretty efficient.
A big plus when writing for a website that handles 1.5 million page views per day and probably does something like 10 million SQL transactions per day to service that traffic. :)
What on earth is going on with NAV? It continues to get crushed on a daily basis!
SANM has been pretty much sideways. Currently up about 1.5% from where I first mentioned it. It's a plus that it's done fairly well on days that the market has taken a beating, but the B-bands aren't going to tighten anytime soon if it keeps spending so much time below $6.00.
lol.. i can tell you the out-sourcing rates for .NET guys is going up in NYC...
my buddy just landed a 12 month 80k project...
6 months ago that would have been 65k in Manhattan
ASP.NET is hot right now..
Wow!
I assume your buddy is a contractor? I wonder if I was just lucky (or better than I thought I was) or if the supply of seasoned developers in the midwest is comparatively low, or the supply is a lot higher now than it was back then. When I got out of the consulting game in about 1997, I would've bid more like $200k for a 12-month project. If it interested me (meaning it wasn't yet another accounting or inventory management system).
I really didn't like ASP.NET at all when I started using it, and I still use classic ASP for any throwaways I need.
I don't like or use datagrids, though I was VERY excited by them when I first encountered them. I later found they took too much control away from me, required too much of the HTML to be put right into the SQL queries (making the queries very difficult to read when they needed editing), and used too many bytes for my liking.
At first it was really cool, though. Several dozen lines of code replaced by just a few lines. But to me the lines of code don't matter anywhere near as much as things like query readability and especially the number of bytes of html being sent over the wire.
However, I'm VERY sold on ASP.NET for its greater efficiency. It's far more efficient at interacting with the database (and it looks like it's actually "gentler" to the db, even for identical queries) and it seems to be exponentially gentler to the webserver, due in large part to the fact that it's compiled rather than interpreted like classic ASP is. A side benefit of this being that comments don't cost anything like they do in classic.
It's somewhat of an apples and oranges comparison because iHub gets a lot more traffic than SI does (2 to 4 times as much -- I don't remember which), but iHub uses a rather powerful multi-processor machine with a fast disk subsystem and gobs of memory for its webserver and runs about 20% utilization most of the time.
SI's webserver is a very inexpensive machine with much less memory, an inexpensive IDE drive, and only one CPU, yet seems to be idling all the time.
My guess is that by converting a large site (such as this one) to ASP.NET, the resulting performance gain is similar to that achieved by multiplying the webserver count by 3 or 4. This becomes a HUGE consideration for companies that might be using dozens of webservers to handle a classic ASP site.
Not that the change will likely be noticeable to users of the site at all. Nothing would be noticed unless our webserver were too busy, which it's not. And is a long way from being.
The main thing that the ASP.NET will gain us is something like being able to handle 10 times as much traffic on current equipment rather than about 3 times as much, which I think is the case now.
And, actually more importantly, by rewriting the whole site from scratch, some major inefficiencies from code I've never optimized will go away because they'd be written more "correctly" from the get-go, as is the case with SI. In addition to fixing the major mess I made with Preview. <g>
Not panning the inherited code, btw. It works and at the time, it worked well enough and not only were the inefficiencies hard to notice, the scale at which it would have to operate would've been difficult to comprehend.
Happens a lot with code I've personally written and thought I'd done perfectly. A year later when the scale is much larger, I'll run across some of my code as a bottleneck, look at it, and go "What on earth was I thinking? It's obvious I should do it this other way rather than how I did it."
Is a Next 50 an option for Advanced Search anytime soon?
Definitely. I have a project to take care of on SI that'll take a few days, then I'll come back here and do that.
It likely won't, however, span the year gaps. If you've Previous-50'd your way back to the beginning of 2005, it probably won't jump into 2004. I won't know for sure until I can focus my attention on it and see if I can make that possible without too much work for me or the machines.
It'd be pretty cute, though, to do it that way.
like most email systems have now.. and it could purge automatically..
that way they wouldn't have to flag your inbox but they would still exist
That's how it works on SI. At the end of the post-submission routine, it checks to see if the recipient has the author on Ignore. If they do, the message is flagged as already having been read by the recipient, so it doesn't appear in their Inbox.
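A minimal sketch of that check (hypothetical names; the real implementation is classic ASP plus SQL Server, not Python):

```python
# Sketch of the SI behavior described above: at the end of the
# post-submission routine, if the recipient has the author on Ignore,
# the message is pre-flagged as read so it never appears in the Inbox.
# All names here are made up for illustration.

def deliver_pm(message: dict, ignore_list: set) -> dict:
    """ignore_list holds (recipient, ignored_author) pairs."""
    if (message["recipient"], message["author"]) in ignore_list:
        message["read"] = True   # flagged as already read -> skips the Inbox
    return message

ignores = {("alice", "spammer42")}
msg = deliver_pm({"author": "spammer42", "recipient": "alice", "read": False}, ignores)
print(msg["read"])  # True -- stored, but never surfaces as new mail
```

The nice property is that the message still exists and is still stored normally; only its "unread" status is suppressed.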
That won't happen here until I fix the huge (but reliable so long as I make no changes) mess I made with Preview.
And that's such a huge rewrite that rather than rewriting it in-place, it'll be part of rewriting the whole site in ASP.NET, which is pretty far down the road. It's very likely it won't happen until I hire someone to take over most of my programming duties so I can focus on the rest of my job. Which I don't see happening anytime soon.
It won't get reset every January 1st. I plan to do the reset in March of every year. Preferably the end of the month, so that no search table ends up covering less than 3 months.
Making a sliding one-year window would be possible and I was doing this on SI for a while, but it turned out to be too expensive and too easily broken. I was dropping the oldest post out of the search table each time a new one was added. Problem is that doing so made MSSearch do a little more than twice as much work. It's already pretty busy when a new post is added, parsing the words and adding the references to the catalog. When a post gets removed, it appears it's actually more work for MSSearch to find the (potentially thousands of) references where that post number has been associated with that word and remove the reference.
Anyone who has access to the regular Public Msgs search also has access to Advanced Search.
If a future version of SQL Server handles full-text search differently (and from what I've read so far, Yukon won't in the way that's most meaningful to us), I'll be able to go back to keeping the search data in one table rather than segregating it by year.
Don't forget guys: Advanced Search was completely broken last week. The dataset had gotten so large, it was almost always timing out. The only workable solution was to decrease the size of the searched dataset for any given search. And while I was doing that, I incorporated other ways to speed it up, like segregating PM's from public messages so I wouldn't have to include a "message type" text field in the where clause.
PM's are still a single-table deal, though. They'll remain that way for as long as it's possible (searches not timing out consistently), but that might change sometime next year. Currently, about 1.5 million of the messages here are PM's and that's growing rapidly. I haven't taken a read lately on what percent of new messages are private, but it looks large based on how much the PM search table grows each day.
Actually, there's a "value added" part of the formula, but not what you're remembering or even close to the ratio you're thinking.
The part of the formula you're thinking of applies to bookmarks; not reads.
There's a "bookmarks" sub-formula, the results of which are multiplied by either the total posts or the total posts in some recent timeframe (I don't remember which) then divided by some number I don't remember but which is used so that the bookmarks portion of the formula doesn't make us deal with scores in the millions.
The bookmarks sub-formula assigns a value of 1 to a bookmark, then adds 2 points if the person who placed the bookmark has posted in the past 30 days. That way I can include the relative activity level of the bookmarker as part of the formula.
I think I determined at one point that because of the multiplication and division that's happening, a bookmark placed by someone who actively posts contributes about 5 times as many points to the score as someone who doesn't post. But that number varies because the number of points that bookmark contributes is also determined by the overall posting activity on that board.
Keep in mind, too, that though the score is reported as an integer, the calculations aren't based on integers. It's not a simple matter of 1 bookmark adding one point or even some multiple of 1. I think there are scenarios in which bookmarks (whether "active" or "non-active") contribute fractions of a point. And other scenarios in which a bookmark is worth multiple points.
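Piecing together only what's stated above, the bookmark sub-formula might be sketched like this. The scaling divisor and the choice of "recent posts" as the multiplier are explicitly not remembered by the author, so both are pure assumptions here, and this ignores the other interacting terms that reportedly push an active bookmark's effective contribution closer to 5x:

```python
# Loose sketch of the Top Boards bookmark sub-formula as described:
# 1 point per bookmark, plus 2 more if the bookmarker has posted in
# the past 30 days; the result is multiplied by board activity and
# divided by a scaling constant. SCALE and the use of recent_posts
# are assumptions -- the real values aren't stated.

SCALE = 1000.0  # assumed divisor that keeps scores out of the millions

def bookmark_score(bookmarks_active: int, bookmarks_idle: int, recent_posts: int) -> float:
    sub = bookmarks_active * 3 + bookmarks_idle * 1  # 1 base point, +2 for active bookmarkers
    return sub * recent_posts / SCALE

# Before the other terms interact, an active bookmarker contributes
# exactly 3x what an idle one does:
print(bookmark_score(1, 0, 500))  # 1.5
print(bookmark_score(0, 1, 500))  # 0.5
```

Note the board-activity multiplier is also why the same bookmark is worth different point totals on different boards, as described above.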
Short answer: I haven't a clue how it really works anymore, but I know that post reads don't factor into it.
Since the number of reads plays into your overall TOP BOARDS score, the possible manipulation causes a false interpretation of the true data.
I'm pretty certain that the number of reads is NOT factored in to the Top Boards score. I can say that with a high degree of certainty despite the fact that the formula I'm using is so complicated and inscrutable that even I no longer can figure it out without hours of tearing it apart.
The reason I'm so sure is that I wrote the Top Boards algorithm something like a year or more before I was even tracking the number of reads a board got.
The problem with that, Bob, is that any unread messages PRIOR to that automatically 'read' message will be missed, if I click the 'New Posts' link. (Right?)
If I'm understanding you correctly, no, that wouldn't be an issue.
If you've got filtering toggled on and have Ignore records, the system uses a different stored procedure to retrieve the message. It gives the first unignored one.
The urge to save others was strong enough to overcome the wishes of the majority there?
No.
The site's rules take precedence over the wishes of the majority on any board, including that one.
And the site's rules do not and never will make "This poster is a basher" an acceptable reason for deleting a post that doesn't break the site's rules.
At least for free-zone boards. I'm a little fuzzy on just what the rules are for Premium boards. Ideally, I'd prefer that those boards can have bans at the whim of the moderator (just like on SI), but still have to use one of the selected valid reasons for deleting posts. It's why I went through all the trouble of integrating a dropdown in the deletion routine from which the moderator must select the reason for the deletion, and each of the deletion reasons is from the Terms of Use. And deleting posts that didn't break the TOU has always been grounds for post restoration and moderator removal.
Many feet are poised to leave if there is no change.
On both sides of the issue.
I hadn't realized that a batch read was being treated as a single read. I do remember that it used to count a "Next 100" as 100 reads, even if there were only 2 messages to be read, and that was being used rather aggressively to manipulate read counts, but since I was still very tied up in the SI changeover, I apparently just did the bandaid fix of counting it as 1 read rather than incorporating logic to see how many messages there were in the batch. I can fix that fairly easily. I'm pretty sure classic ASP doesn't give a mechanism for determining how many records are in the recordset it's working with, but I can just increment a counter while populating the page, and use that number to update the read count. Although that can also easily be manipulated by just going back 100 posts in the board and repeatedly hitting Next 100 and Previous 100.
So the status quo might actually be the best bet. It handicaps boards that're heavily populated by premium members with batch-read capability, but it makes manipulation a much lower ROI endeavor.
Ensuring that only one read per reader is counted would be too expensive. I'd have to maintain a separate table containing IP address and post number, and on each read, check that table (which would start out at zero rows every night at midnight, but get quite large by the next midnight) and have logic that would increment the read count and insert the IP/msgnum combo if a matching record wasn't found. Else do nothing.
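For the curious, the rejected approach would look something like this (an in-memory set stands in for the nightly-truncated SQL table of IP/post-number pairs; all names are illustrative):

```python
# Sketch of the one-read-per-reader dedup scheme described above,
# judged too expensive in the real system. A set of (ip, msg_num)
# pairs stands in for a SQL table emptied every midnight.

seen = set()        # (ip_address, msg_num) pairs seen today; cleared nightly
read_counts = {}    # msg_num -> deduplicated read count

def record_read(ip: str, msg_num: int) -> bool:
    """Count the read only the first time this IP hits this message today."""
    key = (ip, msg_num)
    if key in seen:
        return False          # duplicate read today: do nothing
    seen.add(key)
    read_counts[msg_num] = read_counts.get(msg_num, 0) + 1
    return True

record_read("10.0.0.1", 555)
record_read("10.0.0.1", 555)   # same reader, same message: ignored
record_read("10.0.0.2", 555)
print(read_counts[555])  # 2
```

The cost concern is the lookup-then-maybe-insert on every single message read, against a table that grows all day, which is exactly the hot path described below.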
Since well over half the hits on the site are the single-message read, that's one routine I'm very careful with, even when adding very low-cost items, because the cumulative cost can really pile up.
Oh, regarding page views for advertising purposes, that's not even associated with any of the code running the site. That's determined by parsing the webserver's log files. So read_msgs.asp always counts as one pageview, no matter what size the batch, since it's technically one page.
And since a lot of people don't see ads because they've subscribed, there's a separate mechanism for tracking ad views.
Without a resultant increase in CPU usage?
During the whole time, CPU utilization looked very normal.
As everyone's noticed, SELECTs were fine. So were INSERTs and UPDATEs except one: doing an INSERT on the message table.
The main thing that bugs me is that about 1000 messages hadn't been committed to disk. I didn't check the timestamps of the remaining messages straddling that gap, but would guess it to be at least an hour. Inconceivable to me that even with the most aggressive write-caching possible, data would be write-delayed that long.
Nah. I'm not buying that the messages disappeared because they existed only in cache. Perhaps the database or table itself was corrupted at a specific point and SQL Server repaired it up to that point? Might explain why it still took so long for INSERTs in the message table to work after the machine came back up. And if the corruption was on the hard drives themselves (RAID5), perhaps the controller patching things up was why it took about 15 minutes after I restarted the machine until it started responding to anything?
I get the feeling this is all academic and we'll never know what happened, but when something like that happens, it's very easy to really get wrapped up in "I've gotta know because I need to know how to prevent that ever happening again."
One of my favorite sayings when there are occasional brief outages (under a minute) that get lots of outcries is "This is a message board, not a kidney dialysis machine," meaning short outages in my opinion aren't a "mission critical" problem. We get lots of short outages that, I've learned to tell just by looking at the CPU usage patterns of both machines (webserver and db server), are the communication link to the webserver dropping. I think a few days ago I saw about a dozen of those back to back, each lasting about 5 seconds and happening about a minute apart. That kind of stuff we just have to live with.
This one was quite different. A problem that takes us offline for 50 minutes and zaps about 1000 messages is pretty serious.
But I don't think we'll ever know what happened.
And would a typical redundancy setup (mirroring the db in realtime to a separate disk sub-system or machine) have been useless because the same error might've happened on the mirrored system?
FWIW, the only changes of note that were made recently were cranking up full-text search's priority to 5 several hours earlier (since backed off to its previous setting of 4, because it didn't make catalog-updating realtime, which is a separate problem for which I'm trying to find a solution) and the addition of an insert trigger on the message table about a day and a half prior.
Unfortunately, the number of reads can be manipulated quite easily.
Hey, Len.
I just sent a link to your msg to Sheryl asking her if she remembers she's supposed to process those the same day we get the subscription.
If you haven't replied to her yet, tell her I said you can have 2.
Actually, you should've picked up a handful while you were out here!
Those do happen from time to time, but when they do, the system throws an error saying there was a deadlock and you were the victim.
It wastes very little time straightening itself out when deadlocks occur. If two processes persist in trying for the same lock at the same time, it quickly chooses a "victim" and throws them an error to that effect.
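That victim-and-retry contract can be sketched generically -- a Python sketch where `DeadlockVictim` is a stand-in for SQL Server's deadlock error and `with_deadlock_retry` is a hypothetical client-side wrapper, not anything in the site's actual code:

```python
import time

class DeadlockVictim(Exception):
    """Stand-in for the server's 'chosen as the deadlock victim' error."""

def with_deadlock_retry(op, retries=3, backoff=0.05):
    # The server resolves a deadlock by killing one transaction and raising
    # an error; the client's job is simply to retry the whole operation.
    for attempt in range(retries):
        try:
            return op()
        except DeadlockVictim:
            time.sleep(backoff * (attempt + 1))
    return op()  # final attempt; let the error propagate this time

calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockVictim()       # victim on the first two tries
    return "ok"

print(with_deadlock_retry(flaky_insert))  # "ok" on the third attempt
```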
That was workable only while the site was below a certain size threshold.
Much like how full-text search started off years ago using a LIKE clause in the queries, but they figured out very quickly (about 50k messages in) that that wasn't workable.
Advanced Search will be the only way to search pre-2005 now.
The performance problem with Search retrieval (which is completely cured now) was because we were searching one huge table.
And the way Search works in SQL Server, it does no good to add other limiting criteria to the query. If you want to find all of my posts in which I wrote the word "search", no matter what I do or how much it ticks me off that it works that way, it first finds ALL posts containing the word "search", then filters them down to just my posts.
Just not workable to have it work against the whole message table when it's gotten as large as it has.
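That filter-last behavior can be simulated in miniature -- a Python sketch (message data and field names invented) showing that the full-text pass touches every post containing the word before the author filter ever applies, so extra criteria don't shrink the expensive part:

```python
# Toy "catalog": 12 messages, two authors, some bodies containing "search".
messages = [
    {"id": i,
     "author": "bob" if i % 2 else "joe",
     "body": "search tips" if i % 3 == 0 else "hello"}
    for i in range(1, 13)
]

def fulltext_then_filter(term, author):
    # Step 1 (the expensive part): find ALL posts containing the term,
    # regardless of any other criteria in the query.
    hits = [m for m in messages if term in m["body"]]
    # Step 2: only now filter down to the requested author.
    mine = [m for m in hits if m["author"] == author]
    return mine, len(hits)

result, scanned = fulltext_then_filter("search", "bob")
print(len(result), scanned)  # 2 matches kept, but 4 full-text hits scanned
```

Swap in millions of messages for 12 and the problem is obvious: step 1's cost depends only on the catalog size, no matter how selective the rest of the query is.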
Nothing in the software either in ASP or SQL had changed since about 2 AM last night when I was finishing up the Search stuff.
It was a full-on crash, and we don't know yet what caused it and might never know.
Confirmed by checking the database directly. The only message that exists in that number range is # 5754956. I'm sure it was the first post written when the system came back up, since I wrote it. In fact, it's probably the one I inserted via the Query Analyzer that took 1:38 to happen.
Edit: Confirmed that it has to be the one I inserted directly (which means it didn't increment the board count or anything -- I strongly suspect the board counts won't correct themselves and I'll have to run a query to fix that) because when I went to that board, the link to that message was in the "not previously visited" color. And had I written it through the front-end, the system would've taken me to it.
Yuck!!! I hate losing that many posts!
Actually, that one's easy to explain, as I noticed the particular section of source code (and a glaring inefficiency) a couple of days ago. What made the section of code jump out at me was a line that says "Select * from board where board_id=" and some number -- in a routine that needs only one small item from the board table, not the entire row, iBox included. Inherited code.
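The fix is just to ask for the one column instead of the whole row. A tiny Python/sqlite3 illustration (board schema and values invented, standing in for the ASP/ADO code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE board (board_id INTEGER PRIMARY KEY,"
             " name TEXT, ibox TEXT)")
conn.execute("INSERT INTO board VALUES (7, 'Tech Talk',"
             " '...a very large iBox blob...')")

# The inherited routine did the equivalent of SELECT * just to read one
# small field, dragging the whole row (iBox included) back every time.
# The narrow query fetches only the column the routine actually needs:
name = conn.execute("SELECT name FROM board WHERE board_id=?",
                    (7,)).fetchone()[0]
print(name)  # Tech Talk
```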
During the 10-15 minutes that SQL Server was still running but message submits were failing, reply-count incrementing was still happening.
The program (same inherited code that I've never bothered to optimize because most of this system's work is in message retrieval; not insertion) that inserts new messages first adds 1 to the number of replies the replied-to message got, then does something else, then actually puts the message in the database.
Ideally (and it'll be done this way when I rewrite this as an ASP.NET system), all the steps that happen during an insert should be wrapped in a transaction and rolled back on any failure, or committed if no failure happens (which will usually be the case). That way, if the message didn't insert, the reply count didn't increment.
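A minimal sketch of that wrap-it-in-a-transaction idea, using Python's sqlite3 as a stand-in for the eventual ASP.NET/SQL Server code (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (msg_id INTEGER PRIMARY KEY,"
             " body TEXT NOT NULL, replies INTEGER DEFAULT 0)")
conn.execute("INSERT INTO message (msg_id, body) VALUES (1, 'original post')")
conn.commit()

def post_reply(conn, parent_id, body):
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("UPDATE message SET replies = replies + 1"
                         " WHERE msg_id=?", (parent_id,))
            conn.execute("INSERT INTO message (body) VALUES (?)", (body,))
        return True
    except sqlite3.Error:
        return False

post_reply(conn, 1, "a reply")   # insert and increment commit together
post_reply(conn, 1, None)        # insert fails (NOT NULL); increment rolls back too
print(conn.execute("SELECT replies FROM message WHERE msg_id=1").fetchone()[0])  # 1
```

The second call is the whole point: because the increment and the insert live in one transaction, a failed insert can never leave behind a phantom reply count.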
Since people were unable to post for so long, I have no doubt that a LOT of messages are showing reply counts higher than they should.
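If it comes to running a corrective query, the general shape would be to recompute each count from the rows that actually exist. A hypothetical Python/sqlite3 sketch (not the site's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (msg_id INTEGER PRIMARY KEY,"
             " parent_id INTEGER, replies INTEGER DEFAULT 0)")
# Message 1 claims 3 replies, but only 2 replies survived the crash.
conn.execute("INSERT INTO message VALUES (1, NULL, 3)")
conn.execute("INSERT INTO message VALUES (2, 1, 0)")
conn.execute("INSERT INTO message VALUES (3, 1, 0)")

# One corrective UPDATE: derive every reply count from the surviving rows
# instead of trusting the incremented counters.
conn.execute("""
    UPDATE message
    SET replies = (SELECT COUNT(*) FROM message AS m
                   WHERE m.parent_id = message.msg_id)
""")
conn.commit()
print(conn.execute("SELECT replies FROM message WHERE msg_id=1").fetchone()[0])  # 2
```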
LOL!
I swear I wasn't doing anything to the site when it went nuts. I wasn't aware there was a problem until I tried to post a message.
It looks to me like, because of the nature of how the machine had to be brought back up, we lost all posts written within a certain timeframe. I don't think there's any way to "get them back" as they very likely don't exist anymore. Although I'd be very curious to see if the timeframe of the "gap" can be determined.
We still don't have any details. All we know is that a key part of SQL Server crashed VERY hard. And this is only a guess, since I'm not really knowledgeable about how SQL Server interacts with cache and hard drives, but it's sounding like there were messages written that were being kept in cache (for who knows how long) that SQL hadn't gotten around to committing to disk.
I'd assumed that all writes were committed to disk immediately, but based on what is being reported, that may be an incorrect assumption, and I don't know yet if the suspected write delay is a function of SQL Server or the hard drive array itself. Or even Windoze.
About 5 minutes.
I would really like to get a better idea of what's going on with catalog updating. I kept checking the catalog and it was behind by about 150 messages and not gaining rows, and I didn't even see MSSearch as a process using any CPU cycles; then suddenly it zipped through the 150-message backlog and had them all in there in less than a second.
search lag test
Can you show me an example?
On the ASP side of things, I use PrimalScript. It doesn't do things like ensuring matching endifs, but I'm real used to it, especially since it understands WordStar keyboard shortcuts (which should give an idea of just how old I am <g>). I do all of my ASP development in a separate directory and have different things I do when troubleshooting a problem, the most common being a response.write of the SQL command I'm attempting, followed by a response.end, so I can just look at my query, since most errors are of the SQL variety.
On the SQL side of things, I just type it free-hand into a new procedure. If a lot of weird joins are involved, I'll often make a View that does what I want so I can cut/paste the SQL from it into a proc and edit as I like.
Personally, I don't like to use "Development Tools". They often introduce a lot of things that, while convenient and making the programmer's job much easier and quicker to get done, make the resulting code itself inefficient. FrontPage is a perfect example of this. If you used FrontPage to make a page that looks like our homepage and then looked at the HTML source it generated, your hair would stand on end.
Also, I'm an exceptionally fast typist. And just very "old school". I believe programs should be written with a keyboard; not a mouse.
Okay, that was very weird and I don't think we're ever going to know what happened.
The site was timing out for about 10 minutes when submitting posts, so I stopped/restarted SQL Server after determining it was nothing in the source code.
That didn't help, so I rebooted the whole machine. After about 10 minutes, the machine wasn't responding, so I had to call the ISP. I don't know yet what, if anything, they did, but the machine was back up a few minutes later.
But message submits were still not working.
I tried inserting a row into another table on the server and that worked immediately.
Then I tried inserting a test message into the database and it took 1 minute, 38 seconds, but finally went through. After it did, the site started working correctly.
So I'm currently scratching my head trying to figure out what on earth just happened and why.
Edit: Dave says the machine did a memory dump and it was when that finished, that inserts started working again. His hypothesis is that the machine either had or thought it had a real memory error.