is happily being the wheel rather than a rusty old spoke
Actually, my toes and ankles hurt pretty good right now from the number of times I sank into the muck and had to contort my foot to break the suction while attempting (successfully every time so far) to extract my foot with the shoe still on it.
In any event, I'm only wearing shoes down there that I wouldn't hate losing and, more importantly, that I CAN slip my foot out of if that's the only way to get my foot/leg back.
I've had it stuck before, but never quite like this. Usually I can curl the front bucket to pull the machine along, and in the previous worst cases, turn the seat sideways and work both the backhoe and the front bucket. No luck this time. There's nothing but "pudding" behind the machine, so the rear bucket isn't any help at all.
The guy with the excavator got most of the mud removed from around the machine. Enough that I think when I get the rented excavator (a bit bigger than his) and get a nice runway cleared out on the way to the backhoe, I'll be able to yank it out pretty easy.
Then it'll be time to use the excavator to dig channels and pile dirt up to let the whole area dry out enough to really work with it. A friend who works for the Cat dealership is looking for a used 963 for me that I hope to buy in a month when I'm done renting the excavator.
Work it for a year on this lake and a couple of other minor projects and sell it. Hopefully it'll work out so I'll have spent $5k-$10k to have the use of it for a year. I definitely don't need one of those full time.
The neighbor with the 963 got a visit at the lake by his wife and grand-daughter.
When the 3-year old asked grandma why the tractor was stuck in the mud, she winked at me and told the little girl "Because a man tried to do something."
Surprisingly, I was actually able to start the backhoe tonight with a jumper wire feeding juice to the fuel injector pump, so I got the rear boom stored in a more convenient position, and suspect once I get an excavator over here, I'll have the backhoe out in a day.
Of course, both front tires are completely flat. Can't see enough of the rears to see if they also are.
I'm optimistic, though. The guy with the excavator showed me a lot about how to use them, and dug quite a ramp down to the backhoe and got down to solid rock. About the only time I've felt fortunate to live on mostly rock.
I guess I'm not going to have the excavator until next week.
The neighbors got their 963 running, but it ended up stuck, so they called a friend with a large excavator. He was able to easily extract the dump truck and the 963, but no joy on the backhoe. He spent hours digging around it and trying to pull it, but the suction plus the weight of the mud still sticking to the machine is some multiple of the machine's 16,000-lb weight.
When the excavator gets here, I'm going to have to do a LOT of digging to get things drying out enough and get the excavator onto solid enough footing to pull the backhoe out.
At least the backhoe isn't full of water now.
Won't be able to fix that one until I've finished the new version of the backend posting routine. It's about 90% done and we had it in production for a while until we found there were a few posting scenarios it wasn't handling correctly.
ztest
Ground control to Major Tom.
We're running on the new database server now. Not sure what performance will be like initially. It's getting clobbered rebuilding the search catalogs from scratch, but seems to be performing well enough and the rebuild is running real fast!
I learned the hard way on the old one not to fire off more than one of these rebuilds at a time, though. So I'll rebuild the other ones one at a time tonight and over the weekend.
test
Turns out the earliest I can get the machine delivered is Wednesday. Well, at least the 963 should be out of the way by then. We really need to give up on it doing the job. It wasn't accomplishing much when the backhoe was helping. And I'm not sure I can get the backhoe running again until it's been on dry ground for a few days and I've gone all the way through the electricals.
LOL
The blinking light is a cute touch!
As promised, here are some REALLY nasty, non-doctored pictures!
I'm actually headed out the door right now to check out a JCB excavator I might be renting to retrieve the backhoe and finish its job. The Cat place was fresh out.
Can you please repeat what you just said in one or two sentences?
The new database server is very, VERY fast! We already have fast machines and I've gotten used to them, but this thing's a major jaw-dropper.
And it got stuck at an inopportune time. During the kind of drought we only get once every several years.
I was digging a channel to drain as much water as possible out of the muck so I can work with it more easily. The plan is to use it to raise the dam 8 feet and nearly double the surface area of the lake to about 8-10 acres while more than quadrupling its volume, so it can produce a lot of micro-hydroelectric power. The idea being to produce enough to run everything but the 240 stuff (welder, lift, etc) in the workshop, and maybe even have enough excess capacity to make it worthwhile to sell it to the power company, or at least move the other garage (on its own meter but using a paltry amount of juice) and the pool house (which uses an obscene amount of power) onto the free juice.
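For anyone curious about the scale of power involved, here's a purely illustrative estimate using the standard hydro power formula P = ρ·g·h·Q·η. The 8-foot dam raise is used as the head, and the flow rate and turbine efficiency are flat-out guesses on my part, not figures from the actual project:

```python
# Purely illustrative micro-hydro estimate. The 8-foot dam raise is used
# as the head; the flow rate and efficiency are made-up assumptions.
rho = 1000.0         # water density, kg/m^3
g = 9.81             # gravity, m/s^2
head_m = 8 * 0.3048  # 8 ft of head, ~2.44 m (assumption)
flow_m3s = 0.05      # 50 liters/sec of flow (pure assumption)
efficiency = 0.6     # ballpark for a small turbine (assumption)

watts = rho * g * head_m * flow_m3s * efficiency
print(f"~{watts:.0f} W continuous")  # ~718 W at these made-up numbers
```

At those guessed numbers you'd get under a kilowatt continuous, so how much of the workshop the setup could actually carry would hinge on how much sustained flow the enlarged lake can deliver.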
Now I have to try to work quickly. I'm renting the excavator for a month, which I'll initially use to dig the drainage channel and install the pipes for the hydroelectric. Then I'll get my son down here to run the dump truck while I use the excavator to load it and smooth what he drops off. After the month is over, they'll pick up the excavator while dropping off a 963. The idea being that the muck *should* be dry enough after a month of draining that I can take the dump truck out of the equation and finish the job with the loader.
I did try to get it ready for production for about 15 minutes or so after the data file had been copied over, but was unsuccessful, so I turned on the lights over here.
Later last night, anyone within a few miles of Boogerville heard a maniacal "It's aliiiive!", but I couldn't put it into production because it needs the data file copied back over again. Lots easier and faster to do that than to write the routines to move over just the changed/added data.
And the more I think about it, the more convinced I am that I didn't do the data move properly, as in making sure it would happen at gigabit speed. 35 gig took something like 68 minutes. That's about 515MB per minute, or 8.6MB per second, which works out to roughly 69-86 megabits per second, depending on whether a byte counts as 8 or 10 bits in this context. Wanna check me on that, Dave? If memory serves, the log file was 418MB and took 35 seconds.
Seems to me it should've taken about 7 minutes, assuming the network was the slowest part of the equation. I have no idea what the throughput is on the old server's hard drives, but I know it's nearly double the speed of the NICs on the new server. I also have no idea how much overhead is added by RAID5 having to calculate and store the parity data, though surely with hard drives nearly twice as fast as the NICs, it shouldn't be a factor.
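For what it's worth, here's a small sanity check of that arithmetic, treating "35 gig" as 35,000MB and using the 8-vs-10-bit spread to cover raw bytes versus rough wire overhead:

```python
# Back-of-the-envelope throughput check for the 35GB data copy.
# Assumption: "35 gig" means 35,000 MB (decimal); the copy took 68 minutes.
total_mb = 35_000
minutes = 68

mb_per_min = total_mb / minutes   # ~514.7 MB/min
mb_per_sec = mb_per_min / 60      # ~8.6 MB/s
mbps_8bit = mb_per_sec * 8        # ~68.6 Mbps if a byte is 8 bits
mbps_10bit = mb_per_sec * 10      # ~85.8 Mbps figuring 10 bits/byte overhead

print(f"{mb_per_min:.0f} MB/min, {mb_per_sec:.1f} MB/s, "
      f"{mbps_8bit:.0f}-{mbps_10bit:.0f} Mbps")

# The log file: 418 MB in 35 seconds.
log_mbps = 418 / 35 * 8           # ~95.5 Mbps, much closer to wire speed
print(f"log file: {log_mbps:.0f} Mbps")
```

Either way, both numbers land an order of magnitude under gigabit, which is what makes me suspect the copy never actually ran at gigabit speed.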
I also need to do some research today and see if maybe I should be using DTS instead of DOS or Windoze to move the data. It occurs to me that if I use DTS, the new database won't be fragmented like it currently is since it walks through the tables, importing just a few at a time. Better still, script the DTS so the Message table is the last one imported so all messages up to the import time will be in contiguous blocks.
Might be worth testing to see if I can use DTS once a month to defragment the database (the existing defrag tools, at least in the old SQL, only defrag the indexes), or fragmentation just might not matter with the throughput, spindle speed, and access speed of the new drives. And 64-bit everything, meaning it'll actually use the 8GB of memory in the db server, so caching should become a major performance-improver.
Anyway, the current plan is to shut the system down after the close again this afternoon, copy everything over, then bring it back up on the new webserver and db server. With the old webserver ready to step back in if needed. I don't anticipate any issues with the new database server, but there's a reason the new webserver has been gathering dust for 2 months or so and hopefully those issues will simply "go away" now. The issue seemed to be that it was opening too many connections on the db server.
In the db server's case, as long as it's functional, it has enough sheer grunt that it should be able to power through any performance-hurting issues while they're being addressed on the fly. And now I know how to make it functional. Required downloading and installing a driver I would've thought would've come with the new webserver's OS but didn't.
Once this migration is pulled off, then it'll be time to finish my work on the backend post-submission routine, which will be a LOT faster, less work for the webserver, and completely rid us of the problem of duplicate message numbers within boards.
I believe we have. The message that Matt linked the explanation to contains verbiage about how we're not leaving search in realtime status during market hours right now.
I fully expect search to be back to realtime Monday at the latest.
If it's seeming this fast when nothing changed except about an hour-long outage, it's gonna be a psychic interface on the new setup. <g>
The upgrade hasn't been pulled off yet.
All I've done is copy the db onto the new database server for testing and configuring. We're still running on the same setup.
However, things are coming along well getting the new database server working. I expect to be doing another copy tomorrow night and putting that machine into production, possibly along with the new webserver, too.
This thing on?
Just so there's no misunderstanding, the plan right now is NOT to move the new machine into production. It's simply to move a copy of the production database onto it for testing purposes, then fire the old one back up.
Well, the plan is simply to copy the database over to the new machine for testing purposes so we can (hopefully) put it into production in the next few days.
However..... Moving the test db over went even more flawlessly and seamlessly than I dared hope. So you never know. We're going to move it over, and at least pound on it in the development directory for a while and see how it does.
The test db was very simple. The real system is VERY complicated and we could run into all kinds of conversion barriers of which we're currently blissfully unaware.
Let's just say there's a reason I'm budgeting an hour for a db copying that will likely only take half an hour.
I got a copy of Flight Simulator for my XT and used to spend countless hours with it.
Until I discovered this little executable on the machine called GWBasic. From my first "Hello World", I was hooked. Programming became my "video games".
And I saw and learned many languages along the way, and oddly enough, the language I use the most now is nothing but a superset of GWBasic.
My first was a PC clone. 8088. But it was an XT. 10MHz. Zoom-zoom!
Thought I was in heaven when I upgraded to a 286/10 and, through memory voodoo and DESQview, was able to run a 2-line BBS on it while also using it for other things.
Actually, my backhoe looks very much like that right now. I've got pictures. Will try to get them posted when I get a chance. The pictures would be hilarious to anyone else. Since it's my (worth ~$20k) machine that's barely visible in the pond muck, I'm struggling to do more than giggle.
The backhoe picture itself would be funny. What takes the scene into hilarity is that my dump truck is nearby, similarly mired, and so is the neighbor's 963 loader (huge machine). His machine isn't actually stuck, though. It looks stuck, but he stalled the engine and then the starter failed, after he'd managed to pull the backhoe maybe 1-2 feet in about an hour with the backhoe working furiously to help out. The backhoe can no longer help, though. It's spent enough time with enough of it underwater that starting is the last thing on its mind, even with the batteries fully charged. The fuse box is probably about a foot underwater, and that's generally not a good thing. I'm hoping, though, that I can hotwire it under the hood to get it started.
I should have a rental Cat 330C excavator arriving tomorrow that likely can just pick up the backhoe and deposit it onto dry ground, but for one problem. The dead 963 is where I need the 330C to be to accomplish the deed.
If I can get the 963 out of the way, it might not even matter that the backhoe doesn't start. It's got hooks on top of the front bucket for holding forks, and they'll be just perfect for grabbing with the excavator and lifting the backhoe while I pull it.
We're not sure why that error was happening, but rebooting the webserver fixed it. Not going to try to track it down because we expect to have the new webserver into production before too much longer.
An update on the new database server. It's installed, and we inadvertently caused slowdowns and timeouts by testing file transfer speeds between the old machine and the new one. A 1.6GB test file transferred in 40 seconds, and will likely transfer even faster in the evening when the system isn't as busy, meaning it's possible it might only take 10 minutes or so to copy the whole database over to the new machine. I'm sure the current db's SCSI array will be the bottleneck, and not much of one. The new machine and the current one are talking to each other through a separate gigabit switch.
What was really interesting was that once the test file was on the new machine, it took about 3 seconds to copy it from one partition to another. That is some SICK speed! Looking like they weren't lying when they said each hard drive adds 300Mbps throughput to the array and we've got 6 drives in this array, so 1.8Gbps throughput.
I can't comprehend the amount of traffic it'd take for that kind of hard drive bandwidth to become a bottleneck! Of course, many moons ago, I couldn't imagine how I'd ever use up all of my first 30-meg hard drive.
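Running the numbers on those two copies (assuming "1.6GB" means 1,600MB decimal, and taking the quoted 300Mbps-per-drive figure at face value):

```python
# Quick math on the drive-array figures quoted above.
# Assumption: "1.6GB" means 1,600 MB (decimal).
file_mb = 1_600

net_mb_s = file_mb / 40             # network copy: 40 MB/s
net_mbps = net_mb_s * 8             # = 320 Mbps over the gigabit link

local_mb_s = file_mb / 3            # partition-to-partition: ~533 MB/s
local_gbps = local_mb_s * 8 / 1000  # ~4.3 Gbps

array_gbps = 6 * 300 / 1000         # 6 drives x 300 Mbps = 1.8 Gbps claimed
array_mb_s = array_gbps * 1000 / 8  # = 225 MB/s

print(net_mbps, round(local_gbps, 1), array_gbps, array_mb_s)
```

Interestingly, the 3-second copy works out to more than the claimed 1.8Gbps array ceiling, so the OS file cache was probably lending a hand on that particular test.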
I've got a smallish database on the new machine now copied from the old and will spend as much of today as possible (and with other hands on my time, that won't be a lot of time) checking out how the new version of SQL Server deals with it, and if it looks like it's doing well, I'll take the production database offline sometime tonight temporarily so I can copy it over for further testing.
We're scheduled to install the machine tomorrow evening. Not knocking on wood.
Of course, it'll still take a while to finish configuring the machine, then migrating iHub's db over to it. But am shooting for pulling off that feat this weekend. Should be possible barring nasty surprises, which definitely could be in store for us.
these folks initially have a larger hippocampus than people
not associated with superior navigation, programming and race car driving skills.
Interesting. I'd have never considered any kind of correlation between programming and race car driving skills, and still can't piece them together. Race car driving skills largely involve interacting with the car through all of your senses to determine what it's doing and what it needs you to do, and making habits and instinct out of actions that are counter-intuitive until you have them explained properly and put them into practice. Like stepping on the gas when the car rotates too much (skid), because we tend not to want to accelerate when something's wrong, even when it's the correct thing to do.
I can't imagine any parallels to programming aside from having to put your mind into a completely different operating mode from normal. In the case of programming, thinking more like a machine. In the case of driving, communicating with a machine in the many ways it communicates with you.
Beyond that, I can't think of anything.
At this point, I can finally say it'll be days rather than weeks.
AARRGGHH!!!!!
Oui.
There are still occasional post-number duplications, and we're not currently able to run the full-text indexer during market hours.
Matt should probably post an update and change the link to point to the new message.
The new (much more powerful) database server is in-house and will be installed shortly, then we'll migrate over to it and all should be right with the world.
Ummmmm, no.
Grubmaster, if you're reading this, we have two 57513's here.
The rewrite of the posting routine to speed up post submissions and reduce or eliminate the occurrence of these double numbers has been a toughie.
Is it commonplace for the duplicate numbering to happen at times like this where posting volume should be extremely low?
Last play that had this many bashers did very well
Ticker, please.
CEOs, like all business people, whether they're with private or public companies, should at least *try* to appear articulate and educated.
I doubt I'm alone in preferring to invest in companies run by people more intelligent than myself. If they show themselves to be less intelligent, why would I trust them with my money?
I'm watching this only out of the corner of my eye because this whole concept of a "reset" suddenly making the stock "worth" $54 per share leaves me nothing short of incredulous.
I'm not the sharpest knife in the drawer, but I'm no spoon. And the concept that a stock that's already trading can suddenly trade at nearly 70 times its current value at the CEO's whim is a brand new one on me. And being a devout Missourian, I don't believe it because I haven't seen it.
"Reset" isn't another way of saying "reverse split", is it?
You're apparently under the impression that anytime one of us is online, we're aware of everything that's happening system-wide.
That's not the case. There's too much activity, and some things, like my email for example, are not accessible remotely, nor do I care for it to be since I tend not to put in long hours here on weekends.
Plus, I'm not sure I understand your question.
Are you saying you subscribed, but the system didn't upgrade you? If you don't hit the final button that says "Return to Investors Hub" or whatever it says, and instead use your Back button to get back to the site, the system isn't aware that you subscribed and one of us has to manually upgrade you when we notice the payment email come in but don't see an upgraded account to go with it.
If you subscribed and the system didn't upgrade you and you got a confirmation email from PayPal or Verisign saying you paid and the transaction was approved, forward a copy of that email to Matt and he'll upgrade you.
On weekdays, we usually catch those within 15 minutes of the subscription happening, though. And most of the time (if the user hits all the right buttons when prompted) the process is totally automatic.
I have no idea and don't want to know. It's purposely not part of my job. My main job is to make this hobby of ours a viable business. Matt and grubmaster take care of the Admin stuff.
Yeah, last I saw, I think it was getting used hundreds of times per day. That's a VERY rare event in the context of over 4 million page views per day. Even if it's now being used thousands of times per day. It'd have to be used 40k times per day just to account for 1% of our traffic and I don't think it's anywhere near that.
Glad all I see is "Suppressed Sound Link".
RB's got full-text search?!?
I know FT-search is a BIG plus of our sites, but don't think it's the only thing that makes them worthwhile.
Anyway, grousing that we don't have it running realtime during market hours yet doesn't accomplish anything. We already know we need it, and everything we're doing and all the money we're spending is specifically aimed at bringing that back. It'll be available as soon as it's physically possible to make it available (which we can finally say is SOON) and not one second earlier. We simply can't pull it off any faster than we already are.
It's possible to have it working now during market hours, but the cost is too high: too-frequent timeouts and message number duplications. Those costs outweigh the benefits.
And it's really comparatively rarely used. Which doesn't matter much. You're preaching to the choir when you tell us that when you need/want it, anything less than realtime is pretty useless for active traders.
So it'll be available again as soon as possible and not a second sooner or later.
Nope. Only reason for that is we don't have the spare horsepower. But that problem should be getting fixed soon.
Unfortunately, though it's a problem you can throw money at to fix, you can't just catapult a bag of cash into it and it's immediately fixed. Takes a lot of time and some outages to do the switch, too.
Darn both of you! I've always hated that song and can't get it out of my head now.
I think I remember reading somewhere that that song was something Chuck Berry ended up really being embarrassed about ever doing, and that it marked the low point of his career. Pandering to the masses at the lowest common denominator.
Heck, even with different lyrics, it's not a song worthy of him.