I've seen various arguments that push for black box testing, or testing with no internal knowledge of the system being tested. Others argue for white box testing, in which the tests are conceived with intimate knowledge of the inner workings of the test subject. And then there's grey box testing, which is somewhere in between.
So what's best? Each of the above (and even other approaches) has its merits, but no one technique on its own is enough.
Black Box
If you pretend you don't know anything about the system under test (or deprive your testers of inside information), the test is typically going to be more of a real-world scenario. But will the test cover everything?
Take, for example, VMware's recent ESX bug that prevented countless virtual servers from powering on as of August 12, 2008. It was caused by a hard-coded expiration date, intended for beta releases, that was inadvertently left in. I'm speculating here, but something tells me the VMware QA group never tested whether this functionality was disabled in the release build as intended, because they didn't even know it existed.
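A release-build sanity check along these lines might have caught it. This is purely an illustrative sketch, not VMware's actual code: the build-metadata object and its fields (`channel`, `expiresOn`) are invented for the example.

```javascript
// Illustrative only: buildInfo and its field names are invented.
// The idea: make the release pipeline fail loudly if beta-only
// expiration logic survives into a release build.
function assertNoTimeBomb(buildInfo) {
  if (buildInfo.channel === "release" && buildInfo.expiresOn !== null) {
    throw new Error("release build must not carry a beta expiration date");
  }
}

// A clean release build passes quietly...
assertNoTimeBomb({ channel: "release", expiresOn: null });
// ...while a leftover beta expiration date would throw:
// assertNoTimeBomb({ channel: "release", expiresOn: "2008-08-12" });
```

The point isn't the three lines of logic; it's that the check only exists if QA knows the time bomb exists in the first place.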
White Box
Then there's the approach of using all you can find out about the system to be tested (source code, etc.) to more fully test a system. Does that information always help?
Last week I fixed a bug that was not caught in our normal testing. It was a pretty basic problem and something that was found easily with an atypical black box test. Normal testing didn't find it though, because we knew from way-back-when that we didn't have the problem since it was specifically addressed in the source code. This meant there was no test case for it.
Of course, source code changes over time. In this case, the prevention for this problem was shuffled around during code refactoring. While it was still executed, it happened too late in the process. Since it was still in the source code though, it remained untested.
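A behavioral regression test would have kept that old fix honest. As a hedged sketch (the function and the guard below are invented stand-ins, not our actual code), the idea is that the test exercises behavior rather than source, so it keeps working no matter where refactoring moves the guard:

```javascript
// Invented example: a guard that refactoring could shuffle around.
function processOrder(items) {
  // If a refactor ever moves this check too late, the test below still
  // catches it, because the test knows nothing about the source.
  if (!Array.isArray(items) || items.length === 0) {
    throw new Error("no items to process");
  }
  return items.length;
}

// Black-box regression test: empty orders must be rejected.
function emptyOrderIsRejected() {
  try {
    processOrder([]);
    return false; // the guard is gone, or runs too late to matter
  } catch (e) {
    return true;
  }
}
```

Had a test case like this existed, "it's specifically addressed in the source code" would never have been a reason to skip it.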
Black and White
Testing is an imperfect science. When you put a bunch of humans together to test code written by another bunch of humans, mistakes will happen. Hopefully the mistakes will be caught and fixed, but it is inevitable that something undesirable will slip out the door.
I haven't seen bug-free software yet, and I doubt I ever will. As perfect as I am, I can't be in all places at all times to stop all the world's bugs. I'll have to work on that though, because imagine the utopia we'd all live in if all software was perfect.
I don't fault a black box test for missing an undocumented scenario. And I don't fault a white box test for missing something documented as not needing to be tested. VMware responded quickly and got a patch out as soon as they could once they learned about the problem customers were having. We were a bit ahead of the curve and got a patch out before any customers reported a problem. Basically, both cases were just days in the life of technology.
But both cases illustrate what is an all too common problem in my experience - not enough testing. From a quality standpoint, there is no such thing as too much testing. From a business point of view, you have to ship your software at some point and can't test indefinitely.
Each development project finds its own compromise based on its unique situation, needs, and resources. I'm not about to argue with those. But a typical compromise picks a specific test methodology and runs with it.
Just like airplanes have a co-pilot to back up the pilot and the New England Patriots have Matt Cassel to back up Tom Brady, use a backup for your primary testing methodology. Sooner or later, it just might save your trans-Atlantic flight, football season, or just some embarrassment.
Tuesday, November 11, 2008
Thursday, October 23, 2008
Mmmm, Beer!
What would you rather do - drink beer, or fight cancer? Soon, you may be able to do both at the same time. Woohoo!
Friday, October 10, 2008
What's With All The Flag Waving?
OK, time for a soapbox sermon. I've had a few blog post ideas swirling around for a while now, but this one really got me going...
Flag variables.
I can't tell you how many times I've seen code like:
flag1 = false;
flag2 = true;
flag1 = SomeFunctionThatReturnsTrueOrFalse();
if (flag1 == true)
{
    flag2 = false;
}
if (flag2 == true)
{
    flag3 = SomeOtherFunctionThatReturnsTrueOrFalse();
}
if (flag3 == true)
{
    // do something
}
Oh, I wish I could tell you that the code above is something I totally made up just to make a point. Sadly, it's not. The above is real code with the function names and specific syntax changed to protect the not-so-innocent.
If a flag variable is assigned, tested in a following line, and never used again, I cringe! It makes the code harder to follow and debug and also wastes CPU and memory. True, creating one boolean variable and then assigning a value isn't an overly expensive process, but it is a waste and is the type of thing that can build up and bite you somewhere unpleasant in a multi-user app and/or large code base.
What about a good compiler, you say? Sure, some compilers are pretty good about optimizations. Most will probably drop the redundant boolean comparison from the condition, and some might even do away with the flag altogether. But that's only in the executable; the mess is still in your source code to cause confusion.
However, not all code is compiled! Does Firefox's JavaScript implementation make this kind of optimization? IE's? Opera's? WebKit's? What about the numerous other interpreted languages?
I consider flag variables to be only a small step away from GOTO statements. Like GOTOs, they might have a time and place, but should be used judiciously, if at all. Like GOTOs, they are often overused and abused.
If you set a flag, test it, and never use it again, get rid of that darn flag. If a number of different conditions can possibly set the flag and it is then used later on to make a decision, maybe you need it, but you can probably still code better.
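For the snippet above, tracing the flags through shows that "do something" runs exactly when the first function returns false and the second returns true, so the flags can disappear entirely. The stubs here are placeholders for whatever the real calls were; their return values are arbitrary:

```javascript
// Stubs standing in for the original functions (return values arbitrary).
function someFunctionThatReturnsTrueOrFalse() { return false; }
function someOtherFunctionThatReturnsTrueOrFalse() { return true; }

// Flag-free equivalent of the whole flag1/flag2/flag3 dance.
function shouldDoSomething() {
  return !someFunctionThatReturnsTrueOrFalse()
      && someOtherFunctionThatReturnsTrueOrFalse();
}

if (shouldDoSomething()) {
  // do something
}
```

Three temporaries and three if blocks collapse into one readable condition, and there's nothing left for a maintainer to mis-track.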
Yes, I have used flags (and GOTOs) in the past. Would you believe me if I said it was only back when I was 7 years old, first learning to program in BASIC on a TRS-80? Probably not.
But far too many programmers just are not disciplined enough, and modern hardware makes it easy to get away with poor code. Considering the advances in hardware, shouldn't Word 2007 on a dual-core make Word 2.0 on a 386 look dog-slow? Sure, you hit a point of diminishing returns when trying to over-optimize code. But if some fairly basic discipline were more prevalent, Vista would have been a huge success for Microsoft: there would never be a wait dialog, and everyone would be thrilled with how fast and efficient it was.
Wednesday, September 24, 2008
Don't Waste Your Time Rewriting URLs
Lately I have been seeing more requests for URL rewriting in the Alpha Application Server to make URLs more SEO friendly. Google's Matt Cutts first said in 2007 that Google has no problem at all with query-string parameters in URLs. Then just two days ago, Google made it pretty clear in the Google Webmaster Central blog post "Dynamic URLs vs. static URLs" that rewritten URLs add an error-prone layer of complexity with no SEO benefit at all.
Similarly, Yahoo has had dynamic URL rewriting as part of their Site Explorer toolbag for almost a year to mitigate any problems with dynamic URLs.
Yes, there are other reasons to use URL rewriting, but if you are just doing it for SEO, don't waste your time.
Wednesday, July 30, 2008
"Mojave Experiment"
Wow!
It's really difficult to describe the Mojave Experiment, but I'm pretty sure that as I viewed this page with my jaw on the floor, I would have said "Wow!" if I could have spoken.
How bad are things for Windows Vista if Microsoft has to trick people just to get them to try it? Yes, that's right, Microsoft can't get people to even give Vista a quick look, so they have taken to disguising it as "Mojave", the "next Microsoft OS". And they have of course highlighted some positive responses and they threw in one skeptic to make it seem realistic.
And by the way, I was also surprised when I wasn't prompted to download Silverlight before I could view the site. I've been holding out and still don't have Silverlight on my main workstation, so I expected to see a prompt before the page loaded. Instead, it loaded seamlessly because Microsoft used Flash for this site instead of their own technology. Hmmm...
I'm glad I'm not part of Microsoft's marketing team trying to dig out of the hole they have found themselves in.
Monday, July 28, 2008
VMware ESXi Now Available Free
VMware ESXi licenses are now available at no charge. If you aren't familiar with ESXi (or ESX), it is VMware's hypervisor offering. Whereas VMware Server or Workstation is installed on top of a full operating system, a hypervisor installs directly on your bare hardware and eliminates the need for an operating system on your virtual machine host. This saves a bunch of overhead because the hypervisor allocates resources directly instead of leaving it up to a base operating system installation. In addition to better performance, this also allows for advanced administration features, such as those in VMware Infrastructure and other tools.
With Microsoft making so much noise about Hyper-V, it seemed inevitable that VMware would have to revisit pricing, but this change in licensing availability was sooner and more dramatic than I had expected. With Paul Maritz replacing Diane Greene, I wonder what other surprises may be in store for us.
Friday, July 25, 2008
New Age of Innovation
I've had my eye on a book, The New Age of Innovation: Driving Cocreated Value Through Global Networks, for a few weeks now. I haven't bought it yet though, because I have a ton of stuff going on and won't have time to read it just yet, but the preview material I've seen has it at the top of my list.
I'm not quite sure what the exact association is, but InformationWeek is now pushing the book through a New Age of Innovation Think Tank Road Show. It's a free event, but only for qualified attendees, which they define as "Senior level executives such as Vice Presidents and C-level executives interested in finding out about the cultural forces and emerging technology trends shaping the business environment."
I've been "lucky enough" to be approved and I'm looking forward to attending in October. If you aren't familiar with the book or the road show, you may want to take a look.
Friday, June 20, 2008
Penny Wise, Pound Foolish
Computerworld brags about "40 years of the most authoritative source of news and information for IT leaders." And they have an article talking about spending $125 and "a couple hours" to refresh a five-year-old laptop. The author reports a 30% improvement in performance after the updates, which include RAM, a hard drive, a keyboard, and a good ol' cleaning.
C'mon now, this might be an interesting little project for someone at home with no budget and very casual needs, but it's not even close to worth it for any type of business use. I find this article so far off base for what Computerworld claims to be the intended audience that I don't even know where to start...
They didn't replace the battery, which probably would have been another $100+, which means this laptop isn't going to be very mobile. Congratulations, your laptop is now a desktop system with a small keyboard. I bet it has a PS/2 port where you could have just plugged in a real keyboard.
They didn't put a price on the labor involved. I find the time estimate overly optimistic for cleaning off the Windows clutter, but let's just accept it anyway. So conservatively we have to add another $100 to the cost.
And the result: a five-year-old laptop that performs about like it did five years ago. This "revitalized" machine still only has a 1.5-GHz Pentium M processor, slower memory, and who-knows-what for video, and it produces a PCMark05 score of 1,536. A current low-end laptop should score at least twice that and can be found in the $400-500 range.
Hmm, would I like to spend $350 updating an old slow machine or $450 on a current machine that carries a warranty and increases user productivity?
I'm not a fan of the throw-away society, but hardware is cheap and it pays for itself quickly - and many times over - in most businesses. Don't save your pennies and waste your dollars. Spend a little more and get a whole lot more done.
Thursday, April 24, 2008
Run and Hide, It's the Dreaded Mac Creep!
When I got home last night, the latest issue of eWeek had arrived and my wife had left it on the kitchen table for me. Call me old fashioned if you will, but I've been a subscriber to the dead-trees edition of several magazines ever since I got my first professional subscription - to WebWeek back when I was a full-time webmaster. The online versions just don't cut it for me. I stare at a computer monitor for far too long each day and my eyes like looking at paper.
One of the stories on the cover is Dealing with Mac Creep. I did not get a chance to read it yet, and I will not be reading the online version that I just linked to so it'll have to wait until this evening. But that article title was enough to get me to thinking.
And before I go any further, I have to make a confession: I like Macs. Sure, I work at a Windows software company, my first computer experience was on a TRS-80, and most of my time spent working is on Windows. But I've always had a bit of an interest in Macs from a networking standpoint, I married a Mac woman (she has a marketing and design background, what do you expect?) and when OS X first came out, I was very excited by the BSD underpinnings because I had done so much with Linux and FreeBSD. At home, we have a PC and an iMac. The kids and my wife use the iMac while I have always used the PC. But these days, the PC is basically just a VM host for a file server virtual machine. When I work from the iMac and I need to compile some code under Windows, I just use remote desktop, either to my home PC or to my office PC.
I think Macs are a natural for general business use. Linux will continue to struggle on the desktop because it is hard to get working and the applications available are limited. Macs, though, are just plain easy to use and work well. (Of course, it's easier for Apple to deliver that experience since they control both the OS and the hardware.) And while Macs may not have the same depth in their software library as Windows, there is still quite a bit there, and I see more and more applications moving to the web anyway. OK, the users themselves are well covered.
Now tack on what appealed to my inner geek originally, the BSD core, and IT should be happy. They may not be familiar with it, but they should at least have enough of an interest in technology to understand the implications. And if they are afraid to learn, they should be looking for a new career anyway.
That leaves only our friends with the black and red pens. When the finance folks see that an iMac starts at $1,099 compared to around $400-500 for a commodity PC, they aren't going to be too happy. But assuming you roll out a new machine every three years, an extra $200/year towards employee morale is a good investment in my mind.
I wonder if I'll have a Mac on my desktop at work any time soon.
Thursday, April 17, 2008
Is Anyone Really Surprised by Sun?
News from the MySQL Conference indicates that Sun will be adding some new features only to MySQL Enterprise, and not the more common (and open-source) MySQL Community. Is anyone really surprised by this?
Google does it with Google Apps vs. Gmail. Sun is already doing this with StarOffice vs. OpenOffice. Red Hat has been doing it for quite some time now with Red Hat Enterprise Linux vs. Fedora. Innobase, now owned by Oracle, does it with InnoDB Hot Backup vs. InnoDB. There are countless other examples out there too.
And the reasoning is pretty simple - at the end of the day, we all need to get paid. Even within the open source community, people have bills. We might all like to be idealists and work on a project just because we enjoy it, but that isn't going to put clothes on your back or keep your house warm during the winter. You still need to find a way to make some money.
Google, Sun, Red Hat and Oracle need to make money too. The way to do that is by actually charging for what they do instead of giving it all away. If the free product is "good enough" for your situation, you are in luck. But if you are after more advanced features, then you probably have a real business need for the product and should understand having to pay for it.
It's much like transportation. You can get anywhere you want for free by walking or swimming. But if you need or want to get to your destination faster, easier or maybe with more luggage, then you are going to need to pay in some fashion to take a car, bus, train, plane or boat.
Sun's move should not be unexpected to anyone. It doesn't mean they are evil, it just means they live in the real world.
Thursday, April 10, 2008
Web Security Good Enough for Google
Web applications are gaining tremendous momentum, due in no small part to the fact that they can be available everywhere and to everyone. But that is a double-edged sword, since not everyone with an Internet connection is a "good guy".
Make no mistake: If you put an application on the Internet, someone will find it and try to break into it. If you think otherwise, you are only fooling yourself and putting your data at risk.
Google recently shared some of its security secrets at the RSA Conference, which focuses on information security. The article is a bit scant on details, and I'm hoping that more information about Scott Petry's session will become available, but what is there is still very valuable. Google's Vice President of Engineering, Douglas Merrill, also shared some security insight back in June of 2007.
There are two recurring themes in both of these articles, and I could not agree more strongly with them. First, security is something that the developer must have real knowledge about. Second, security is something that must be considered from the beginning and not tacked on at the end.
Learn about web security, and make sure you understand it. If you are developing web applications, you need to know this stuff.
And don't hesitate to use the tools available to help you secure your application. The Security Framework in Alpha's Application Server is a giant leap forward and can get you well on your way. Just remember to consider all potential attack vectors and address them.
Monday, April 07, 2008
Expensive Pizza
Pizza is usually a pretty simple food, but there are all sorts of gourmet variants too. I've had some expensive stuff in my day, and I've even enjoyed some of it. But this tops them all: pizza.com was sold for $2.6 million on Friday by domain auctioneer sedo.com. Not bad for a $20/year investment by some guy who had bought it in 1994 hoping to land a consulting gig.
It makes pizza.info a steal at the current bid of 12,500 EUR.
Kinda makes me wish I hadn't let a few domain names expire in years past. Maybe I should flip through the unused domain names I have left. I'm not a greedy guy so I'd be happy to let them all go as a package deal for the rock-bottom price of only $1.5 million. Any takers?
Thursday, April 03, 2008
Fungible, My Ass!
1 != 1 (that's "one does not equal one" for all you non-geeks) when you're talking about people. Sure, one equals one for bits of data, produce, and gold. People are different.
Martin Heller agreed with Richard Rabins when he said "Software developers are not fungible commodities to be bought and sold." That led to a post on Martin's InfoWorld blog, Strategic Developer, a couple of weeks ago. At the time, I was pleased to see Alpha Software mentioned on the InfoWorld site, but I was also a bit surprised by the content of Martin's mention. Doesn't everyone already know developers are not a commodity, let alone a fungible one?
Apparently not! After two weeks, Martin got such varied responses to his agreement with Richard's assertion that he's made a second post just to summarize the discussion.
I have to admit that I am not overly surprised by the disagreement and wide range of opinions even though I think it is pretty much common sense that while you can substitute one bushel of corn for another, you cannot substitute one worker for another. And notice I said "worker", not developer. Sure, I work in technology and the developer is more directly visible to my colleagues, peers, and me, but it is just as true when applied to ditch diggers.
The counterargument that workers are fungible usually goes something like, "It's just a matter of training and education." But you just can't teach certain things - work ethic, aptitude, natural talent, etc.
Next time you drive by a road crew "working", notice the 1 or 2 people doing the actual work while many more look on. Sure, I'm stereotyping here, but stereotypes come from reality. Chances are that those 1 or 2 doing the work have a better work ethic than the others. Yeah, they could just be the new guys that get dumped on, I'm sure that happens too. But I personally know several people that work in similar environments yet consistently work reliably while others look on or otherwise waste time - not because someone is cracking a whip or hazing them, but because they feel that it is their duty to do so. You just can't teach that.
Now go to your local auto mechanic, car dealership or oil change chain and walk into the repair bay, or peer in closely while standing next to the ominous "insurance regulations do not permit..." sign. If there is more than one mechanic working, I bet you'll see some differences in the way they get work done. Sometimes it's just a matter of work ethic, but I know even from my very early work experience that it is often something more. My father owns a gas station and auto repair shop (with a woefully out of date web site) and I worked there through junior high and high school. While there, I observed my brothers as well as other employees and how they worked. Even within my own family that shared a similar work ethic, there were distinct differences. One of my brothers just was not as good despite his best efforts. He wasn't doing anything wrong, he wasn't lazy, and he wasn't stupid. But his hands just didn't hold the tools as comfortably and his mind didn't visualize the problem as well. We all got the same "training" growing up working on cars, but you just can't learn what he didn't have.
In some cases, persistence can make up for skill and vice versa. The ditch digger that holds his shovel awkwardly can work faster in an attempt to move as much dirt as someone else, while the lazy digger that has outstanding technique can still move plenty of dirt too. Yet neither of these folks will accomplish as much as a skilled, motivated and natural person. And let's not forget that the awkward person is going to tire and slow down or even burn out completely and the lazy person is going to impact everyone else's morale.
In the world of programming, these differences often have more insidious effects. If your work crew has a lousy ditch digger, you'll probably know it at the end of the day when you see his work compared to the rest. But a programmer that's not up to snuff may very well write some code that appears to function well and do it in a timely manner. For now...
Let me make it clear that I'm not talking about just a bug here. We've all mistyped something, worked a bit too late in an effort to get just a little more done for the day, created a routine that was off by 1, or generally made some other kind of "silly" mistake. These "simple" bugs can be dangerous too, but they will not run rampant and consume your code like that guy that you hired just because he seemed like he'd work for a bit less than the other candidates.
Does he stink at building algorithms because his mind simply doesn't work that way, so he ends up introducing what functionally amounts to a back door, with the potential for a massive Hannaford-scale data breach? Or maybe his weakness is in conceptualizing threading and now your application is full of race conditions. Perhaps his specialty is uninitialized variables that lead to unpredictable behavior even though he's been taught repeatedly to always initialize them, but he just doesn't see the need to do so.
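For the curious, here's what that threading weakness looks like in miniature — a deliberately broken Python sketch of an unsynchronized read-modify-write, with a pause wedged between the read and the write so the race reproduces reliably (this is an illustration, not anyone's real application code):

```python
# Two threads each do 50 read-modify-write increments on a shared counter.
# Without a lock, each one can write back a stale value and lose updates.
import threading
import time

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        value = counter       # read
        time.sleep(0.0001)    # the other thread sneaks in here
        counter = value + 1   # write back a possibly stale value

threads = [threading.Thread(target=increment, args=(50,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # almost always well short of the 100 increments requested
```

The fix is a `threading.Lock` around the read-modify-write, but the guy in question doesn't know that, and the bug only shows up under load, long after he's been paid.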
There's an easy solution here though. Find, hire and retain the best people that you can. And when the occasional lousy one slips through the cracks and gets a job working for you, don't hesitate to get rid of him.
Monday, March 24, 2008
Virtualization Is Hot, But Don't Get Burned
Virtualization is getting a lot of buzz right now, and for good reason. I think it offers some real benefits in a variety of uses. I've been trying for some time now to write an article for the Alpha VAR newsletter discussing some of those benefits for that audience, but I keep getting pulled away. Rather than keep waiting, I want to share an important point I was going to make - especially since it just bit me, and I knew about it in advance.
One of the really useful things about virtualization is that your entire VM is usually just 2-5 files. This makes it easy to back up an entire system or move it to a different host system.
It also makes it really easy to get yourself into trouble. Of those few files, one of them is typically the virtual disk for the virtual machine. That one file represents the entire hard drive, so any corruption to it means none of your VM is usable.
Normally, the benefits far outweigh that downside because the whole VM is so easy to back up. And everyone backs up, right?
Well, last week, in the midst of all of the scurrying here to release Alpha Five Platinum, one of the customer service reps' machines died. It wouldn't power on in the morning. After swapping a power supply, power switch, power cord and other troubleshooting, we determined that the motherboard was fried. The disk was still good though, so we were in pretty good shape.
For all the reasons that I haven't written about in that elusive article yet, James' PC was a perfect candidate for virtualization. We created a new VM, put his old physical disk into another system and fired up Ghost on both to clone the physical disk to the new virtual disk.
There are a number of ways to go from physical machine to virtual, but we have had a 100% success rate with this method. That was, until last week anyway. After cloning and firing up the VM, the Windows XP install went into the endless blue screen, reboot, blue screen cycle. When we finally got a screen shot of the blue screen, we saw that there was a hardware conflict. Something about the failed physical system was too different from the virtual hardware.
After a bunch of trial and error (and blue screens) we found a system in our QA lab that was apparently close enough to the original hardware to get XP booted. Now with a running Windows instance, we could use VMware Converter to do the p2v conversion since that is much better at dealing with hardware subtleties.
Finally after running overnight, VMware Converter gave us a usable VM for James to use. We had already lost much valuable time and needed to get back to work on Alpha Five, so we made a conscious decision to take a calculated risk and not set up proper snapshotting and backups for this new VM.
And in case you can't guess what happened next, the hard drive in the host system for James' VM began to fail.
We gambled. We knew we were gambling. We lost.
Just a few sectors on the drive are damaged so it needs a new disk but fortunately only one file is damaged. That one file is James' 28GB virtual disk file. Any attempt to copy it, move it, read it or repair it results in a CRC error.
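Incidentally, the kind of integrity check throwing those CRC errors is easy to sketch. Here's a minimal Python version that computes a CRC-32 over a file in chunks; compare it against a checksum recorded at backup time, and a mismatch tells you the copy is damaged. (This is an illustration of the idea, not what any particular copy or repair tool actually runs.)

```python
# Compute a CRC-32 checksum over a file in fixed-size chunks, so even a
# multi-gigabyte virtual disk file can be verified without loading it
# all into memory.
import zlib

def file_crc32(path, chunk_size=64 * 1024):
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)  # fold each chunk into the running CRC
    return crc & 0xFFFFFFFF
```

Recording a checksum like this alongside each backup means you find out a copy is bad the day you make it, not the day you desperately need it.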
As I write this, I'm waiting for yet another repair attempt to complete. The last one said it found and fixed everything, but then the move to the replacement hard drive failed at 96%. I hope this one works and if the timer on the progress meter that I've been staring at for entirely too long is accurate, I'll know in another 8 minutes.
If we don't get this virtual disk file repaired and usable, we still have the old physical disk to go back to. In this case, James will have magically jumped back about a week in time and lost all work he's done since. Well, not everything - we do have backups of his documents, email, etc. but getting this VM repaired is still the best way to go.
So all the old warnings about backing up that have historically applied to physical machines apply to virtual machines as well. With the corruption of one single file, we're potentially looking at the loss of 30GB, not to mention all of the time lost today when we could have simply copied a single file from a backup archive if we had gone ahead and set that up last week.
Saturday, March 22, 2008
If You Build It, Will They Come?
No. I'm sorry to shatter the dream, but just building it is not even close to enough. Not here in the Field of Reality anyway.
And building it well is not enough either. You can be a damn good builder and produce some great creations, but that just doesn't matter. If you want the crowd to come and buy your wares, you need to create something that is interesting and solves a problem, and then you need to make sure everyone knows about it.
It's really quite simple. If nobody needs what you've built, they are not going to care about it. But wait, what if they really do need it but they just don't realize it? It's still a no. Most people aren't going to wait around while you try to educate them about something they already have decided is a waste because they don't think they need it.
OK, so you've built something that people need, and they even know they need it. Now will they give you all of their money for it? That's a fat chance if they don't even know it exists. Let's say I've had my head in the sand for the past few years and I've decided I really need a portable music player. I'm probably not going to go look for an iPod if I've never heard of it.
I do kinda miss my clunky old Sony Sports Walkman in bright yellow though so maybe I can find one of those. I don't think I would have made it through junior high without that thing.
Now raise your hand if you think all of this is common sense and you're wondering why I've bothered to write about it.
Yep, just as I thought, I haven't revealed any deep dark secrets to very many of you yet. That's OK, I have more for you. Here's where it gets interesting.
I'll bet most everyone with a hand in the air has a widget that passes the two tests above, right? Of course! Your idea is different, better, cooler than the rest. The world is begging for your widget, isn't it? Even if they don't know it (OK, so you only passed 1 1/2 tests, no biggie), they really really need it.
Listen up sonny, take off those rose colored glasses and put down the Kool-Aid. Trust me, I know what I'm talking about here, I drank the Kool-Aid too and I liked it just as much as you do. That sugar might put a smile on your face, but it isn't going to pay the bills.
You see, back around 1999 or so, Greg Donarum, a good friend and really smart guy that I went to college with, and I started building an online catalog system. We weren't sure exactly what we were going to do with it, but it was a good way to leverage the experience we already had and learn some new technologies. What we ended up with was a complete system with full e-commerce, CRM and BI functionality built as an ASP, or SaaS in today's revised jargon.
North of Zero was great, if I do say so myself. Greg architected an incredible data model, I built a fault-tolerant hosting infrastructure and together we coded fast and robust web interfaces for all of it. Our third business partner, Ralph Lucier, is a very talented graphic designer and all-around professional business person so our application screens, marketing collateral, etc. were all impeccable.
What we built essentially enabled SMBs to have their very own Amazon-quality online store with features like upselling, cross promotions, affiliate programs and so on. On the back-end, we built full business intelligence and customer relationship management. The hosting infrastructure provided 100% uptime for over 3 years straight and the monthly fee was very reasonable. And our customers loved it!
So why haven't I retired yet and why haven't you ever heard of North of Zero? We didn't know it at the time because of those darned glasses, but we failed both of the tests.
First up is the requirement that people need what you have and they know it. Well, we actually only half-failed this one - I still think the world needed our solution. Back when we were trying to sell our application, many business owners that we spoke to had no idea what CRM and BI were and only had an inkling about why they may want to get into e-commerce. Few of them had actually shopped online and they were still doing business using telephones, faxes and snail mail. Trying to educate them on what the technology could do for them was usually an insurmountable feat and made the sales team spin their wheels far too much.
Second is making potential customers aware of what you've built. We were self-funded, which was good in that we didn't explode when the dot-com bubble burst, but it also meant we had a pretty small marketing budget. We did get some great PR, but it wasn't enough for the companies that were looking for our kind of solution to find us, and cold-calling just isn't effective. So those companies found some other offering (most of which did less and cost more) and never even considered North of Zero.
So would I do it all again? You bet. And would I do things differently? Absolutely!
I could go on and on about lessons learned, things I'd change here and there, if I had only known then what I know now, yada, yada. But I can really sum it all up quite simply - take a step back and look at what you are doing more objectively. You might think you already are, but I can almost guarantee that you aren't.
Monday, March 17, 2008
Cut to the Chase Already
Joel Spolsky has a new post on web standards and why browsers are such a mess. I had noticed it earlier and ignored it because it looked so long, but Alpha's PR folks passed it along to me just a little while ago to get my thoughts on it. They know their business and they are good at what they do, so I figured I really should take another look at what Joel wrote.
I think I agree with where Joel is going, but frankly I feel more inspired to write a rant about why his article is waaaay too long for me to bother reading all of it.
Maybe that's because I'm all too familiar with non-standard standards and bitter about the chunk of my life wasted while testing web pages in umpteen different browsers at a time. I still have the scars, and the copy of Netscape Navigator 1.0 on a floppy, to remind me.
HTML and browser "standards", like many standards in technology and other industries as well, are almost always implemented in a non-standard way. This means that if you build a web page and test it on your own PC, say using Internet Explorer 7 with a screen resolution of 1024x768, and everything looks great, you now know absolutely nothing about what your web page is going to look like to other folks viewing the exact same page.
This concept is very valuable, critical really, to anyone new to web development.
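To put some numbers on that, here's a quick Python sketch of how fast the test matrix grows. The browser, resolution and platform lists here are just examples from that era, not a real test plan:

```python
# Every combination of browser, resolution and platform is a configuration
# your page might render differently on.
from itertools import product

browsers = ["IE6", "IE7", "Firefox 2", "Safari 3", "Opera 9"]
resolutions = ["800x600", "1024x768", "1280x1024", "1600x1200"]
platforms = ["Windows XP", "Windows Vista", "Mac OS X"]

matrix = list(product(browsers, resolutions, platforms))
print(len(matrix))  # 60 combinations - and you tested exactly one of them
```

And that's before you factor in font sizes, plugins, and whatever the user's toolbar collection has done to the viewport.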
Joel goes to great lengths to explain this and you can see that he obviously put in a lot of effort to make his explanation clear and included numerous diagrams to back it all up. I appreciate and applaud the effort. But I still haven't read the whole darn thing and I'm not going to.
I'm a busy guy and I have work to do. Alpha Five Platinum will ship in the near future and that means I'm busy testing, fixing bugs (of course not in MY code ;), and making sure that tech support is as up to speed as possible on the new release so they can help customers, not to mention all the other "distractions" of the job. And even without an upcoming release, I still would have plenty of work to do and would not have read the whole article.
Don't get me wrong Joel, I like your stuff. And I know you'll sleep better tonight knowing that :) But printed, your post is 13 pages! It's not an attention span thing either - I've read and reread most, if not all, of those "non-standard standards" and those certainly aren't light or entertaining reading. But please, cut to the chase already!
Yes, I know I've taken too much of readers' time writing a blog post this long, so I'm done.
Thursday, March 06, 2008
When Should We Ditch Our Platform?
Slashdot has a recent story where a member has asked "When Should We Ditch Our Platform?" The member's company has recently replaced their web developer and it seems that they had one heckuva time finding a replacement that could work with their existing technology.
Those familiar with Slashdot will not be surprised that there are no meaningful answers to the question: just bickering about whether this is the right question to ask, a lambasting directed at the member for not giving specific details about the current platform, and some attempts to blame all the world's problems on Microsoft.
While I'm no die-hard fan of Microsoft, this question is exactly the right question to ask and I'd like to give a meaningful answer. Sure, I could post this to the Slashdot discussion and be lost amongst the noise, but that's not very productive.
I think the meaningful answer to this question is really quite simple: it is time to replace your technology when it no longer solves the problem at hand. I don't care what the "existing" technology is, be it something from Microsoft, an open source project, or even Alpha Five. At the end of the day, it doesn't matter if your platform has a really cool-sounding acronym with all of the media's current attention, because those things don't run your business and help you earn your paycheck.
Alpha Five, just like any other software, is a tool. Tools are what set humans apart from other animals, but only if we use the right tools for the right job. Have you ever tried to change a light bulb with a sledge hammer? I haven't, but I'm pretty sure it won't work too well.
Well, when your technology feels too heavy to even pick up, you are probably using the wrong technology. If a reasonably intelligent person can't adapt to the technology in a relatively short amount of time, you are probably using the wrong technology. If you have to work significantly harder than everyone else, only to produce a fraction of the results, you are probably using the wrong technology.
I admittedly am biased, but I think Alpha Five is the right technology to solve a wide range of problems facing businesses today. But I don't want you to take my word for it:
- If you are not already using Alpha Five, download the trial now and check it out for yourself. I think you'll be impressed with what you can accomplish in a very small amount of time.
- If you are already using Alpha Five, go ahead and download some alternative product trials and see what they can do. If you can find something that lets you get more done with less work and does not bankrupt you, I'd love to hear about it!
Wednesday, February 20, 2008
Common Mistakes
Jakob Nielsen has a new blog post that covers common application design mistakes. He largely focuses on Web applications, but his points are as valid for desktop apps as they are for Web apps.
There's some good stuff here, both for us at Alpha Software to keep in mind, and for our customers (you) to be aware of as you build your applications.
We've all seen these mistakes many times, and probably even made a couple of them ourselves.