Do You Want to Be Right?

  • If you can believe it, I once had a boss who (although I was his top developer) told me I was "too thorough". This was from a CIO who could not let go of development, spent close to 75% of his time developing, and often checked in code that wouldn't compile. Was I as much overkill as the chairs in Miles Neale's excellent example? No. But if my name was going to be on it, I wanted it done right.

    I've always said, I write what I believe to be good code because I'm lazy and selfish. I'm thorough because I'm lazy and selfish. I document my code and always include check-in comments because I'm lazy and selfish. What do I mean by lazy and selfish? I do these things because, if I have to go back to the code when the business user wants to change the functionality, I do not want to waste the clock cycles figuring out what I was doing and why. So... call me "lazy", when in fact, I'm the exact opposite! 😉

    Lisa

  • This discussion reminded me of a project I once worked on. All I was tasked to do was write a program to mail some number of people a customized message. Pretty simple, really, and trivial to do. The problem I ran into was that 400 of my 1000 test emails never went out. Sometimes only 300, or 100, or 50 wouldn't. Either way, all 1000 hardly ever went out. It took me a while to figure out that network congestion, combined with hammering the mail server, was the problem. I'm pretty sure my boss would have been okay with up to 40% of the messages not going through and would have worked around it somehow: schedule the mailings during periods of low network activity, let some third-party service do it instead, or just not send the emails at all (they weren't that important).

    I ended up creating a web service that had the spool directory of the mail server mapped. My mailing program would zip its emails and send the archive to the web service, which would unzip them and process them into the spool directory. It was an elegant solution, I thought, and it worked 100% of the time to get the emails out. But in the course of doing that, my boss came in one day and asked why it was taking me so long to do such a simple task: send out emails. I explained it to him and he left feeling okay. A year later, that program is being used for practically all of the mass mailings we do, because it always works, even though I only created it for the one business problem I was tasked with. The fact that it always works, no matter what, has even changed mindsets about what is possible.

    I get the argument about getting the job done and not trying to be an artist with your code. But sometimes the best solution really does take an investment of energy, even if that looks like code artistry to others.

  • I think Miles Neale has hit the nail on the head there with his "chair" example.

    I've been doing software development for about 19 years, since graduating from college.

    At first, I approached problems the way I was taught in school: the theoretical, utopian way, where the end product was as technically "perfect" as I could make it. I was preoccupied with writing elegant code, thinking that was what I should be doing.

    Over the years of working in the real world, I have discovered that businesses are not interested in elegant code. They are interested in SOLVING A BUSINESS PROBLEM. Obviously the situation varies from company to company, but unless you are working in some environment with unlimited resources, you have to learn to ask yourself: what provides the best VALUE TO THE BUSINESS?

    You have a fixed number of hours a week. What can you do with them? You can produce 5 new process/productivity tools at 80% of perfection in the time it would take you to make 4 new tools at 100% perfection. Which adds more value to the enterprise? More often than not, the 5 tools at 80% will add more value to the company than the 4 tools at 100%.

    Yes, you will have to go back and maintain the 80% tools more than the 100% tools. However, in the meantime, people will be enjoying immediate productivity benefits from the 80% code of that extra fifth project. Likely not as much as if it were 100%, but certainly more than not having that fifth tool at all.

    Kinda like having a mortgage; paying for a house over 25 years will make that house much more expensive than if you saved up and paid cash for it, but in the meantime you enjoy the benefits of living in that house, accruing some equity. Not mathematically ideal, but still better than living in a cardboard box while you save up enough to buy it outright.

    Nobody goes into computer science with dreams of writing many mediocre tools instead of a few good ones, any more than someone becomes a chef with dreams of making 1000 crappy cheeseburgers instead of 1 award-winning foie gras. However, money talks, and the reality is that your employer does not exist to provide you with an environment to show off how technically clever and elegant you are; they exist to make a profit. Your employment is a "side effect" of that objective.

    If the employer ever realizes that every unit of time you work provides a smaller contribution to financial profitability than someone else who is happy to do 80%, then they have a fiduciary responsibility to maximize shareholder value by replacing you with that other person.

    Sorry kids, that's reality in 2008. Is it "right"? Frankly, I believe the answer is NO. However, in the big picture, that doesn't matter. You can argue the moral point all year, but whether or not it's "right" isn't nearly as relevant as the fact that it's true. Unless you can change that, it is to your personal advantage to adapt to the way things are and maximize your opportunities within the constraints that exist.

  • Respectfully, the chair example is just wrong. You might take four tries to get the first one right, but you can easily replicate the other three chairs, assuming you documented what you did to build the first chair successfully. At that point, the chair becomes a template that you can apply to every chair you build in the future. A chair that was built 80% right will still be wrong 20% of the time; if you replicate that, you carry the 20% failure forward until failure is all but certain.

    There are certainly plenty of other places that the client can obtain the 80% chair. If that is what they really want, then good luck to them. If you desire a continued client base, you give them the chair that you feel is 100% to their specifications and your standards.

    To me, this is the difference between Microsoft and Google. Microsoft will put out a product that they know will need a lot more work to please their clients (service packs are a Microsoft invention). Google will design it, put it into Google Labs for testing, and then sign off on it once it has been "vetted". Or they will nix it based on data collected during the labs phase.

    Experienced programmers do not redesign an application as soon as the new technology comes out anyway. We wait for the service pack to come out first :>

  • I don't agree with the chair example either in the context of this discussion, which is about doing code "right" rather than "wrong", not "perfect" versus "imperfect" (and you'd still need to define those terms...).

    In the real business world we don't tear apart a project just because some new whiz-bang technology comes along (well, most of us don't since our companies can't afford it). But we do incorporate new technologies into ongoing projects where it makes sense to do so.

    And no matter what kind of technology you use to build those chairs, if they collapse 20% of the time from poor construction, I'm betting you'd still lose your job. And if you built them in the USA, you'd probably get sued, too! :hehe:


    Here there be dragons...,

    Steph Brown

  • Jeff Moden (6/15/2008)


    I've never known hardware to be the solution to any performance problem... not ever.

    I have some examples where hardware was a solution. See if you agree with them:

    SQL Server with a 10-disk SAN, set up as a RAID-5 array, as 1 partition. All databases, all log files, all indexes, etc., all on that one partition on that one array. I didn't know to request something else, and the guy who set it up (who did know better) just plain likes RAID-5 a lot. All the databases on this were OLTP. I learned a bit, found out how to improve it, got it changed, and got a 30%-100% increase in speed (depending on the transaction) for all the databases concerned.
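
    For anyone who wants to check how their own files are laid out before assuming the SAN guy got it right, here's a minimal sketch (assumes SQL Server 2005 or later; nothing here comes from the actual system I described):

        -- List every data and log file on the instance and where it physically lives.
        -- If everything resolves to the same drive/partition, data and log I/O are
        -- competing for the same spindles.
        SELECT  DB_NAME(database_id) AS database_name,
                name                 AS logical_file_name,
                type_desc            AS file_type,      -- ROWS (data) or LOG
                physical_name                           -- full path, including drive letter
        FROM    sys.master_files
        ORDER BY physical_name;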

    My dev box had 1 GB of RAM and a single-core Pentium CPU. Got it upgraded to a dual-core CPU and 4 GB of RAM, added a high-speed HDD, and got a huge increase in speed on pretty much everything. (Okay, that's kind of reductio ad absurdum, but it does apply at least somewhat.)

    Had a database that, by its very nature, had long-running, CPU-intensive, RAM-hungry transactions, which accessed DLLs outside of SQL Server. I moved that to a cheap, relatively low-end server with just enough hardware to do what it needed, but the main point was getting it off the hardware of every other database. Huge increase in performance, stability, etc., for everything else in every other database.

    Of course, what you're talking about is (I assume) upgrading hardware to compensate for crap code. In that case, I have seen hardware "solve" it, in the sense that the queries began to run fast enough, or the web pages to load fast enough, or whatever, but at a stupidly high cost compared to other solutions (like rewriting the code correctly).

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I do have an opinion on this subject, and it's got a basis in reality. There is a gradient scale of being right, with dead-wrong on one end and perfect by all possible standards on the other. Somewhere on this scale, in all real-world cases, is "good enough". "Good enough" has both objective and subjective factors to it, and since it is at least partially subjective, it is not a single point on the scale, but a range of the scale.

    The main objective standard of "good enough" is: Does it produce the necessary end result? That one is pretty much yes/no. But even there, if different people have different needs from something, it might have a fuzzy range.

    The second objective standard of "good enough" is: Does it cost less/more to produce/run than it is worth? Where this can be measured financially, it can have a very definite answer.

    So far as I can tell, everything else is subjective.

    Here's some of why I think this way:

    When I started out working with databases, I was a sales and marketing person. I built an Excel spreadsheet and quickly expanded it into an Access database, just to keep track of my customers/prospects and their orders. After a few months, with the help of a developer who knew just enough SQL to squeak by, I converted the database from Access to SQL 2000 and left the Access front-end on it, because everyone else in the building also wanted to use that data and that application.

    That database and Access project worked. It got the job done. Customers could be placed in it, prospects could be tracked in it, orders could be tracked from start to finish, various statistics on order volume, inventory, etc., could all be tracked in there. Management got reports out of it that were very valuable to them.

    Was it "right"? By the best standards I knew at the time, the code in it was the best solutions I knew. I never knowingly wrote something that I knew was "quick and dirty" or "I'll fix that later". BUT (and this is a seriously elephantine but), when I look back at that code now, I shudder at how horrible it was. I had a view in the database that I was seriously proud of at the time because it made dozens of forms work, and made it very fast to build the forms. I thought it was a great idea. It selected from 6 or 8 tables, hundreds of columns, and I used maybe 5-10 of those columns, usually from no more than 2 tables, in each form. Indexes? I didn't even know they existed yet. Stored procs? "Stored what?" One of the tables had "name" as a primary key. The list of attrocities goes on and on. Wasn't even 1NF, in some cases.

    On the other hand, as mentioned, that database had very little cost associated with it, and huge value to the company and the employees. If it was down for an hour, the whole place ground to a halt. Every day, people would come up with new and valuable uses for it.

    Of course, over the years, I rebuilt the thing. Twice, I dropped the whole database structure, started over from the ground and built up. Each time, the database was "better, stronger, faster". Employees loved it. Managers loved it. It allowed us to compete successfully with a company with 10 times our manpower and put us in a position whereby we ate 20% of the available market from the leading company, mainly due to the systems we had based on that database. Each time, it became more "right".

    But does that mean it was "wrong" when it was first built? Personally, I would say "no". It was as right as I could make it at the time. If it had never improved past that point, if I'd stayed in sales and marketing and never learned how to make it "better, stronger, faster, more caffeinated", it would still have been incredibly useful and would have been instrumental in increasing my ability to sell and market by a significant margin, just as it was during the first weeks and months after I first built it.

    If I'd had to comply with the standards I hold now, I'd never have built the thing in the first place, and would have suffered thereby.

    Yes, it's better to make code, or anything else, "more right". Make it fulfill more needs, increase the ROI. But don't say something isn't right just because it isn't perfect.

    Just because you can't paint the Mona Lisa every time doesn't mean you shouldn't paint. Can't write "The Grapes of Wrath" or "The Iliad" every time you set fingers to keyboard? Does that mean you shouldn't ever write a single word?

    That's my take on it.

    P.S.: On the car analogy Jeff used, I think it may be flawed. By the standard of "never write code that's not 'right'", every car would have to be a Lamborghini. It's not about the tune-up leaving you with a cylinder that doesn't fire; it's about building the car with fewer than 12 cylinders in the first place. I hate to say it, but I'd be willing to bet you don't EVER drive a car to work that has as much as 80% of the performance of a Formula 1 car.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Interesting comments on the chair analogy. I think perhaps some people are interpreting that differently than I am.

    Of course, if 80% meant that the chair fails comprehensively (i.e., collapses when sat upon) 20% of the time, then I agree that is not even close to good enough, and you would be sued and go bankrupt.

    How I interpreted it was as a less literal representation of the 80/20 rule: a law of diminishing returns that manifests itself in software development as much as it does in anything else. Past a certain point, it becomes increasingly "expensive" to gain smaller and smaller increases in quality or performance.

    As was mentioned, I guess it depends on how you define quality.

    When I say 80%, I don't mean that the program only works 80% of the time. What I mean is that it may execute 80% as fast as a theoretically perfect program. It may crash, but perhaps only once every 2 person-weeks of continuous use. I know from experience that to squeeze the last bit of performance out of it, or to ensure that it will only crash once every 2 months of use, I will have to spend a LOT more time on it to cover every conceivable use case that occurs 1 time in 1000. The remaining 20% of performance may very well take just as long to write as the first 80% (or, as I discovered much to my dismay in the past, it might take even LONGER than the first 80%).

    That is where the business value test I mentioned earlier comes into play. Do you pay a coder an extra 4 weeks of pay to get that 20% increase in speed, when the speed of execution usually isn't the critical path for the business process anyway? Or do you have them stop at 80% and spend the next 4 weeks writing a new tool that adds a new bit of functionality somewhere else? That tool, too, will only run 80% as fast as a perfect version, but I'd argue that having a slightly slower tool today is still better than no tool at all.

    The car analogy is also a very good one. Some cars, such as a Mercedes, are designed to be closer to the 100% than the 80%. I'm sure it is an example of a variety of engineering "best practices" if held up to detailed technical scrutiny (or at least as close to that as is possible, given mass-production constraints). However, not everyone can afford such a car.

    A Hyundai is an example of a car that is closer to the 80%. That does not mean it breaks down on 20% of the trips taken, but that the manufacturer stopped at 80% of perfection to keep costs down. Many more people can afford that, and thus enjoy the freedom of movement associated with owning such a car.

    If every automotive engineer took the hard-line stance that some IT people are taking (and refused to do something unless it was technically perfect), then Hyundais would not exist. Only the rich could afford to drive anywhere.

    Likewise, only the largest and richest corporations could afford a computerized accounting system if everything had to be completed to the standard of theoretical perfection that is taught in school.

  • Say an airplane's navigation system. That must not give an incorrect result.

    It depends on "incorrect". If, for example, precision exceedes accuracy, that's wasted effort.

    If an airplane's (or ship's) navigation system is off by 100 meters, the pilot can still find the runway (dock) at a glance. If it's off by 10 km, there's a major problem. If it's off by 1 cm, that's excessively precise. The only way to know the "right" precision is by dealing with actual pilots of actual planes, and actual air-traffic controllers, while keeping in mind precision vs accuracy and ROI.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (6/16/2008)


    Say an airplane's navigation system. That must not give an incorrect result.

    It depends on "incorrect". If, for example, precision exceedes accuracy, that's wasted effort.

    If an airplane's (or ship's) navigation system is off by 100 meters, the pilot can still find the runway (dock) at a glance. If it's off by 10 km, there's a major problem. If it's off by 1 cm, that's excessively precise. The only way to know the "right" precision is by dealing with actual pilots of actual planes, and actual air-traffic controllers, while keeping in mind precision vs accuracy and ROI.

    I feel compelled to share a (semi)-relevant story here. My grandfather used to be a pilot, and is an engineer, so he considers 2% to be an acceptable rate of error. He illustrates this margin of error in this way: He was flying a friend to Florida at night, and his friend was marveling that my grandfather seemed unconcerned that he couldn't see anything. He asked my grandfather if he knew where he was, and he replied that he knew within 2% accuracy - "Somewhere over either Florida or Georgia." 😉

    As G^2 has noted, it depends on the size of your measuring stick.

    ---------------------------------------------------------
    How best to post your question
    How to post performance problems
    Tally Table: What it is and how it replaces a loop

    "stewsterl 80804 (10/16/2009)I guess when you stop and try to understand the solution provided you not only learn, but save yourself some headaches when you need to make any slight changes."

  • GSquared (6/16/2008)


    Say an airplane's navigation system. That must not give an incorrect result.

    It depends on "incorrect". If, for example, precision exceedes accuracy, that's wasted effort.

    Naturally. I probably should have written 'wildly inaccurate'

    I have fond memories of a Physics prof's reaction to a result given from an experiment.

    The student in question listed the result they got from the experiment as something like 105.26895 ± 0.25.

    Needless to say, no one ever did that again that year.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (6/16/2008)


    GSquared (6/16/2008)


    Say an airplane's navigation system. That must not give an incorrect result.

    It depends on "incorrect". If, for example, precision exceedes accuracy, that's wasted effort.

    Naturally. I probably should have written 'wildly inaccurate'

    I have fond memories of a Physics prof's reaction to a result given from an experiment.

    The student in question listed the result they got from the experiment as something like 105.26895 ± 0.25.

    Needless to say, no one ever did that again that year.

    And, that prof probably has the reaction down to a science for exactly that "... that year" reason. 🙂

    My high school physics textbook actually had incorrect definitions of precision and accuracy, as well as an incorrect definition of repeatability. I figured them out with outside resources, but man, that book made something so simple into something so complex!

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (6/16/2008)


    I have some examples where hardware was a solution. See if you agree with them:

    Yes... we did something similar at work... we changed from SQL Server Standard Edition to Enterprise Edition. We went from a lameo 4-processor box with 2 GB of RAM to a 16-processor box with 16 GB of RAM and an absolutely killer hard drive system. Everyone was overjoyed when 4-hour processes went down to 30-minute processes. Everyone (except me and my DBA) was in computational Nirvana... all was right with the world... for six months... and then we reached the secondary tipping point, where the 30-minute processes went back to 4 hours (and more, in some cases).

    Buying killer hardware is a temporary stopgap... nothing will help crap code except a non-crap rewrite. 😉 Triangular joins are still triangular... correlated sub-queries are still hidden RBAR... cursors and While loops are still RBAR... crap code is still crap code even though Cadillac hardware is available... eventually, the crap code is gonna put a tear in the seat...
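
    For anyone who hasn't run into the term, here's a minimal sketch of a triangular join versus a set-based alternative (hypothetical Orders table; not code from any of the systems mentioned):

        -- Triangular join: rank rows by counting how many rows come before each one.
        -- Every row gets compared against every earlier row, so the rows touched grow
        -- roughly as N*N/2... hidden RBAR that no amount of hardware makes linear.
        SELECT  a.OrderID,
                a.OrderDate,
                (SELECT COUNT(*)
                 FROM   dbo.Orders b
                 WHERE  b.OrderDate <= a.OrderDate) AS Seq
        FROM    dbo.Orders a;

        -- Set-based alternative (SQL Server 2005 and later): one pass over the table.
        -- (ROW_NUMBER() breaks ties; the correlated COUNT above counts them, like RANK().)
        SELECT  OrderID,
                OrderDate,
                ROW_NUMBER() OVER (ORDER BY OrderDate) AS Seq
        FROM    dbo.Orders;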

    ... and, no, all that new hardware didn't come close to stopping the average of 640 deadlocks per day. Only changing crap code to good code changed that.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden (6/16/2008)


    Triangular joins are still triangular... correlated sub-queries are still hidden RBAR... cursors and While loops are still RBAR... crap code is still crap code even though Cadillac hardware is available... eventually, the crap code is gonna put a tear in the seat.

    Yes, you are correct; these are things that can have an adverse effect on system performance. The unfortunate truth is that many people who write SQL are not fully aware of the performance issues these can cause.

    I had no formal SQL training when I first started working with MS SQL Server 6.5, beyond 2 Microsoft classes dealing with installation and maintenance of SQL 6.5 databases. Having had zero experience with Unix, and given the Unix-like nature of SQL Server 6.5, that is all I got so I could install and set up our first MS SQL database. From that point on, everything I learned, I learned on the job and from reading books. Guess what: a lot of the books I read talked about cursors, correlated sub-queries, and triangular joins (though I don't think they actually called them that), as well as standard query methods, including inner and outer joins.

    From casual conversations with others I work with, this has been the norm. What I have also seen is that, for most business applications, these coding techniques work well enough that no one has needed to go back and rewrite the code to enhance performance. The amount of data being queried, and the performance of the systems, has met the users' requirements.

    Will this continue to be the case several years down the road for these businesses? Who knows; but if the systems continue to perform as expected, there are other things these businesses can spend their resources on than looking for "crap code" that may exist in their systems. Maybe an adventurous DBA will stumble onto this code, rewrite it, and improve these processes, but until it starts having an adverse effect, they aren't going to expend limited resources fixing what isn't, in their minds, broken. That said, sometimes even when things aren't broken, they need to be reviewed and sometimes rewritten. A good DBA with the proper tools to monitor their systems may find these things before they become an issue and recommend changes to improve the system.

    What needs to happen actually happens right here on SSC. Individuals like myself have learned better, more efficient ways to accomplish common tasks. We try to pass this knowledge on to others and become advocates for ways of coding SQL queries that are more performant, easier to understand, and easier to modify when needed. We also find different ways to do common tasks, such as determining the first or last day of a month (one common idiom is sketched below). There are several ways of doing it, some a little faster than others, but they meet the same functional requirements. Is that to say one way is right and another is wrong? I have changed how I do it, but that doesn't mean others will.
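
    A minimal sketch of that date math, as one way of several and not necessarily the fastest:

        -- First day of the current month: count whole months since the base date
        -- (1900-01-01), then add that many months back onto the base date.
        SELECT DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0) AS FirstOfMonth;

        -- Last day of the current month: first day of next month, minus one day.
        SELECT DATEADD(dd, -1, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) + 1, 0)) AS LastOfMonth;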

    What I do know is that when I walk into a new environment now and see users complaining about system performance, I know things to look for in code that I may not have 3 or 4 years ago. These are things that I can also start looking at right where I work now. I have other learning curves to deal with here for some of the systems, but I have resources here to help with those (PeopleSoft and how it works). I'm not involved in the day-to-day activities, but there have been times when performance degraded and I had to help figure out the issues and come up with a solution. In the few cases we have had recently, the solution was actually to modify several indexing schemes. The indexing used under SQL Server 2000 worked fine, but when we moved to SQL Server 2005, a few of the online processes became problematic. Updated indexes, along the lines of the sketch below, solved the problem (in one case it was an index on a table that didn't have any indexes in SQL Server 2000; figure that one out).
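
    The general shape of those index changes (the table, column, and index names here are made up for illustration, not the actual PeopleSoft objects):

        -- The online process filtered and joined on columns that had no supporting
        -- index, and the SQL Server 2005 optimizer chose a plan that suffered for it.
        CREATE NONCLUSTERED INDEX IX_Enrollment_TermID_StudentID
            ON dbo.Enrollment (TermID, StudentID)
            INCLUDE (EnrollmentStatus);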

    I would like to start spending time reviewing developer SQL code as well as some of the PeopleSoft SQL (I have learned we can also tune that code, though it is then considered customized code that we'd have to maintain). Of course, I have to balance that with all the other activities I am responsible for as well.

    Bottom line: keep learning. Find better ways to do things rather than always doing it the same way just because you know it works. If you find a better way, and have the opportunity to fix previously written code, you should investigate doing so. It may require going through whatever change management process your company uses, or it could be something you do on the fly, if allowed. Given time in development, starting with what you know works may be best; it helps you identify the correct output of a query or stored procedure. But you shouldn't stop there, as long as you have time to continue development. As you learn new ways, those become the starting point for new projects.

    I used to put correlated sub-queries in the column list of SELECT statements. Now I have learned that it is often more efficient to move them into a join in the FROM clause, as in the sketch below. There are many other examples of tips I have learned on SSC and from other SQL developers that have improved the code I write today.
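
    A minimal before-and-after sketch of what I mean, using a made-up Customers/Orders schema:

        -- Before: correlated sub-query in the column list, evaluated per outer row.
        SELECT  c.CustomerID,
                c.CustomerName,
                (SELECT SUM(o.OrderTotal)
                 FROM   dbo.Orders o
                 WHERE  o.CustomerID = c.CustomerID) AS TotalSales
        FROM    dbo.Customers c;

        -- After: aggregate once in the FROM clause and join to the result.
        SELECT  c.CustomerID,
                c.CustomerName,
                t.TotalSales
        FROM    dbo.Customers c
        LEFT JOIN ( SELECT  CustomerID, SUM(OrderTotal) AS TotalSales
                    FROM    dbo.Orders
                    GROUP BY CustomerID ) AS t
               ON t.CustomerID = c.CustomerID;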

    😎

  • Heh... Ya know... some of these posts just reminded me of something... I shouldn't get upset about folks writing code to solve the "Time to Market" problem... I make more than half my living fixing such code... 😛

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
