SProc speed

  • Hi.

    I have taken over maintenance of an application where all reports are populated with data via SProcs.

    That's all well and good, as the joins are countless, but....

    The developer has created the SProcs with string concatenations based on a multitude of parameters, with a possibility of 8 different SQL query strings being constructed in the single SProc. This means the SProc is 7 pages long and nothing (in my view) is compiled when it is saved.

    My question is this: because all the SQL is constructed only at runtime, does this not mean that SQL Server has no execution plan at all for this SProc? And doesn't that mean there is no speed advantage in such a 'dynamic' SProc?

    I believe that I could rather let VB construct the SQL query string at runtime and then submit it as a normal query with no real loss in speed ---OR--- am I wrong?
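
    For concreteness, the proc builds its SQL something like this (a minimal sketch I made up; the proc, table and column names are invented):

    CREATE PROCEDURE dbo.usp_ReportSketch
        @LName VARCHAR(50) = NULL,
        @Dept  VARCHAR(50) = NULL
    AS
    BEGIN
        DECLARE @sql NVARCHAR(4000)
        SET @sql = 'SELECT * FROM dbo.Employee WHERE 1 = 1'

        -- Each optional parameter bolts another condition onto the string,
        -- so even 2 parameters yield 4 possible final query strings.
        IF @LName IS NOT NULL
            SET @sql = @sql + ' AND LastName = ''' + @LName + ''''
        IF @Dept IS NOT NULL
            SET @sql = @sql + ' AND Dept = ''' + @Dept + ''''

        EXEC (@sql)
    END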

  • Is the nature of the single stored procedure such that it cannot be broken out into much simpler non-dynamic SQL?

    Let us assume that the proc cannot be broken out.

    Even if the proc has no speed advantage, what it does give you is encapsulation of database logic in the database itself. A VB programmer simply has to call the proc with the appropriate parameters. Change the programming language to C#, C++, J#, F#, etc., and you still have a simple call to make to the proc.

    The second advantage is security. I do not like granting direct access to tables. I prefer to think of tables as being private properties and stored procedures as public methods to populate those properties. If I want to block a specific set of users from using a set of procedures, then it is a database security issue.

    If you put your SQL in your VB code, then you either have to grant direct access to your tables or, if you want a more secure version, start mucking around with column-level permissions. Column-level permissions are a whole new can of worms, and most people prefer to use VIEWS instead.
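
    With procs the grants stay simple; for example (a sketch; dbo.Employee, dbo.usp_EmployeeReport and the ReportUsers role are hypothetical names):

    -- Tables stay private; procs are the public interface.
    DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Employee TO ReportUsers
    GRANT EXECUTE ON dbo.usp_EmployeeReport TO ReportUsers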

    If you use VB.NET to call dynamic SQL and run SQL Profiler, you will see that .NET does not simply run your code; it calls sp_executesql to do it for you.

    If you call the procedure twice with the same argument signature, the second run will be executed from a cached plan.
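
    You can reproduce that caching behaviour directly; a minimal sketch (the table and parameter names are assumptions):

    -- Parameterized dynamic SQL: the plan is cached against the statement text,
    -- so a second call with a different @LName value reuses the same plan.
    DECLARE @sql NVARCHAR(4000)
    SET @sql = N'SELECT * FROM dbo.Employee WHERE LastName = @LName'
    EXEC sp_executesql @sql, N'@LName VARCHAR(50)', @LName = 'Smith'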

    The downside is that the dynamic SQL you pass to sp_executesql is normally built in a local nvarchar variable, which (in SQL Server 2000) is capped at 4,000 characters, so your 7-page stored proc might not fit.

  • David has great advice listed above.

    In my humble opinion, what you have is a very good example of bad design. I don't see any valid reason to write a 7-page-long (!!) stored proc just to build a SQL string and execute it. Since you didn't provide more detail, I'm guessing the reporting application you are dealing with is the kind that presents the user with a set of parameters they can use to build the selection criteria, and the stored proc builds the SQL string depending on the parameters selected.

    Now, I'm not going to say that there is a solution for this that will work in EVERY case; that would be silly. But in some cases it's possible to simplify a little. For example, a report on employees based on last name, department and salary has three parameters, say @LNAME, @DEPT, @SALARY. Based on the user's selection, the final query may be (SELECT * FROM EMPLOYEE WHERE LastName = @LNAME), or (SELECT * FROM EMPLOYEE WHERE LastName = @LNAME AND Salary = @SALARY), and so on. It looks like one has to consider each parameter separately and, if a value was passed, add the corresponding condition to the WHERE clause. For 3 parameters there are 8 possibilities (2^n in the general case for n parameters). But there is a better way to do it, without dynamic SQL and using only one query:

    SELECT * FROM EMPLOYEE
    WHERE EmployeeID = EmployeeID  -- always true for a non-NULL key; lets every condition start with AND
      AND (LEN(@LNAME)  = 0 OR LastName = @LNAME)
      AND (LEN(@DEPT)   = 0 OR Dept     = @DEPT)
      AND (LEN(@SALARY) = 0 OR Salary   = @SALARY)

    This assumes that all parameters are varchars, but you can adapt it to more general cases. Now, since there may be more than one table in the join, and those tables can also depend on the parameters chosen, you may start by determining all the possible cases (for the tables only, not the columns). If that number is small, break your procedure into smaller ones, one for each combination of tables, and then use the technique described above.

    As I said before, that may or may not work in your application, but you should at least try to extract as many (simpler) cases as you can from that monster procedure and put them in separate procs, so the dynamic SQL is left for the most complex cases. And even then, you can determine (by collecting some usage statistics) which of those complex cases are the most commonly used and write procs to handle those specifically; that way the dynamic SQL is left only for the most complex and unusual reports.
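
    Putting it together, the whole proc might look something like this (a sketch; the proc name and parameter types are assumptions):

    CREATE PROCEDURE dbo.usp_EmployeeReport
        @LNAME  VARCHAR(50) = '',
        @DEPT   VARCHAR(50) = '',
        @SALARY VARCHAR(20) = ''
    AS
    BEGIN
        -- One static query covers all 2^3 = 8 parameter combinations:
        -- an empty parameter simply disables its condition.
        SELECT *
        FROM EMPLOYEE
        WHERE EmployeeID = EmployeeID
          AND (LEN(@LNAME)  = 0 OR LastName = @LNAME)
          AND (LEN(@DEPT)   = 0 OR Dept     = @DEPT)
          AND (LEN(@SALARY) = 0 OR Salary   = @SALARY)
    END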

    Hope this helps...
