DACPAC
DACPAC, or Data-tier Application Component Package, is a much-touted feature that allows developers to package database changes into a single file in Visual Studio and hand it to the DBAs for deployment. This is a significant improvement over how changes are typically distributed today: either as a series of .SQL files with deployment instructions, or as a set of Team Foundation Server paths for the DBA to fetch, accompanied by the same deployment document. Either way, the process invites deployment problems because there is too much human interaction and too much opportunity for misinterpretation.
With DACPAC, DBAs get a single deployment file from the developers that contains all the changes; there's nothing to misunderstand or forget. But in this first incarnation, DACPAC has some problems. For starters, to make even minor changes, DACPAC completely re-creates the database and all of its objects, then moves all of the data over to the duplicated structures. In the final step, it drops the original database and gives the new one the proper name. Creating an entirely new copy of the database for a small code change clearly won't fit most situations. Further, DACPAC doesn't copy user permissions or work with Service Broker or replication objects.
DACPAC is a great idea, but it is currently suitable only for very small databases with simple structures. Keep an eye on this feature, though; I suspect many of these limitations will be ironed out in future versions, and DACPAC should only get better.
Microsoft has combined SQL Server 2008 R2 and Visual Studio 2010 to give us a new feature called data-tier applications, or DACs. Deployment of DACs from Visual Studio into SQL Server is done through a data-tier application package, also known as a DACPAC.
The good
DACPACs are great for small shops that are deploying minor application changes to SQL Server. They allow the code to be kept within existing source control and give developers an easy way to edit SQL Server objects from within the already familiar Visual Studio environment. This means that developers can work against their development database for all their coding and then package up all the changes into a single DACPAC for release to production. The DACPAC can also be handed off to a DBA for release into a production or quality assurance (QA) environment.
The DACPAC also handles versioning of the database, giving the developer an easy way to keep database changes organized: a .NET developer can edit the tables, procedures, views, and functions of the database from within the Visual Studio project.
The bad
Data-tier applications have a few issues in this first release. The first problem is that not all features of the SQL Server engine are supported by DACPACs. This includes SQL Server Service Broker, CLR objects and, most importantly, SQL Server security.
All of these features can still be supported by using a post-deployment script. This is not an ideal solution, however, as developers need to know the appropriate T-SQL commands to create and manage those objects and the security settings.
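As a minimal sketch of such a post-deployment script (the login, user, and object names here are hypothetical, not part of any real deployment), the DBA might run something like this right after deploying the DACPAC:

```sql
-- Hypothetical post-deployment script: re-create the security
-- that the DACPAC deployment does not carry over.
CREATE USER AppUser FOR LOGIN AppLogin;  -- assumes the login already exists on the server

-- Grant the application user the permissions it needs.
GRANT SELECT, INSERT, UPDATE ON dbo.tbl_TriggerTest TO AppUser;
GRANT EXECUTE ON SCHEMA::dbo TO AppUser;
```

The package and the script together then make up the full release, which is exactly the kind of extra coordination the DACPAC was supposed to eliminate.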
Currently DACPACs can only be pushed to SQL Server 2008 R2, and they must be developed using Visual Studio 2010.
RAID (RAID 0, 1, 5, 10)
Can a stored procedure call itself or another stored procedure recursively? How many levels of nesting are possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion is a method of problem solving in which the solution is arrived at by repetitively applying the same logic to smaller subsets of the problem. A common application of recursive logic is numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed-code references up to 32 levels.
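To illustrate, here is a minimal recursive stored procedure (a sketch; the procedure name is arbitrary) that computes a factorial by calling itself:

```sql
CREATE PROCEDURE dbo.usp_Factorial
    @n INT,
    @result BIGINT OUTPUT
AS
BEGIN
    IF @n <= 1
        SET @result = 1;                             -- base case ends the recursion
    ELSE
    BEGIN
        -- T-SQL does not allow expressions as EXEC arguments,
        -- so compute @n - 1 into a variable first.
        DECLARE @prev INT = @n - 1;
        DECLARE @sub BIGINT;
        EXEC dbo.usp_Factorial @prev, @sub OUTPUT;   -- recursive call
        SET @result = @n * @sub;
    END
END
GO

DECLARE @answer BIGINT;
EXEC dbo.usp_Factorial 5, @answer OUTPUT;
SELECT @answer;   -- 120
```

Because the nesting limit is 32, calling this procedure with a value of @n of 33 or more fails with the same Msg 217 nesting-limit error discussed in the trigger problem below.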
Problem
I am trying to resolve an issue with triggers. We use them for things like automatically setting the update date and time on some tables. Recently we added a second trigger to a table and now we are getting the error "Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)". For the time being we dropped that latest trigger. Can you help us to resolve this?
Solution
As a first step let's review an example which will duplicate your problem. We'll create a single table with two triggers. We will use the following table to test our triggers:
CREATE TABLE dbo.tbl_TriggerTest
(
    pk_TriggerTest INT IDENTITY
    , Comment NVARCHAR(256)
    , TriggerSample NVARCHAR(256)
    , Created DATETIME DEFAULT GETDATE()
    , Updated DATETIME DEFAULT GETDATE()
)
Now let's
create a trigger that sets the Updated column in our table to the current date
and time by using the GETDATE() function:
CREATE TRIGGER dbo.tr_TriggerTest_Footprint
ON dbo.tbl_TriggerTest
AFTER UPDATE
AS
UPDATE t
SET Updated = GETDATE()
FROM dbo.tbl_TriggerTest t
INNER JOIN inserted i ON t.pk_TriggerTest = i.pk_TriggerTest
The above trigger fires whenever there is an update to the tbl_TriggerTest table. The trigger references the inserted table, a pseudo-table available inside a trigger. An update is really handled as a delete followed by an insert, so there is also a deleted pseudo-table available to a trigger. The deleted table contains the updated rows with their values before the update; the inserted table contains the same rows with their new values.
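As a side illustration of the two pseudo-tables (separate from the walkthrough's triggers; the audit table here is hypothetical), a trigger can join deleted and inserted to record old and new values side by side:

```sql
-- Hypothetical audit table and trigger illustrating the deleted
-- (old values) and inserted (new values) pseudo-tables.
CREATE TABLE dbo.tbl_CommentAudit
(
    pk_TriggerTest INT,
    OldComment NVARCHAR(256),
    NewComment NVARCHAR(256),
    ChangedAt DATETIME DEFAULT GETDATE()
);
GO

CREATE TRIGGER dbo.tr_TriggerTest_Audit
ON dbo.tbl_TriggerTest
AFTER UPDATE
AS
INSERT dbo.tbl_CommentAudit (pk_TriggerTest, OldComment, NewComment)
SELECT d.pk_TriggerTest, d.Comment, i.Comment
FROM deleted d
INNER JOIN inserted i ON d.pk_TriggerTest = i.pk_TriggerTest;
```

The walkthrough below continues with only the tr_TriggerTest_Footprint trigger created above.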
To test the
trigger, we will run the following script to insert a row into our sample table
then perform an update:
INSERT dbo.tbl_TriggerTest (Comment)
VALUES ('X')

UPDATE dbo.tbl_TriggerTest
SET Comment = 'This is a comment'
The above script runs successfully: the UPDATE statement sets the Comment column to 'This is a comment', and the trigger sets the Updated column to the current date and time.
Now we will
create another trigger on our table to reproduce the trigger error:
CREATE TRIGGER dbo.tr_TriggerTest_Sample
ON dbo.tbl_TriggerTest
AFTER UPDATE
AS
UPDATE t
SET TriggerSample = 'updated ' + CAST(GETDATE() AS NVARCHAR(20))
FROM dbo.tbl_TriggerTest t
INNER JOIN inserted i ON t.pk_TriggerTest = i.pk_TriggerTest
This trigger
updates the TriggerSample column in our table. When we rerun the test
script above we get the following error message:
Msg 217, Level 16, State 1, Procedure tr_TriggerTest_Sample, Line 5
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).
Notice that both triggers update the very table whose update causes them to fire. The UPDATE statement inside each trigger causes both triggers to fire again, and this continues until the nesting level is exceeded. The resolution is to check the nesting level inside the trigger and execute the update statement only once, which you can do with the TRIGGER_NESTLEVEL function.
Alter both
of the triggers as follows:
ALTER TRIGGER dbo.tr_TriggerTest_Footprint
ON dbo.tbl_TriggerTest
AFTER UPDATE
AS
BEGIN
    IF TRIGGER_NESTLEVEL() > 1
        RETURN
    UPDATE t
    SET Updated = GETDATE()
    FROM dbo.tbl_TriggerTest t
    INNER JOIN inserted i ON t.pk_TriggerTest = i.pk_TriggerTest
END
GO

ALTER TRIGGER dbo.tr_TriggerTest_Sample
ON dbo.tbl_TriggerTest
AFTER UPDATE
AS
BEGIN
    IF TRIGGER_NESTLEVEL() > 1
        RETURN
    UPDATE t
    SET TriggerSample = 'updated ' + CAST(GETDATE() AS NVARCHAR(20))
    FROM dbo.tbl_TriggerTest t
    INNER JOIN inserted i ON t.pk_TriggerTest = i.pk_TriggerTest
END
GO
If you run the above test script again, you will see that the error is resolved. The change to the triggers is to test the nesting level and execute the trigger body only when the nesting level is 1. The nesting level is 1 the first time a trigger fires as a result of an update statement; each successive time a trigger fires as a result of the same statement, the nesting level increments by 1.
There is another way to resolve this problem. The SQL Server configuration option "nested triggers" determines whether an action performed in a trigger can cause another trigger, including the same one, to fire. The default value for "nested triggers" is 1, which allows the behavior we saw in our sample. You could change "nested triggers" to 0 and disable trigger nesting altogether.
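Disabling nesting is a one-line server-level configuration change:

```sql
-- Disable nested trigger firing at the server level.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```

Keep in mind that this is a server-wide setting, so it affects every database on the instance, not just the one with the problem triggers.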