Friday, March 30, 2012

Pros & Cons of Using Triggers

Hello,
Can anyone tell me what are the pros and cons of creating and using triggers
in the database? Are there any performance and debugging concerns?
TIA,
Dee
Triggers are primarily intended for providing procedural integrity; however,
they can be used for several purposes. There are no generalized "pros &
cons" per se with triggers, but in specific situations you might come across
performance problems with locking, serialization and concurrency issues.
Anith
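For illustration, here is a minimal sketch of one common use, an AFTER UPDATE
audit trigger. The dbo.Orders table, its OrderID/Status columns, and the
dbo.OrderAudit table are hypothetical names invented for the example.

-- Hypothetical audit table; column names are illustrative only.
CREATE TABLE dbo.OrderAudit (
    OrderID   int         NOT NULL,
    OldStatus varchar(20) NULL,
    NewStatus varchar(20) NULL,
    ChangedAt datetime    NOT NULL DEFAULT GETDATE()
);
GO
-- Fires once per UPDATE statement and reads the inserted/deleted
-- pseudo-tables, so it handles multi-row updates correctly.
CREATE TRIGGER trg_Orders_AuditStatus
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrderAudit (OrderID, OldStatus, NewStatus)
    SELECT d.OrderID, d.Status, i.Status
    FROM inserted AS i
    JOIN deleted AS d ON d.OrderID = i.OrderID
    WHERE d.Status <> i.Status;
END
GO

Because the trigger runs inside the same transaction as the UPDATE that fired
it, any extra work or locking it does lengthens that transaction, which is
where the locking and concurrency concerns mentioned above come from.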


Pros & Cons of uniqueidentifier as PK.

Here are some pros and cons of using uniqueidentifier as a PK, do you think
that they justify using or losing it?
Pro
Inserts are evenly spread into the index (clustered or not) and therefore
the index requires less maintenance to remain optimally structured.
Pro
The client can generate the key and so doesn't need a round trip to the
server to get it, simplifying transactional load on the server by reducing
or eliminating the need for nested transactions (no need to roll back an
inner committed transaction if a child record insert fails because you
haven't inserted it yet).
Pro
No need for an extra indexed rowguid for merge replication, you already
have one.
Pro
People cannot guess your primary keys
Con
They can't read them back to you either
Con
When clustered, it uses more space in each row and in every index on the
table, and larger indexes/tables take longer to traverse.
I'm still undecided, what do you think?
Mr Tea
http://mr-tea.blogspot.com
> Here are some pros and cons of using uniqueidentifier as a PK, do you
> think that they justify using or losing it?
You haven't told us anything about your requirements, business needs, usage,
hardware, etc. The pros and cons are laid out for you in many places (e.g.
see http://www.aspfaq.com/2504). It's YOUR job to make YOUR decision. None
of us can make it for you.
Think about reviewing cars at autos.msn.com. Do you want some reviewer to
pick out your car for you? Or do you just want to use his/her opinion as
part of your decision criteria?|||I'm after a few opinions to complement my own (the FAQ and related documents
are good reading), although the final decision will always be shaped by the
circumstances.
I'm looking for an average table with the normal ratio of
inserts/updates/selects, to attempt to derive which primary key
implementation would put the least load on the server for a given number of
users, and which implementation would handle the most concurrent users
without degrading performance.
This is one dilemma that I have never managed to nail down.
Mr Tea
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:OYBJyAOAFHA.3924@.TK2MSFTNGP15.phx.gbl...
> think
> You haven't told us anything about your requirements, business needs,
> usage,
> hardware, etc. The pros and cons are laid out for you in many places
> (e.g.
> see http://www.aspfaq.com/2504). It's YOUR job to make YOUR decision.
> None
> of us can make it for you.
> Think about reviewing cars at autos.msn.com. Do you want some reviewer to
> pick out your car for you? Or do you just want to use his/her opinion as
> part of your decision criteria?
>|||> I'm looking for an average table with the normal ratio of
> inserts/updates/selects, to attempt to derive which primary key
> implementation would put the least load on the server for a given number of
> users, and which implementation would handle the most concurrent users
> without degrading performance.
> This is one dilemma that I have never managed to nail down.
I don't think you will "nail it down" by having three or four people vote on
it. Have you considered performing load/stress testing and analysis to
compare the different approaches?|||> Pro
> Inserts are evenly spread into the index (clustered or not) and therefore
> the index requires less maintenance to remain optimally structured.
Actually, this random distribution is a con rather than a pro. This will
increase fragmentation and require more I/O for inserts. Even if you tune
the fillfactor to avoid splits, you'll still incur the cost of reduced
buffer efficiency during inserts with large tables.
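As a rough way to see this for yourself, here is a sketch (the tables, row
count and filler column are invented for illustration) that loads a
guid-keyed and an int-keyed table with the same data and compares
fragmentation with DBCC SHOWCONTIG:

-- Two hypothetical tables with identical rows but different clustered keys.
CREATE TABLE dbo.Orders_Guid (
    OrderID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Filler  char(200) NOT NULL DEFAULT 'x'
);
CREATE TABLE dbo.Orders_Int (
    OrderID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    Filler  char(200) NOT NULL DEFAULT 'x'
);
GO
-- Random NEWID() values land all over the index and cause page splits;
-- IDENTITY values always append at the logical end of the index.
DECLARE @i int
SET @i = 0
WHILE @i < 50000
BEGIN
    INSERT INTO dbo.Orders_Guid DEFAULT VALUES
    INSERT INTO dbo.Orders_Int DEFAULT VALUES
    SET @i = @i + 1
END
GO
DBCC SHOWCONTIG ('Orders_Guid')  -- expect high logical scan fragmentation
DBCC SHOWCONTIG ('Orders_Int')   -- expect very low fragmentation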

> Pro
> People cannot guess your primary keys
Why do you care whether or not people can guess a surrogate primary key
value?
Hope this helps.
Dan Guzman
SQL Server MVP
"Lee Tudor" <mr_tea@.ntlworld.com> wrote in message
news:BHBId.1359$3j6.315@.newsfe4-gui.ntli.net...
> Here are some pros and cons of using uniqueidentifier as a PK, do you
> think that they justify using or losing it?
> Pro
> Inserts are evenly spread into the index (clustered or not) and therefore
> the index requires less maintenance to remain optimally structured.
> Pro
> The client can generate the key and so doesn't need a round trip to the
> server to get it, simplifying transactional load on the server by reducing
> or eliminating the need for nested transactions (no need to roll back an
> inner committed transaction if a child record insert fails because you
> haven't inserted it yet).
> Pro
> No need for an extra indexed rowguid for merge replication, you already
> have one.
> Pro
> People cannot guess your primary keys
> Con
> They can't read them back to you either
> Con
> when clustered, uses more space in each row on each index on the table,
> larger indexes/tables take longer to traverse.
> I'm still undecided, what do you think?
> Mr Tea
> http://mr-tea.blogspot.com
>|||> Why do you care whether or not people can guess a surrogate primary key
> value?
I can see this mattering in some cases for the unusually privacy-sensitive, e.g. if I
know my customerID is #3367 and yours is #3389, I know that I was a customer
before you... things along that line.
Deep down, why does it matter? Who knows.
In most cases, you should be exposing the actual unique data that makes a
customer unique (e.g. name, billing address, e-mail address, etc) even if
you are using an artificial key for efficiency or other reasons. I
shouldn't have any clue what my customer ID is.
However, there are cases where that is not a hard and fast rule, either. If
I place 30 orders at buy.com, and I have a problem with one of them, I'd
rather e-mail support with an order number, rather than a big composite
piece of data including my name, dob, billing address, credit card #, e-mail
address and order date. :-)|||Lee Tudor wrote:
> Here are some pros and cons of using uniqueidentifier as a PK, do you
> think that they justify using or losing it?
> Pro
> Inserts are evenly spread into the index (clustered or not) and
> therefore the index requires less maintenance to remain optimally
> structured.
Actually, this causes page splitting and reduced insert throughput if
you cluster on the key. Plus, the UID is 4x larger than a normal INT,
which takes up more space. Used on a clustered index, you've also added
an additional 12 bytes to each non-clustered key.

> Pro
> The client can generate the key and so doesn't need a round trip to the
> server to get it, simplifying transactional load on the server by
> reducing or eliminating the need for nested transactions (no need to
> roll back an inner committed transaction if a child record insert
> fails because you haven't inserted it yet).
>
No round trips for identity values are required. Your stored procedure that
does the insert just returns the new value using the scope_identity()
function. In fact, you save having to send 16 bytes for the UID
when you issue any inserts and pass far less for queries. Plus, your
joins should be faster on a 4-byte key than a 16-byte one.
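For what it's worth, here is a minimal sketch of that pattern (the table and
procedure names are hypothetical):

-- Hypothetical parent table keyed by an int IDENTITY.
CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       varchar(100) NOT NULL
);
GO
-- The insert procedure hands the new key back via SCOPE_IDENTITY(),
-- so the client never generates or round-trips a key value.
CREATE PROCEDURE dbo.Customers_Insert
    @Name varchar(100),
    @CustomerID int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Customers (Name) VALUES (@Name);
    SET @CustomerID = SCOPE_IDENTITY();
END
GO
-- Usage:
DECLARE @NewID int;
EXEC dbo.Customers_Insert @Name = 'Acme', @CustomerID = @NewID OUTPUT;
SELECT @NewID AS NewCustomerID;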

> Pro
> No need for an extra indexed rowguid for merge replication, you
> already have one.
True.
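For context, if the table did not already have a uniqueidentifier key, merge
replication would need a ROWGUIDCOL column roughly like this (a sketch; the
table name is hypothetical, and replication adds such a column automatically
when none exists):

ALTER TABLE dbo.Orders
    ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID();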

> Pro
> People cannot guess your primary keys
Does this really matter?

> Con
> They can't read them back to you either
It's a surrogate key, so no one needs to know the value but the system
in most cases.
David Gugick
Imceda Software
www.imceda.com

Properties not imported with Table

I am developing a DB with others in my group. When I import tables created on other servers to my server, the primary key and other properties do not import with the tables. Can anyone explain why this is happening? Is there a setting I have overlooked?|||If you use DTS, make sure you use the Copy object(s) instead of the Copy table(s) option.|||Nope...what method are you using to migrate the data?

Look at the DTS Transfer Database Task option

Or better yet, script the objects then build them...I prefer this method, and use bcp|||Originally posted by joejcheng
If you use DTS, make usre you use the Copy object(s) instead of Copy table(s) option.
The copy object worked ... Thanks

Proprietary data in SQL2005

I'm trying to understand what I can do to protect proprietary data in SQL
2005. I have an application that currently uses Paradox and I plan to move
it to SQL 2005. Most users will start off with SQL 2005 Express, but will
eventually move to a 'full' version of SQL Server. Paradox allows me to
encrypt whole tables. I know it's not very good security because someone
can still do memory dumps etc. but in combination with licensing agreements
it's probably sufficient in my case to protect proprietary data stored in
the database. Can I achieve something similar in SQL 2005?
I see that I can encrypt data in specific columns, but I'm guessing that
those columns can't be indexed? (Or if they were indexed, it would have to
be the encrypted values rather than the original unencrypted values that
would actually be indexed thus making the indexing less useful)? It doesn't
seem like there's any way to encrypt a whole table?
In some of the stuff I've read, I get the sense that if I create a named
instance of a new SQL Server (Standard or Express), I can set up my instance
to only use SQL Authentication. Then I can prevent the Computer Admin of
the machine where SQL Server is installed from using his/her Windows
Authentication to access the database or named instance of the server. The
only way to configure the server or its databases would be to know the SA
password and use SQL Authentication to log into the SQL server instance. Is
this correct? And, what does this gain me? How hard would it be to take
the database from my named instance and move it to a different SQL Server
Instance and then gain access to it?|||Hi,
Thanks for using the Microsoft Online Managed Newsgroup.
From your description, I understand that:
You wanted to know:
1. if you can encrypt a whole table in SQL Server 2005;
2. if you can set up your SQL Server instance only use SQL Authentication;
3. how you can move your database from your named instance to a different
SQL Server instance and gain access to it.
If I have misunderstood, please let me know.
For your first question, there is currently no setting to encrypt a
whole table in SQL Server. You can encrypt a particular column in a table
by using a key or a certificate. You may refer to:
Improving Data Security by Using SQL Server 2005
http://www.microsoft.com/technet/it...tsec.mspx#EYAAC
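To make the column-level option concrete, here is a rough sketch of that
approach in SQL Server 2005 (the key, certificate, table and column names
below are all invented for illustration):

-- One-time setup in the user database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
CREATE CERTIFICATE FormulaCert WITH SUBJECT = 'Protects proprietary data';
CREATE SYMMETRIC KEY FormulaKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE FormulaCert;
GO
-- Hypothetical table: the protected value is stored as varbinary ciphertext.
CREATE TABLE dbo.Formulas (
    FormulaID   int IDENTITY(1,1) PRIMARY KEY,
    FormulaName varchar(100) NOT NULL,     -- searchable, not encrypted
    FormulaText varbinary(8000) NOT NULL   -- encrypted payload
);
GO
-- Writing and reading encrypted data.
OPEN SYMMETRIC KEY FormulaKey DECRYPTION BY CERTIFICATE FormulaCert;

INSERT INTO dbo.Formulas (FormulaName, FormulaText)
VALUES ('Blend A', EncryptByKey(Key_GUID('FormulaKey'), N'secret recipe text'));

SELECT FormulaName,
       CONVERT(nvarchar(4000), DecryptByKey(FormulaText)) AS FormulaText
FROM dbo.Formulas;

CLOSE SYMMETRIC KEY FormulaKey;

Note that, as you suspected, an index on the encrypted column would index the
ciphertext, so any attributes you need to search or index should stay in
separate unencrypted columns.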
For your second question, I would like to let you know that SQL Server has
only two authentication modes: one is Windows Authentication mode; the other
is Mixed mode (which includes both Windows Authentication and SQL
Authentication). So Windows Authentication will always be available on your
SQL Server. Trusted connections and local users can connect to your SQL
Server instance; however, they will not have permission to access your
databases unless they are members of the local administrators group or you
assign permissions to them.
For your last question, I recommend that you:
1. Fully backup all of your user databases and logs;
2. Restore the databases to your new SQL Server instance;
3. Transfer SQL Server logins and passwords to the new SQL Server instance.
Please refer to:
How to transfer logins and passwords between instances of SQL Server
http://support.microsoft.com/kb/246133/en-us
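As a minimal sketch of steps 1 and 2, with the database name, file paths and
logical file names below used only as placeholders:

-- On the source instance: take a full backup of the user database.
BACKUP DATABASE MyAppDb
TO DISK = 'C:\Backups\MyAppDb.bak'
WITH INIT;

-- On the target instance: restore it, relocating data and log files as needed.
RESTORE DATABASE MyAppDb
FROM DISK = 'C:\Backups\MyAppDb.bak'
WITH MOVE 'MyAppDb'     TO 'D:\SQLData\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO 'E:\SQLLogs\MyAppDb_log.ldf';

Step 3 is covered by the article above: server-level logins do not travel
inside the database itself, so they have to be transferred separately and any
orphaned database users re-mapped after the restore.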
Also, I strongly recommend that you refer to this article for more
information:
How to move databases between computers that are running SQL Server
http://support.microsoft.com/kb/314546/en-us
Besides, for a SQL Server 2005 instance, you can also use the Copy Database
Wizard to move the databases:
Using the Copy Database Wizard
http://msdn2.microsoft.com/en-us/library/ms188664.aspx
If you are very concerned about table-level encryption, I recommend that
you give Microsoft feedback via this link:
https://connect.microsoft.com/SQL
Your feedback will be routed to the SQL Server team so that this feature can
be considered for a future release.
Look forward to your reply. If you have any other questions or concerns,
please feel free to let me know. It is my pleasure to be of assistance.
Charles Wang
Microsoft Online Community Support
When responding to posts, please "Reply to Group" via your newsreader
so that others may learn and benefit from this issue.
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Hi,
How is everything going? Please feel free to let me know if you need
further assistance.
Have a great day!
Sincerely yours,
Charles Wang
Microsoft Online Community Support

Proposed Architecture

I would be most grateful for any comments regarding the following
proposed data warehouse architecture:
1. Backroom Box
This is where all the ETL work will be done.
Daily & monthly imports of data from a number of primary data sources.
Lots of surrogate key transformations.
- Instance of SQL Server 2000 (Enterprise) with the Data Staging Area
database.
2. Frontroom Box
This is where the star-schema, cubes & web application (XLCubed Web)
will be.
- Instance of SQL Server (Enterprise) with the Datawarehouse database.
- Instance of Analysis Services.
- Instance of XLCubed Web.
All the fact tables in the star-schema will probably total less than
50Gb.
The updates (whether daily or monthly) are likely to be less than 1Gb.
There will be max. 50 concurrent users across 4 sites.
There will be about 20 cubes (most will be small; i.e. less than 1Gb)
The biggest will have a fact table of about 2Gb (less than 20m rows &
max. 60 columns).
There will be no distinct counts.
There will be no virtual cubes.
The bigger cubes will have many measures (about 40).
With about another 20 calculated measures/members.
The biggest cube will have about 12 dimensions (all having 2 or 3
levels).
Some dimensions will be parent-child.
Some of the smaller cubes will have writeback ability which will be
done through XLCubed Excel version.
Does it make any difference where SQL Agent is run from?
Backroom instance makes most sense?
Please advise on what servers to buy.
Processors, RAM, disk, etc.
What matters most for the backroom server? Processor speed?
What matters most for the frontroom server? RAM?
2*3Ghz CPU, 4Gb RAM, 250Gb disk?
Good estimate! It is quite normal to have two dedicated boxes for
Staging and Production. Now, you need to be careful about the update rate.
If it is 1 GB per day then you may need more space, or need to come up
with horizontal partitioning and clustering, but 1 GB per month is alright,
although you may need partitioning and clustering down the track.
The cubes seem OK. Writeback really depends on your application,
but as you know, you need to keep track of your writeback values, which
means merging them back into the fact table with DTS when writeback activity
happens. That is quite normal for budgeting and forecasting cubes and should be
OK.
SQL Agent can be run from anywhere, but I think the backroom box makes more
sense; please check with your system admin people.
Regarding buying servers, have you actually done any research based on
price and performance yet? If not, then I strongly suggest that you
consider an HP Itanium box. SQL Server needs at least 4 CPUs so that it can
share resources, and at least 4 GB of RAM, which you planned anyway.
Itanium is a 64-bit box, specially built for Windows Server 2003 and
SQL Server. IBM blades are also OK but very expensive. I found Itanium
more practical and better value for money. It will be around 20-25 thousand
US dollars these days.
The frontroom box should have the same or more in it since it will be accessed
all the time.
Your overall estimate is quite legitimate.
Hope this helps!
Regards.
|||In my case, I prefer to use one big server instead of two dedicated ones.
Why?
Simple: generally the loading process runs during the night when the users are
not there; second, a big server loads data more quickly, especially if there are
a lot of complex transformations in the staging.
Also, a big server provides better response time for the end users.
Another advantage: if there is any data quality issue, I can do some of that
work during working hours in a shorter time thanks to the better performance.
Also, when there are cubes between the database and the users, working in
the database doesn't impact the users.
Using the Enterprise Edition of Windows 2003 you have a tool to manage the
resources on the server (CPU, memory), so you can ensure enough resources are
available for the end users during the day and maximize the staging part
during the night.
Itanium CPUs are good, but dual-core CPUs are good too!
Regarding the license model of SQL Server (per processor), dual core is
interesting (you pay for 1 CPU but you have 2 CPUs...).
I have found AMD Opteron CPUs very efficient with SQL Server.
Focus only on 64-bit or x64 systems with at least 16 GB of memory.
Also try to use SAS drives; you can plug SATA drives into these controllers,
so you can save historical data on low-price SATA drives.
"Dip" <soumyadip.bhattacharya@.gmail.com> wrote in message
news:1140392471.587633.100670@.g44g2000cwa.googlegroups.com...
> Good estimate! This is quite normal to have two dedicated boxes for
> Staging and Production. Now, you need to be careful about update rate.
> If it is 1 GB per day then you may need more space or need to come up
> with horizontal partition and clustering but 1 GB per month is alright
> although you may need partition and clustering down the track.
> Cubes are seems OK. Write back is really depends on your application
> but as you now, you need to keep track of your write back values which
> means merge them back to fact table with DTS when write back activity
> happens. Quite normal for Budgeting and Forecasting cubes and should be
> OK.
> SQL Agent can be run from anywhere but I think backroom box makes more
> sense, please check with your system admin people.
> Regarding buying server, have you actually done any research based on
> price and performance yet? If you not then I strongly suggest that you
> consider HP Itanium Box. SQL Server needs at least 4 CPU so that it can
> share resources and at least 4 GB of RAM which you planned anyway.
> Itanium is a 64 bit box, specially built for Windows 2003 server and
> SQL Server. Also, IBM blade is OK but very expensive. I found Itanium
> is more practical and value for money. It will be around 20-25 thousand
> US Dollars these days.
> Front room box should have same or more in it since it be will accessed
> all the time.
> Your overall estimate is quite legitimate.
> Hope this helps!
> Regards.
>