Monday, January 12, 2009

.NET Configurations: Application Level, Machine Level, Security Level

Application Configuration File: This configuration file stores the settings for a single application. It is an application-level configuration file; any change made to it applies only to that one application.

Machine Configuration File: This configuration file stores settings for resources that are shared by multiple applications. The machine.config file is located at %windir%\Microsoft.NET\Framework\&lt;version&gt;\CONFIG\machine.config. It is a system-level configuration file; any change made to it applies to all applications on that machine.
Security Configuration File:
Security configuration files contain information about the code group hierarchy and permission sets associated with a policy level. It is strongly recommended that you use the .NET Framework Configuration tool (Mscorcfg.msc) or Code Access Security Policy tool (Caspol.exe) to modify security policy to ensure that policy changes do not corrupt the security configuration files.
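As a sketch, an application configuration file (app.config or web.config) is plain XML; the key name below is a hypothetical example, not a framework-defined setting:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- Application-level settings: changes here affect only this application -->
  <appSettings>
    <!-- "GreetingText" is an illustrative key name, not a required one -->
    <add key="GreetingText" value="Hello from app.config" />
  </appSettings>
</configuration>
```

At run time such a value can be read with System.Configuration.ConfigurationManager.AppSettings (or ConfigurationSettings.AppSettings in .NET 1.x).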

SQL Server Functions

Functions : CREATE FUNCTION creates a user-defined function, which is a saved Transact-SQL routine that returns a value. User-defined functions cannot be used to perform a set of actions that modify the global database state. User-defined functions, like system functions, can be invoked from a query. They can also be executed through an EXECUTE statement, like stored procedures.
Types Of Functions:
  1. Rowset functions : Can be used like table references in an SQL statement.
  2. Aggregate functions :Operate on a collection of values but return a single, summarizing value.
  3. Scalar functions :Operate on a single value and then return a single value. Scalar functions can be used wherever an expression is valid. This table categorizes the scalar functions.

SQL Server Function Categories:

  1. Configuration Functions: Return information about the current configuration.
  2. Cursor Functions: Return information about cursors.
  3. Date and Time Functions (DATEADD, DATEDIFF, DATENAME, DATEPART, DAY, GETDATE, GETUTCDATE, MONTH, YEAR): Perform an operation on a date and time input value and return a string, numeric, or date and time value.
  4. Mathematical Functions: Perform a calculation based on input values provided as parameters to the function and return a numeric value.
  5. Metadata Functions: Return information about the database and database objects.
  6. Security Functions: Return information about users and roles.
  7. String Functions: Perform an operation on a string (char or varchar) input value and return a string or numeric value.
  8. System Functions: Perform operations and return information about values, objects, and settings in Microsoft SQL Server.
  9. System Statistical Functions: Return statistical information about the system.
  10. Text and Image Functions: Perform an operation on a text or image input value or column and return information about the value.

Functions Behaviour: UDFs support a wide range of functionality; even though you can't commit write changes to the database from within one, you shouldn't overlook them. Here's a list of the statements that you can include in the body of a UDF:

  • Flow control statements such as IF…ELSE, CASE, GOTO labels, etc.
  • UPDATE, INSERT and DELETE commands to modify the contents of tables which are local to the UDF and have been created using SQL Server 2000's new TABLE variable type.
  • The EXECUTE command to execute stored procedures and extended stored procedures.
  • The SET and SELECT statements to assign a value to a variable.
  • SELECT statements to retrieve field values from tables into local variables.
  • CURSOR, OPEN CURSOR, CLOSE CURSOR, DEALLOCATE CURSOR and FETCH NEXT FROM statements as long as the cursor is both created and disposed of within the scope of the UDF. Note that actual rows can't be returned, and they must be explicitly returned into type-matching variables using the FETCH NEXT INTO statement.
  • DECLARE statements to create variables local (in terms of scope) to the UDF.
  • Internally defined SQL variables, which are prefixed with "@@", such as @@ERROR and @@FETCH_STATUS.

Function and Stored Procedure Comparison:

  • Scope of modification: UDFs aren't allowed to modify the physical state of a database using INSERT, UPDATE, or DELETE statements; they can only work with local data.
  • Output parameters: UDFs can't return output parameters to the caller. They do, however, let us return a single scalar value or a locally created table.
  • Error handling: In UDFs, if an error occurs during a call to a stored procedure or a trigger, that statement is terminated and flow control continues through the UDF.
  • Calling conventions: One of the major differences between UDFs and stored procedures is that a UDF can be called from within a SQL statement without using the EXECUTE statement.

Creating Scalar Function:

CREATE FUNCTION dbo.mult (@num1 INT, @num2 INT)
RETURNS INT
AS
BEGIN
    RETURN (@num1 * @num2)
END
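Because the scalar function above is invoked like a built-in function, it can appear inline in a query; note that scalar UDFs must be called with at least a two-part name:

```sql
SELECT dbo.mult(6, 7) AS Product   -- returns 42
```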


Creating Table Valued Function :

CREATE FUNCTION getAuthorsByState (@state CHAR(2))
RETURNS TABLE
AS
RETURN (SELECT au_fname + ' ' + au_lname AS Name
        FROM authors
        WHERE state = @state)
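An inline table-valued function like the one above is then queried as if it were a table; this example assumes the pubs sample database that the function references:

```sql
SELECT * FROM getAuthorsByState('CA')
```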

Friday, January 9, 2009

SQL Server Cursors

Cursors :
These let you move through a result set one row at a time and perform processing on each row.
Cursors Keywords:

  • The DECLARE CURSOR statement defines the SELECT statement that forms the basis of the cursor.
  • The OPEN statement executes the SELECT statement and populates the result set.
  • The FETCH statement returns a row from the result set into the variable. You can select multiple columns and return them into multiple variables.
  • The variable @@FETCH_STATUS is used to determine if there are any more rows. It will contain 0 as long as there are more rows. We use a WHILE loop to move through each row of the result set.
  • cursor_name - The name of the server side cursor, must contain from 1 to 128 characters.

Types of Cursors:
  • LOCAL - Specifies that cursor can be available only in the batch, stored procedure, or trigger in which the cursor was created. The LOCAL cursor will be implicitly deallocated when the batch, stored procedure, or trigger terminates.
  • GLOBAL - Specifies that cursor is global to the connection. The GLOBAL cursor will be implicitly deallocated at disconnect.
  • FORWARD_ONLY - Specifies that cursor can only fetch data sequentially from the first to the last row. FETCH NEXT is the only fetch option supported.
  • STATIC - Specifies that cursor will use a temporary copy of the data instead of base tables. This cursor does not allow modifications and modifications made to base tables are not reflected in the data returned by fetches made to this cursor.
  • KEYSET - Specifies that the cursor uses the set of keys that uniquely identify its rows (the keyset), so that the membership and order of rows in the cursor are fixed when the cursor is opened. SQL Server uses a table in tempdb to store the keyset. Updates to nonkey values are allowed through this cursor, but inserts made by other users are not visible. Updates to nonkey values made by other users are visible as the owner scrolls around the cursor; updates to key values made by other users are not visible. If a row is deleted, an attempt to fetch the row returns an @@FETCH_STATUS of -2.
  • DYNAMIC - Specifies that cursor reflects all data changes made to the base tables as you scroll around the cursor. FETCH ABSOLUTE option is not supported with DYNAMIC cursor.
  • FAST_FORWARD - Specifies that cursor will be FORWARD_ONLY and READ_ONLY cursor. The FAST_FORWARD cursors produce the least amount of overhead on SQL Server.
  • READ ONLY - Specifies that cursor cannot be updated.
  • SCROLL_LOCKS - Specifies that the cursor will lock the rows as they are read into the cursor, to ensure that positioned updates or deletes made through the cursor will succeed.
  • OPTIMISTIC - Specifies that cursor does not lock rows as they are read into the cursor. So, the positioned updates or deletes made through the cursor will not succeed if the row has been updated outside the cursor since this row was read into the cursor.
  • TYPE_WARNING - Specifies that if the cursor will be implicitly converted from the requested type to another, a warning message will be sent to the client.
  • SELECT_STATEMENT - The standard SELECT statement; it cannot contain the COMPUTE, COMPUTE BY, FOR BROWSE, or INTO keywords.
  • UPDATE - Specifies that all cursor's columns can be updated (if OF column_name [,...n] is not specified), or only the columns listed in the OF column_name [,...n] list allow modifications
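Several of these options come together in a single DECLARE CURSOR statement; a minimal read-only sketch (the table and column names are hypothetical):

```sql
-- LOCAL + FAST_FORWARD: scoped to this batch, forward-only, read-only,
-- which is the lowest-overhead combination listed above
DECLARE order_cursor CURSOR
LOCAL FAST_FORWARD TYPE_WARNING
FOR SELECT OrderID, OrderDate
    FROM Orders
    WHERE ShippedDate IS NULL
```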

Optimization Tips:

  • Try to avoid using SQL Server cursors whenever possible.
    Using SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use correlated subquery or derived tables if you need to perform row-by-row operations.
  • Do not forget to close SQL Server cursor when its result set is not needed.
    To close SQL Server cursor you can use the CLOSE {cursor_name} command. This command releases the cursor result set and frees any cursor locks held on the rows on which the cursor is positioned.
  • Do not forget to deallocate SQL Server cursor when the data structures comprising the cursor are not needed.
    To deallocate SQL Server cursor, you can use the DEALLOCATE {cursor_name} command. This command removes a cursor reference and releases the data structures comprising the cursor.
  • Try to reduce the number of records to process in the cursor.
    To reduce the cursor result set, use the WHERE clause in the cursor's select statement. It can increase cursor performance and reduce SQL Server overhead.
  • Try to reduce the number of columns to process in the cursor.
    Include in the cursor's select statement only necessary columns. It will reduce the cursor result set. So, the cursor will use fewer resources. This can increase cursor performance and reduce SQL Server overhead.
  • Use READ ONLY cursors, whenever possible, instead of updatable cursors.
    Because using cursors can reduce concurrency and lead to unnecessary locking, try to use READ ONLY cursors if you do not need to update the cursor result set.
  • Try to avoid using insensitive, static, and keyset cursors, whenever possible.
    These types of cursors produce the largest amount of overhead on SQL Server, as they cause a temporary table to be created in tempdb, which results in some performance degradation.
  • Use FAST_FORWARD cursors, whenever possible.
    The FAST_FORWARD cursors produce the least amount of overhead on SQL Server as they are read-only cursors and can only be scrolled from the first to the last row. Use FAST_FORWARD cursor if you do not need to update cursor result set and the FETCH NEXT will be the only used fetch option.
  • Use FORWARD_ONLY cursors, if you need updatable cursor and the FETCH NEXT will be the only used fetch option.
    If you need read-only cursor and the FETCH NEXT will be the only used fetch option, try to use FAST_FORWARD cursor instead of FORWARD_ONLY cursor. By the way, if one of the FAST_FORWARD or FORWARD_ONLY is specified, the other cannot be specified



Use Case Scenarios:

  • Reporting - Based on the design of the reports and the underlying design, cursors are typically not needed
  • Serialized processing - If you have a need to complete a process in serialized manner, cursors are a viable option.
  • Administrative tasks - Many administrative tasks need to be executed in a serial manner, which fits nicely into cursor based logic, but other system based objects exist to fulfill the need. In some of those circumstances, cursors are used to complete the process.
  • Large data sets - With large data sets you could run into one or more of the following:
  • Cursor-based logic may not scale to meet the processing needs.
  • With large set-based operations on servers with a minimal amount of memory, the data may be paged or may monopolize the SQL Server; this is time consuming and can cause contention and memory issues. In such cases, a cursor-based approach may meet the need.
  • Some tools inherently cache the data to a file under the covers, so processing the data in memory may or may not actually be the case.
  • If the data can be processed in a staging SQL Server database, the impact on the production environment comes only when the final data is processed. All of the resources on the staging server can be used for the ETL processes, and the final data can then be imported.
  • SSIS supports batching sets of data, which may resolve the overall need by breaking a large data set into more manageable sizes, and may perform better than a row-by-row approach with a cursor.
  • Depending on how the cursor or SSIS logic is coded, it may be possible to restart at the point of failure based on a checkpoint or by marking each row with the cursor. With a set-based approach, however, that may not be possible until an entire set of data is completed. As such, troubleshooting the row with the problem may be more difficult.

Cursor Options and Related Stored Procedures:

  • sp_cursor_list : Returns a list of cursors currently visible on the connection and their attributes.
  • sp_describe_cursor : Describes the attributes of a cursor, such as whether it is a forward-only or scrolling cursor.
  • sp_describe_cursor_columns : Describes the attributes of the columns in the cursor result set.
  • sp_describe_cursor_tables : Describes the base tables accessed by the cursor.
Fetch Options:
  • FETCH FIRST: Fetches the first row in the cursor.
  • FETCH NEXT: Fetches the row after the previously fetched row.
  • FETCH PRIOR: Fetches the row before the previously fetched row.
  • FETCH LAST: Fetches the last row in the cursor
  • FETCH ABSOLUTE n: If n is a positive integer, it fetches the nth row in a cursor. If n is a negative integer, it fetches the nth row before the last row. If n is 0, no row is fetched.
  • FETCH RELATIVE n: If n is positive, it fetches the nth row after the previously fetched row. If n is negative, it fetches the nth row before the previously fetched row. If n is 0, the same row is fetched again.

Example 1:

Declare Flight_Cursor CURSOR SCROLL

FOR SELECT * FROM Flight ORDER BY Aircraft_Code

Open Flight_Cursor

Fetch First From Flight_Cursor

While @@Fetch_Status = 0

BEGIN

    Fetch Next From Flight_Cursor

End

Close Flight_Cursor

Deallocate Flight_Cursor

Example 2:

/* User variables must use a single @; the @@ prefix is reserved for system functions */
Declare @counter int
set @counter = 0

Declare @ProductID int
Declare @ProductName varchar(30)
Declare @Qty int

Declare special Cursor Local
Scroll Keyset Optimistic
/* List the columns explicitly so they match the FETCH ... INTO variables */
For Select ProductID, ProductName, Qty from Products

Open special /* Opening the cursor */

fetch Absolute 1 from special into @ProductID, @ProductName, @Qty

while @@Fetch_Status <> -1

begin

    Print @ProductID
    Print @ProductName
    Print @Qty

    /* WHERE CURRENT OF updates only the row the cursor is positioned on */
    Update Products set ProductName = @counter where current of special
    set @counter = @counter + 1

    fetch next from special into @ProductID, @ProductName, @Qty

end

close special

Deallocate special

Cursor Locks:

READ ONLY: This prevents any updates on the table.

SCROLL LOCK: This allows you to make changes to the table.

OPTIMISTIC: This checks whether the table row has changed since it was fetched, before updating. If there is no change, the cursor can perform the update.

Thursday, January 8, 2009

SQL Server Views

What is View:


  • A view is a virtual table that consists of columns from one or more tables. Though it is similar to a table, only the view's definition is stored in the database, not its data.
  • It is a query stored as an object. Hence, a view is an object that derives its data from one or more tables. These tables are referred to as base or underlying tables.
  • Once you have defined a view, you can reference it like any other table in a database.

Scenarios and Behaviour:

  • A view serves as a security mechanism. This ensures that users are able to retrieve and modify only the data seen by them.
  • Users cannot see or access the remaining data in the underlying tables. A view also serves as a mechanism to simplify query execution.
  • Complex queries can be stored in the form of a view, and data from the view can be extracted using simple queries.
  • Ensure the security of data by restricting access to the following data:
  • Specific rows of the tables.
  • Specific columns of the tables.
  • Specific rows and columns of the tables.
  • Rows fetched by using joins.
  • Statistical summaries of data in given tables.
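The row- and column-level restrictions listed above can be sketched as a view that exposes only part of a table (the table and column names are hypothetical):

```sql
-- Users granted SELECT on the view see only two columns
-- and only the rows for one department, without any
-- permission on the underlying Employees table
CREATE VIEW dbo.SalesEmployees
AS
SELECT EmployeeID, LastName
FROM Employees
WHERE Department = 'Sales'
```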

Creating View:

CREATE VIEW MyView AS

SELECT CustomerId, CompanyName FROM Customers

Accessing View Data:

select * from MyView

Scenarios for Using Views:

  • Views are generally used to focus, simplify, and customize the perception each user has of the database. Views can be used as security mechanisms by allowing users to access data through the view, without granting the users permissions to directly access the underlying base tables of the view. Views can also be used, when copying data to and from Microsoft® SQL Server™ 2000, to improve performance and to partition data.
    To Focus on Specific Data
  • Views allow users to focus on specific data that interests them and on the specific tasks for which they are responsible. Unnecessary data can be left out of the view. This also increases the security of the data, because users can see only the data that is defined in the view and not the data in the underlying table.
  • To Simplify Data Manipulation
    Views can simplify how users manipulate data. You can define frequently used joins, projections, UNION queries, and SELECT queries as views so that users do not have to specify all the conditions and qualifications each time an additional operation is performed on that data. For example, a complex query that is used for reporting purposes and performs subqueries, outer joins, and aggregation to retrieve data from a group of tables can be created as a view. The view simplifies access to the data because the underlying query does not have to be written or submitted each time the report is generated; the view is queried instead.
    You can also create inline user-defined functions that logically operate as parameterized views, or views that have parameters in WHERE-clause search conditions.
    To Customize Data
  • Views allow different users to see data in different ways, even when they are using the same data concurrently. This is particularly advantageous when users with many different interests and skill levels share the same database. For example, a view can be created that retrieves only the data for the customers with whom an account manager deals. The view can determine which data to retrieve based on the login ID of the account manager who uses the view.
    To Export and Import Data:
  • Views can be used to export data to other applications. For example, you may want to use the stores and sales tables in the pubs database to analyze sales data using Microsoft® Excel. To do this, you can create a view based on the stores and sales tables. You can then use the bcp utility to export the data defined by the view. Data can also be imported into certain views from data files using the bcp utility or the BULK INSERT statement, provided that rows can be inserted into the view using the INSERT statement.
  • The Transact-SQL UNION set operator can be used within a view to combine the results of two or more queries from separate tables into a single result set. This appears to the user as a single table called a partitioned view. For example, if one table contains sales data for Washington, and another table contains sales data for California, a view could be created from the UNION of those tables. The view represents the sales data for both regions.
  • To use partitioned views, you create several identical tables, specifying a constraint to determine the range of data that can be added to each table. The view is then created using these base tables. When the view is queried, SQL Server automatically determines which tables are affected by the query and references only those tables.
  • For example, if a query specifies that only sales data for the state of Washington is required, SQL Server reads only the table containing the Washington sales data; no other tables are accessed.
  • Partitioned views can be based on data from multiple heterogeneous sources, such as remote servers, not just tables in the same database. For example, to combine data from different remote servers each of which stores data for a different region of your organization, you can create distributed queries that retrieve data from each data source, and then create a view based on those distributed queries.
  • Queries read data only from the tables on the remote servers that contain the data requested by the query; the other servers referenced by the distributed queries in the view are not accessed.
  • When you partition data across multiple tables or multiple servers, queries accessing only a fraction of the data can run faster because there is less data to scan. If the tables are located on different servers, or on a computer with multiple processors, each table involved in the query can also be scanned in parallel, thereby improving query performance. Additionally, maintenance tasks, such as rebuilding indexes or backing up a table, can execute more quickly.
  • By using a partitioned view, the data still appears as a single table and can be queried as such without having to reference the correct underlying table manually.
    Partitioned views are updatable if either of these conditions is met:
  • An INSTEAD OF trigger is defined on the view with logic to support INSERT, UPDATE, and DELETE statements.
  • Both the view and the INSERT, UPDATE, and DELETE statements follow the rules defined for updatable partitioned views.
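A local partitioned view can be sketched as member tables whose CHECK constraints define each range, combined with UNION ALL (the table names are hypothetical):

```sql
-- Each member table holds one region, enforced by a CHECK constraint
-- on the partitioning column, which is part of the primary key
CREATE TABLE Sales_WA (
    SaleID INT NOT NULL,
    State  CHAR(2) NOT NULL CHECK (State = 'WA'),
    Amount MONEY NOT NULL,
    PRIMARY KEY (SaleID, State)
)
CREATE TABLE Sales_CA (
    SaleID INT NOT NULL,
    State  CHAR(2) NOT NULL CHECK (State = 'CA'),
    Amount MONEY NOT NULL,
    PRIMARY KEY (SaleID, State)
)
GO
-- The view presents both regions as one table; a query with
-- WHERE State = 'WA' touches only Sales_WA
CREATE VIEW SalesAll
AS
SELECT SaleID, State, Amount FROM Sales_WA
UNION ALL
SELECT SaleID, State, Amount FROM Sales_CA
```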

Disadvantages of Views:
A view does not accept parameters, so it always returns its full result set. If you have a view joining 10 tables and one of those tables has a million rows, your view will return at least 1 million rows; joining such a view to a few other tables within a stored procedure could be disastrous.
What should you do? A better alternative in such cases would be a user-defined function (UDF), which accepts parameters and allows you to limit the number of returned rows. Another option is to join all the tables within a single stored procedure and limit the output by using parameters.
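The UDF alternative suggested above can be sketched as an inline table-valued function acting as a parameterized view (the object names are hypothetical):

```sql
-- The parameter limits the rows before the result is joined elsewhere,
-- which a plain view cannot do
CREATE FUNCTION dbo.OrdersByCustomer (@CustomerId INT)
RETURNS TABLE
AS
RETURN (SELECT OrderID, OrderDate, Amount
        FROM Orders
        WHERE CustomerId = @CustomerId)
GO
SELECT * FROM dbo.OrdersByCustomer(42)
```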

SQL Server Joins

Joins:
Joins in SQL Server allow the retrieval of data records from two or more tables that have some relation between them. Logical operators can also be used to narrow down the number of records and get the desired output from SQL join queries.
Types:

  • Inner Join
  • Outer Join 1)Left Outer Join 2)Right Outer Join 3)Full Outer Join
  • Cross Join
  • Inner Join: Inner Join is the default join type in SQL Server. It uses comparison operators such as =, <, and > to match the records in two tables. Inner joins include equi joins and natural joins.
  • Example:
    SELECT C.CATEGORYID, C.CATEGORYNAME, P.PRODUCTID, P.PRODUCTNAME, P.UNITPRICE
    FROM CATEGORIES C INNER JOIN PRODUCTS P ON P.CATEGORYID = C.CATEGORYID
    WHERE P.UNITPRICE = 10
    ORDER BY C.CATEGORYNAME, P.PRODUCTNAME
  • Outer Join:
  • Outer Join has further 3 sub categories as left, right and full. Outer Join uses these category names as keywords that can be specified in the FROM clause.
    Left Outer Join: Left Outer Join returns all the rows from the table specified first in the Left Outer Join clause. If any row in the left table has no matching record in the right table, the right table's columns return null values for that row.
  • Example:
    SELECT A.AU_FNAME, A.AU_LNAME, P.PUB_NAME
    FROM AUTHORS A LEFT OUTER JOIN PUBLISHERS P ON A.CITY = P.CITY
    ORDER BY A.AU_LNAME, A.AU_FNAME

    Sample rows (note the NULL where an author's city has no matching publisher):
    Cheryl   Carson     Algodata Infosystems
    Michel   DeFrance   NULL

    Right Outer Join:
  • Right Outer Join is exactly the reverse method of Left Outer Join. It returns all the rows from right table and returns null values for the rows having no match in the left joined table.
    Example:
    SELECT A.AU_FNAME, A.AU_LNAME, P.PUB_NAME
    FROM AUTHORS A RIGHT OUTER JOIN PUBLISHERS P ON A.CITY = P.CITY
    ORDER BY A.AU_LNAME, A.AU_FNAME

    Full Outer Join: Full outer join returns all the rows from both left and right joined tables. If there is any match missing from the left table then it returns null column values for left side table and if there is any match missing from right table then it returns null value columns for the right side table.
    SELECT A.AU_FNAME, A.AU_LNAME, P.PUB_NAME
    FROM AUTHORS A FULL OUTER JOIN PUBLISHERS P ON A.CITY = P.CITY
    ORDER BY A.AU_LNAME, A.AU_FNAME
  • Cross Join: Cross join works as a Cartesian product of the rows of both the left and right tables. It combines each row of the left table with every row of the right table.
  • Example:
    SELECT AU_FNAME, AU_LNAME, PUB_NAME
    FROM AUTHORS CROSS JOIN PUBLISHERS
    WHERE AUTHORS.CITY = PUBLISHERS.CITY
    ORDER BY AU_FNAME

Wednesday, January 7, 2009

SQL Server Interview Questions

What is DTS in SQL Server ?

Data Transformation Services (DTS) is used to transfer data from one source to a required destination. For example, if I have data in SQL Server and need to transfer it to an Excel destination, that is easily done with this dialogue-based tool. More customization can be achieved using SSIS, a specialized tool used for such migration work.

What is the significance of NULL value and why should we avoid permitting null values?

Null means no entry has been made; it implies that the value is either unknown or undefined. We should avoid permitting null values because a column with NULL values can't have a PRIMARY KEY constraint, and certain calculations can be inaccurate if NULL columns are involved.

Difference between primary key and Unique key?

  • Both constraints share a common property called uniqueness: the data in the column should be unique.
  • The basic difference is that a primary key won't allow null values, whereas a unique key will accept null values (but only one null value).
  • Creating a primary key automatically creates a clustered index on the table; this characteristic is not associated with a unique key.
  • Only one primary key can be created for a table; any number of unique keys can be created for a table.
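These differences can be sketched in one table definition (the names are hypothetical):

```sql
CREATE TABLE Customers (
    CustomerId INT NOT NULL PRIMARY KEY,  -- no NULLs; one per table; clustered index by default
    Email   VARCHAR(100) NULL UNIQUE,     -- several unique keys allowed per table
    TaxCode VARCHAR(20)  NULL UNIQUE      -- each may hold a single NULL
)
```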

What is normalization?

Normalization is the basic concept used in designing a database. It is, in effect, guidance for the database to have minimal repetition of data while being highly structured, highly secured, and easy to retrieve from. At a high level, the process of organizing data into tables is referred to as normalization.

Denormalization: As the name indicates, it is the opposite of normalization: introducing redundancy into the database. When a database becomes too complex and too many joins would be needed to fetch the data, we may prefer denormalization. In a narrow sense, it means joining the divided or segregated tables.

What is the difference between UNION ALL and UNION?

The main difference is that UNION ALL is much faster than UNION. The reason is that UNION ALL does not look for duplicate rows, whereas UNION does look for duplicate rows, whether or not they exist.
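The duplicate-handling difference between UNION and UNION ALL can be sketched with literal SELECTs, with no tables required:

```sql
SELECT 1 AS n UNION     SELECT 1   -- one row: UNION removes the duplicate
SELECT 1 AS n UNION ALL SELECT 1   -- two rows: UNION ALL keeps both
```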

What is a view?

If we have several tables in a database and we want to view only specific columns from specific tables, we can go for views. Views also serve the needs of security, allowing specific users to see only specific columns based on the permissions that we can configure on the view. Views also reduce the effort required to write queries that access specific columns every time.
What are the types of indexes available with SQL Server?

There are basically two types of indexes that we use with SQL Server:

Clustered Index

1. It will format the entire table, inturn physically sort the table.

2. Only one clustered index can be created for a table.

3. Data will be located in the leaf level.

4. By default, a primary key will create a clustered index on the table.

Non-Clustered Index

1. It won't touch the structure of the table.

2. It forms an index table as reference to the exact data.

3. A reference to the data will be located in the leaf level.

4. For a table, we can create up to 249 non-clustered indexes.
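Creating the two index types can be sketched as follows (the table and column names are hypothetical):

```sql
-- Physically orders the table data; only one per table
CREATE CLUSTERED INDEX IX_Orders_OrderID ON Orders (OrderID)

-- A separate structure whose leaf level references the data;
-- up to 249 per table in SQL Server 2005
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders (CustomerID)
```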

Stored Procedures vs. User-Defined Functions in SQL Server

Stored Procedure: A pre-compiled object incorporating a set of T-SQL statements. We can't use a stored procedure in joins, and a stored procedure won't return table variables.

User-Defined Function: A set of T-SQL statements; repeated T-SQL statements can be grouped as a user-defined function. We can use UDFs in joins, and a UDF can return table variables.

Maximum capacities for SQL Server 2005:

Table Level:

  • Maximum number of tables : No restriction (you can create tables until your hard disk is full)
  • Maximum bytes per row in a table : 8060 bytes
  • Maximum tables in a SELECT statement : 256
  • Maximum foreign key references per table : 253
  • Maximum rows in a table : Unlimited (depends on storage)
  • Maximum columns per base table : 1024
  • Maximum columns per INSERT statement : 1024
  • Maximum columns per SELECT statement : 4096 (1024 * 4)
  • Maximum bytes per GROUP BY or ORDER BY : 8000
    Index:
  • Maximum clustered index per table - 1
  • Maximum non-clustered indexes per table - 249
  • Maximum columns in an index (composite index) - 16
    Keys:
  • Maximum Columns in a Primary key - 16
  • Maximum Columns in a Foreign key - 16
    Objects:
  • Nested Stored procedure levels- 32
  • Nested triggers levels- 32
  • Parameters per stored procedure - 2100
  • Parameters per User defined functions - 2100
  • Maximum identifier length - 128
    Database Level Details:
  • Number of instances for a server - 50
  • Files per database - 32767
  • Filegroups per database - 32767
  • Databases per instance of SQL Server – 32767


Delete VS Truncate:

  • DELETE removes rows from the table one at a time. The syntax is: DELETE FROM tablename. We can give WHERE conditions to choose the data to delete. Each delete is logged in the log file. It is a DML statement.
  • TRUNCATE deallocates the data pages instead of deleting the data row by row. It is a DDL statement. The syntax is: TRUNCATE TABLE tablename.
  • TRUNCATE cannot be used on tables that are referenced by foreign key relationships (because it won't do any cascade delete on the table). It doesn't log individual row deletions in the log file, and it removes the data in one stretch without any row-by-row manipulation.
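The contrast can be sketched as follows (the table name is hypothetical):

```sql
DELETE FROM Orders WHERE OrderDate < '20080101'   -- row by row, fully logged, WHERE allowed
TRUNCATE TABLE Orders                             -- deallocates pages, minimally logged, no WHERE
```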

Chennai .Net Users group

Hi friends, if anybody is working in Chennai, or moving to Chennai to work on Microsoft technologies, here is the place where you can share your knowledge, attend events on Microsoft technologies, etc.
Chennai .Net Users Group:
http://cnug.co.in

BDOTNET Event In Bangalore (On Microsoft Technologies)

Hi friends, it's nice to have a group called BDOTNET, because it provides the latest updates on Microsoft technologies. My friends and I have attended BDOTNET presentations many times, and the presentations were fantastic. If you are in Bangalore, attend this nice event conducted by the BDOTNET group.
UG Meet on January 10th
Session 1 : 4PM - 5PM
Topic : Data Services in ASP.NET 3.5
Chaitra
Session 2 : 5PM - 6PM
Topic : Biztalk Overview
Kashi


Venue : Microsoft Signature building, Embassy Golf Links (EGL) campus, Koramangala inner ring road, near Domlur flyover, Bangalore.

via http://bdotnet.in/

Tuesday, January 6, 2009

The Top 10 Products of the Year

via http://www.eweek.com/


At the end of each year, eWEEK Labs analysts look back on the products they have reviewed and pick the ones that stand out for their innovation and ability to meet the ever-changing—and increasingly challenging—needs of the enterprise. Combined, the products on this year’s list form the foundation of the next-gen enterprise.
Adobe AIR
Going back to the late 1990s, people have been talking about the Web as an operating system. Many believe that it was because of these ideas that Microsoft decided to aggressively go after Netscape.
To help further this vision, many people at this time began working on rich Internet applications, or RIAs. These included everything from Macromedia Flash to Java to Curl to ActiveX to some technologies that no longer exist. These products made it possible to build more interactive and GUI-like interfaces for Web applications, though in most cases the apps stayed tied to the browser.
What have become the next generation of RIAs are Web applications that can run independently of a browser—applications that run like those on a desktop but that still use Web services and interfaces for data and interactivity.
The product that comes closest right now to the true vision of this next-generation Web application is Adobe AIR. Descended from Flash, AIR makes it possible to build powerful and interactive applications that have all of the benefits of both Web and desktop apps (such as offline capabilities, in the latter case).
—Jim Rapoza
Agito RoamAnywhere Solution
With mobile phones becoming the preferred mode of voice communication for many employees, mobile phone costs are skyrocketing for many companies.
FMC (fixed mobile convergence) solutions solve this problem, allowing end users to seamlessly leverage Wi-Fi for voice over IP communications while using their corporate PBX profile on the road.
With its new emphasis on enterprise-grade redundancy and security—as well as integration into Cisco’s Motion to aid in roaming decisions among networks—Agito’s RoamAnywhere Solution builds upon its already outstanding fingerprint location deterministics, line-side PBX integration and solid client-side software. All of this provides an outstanding solution to help extend companies’ existing infrastructure to the mobile work force.
—Andrew Garcia

Cisco Catalyst 4900M
Cisco Systems’ newly minted Catalyst 4900M can help network engineers navigate the transition to 10G by offering a mixture of fixed and swappable, card-based ports.
As servers become more densely consolidated, and as the data sets that back-end applications are called on to process continue to swell, IT organizations must seek out strategies for transitioning their direct-to-server connectivity from 1Gb to 10Gb Ethernet.
The 4900M is a 2U (3.5-inch) form factor data center switch that's designed to sit atop a rack of servers, aggregate their traffic and uplink to an end-of-row switch such as a Catalyst 6500. The “M” in 4900M stands for modular, with the intention that 1G modules will be replaced with 10G modules as data center server network connections increase in bandwidth.
—Cameron Sturdevant
Firefox 3
This year may have been the most exciting ever when it comes to Web browsers. Several significant Web browsers were released—from the latest versions of Opera and Safari to the surprise entry of the intriguing Chrome browser from Google—and even Microsoft made significant strides with the beta release of Internet Explorer 8.
But, with the release of Firefox 3, Mozilla cemented its place at the top of the Web browser pile.
The latest release of Firefox hit all of the standard browser notes, with improved security, usability and customization options. However, some of the biggest improvements were under the covers, especially in offline capabilities and the browser’s ability to integrate with and take advantage of next-generation Web technologies.
These improvements haven’t gone unnoticed, with Firefox recently topping 20 percent in browser market share. Best of all, we now have something much better than a browser war—we have a vibrant and active browser market, with a lot of choices. And given some of the interesting features we expect to see in forthcoming browsers, Firefox will have to work hard to maintain its position in 2009.
—J.R.
iPhone 2.0 + the AppStore
Forget the new 3G-enabled iPhone hardware (hampered by AT&T’s spotty 3G coverage); Apple truly hit the mark with the iPhone 2.0 software that came preloaded on the new devices and was available as a free upgrade for first-generation units.
The new software made the iPhone palatable for enterprise consumption—adding improved Wi-Fi security, a Cisco VPN client and support for Microsoft Exchange e-mail environments—even though the enterprise management tools for the iPhone are still lacking.
However, the most significant improvement came with the introduction of the AppStore, along with the iPhone’s new support for third-party applications. The iPhone suddenly became much more than a phone and music player—it’s now a gaming platform, a productivity tool, a powerful vehicle for search and a general lifestyle enhancer for whatever a given user’s interests may be.
And with an increasing stable of enterprise applications on the way (look out for a Citrix client next year), an industry-leading mobile browser for Web-based solutions and a wide base of adoption, the iPhone is quickly becoming enterprise-capable.
—A.G.

IXIA iSimCity
Network performance test maker IXIA launched its iSimCity lab in Silicon Valley in February. In Phase 1, iSimCity focused on classic switch and router testing. In subsequent phases, iSimCity is expected to host data center performance benchmarks, including VOIP gateways, video servers, firewalls and e-mail.
The test center is expected to host city-scale demonstrations of high-performance triple-play testing. Using 1G and 10G hardware, engineers will use IxLoad software to measure application performance.
During the times that I’ve used the center, it’s been minimally provisioned, as IXIA kept only the equipment needed for specific tests on hand. When iSimCity is fully outfitted, it will house all the IXIA chassis, applications, interfaces, infrastructure components and professional expertise needed to run city-scale service emulations.
—C.S.
Nokia E71
Nokia really knocked it out of the park with the E71, packing just about everything a user—or a third-party mobile software developer—could want from a smartphone. Despite its relatively small screen and lack of a touch-screen interface, the E71 was the best overall device eWEEK Labs tested this year.
The smartphone offers revamped support for Microsoft Exchange e-mail via the new Mail for Exchange application. However, the E71 also aims to blend in capabilities to solve next-generation problems.
For example, with its dual Home Screen modes, users can switch the customizations of the device to fit both their personal and work needs. In addition, because the E71 offers the right mix of Wi-Fi performance, codecs, sound performance and open APIs, the device has become the de facto standard for FMC vendors.
—A.G.

Ubuntu 8.04
The dizzyingly diverse world of Linux-based operating systems is, for the most part, divided into two separate camps: those of conservative, stable and pay-per-machine "enterprise" distributions, and those of up-to-date, short-lived, free-of-charge "enthusiast" options.
Ubuntu 8.04—specifically in its desktop iteration—has earned a spot on our list of top products of 2008 by bridging the enterprise-vs.-enthusiast divide with a freely available Linux-based OS and a support term of three years, compared with about 18 months for Red Hat's and Novell's enthusiast distributions.
While "free" is an attractive quality in its own right, what's most appealing about Ubuntu is the broad and active community that has rallied around Ubuntu and its unified enterprise/enthusiast release structure. Ubuntu 8.04 features most of the same software components as other distributions, but Ubuntu's popularity results in more ready-to-install software packages, more troubleshooting answers out on the Web and, increasingly, more OEM preload availability than other desktop Linux options.
—Jason Brooks
VMware VI3
VMware’s Virtual Infrastructure 3 offers enterprises an impressive, mature framework for making virtualization promises a reality.
The foundation of VI3 is VMware’s ESX Server. The platform supports a wide variety of guest OSes, including Windows, Solaris and Linux. The suite uses VMware’s VirtualCenter management server to control systems. Companies looking to consolidate single-application servers, squeeze more out of underutilized hardware, extend the availability of their networked services and get a surer handle on the machines in their data centers would do well to evaluate VI3, which can deliver compelling results in any of these scenarios.
VMware’s product line is the clear leader among x86- and x86-64-based server virtualization products, and VI3 is the firm’s flagship product. I do recommend keeping an eye on the emerging Xen-based offerings from Virtual Iron and XenSource, as well as on the Xen-based functionality that’s built into Novell’s SUSE Linux Enterprise Server and Red Hat’s Red Hat Enterprise Linux 5.
—C.S.
Windows Server 2008
At a time when Vista missteps had many questioning Microsoft's product development mojo, the release of Windows Server 2008 demonstrated that the company is capable of putting out a lean and well-performing operating system with features in sync with customer needs.
In recognition of the growing importance of "just enough OS" system configurations, such as in virtual machine deployments, Windows Server 2008 introduced a stripped-down Server Core configuration for hosting certain Windows Server roles.
What's more, Windows Server 2008 featured a much more modular and securable Web server in IIS (Internet Information Services) 7.0, marked the debut of Microsoft's own hypervisor technology in Hyper-V, and introduced a slate of manageability enhancements for both command-line and GUI adherents.
—J.B.

Tech that reads your mind

via http://news.cnet.com/
Produced by Shari Finkelstein
"Tech that explores your secrets..."
How often have you wondered what your spouse is really thinking? Or your boss? Or the guy sitting across from you on the bus? We all take as a given that we'll never really know for sure. The content of our thoughts is our own--private, secret, and unknowable by anyone else. Until now, that is.

As correspondent Lesley Stahl reports, neuroscience research into how we think and what we're thinking is advancing at a stunning rate, making it possible for the first time in human history to peer directly into the brain to read out the physical make-up of our thoughts, some would say to read our minds.

The technology that is transforming what once was science fiction into just plain science is a specialized use of MRI scanning called "functional MRI," fMRI for short. It makes it possible to see what's going on inside the brain while people are thinking.
"You know, every time I walk into that scanner room and I see the person's brain appear on the screen, when I see those patterns, it is just incredible, unthinkable," neuroscientist Marcel Just told Stahl.
He calls it "thought identification."

Whatever you want to call it, what Just and his colleague Tom Mitchell at Carnegie Mellon University have done is combine fMRI's ability to look at the brain in action with computer science's new power to sort through massive amounts of data. The goal: to see if they could identify exactly what happens in the brain when people think specific thoughts.
They did an experiment where they asked subjects to think about 10 objects--5 of them tools like screwdriver and hammer, and 5 of them dwellings, like igloo and castle. They then recorded and analyzed the activity in the subjects' brains for each.

"The computer found the place in the brain where that person was thinking 'screwdriver'?" Stahl asked.

"Screwdriver isn't one place in the brain. It's many places in the brain. When you think of a screwdriver, you think about how you hold it, how you twist it, what it looks like, what you use it for," Just explained.

He told Stahl each of those functions is in a different place.
When we think "screwdriver" or "igloo" for example, Just says neurons start firing at varying levels of intensity in different areas throughout the brain. "And we found that we could identify which object they were thinking about from their brain activation patterns," he said.
"We're identifying the thought that's occurring. It's...incredible, just incredible," he added.
"Are you saying that if you think of a hammer, that your brain is identical to my brain when I think of a hammer?" Stahl asked.
"Not identical. We have idiosyncrasies. Maybe I've had a bad experience with a hammer and you haven't, but it's close enough to identify each other's thoughts. So, you know, that was never known before," Just explained.

60 Minutes asked if his team was up for a challenge: would they take associate producer Meghan Frank, whose brain had never been scanned before, and see if the computer could identify her thoughts? Just and Mitchell agreed to give it a try and see if they could do it in almost real time.
Just said nobody had ever done an instant analysis like this.
Inside the scanner, Meghan was shown a series of ten items and asked to think for a few seconds about each one.

"If it all comes out right, when she's thinking 'hammer,' the computer will know she's thinking 'hammer'?" Stahl asked.
"Right," Mitchell replied.

Within minutes, the computer, unaware of what pictures Meghan had been shown and working only from her brain activity patterns as read out by the scanner, was ready to tell us, in its own voice, what it believed was the first object Meghan had been thinking about.
The computer correctly analyzed the first three words--knife, hammer, and window, and aced the rest as well.

According to Just, this is just the beginning.
"Who knows what you're gonna be able to read," Stahl commented. "A little scary, actually."
"Well, that's our research program for the next five years," Just said. "To see what, you know--we're not satisfied with "hammer."

And neither are neuroscientists 4,000 miles away in Berlin at the Bernstein Center. John Dylan-Haynes is hard at work there using the scanner not just to identify objects people are thinking about, but to read their intentions.

Subjects were asked to make a simple decision--whether to add or subtract two numbers they would be shown later on. Haynes found he could read directly from the activity in a small part of the brain that controls intentions what they had decided to do.

"This is a kind of blown-up version of the brain activity happening here. And you can see that if a person is planning to add or to subtract, the pattern of brain activity is different in these two cases," Haynes explained.

The future of crime-solving?
"I always tell my students that there is no science fiction anymore. All the science fiction I read in high school, we're doing," Paul Root Wolpe, director of the Center for Ethics at Emory University in Atlanta, told Stahl.

To Wolpe, the ability to read our thoughts and intentions this way is revolutionary. "Throughout history, we could never actually coerce someone to reveal information. Torture doesn't work that well, persuasion doesn't work that well. The right to keep one's thoughts locked up in their brain is amongst the most fundamental rights of being human."

"You're saying that if someone can read my intentions, we have to talk about who might in the future be able to do that?" Stahl asked.

"Absolutely," he replied. "Whether we're going to let the state do it or whether we're going to let me do it. I have two teenage daughters. I come home one day and my car is dented and both of them say they didn't do it. Am I going to be allowed to drag them off to the local brain imaging lie detection company and get them put in a scanner? We don't know."

But before we've even started the debate, there are two companies already offering lie detection services using brain scans, one with the catchy name "No Lie MRI." But our experts cautioned that the technique is still unproven.

In the meantime, Haynes is working on something he thinks may be even more effective: reading out from your brain exactly where you've been. Haynes showed Stahl an experiment he created out of a video game.

He had Stahl navigate through a series of rooms in different virtual reality houses.
"Now I would put you in a scanner and I would show you some of these scenes that you've seen and some scenes that you haven't seen," he told her.

Stahl recognized the bar. "And right at this moment, we would be able to tell from your brain activity that you've already seen this environment before," Haynes explained.
"And so, this is a potential tool...for the police...in the case of break-ins?" Stahl asked.
"You might be able to tell if someone's been in an al Qaeda training camp before," Haynes replied.

Haynes said while U.S. national security agencies had not been in touch with him, the Germans had.

"So there are people who are considering these kinds of possibilities," Stahl commented.
And some are using them. In India last summer, a woman was convicted of murder after an EEG of her brain allegedly revealed that she was familiar with the circumstances surrounding the poisoning of her ex-fiance.

"Can you through our legal system be forced to take one of these tests?" Stahl asked Paul Root Wolpe.

"It's a great question. And the legal system hasn't decided on this yet," he said.
"But we do have a Fifth Amendment. We don't have to incriminate ourselves," Stahl pointed out.
"Well here's where it gets very interesting, because the Fifth Amendment only prevents the courts from forcing us to testify against ourselves. But you can force me to give DNA or a hair sample or blood even if that would incriminate me. So here's the million dollar question: if you can brain image me and get information directly from my brain, is that testimony? Or is that like DNA, blood, semen, and other things that you could take from me?" Wolpe asked.
"There will be a Supreme Court case about this," he predicted.

For now, it's impossible to force someone to have his or her brain scanned, because the subject has to lie still and cooperate, but that could change.

"There are some other technologies that are being developed that may be able to be used covertly and even remotely. So, for example, they're trying to develop now a beam of light that would be projected onto your forehead. It would go a couple of millimeters into your frontal cortex, and then receptors would get the reflection of that light. And there's some studies that suggest that we could use that as a lie detection device," Wolpe said.

He said we wouldn't know if our brains were being scanned. "If you were sitting there in the airport and being questioned, they could beam that on your forehead without your knowledge. We can't do that yet, but they're working on it."

Scary as that is, imagine a world where companies could read our minds too.
Light beams may be a bit far off, but fMRI scanning is already being used to try to figure out what we want to buy and how to sell it to us. It's a new field called "neuromarketing." One of its pioneers is neuroscientist Gemma Calvert, co-founder of a London company called Neurosense.
Asked if she has a lot of clients, Calvert told Stahl, "Yes, such as Unilever, Intel, McDonald's, Procter & Gamble, MTV or Viacom."

And she says it's a growing field. "What we've seen is a sort of snowballing effect over the last few years. I think there are about 92 neuromarketing agencies worldwide."
Proceed with caution
But some experts question whether it's ethical to scan the brain for commercial purposes, and say neuromarketers may be promising more than they can really deliver.

"If you image my brain, and you say, 'Ah-ha! Paul craves chocolate chip cookies,' and I say, 'No, I don't,' now are you going to believe the brain over me? You can only do that if you have proven that that part of the brain lighting up means in all cases that that person desires chocolate chip cookies. And what a lot of people are doing is they're just imaging the brain, and then they're declaring what that means, and they're never proving that it actually translates into behavior," Wolpe said.

"You know it's very interesting. When you show someone a brain scan, people just believe it. It just reeks of credibility," Stahl commented.
"Absolutely. Absolutely," John Dylan-Haynes agreed.
"And you telling me, 'That's the area where people add and subtract,' I thought, 'Well, of course. He knows,'" Stahl said.

"But I could have told you anything," he pointed out.
So as brain imaging continues to advance and find its way into the courts, the market, and who knows what other aspects of our lives, one message is: be cautious. Another is to get ready. Back at Carnegie Mellon, Just and Mitchell have already uncovered the signatures in our brains for kindness, hypocrisy, and love.

"It's breathtaking," Stahl said. "And kind of eerie."
"Well, you know, I think the reason people have that reaction is because it reveals the essence of what it means to be a person. All of those kinds of things that define us as human beings are brain patterns," Just replied.

"We don't wanna know that...it all boils down to, I don't know, molecules and things like that," Stahl said.

"But we are, you know, we are biological creatures. You know, our limbs we accept are, you know, muscles and bone. And our brain is a biological thinking machine," he replied.
"Do you think one day, who knows how far into the future, there'll be a machine that'll be able to read very complex thought like 'I hate so-and-so'? Or you know, 'I love the ballet because...'?" Stahl asked.

"Definitely. Definitely," Just said. "And not in 20 years. I think in three, five years."
"In three years?" Stahl asked.
"Well, five," Just replied with a smile.
Watch video at http://news.cnet.com/8301-11386_3-10131643-76.html?tag=newsCategoryArea.4

Monday, January 5, 2009

SQL Server Triggers

What is Trigger:


  • Triggers allow us to execute a batch of SQL code when an insert, update or delete command is executed against a specific table
  • One of these objects that many developers overlook is the trigger. Triggers are "attached" to a table and allow us to set up our database in such a way that whenever a record is added, updated, or deleted from a table, SQL Server will automatically execute a batch of SQL code after that table modification takes place.
  • A trigger is an object contained within an SQL Server database that is used to execute a batch of SQL code whenever a specific event occurs. As the name suggests, a trigger is “fired” whenever an INSERT, UPDATE, or DELETE SQL command is executed against a specific table.
  • Sample Syntax for Trigger:
  • CREATE TRIGGER tr_Source_INSERT ON Source FOR INSERT AS PRINT GETDATE()
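To see the sample trigger above fire, insert a row into the table it is attached to. This sketch assumes a simple hypothetical table named Source:

```sql
-- Hypothetical table for the trigger to attach to
CREATE TABLE Source (id INT, name VARCHAR(50));
GO

-- The sample trigger from above: prints the current date on every INSERT
CREATE TRIGGER tr_Source_INSERT ON Source FOR INSERT
AS PRINT GETDATE()
GO

-- This insert fires the trigger automatically and prints the date
INSERT INTO Source (id, name) VALUES (1, 'test');
```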

Types of Triggers :
SQL Server 2005 added some new types of triggers:

1) DML Trigger:

1) AFTER Triggers:

  • An AFTER trigger is executed only after the triggering SQL statement, including all referential cascade actions and constraint checks associated with the object updated or deleted, is executed successfully.
  • The AFTER trigger operation checks for the effects of the triggering statement as well as all referential cascade UPDATE and DELETE actions caused by the triggering statement.
  • AFTER is the default, if only the FOR keyword is specified.
  • AFTER triggers may be defined only on tables.
  • Sample Example:
    CREATE TRIGGER job_delete ON jobs FOR DELETE AS
    DECLARE @job_id INT
    SELECT @job_id = (SELECT job_id FROM DELETED)
    IF EXISTS (SELECT job_id FROM employees WHERE job_id = @job_id)
    BEGIN
    RAISERROR ('you cannot delete a job that is held by an employee', 16, 1)
    ROLLBACK TRAN
    END
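Note that the example above assumes only one row is deleted at a time, but the DELETED pseudo-table can hold many rows. A set-based variant that handles multi-row deletes, using the same hypothetical jobs/employees tables, might look like this:

```sql
CREATE TRIGGER job_delete ON jobs FOR DELETE
AS
-- DELETED can contain many rows, so join against it rather than
-- assigning it to a single scalar variable
IF EXISTS (SELECT 1
           FROM employees e
           INNER JOIN DELETED d ON e.job_id = d.job_id)
BEGIN
    RAISERROR ('you cannot delete a job that is held by an employee', 16, 1)
    ROLLBACK TRAN
END
```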
2) INSTEAD OF Triggers:

  • An INSTEAD OF trigger is used to perform an action instead of the one that caused the trigger to be fired. This sounds like double-talk, so I will explain a little more. Let's say you have an INSTEAD OF INSERT trigger defined on a table and an INSERT is executed. A row is not added to the table, but the code in the trigger is fired. The following shows what this looks like.
    CREATE TRIGGER tr_Orders_INSERT_InsteadOf
    ON Orders
    INSTEAD OF INSERT
    AS
    PRINT 'Updateable Views are Messy'
    GO
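The "row is not added" behavior can be verified end to end. This sketch assumes a hypothetical Orders table:

```sql
-- Hypothetical Orders table for the INSTEAD OF trigger
CREATE TABLE Orders (order_id INT, amount MONEY);
GO

CREATE TRIGGER tr_Orders_INSERT_InsteadOf
ON Orders
INSTEAD OF INSERT
AS
PRINT 'Updateable Views are Messy';
GO

-- The trigger body runs INSTEAD of the insert; since it never
-- re-inserts the row, the table stays empty
INSERT INTO Orders (order_id, amount) VALUES (1, 9.99);
SELECT COUNT(*) FROM Orders;   -- returns 0
```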

2)DDL Triggers:

  • DDL triggers, like regular triggers, fire stored procedures in response to an event. However, unlike DML triggers, they do not fire in response to UPDATE, INSERT, or DELETE statements on a table or view. Instead, they fire in response to a variety of Data Definition Language (DDL) events.
  • These events primarily correspond to Transact-SQL statements that start with the keywords CREATE, ALTER, and DROP. Certain system stored procedures that perform DDL-like operations can also fire DDL triggers.
  • DDL triggers can be used for administrative tasks such as auditing and regulating database operations.
  • Use DDL triggers when you want to do the following:
  • You want to prevent certain changes to your database schema.
  • You want something to occur in the database in response to a change in your database schema.
  • You want to record changes or events in the database schema.
  • DDL triggers fire only after the DDL statements that trigger them are run. DDL triggers cannot be used as INSTEAD OF triggers.
  • The following example shows how a DDL trigger can be used to prevent any table in a database from being modified or dropped.
  • Example:
    CREATE TRIGGER safety
    ON DATABASE
    FOR DROP_TABLE, ALTER_TABLE
    AS
    PRINT 'You must disable Trigger "safety" to drop or alter tables!'
    ROLLBACK ;

  • DDL triggers can fire in response to a Transact-SQL event that is processed in the current database or on the current server.
  • The scope of the trigger depends on the event.
  • To obtain a DDL trigger example that is available in the AdventureWorks sample database, in Object Explorer in the SQL Server Management Studio, open the Database Triggers folder located in the Programmability folder of the AdventureWorks database. Right-click ddlDatabaseTriggerLog and select Script Database Trigger as. By default, the DDL trigger ddlDatabaseTriggerLog is disabled.
  • Scenarios for Using a DDL Trigger:
  • You can prevent or log changes to your database. As a DBA or database developer, it may be important to know if and when something in your database has changed. In some cases, you may even want to prevent changes by any user altogether.
  • You may have naming conventions in your database. DDL triggers can enforce them by preventing improperly named tables, views, stored procedures, and even table columns.
  • You may want to perform other actions in response to a DDL event. You could, for example, create a record in the server's event log or execute CLR code you've installed on your SQL Server
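The auditing scenario above can be sketched with the EVENTDATA() function, which returns an XML description of the DDL event that fired the trigger. The audit table name here is hypothetical:

```sql
-- Hypothetical audit table for DDL events
CREATE TABLE ddl_log (event_time DATETIME, event_data XML);
GO

CREATE TRIGGER log_ddl ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
-- EVENTDATA() describes the statement that fired the trigger
INSERT INTO ddl_log (event_time, event_data)
VALUES (GETDATE(), EVENTDATA());
GO
```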
    3) CLR Triggers:
    A CLR trigger is a trigger whose logic is written in a .NET Framework language (such as C#) rather than in Transact-SQL. The code is compiled into an assembly that is registered with SQL Server, and CLR triggers can implement either DML or DDL triggers.

Sunday, January 4, 2009

SQL Server Indexes

Why and what:
  • What indexes do and why they are important. The primary reason indexes are built is to provide faster data access to the specific data your query is trying to retrieve. This could be either a clustered or non-clustered index. Without having an index SQL Server would need to read through all of the data in order to find the rows that satisfy the query
  • An index is made up of a set of pages (index nodes) that are organized in a B-tree structure. This structure is hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown in the figure below.

(Figure: a B-tree index structure, with the root node at the top, one or more intermediate levels, and the leaf nodes at the bottom.)

When a query is issued against an indexed column, the query engine starts at the root node and navigates down through the intermediate nodes, with each layer of the intermediate level more granular than the one above. The query engine continues down through the index nodes until it reaches the leaf node. For example, if you’re searching for the value 113 in an indexed column, the query engine would first look in the root level to determine which page to reference in the top intermediate level. In this example, the first page points the values 1-100, and the second page, the values 101-200, so the query engine would go to the second page on that level. The query engine would then determine that it must go to the third page at the next intermediate level. From there, the query engine would navigate to the leaf node for value 113. The leaf node will contain either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered
Types of Indexes:



  • Clustered Indexes:
    A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 113 would be stored in that leaf node.
  • An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on a table.
  • Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
  • Nonclustered Indexes
    Unlike a clustered index, the leaf nodes of a nonclustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than containing the data rows themselves.
  • This means that the query engine must take an additional step in order to locate the actual data.
  • A row locator’s structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator points to the clustered index, using the value from the clustered index to navigate to the correct data row. If referencing a heap, the row locator points to the actual data row.
  • Unlike clustered indexes, nonclustered indexes do not sort the underlying table data; however, you can create more than one nonclustered index per table or view. SQL Server 2005 supports up to 249 nonclustered indexes, and SQL Server 2008 supports up to 999. This certainly doesn’t mean you should create that many indexes. Indexes can both help and hinder performance
  • In addition to being able to create multiple nonclustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed column, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns in order to exceed the size limit of indexed columns .
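A quick sketch of creating both kinds of index (the table and index names are hypothetical):

```sql
-- Hypothetical table
CREATE TABLE Customers (
    CustomerID INT NOT NULL,
    LastName   VARCHAR(50)
);

-- Only one clustered index per table: the data rows themselves are
-- stored in CustomerID order at the leaf level
CREATE CLUSTERED INDEX IX_Customers_CustomerID
    ON Customers (CustomerID);

-- Multiple nonclustered indexes are allowed: leaf pages hold the key
-- value plus a row locator back to the data row
CREATE NONCLUSTERED INDEX IX_Customers_LastName
    ON Customers (LastName);
```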

Sub Types in Indexes:

  • In addition to being clustered or nonclustered, an index can be configured in other ways. A unique index is automatically created when you define a primary key or unique constraint:
  • Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, nonclustered index on the primary key.
  • Unique: When you define a unique constraint, SQL Server automatically creates a unique, nonclustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
  • Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns
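The covering index described above can be sketched like this; on SQL Server 2005 and later, the INCLUDE clause stores the extra columns only at the leaf level (the table and index names are hypothetical):

```sql
-- Query to cover:
--   SELECT FirstName, LastName FROM Person WHERE ContactID = @id
CREATE TABLE Person (
    ContactID INT NOT NULL,
    FirstName VARCHAR(50),
    LastName  VARCHAR(50)
);

-- ContactID is the index key; FirstName and LastName are stored at the
-- leaf level via INCLUDE, so the query can be answered from the index
-- alone, without touching the base table
CREATE NONCLUSTERED INDEX IX_Person_ContactID_Covering
    ON Person (ContactID)
    INCLUDE (FirstName, LastName);
```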

Index Best Practices:
When it comes to best practices, in some respects I consider them a set of ground rules that should be used as a means to start the design and development process, as opposed to an absolute set of rules that are always correct. With that being said, let's outline some points of reference to use as a starting point to address your indexing needs:

  • For tables that are heavily updated, use as few columns as possible in the index, and don’t over-index the tables.
  • If a table contains a lot of data but data modifications are low, use as many indexes as necessary to improve query performance. However, use indexes judiciously on small tables because the query engine might take longer to navigate the index than to perform a table scan.
  • For clustered indexes, try to keep the length of the indexed columns as short as possible. Ideally, try to implement your clustered indexes on unique columns that do not permit null values. This is why the primary key is often used for the table’s clustered index, although query considerations should also be taken into account when determining which columns should participate in the clustered index.
  • The uniqueness of values in a column affects index performance. In general, the more duplicate values you have in a column, the more poorly the index performs. On the other hand, the more unique each value, the better the performance. When possible, implement unique indexes.
  • For composite indexes, take into consideration the order of the columns in the index definition. Columns that will be used in comparison expressions in the WHERE clause (such as WHERE FirstName = 'Raju') should be listed first. Subsequent columns should be listed based on the uniqueness of their values, with the most unique listed first.
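The composite-index ordering rule can be sketched as follows, using a hypothetical Contacts table:

```sql
-- Hypothetical table for the example
CREATE TABLE Contacts (
    ContactID INT,
    FirstName VARCHAR(50),
    LastName  VARCHAR(50)
);

-- FirstName appears in WHERE comparisons (e.g. WHERE FirstName = 'Raju'),
-- so it is listed first; the remaining columns follow in order of
-- decreasing uniqueness
CREATE NONCLUSTERED INDEX IX_Contacts_FirstName_LastName
    ON Contacts (FirstName, LastName);
```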

Disadvantages of indexes:

  • Indexes occupy additional disk space
  • Indexes slow down INSERT, UPDATE, and DELETE operations, because every index on the table must be maintained along with the data