Exciting features of SQL Server 2012
Feature number 1 (Revolution):- Column store indexes
Feature number 2 (Evolution):- Sequence objects
Feature number 3 (Revolution):- Pagination
Feature number 4 (Revolution):- Contained database
Feature number 5 (Evolution):- Error handling
Feature number 6 (Evolution):- User defined roles
Feature number 7 (Evolution):- Windows Server Core support
Feature number 8 (Revolution):- Tabular Model (SSAS)
Feature number 9 (Revolution):- Power View
Feature number 10 (Revolution):- DQS (Data Quality Services)
Feature number 1 (Revolution):- Column store indexes

Column store indexes are an unexpected and awesome feature. When I read about this feature for the first time, my mouth was wide open. You can get to this feature by right-clicking on the indexes folder and selecting “Non-Clustered Columnstore Index”, as shown in the below figure.
So let’s quickly understand what exactly it does. Relational databases store data “row wise”, and these rows are in turn stored in 8 KB pages.
For instance, in the below figure we have a table with two columns, “Column1” and “Column2”. You can see how the data is stored across two pages, “Page1” and “Page2”, with two rows on each page. Now if you want to fetch only “Column1”, you have to pull records from both pages, as the visuals below show.
As we have to fetch data from two pages, the query is a bit performance intensive.
If somehow we could store data column-wise, we could avoid fetching data from multiple pages. That’s exactly what column store indexes do. When you create a column store index, it stores the same column’s data in the same page. You can see from the below visuals that we now need to fetch “Column1” data from only one page rather than querying multiple pages.
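As a sketch, creating one looks like the below code (the table and column names here are illustrative, not from the article):

```sql
-- Create a nonclustered columnstore index covering two columns.
-- Table and column names are assumptions for illustration.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_tblcustomer_cstore
ON tblcustomer (customercode, customername);
```

One thing to keep in mind: in SQL Server 2012 a table with a nonclustered columnstore index becomes read-only; you have to drop or disable the index before modifying data.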
Feature number 2 (Evolution):- Sequence objects

Below is simple code to create a sequence object. You can see we have created a sequence object called “MySeq” with the following specification:-
- Starts with value 1.
- Increments by value 1.
- The minimum value it can start with is zero.
- The maximum it will go to is 100. “no cycle” defines that once it reaches 100 it will throw an error.
- If you want it to restart from the minimum value, you should specify “cycle”.
- “cache 50” specifies that 50 values are incremented in memory ahead of time to reduce IO. If you specify “no cache”, every increment does input/output on the disk.
create sequence MySeq as int
start with 1 -- Start with value 1
increment by 1 -- Increment by value 1
minvalue 0 -- Minimum value is zero
maxvalue 100 -- Maximum it can go to is 100
no cycle -- Do not go above 100; throw an error instead
cache 50 -- Keep 50 values in memory rather than doing IO on every increment

To increment the value we need to call the below select statement. This is one more big difference compared to identity: identity values increment automatically when rows are added, while here we need to make an explicit call.
SELECT NEXT VALUE FOR dbo.MySeq AS seq_no;
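A sequence can also feed values straight into an insert. Below is a hedged sketch; “tblcustomer” and its columns are illustrative names, not from the article:

```sql
-- Use the sequence to generate the key while inserting a row.
-- Table and column names are assumptions for illustration.
INSERT INTO tblcustomer (customercode, customername)
VALUES (NEXT VALUE FOR dbo.MySeq, 'Shiv');
```

One nice design point: because a sequence is not tied to any single table, the same “MySeq” can supply numbers to several tables.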
Feature number 3 (Revolution):- Pagination

For instance, let’s say we have the following customer table which has 12 records. We would like to split the records into two sets of 6.
So doing pagination is a two-step process:-
- First, mark the start row by using the “OFFSET” clause.
- Second, specify how many rows you want to fetch by using the “FETCH” clause.
select * from
tblcustomer order by customercode
offset 0 rows -- start from zero
fetch next 6 rows only

In the above code snippet we have specified that we want to fetch “6” rows, starting from the “0” position specified in the “OFFSET”. If you run the above SQL you should see 6 rows.
To fetch the next 6 rows, just change your “OFFSET” position. You can see in the below code snippet I have modified the offset to 6. That means rows will now be fetched starting from position “6”.
select * from
tblcustomer order by customercode
offset 6 rows
fetch next 6 rows only

The above code snippet displays the next “6” records; below is how the output looks.
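The two snippets above generalize into a single parameterized query. This is a sketch; the variable names are illustrative, not from the article:

```sql
-- Page through tblcustomer 6 rows at a time.
-- @PageNumber and @PageSize are illustrative variable names.
declare @PageNumber int = 2;
declare @PageSize int = 6;

select * from
tblcustomer order by customercode
offset (@PageNumber - 1) * @PageSize rows
fetch next @PageSize rows only;
```

With @PageNumber set to 2 this returns the same second page of 6 rows as the hard-coded version.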
Feature number 4 (Revolution):- Contained database

One of the requirements from an easy-migration perspective is to create databases which are self-contained. In other words, can we have a database with metadata information, security information etc. within the database itself, so that when we migrate the database, we migrate everything with it? That’s where “contained” databases were introduced in SQL Server 2012.
Creating a contained database is a 3-step process:-
Step 1:- The first thing is to enable contained databases at the SQL Server instance level. You can do the same by right-clicking on the SQL Server instance and setting “Enable Contained Databases” to “True”.
You can achieve the same by using the below SQL statements as well.
sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
sp_configure 'contained database authentication', 1
RECONFIGURE WITH OVERRIDE
GO

Step 2:- The next step is to enable containment at the database level. So when you create a new database, set “Containment type” to “Partial” as shown in the below figure.
You can also create a database with “containment” set to “partial” using the below SQL code.

CREATE DATABASE [MyDb]
CONTAINMENT = PARTIAL
ON ( NAME = N'My', FILENAME = N'C:\My.mdf')
LOG ON ( NAME = N'My_log', FILENAME = N'C:\My_log.ldf')

Step 3:- The final thing now is to test whether the “contained” database fundamental is working or not. We want the user credentials to be part of the database, so we need to create the user as a “SQL user with password”.
You can achieve the same by using the below script.
CREATE USER MyUser
WITH PASSWORD = 'pass@123';
GO

Now if you try to log in with the user created, you get an error as shown in the below figure. This proves that the user is not available at the SQL Server level.
Now click on “Options” and specify the database name in “Connect to database”; you should be able to log in, which proves that the user is part of the database and not of the SQL Server instance.
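As a quick sanity check, you can also confirm containment from the catalog views. A sketch, assuming the “MyDb” database from the earlier script:

```sql
-- containment_desc should read 'PARTIAL' for a contained database.
SELECT name, containment_desc
FROM sys.databases
WHERE name = 'MyDb';
```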
Feature number 5 (Evolution):- Error handling

begin try
declare @n int = 0;
set @n = 1/0; -- this line raises a divide-by-zero error
end try
begin catch
print('divide by zero');
RAISERROR ('Divide by zero', 16, 1);
end catch

But what is still itching me in the above code is that when it comes to propagating errors back to the client, I was missing the “THROW” command. We still need to use “RAISERROR”, which does the job but lacks a lot of the capabilities which “THROW” has. For example, to throw user-defined messages by number you need to make an entry in the “sys.messages” table.
Below is how the code with “throw” looks.

begin try
select 1/0; -- the code where the error has occurred
end try
begin catch
throw; -- re-throw the original error to the client
end catch

If you want to throw an exception with a user-defined message, you can use the below code. No entry is needed in the “sys.messages” table.
THROW 50001, 'User defined exception.', 1;

Note that the error number used with “THROW” for user-defined exceptions must be 50000 or above. From SQL Server 2012 onwards, use “THROW” rather than “RAISERROR”; looking at the features of “THROW”, it looks like sooner or later “RAISERROR” will be deprecated. Below is a comparison which explains the differences between “THROW” and “RAISERROR”.
- User & system exceptions: “RAISERROR” can generate only user exceptions, while “THROW” can propagate both user and system exceptions.
- sys.messages: with “THROW” you can supply ad-hoc text that does not need an entry in the “sys.messages” table; with “RAISERROR” you need to make an entry in the “sys.messages” table.
- Propagation: with “THROW” the original exception is propagated to the client; with “RAISERROR” the original exception is lost to the client.
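The propagation difference is easiest to see in code. A minimal sketch: a parameter-less “THROW” inside CATCH re-raises the original error, so the client sees the real divide-by-zero exception rather than a re-packaged one.

```sql
BEGIN TRY
    DECLARE @x int = 1/0; -- raises the divide-by-zero error
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE(); -- log the message on the server side
    THROW;                 -- propagate the original exception to the client
END CATCH
```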
Feature number 6 (Evolution):- User defined roles

Before SQL Server 2012 we had to make do with the fixed roles, and that’s a serious limitation. Let’s say you have two sets of database users: programmers and DBAs. The programmers should be able to fire insert, update and delete queries, while the DBAs should be able to create databases, take backups and do maintenance-related activities, but should not be able to fire insert, update and delete queries. Because the roles are fixed, the DBAs get more access than they need and can even fire insert, update and delete queries. In simple words, we need flexible roles.
In SQL Server 2012 you can create your own role and define customized permissions for the role at a more granular level.
You can see in the below image how you can select permissions at a finer level and create customized roles which can later be assigned to users.
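The T-SQL counterpart of that UI is “CREATE SERVER ROLE” (user-defined server roles are new in 2012). Below is a sketch; the role name, the permissions chosen and the login are illustrative, not from the article:

```sql
-- A custom role for DBAs: maintenance rights, but no table DML permissions.
-- Role name, granted permissions and [JohnLogin] are assumptions for illustration.
CREATE SERVER ROLE MaintenanceDBA;
GRANT CREATE ANY DATABASE TO MaintenanceDBA;
GRANT ALTER ANY DATABASE TO MaintenanceDBA;
ALTER SERVER ROLE MaintenanceDBA ADD MEMBER [JohnLogin];
```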
Feature number 8 (Revolution):- Tabular Model (SSAS)

So the basic flow goes in 3 steps:-
- First, data is brought to a central database (the data warehouse) using SSIS packages. The design of the data warehouse is normally a snowflake or star schema, so that we can create CUBEs effectively.
- Later, analysis services runs over the data warehouse to create CUBEs which give a multi-dimensional view of the data for better analysis.
- We can then run different clients like Excel, SSRS etc. to display data to different sections of users.
The biggest issue is that simple business users CANNOT CONTRIBUTE TO CUBES. I mean, if I am a business user who would like to take data from an Excel sheet, use my Excel formula skills, derive conclusions and publish cubes, how do I go about it? My personal belief is that the best business analysis can only be done by business end users who actually do business in the field. They are the best people to understand things, and they can create CUBEs which are more useful and logical.
Also, if you notice, the previous steps are highly technical:-
- Can a simple business user create DB designs like snowflake / star schemas?
- Can he use the complicated SSAS user interface to publish cubes?
- Does he have the knowledge to use SQL Server’s analysis capabilities?
Now personal users work most of the time with Excel, and if we really want to give analysis power to them, it should be inside Excel itself. That’s what Power Pivot does. Power Pivot is a plugin which sits inside Excel and gives analytical capabilities to simple personal users so they can do analysis on the data they have in Excel.
Now Excel data is in tabular format, with rows and columns. So if you want to publish this kind of analyzed data from Excel, you need to have SSAS installed in tabular mode.
So if you compare personal users with professional BI users, the workflow will be as follows:-
- Professional BI personnel will use SSIS, data flows, control flows etc.
- Personal BI people can use import or copy-paste mechanisms to get data into Excel.
- Professional BI personnel will use SSAS and BI algorithms to do analysis. Once analysis is done, they will publish it in multi-dimensional format.
- Personal BI people will use Power Pivot and Excel formulas to arrive at an analysis. Once analysis is done, they will publish it in tabular format.
So the personal BI user can use Power Pivot to do analysis. He can then save the same as a simple Excel file.
You can then select import from Power Pivot, point to the Power Pivot Excel file and deploy the same in tabular format.
Once deployed you should see the CUBE deployed in SSAS as shown in the below figure.
Because the CUBE is created from tabular data, we cannot use MDX to query the CUBE. No worries, a new simple query language has been introduced, called DAX (Data Analysis Expressions). You can see in the below figure how I have queried the “Sales 1” cube. A DAX query starts with the evaluate keyword, brackets and then the cube name.
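Spelled out as text, the query from the figure would look something like the below sketch (“Sales 1” is the cube/table name from the article’s example):

```dax
-- Returns all rows of the 'Sales 1' table from the tabular model.
EVALUATE('Sales 1')
```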
This article will not go into DAX, as our main concentration is SQL Server 2012’s new features.
Feature number 9 (Revolution):- Power View

Power View is created for simple end users who would like to drag, drop and create their own reports in ad-hoc ways. It’s a simple Silverlight plugin which gets downloaded, and you get a screen something like the one shown below. End users can now drag and drop fields from the right-hand side, create a report and publish it. Please note end users cannot add fields; those have to be added from SSRS or Power Pivot.
This feature would have been my top feature, but due to a serious limitation it is not: “Power View only works with SharePoint”… I am sure you are feeling as hurt as I am. I hope Microsoft makes this independent of SharePoint.
If we visualize properly, we can understand that Microsoft’s end GOAL is to empower simple business users so that they can do BI themselves. So a personal BI user can now get data into Excel, do analysis by using Power Pivot and finally create reports using the ad-hoc reporting tool Power View.
Feature number 10 (Revolution):- DQS (Data Quality Services)

A knowledge base helps you define your validation rules. For instance, you can see in the below figure how we are creating a validation called “CustomerCode” which checks whether the data length is equal to 10.
You can also define correction rules like the one shown below: if you find data as “IND”, change it to “India”.
Once you have defined your knowledge base, the next step is to run it over data. So create a DQS project and apply the knowledge base which you created, as shown in the below figure.
You can then define where the data comes from and also map which columns have which validations. For instance, you can see in the below screen that for country and customer we have mapped different domains. Domains are nothing but validation rules.
Once done, you can start the process and you will see a progress screen, as shown below, of corrected values and suggested values.
Finally you can export the cleaned data to SQL Server, Excel or CSV.