Saturday, 28 July 2012


ALTER TABLE ADD FOREIGN KEY Constraint Introduction

A Foreign Key constraint ensures that a value in the Foreign Key column also exists in the Primary Key column of the parent table. For example, if you have a new product with code ABC123, you should not be able to sell it until it has been entered into the Product table (the parent table). The Foreign Key constraint checks whether the value 'ABC123' exists in the parent table's Primary Key column, and if it does not, the operation (insert or update) fails.

FOREIGN KEY constraint syntax:
FOREIGN KEY  (ForeignKey_ColumnName)
REFERENCES PrimaryKey_TableName (PrimaryKey_ColumnName)
Foreign Key constraints are added to the table where the FK column exists (the child table) using ALTER TABLE ADD CONSTRAINT. You specify your column using FOREIGN KEY, then relate it to the table that contains the Primary Key using REFERENCES, giving the Primary Key table name (the parent table) and the Primary Key column name.

In the Object Explorer I open the AdventureWorksDW2012 database, then I expand the Tables folder and find the table which I will use to add a FOREIGN KEY constraint. In this case it is the dbo.FactInternetSales table.


I open a new query, type my code to create the foreign key constraint (see below) and run my query.

You can see that I add the Foreign Key to FactInternetSales by using ALTER TABLE on that table, add the constraint name, specify the column using FOREIGN KEY (CustomerKey), and then use REFERENCES to say that CustomerKey is related to the CustomerKey Primary Key in DimCustomer.
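A sketch of that statement (the constraint name fk_FactInternetSales_CustomerKey is my own choice; any valid identifier works):

```sql
ALTER TABLE dbo.FactInternetSales
ADD CONSTRAINT fk_FactInternetSales_CustomerKey
FOREIGN KEY (CustomerKey)
REFERENCES dbo.DimCustomer (CustomerKey);
```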
In the Object Explorer, under the Keys folder (NOT the Constraints folder), I can see that the new foreign key constraint has been added.

I hope that helps

PRIMARY KEY Constraint

ALTER TABLE ADD PRIMARY KEY Constraint Introduction

A PRIMARY KEY uniquely identifies each record in a database table.
A Primary Key is a constraint, as it does not allow duplicated records. Using the ALTER TABLE ADD CONSTRAINT PRIMARY KEY statement we are able to create a primary key and give it a proper name. The name of the Primary Key constraint is important if you need to comply with company standards and want to keep the code manageable, which is particularly important in bigger solutions. You can also use SQL CREATE TABLE to add a Primary Key, but we don't recommend it, and we also suggest using a Visual Studio Database Project.
NOTE: You can have only one Primary Key in a table. If you need to enforce uniqueness of other fields as well, then check our SQL unique key constraint article.

PRIMARY KEY Constraint Syntax

Below is the syntax that allows you to create it:

ALTER TABLE TableName
ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName)

In the Object Explorer I open the required database, in this case AdventureWorksDW2012, then I expand the Tables folder and find the table I will use for the ALTER TABLE ADD CONSTRAINT PRIMARY KEY example. In this case it is a table I created called dbo.Client.
NOTE: I created my own table because the existing tables already have Primary Keys.


I use Client table and you can see that my Keys folder is empty.

Below you can see my code: I ALTER TABLE dbo.Client, ADD CONSTRAINT with the name pk_ClientID, specify the constraint type, which in my case is PRIMARY KEY, and specify the columns. In my case I specified one column, which is often the case, but sometimes you might need to specify more than one, and you can do that by separating the field names with commas.
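A sketch of that statement, assuming the table has a ClientID column (the column name is my assumption; use whatever column uniquely identifies your rows):

```sql
ALTER TABLE dbo.Client
ADD CONSTRAINT pk_ClientID PRIMARY KEY (ClientID);
```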
In the Messages box I see information that my query was run successfully.

I use refresh button in the Object Explorer and in my Keys folder I can see pk_ClientID constraint.

I hope that helps



SQL DEFAULT Constraint

ALTER TABLE ADD DEFAULT Constraint Introduction

A DEFAULT constraint is used to insert a default value into a column during an insert operation if no other value is specified for that column.

In my example I will show you how to set a DEFAULT constraint on the TotalChildren field. It might happen that someone hasn't got any children and no value is specified; in this case the DEFAULT value of zero (0) will be added to all new rows. That means that during an insert, even if TotalChildren is not supplied, the value 0 will be used.


ALTER TABLE TableName
ADD CONSTRAINT def_ConstraintName DEFAULT (value) FOR ColumnName

In the Object Explorer I open the required database, in this case AdventureWorksDW2012, then I expand the Tables folder and find the table I will use for the ALTER TABLE ADD CONSTRAINT DEFAULT example. In this case it is the dbo.DimCustomer table.


I create a new query and type my code to create the DEFAULT constraint def_TotalChildren, specifying the value 0.
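A sketch of that statement:

```sql
ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT def_TotalChildren DEFAULT (0) FOR TotalChildren;
```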
I run my query and I can see in the Messages window that it completed successfully.

I go to the Object Explorer to check whether my Default constraint was created. In the Constraints folder I see that the def_TotalChildren constraint appears.

I hope that helps
Take care

SQL CHECK Constraint

ALTER TABLE ADD CHECK Constraint Introduction

We can use a CHECK constraint when we need to limit the values a user can insert into a table field. In my example I will show you how to set up a CHECK constraint on the BirthDate field, putting a constraint on it so users cannot insert a BirthDate that is in the future.
You can also use a CHECK constraint across multiple fields in one table. Let's say you have StartDate and EndDate fields and you do not want to allow users to enter a StartDate that is greater than the EndDate.
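That multi-field rule can be sketched like this (the dbo.Project table and its columns are hypothetical names, just for illustration):

```sql
ALTER TABLE dbo.Project
ADD CONSTRAINT chk_ProjectDates CHECK (StartDate <= EndDate);
```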


See below syntax.
ALTER TABLE TableName
ADD CONSTRAINT chk_ConstraintName CHECK (condition)
In the above example you can see that you need to specify table name, constraint name and condition that will be used to check if value is allowed or not.
Before we start see below table that contains BirthDate field that I will use to restrict values.

ALTER TABLE ADD CHECK Constraint Example

I create new query and type my code.
My constraint name is chk_BirthDate and my condition is BirthDate <= GETDATE().
NOTE: GETDATE() uses the local server date, and you might need a different function (such as SYSUTCDATETIME()) if you need UTC or more precision in dates.
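Given the dbo.DimCustomer table used in this example, the full statement looks like this:

```sql
ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT chk_BirthDate CHECK (BirthDate <= GETDATE());
```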
I run the query and get a message box back.

In the Object Explorer, under dbo.DimCustomer in the Constraints folder, I can see that my constraint chk_BirthDate was added.

I hope that helps
Take care

SQL Unique Constraint


We use a UNIQUE constraint to make sure that we don't allow any duplicated entries in a column.

I prepared two examples:
  • In my first example I will show you how to set up a UNIQUE constraint on the EmailAddress field. The purpose of this example is to not allow a user to enter the same EmailAddress more than once. In other words, EmailAddress must be unique.
  • In the second example I create the unq_Person constraint to minimize the risk of entering the same person twice. We will only allow new records if the combination of FirstName, LastName and BirthDate is unique.
    • NOTE: Two different people can legitimately share the same name and birth date, so make sure you don't restrict users from entering valid records.


ALTER TABLE TableName
ADD CONSTRAINT ConstraintName UNIQUE (FieldName)

In the Object Explorer I open the required database, in this case AdventureWorksDW2012, then I expand the Tables folder and find the table I will use for the ALTER TABLE ADD UNIQUE CONSTRAINT example. In this case it is the dbo.DimCustomer table.


First Example:
I open Object Explorer and go to dbo.DimCustomer table, Columns folder to check the field I use. In this case it is EmailAddress.

In new query I type the code to create unq_EmailAddress constraint.
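A sketch of that statement:

```sql
ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT unq_EmailAddress UNIQUE (EmailAddress);
```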

I run the query, and in the Messages window I can see that it ran successfully. My unq_EmailAddress constraint was created.

Second Example:
In this example I show you how to create unq_Person, and in this case I use three fields: FirstName, LastName and BirthDate.

I create new query and type my code to create unq_Person constraint. You can see below.
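A sketch of that statement, listing all three columns so the constraint applies to the combination rather than to each column individually:

```sql
ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT unq_Person UNIQUE (FirstName, LastName, BirthDate);
```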

In Messages window I get information that my query was run successfully. My unq_Person constraint was created.

Both constraints were created under the dbo.DimCustomer table in the Keys folder (NOT the Constraints folder).

I hope that helps

Setting up Database Mail and sending mail over the internet for SQL Server 2008 and 2005

Setup SQL Server Database Mail to use a Gmail, Hotmail, Yahoo or AOL account

Many things have changed with SQL Server 2005 and one of these changes is the replacement of SQL Mail with Database Mail.  This is a good thing, because SQL Mail relied on having a MAPI mail client installed such as Outlook in order for it to work.  With SQL Server 2005, this has changed and now the mail services use an SMTP server to send out emails which makes it a whole lot easier to setup and maintain.  So how do you setup Database Mail?


There are two ways that you can setup Database Mail, either by using the stored procedures that are included with SQL Server 2005 or by using SQL Server Management Studio.  For this exercise we will walk through how to setup Database Mail by using the GUI.
To setup Database Mail, connect to your server and expand the Management node of the tree and then right click on "Database Mail".

Then select "Configure Database Mail" and you will get the welcome screen; click "Next".

The following screen will appear; select "Set up Database Mail by performing..." and click "Next".

If Database Mail has not been enabled, you will get the following screen. Just click "Yes" to enable it.  If it has already been enabled, this screen will not appear.

Enter in a name for the Profile and also a description and click "Add..."

The following screen will appear.  Fill out the details for your mail account that will be used to send out email from SQL Server.  When you are done click "OK".

After you click "OK" you will be brought back to this screen and the SMTP details will now show for the account you just setup.  Click "Next" to continue.

On the next screen you will see the name of the profile that you just setup.  Click on the checkbox to allow this to be a Public profile and also select "Yes" for the default profile and then click "Next".

The following screen has some additional parameters that can be set to control how mail is sent.  You can make changes or leave the defaults.  When you are done click "Next".

A summary screen will appear that shows you all of the options that were selected.  If everything is correct click "Finish" or click "Back" to go back and make changes.

When you click "Finish" the next screen will appear, showing the status of installing Database Mail. When it has finished, just click "Close" to close this screen.

To test Database Mail, right click on Database Mail and select "Send Test E-Mail".

Fill in a "To:" email address and change the body of the email if you want and then click "Send Test E-Mail".

After you have sent the email you will get this message box to confirm if the email was received or not.  If it was you can click "OK" to close the screen or click "Troubleshoot" which will launch the help information to see what the issue may be and how it can be resolved.

That's all there is to it.  As I mentioned before this can also be setup by using stored procedures. To look at this approach take a look at this article Database Mail in SQL Server 2005.

Next Steps
  • Setting up Database Mail is not that complicated and it is much easier than SQL Mail. Take the time to see how this new approach to mail can work in your environment
  • After you setup Database Mail don't forget to setup your operators, alerts and SQL Agent alert settings


Setup SQL Server Database Mail to use a Gmail, Hotmail, Yahoo or AOL account



One great feature of SQL Server is the ability to get alerts when there are issues.  The alert process can send email notifications using Database Mail after you have configured your SMTP settings and setup your jobs and alerts to use this account.  In some cases you may not have a mail server, but still want to setup alerts.  In this tip we will walk through how you can setup Database Mail to use email services like Gmail, Hotmail, Yahoo, etc...


For this example, I have a SQL Server test environment configured and I want to test the alert mechanism using Hotmail. The following outlines the settings to do this.

Configuring Database Mail

When navigating through the Database Mail configuration wizard, you will encounter the screen below, where the relevant details need to be entered. Let's consider the example of using the SMTP server details for Hotmail.  Enter the details as shown below.

Under 'Outgoing Mail Server (SMTP)'
  • E-mail Address - Enter your Hotmail or Live account
  • Server Name - Enter as shown in screenshot
  • Port number - Enter as shown in screenshot
  • The server requires a secure connection (SSL) - check this box
    If this is left unchecked you will encounter this error message, 'The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 4 (2011-12-14T23:36:13). Exception Message: Cannot send mails to mail server. (The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.0 Must issue a STARTTLS command first).'
Under 'SMTP Authentication'
  • Select 'Basic authentication'
  • User Name - Enter your Hotmail or Live account
  • Password -  Enter password of your Hotmail or Live account.
  • Confirm Password - Enter password of your Hotmail or Live account.
The table below outlines the necessary SMTP server details and port numbers for Hotmail, Gmail, Yahoo and AOL (see this for more info).

SMTP Server Details (server names as shown in the screenshot above)
Hotmail   SMTP server name:   Port number: 587
Gmail     SMTP server name:   Port number: 587
Yahoo     SMTP server name:   Port number: 25
AOL       SMTP server name:   Port number: 587
Here are some additional links you can refer to that show how to configure Microsoft Outlook to connect to Gmail, Hotmail etc... which is similar to what needs to be done for Database Mail.

Sending a Test Email

Once your database mail profile has been configured you can send test emails for validation to verify setup is configured correctly.  Right click on Database Mail and select Send Test E-Mail as shown below.

send database mail test email from ssms
You could also execute the Database Mail system procedure sp_send_dbmail with the appropriate parameters using this sample script below.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Enter valid database profile name',
    @recipients = 'Enter Valid Email Address',
    @body = 'This is a test email sent from TEST server',
    @subject = 'TEST EMAIL',
    @importance ='HIGH' 

Database Mail Troubleshooting

After testing, if you are unable to receive notification emails, you can use the Database Mail views below for troubleshooting.
select * from msdb.dbo.sysmail_sentitems 
This contains one row for each message successfully sent by Database Mail.
select * from msdb.dbo.sysmail_unsentitems 
This contains one row for each message that has an unsent or retrying status.
select * from msdb.dbo.sysmail_faileditems 
This contains one row for each Database Mail message that has a failed status.
Here are some additional views: sysmail_event_log and sysmail_allitems. There is also a Database Mail system stored procedure, msdb.dbo.sysmail_help_queue_sp, which can be used as well.
For detailed steps on troubleshooting database mail issues, refer to this link:



The above steps were performed using SQL Server 2008 R2 and I tested using Hotmail and Gmail.  This should work for any version of SQL Server 2005 and greater.
Note: If your company has an SMTP server, I strongly urge you to use your company's SMTP server, and only use this approach where you don't have access to an SMTP server or need to test Database Mail.

Next Steps


What is a GUID ("globally unique identifier")?

What is a GUID? The acronym stands for "globally unique identifier"; GUIDs are also called UUIDs, which stands for "universally unique identifier". (It is unclear to me why we need two nigh-identical names for the same thing, but there you have it.) A GUID is essentially a 128 bit integer, and when written in its human-readable form, is written in hexadecimal in the pattern {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.
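As a quick illustration of that human-readable form, .NET can generate a GUID and print it in the braced pattern described above (the snippet is just for illustration; the actual value differs on every run):

```csharp
using System;

class GuidDemo
{
    static void Main()
    {
        Guid g = Guid.NewGuid();
        // The "B" format specifier prints the braced form {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Console.WriteLine(g.ToString("B"));
    }
}
```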

  • GUIDs are guaranteed to be unique but not guaranteed to be random. Do not use them as random numbers.
  • GUIDs that are random numbers are not cryptographic strength random numbers.
  • GUIDs are only unique when everyone cooperates; if someone wants to re-use a previously-generated GUID and thereby artificially create a collision, you cannot stop them. GUIDs are not a security mechanism.
  • GUIDs have an internal structure; at least six of the bits are reserved and have special meanings.
  • GUIDs are allowed to be generated sequentially, and in practice often are.
  • GUIDs are only unique when taken as a whole.
  • GUIDs can be generated using a variety of algorithms.
  • GUIDs that are generated randomly are statistically highly unlikely to collide in the foreseeable future.
  • GUIDs could reveal information about the time and place they were created, either directly in the case of version one GUIDs, or via cryptanalysis in the case of version four GUIDs.
  • GUIDs might be generated by some entirely different algorithm in the future.

The purpose of a GUID is, as the name implies, to uniquely identify something, so that we can refer to that thing by its identifier and have confidence that everyone can agree upon what thing we are referring to. Think about this problem as it applies to, say, books. It is cumbersome to refer to a book by quoting it in its entirety every time you mention it. Instead, we give every book an identifier in the form of its title. The problem with using a title as an identifier is that there may be many different books with the same title. I have three different books all entitled "The C# Programming Language" on my desk right now; if I want to refer to one of them in particular, I'd typically have to give the edition number. But there is nothing (apart from their good sense) stopping some entirely different publisher from also publishing a book called "The C# Programming Language, fourth edition" that differs from the others.

Publishers have solved this problem by creating a globally unique identifier for each book called the International Standard Book Number, or ISBN. This is the 13-decimal-digit bar coded number you see on pretty much every book(*). How do publishers manage to get a unique number for each of the millions of books published? They divide and conquer; the digits of an ISBN each have a different meaning. Each country has been assigned a certain range of ISBN numbers that they can allocate; governments then further allocate subsets of their numbers to publishers. Publishers then decide for themselves how to assign the remaining digits to each book. The ISBNs for my three editions of the C# spec are 978-0-321-15491-6, 978-0-321-56299-9 and 978-0-321-74176-9. You'll notice that the first seven digits are exactly the same for each; they identify that this is a publishing industry code (978), that the book was published in a primarily English-speaking region (0), by Addison-Wesley (321). The next five digits are Addison-Wesley's choice, and the final digit is a checksum. If I wish to uniquely identify the fourth edition of the C# specification I need not state the ambiguous title at all; I can simply refer you to book number 978-0-321-74176-9, and everyone in the world can determine precisely which book I'm talking about.
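The check-digit rule mentioned above is easy to verify: ISBN-13 weights the first twelve digits alternately by 1 and 3, and the check digit is whatever makes the total a multiple of 10. A small sketch using one of the ISBNs quoted above:

```csharp
using System;

class IsbnCheck
{
    static void Main()
    {
        string isbn = "9780321741769";  // 978-0-321-74176-9 without the dashes
        int sum = 0;
        for (int i = 0; i < 12; i++)
            sum += (isbn[i] - '0') * (i % 2 == 0 ? 1 : 3);  // weights 1,3,1,3,...
        int check = (10 - sum % 10) % 10;
        Console.WriteLine(check);  // prints 9, matching the final digit
    }
}
```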

An important and easily overlooked characteristic of the ISBN uniqueness system is that it only works if everyone who uses it is non-hostile. If a rogue publisher decides to deliberately publish books with the ISBN numbers of existing books so as to create confusion then the usefulness of the identifier is compromised because it no longer uniquely identifies a book. ISBN numbers are not a security system, and neither are GUIDs; ISBN numbers and GUIDs  prevent accidental collisions. Similarly, traffic lights only prevent accidental collisions if everyone agrees to follow the rules of traffic lights; if anyone decides to go when the light is red then collisions might no longer be avoided, and if someone is attempting to deliberately cause a collision then traffic lights cannot stop them.

The ISBN system has the nice property that you can "decode" an ISBN and learn something about the book just from its number. But it has the enormous down side that it is extraordinarily expensive to administer. There has to be international agreement on the general form of the identifier and on what the industry and language codes mean. In any given country there must be some organization (either a government body or private companies contracted by the government) to assign numbers to publishers. It can cost hundreds of dollars to obtain a unique ISBN.

What else is new in C# 5.0?

Introduction of New Features in C# 5.0

Please click on the C# 5.0 video to learn more about the C# 5.0 new features

1. C# Evolution Matrix

Microsoft just published a new version of C#: 5.0 beta, with CLR version 4.5 (Visual Studio 11 beta).
In order to get a big picture of the whole evolution of the C# language, I summarized all the key features
into a C# Evolution Matrix for your reference, as the diagram below shows:

In C# version 5.0, there are two key features: Async Programming and Caller Information.


The big new feature in C# 5 is asynchronous programming support, which I wrote about last week. However, the C# folks have also slipped in a couple of smaller features and I thought I’d round things out by mentioning those.

Method caller information

There’s a complete style guide to be written on Writing Enterprisey Code, but one of my favourite “enterprisey” tells, after the use of Visual Basic, is obsessively logging every function you pass through:

Function AddTwoNumbers(a As Integer, b As Integer) As Integer
  Logger.Trace("ArithmeticHelpers", "AddTwoNumbers", "Entering AddTwoNumbers")
  Dim result = OracleHelpers.ExecInteger("SELECT " & a & " + " & b)
  Logger.Trace("ArithmeticHelpers", "AddTwoNumbers", "Calling PrintPurchaseOrders")
  PrintPurchaseOrders()  ' IFT 12.11.96: don't know why this is needed but shipping module crashes if it is removed
  Logger.Trace("ArithmeticHelpers", "AddTwoNumbers", "Returned from PrintPurchaseOrders")
  Logger.Trace("ArithmeticHelpers", "AddTwoNumbers", "Exiting AddTwoNumbers")
  Return result
End Function
Although this code is efficient and clear by enterprisey standards, with C# 5 it can be even efficienter and clearer. C# 4 introduced optional parameters, which meant callers of a method could leave out the arguments and the compiler would fill in the default values:

public void WonderMethod(int a = 123, string b = "hello") { ... }
WonderMethod(456);  // compiles to WonderMethod(456, "hello")
WonderMethod();     // compiles to WonderMethod(123, "hello")
With C# 5, you can put a special attribute on an optional parameter and the compiler will fill in the value not with a constant but with information about the calling method. This means we can implement the Logger.Trace to automagically pick up where it’s being called from:

public static void Trace(string message, [CallerFilePath] string sourceFile = "", [CallerMemberName] string memberName = "") {
  string msg = String.Format("{0}: {1}.{2}: {3}",
    DateTime.Now.ToString("yyyy-mm-dd HH:MM:ss.fff"),  // Lurking 'minutes'/'months' bug introduced during .NET port in 2003 and has not been noticed because nobody ever looks at the log files because they contain too much useless detail
    sourceFile, memberName, message);
  Console.WriteLine(msg);  // in truly enterprisey code this would go to a log file
}
Now, if the caller calls Log.Trace("some message") the compiler will fill in the missing arguments not with the empty string, but with the file and member where the call happens:

// In file Validation.cs
public void ValidateDatabase() {
  Log.Trace("Entering method");
  // compiles to Log.Trace("Entering method", "Validation.cs", "ValidateDatabase")
  Log.Trace("Exiting method");
  // compiles to Log.Trace("Exiting method", "Validation.cs", "ValidateDatabase")
}
Notice that the parameters to which you apply the attributes must be optional. If they aren’t optional, the C# compiler will require the calling code to provide them, and the provided values will override the defaults.
Another example of how you can use this is in implementing INotifyPropertyChanged without needing either literal strings, expression magic or mystic weavers:

public class ViewModelBase : INotifyPropertyChanged {
  public event PropertyChangedEventHandler PropertyChanged;  // usual INPC boilerplate

  protected void Set<T>(ref T field, T value, [CallerMemberName] string propertyName = "") {
    if (!Object.Equals(field, value)) {
      field = value;
      var handler = PropertyChanged;
      if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
  }
}

public class Widget : ViewModelBase {
  private int _sprocketSize;
  public int SprocketSize {
    get { return _sprocketSize; }
    set { Set(ref _sprocketSize, value); }  // Compiler fills in "SprocketSize" as propertyName
  }
}
For what it’s worth, you can also get the line number of the calling code using [CallerLineNumber]. This may be useful for diagnostic methods, but if you really need it, that may be a sign that the calling code is just a bit too enterprisey.

Using loop variables in lambdas

Technically, this is a fix to a long-standing cause of confusion and suffering. But it makes C# that bit more usable, so I’m going to mention it anyway.
Since C# 3, it’s been quicker and easier to write anonymous functions than named ones, thanks to lambda syntax. Anonymous functions are widely used in LINQ, but they’re also used in many other cases where you want to quickly parameterise behaviour without investing in some humungous hierarchy of classes and interfaces and virtual functions. An important feature of anonymous functions is that they can capture variables from their local environment.

 Here’s an example:

public static IEnumerable<int> GetGreaterThan(IEnumerable<int> source, int n) {
  return source.Where(i => i > n);
}

Here, i => i > n is an anonymous function that captures the value of n. For example, if n is 17, then the function is i => i > 17.

In previous versions of C#, if you wrote a loop, you couldn’t use the loop variable in a lambda. Actually, it was rather worse than that. You could use the loop variable in a lambda, but it would give you the wrong results — it would use the value of the loop variable at the time the loop was exited, not at the time the variable was captured.

For example, here’s a function which returns a collection of ‘adder’ functions, one ‘adder’ for each addend in the input:

public static List<Func<int, int>> GetAdders(params int[] addends) {
  var funcs = new List<Func<int, int>>();
  foreach (int addend in addends) {
    funcs.Add(i => i + addend);
  }
  return funcs;
}
Let’s take it for a spin:

var adders = GetAdders(1, 2, 3, 4, 5);
foreach (var adder in adders) {
  Console.Write(adder(10) + " ");
}
// Printout: 15 15 15 15 15
Clearly this is horribly wrong! Every function in the returned collection has ended up capturing 5 as its addend. This is because they closed over the loop variable, addend, and the final value of the loop variable was 5.

To make this work in C# 3 and 4, you have to remember to copy the loop variable into a local variable (within the scope of the loop), and have your lambda close over the local variable:

foreach (var addend_ in addends) {
  var addend = addend_;  // DON'T GO NEAR THE LOOP VARIABLE
  funcs.Add(i => i + addend);
}
Because the functions are closing over a local variable rather than the loop variable, the value is now preserved and you get the correct results.

In C# version 5.0, there are two key features: Async Programming and Caller Information.

2. Async Feature
Two new keywords are used for the async feature: the async modifier and the await operator. A method
marked with the async modifier is called an async method. This new feature helps us a lot in async
programming. For example, in Winforms programming, the UI thread is blocked while we use
HttpWebRequest to synchronously request a resource on the Internet. From the perspective of user
experience, we cannot interact with the form before the request is done.
private void btnTest_Click(object sender, EventArgs e)
{
    var request = WebRequest.Create(txtUrl.Text.Trim());
    var content = new MemoryStream();

    using (var response = request.GetResponse())
    {
        using (var responseStream = response.GetResponseStream())
        {
            responseStream.CopyTo(content);
        }
    }

    txtResult.Text = content.Length.ToString();
}


In the above example, after we click the Test button, we cannot make any change to the form
before the txtResult textbox shows the result.
In the past, we could also use the BeginGetResponse method to send an async request, as the sample
in MSDN shows, but it takes a lot of effort to implement.
Now, we can simply use the code below to make the request asynchronously:
private async void btnTest_Click(object sender, EventArgs e)
{
    var request = WebRequest.Create(txtUrl.Text.Trim());
    var content = new MemoryStream();

    Task<WebResponse> responseTask = request.GetResponseAsync();
    using (var response = await responseTask)
    {
        using (var responseStream = response.GetResponseStream())
        {
            Task copyTask = responseStream.CopyToAsync(content);
            // The await operator suspends execution of the method until the task is
            // completed. In the meantime, control is returned to the UI thread.
            await copyTask;
        }
    }

    txtResult.Text = content.Length.ToString();
}

The await operator is applied to the returned task. It suspends execution of the method until the
task is completed; meanwhile, control is returned to the caller of the suspended method.

3. Caller Information
Caller Information can help us in tracing, debugging and creating diagnostic tools. It helps us
avoid the duplicated code that is otherwise invoked in many methods for the same purpose, such
as logging and tracing.
We can get the following information about the caller method: the member name ([CallerMemberName]),
the source file path ([CallerFilePath]) and the source line number ([CallerLineNumber]).
The example below is a common practice prior to the Caller Information feature:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        InsertLog("Main");
        MethodB();
    }

    static void MethodA()
    {
        InsertLog("MethodA");
        MethodB();
    }

    static void MethodB()
    { }

    static void InsertLog(string methodName)
    {
        Console.WriteLine("{0} called method B at {1}", methodName, DateTime.Now);
    }
}



In both the Main and MethodA methods, InsertLog is invoked for logging. Now we can change the
code to the lines below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        MethodB();
    }

    static void MethodA()
    {
        MethodB();
    }

    static void MethodB(
        [CallerMemberName] string memberName = "",
        [CallerFilePath] string sourceFilePath = "",
        [CallerLineNumber] int sourceLineNumber = 0)
    {
        // The compiler fills in the caller details; no separate InsertLog call is needed.
        Console.WriteLine("{0} called method B at {1} (line {2})",
            memberName, sourceFilePath, sourceLineNumber);
    }
}





Tuesday, 17 July 2012

All About SQL Azure Tutorial

Microsoft Windows Azure offers several choices for data storage. These include Windows Azure storage and SQL Azure. You may choose to use one or both in your particular project. Windows Azure storage currently contains three types of storage structures: tables, queues or blobs (which can optionally be virtual machines).

SQL Azure is a relational data storage service in the cloud. Some of the benefits of this offering are the ability to use a familiar relational development model that includes most of the standard SQL Server language (T-SQL), tools and utilities. Of course, working with well-understood relational structures in the cloud, such as tables, views and stored procedures, also results in increased developer productivity when working on this new platform type. Other benefits include a reduced need for physical database administration tasks such as server setup, maintenance and security, as well as built-in support for reliability, high availability and scalability.

Getting Started with SQL Azure

To start working with SQL Azure, you'll first need to set up an account. If you are an MSDN subscriber, then you can use up to three SQL Azure databases (maximum size 1 GB each) for up to 16 months as a developer sandbox (see the subscription details). You may prefer to sign up for a regular SQL Azure account (storage and data transfer fees apply); to do so, go here. Yet another option is to get a trial 30-day account (no credit card required). To do the latter, go here and use signup code DPEWR02.

After you've signed up for your SQL Azure account, the simplest way to initially access it is via the web portal. You must first sign in with the Windows Live ID that you've associated with your Windows Azure account. After you sign in, you can create your server installation and get started developing your application. The number of servers and/or databases you are allowed to create will depend on the type of account you've signed up for.

An example of the SQL Azure web management portal is shown in Figure 1. Here you can see a server and its associated databases. You’ll note that there is also a tab on this portal for managing the Firewall Settings for this particular SQL Azure installation.


Figure 1 Summary Information for a SQL Azure Server

As you initially create your SQL Azure server installation, it will be assigned a random string for the server name. You’ll generally also set the administrator username, password, geographic server location and firewall rules at the time of server creation. You can select the physical (data center) location for your SQL Azure installation at the time of server creation. You will be presented with a list of locations to choose from. As of this writing, Microsoft has 6 physical data centers, located world-wide to select from. If your application front-end is built in Windows Azure, you have the option to locate both that installation and your SQL Azure installation in the same geographic location by associating the two installations together by using an Affinity Group.

By default there is no client access to your newly-created server, so you'll first have to create firewall rules for your client IPs. SQL Azure uses port 1433, so make sure that port is open for your client application as well. When connecting to SQL Azure you'll use the username@servername format for your username. SQL Azure supports SQL Authentication only; Windows Authentication is not supported. Multiple Active Result Sets (MARS) connections are supported.

Open connections will ‘time out’ after 30 minutes of inactivity. Also connections can be dropped for long-running queries or transactions or excessive resource usage. Development best practices in your applications around connections are to open, use and then close those connections manually, to include retry connection logic for dropped connections and to avoid caching connections because of these behaviors. Another best practice is to encrypt your connection string to prevent man-in-the-middle attacks. For best practices and code samples for SQL Azure connections (including a suggested library which includes patterned connection retry logic), see this TechNET blog post.

You will be connected to the master database by default if you don't specify a database name in the connection string. In SQL Azure the T-SQL statement USE is not supported for changing databases, so you will generally specify the database you want to connect to in the connection string (assuming you want to connect to a database other than master). Figure 2 below shows an example of an ADO.NET connection:

Figure 2 Format for SQL Azure connection string
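For reference, a connection of the kind Figure 2 illustrates might look like the following sketch (the server name, database name and credentials are placeholders; note the username@servername format and the explicitly named database):

```csharp
// Illustrative only - substitute your own server, database and login
string connectionString =
    "Server=tcp:servername.database.windows.net,1433;" +
    "Database=AdventureWorksDW;" +
    "User ID=mylogin@servername;" +
    "Password=myPassword;" +
    "Encrypt=True;";

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();
    // ... issue commands against the selected database ...
}
```

Because USE is not supported, switching databases means opening a new connection with a different Database value.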

Setting up Databases

After you've successfully created and connected to your SQL Azure server, you'll usually want to create one or more databases. Although you can create databases using the SQL Azure portal, you may prefer to do so using some of the other tools, such as SQL Server Management Studio 2008 R2. By default, you can create up to 149 databases for each SQL Azure server installation; if you need more databases than that, you must call the Azure business desk to have this limit increased.

When creating a database you must select the maximum size. The current options for sizing (and billing) are Web or Business Edition. Web Edition, the default, supports databases of 1 or 5 GB total. Business Edition supports databases of up to 50 GB, sized in increments of 10 GB – in other words, 10, 20, 30, 40 and 50 GB. Currently, both editions are feature-equivalent.

You set the size limit for your database when you create it by using the MAXSIZE keyword. You can change the size limit or the edition (Web or Business) after the initial creation using the ALTER DATABASE statement. If you reach your size or capacity limit for the edition you've selected, then you will see error code 40544. The database size measurement does NOT include the master database or any database logs. For more detail about sizing and pricing, see this link. Although you set a maximum size, you are billed based on actual storage used.
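A sketch of the sizing DDL (the database name and sizes are illustrative):

```sql
-- Create a Business Edition database with a 30 GB size cap
CREATE DATABASE SalesDb (MAXSIZE = 30 GB, EDITION = 'business');

-- Later, grow the cap to the 50 GB maximum
ALTER DATABASE SalesDb MODIFY (MAXSIZE = 50 GB);

-- Or drop down to a 5 GB Web Edition database
ALTER DATABASE SalesDb MODIFY (EDITION = 'web', MAXSIZE = 5 GB);
```

These statements run against the master database, since USE is not available.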

It’s important to realize that when you are creating a new database on SQL Azure, you are actually creating three replicas of that database. This is done to ensure high availability. These replicas are completely transparent to you. Currently, these replicas are in the same data center. The new database appears as a single unit for your purposes.  Failover is transparent and part of the service you are paying for is a SLA of 99.9% uptime.

After you’ve created a database, you can quickly get the connection string information for it by selecting the database in the list on the portal and then clicking the ‘Connection Strings’ button. You can also test connectivity via the portal by clicking the ‘Test Connectivity’ button for the selected database. For this test to succeed you must enable the ‘Allow Microsoft Services to Connect to this Server’ option on the Firewall Rules tab of the SQL Azure portal.

Creating Your Application

After you’ve set up your account, created your server, created at least one database and set a firewall rule so that you can connect to the database, then you can start developing your application using this data source.
Unlike with Windows Azure data storage options such as tables, queues or blobs, when you are using SQL Azure as a data source for your project, there is nothing to install in your development environment. If you are using Visual Studio 2010, you can just get started – no additional SDKs, tools or anything else are needed.

Although many developers will choose to use a Windows Azure front-end with a SQL Azure back-end, this configuration is NOT required. You can use ANY front-end client with a supported connection library such as ADO.NET or ODBC. This could include, for example, an application written in Java or PHP. Of note is that connecting to SQL Azure via OLE DB is currently not supported.

If you are using Visual Studio 2010 to develop your application, then you can take advantage of the included ability to view or create many types of objects in your selected SQL Azure database installation directly from the Visual Studio Server Explorer view. These objects are Tables, Views, Stored Procedures, Functions and Synonyms. You can also see the data associated with these objects using this viewer. For many developers, using Visual Studio 2010 as their primary tool to view and manage SQL Azure data will be sufficient. The Server Explorer view window is shown in Figure 3. Both a local installation of a database and a cloud-based instance are shown. You'll note that the tree nodes differ slightly in the two views. For example, there is no Assemblies node in the cloud installation because custom assemblies are not supported in SQL Azure.

Figure 3 Viewing Data Connections in Visual Studio

Of note also in Visual Studio is that using the Entity Framework with SQL Azure is supported. Also you may choose to use Data-Tier application packages (or DACPACs) in Visual Studio. You can create, import and / or modify DACPACS for SQL Azure schemas in VS2010.
Another developer tool that you can now use to create applications which use SQL Azure as a data source is Visual Studio Light Switch. This is a light-weight development environment, based on the idea of 'data and screens', created for those who are tasked with part-time coding, especially those who create 'departmental applications'. To try out the beta version of Visual Studio Light Switch, go to this location.

Shown below (Figure 4) is connecting to a SQL Azure data source using the Light Switch IDE.


Figure 4 Connecting to SQL Azure in Visual Studio Light Switch

If you wish to use SQL Azure as a data source for Business Intelligence projects, then you'll use Visual Studio Business Intelligence Development Studio 2008 (the R2 version is needed to connect to SQL Azure). In addition, Microsoft has begun a limited (invite-only) customer beta of SQL Azure Reporting Services, a version of SQL Server Reporting Services for Azure. Microsoft has announced that on the longer-term roadmap for SQL Azure, it is working to cloud-enable versions of the entire BI stack, that is, Analysis Services, Integration Services and Reporting Services.

More forward-looking, Microsoft has announced that in vNext of Visual Studio the BI toolset will be integrated into the core product with full SQL Azure compatibility and intellisense. This project is code-named ‘Juneau’ and is expected to go into public beta later this year. For more information (and demo videos of Juneau) see this link.
As I mentioned earlier, another tool you may want to use to work with SQL Azure is SQL Server Management Studio 2008 R2. You actually have access to a fuller set of operations for SQL Azure databases in SSMS than in Visual Studio 2010. I find that I use both tools, depending on which operation I am trying to complete. An example of an operation available in SSMS (and not in Visual Studio 2010) is creating a new database using a T-SQL script. Another example is the ability to easily perform index operations (create, maintain, delete and so on). An example is shown in Figure 5 below.

Although working with SQL Azure databases in SSMS 2008 R2 is quite similar to working with an on-premises SQL Server instance, tasks and functionality are NOT identical. This is mostly due to product differences. For example, you may remember that in SQL Azure the USE statement for changing databases is NOT supported. A common way to change databases when working in SSMS is to right-click an open query window, click 'Connection' > 'Change Connection' on the context-sensitive menu, and then enter the next database's connection information in the 'Connect to Database Engine' dialog box that pops up.

Generally when working in SSMS, if an option isn't supported in SQL Azure, you either simply can't see it (folders are missing from the Explorer tree, or context-sensitive menu options are unavailable when connected to a SQL Azure instance), or you are presented with an error when you try to execute a command that isn't supported in this version of SQL Server. You'll also note that many of the features available through GUI interfaces in SSMS for SQL Server are exposed only via T-SQL script windows for SQL Azure. These include common features such as CREATE DATABASE, CREATE LOGIN, CREATE TABLE, CREATE USER, etc.

One tool that SQL Server DBAs often 'miss' in SQL Azure is SQL Server Agent. This functionality is NOT supported. However, there are third-party tools as well as community projects, such as the one on CodePlex here, which provide examples of using alternate technologies to create 'SQL-Agent-like' functionality for SQL Azure.

Figure 5 Using SQL Server Management Studio 2008 R2 to Manage SQL Azure

As mentioned in the discussion of Visual Studio 2010 support, newly released in SQL Server 2008 R2 is a data-tier application or DAC. DAC pacs are objects that combine SQL Server or SQL Azure database schemas and objects into a single entity.

You can use either Visual Studio 2010 (to build) or SQL Server 2008 R2 SSMS (to extract) to create a DAC from an existing database. If you wish to use Visual Studio 2010 to work with a DAC, then you’d start by selecting the SQL Server Data-Tier Application project type in Visual Studio 2010. Then, on the Solution Explorer, right-click your project name and click ‘Import Data Tier Application’. A wizard opens to guide you through the import process. If you are using SSMS, start by right-clicking on the database you want to use in the Object Explorer, click Tasks, and then click ‘Extract Data-tier Application’ to create the DAC. The generated DAC is a compressed file that contains multiple T-SQL and XML files. You can work with the contents by right-clicking the .dacpac file and then clicking Unpack. SQL Azure supports deleting, deploying, extracting, and registering DAC pacs, but does not support upgrading them.   Figure 6 below, shows the template in Visual Studio 2010 for working with DACPACs


Figure 6 The ‘SQL Server Data-tier Application’ template in Visual Studio 2010 (for DACPACs)

Also of note is that Microsoft has released a CTP version of enhanced DACPACs, called BACPACs, that support import/export of schema AND data (via BCP). Find more information here . Another name for this set of functionality is the import/export tool for SQL Azure.

Another tool you can use to connect to SQL Azure is the Silverlight-based SQL Azure Web Management tool, shown in Figure 7 below. It's intended as a zero-install client to manage SQL Azure installations. To access this tool, navigate to the main Azure portal here, then click the 'Database' node in the tree view on the left side. Next, click the database that you wish to work with and then click the 'Manage' button on the ribbon. This opens the login box for the web client. After you enter the login credentials, a new web page opens which allows you to work with that database's Tables, Views, Queries and Stored Procedures.


Figure 7 Using the Silverlight Web Portal to manage a SQL Azure Database
Of course, because the portal is built on Silverlight, you can view, monitor and manage the exposed aspects of SQL Azure with any browser using the web management tool. Shown below in Figure 8 is the portal running on a MacOS with Google Chrome.


Figure 8 Using the Silverlight Web Portal to manage a SQL Azure Database on a Mac with Google Chrome

Still another tool you can use to connect to a SQL Azure database is SQLCMD (more information here ). Of note is that even though SQLCMD is supported, the OSQL command-line tool is not supported by SQL Azure.

Using SQL Azure

So now you’ve connected to your SQL Azure installation and have created a new, empty database. So what exactly can you do with SQL Azure?

Specifically, you may be wondering what the limits on creating objects are.

And after those objects have been created, how do you populate them with data? As I mentioned at the beginning of this article, SQL Azure provides relational cloud data storage, but it does have some subtle feature differences from an on-premises SQL Server installation. Starting with object creation, let's look at some of the key differences between the two.
You can create the most commonly used objects in your SQL Azure database using familiar methods. The most commonly used relational objects (which include tables, views, stored procedures, indices and functions) are all available. There are some differences around object creation, though. I'll summarize the differences in the next paragraph.

SQL Azure tables MUST contain a clustered index. Non-clustered indices CAN be subsequently created on selected tables. You CAN create spatial indices; you can NOT create XML indices. Heap tables are NOT supported. Of the CLR types, only the geospatial types (such as Geography and Geometry) ARE supported, and support for the HierarchyId data type IS included; other CLR types are NOT supported. View creation MUST be the first statement in a batch, and view (or stored procedure) creation WITH ENCRYPTION is NOT supported. Functions CAN be scalar, inline or multi-statement table-valued functions, but can NOT be any type of CLR function.
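A sketch of the table rules above (table and column names are illustrative):

```sql
-- SQL Azure tables must have a clustered index; a clustered
-- primary key satisfies the requirement (heaps are not allowed)
CREATE TABLE dbo.Customer
(
    CustomerKey  int           NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerKey)
);

-- Non-clustered indices can be created subsequently
CREATE NONCLUSTERED INDEX IX_Customer_Name
    ON dbo.Customer (CustomerName);
```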

There is a complete reference of partially supported T-SQL statements for SQL Azure on MSDN here .

Before you get started creating your objects, remember that you will connect to the master database if you do not specify a different one in your connection string. In SQL Azure, the USE (database) statement is not supported for changing databases, so if you need to connect to a database other than the master database, then you must explicitly specify that database in your connection string as shown earlier.

Data Migration and Loading

If you plan to create SQL Azure objects using an existing, on-premises database as your source data and structures, then you can simply use SSMS to script an appropriate DDL to create those objects on SQL Azure. Use the Generate Scripts Wizard and set the ‘Script for the database engine type’ option to ‘for SQL Azure’.

An even easier way to generate a script is to use the SQL Azure Migration Wizard available as a download from CodePlex here . With this handy tool you can generate a script to create the objects and can also load the data via bulk copy using bcp.exe.
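A bcp-based transfer might look like the following sketch (the server, database, table, file and login names are all placeholders):

```shell
# Export the local source table to a native-format data file
bcp SourceDb.dbo.Customer out Customer.dat -n -S localhost -T

# Import the file into the SQL Azure database (note the user@servername login format)
bcp AzureDb.dbo.Customer in Customer.dat -n \
    -S tcp:servername.database.windows.net -U mylogin@servername -P myPassword
```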

You could also design a SQL Server Integration Services (SSIS) package to extract and run a DML or DDL script. If you are using SSIS, you'd most commonly design a package that extracts the DDL from the source database, scripts that DDL for SQL Azure, and then executes that script on one or more SQL Azure installations. You might also choose to load the associated data as part of this package's execution path. For more information about working with SSIS, see here.

Also of note regarding DDL creation and data migration is the CTP release of SQL Azure Data Sync Services (here). You can also see this service in action in a Channel 9 video here. Currently SQL Azure Data Sync Services works via Synchronization Groups (HUB and MEMBER servers) and then via scheduled synchronization at the level of individual tables in the databases selected for synchronization. For even more about Data Sync, listen in to this recent MSDN geekSpeak show by new SQL Azure MVP Ike Ellis on his experiences with SQL Azure Data Sync.

You can use the Microsoft Sync Framework Power Pack for SQL Azure to synchronize data between a data source and a SQL Azure installation. As of this writing, this tool is in CTP release and is available here . If you use this framework to perform subsequent or ongoing data synchronization for your application, you may also wish to download the associated SDK.
What if your source database is larger than the maximum size for a SQL Azure database installation? That could mean larger than the absolute maximum of 50 GB for the Business Edition, or larger than some smaller limit based on the other program options.

Currently, customers must partition (or shard) their data manually if their database size exceeds the program limits. Microsoft has announced that it will be providing a federation (or auto-partitioning utility) for SQL Azure in the future. For more information about how Microsoft plans to implement federation, read here.  To support federations new T-SQL syntax will be introduced. From the blog post referenced above, Figure 9, below, shows a conceptual representation of that new syntax.


Figure 9 SQL Azure Federation (conceptual syntax)

As of this writing, the SQL Azure Federation customer beta program has been announced. To sign up, go here.
It’s important to note that T-SQL table partitioning is NOT supported in SQL Azure. There is also a free utility called Enzo SQL Shard (available here) that you can use for partitioning your data source.
You’ll want to take note of some other differences between SQL Server and SQL Azure regarding data loading and data access. Added recently is the ability to copy a SQL Azure database via the Database copy command. The syntax for a cross-server copy is as follows:
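The cross-server copy takes the form of a CREATE DATABASE ... AS COPY OF statement; a sketch with placeholder names:

```sql
-- Copy a database hosted on another SQL Azure server
CREATE DATABASE DestinationDb
    AS COPY OF SourceServer.SourceDb;

-- The copy runs asynchronously; monitor its progress from master
SELECT * FROM sys.dm_database_copies;
```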

The T-SQL INSERT statement IS supported (with the exceptions of updating with views or providing a locking hint inside of an INSERT statement). Related further to data migration is that T-SQL DROP DATABASE and other DDL commands have additional limits when executed against a SQL Azure installation. Also the T-SQL RESTORE and ATTACH DATABASE commands are not supported. Finally, the T-SQL statement EXECUTE AS (login) is not supported.

If you are migrating from a data source other than SQL Server, there are also some free tools and wizards available to make the job easier. Specifically, there is an Access to SQL Azure Migration wizard and a MySQL to SQL Azure Migration wizard. Both work similarly to the SQL Azure Migration wizard in that they allow you to map the source schema to a destination schema, create the appropriate DDL, and then configure and execute the data transfer via bcp. A screen from the MySQL to SQL Azure Migration wizard is shown in Figure 10 below.
Here are links for some of these tools:

1) Access to SQL Azure Migration Wizard – here
2) MySQL to SQL Azure Migration Wizard – here
3) Oracle to SQL Server Migration Wizard (you will have to manually set the target version to ‘SQL Azure’ for appropriate DDL script generation) – here 
Figure 10 Migration from MySQL to SQL Azure wizard screen
For even more information about migration, you may want to listen to a recently recorded 90-minute webcast with more details (and demos!) of migration scenarios to SQL Azure - listen in here. Joining me on this webcast is the creator of the open-source SQL Azure Migration Wizard, George Huey. I also posted a version of this presentation (both slides and screencast) on my blog - here.

Data Access and Programmability

Now let’s take a look at common programming concerns when working with cloud data.

First you'll want to consider where to set up your development environment. If you are an MSDN subscriber and can work with a database under 1 GB, then it may well make sense to develop using only a cloud installation (sandbox). In this way there will be no issue with migration from local to cloud. With a regular (i.e. non-MSDN-subscriber) SQL Azure account you could develop directly against your cloud instance (most probably using a cloud-located copy of your production database). Of course, developing directly in the cloud is not practical for all situations.

If you choose to work with an on-premises SQL Server database as your development data source, then you must develop a mechanism for synchronizing your local installation with the cloud installation. You could do that using any of the methods discussed earlier, and tools like Data Sync Services and Sync Framework are being developed with this scenario in mind.

As long as you use only the supported features, the method for having your application switch from an on-premise SQL Server installation to a SQL Azure database is simple – you need only to change the connection string in your application.

Regardless of whether you set up your development installation locally or in the cloud, you’ll need to understand some programmability differences between SQL Server and SQL Azure. I’ve already covered the T-SQL and connection string differences. In addition all tables must have a clustered index at minimum (heap tables are not supported). As previously mentioned, the USE statement for changing databases isn’t supported.

This also means that there is no support for distributed (cross-database) transactions or queries, and linked servers are not supported.
Other options not available when working with a SQL Azure database include:
- Full-text indexing
- CLR custom types (however, the built-in Geometry and Geography CLR types are supported)
- RowGUIDs (use the uniqueidentifier type with the NEWID function instead)
- XML column indices
- Filestream datatype
- Sparse columns

Default collation is always used for the database. To make collation adjustments, set the column-level collation to the desired value using the T-SQL COLLATE statement. And finally, you cannot currently use SQL Profiler or the Database Tuning Wizard on your SQL Azure database.
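A column-level collation adjustment might look like this sketch (the table, column and collation name are illustrative):

```sql
-- Override the database default collation for a single column
ALTER TABLE dbo.Customer
    ALTER COLUMN CustomerName nvarchar(100)
    COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL;
```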

Some important tools that you CAN use with SQL Azure for tuning and monitoring are the following:

- SSMS Query Optimizer to view estimated or actual query execution plan details and client statistics

- Select Dynamic Management views to monitor health and status

- Entity Framework to connect to SQL Azure after the initial model and mapping files have been created by connecting to a local copy of your SQL Azure database.

Depending on what type of application you are developing, you may be using SSAS, SSRS, SSIS or Power Pivot. You CAN also use any of these products as CONSUMERS of SQL Azure database data. Simply connect to your SQL Azure server and selected database using the methods already described in this article.

Another developer consideration is in understanding the behavior of transactions. As mentioned, only local (within the same database) transactions are supported. Also it is important to understand that the only transaction isolation level available for a database hosted on SQL Azure is READ COMMITTED SNAPSHOT. Using this isolation level, readers get the latest consistent version of data that was available when the statement STARTED. SQL Azure does not detect update conflicts. This is also called an optimistic concurrency model, because lost updates, non-repeatable reads and phantoms can occur. Of course, dirty reads cannot occur.

Yet another method of accessing SQL Azure data programmatically is via OData. This support is currently in CTP (available here); you can try out exposing SQL Azure data via an OData interface by configuring this at the CTP portal. For a well-written introduction to OData, read here. Shown in Figure 11 below is one of the (CTP) configuration screens for exposing SQL Azure data as OData.

Figure 11 SQL OData (CTP) configuration

Database Administration

Generally when using SQL Azure, the administrator role becomes one of logical installation management. Physical management is handled by the platform. From a practical standpoint this means there are no physical servers to buy, install, patch, maintain or secure. There is no ability to physically place files, logs, tempdb and so on in specific physical locations. Because of this, there is no support for the T-SQL commands USE <database>, FILEGROUP, BACKUP, RESTORE or SNAPSHOT.
There is no support for the SQL Agent on SQL Azure. Also, there is no ability (or need) to configure replication, log shipping, database mirroring or clustering. If you need to maintain a local, synchronized copy of SQL Azure schemas and data, then you can use any of the tools discussed earlier for data migration and synchronization – they work both ways. You can also use the DATABASE COPY command. Other than keeping data synchronized, what are some other tasks that administrators may need to perform on a SQL Azure installation?
Most commonly, there will still be a need to perform logical administration. This includes tasks related to security and performance management. Of note is that in SQL Azure there are two new database roles in the master database which are intended for security management. These roles are dbmanager (similar to SQL Server's dbcreator role) and loginmanager (similar to SQL Server's securityadmin role). Also, certain common usernames are not permitted: these include 'sa', 'admin', 'administrator', 'root' and 'guest'. Finally, passwords must meet complexity requirements. For more, read Kalen Delaney's TechNet article on SQL Azure security here.
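Granting these roles, while connected to the master database, might look like the following sketch (the login name and password are placeholders):

```sql
-- Run in the master database
CREATE LOGIN provisioner WITH PASSWORD = 'StrongP@ssw0rd!';
CREATE USER provisioner FOR LOGIN provisioner;

-- dbmanager can create databases; loginmanager can manage logins
EXEC sp_addrolemember 'dbmanager', 'provisioner';
EXEC sp_addrolemember 'loginmanager', 'provisioner';
```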
Additionally, you may be involved in monitoring for capacity usage and associated costs. To help you with these tasks, SQL Azure provides a public Status History dashboard that shows current service status and recent history (an example of history is shown in Figure 12) here .
Figure 12 SQL Azure Status History

There is also a new set of error codes that both administrators  and developers should be aware of when working with SQL Azure.  These are shown in Figure 13 below.  For a complete set of error codes for SQL Azure see this MSDN reference.  Also, developers may want to take a look at this MSDN code sample on how to programmatically decode error messages.

Figure 13 SQL Azure error codes
SQL Azure provides a high security bar by default. It forces SSL encryption with all permitted (via firewall rules) client connections. Server-level logins and database-level users and roles are also secured. There are no server-level roles in SQL Azure. Encrypting the connection string is a best practice. Also, you may wish to use Windows Azure certificates for additional security. For more detail read here .
In the area of performance, SQL Azure includes features such as automatically killing long-running transactions and idle connections (over 30 minutes). Although you cannot use SQL Profiler or trace flags for performance tuning, you can use SQL Query Optimizer to view query execution plans and client statistics. A sample query to SQL Azure with Query Optimizer output is shown in Figure 15 below. You can also perform statistics management and index tuning using the standard T-SQL methods.
Figure 15 SQL Azure query with execution plan output shown
There is a select list of dynamic management views (covering database, execution or transaction information) available for database administration as well. These include sys.dm_exec_connections, sys.dm_exec_requests, sys.dm_exec_sessions, sys.dm_tran_database_transactions, sys.dm_tran_active_transactions and sys.dm_db_partition_stats. For a complete list of supported DMVs for SQL Azure, see here.
There are also some new views such as sys.database_usage and sys.bandwidth_usage. These show the number, type and size of the databases and the bandwidth usage for each database so that administrators can understand SQL Azure billing. Also this blog post gives a sample of how you can use T-SQL to calculate estimated cost of service. Here is yet another MVP’s view of how to calculate billing based on using these views. A sample is shown in Figure 16. In this view, quantity is listed in KB. You can monitor space used via this command:

SELECT SUM(reserved_page_count) * 8192 FROM sys.dm_db_partition_stats
Figure 16 Bandwidth Usage in SQL Query
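A query against the usage views follows the usual T-SQL pattern; for example, a sketch summarizing billable bandwidth per database (the grouping columns are an assumption based on the view's documented shape):

```sql
-- Summarize bandwidth usage per database (quantity is reported in KB)
SELECT database_name, direction, SUM(quantity) AS total_kb
FROM sys.bandwidth_usage
GROUP BY database_name, direction;
```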
Further, for SQL Azure performance monitoring, Microsoft has released an installable tool which will help you to better understand performance. It produces reports on 'longest running queries', 'max CPU usage' and 'max IO usage'. Shown in Figure 17 below is a sample report screen for the first metric. You can download this tool from this location.
Figure 17 Top 10 CPU consuming queries for a SQL Azure workload
You can also access the current charges for the SQL Azure installation via the SQL Azure portal by clicking on the Billing link at the top-right corner of the screen. Below in Figure 18 is an example of a bill for SQL Azure.
Figure 18 Sample Bill for SQL Azure services

Learn More and Roadmap

Product updates announced at TechEd US / May 2011 are as follows:
  1. SQL Azure Management REST API – a web API for managing SQL Azure servers.
  2. Multiple servers per subscription – create multiple SQL Azure servers per subscription.
  3. JDBC Driver – updated database driver for Java applications to access SQL Server and SQL Azure.
  4. DAC Framework 1.1 – making it easier to deploy databases and in-place upgrades on SQL Azure.
For deeper technical details you can read more in the MSDN documentation here .
Microsoft has also announced that it is working to implement database backup and restore, including point-in-time restore, for SQL Azure databases. This is a much-requested feature for DBAs, and Microsoft has said that it is prioritizing the implementation of this feature set due to demand.
To learn more about SQL Azure, I suggest you download the Windows Azure Training Kit, which includes SQL Azure hands-on labs, whitepapers, videos and more. The training kit is available here. There is also a project on CodePlex, here, which includes downloadable code, sample videos and more. You will also want to read the SQL Azure Team Blog here, and check out the MSDN SQL Azure Developer Center here.
If you want to continue to preview upcoming features for SQL Azure, you’ll want to visit SQL Azure Labs here. Shown below in Figure 19 is a list of the current CTP programs. As of this writing, those programs include OData, Data Sync and Import/Export. SQL Azure Federations has been announced, but is not yet open to customers.

Figure 19 SQL Azure CTP programs

A final area you may want to check out is the Windows Azure Data Market. This is a place for you to make data sets that you choose to host on SQL Azure publicly available, either at no cost or for a fee. Access is via Windows Live ID. You can connect via existing clients, such as the latest version of the Power Pivot add-in for Excel, or programmatically. In any case, this is a place for you to ‘advertise’ (and sell) access to data you’ve chosen to host on SQL Azure.


Are you still reading?  Wow! You must be really interested in SQL Azure.  Are you using it?  What has your experience been?  Are you interested, but NOT using it yet?  Why not?  Are you using some other type of cloud data storage (relational or non-relational)?  What is it, and how do you like it?  I welcome your feedback.
Happy coding!

What is SQL Azure?

SQL Azure is Microsoft’s solution for SQL Server in the cloud.  It has a few key differences from traditional SQL Server:
  • Database maximum size is 1 GB or 10 GB depending on your price plan, but no larger
  • You can have as many databases as you want, but they may not all end up on the same server
  • There’s no cross-database querying
  • There’s no SQL Azure encryption
  • It supports a subset of SQL Server’s T-SQL commands, but not all of them
  • It supports a subset of SQL Server’s datatypes, but not all of them
I like to think of it as SQL Server Lite – it’s an excellent stepping stone to the real thing.  Build your app from the ground up for SQL Azure, and it’s pretty easy to support full-blown SQL Server down the road if your needs change.

How does SQL Azure pricing compare to SQL Server costs?

Azure is a lot cheaper if you only need one database – $10/month for 1 GB of data, and $100/month for 10 GB of data. Pricing is per database, not per application – you can write multiple applications that query the same database.  Price is only one part of the equation, though.
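As an illustrative sketch of that per-database pricing model (using the figures quoted above – $10 for 1 GB, $100 for 10 GB – which are this article's numbers, not a live rate card), estimating a monthly bill:

```python
# Hypothetical cost sketch using the per-database prices quoted above:
# $10/month for a 1 GB database, $100/month for a 10 GB database.
def monthly_cost(database_sizes_gb):
    """Estimate the monthly SQL Azure bill for a list of database sizes (GB)."""
    return sum(10 if size <= 1 else 100 for size in database_sizes_gb)
```

So one 1 GB database costs $10/month, while a 1 GB plus a 10 GB database costs $110/month – which is why the per-database model stops making sense as your database count grows.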
The more databases you need, or the larger those databases get, the more it makes sense to have your own SQL Server.  Generally speaking, I tell companies that if any of these apply, SQL Azure probably isn’t for you:
  • You have more than one department that needs to store data in SQL Server permanently
  • You don’t have at least 2 very senior-level programmers who understand the concept of sharding
  • You face security concerns involving HIPAA, SOX, or PCI compliance
  • You’ve already built a schema and/or an application
I’m not saying you can’t meet those needs with SQL Azure, but generally speaking, you’re not the target market for Azure v1.0.  On the other hand, if you have the following requirements, Azure might be a very good fit for you:
  • No full-time IT staff, no DBAs
  • No datacenter (or maybe not even an office)
  • No (or very, very little) legacy code, and you’re building a new app from scratch
  • You don’t mind putting in extra man-hours in the beginning to avoid capital investments
In that situation, Azure might make good sense, but you’re going to have to plan your way around Azure’s limitations.

Why does SQL Azure have a 10GB limit?

I can’t speak for Microsoft, but it makes sense to put a small database size limit in place to make sure queries run fast.  If you’ve got a 10GB database, it’s easy to make sure every query runs blazing fast: throw hardware at the problem, throw a lot of indexes at it, and you’ll get good performance.  Keeping things small also makes backup and recovery easier, and makes high availability design easier.
There are no SQL Azure RAM or CPU limits, but you also have no control over how much CPU power or memory your database gets.  SQL Server 2008 has a Resource Governor that allows database administrators to throttle queries based on login or database, but Azure doesn’t support that feature.

How can I load balance SQL Azure or do cross-database joins?

Since SQL Azure databases max out at 10GB, it would be great if we could create several databases on the same server and use a view to select from all of them.  No dice – that’s not supported.  As of February 2010, your application has to know which database to query.  If you need to combine results from multiple databases, you’ll have to query each database and then join the results together inside the application.
Scaling SQL Azure this way is called sharding – partitioning your data into shards, each stored in a different database (or even on different servers).  Design your app like this, and it will scale like crazy.  If you’re the kind of coder who loves reading large-scale architecture war stories, you’ll love sharding.  If none of that sounds familiar, you don’t want to architect your own sharded database – bring in somebody who lives this stuff.
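As a minimal sketch of the idea (the connection strings and hash scheme here are hypothetical illustrations, not anything SQL Azure provides): route each key to a shard deterministically, and "join" across databases by querying every shard and combining the rows in the application.

```python
import hashlib

# Hypothetical shard map -- these server/database names are illustrative only.
SHARDS = [
    "Server=shard0.example.net;Database=Sales0",
    "Server=shard1.example.net;Database=Sales1",
    "Server=shard2.example.net;Database=Sales2",
]


def shard_for(key: str) -> int:
    """Deterministically map a sharding key (e.g. a customer id) to a shard."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % len(SHARDS)


def query_all_shards(run_query):
    """Scatter a query to every shard and gather the rows in the application,
    since cross-database joins aren't available on the server side."""
    rows = []
    for connection_string in SHARDS:
        rows.extend(run_query(connection_string))
    return rows
```

The key property is that `shard_for` always sends the same customer to the same database, so single-customer queries hit one shard, while reports that span customers pay the scatter-gather cost.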

How do I handle the SQL Azure backup process?

Unfortunately, right now, you have to build one yourself: SQL Azure does not support the BACKUP command.  You’ll need to figure out how to sync your data out to an external database, and while you’re working on that design, keep in mind that you pay Microsoft for data transfers in and out of SQL Azure.
Microsoft states that Azure is highly available and contains its own backups across multiple servers.  However, just as RAID is not a backup, high availability is not a backup either.  You need backups if you want protection from any of these scenarios:
  • Your app might accidentally delete or modify data
  • Your users might accidentally delete or modify data
  • A hacker might purposely delete or modify data
  • Or believe it or not, Microsoft services just might go down.
Remember, folks, as much as I love Microsoft, we’re talking about the company that brought you the Blue Screen of Death.  You would be irresponsible not to back up your data for your own protection.
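A minimal sketch of such a roll-your-own sync job, assuming your tables have a monotonically increasing key to track what has already been copied (sqlite3 stands in here for the real SQL Azure source and on-premise destination connections, whose driver calls would differ):

```python
import sqlite3


def sync_new_rows(src, dst, table="orders"):
    """Copy rows the destination hasn't seen yet, keyed on a monotonically
    increasing id column -- a common shape for a roll-your-own sync job.
    Returns the number of rows copied."""
    last_id = dst.execute(f"SELECT COALESCE(MAX(id), 0) FROM {table}").fetchone()[0]
    rows = src.execute(
        f"SELECT id, payload FROM {table} WHERE id > ?", (last_id,)
    ).fetchall()
    dst.executemany(f"INSERT INTO {table} (id, payload) VALUES (?, ?)", rows)
    dst.commit()
    return len(rows)
```

Each run only moves rows beyond the destination's high-water mark, which matters here because you pay for every byte transferred out of SQL Azure. (Updates and deletes need extra machinery – a modified-date column or a tombstone table – that this sketch omits.)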

How good is SQL Azure performance?

The biggest SQL Azure bottleneck is your bandwidth, because every Azure query travels from your application to Microsoft’s servers and back.  Even so, Azure developers report that they’re very satisfied with the query speed, and usually say it’s faster than the servers they’ve built themselves.
The problem arises when you’re not satisfied with SQL Azure’s performance.  You’ll want to ask these questions:
  • What else is querying my database right now?
  • What does my query execution plan look like?
  • What indexes could I add to make this query run faster?
  • Is another Azure database hammering the server right now?
Unfortunately, SQL Azure doesn’t support the commands, dynamic management views (DMVs), or functions that would help you answer those questions.

Can I use SQL Azure as a backup with log shipping or database mirroring?

Don’t think of SQL Azure as your disaster recovery solution – it doesn’t work that way.  If you design your database for sharding, then you can work out a method to sync between Azure and full-blown SQL Server, but right now you’re very much reinventing the wheel.  Your next question would be how to run Azure inside your own datacenter, so that you could keep identical environments between production (Azure) and disaster recovery (your place).

How do I run Microsoft SQL Azure on commodity hardware?

You can’t run SQL Azure in-house on your own hardware.  It only runs in Microsoft’s datacenters.  If you want to develop against Azure but you don’t want to pay for Azure, you have two options.
Option #1 is to buy SQL Server Developer Edition for under $50.  It’s functionally equivalent to SQL Server Enterprise Edition, but the license prohibits using it in production.  You’ll have to restrict yourself to using only the Azure-level features, though – if you accidentally design your schema to use, say, full-text search, it’ll work fine on Developer Edition, but it won’t work in Azure.
Option #2 is to get an MSDN subscription with Azure benefits.

What SQL Azure plans do MSDN subscribers get for free?

It depends on your level of MSDN/Visual Studio: