Friday, April 19, 2013

AutoComplete Service in ASP.NET

* JavaScript References *

<link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/themes/base/jquery-ui.css"
        rel="stylesheet" type="text/css" />

    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>

    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/jquery-ui.min.js"></script>
---------------------------------------------------------------------------------------------------------
* JavaScript Function *

<script type="text/javascript">

$(function () {
    $(".tb").autocomplete({
        source: function (request, response) {
            $.ajax({
                url: "Report/AutoCompleteService.asmx/GetMemberName",
                // Serialize the search term safely instead of concatenating strings
                data: JSON.stringify({ MemberName: request.term }),
                dataType: "json",
                type: "POST",
                contentType: "application/json; charset=utf-8",
                success: function (data) {
                    // ASP.NET AJAX wraps the result array in a "d" property
                    response($.map(data.d, function (item) {
                        return { value: item };
                    }));
                },
                error: function (xhr, textStatus, errorThrown) {
                    alert(textStatus);
                }
            });
        },
        minLength: 2
    });
});
    </script>
-------------------------------------------------------------------------------------------------------
* HTML Code *

<div class="demo">
        <div class="ui-widget">
            <label for="tbAuto">
                Enter Email:
            </label>
            <asp:TextBox ID="tbAuto" class="tb" runat="server">
            </asp:TextBox>
        </div>
    </div>

---------------------------------------------------------------------------------------------------------
* Add Web Service File *

AutoCompleteService.cs
Put the code below in this file (the code-behind of AutoCompleteService.asmx), and make sure the service class is decorated with the [ScriptService] attribute so it can be called from client script:

[WebMethod]
    [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
    public string[] GetMemberName(string MemberName)
    {
        DataTable dtMemberName = SBusinessLayer.Member.GetAllProfile(0, MemberName);

        // Collect the matching names; a List<string> avoids resizing an array
        // on every row (requires using System.Collections.Generic)
        List<string> memberNames = new List<string>();
        foreach (DataRow row in dtMemberName.Rows)
        {
            memberNames.Add(row["MemberName"].ToString());
        }
        return memberNames.ToArray();
    }
---------------------------------------------------------------------------------------------------------




Saturday, April 13, 2013

Get a SELECT Statement into a Variable in SQL Server

DECLARE @remain VARCHAR(MAX)
DECLARE @MID VARCHAR(255)
SET @MID = '5'

SET @remain = 'SELECT DATEDIFF(dd, GETDATE(), DATEADD(mm, mt.ValidMonth, py.UpdatedDate - 1))
        FROM Payment.PaymentTransaction py
        INNER JOIN Member.BasicInfo b3 ON b3.MemberID = py.MemberID
        INNER JOIN Master.MemberType mt ON mt.MemberTypeID = b3.MemberTypeID
        WHERE '
SET @remain = @remain + 'b3.MemberID = ' + @MID
PRINT @remain
EXEC (@remain)

Note: every variable you concatenate into the string must be (or be explicitly converted to) a character data type; otherwise the concatenation will fail.
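If the value comes from user input, building the statement by concatenation is open to SQL injection. A safer sketch, using the built-in sp_executesql against the same tables as above, binds the value as a typed parameter instead:

DECLARE @sql NVARCHAR(MAX)
DECLARE @MemberID INT
SET @MemberID = 5

SET @sql = N'SELECT DATEDIFF(dd, GETDATE(), DATEADD(mm, mt.ValidMonth, py.UpdatedDate - 1))
        FROM Payment.PaymentTransaction py
        INNER JOIN Member.BasicInfo b3 ON b3.MemberID = py.MemberID
        INNER JOIN Master.MemberType mt ON mt.MemberTypeID = b3.MemberTypeID
        WHERE b3.MemberID = @MemberID'

-- The parameter is passed separately, so its value is never spliced into the SQL text
EXEC sp_executesql @sql, N'@MemberID INT', @MemberID = @MemberID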

Friday, April 12, 2013

SQL Server DateDiff Example

Unit of time  | Query                                                                                      | Result
------------- | ------------------------------------------------------------------------------------------ | -------
NANOSECOND    | SELECT DATEDIFF(NANOSECOND, '2011-09-23 17:15:22.5500000', '2011-09-23 17:15:22.5543213')   | 4321300
MICROSECOND   | SELECT DATEDIFF(MICROSECOND, '2011-09-23 17:15:22.5500000', '2011-09-23 17:15:22.5543213')  | 4321
MILLISECOND   | SELECT DATEDIFF(MILLISECOND, '2011-09-23 17:15:22.004', '2011-09-23 17:15:22.548')          | 544
SECOND        | SELECT DATEDIFF(SECOND, '2011-09-23 17:15:30', '2011-09-23 17:16:23')                       | 53
MINUTE        | SELECT DATEDIFF(MINUTE, '2011-09-23 18:03:23', '2011-09-23 17:15:30')                       | -48
HOUR          | SELECT DATEDIFF(HH, '2011-09-23 18:03:23', '2011-09-23 20:15:30')                           | 2
WEEK          | SELECT DATEDIFF(WK, '09/23/2011 15:00:00', '12/11/2011 14:00:00')                           | 12
DAY           | SELECT DATEDIFF(DD, '09/23/2011 15:00:00', '08/02/2011 14:00:00')                           | -52
DAYOFYEAR     | SELECT DATEDIFF(DY, '01/01/2011 15:00:00', '08/02/2011 14:00:00')                           | 213
MONTH         | SELECT DATEDIFF(MM, '11/02/2011 15:00:00', '01/01/2011 14:00:00')                           | -10
QUARTER       | SELECT DATEDIFF(QQ, '01/02/2011 15:00:00', '08/01/2011 14:00:00')                           | 2
YEAR          | SELECT DATEDIFF(YY, '01/02/2011 15:00:00', '01/01/2016 14:00:00')                           | 5

Splitting SQL Server backups to multiple files





Why split database backups?

  1. Faster backups: writing database backups to multiple files located on multiple drives achieves higher I/O throughput, reducing the time necessary to perform the backup.
  2. Using the split-file backup option, you can easily break a very large backup into multiple files of the same size.
  3. Because the backup is split into multiple smaller files, each file is easier for the DBA to copy across the network or to tape.
How to perform a split-file backup:

  1. Open SQL Server Management Studio, expand the Databases node, right-click the database you want to back up, and click Tasks → Back Up…. For the purposes of this article, I am using the sample database named AdventureWorks2008R2.
  2. In the Back Up Database dialog box, click the Add… button under Destination to specify the paths of the backup files. In this example the AdventureWorks2008R2 database is backed up to two files: the first backup stripe resides on the C drive and the second on the D drive. Writing the files to two different drives reduces backup time, since SQL Server can use multiple threads while performing the backup. With striped backups each file comes out the same size, which helps you utilize disk space more effectively. For example, if your full database backup is 10 GB and you want no file to exceed 2 GB, split the backup across five files; once the backup completes, each file will be approximately 2 GB in size.
  3. Finally, to start the striped backup of the database, click OK in the Back Up Database dialog box.

    Please note that if you are using SQL Server 2008 Enterprise Edition, SQL Server 2008 R2, or the SQL Server 2012 Standard and Enterprise Editions, you can also leverage the built-in backup compression feature to further reduce the backup size.
    Using the T-SQL code below, you can perform a full backup of the AdventureWorks2008R2 database, splitting the backup across two files.
    BACKUP DATABASE [AdventureWorks2008R2]
    TO DISK = N'C:\DBBackups\AdventureWorks2008R2_SplitFile1.BAK',
       DISK = N'D:\DBBackups\AdventureWorks2008R2_SplitFile2.BAK'
    WITH NOFORMAT, NOINIT,
         NAME = N'AdventureWorks2008R2-Full Database Backup',
         SKIP, NOREWIND, NOUNLOAD,
         STATS = 10
    GO
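
    On editions that support it, backup compression can be combined with the split backup. A minimal sketch, assuming the same file paths as above (COMPRESSION is the documented option):

    BACKUP DATABASE [AdventureWorks2008R2]
    TO DISK = N'C:\DBBackups\AdventureWorks2008R2_SplitFile1.BAK',
       DISK = N'D:\DBBackups\AdventureWorks2008R2_SplitFile2.BAK'
    WITH COMPRESSION,  -- reduces the size of each stripe on supported editions
         NAME = N'AdventureWorks2008R2-Compressed Split Backup',
         STATS = 10
    GO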

    How to restore from the split files:

    4. Open SQL Server Management Studio, right-click the Databases node, and select the Restore Database… option from the context menu.
    5. In the Restore Database dialog box, specify the name of the destination database (for this example, the database will be restored as RestoreDatabaseFromSplitFiles), specify the source for the restore under From Device, and select the checkbox under Restore. One important thing to note is that you need both of the database backup files to successfully restore the database.
    6. Finally, to restore the database from the striped backup files, click OK in the Restore Database dialog box.

    If you don't specify every split file with which the full backup was performed, you will get the error message shown below. Hence, make sure all of the split database backup files are secured safely so you can successfully restore your database in case of a failure.
    Error Message
    Msg 3132, Level 16, State 1, Line 1
    The media set has 2 media families but only 1 are provided. All members must be provided.
    Msg 3013, Level 16, State 1, Line 1
    RESTORE DATABASE is terminating abnormally.
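
    Before attempting a restore, you can check how many media families (split files) a backup set expects by reading the media header. A small sketch using RESTORE LABELONLY; the FamilyCount column of its result set reports the number of files in the media set:

    RESTORE LABELONLY
    FROM DISK = N'C:\DBBackups\AdventureWorks2008R2_SplitFile1.BAK'
    GO
    -- FamilyCount = 2 for this split backup, so both files are required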


    Restoring a database from multiple files

    Using the T-SQL code below, one can restore the database from multiple split files:
    RESTORE DATABASE [RestoreDatabaseFromSplitFiles]
    FROM DISK = N'C:\DBBackups\AdventureWorks2008R2_SplitFile1.BAK',
         DISK = N'D:\DBBackups\AdventureWorks2008R2_SplitFile2.BAK'
    WITH FILE = 1,
         MOVE N'AdventureWorks2008R2_Data' TO N'D:\DB\RestoreDatabaseFromSplitFiles_Data.mdf',
         MOVE N'AdventureWorks2008R2_Log' TO N'D:\DB\RestoreDatabaseFromSplitFiles_Log.LDF',
         NOUNLOAD, STATS = 10
    GO


Set Height / Width of CKEditor



<script type="text/javascript">
        // Editor dimensions in pixels
        CKEDITOR.config.width = 810;
        CKEDITOR.config.height = 120;
        // Stylesheet applied to the contents of the editing area
        CKEDITOR.config.contentsCss = '/css/yourstyle.css';
    </script>

Tuesday, April 2, 2013

SQL Query to Insert Only If the Row Does Not Exist

INSERT INTO Employee (Role_ID, FirstName, LastName, DateOfBirth, Active)
SELECT @Role_ID, @FirstName, @LastName, @DateOfBirth, @Active
WHERE NOT EXISTS (
    SELECT 1 FROM Employee
    WHERE FirstName = @FirstName
      AND LastName = @LastName
      AND DateOfBirth = @DateOfBirth
)

IF @@ROWCOUNT = 0 SELECT 'User already exists!'
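
Under concurrent inserts, two sessions can both pass the NOT EXISTS check before either row commits, producing duplicates. A sketch of a more race-resistant variant, assuming the same table and parameters, holds a range lock across the check and the insert:

BEGIN TRANSACTION

INSERT INTO Employee (Role_ID, FirstName, LastName, DateOfBirth, Active)
SELECT @Role_ID, @FirstName, @LastName, @DateOfBirth, @Active
WHERE NOT EXISTS (
    -- UPDLOCK + HOLDLOCK keeps the key range locked until commit,
    -- so a second session cannot slip in between the check and the insert
    SELECT 1 FROM Employee WITH (UPDLOCK, HOLDLOCK)
    WHERE FirstName = @FirstName
      AND LastName = @LastName
      AND DateOfBirth = @DateOfBirth
)

IF @@ROWCOUNT = 0 SELECT 'User already exists!'

COMMIT TRANSACTION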

Monday, April 1, 2013

7 performance tips for faster SQL queries

1. Don't use UPDATE instead of CASE

This issue is very common, and though it's not hard to spot, many developers often overlook it because using UPDATE has a natural flow that seems logical.

Take this scenario, for instance: You're inserting data into a temp table and need it to display a certain value if another value exists. Maybe you're pulling from the Customer table and you want anyone with more than $100,000 in orders to be labeled as "Preferred." Thus, you insert the data into the table and run an UPDATE statement to set the CustomerRank column to "Preferred" for anyone who has more than $100,000 in orders. The problem is that the UPDATE statement is logged, which means it has to write twice for every single write to the table. The way around this, of course, is to use an inline CASE statement in the SQL query itself. This tests every row for the order amount condition and sets the "Preferred" label before it's written to the table. The performance increase can be staggering.
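
A sketch of the idea, using the Customer table and "Preferred" rank from the scenario above (the temp-table and amount column names are illustrative):

-- Slow version: insert, then a logged UPDATE rewrites the qualifying rows
-- INSERT INTO #CustomerTemp (CustomerID, OrderTotal) SELECT CustomerID, OrderTotal FROM Customer
-- UPDATE #CustomerTemp SET CustomerRank = 'Preferred' WHERE OrderTotal > 100000

-- Faster version: decide the rank inline, so each row is written only once
INSERT INTO #CustomerTemp (CustomerID, OrderTotal, CustomerRank)
SELECT CustomerID,
       OrderTotal,
       CASE WHEN OrderTotal > 100000 THEN 'Preferred' END
FROM Customer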



2. Don't blindly reuse code

This issue is also very common. It's very easy to copy someone else's code because you know it pulls the data you need. The problem is that quite often it pulls much more data than you need, and developers rarely bother trimming it down, so they end up with a huge superset of data. This usually comes in the form of an extra outer join or an extra condition in the WHERE clause. You can get huge performance gains if you trim reused code to your exact needs.
3. Do pull only the number of columns you need

This issue is similar to issue No. 2, but it's specific to columns. It's all too easy to code all your queries with SELECT * instead of listing the columns individually. The problem again is that it pulls more data than you need. I've seen this error dozens and dozens of times. A developer does a SELECT * query against a table with 120 columns and millions of rows, but winds up using only three to five of them. At that point, you're processing so much more data than you need, it's a wonder the query returns at all. You're not only processing more data than you need, but you're also taking resources away from other processes.
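
A minimal illustration with hypothetical names:

-- Pulls all 120 columns across millions of rows
SELECT * FROM CustomerActivity

-- Pulls only the three columns the report actually uses
SELECT CustomerID, LastOrderDate, OrderTotal
FROM CustomerActivity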
4. Don't double-dip

Here's another one I've seen more times than I should have: A stored procedure is written to pull data from a table with hundreds of millions of rows. The developer needs customers who live in California and have incomes of more than $40,000. So he queries for customers that live in California and puts the results into a temp table; then he queries for customers with incomes above $40,000 and puts those results into another temp table. Finally, he joins both tables to get the final product.
Are you kidding me? This should be done in a single query; instead, you're double-dipping a superlarge table. Don't be a moron: Query large tables only once whenever possible -- you'll find how much better your procedures perform.
A slightly different scenario is when a subset of a large table is needed by several steps in a process, which causes the large table to be queried each time. Avoid this by querying for the subset and persisting it elsewhere, then pointing the subsequent steps to your smaller data set.
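
A sketch of the single-pass version, with hypothetical table and column names:

-- Double-dip version: two scans of a huge table, plus a join
-- SELECT * INTO #InCalifornia FROM Customers WHERE State = 'CA'
-- SELECT * INTO #HighIncome FROM Customers WHERE Income > 40000
-- SELECT ... FROM #InCalifornia c INNER JOIN #HighIncome h ON h.CustomerID = c.CustomerID

-- Single-pass version: one scan applies both conditions at once
SELECT CustomerID, FirstName, LastName, Income
FROM Customers
WHERE State = 'CA'
  AND Income > 40000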
5. Do know when to use temp tables

This issue is a bit harder to get a handle on, but it can yield impressive gains. You can use temp tables in a number of situations, such as keeping you from double-dipping into large tables. You can also use them to greatly decrease the processing power required to join large tables. If you must join a table to a large table and there's a condition on that large table, you can improve performance by pulling out the subset of data you need from the large table into a temp table and joining with that instead. This is also helpful (again) if you have several queries in the procedure that have to make similar joins to the same table.
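
A sketch under hypothetical names: pull the filtered subset out of the large table once, then join against the much smaller temp table:

-- Extract only the rows you need from the large table, once
SELECT CustomerID, Income
INTO #HighIncomeCA
FROM Customers
WHERE State = 'CA' AND Income > 40000

-- Subsequent joins hit the small temp table instead of the huge base table
SELECT o.OrderID, o.OrderDate, h.Income
FROM Orders o
INNER JOIN #HighIncomeCA h ON h.CustomerID = o.CustomerID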

6. Do pre-stage data

This is one of my favorite topics because it's an old technique that's often overlooked. If you have a report or a procedure (or better yet, a set of them) that will do similar joins to large tables, it can be a benefit for you to pre-stage the data by joining the tables ahead of time and persisting them into a table. Now the reports can run against that pre-staged table and avoid the large join.
You're not always able to use this technique, but when you can, you'll find it is an excellent way to save server resources.
Note that many developers get around this join problem by concentrating on the query itself and creating a view around the join so that they don't have to type the join conditions again and again. But the problem with this approach is that the query still runs for every report that needs it. By pre-staging the data, you run the join just once (say, 10 minutes before the reports) and everyone else avoids the big join. I can't tell you how much I love this technique; in most environments, there are popular tables that get joined all the time, so there's no reason why they can't be pre-staged.
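
A sketch with hypothetical names: run the expensive join once on a schedule, persist the result, and point the reports at it:

-- Run once, e.g. from a job shortly before the reports fire
IF OBJECT_ID('dbo.PreStagedCustomerOrders') IS NOT NULL
    DROP TABLE dbo.PreStagedCustomerOrders

SELECT c.CustomerID, c.CustomerName, o.OrderID, o.OrderTotal
INTO dbo.PreStagedCustomerOrders
FROM Customers c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID

-- Every report then reads the pre-joined table and skips the big join
SELECT CustomerName, SUM(OrderTotal) AS Total
FROM dbo.PreStagedCustomerOrders
GROUP BY CustomerName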
7. Do delete and update in batches

Here's another easy technique that gets overlooked a lot. Deleting or updating large amounts of data from huge tables can be a nightmare if you don't do it right. The problem is that both of these statements run as a single transaction, and if you need to kill them or if something happens to the system while they're working, the system has to roll back the entire transaction. This can take a very long time. These operations can also block other transactions for their duration, essentially bottlenecking the system.
The solution is to do deletes or updates in smaller batches. This solves your problem in a couple ways. First, if the transaction gets killed for whatever reason, it only has a small number of rows to roll back, so the database returns online much quicker. Second, while the smaller batches are committing to disk, others can sneak in and do some work, so concurrency is greatly enhanced.
Along these lines, many developers have it stuck in their heads that these delete and update operations must be completed the same day. That's not always true, especially if you're archiving. You can stretch that operation out as long as you need to, and the smaller batches help accomplish that. If you can take longer to do these intensive operations, spend the extra time and don't bring your system down.
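
A sketch of a batched delete (the batch size and names are illustrative):

-- Delete in chunks so each transaction stays small and quick to roll back
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM OrderHistory
    WHERE OrderDate < '2010-01-01'

    -- Stop once a batch deletes nothing
    IF @@ROWCOUNT = 0 BREAK
END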
Enjoy faster SQL

Follow these dos and don'ts whenever you can when writing queries or processes to improve your SQL performance, but remember to evaluate each situation individually to see which method works best -- there are no ironclad solutions. You'll also find that many of these tips will increase your concurrency and generally keep things moving more smoothly. And note that while the physical implementation of these tips will change from one vendor to the next, the concepts and issues they address exist in every SQL platform.

OOPs Part 1: Abstraction

  Abstraction in C# is a fundamental concept of object-oriented programming (OOP) that allows developers t...