Nielsen c32.tex V4 - 07/23/2009 2:20pm Page 732
Part V Data Connectivity
TABLE 32-3
(continued)

Class Type    Description

DataSet       Contains a local copy of the data retrieved by one or more
              DataAdapters. The DataSet uses a local copy of the data, so the
              connection to the database isn't live. A user makes all changes to
              the local copy of the data, and then the application requests an
              update. (Updates can occur in batch mode or one record at a time.)
              The DataSet maintains information about both the original and the
              current state of each modified row. If the original row data
              matches the data on the database, then the DataAdapter makes the
              requested update. If not, then the DataAdapter returns an error,
              which the application must handle. DataSets may be typed or
              untyped in ADO.NET. They are defined in the System.Data
              namespace, and they are not provider specific. Only the
              DataAdapter classes are associated with the provider.

Transaction   Implemented in ADO.NET 2.0. The ADO.NET transaction is, by
              default, a lightweight transactional container for a single data
              source. If the ADO.NET code enlists another data source in the
              transaction, then the transaction will transparently escalate to
              a distributed or multi-phase transaction, with no additional
              coding required by the developer.
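The escalation behavior described above can be sketched with the System.Transactions TransactionScope class (which requires a reference to System.Transactions.dll). The server, database, and table names below are placeholders, not examples from this chapter:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;   // reference System.Transactions.dll

class TransactionEscalationSketch
{
    static void Main()
    {
        // Both connection strings are placeholders for real data sources.
        string source1 = "Data Source=Server1;Initial Catalog=Db1;Integrated Security=SSPI";
        string source2 = "Data Source=Server2;Initial Catalog=Db2;Integrated Security=SSPI";

        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection cn1 = new SqlConnection(source1))
            {
                cn1.Open();   // enlists in a lightweight local transaction
                new SqlCommand("UPDATE dbo.T1 SET Col1 = 1", cn1).ExecuteNonQuery();
            }
            using (SqlConnection cn2 = new SqlConnection(source2))
            {
                cn2.Open();   // second data source: the transaction transparently
                              // escalates to a distributed (MSDTC) transaction
                new SqlCommand("UPDATE dbo.T2 SET Col1 = 1", cn2).ExecuteNonQuery();
            }
            scope.Complete(); // commit both; omitting this rolls everything back
        }
    }
}
```

No extra code is needed for the escalation itself; opening the second connection inside the scope is what triggers it.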
Managed providers
Five managed providers are included in ADO.NET 3.5:
■ OracleClient: The Microsoft provider for the Oracle database version 8.1.7 and later. This
provider requires that the Oracle client be installed.
■ OleDb: The bridge provider for using OLEDB providers in ADO.NET
■ SqlClient: The Microsoft provider for SQL Server 7.0 and later. Just as the OLEDB provider
directly connects SQL Server and ADO, the SQLClient uses a private protocol for direct
connection to SQL Server from ADO.NET.
■ SqlServerCe: The Microsoft provider for SQL Server CE mobile edition
■ ODBC: An API used to access data in relational databases or indexed sequential access
method (ISAM) data stores. Supported in SQL Server through the SQL Server Native Client
ODBC driver.
As noted, the OracleClient provider requires co-installation of the Oracle client. The OLEDB .NET
provider relies on MDAC components for some functionality. SqlClient and SqlServerCe are
contained in the SQL Native Client library. The .NET Framework data provider for ODBC, which
was previously available only as a Web download, now ships with the .NET Framework under the
System.Data.Odbc namespace.
While ADO.NET 1.x used the shared MDAC architecture, it did not have a single object that was
instantiated to create a command, a data reader, or a data adapter. It supported several per-provider,
class-specific objects contained in different libraries that performed these tasks. It was necessary for the
developer to select the namespace appropriate to the application. The selected namespace aligned with a
specific provider.
When working with SQL Server, that meant using the objects that Microsoft had optimized for native
SQL Server use, including SqlConnection, SqlCommand, SqlDataReader, and SqlDataAdapter.
These provider-specific classes are still supported. When the data source is SQL Server, the provider-
specific classes will be optimized and therefore are preferred. Better performance should be expected
from any of the per-provider classes. In addition, there is a common base class alternative with
ADO.NET 3.5 that you should consider for use on new projects. In an environment that must support
multiple RDBMS data sources, the common class may require fewer lines of code than duplicating the
same logic with each provider. However, the likelihood that coding with the common base class will
require some per-provider customizations is quite high.
Table 32-4 shows a cross-reference of System.Data.Common classes and the provider-specific
classes available for each of the class types listed previously in Table 32-3.
TABLE 32-4

ADO.NET 3.5 Class Reference by Namespace

Class Type          System.Data.       System.Data.      System.Data.        System.Data.       System.Data.
(from Table 32-3)   Common             SqlClient         OracleClient        OleDb              Odbc

Connection          DbConnection       SqlConnection     OracleConnection    OleDbConnection    OdbcConnection
ProviderFactory     DbProviderFactory  SqlClientFactory  OracleClientFactory OleDbFactory       OdbcFactory
Command             DbCommand          SqlCommand        OracleCommand       OleDbCommand       OdbcCommand
Parameter           DbParameter        SqlParameter      OracleParameter     OleDbParameter     OdbcParameter
Error               None               SqlError          None                OleDbError         None
Exception           DbException        SqlException      OracleException     OleDbException     OdbcException
DataAdapter         DbDataAdapter      SqlDataAdapter    OracleDataAdapter   OleDbDataAdapter   OdbcDataAdapter
DataReader          DbDataReader       SqlDataReader     OracleDataReader    OleDbDataReader    OdbcDataReader
Transaction         DbTransaction      SqlTransaction    OracleTransaction   OleDbTransaction   OdbcTransaction
When working with other RDBMS data sources in ADO.NET 1.1, the developer had to use a separate
but parallel set of ADO.NET provider classes for each provider. Included with the .NET Framework 1.1
were the classes of the System.Data.OleDb namespace, the System.Data.Odbc namespace, and the
System.Data.OracleClient namespace. Developers could also write custom providers, and third-party
sources made other provider namespaces available. The same interfaces used by Microsoft to create
the included provider classes can be used to create custom providers.
Introduced in ADO.NET 2.0, the System.Data.Common namespace contains base classes for writing
provider-independent (often termed generic) code. The provider is defined and accessed through the
DbProviderFactory class in this model. The provider components are DbConnection,
DbCommand, DbDataReader, and DbDataAdapter. The factory model creates the capability to
specify not only the connection string, but also the provider, in the application's configuration file,
the registry, another structure, or even user input read at the time the connection class is
instantiated. It is called a factory because of its capability to construct instances of provider-specific
classes automatically. The provider-specific classes, such as those in SqlClient, inherit from these
common base classes, and the factory returns instances of whichever provider's classes are
requested. In theory, the result is simplified and provider-agnostic code from the developer's
perspective.
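The factory model can be sketched as follows; the provider invariant name, connection string, and query are placeholders that would normally come from the application's configuration file:

```csharp
using System.Data;
using System.Data.Common;

class FactorySketch
{
    static void Main()
    {
        // In a real application both values come from configuration, so the
        // provider can change without recompiling; placeholders here.
        string providerName = "System.Data.SqlClient";
        string connString =
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI";

        // The factory constructs provider-specific instances behind
        // the common base class references.
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);

        using (DbConnection cn = factory.CreateConnection())
        {
            cn.ConnectionString = connString;
            cn.Open();

            DbCommand cmd = cn.CreateCommand();
            cmd.CommandText = "SELECT COUNT(*) FROM Production.Product";
            object rowCount = cmd.ExecuteScalar();
        }
    }
}
```

Swapping "System.Data.SqlClient" for "System.Data.OleDb" or "System.Data.Odbc" (with a matching connection string) is, in principle, the only change needed to retarget this code.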
In many cases it may be possible to achieve that coding objective with the System.Data.Common
classes. Applications that must be able to run on multiple database platforms are prime candidates for
generic ADO.NET. In reality, each specific provider needs extensions to the base model, so the base
classes are probably not completely usable with any .NET provider without some references to
provider-specific classes in the developer's code. Consequently, common base class coding may be
more complex than using the provider-specific namespaces at this time. Undoubtedly, the common
base classes' usability will continue to evolve in future releases of ADO.NET. It is conceivable that the
provider-specific classes derived from the common base class model may even fall out of favor in the
future. For now, it may be wise for all but the most adventurous developers to proceed with some
caution into the generic coding model.
For additional details on the common base classes and the DbProviderFactory, see the MSDN
white papers "Generic Coding with the ADO.NET 2.0 Base Classes and Factories," by Bob
Beauchemin, and "Writing Generic Data Access Code in ASP.NET 2.0 and ADO.NET 2.0," by
Dr. Shahram Khosravi. While these articles refer to ADO.NET 2.0, they still apply to ADO.NET 3.5
because the foundations and principles are the same.
The managed database providers for ADO.NET incorporate a certain level of intelligence not found in
the ADO version of the same providers. For example, the .NET providers make better use of database
connections. They make and break connections as necessary to ensure optimal use of server and client
resources. The differences between an unmanaged provider and a managed provider can easily be
categorized into four areas:
■ Object Access Technique: An unmanaged provider will use a COM ProgID to access required
objects. When working with a managed provider, the application relies on a command class.
The command class may still access the COM ProgID, but the command class hides access
details from the developer, which makes development faster and less prone to error. It also
enables streamlining of the SQL client data access and the possibility that the ADO.NET code
will have the same look and feel regardless of whether the underlying access is via a COM
ProgID.
■ Data Result Handling: The unmanaged provider relies on the Parameter objects of
the Command, along with the Recordset and Stream objects provided by ADO, to
present the data within the application. The managed equivalents include the Parameter,
DataSet, DataTable, and DataReader classes, along with the ExecuteReader,
ExecutePageReader, ExecuteNonQuery, and ExecuteScalar methods of the
command class and the XML stream. The unmanaged COM interface always incurs the
overhead of converting SQL data types to COM data types. The managed providers have the
distinct advantage here again because of the XML-based transport stream.
■ Data Updates: Because the unmanaged environment uses a live connection, resources are
in constant use and the user must have a connection to the database. In addition, the
developer spends plenty of time creating the commands by hand. The managed environment
uses connections only as needed to actually transfer data, so resource usage is more efficient
and the user doesn't need a connection at all times. As shown later in the chapter, the
managed environment also provides other automation techniques, including the
CommandBuilder class.
■ Data-Transfer Format: The unmanaged environment uses binary data transfer. The managed
data provider relies solely on XML for data transfer in ADO.NET 1.x. Distributed applications
in ADO.NET 2.0 and higher can also transfer data using binary serialization, with a
remarkable improvement in size and throughput over XML in cases where remoting is
appropriate. Remoting provides improved performance and interoperability in interprocess
communication between .NET applications. If either the source or the target of a data
transfer is not a .NET application, then XML provides a standards-based method to transfer
data, which requires much less code and therefore lower maintenance cost than unmanaged
transfer methods.
The differences in data-transfer methods between the managed XML and unmanaged data providers
require close examination. The XML data-transfer format used by a managed provider is better suited
to SOA and the Internet because it enables data transfer through firewalls that normally block binary
data transfers. However, XML is a bulkier data-transfer method and isn’t secure. In the past, it may have
been enticing to use ADO for local database needs and ADO.NET for distributed applications because
of the obvious size and performance penalties inherent in XML and the illusory security value of binary
over ASCII bits flying over the private network. The ADO.NET 3.5 binary serialization option provides a
performance advantage to remote streams and thereby helps reduce the often poorly founded temptation
to continue to support both ADO and ADO.NET.
SQL Native Client
With SQL Server 2008 and ADO.NET 3.5, access to SQL Server does not rely on MDAC. Instead, the
SQLClient is contained in a single .dll known as the SQL Native Client. The SQL Native Client is
expected to resolve the well-known consistency issues in the distribution of updates through the massive
MDAC file set, and improve security by limiting the number of interfaces — or surface area — exposed.
The proprietary .NET-to-SQL Server access protocols, as well as OLEDB and ODBC interfaces to SQL
Server and the traditional native interfaces to SQL Server, are contained in the SQL Native Client.
The SQL Native Client can only be used to access SQL Server 7.0 and later. While it would be nice
to say that the SQL Native Client is the only way to access SQL Server, that idea remains elusive. For
example, SQLXML is not integrated in the SQL Native Client. SQLXML is a method through which
SQL Server exposes the functionality of XML inside the .NET Framework. In all likelihood, many .NET
Framework applications will have dependencies on both MDAC and the SQL Native Client. After all,
one of the selling points for .NET is heterogeneous data sources. The monolithic SQL Native Client
should simplify maintenance and security of the contained interfaces, but it does not appear that there
will be a noticeable difference in how applications interact with SQL Server. Similarly, it is likely that
ODBC, OleDB, and in particular ADO, access will continue to be problematic to whatever extent the
SQL Native Client is dependent on MDAC components.
Introduced in SQL Server 2005, SQLXML is updated to version 4.0 in SQL Server 2008.
While not integrated into the SQL Native Client provider, SQLXML is supported in the
.NET Framework through the OLEDB-based SQLXMLOLEDB provider. Chapter 18, "Manipulating
XML Data," offers more information on working with XML in SQL Server.
Data types
ADO.NET uses XML to move data from the database to the application. This is not XML as in the XML
data type, but XML as in the actual carrier of all SQL data, just as TDS is the native binary carrier of
SQL data from SQL Server. XML can support all data types without prejudice because XML is a stream
of bits. Visual Studio applications, conversely, rely on managed data types to represent onscreen data. In
other words, XML adds yet another translation layer to the mix. ADO.NET moves the data in and out
of the database on an XML stream. Then it must unpackage the XML stream into the desired flavor of
relational data results at the application. All of the data restrictions, oddities, and problems that were
discussed earlier regarding ADO also apply to the data provided to the application by ADO.NET. Conse-
quently, developers must consider the same problems, such as data loss and compatibility issues, during
development.
Fortunately, the managed environment provides good marshaling for data types used in database
management. Using ADO.NET may introduce a small performance penalty to transport the inherently
bloated XML stream and to package and unpackage that stream, but it is unlikely that the managed
environment will introduce data-translation problems. In fact, the expectation is that in the near future,
ADO.NET’s XML-based I/O performance will be as good as ADO’s binary transport method. ADO.NET
uses .NET Framework data types, rather than defining its own data types, as was the case with ADO.
This should immediately help curb the proliferation of data type conversion errors at runtime.
One particularly compelling reason to favor ADO.NET over ADO, even in the short term, is ADO.NET’s
better support for SQL Server’s data types, such as
XML, VARCHAR(MAX), VARBINARY(MAX), the new
date and time data types, the new spatial data types, and any CLR user-defined data types.
The XML data type that can be stored in SQL Server 2005 and 2008 should not be confused with
the XML stream that ADO.NET uses to transport data. The XML data type is supported by the
System.Data.SqlTypes.SqlXml data type in ADO.NET 3.5. The XML data type can be used to
store XML documents and XML document fragments. ADO.NET 3.5 supports reading this data type
through the XmlReader class. Unlike other SQL Server 2005 and 2008 data types, XML is not
validated in ADO.NET but at the SQL server. This means that the XML data type has some risk of
raising errors during the DataAdapter's Update method — when changes are sent back to the
database — that would not be expected for other primitive data types. SQLXML 4.0 provides the
richest client-side support for the XML data type, including its own set of data access components.
Recall that SQLXML 4.0 is not provided as part of ADO.NET 2.0 or higher.
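Reading an XML column through the XmlReader class can be sketched as follows; the connection string, table, and column names are hypothetical:

```csharp
using System.Data.SqlClient;
using System.Xml;

class XmlColumnSketch
{
    static void Main()
    {
        // Database, table (dbo.EventLog), and xml column (EventData)
        // are placeholders for illustration.
        using (SqlConnection cn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        {
            cn.Open();

            // A single xml-typed column can be streamed to the client.
            SqlCommand cmd = new SqlCommand(
                "SELECT EventData FROM dbo.EventLog WHERE EventID = 1", cn);

            // ExecuteXmlReader returns an XmlReader over the streamed result.
            using (XmlReader reader = cmd.ExecuteXmlReader())
            {
                while (reader.Read())
                {
                    // Process nodes here. Note that validation against any
                    // schema collection happened at the server, not in ADO.NET.
                }
            }
        }
    }
}
```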
CLR user-defined data types deserve a special mention here. In order to use a CLR user-defined data
type (UDT) in .NET code, a structurally consistent version of the assembly that defines the data type
must be available — not only on the database server, but also on the application server. There is a small
amount of flexibility in the requirements. It is not necessary that the assembly be a strongly named
match on the server and at the application, only that the structure of the type be identically defined at
both locations.
This requirement makes sense. The system-defined primitive data types — int, char, bit, and so
on — must exist at both locations. The difference lies in the logistics required to keep a custom
feature such as the UDT synchronized at all tiers, compared to the comparatively static primitive
types. A UDT
that is used only on the SQL Server is likely to be of little real value, and a UDT that is deployed in a
production environment is likely to be somewhat fragile because of this awkward requirement. Careful
deployment planning is necessary when using UDTs in ADO.NET 3.5 applications.
For a complete discussion on creating and deploying CLR user-defined data types, see
Chapter 9, ‘‘Data Types, Expressions, and Scalar Functions.’’
DataAdapters and DataSets
Until this point in the chapter, ADO and ADO.NET have been shown to be very different technologies
despite the similarities in their names and a common purpose. The core of ADO is the Recordset.
To modify data through a Recordset, it is necessary to either use a server API cursor that stays in the
database or write code that sends updates back to the database once the changes are identified. Both
methods have proven to be fragile because of concurrency and contention issues, and both can
require a hefty chunk of code. To work with multiple Recordsets, it is necessary to create multiple
connections and juggle the Recordsets in code, or pull multiple Recordsets from the database as
a collection and work with them one at a time. Similarly, both methods have proven to be rigid and
can require a hefty chunk of often repetitive code.
The primary in-memory data store of ADO.NET is the DataSet. The DataAdapter is used to
connect the data source and the in-memory data store. Starting with ADO.NET 2.0, the DataSet
is more powerful than ever.
The DataSet is disconnected from the database in order to reduce database contention, thereby
opening the door to highly scalable applications. The code required to either fill the DataSet from
the database or update the database from the DataSet is minimal.
The DataAdapter populates the DataSet with a stream of data from the database, and handles any
inserts, updates, or deletes that must be propagated back to the database.
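The fill-edit-update cycle can be sketched as below; the connection string and table are placeholders, and the SELECT must include the table's primary key so update commands can be derived:

```csharp
using System.Data;
using System.Data.SqlClient;

class DataAdapterSketch
{
    static void Main()
    {
        // Connection string and table are placeholders.
        string connString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";

        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM dbo.Customers", connString);
        SqlCommandBuilder cb = new SqlCommandBuilder(da); // derives action commands

        DataSet ds = new DataSet();
        da.Fill(ds, "Customers");   // opens, streams the rows, then closes the connection

        // Work with the disconnected local copy; no connection is held here.
        ds.Tables["Customers"].Rows[0]["CompanyName"] = "Contoso";

        da.Update(ds, "Customers"); // reconnects only to propagate the changed rows
    }
}
```

Between Fill and Update the application holds no database connection, which is the source of the scalability benefit described above.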
DataSet queries can reference multiple DataTables within the DataSet, and enforcement of
defined relationships and data types occurs seamlessly at the moment data is modified in the
DataSet.
Index optimizations in ADO.NET 3.5 have enabled the size of the DataSet cache to increase
significantly, and performance has improved considerably over ADO.NET 1.x. A DataSet populated
with a relatively small amount of data may not realize benefits from these index optimizations. As
the DataSet grows larger, the optimizations become more important.
Visual Studio 2008 is well integrated with ADO.NET 3.5 classes. ADO.NET is already a remarkably
easy-to-use technology. Many features of the Visual Studio UI simplify ADO.NET usage even further.
For example, a DataSet can be bound to a control simply by dragging and dropping the DataSet
on the control at design time. In addition, using the DataAdapter Wizard and the DataSet Wizard
at design time will produce a typed DataSet and the code to populate it.
Using the CommandBuilder class will automatically create the UPDATE, INSERT, and DELETE
commands used by the DataAdapter to maintain the database, based on the SELECT statement
used to fill the DataSet: one line of code to generate three database operations.
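That one line can be seen in isolation in the sketch below, which prints the three derived commands; the connection string and table are placeholders, and the SELECT must include the primary key:

```csharp
using System;
using System.Data.SqlClient;

class CommandBuilderSketch
{
    static void Main()
    {
        // Connection string and table are placeholders.
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT ProductID, Name, ListPrice FROM Production.Product",
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI");

        // The one line: the builder derives the three action commands
        // from the SELECT statement's metadata.
        SqlCommandBuilder cb = new SqlCommandBuilder(da);

        Console.WriteLine(cb.GetInsertCommand().CommandText);
        Console.WriteLine(cb.GetUpdateCommand().CommandText);
        Console.WriteLine(cb.GetDeleteCommand().CommandText);
    }
}
```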
Binary serialization functionality in ADO.NET 3.5 also allows those larger DataSets to be
propagated out of the database and between tiers of the application much faster. This feature is most
useful in .NET remoting situations, in which different components of the same application can
exchange the binary data stream without the size and parsing penalties of XML.
By default, the DataReader class uses a read-only, forward-only cursor. The DataReader object is
able to offer a connected behavior or to enable disconnected-mode operation for applications. A user
can download data from the company database while using an Internet (or other) connection. In the
disconnected mode, the data is then available for viewing offline (but not for direct modification at
the database, because the connection to the database is lost). While perhaps more interesting for the
DataReader per se than the DataTable, ADO.NET 3.5 also provides methods to fill a DataTable
from a DataReader or a database-connected DataAdapter. Increased flexibility and the reduction
in lines of code required by these refinements to the DataTable class are two more reasons to
consider ADO.NET over ADO.
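Filling a DataTable from a DataReader can be sketched with the DataTable.Load method; the connection string and query are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

class DataTableLoadSketch
{
    static void Main()
    {
        // Connection string, table, and query are placeholders.
        using (SqlConnection cn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        {
            cn.Open();
            SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Customers", cn);

            DataTable table = new DataTable("Customers");
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                table.Load(dr);   // drains the forward-only reader into the table
            }
            // The table is now a disconnected copy, usable after cn is closed.
        }
    }
}
```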
One drawback to programmatically making a DataSet from a DataReader is that the DataSet
will not be typed. One advantage of typed DataSets is an opportunity for better performance by
not forcing any more implicit data-conversion layers or any more complex translations than
necessary in .NET. Another advantage is more readable code. For example, to locate and use a
column in a DataTable of an untyped DataSet, the code would look like the following:
City = dsOrder.Tables[n].Rows[n].ItemArray.GetValue(1).ToString();
Using an ADO.NET typed DataSet, the code is considerably friendlier:
City = dsOrder.Invoice[0].Name;
Clearly, the reference to the typed DataSet is more concise.
An ADO Recordset object has the advantage of requiring less code to access an individual value.
In addition, note that the field can be accessed by name when using a Recordset object, whereas
the untyped DataSet or stand-alone DataTable offers an integer-indexed item value that must be
derived from the field's position within the data result. The typed DataSet is the readability winner
hands down.
Using ADO.NET offers other significant advantages that may not be readily identified if an ‘‘ADO is
better’’ attitude is allowed to cloud the picture.
Starting in ADO.NET 2.0, ADO.NET provides the capability to return Multiple Active Result Sets
(MARS) for SQL Server 2005 and SQL Server 2008. MARS is easily understood as a mechanism for
pooling sessions in a similar manner to how connections have been pooled for quite some time in ADO
technologies. While there are still many reasons to create multiple connections, in some cases, such as
when multiple queries must maintain transactional consistency, a MARS connection can provide that
transactional consistency and offer performance benefits without the need for a two-phase transaction
riding on multiple connections.
SELECT statements can be interleaved as desired when the provider is MARS-enabled. When inserts,
updates, or deletes occur on a MARS-enabled connection, serialization of DML and SELECT
statements will occur. The MARS behavior is not enabled by default in SQL Server 2005 and 2008.
To enable the functionality, specify MultipleActiveResultSets=True in the
connection string.
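A MARS-enabled connection can be sketched as follows; the server, database, and table names are placeholders, and the essential part is the MultipleActiveResultSets=True keyword:

```csharp
using System.Data;
using System.Data.SqlClient;

class MarsSketch
{
    static void Main()
    {
        // Server and database are placeholders; the key setting is
        // MultipleActiveResultSets=True in the connection string.
        string connString = "Data Source=.;Initial Catalog=AdventureWorks;" +
                            "Integrated Security=SSPI;MultipleActiveResultSets=True";

        using (SqlConnection cn = new SqlConnection(connString))
        {
            cn.Open();

            SqlCommand outer = new SqlCommand(
                "SELECT ProductID FROM Production.Product", cn);

            using (SqlDataReader dr = outer.ExecuteReader())
            {
                while (dr.Read())
                {
                    // A second command on the SAME connection while the first
                    // reader is still open -- legal only when MARS is enabled.
                    SqlCommand inner = new SqlCommand(
                        "SELECT COUNT(*) FROM Production.ProductInventory " +
                        "WHERE ProductID = @id", cn);
                    inner.Parameters.AddWithValue("@id", dr.GetInt32(0));
                    object onHand = inner.ExecuteScalar();
                }
            }
        }
    }
}
```

Without the MARS keyword, the inner ExecuteScalar would throw an InvalidOperationException because the connection already has an open DataReader.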
Even though MARS is not enabled by default, overhead is involved in all connections in
order to support MARS. Microsoft has stated that setting MultipleActiveResultSets=False
will not eliminate this MARS-related overhead. Possibly the only reason to ever set MARS off is
to ensure an error condition when more than one query is submitted on a connection.
ADO does provide remote connectivity features, but like all other COM-based technologies, it uses the
Distributed Component Object Model (DCOM) as the basis for data exchange across a remote network.
This means that the connection-port number changes often and that the data itself is in binary form.
One benefit of this approach is that few crackers and hackers have the knowledge required to peek at
the data (assuming they can unscramble it after locating it). The disadvantages include an inability to
leverage ADO.NET 3.5-specific features, the high technical cost of transferring binary data, and poor
support for traversing web server firewalls. Good firewall design keeps ports closed and restricts
binary data.
Simply stated, DCOM is COM when the components must communicate on the network. With
DCOM, the components use RPC to communicate. In practice, security and integration with the
network protocol stack render DCOM a different technology from COM, even though they serve a
similar purpose. Readers seeking more details on DCOM should refer to Microsoft's 1996 white
paper titled "DCOM Technical Overview."

ADO.NET gets around the firewall problems by using XML to transfer the data using Hypertext
Transfer Protocol (HTTP) or some other appropriate data-transfer technology. The data is in ASCII
format and relies on a single port for data transfers. Many other tools in the developer's toolkit rise
above ADO.NET to better secure XML on the wire, such as SSL encryption and signing, certificate
exchange, and the self-contained Web Services Security protocol, in environments where text on the
wire is a security issue.
A host of additional enhancements are available in ADO.NET 3.5. It would be prudent for the
developer community to pay attention to whether actual behavior matches expectations. Some
enhancements are made possible by the new features of SQL Server 2008 and are covered throughout
this book. Some of the more interesting .NET enhancements include the following:
■ LINQ: Introduces query capabilities directly into the .NET Framework programming
languages. Query operations are expressed in the query language itself and not as string literals
embedded in the application code.
■ LINQ to SQL: Provides support for queries against an object model that is mapped to the
data structures of a Microsoft SQL Server database without an intermediate conceptual model.
Tables are represented by separate classes that are tightly coupled to the object model of the
database. LINQ to SQL translates the language-integrated queries in the object model into
T-SQL and sends those statements to the database for execution. The reverse happens when
the database returns the query results to the application.
■ LINQ to DataSet: Provides LINQ capabilities for disconnected data stored in a DataSet.
■ SQL Server 2008 SqlClient: Several new features have been added that support the new
enhancements to SQL Server 2008.
■ As discussed previously, new Date and Time data types have been added to SQL Server
2008.
■ Table-valued parameters provide the ability to marshal multiple rows of data from a client
application to SQL Server without requiring multiple round-trips to the database server.
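A table-valued parameter can be sketched as below. The table type, stored procedure, and data values are hypothetical and would need to exist on the server first:

```csharp
using System.Data;
using System.Data.SqlClient;

class TvpSketch
{
    // Assumes a table type and procedure already exist on the server, e.g.:
    //   CREATE TYPE dbo.OrderIDList AS TABLE (OrderID INT);
    //   CREATE PROC dbo.GetOrders @IDs dbo.OrderIDList READONLY AS ...
    static void Main()
    {
        DataTable ids = new DataTable();
        ids.Columns.Add("OrderID", typeof(int));
        ids.Rows.Add(10248);
        ids.Rows.Add(10249);
        ids.Rows.Add(10250);

        using (SqlConnection cn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        {
            cn.Open();
            SqlCommand cmd = new SqlCommand("dbo.GetOrders", cn);
            cmd.CommandType = CommandType.StoredProcedure;

            // All rows travel in a single round-trip as one structured parameter.
            SqlParameter p = cmd.Parameters.AddWithValue("@IDs", ids);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.OrderIDList";

            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read()) { /* consume results */ }
            }
        }
    }
}
```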
■ Whereas SQL Server 2005 restricted the size of UDTs to 8 KB, this restriction has been
removed in SQL Server 2008.
■ SQL Server performance metrics are exposed in the programming interface as properties of
the connection.
Four important extensions to ADO.NET can be used in server-side CLR code on SQL Server:
SqlContext, SqlPipe, SqlTriggerContext, and SqlDataRecord.
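Two of these extensions, SqlContext and SqlPipe, can be sketched in a CLR stored procedure; the procedure and table names are hypothetical, and the assembly must be deployed inside SQL Server:

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class ServerSideSketch
{
    // Deployed inside SQL Server as a CLR stored procedure.
    [SqlProcedure]
    public static void ListShippers()
    {
        // The "context connection" runs in-process on the caller's
        // session, with no network hop.
        using (SqlConnection cn = new SqlConnection("context connection=true"))
        {
            cn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT ShipperID, CompanyName FROM dbo.Shippers", cn);

            // SqlContext represents the calling session; SqlPipe streams
            // the result set straight back to the client.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```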
ADO.NET in Visual Studio 2008
There was some concern within the SQL Server DBA community when Microsoft announced that SQL
Server’s Enterprise Manager would be moving into Visual Studio in Visual Studio 2005. That seems to
have been a misunderstanding. SQL Server Management Studio has found its way into the common
Visual Studio graphical interface, but they remain separate products sharing little more than a common
look and feel.
A complete description of the Visual Studio integrated development environment (IDE) is beyond the
scope of this chapter. Fortunately, details of the IDE are well covered in the Microsoft Visual Studio
documentation. This section examines the IDE components that are particularly important for the
development of successful applications that use ADO.NET 3.5.
Server Explorer
It’s a smart bet that each ADO.NET 3.5 project will begin with the Server Explorer. The programmer
will either add a data connection or select an existing data connection when an ADO.NET 3.5 project
is created. A new data connection can be defined using the Data Source Configuration Wizard from the
Data menu item or in Server Explorer by selecting Add Connection from the context menu of the Data
Connections icon.
In addition to managing database connections, Server Explorer provides other useful database tools.
Database metadata diagramming, access, and generation capabilities are available to the developer from
Server Explorer. While the database components exposed by Server Explorer do not provide support
identical to Management Studio’s Object Explorer, they are quite similar. In some cases, Server Explorer
is better than Management Studio. For example, not only can database objects be created, viewed,
and modified from Server Explorer, but typed DataSets and DataTables can be created just by
dropping tables from Server Explorer onto the DataSet at design time.
Programming purists might object to this easy drag-and-drop approach to coding, but developers
shouldn’t discount the real performance and consistency benefits of typed data sets, which are often
derided in part because of the tedium and precision required to code a typed DataSet. Developers
and DBAs with a shortage of time and a wealth of workload will appreciate the help. Everyone stands
a much improved chance of meeting those ever-shortening project deadlines when the Visual Studio
helpers are used.
Server Explorer functionality is accessible programmatically through the ServiceController
class in the System.ServiceProcess namespace. This can provide elegant, high-level access to
automated actions such as starting and stopping services.
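A minimal sketch of that high-level access follows; the service name is a placeholder for a real installation (a named SQL Server instance would use a name such as MSSQL$InstanceName), and starting a service requires administrative rights:

```csharp
using System;
using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll

class ServiceControlSketch
{
    static void Main()
    {
        // "MSSQLSERVER" is the default-instance service name; placeholder here.
        using (ServiceController sc = new ServiceController("MSSQLSERVER"))
        {
            Console.WriteLine("{0} is {1}", sc.ServiceName, sc.Status);

            if (sc.Status == ServiceControllerStatus.Stopped)
            {
                sc.Start();
                sc.WaitForStatus(ServiceControllerStatus.Running,
                                 TimeSpan.FromSeconds(30));
            }
        }
    }
}
```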
Debugging ADO.NET
Interactively debugging application code in Visual Studio is straightforward and elegant. ADO.NET
code is specialized to the data source. It is of unquestionable benefit to be able to include not only the
application code, but also T-SQL code executing on the data source in the debugging session. Getting
the debugger to step into T-SQL code is somewhat painful. To be able to step into database code while
debugging an application in Visual Studio, all of the following conditions must be met:
■ Allow SQL/CLR Debugging must be enabled on the data connection in Server Explorer.
■ Enable SQL Server Debugging must be checked on the Debug tab of the project’s Properties
dialog in Solution Explorer.
■ Visual Studio’s Remote Components Setup must have been run on the SQL Server machine if
the SQL Server is not running on the same OS instance as Visual Studio.
■ Additionally, if SQL Server and Visual Studio are not on the same computer, and either SQL
Server or Visual Studio is running on Windows XP with the Internet firewall, then the firewall
must be configured to allow remote debugging. On the Visual Studio machine, this would
involve adding devenv.exe to the white list and opening port 135. On the SQL Server
machine, sqlservr.exe must be added and port 135 opened.
■ If server-side CLR components are to be debugged, then it is necessary to install and configure
the Visual Studio Remote Debug Monitor on the SQL Server 2008 machine.
■ If the remote SQL Server is a SQL Server 7 or SQL Server 2000 instance, then it is necessary
to configure DCOM on the SQL Server to allow remote debugging using the dcomcnfg.exe
utility. The procedure varies by operating system. Refer to the DCOM documentation or to the
Microsoft Visual Studio documentation for complete details.
The Visual Studio documentation on debugging is outstanding. Complete information is available
there on how to use the debugging tools, including what is installed on a SQL Server when remote
debugging is enabled.
Debugging T-SQL code causes all managed connections on the database server instance to stop while the
developer is stepping through a stored procedure or function. All resource locks will also persist accord-
ing to the normal concurrency configuration for the SQL Server. Essentially, this means that debugging
T-SQL code on a busy SQL Server is to be avoided. By extension, it should almost go without saying
that using the Visual Studio debugger on a production SQL Server is sure to cause more problems than
it might solve.
Application tracing
Once the application has been moved to production, tools such as Server Explorer and the Visual Studio
debugger facilities are of little use. There are runtime tools, such as the Windows debugger and SQL
Profiler, that can identify problems that inevitably surface only in the production environment. SQL
Profiler is an excellent tool for performance tuning and troubleshooting at the SQL Server. Many
readers may be familiar with the ODBC runtime tracing capability. While ODBC trace is exceptionally
verbose and anything but lightweight, it identifies error conditions and connectivity problems down
to the application statement level. Furthermore, it is possible to employ ODBC trace even in the
production environment when necessary. Visual Studio languages have offered various types of
debug-build-only assertion capabilities for some time. The .NET Framework offers a diagnostics
alternative that is usable in both debug and release builds in the System.Diagnostics.Trace
namespace.
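Production-safe tracing can be sketched as follows; the log path and messages are placeholders. Unlike Debug.WriteLine, Trace statements remain active in release builds as long as the TRACE symbol is defined, which Visual Studio does by default for both configurations:

```csharp
using System.Diagnostics;

class TraceSketch
{
    static void Main()
    {
        // Listener target path is a placeholder; listeners can also be
        // configured in app.config without recompiling.
        Trace.Listeners.Add(new TextWriterTraceListener(@"C:\logs\app-trace.log"));
        Trace.AutoFlush = true;

        Trace.WriteLine("Opening connection", "ADO.NET");
        try
        {
            // ... data access code ...
        }
        catch (System.Exception ex)
        {
            // Survives in release builds, so failures in production are captured.
            Trace.TraceError("Data access failed: {0}", ex.Message);
            throw;
        }
    }
}
```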