Monthly Archives: August 2007

John Butler Trio, Grand National

John Butler Trio: Live at the Brisbane River Stage, 12 August 2007

The John Butler Trio are currently on the Australian leg of their Grand National tour and we were lucky enough to see them.

The concert was held at the Brisbane River Stage last night and the event was sold out, which officially made it the single biggest crowd the band has ever performed in front of worldwide; go Brisbane.

The evening was fantastic, with the John Butler Trio playing all of the new songs from their Grand National release with every second song being an older one. The Trio played from about 7:30pm until approximately 10:15pm with no breaks other than for us to applaud for an encore!

They're such a great band; do yourself a favour and go and see them next time they're in your neck of the woods.

Tech Ed 2007, Day 3 Wrap Up

Yesterday I looked into building Background Motion using the Composite Web Block, the Enterprise Library, and how all of the different .NET 3.x technologies come together in a demonstration product named Dinner Now. Today was focused on SQL Server 2005 performance, optimisation and scalability, followed by .NET language pragmatics.

Writing Applications That Make SQL Server 2005 Fly

Shu Scott presented on writing applications that make SQL Server 2005 fly, however I don't think that title reflected the presentation all that well. The talk would have been better titled 'Understanding The Cost Based Optimiser To Make SQL Server 2005 Fly'. Nonetheless, Shu raised a lot of great points in her presentation; the ones I found most interesting are below:

  • Don’t use a function passing in a column as a parameter within a query, such as in a WHERE clause. SQL Server 2005 calculates statistics for a table per column, so as soon as you use a function on the column the statistics are unusable. The off shoot of this is that SQL Server 2005 can massively under or over estimate the selectivity of a table which on a complex-ish based query can dramatically change the query plan that SQL Server will choose.
  • Don’t alter an input parameter to a function or stored procedure after the procedure has started. Shu didn’t specify exactly why this is the case, however after investigating it further on the internet; it is related to the point below regarding parameter sniffing.
  • Avoid using local variables within a function or procedure in WHERE statements. During the presentation, I didn't get a clear understanding of why this was sub-optimal, however after some research online it is caused by the cost based optimiser having to use a reduced quality estimate of the table's selectivity. You can avoid this problem by supplying the OPTION(RECOMPILE) hint, using a literal instead of a local variable where possible, or parameterising the query and accepting the value via an input parameter.
  • Use automatic statistics. If you have a requirement not to use them, disable them on a per-table basis if possible, as having quality statistics for your database is vital to the cost based optimiser doing its job.
  • Do parameterise your queries where they are common, heavily reused and hit often. Do not parameterise queries that are ad-hoc or long running. Presumably there is no gain in parameterising a long running query, as the server is already going to spend significant time processing it, in which case the few milliseconds spent generating a query plan won't be noticed.
  • Be aware of parameter sniffing, which is where SQL Server uses the actual input values passed to a function/procedure to produce the query plan. This is normally a very good thing, however if the cached plan happens to represent an atypical set of input values, then the performance of a typical query is likely to be severely impacted.
  • Look to utilise the INCLUDE keyword when creating non-clustered indexes. The INCLUDE keyword allows you to extend past the 900 byte limit on the index key and also allows you to include previously disallowed column types within the index (e.g. nvarchar(max)). This type of index is excellent for index coverage, as all of the named columns are stored within the index leaf nodes, however only the key columns determine the ordering of the index.
  • If you are unable to edit an SQL statement for some reason, consider using plan guides. A plan guide is essentially a set of option hints for a query, except you aren't editing the query itself to apply them. You configure plan guides for a stored procedure, function or an individual statement, and when it is matched SQL Server 2005 will automatically apply the suggested hints to the statement.
  • In a similar fashion to plan guides, there is a more complex option called USE PLAN which lets you supply an actual execution plan to an SQL statement, again without editing the SQL statement directly. Essentially, you extract the XML representation of the execution plan you would prefer and supply that to the SQL statement. If you have skewed data sets, this is a good option for guaranteeing consistent access speed for a particular query. Using skewed data as an example, it would be possible for SQL Server to cache a query plan which represents the atypical data and as such performs very poorly for the majority of the typical data; supplying the query plan to the SQL statement avoids that happening. It is worth noting though, if you do supply a query plan you need to revisit it periodically to make sure that it still reflects the most efficient access path for your particular data set.
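
To make the function-on-a-column and parameterisation points a little more concrete, here's a minimal ADO.NET sketch; the connection string, Orders table and columns are all made up purely for illustration:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class SargableQuerySketch
{
    static void Main()
    {
        // Hypothetical connection string and Orders table, used only for illustration.
        const string connectionString = "Data Source=.;Initial Catalog=Sales;Integrated Security=True";

        // Avoid: wrapping the column in a function hides it from the column statistics,
        // so the optimiser has to guess at selectivity:
        //   SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2007
        //
        // Prefer: an open-ended range on the bare column, with the values passed as parameters.
        const string sql =
            "SELECT OrderId FROM dbo.Orders " +
            "WHERE OrderDate >= @From AND OrderDate < @To";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@From", SqlDbType.DateTime).Value = new DateTime(2007, 1, 1);
            command.Parameters.Add("@To", SqlDbType.DateTime).Value = new DateTime(2008, 1, 1);

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetInt32(0));
                }
            }
        }
    }
}
```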

Implementing Scale Out Solutions With SQL Server 2005

This presentation was about scaling SQL Server 2005 out, such that you’re able to continue adding more and more servers to the mix to distribute the load across. I had previously read the majority of the information covered, however I learned about two features named the SQL Server Service Broker and Query Notifications.

  • Linked servers let you move whole sections of a database onto another server and you tell the primary server where the other data resides. Linked servers are transactionally safe, however will perform only as fast as the slowest component within the linked server group.
  • Distributed Partitioned Views allow you to move subsets of a table's data across servers and use linked servers behind the scenes to communicate between servers. A partition might be as simple as customers 1 through 100,000 in partition A, 100,001 through 200,000 in partition B and so on.
  • Scalable Shared Databases (SSD) allow you to attach many servers to a single read-only copy of the database, which might be a great way of increasing performance for a heavily utilised reporting server with relatively static data. Unfortunately, the servers reading from the database need to be detached to refresh it, but this could be managed in an off-peak period to reduce the impact.
  • Snapshot Replication snapshots an entire database and replicates it to the subscribers. Snapshot replication isn't used a lot as it's data and resource intensive. It is most commonly used to set up a base system and then enable merge replication to bring it up to date with the publisher, or to periodically refresh an infrequently changing database.
  • Merge Replication tracks the changes to the data on the publishers and bundles them together, only sending the net changes when appropriate. Merge replication supports bi-directional replication and also implements conflict resolution, however there is no guarantee of consistency as the changes aren't necessarily replicated in near real time.
  • Transactional Replication sends all changes to all subscribers and is transactionally safe. If there was a lot of DML taking place in a database, there would be considerable overhead in using transactional replication, as a simple UPDATE statement which might affect 100 rows locally is sent to the subscribers as 100 independent SQL statements, in case some or all of the subscribers have additional data that the publisher does not.
  • Peer To Peer (P2P) Replication is a variation of transactional replication, however it requires that each peer be the master of its own data so as to avoid key read/write problems across servers and consistency issues. As an example, the Brisbane office writes all of its changes into server A, while Sydney writes its changes into server B. By making sure that each server 'owns' its respective block of data, it is then possible to replicate data between all peers safely.
  • SQL Server Service Broker (SSB) provides a reliable asynchronous messaging system for SQL Server 2005 that allows you to pass messages between applications, whether within the same database, on the same server or distributed over many servers and databases. The service broker doesn't do the work for you, however it does provide the plumbing to make developing your system a whole lot simpler. Using the service broker, it is even possible to send messages from one service to another service on a different machine; that might be useful for keeping different pieces of information up to date in a vastly distributed database setup when replication doesn't quite suit the purpose.
  • Query Notification, as the name suggests, is a notification system which is used to notify clients or caches that they need to update certain data. Once again, the query notification doesn't do the updating; it merely provides the event to tell you to perform your own action (there's a small code sketch after this list). The Query Notification engine utilises the service broker under the hood.
  • Data Dependent Routing isn’t a SQL Server feature but more of an architectural design pattern. Using Data Dependent Routing, the client (whatever it is), knows a little bit about the storage system and optimistically seeks out the data store which is likely to return the best performance.
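
As a rough idea of what Query Notifications look like from the client side, here's a small sketch using the ADO.NET SqlDependency class; the connection string and Products table are made up, and the target database needs Service Broker enabled for the notifications to flow:

```csharp
using System;
using System.Data.SqlClient;

class QueryNotificationSketch
{
    // Hypothetical connection string and table, purely for illustration.
    const string ConnectionString = "Data Source=.;Initial Catalog=Sales;Integrated Security=True";

    static void Main()
    {
        // Start the listener that receives notification messages via the Service Broker.
        SqlDependency.Start(ConnectionString);

        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT ProductId, Name, Price FROM dbo.Products", connection))
        {
            // Attach a dependency to the command; we'll be told when its results change.
            SqlDependency dependency = new SqlDependency(command);
            dependency.OnChange += new OnChangeEventHandler(OnProductsChanged);

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Populate the local cache here.
                }
            }
        }

        Console.WriteLine("Waiting for changes, press Enter to exit.");
        Console.ReadLine();
        SqlDependency.Stop(ConnectionString);
    }

    static void OnProductsChanged(object sender, SqlNotificationEventArgs e)
    {
        // The notification fires once; re-run the query and re-subscribe to keep listening.
        Console.WriteLine("Products changed: {0}", e.Info);
    }
}
```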

.NET Programming Language Pragmatics

Joel Pobar presented on .NET programming language pragmatics and contrasted some of the recent developments in that space. At the start of the talk, he pointed out that there are generally three types of programming languages: static, dynamic and functional. The original version of the .NET Common Language Runtime was built around a static environment and has since been enhanced to support functional programming and, more recently, dynamic languages.

The dynamic programming languages are handled through a new component, the Dynamic Language Runtime which sits on top of the existing CLR. The new Dynamic Language Runtime has allowed people to build IronPython and IronRuby, which are implementations of those particular languages sitting over the top of the .NET CLR.

Outside of the fact that it means you'll be able to run a Python script in the native Python runtime or inside the .NET DLR, which is just plain cool, the bigger picture here is that the .NET CLR is being enhanced and soon we'll have a new super language (LISP advocates, back in your boxes!) which will support all of the current styles of programming language at once.

The presentation was fantastic and it is exciting to hear Joel present as he is so passionate about the field. In fact, I would go as far as to say that his enthusiasm for his work is infectious; it is hard to walk away from one of his presentations without at least some of his excitement and enthusiasm rubbing off on you.

I’ve heard on the grapevine that Joel might be able to present at one of the next Gold Coast .NET User Group meetings; I can’t wait if he does!

Breaking News, I Have Broadband

I have been struggling to get broadband since I moved house at the start of June. As soon as we had our phone connected, I submitted a relocation order with my existing broadband internet provider and it was knocked back. Since then, I have resubmitted new applications numerous times and they were knocked back as well. With nothing to lose, I even tried using Bigpond in some sort of vain hope that the myth was true; it was rejected as well.

I resubmitted my application yet again last Friday and hoped that a port had become available on the RIM I am connected to. To be honest, after having the previous six applications rejected over the last two months, I wasn’t going to hold my breath. This time, however, something changed and it was approved.

I have broadband again, woohoo!

Tech Ed 2007, Day 2 Wrap Up

Yesterday, I ventured into the world of Microsoft CRM 4.0 and IIS7 which were both very educational. Day two at Tech Ed was going to leave the products behind and jump into the deep end of software development.

Building BackgroundMotion using the Composite Web Block

The first presentation I attended was by Jeremy Boyd, a Microsoft Regional Director for New Zealand. The presentation was about building a community site named Background Motion which is all about sharing rich media that can be used as wallpaper within Vista utilising Dreamscene.

If the talk had simply been about building a web site using ASP.NET, it wouldn’t have been all that interesting, so Jeremy took everyone through how to utilise the Composite Web Block and develop against the Model View Presenter pattern, as opposed to the ever popular Model View Controller approach. I really enjoyed seeing the Model View Presenter pattern in use first hand and I thought that the structure and flow felt really good; structure and order are always a good thing, anything to help stop code sprawling over time.
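
For anyone who hasn't seen Model View Presenter before, here's a tiny sketch of the shape of it; the names are made up and this isn't the Composite Web Block API itself, just the pattern:

```csharp
// A minimal Model View Presenter sketch. The ASP.NET page (or WinForms form)
// implements the view interface and delegates to the presenter, which holds
// all the presentation logic and never touches ASP.NET directly.
public interface ICustomerView
{
    string CustomerName { set; }
}

public class CustomerPresenter
{
    private readonly ICustomerView view;

    public CustomerPresenter(ICustomerView view)
    {
        this.view = view;
    }

    public void Display(int customerId)
    {
        // Fetch from the model/service layer and push the result into the view;
        // because the presenter only sees the interface, it is easy to unit test.
        view.CustomerName = "Customer #" + customerId;
    }
}

// In ASP.NET the code-behind would implement ICustomerView and call
// new CustomerPresenter(this).Display(id) from Page_Load.
```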

I have to give plenty of accolades to Jeremy; his presentation was without a doubt the smoothest that I have seen so far at Tech.Ed 2007. The flow of switching between the slides and into Visual Studio was always seamless; no fluffing around configuring references or having it fail to compile unexpectedly. Jeremy used a simple system to make sure this worked as expected: he had numerous copies of his solution waiting in the appropriate state for each step of the presentation. Not only were the technical aspects of the talk sorted out well in advance, his presentation style and pace throughout the talk were excellent.

Enterprise Library 3.x

The second talk I went to was about the Enterprise Library, formerly known as the Enterprise Application Blocks. Version three of the Enterprise Library comes with a bunch of bug fixes to some of the existing blocks, such as the Data Access Application Block, but the really interesting news was the addition of the Validation Application Block and the Policy Injection Application Block.

Touching on each of those briefly, the Validation Application Block is a generic validation package that provides an array of out of the box validation routines. Validation isn’t anything new, so the important point to note about the Validation Application Block is that the same code will work identically across ASP.NET, Windows Forms and Windows Communication Foundation. You could use the validation block to provide ASP.NET level validation and provide a different or additional set of validation routines on the business object itself. The validation can be set up through configuration, attributes or code. Through the use of the Validation Application Block, it is now convenient to write validation routines and rules only once, whereas they typically tend to be duplicated.
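
To give an idea of what the attribute driven approach looks like, here's a small sketch based on my understanding of the Enterprise Library 3.x API; the Customer class and its rules are invented for illustration:

```csharp
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Customer
{
    // Declarative rules; the same rules apply whether the object is validated
    // from ASP.NET, Windows Forms or a WCF service.
    [NotNullValidator]
    [StringLengthValidator(1, 50)]
    public string Name;

    [RegexValidator(@"^[^@]+@[^@]+$")]
    public string Email;
}

public class CustomerValidation
{
    public static bool IsValid(Customer customer)
    {
        // Runs every rule attached to the type and collects any failures.
        ValidationResults results = Validation.Validate(customer);
        return results.IsValid;
    }
}
```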

The real funk started happening when the Policy Injection Application Block came out to play. Using the Policy Injection Application Block, it is possible to separate out common tasks which happen across the enterprise or domain and reuse them through injection. As an example, tasks like logging, authorisation and validation are common and typically should be reused throughout the code without copy/pasting the functionality. After configuring which policies to inject where and in what order, a new business object is instantiated. Instead of getting back an instance of that business object, you get back a proxy that for all intents and purposes looks and feels like the business object you asked for. When calling methods on this proxy business object, it invokes the Policy Injection engine and the request for the actual method must flow through pre and post execution paths on the policy injection engine before being accepted. Nifty stuff!
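
Here's a rough sketch of the calling side, again based on my understanding of the block's API; OrderService and its method are invented, and the actual policies (logging, validation and so on) would be wired up in configuration:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection;

// An invented business object; deriving from MarshalByRefObject allows the
// block's remoting based interception to generate a transparent proxy for it.
public class OrderService : MarshalByRefObject
{
    public void PlaceOrder(int customerId, decimal amount)
    {
        Console.WriteLine("Placing order for {0}: {1:C}", customerId, amount);
    }
}

public class PolicyInjectionSketch
{
    public static void Main()
    {
        // Ask the block for the instance rather than new-ing it up; what comes back
        // looks like an OrderService but routes every call through the configured
        // pre and post execution handlers.
        OrderService service = PolicyInjection.Create<OrderService>();
        service.PlaceOrder(42, 99.95m);
    }
}
```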

.NET Framework 3.0, Putting It All Together

This talk was about how to integrate all of the different .NET 3.x features into a single application. It appears that the community can see the strengths in any one of the components, however it struggles to see how they all integrate seamlessly in a single application.

Enter Dinner Now, a fictional online business which lets you order take away food from more than one restaurant at a time and have it all delivered to your home. The Dinner Now sample application uses a wide spread of technology including IIS7, ASP.NET Ajax Extensions, Linq, Windows Communication Foundation, Windows Workflow Foundation, Windows Presentation Foundation, Windows Powershell, and the .NET Compact Framework.

The idea behind this presentation is quite exciting, however I felt that it could have had a little more meat to it. Maybe the talk was geared at a slightly lower entry point, but I felt too much time was spent explaining what the different technologies accomplish and not enough time going through the technical aspects. That said, I still found the presentation entertaining and it is fantastic that Microsoft have now recognised the need for a sample scenario that is more complex than Northwind.

Tech Ed 2007, Day 1 Wrap Up

Today was my first ever experience with Microsoft Tech Ed and it was a great one, what a fantastic conference! Across the course of the day, I attended a few different presentations:

Microsoft CRM 4.0 (Codename: Titan)

Across the course of the day, I attended three different presentations for CRM 4.0:

  • an introduction
  • reporting and business intelligence
  • a technical presentation aimed at developers on extending and enhancing Microsoft CRM 4.0

The presenter noted that the difference between CRM 1.x and CRM 3.0 was a revolution, while CRM 4.0 is more of an evolution. The majority of the functionality from Microsoft CRM 3.0 exists within the updated version, however with a lot of improvements along the way. Some of the items which caught my attention during the presentations:

  • Brand new user interface, and it looks fantastic. I actually thought Microsoft had released a WinForms application when the presenter first opened it up; then I realised it was running within Internet Explorer and my jaw pretty much hit the floor.
  • Judicious use of AJAX throughout the product to reduce the number of popup windows and form postbacks required to get things done. Some of them are so subtle that you won’t even notice them (the best kind), such as an input box which turns into a drop down list when you enter a string and the AJAX’d response contains more than a single item.
  • The entire workflow pipeline from CRM 3.0 has been replaced with the newly released Windows Workflow Foundation that ships as part of .NET 3.0. It isn’t possible to write your own custom workflow activities and deploy them into Microsoft CRM 4.0 just yet, however it’s a feature that they are well aware of and plan to implement soon. In the meantime, the presenter thought that if you implemented all of the appropriate interfaces in the WF component and edited the XAML manually, it’d probably ‘just work’ (there’s a rough sketch of a plain WF activity after this list). Of course, until it ships with the functionality to load in your own custom workflow components, they are never going to suggest that as a recommended strategy.
  • To support the service based environment that most organisations now operate within, it is possible to implement asynchronous activities. Of course, you could then implement an activity that acts on the data when the asynchronous event completes.
  • Since Microsoft CRM 4.0 is going to be deployed as a Microsoft Live product, significant work has taken place to increase the performance of the application. Considering the presenter was running it on his notebook, with two Virtual PCs running and all the associated server related services, it was very fast; I can only imagine how fast it’d feel deployed on quality server hardware.
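
As a point of reference for the custom workflow discussion above, this is roughly what a bare-bones Windows Workflow Foundation activity looks like; it uses only the standard .NET 3.0 base classes and nothing CRM specific, since that extensibility isn't public yet:

```csharp
using System.Workflow.ComponentModel;

// A bare-bones custom activity; any real CRM integration would also need whatever
// interfaces and XAML wiring the product ends up requiring.
public class SendReminderActivity : Activity
{
    private string recipient;

    public string Recipient
    {
        get { return recipient; }
        set { recipient = value; }
    }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // Do the work, then tell the workflow runtime this activity has finished.
        System.Diagnostics.Trace.WriteLine("Reminder sent to " + recipient);
        return ActivityExecutionStatus.Closed;
    }
}
```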

I’m very pleased that I attended the Titan presentation, even if I’m not going to use it immediately. It has really opened my eyes as to what the product is capable of and I can already see fantastic applications of it within our business.

Internet Information Services 7 (IIS7)

IIS, the Microsoft web server, has been undergoing heavy surgical procedures since version five. IIS5 was a horribly slow, hard to configure product that no one wanted to use, and the market share that Apache held reflected that. With the release of IIS6, many of the problems of IIS5 were resolved or at least reduced, however what they had still felt as though it was largely IIS5 with some spit and polish. The release of IIS7 feels as though they have finally unshackled themselves from their forefathers and are starting fresh.

The big highlights in IIS7 which I love are:

  • A fully integrated request pipeline, which means that all requests (static, .NET, PHP, CGI, …) take the same path through the server.
  • IIS7 is built around a modular architecture, much like Apache. This is a good thing on a few levels but primarily for security, memory consumption and performance.
  • Developers are able to extend or enhance IIS7 through the use of modules and handlers, which can intercept requests at virtually any point in the request/response cycle. The modules can be written in unmanaged code using languages such as C or C++, or in managed code using a .NET language (a small managed module sketch is after this list).
  • Configuring IIS7 is a snap with its new web.config inspired configuration. Everything about the web server is configured within these XML files, even the loading and unloading of modules from each virtual host.
  • A new kernel level output cache is available, which can cache any response regardless of how it was generated. No longer are you limited to using the output cache which is part of ASP.NET, using the kernel level output cache you can just as easily cache PHP, CFML, CGI and so on.
  • Performance improvements across the board, especially for filters which were utilising the CGI interface within IIS. IIS7 now implements the FastCGI interface, which has dramatically improved performance. During the presentation, the presenter compared a PHP photo gallery running under CGI, FastCGI and FastCGI with kernel output caching, which resulted in 13, 57 and 920 requests per second respectively on his laptop.
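
For the curious, the managed side of an IIS7 module looks much like a classic ASP.NET IHttpModule; the sketch below adds a made-up response header to every request, and in the integrated pipeline it would be registered under the <system.webServer><modules> section of web.config so it runs for static files and PHP as well as .NET content:

```csharp
using System;
using System.Web;

// A minimal managed module: stamps every response with the machine that served it.
public class ServedByModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        application.Response.AppendHeader("X-Served-By", Environment.MachineName);
    }

    public void Dispose()
    {
        // Nothing to clean up.
    }
}
```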

There were a lot of other very cool things available in IIS7; unfortunately you need to be running Windows Vista or Windows Server 2008 to get access to them, which we’re not just yet.

Windows Communication Foundation

Daniel Crowley-Wilson gave a quick half hour presentation on using Windows Communication Foundation to deliver RESTful web services. For a long time, .NET developers have really only been able to deliver remote services through SOAP and WS-*, which work, however they aren’t the nicest things to deal with. I was very excited to see what looked like a clean implementation of REST; in fact, I would have loved it if Daniel had had an hour or so to give a more comprehensive presentation, but what he delivered packed a good punch for 20 minutes!
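
I don't know exactly which bits Daniel used, but assuming the WCF web programming model that is coming in .NET 3.5 (the [WebGet] attribute, UriTemplate and WebServiceHost), a RESTful service boils down to something like the sketch below; the service contract and URIs are made up:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IOrderService
{
    // Responds to GET http://localhost:8080/orders/42
    [OperationContract]
    [WebGet(UriTemplate = "orders/{id}")]
    string GetOrder(string id);
}

public class OrderService : IOrderService
{
    public string GetOrder(string id)
    {
        return "Order " + id;
    }
}

public class RestHostSketch
{
    public static void Main()
    {
        // WebServiceHost wires up the HTTP binding and behaviour needed for REST style calls.
        using (WebServiceHost host = new WebServiceHost(
            typeof(OrderService), new Uri("http://localhost:8080/")))
        {
            host.Open();
            Console.WriteLine("Listening, press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```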

Windows Workflow Foundation

Throughout a few different presentations, Workflow Foundation was demonstrated. As mentioned toward the top, Microsoft CRM 4.0 utilises Workflow Foundation for all of its workflow components and individual presentations demonstrated it directly. Developing against a framework like Workflow Foundation to perform complex flow related tasks just makes sense as it removes so much of the complexity. After talking to a few people and seeing it in action a handful of times in the last month, I can see clear advantages in upgrading certain components of our enterprise stack to use Windows Workflow Foundation.

I can’t wait for Tech.Ed day two, I’m going to really enjoy attending another IIS7 presentation and I hope to find the time to get in a couple of the fast paced half hour Chalk Talks.