Category Archives: Internet

Google Acquires DoubleClick

April 13th saw Google announce a deal to acquire online media and advertising heavyweight DoubleClick Inc. The announcement from Google states that they’ve purchased DoubleClick for USD$3.1 billion in cash from San Francisco-based private equity firm Hellman & Friedman along with JMI Equity.

The interesting thing here isn’t that Google have purchased yet another monster business, but that DoubleClick is one of the biggest forces in the online advertising landscape. DoubleClick currently service a different type of online advertising client than Google do, so the DoubleClick business will definitely complement Google’s online advertising strategy. More importantly though, Google gain all of the technology that DoubleClick have been developing, which focuses strongly on rich media advertising. I expect it won’t be long before we see Google aggressively roll out rich media advertising into their current products, such as Google Video and YouTube, and subsequently into the wider market as well.

Interesting times ahead for online advertisers; the internet landscape is changing yet again.

Microsoft Live Search Tactics To Claw Back Market Share

I keep getting the annoying nag message from Microsoft MSN Messenger telling me to upgrade, and I’ve been ignoring it for months. I’ve currently got the clearly outdated version 7.5 installed, which is nowhere near bleeding edge enough – so apparently I need to upgrade post-haste.

[Image: Microsoft 'MSN Messenger' search result pointing to Microsoft Live Search within Google pay per click marketing]

Being a diligent computer user, I uninstalled MSN Messenger 7.5 and the original Windows Messenger that comes with Windows XP Professional. Not knowing the web address for MSN Messenger, I googled “msn messenger” and was presented with the search result shown above.

After glancing at the advertisement and seeing “Msn Messenger” as the advertising text, I clicked the link expecting to be taken to the Messenger home page on the Microsoft web site. No, that isn’t what I got at all – instead it redirected me to the new Microsoft Live Search web site, with my “MSN Messenger” search already performed. Not only that, they had a nifty JavaScript sliding panel with some useful advertising promoting Microsoft Live Search and telling me that it is “the duck’s nuts”. After a few seconds, the useful advertising panel automatically slid away to leave the standard Microsoft Live Search page.

[Image: Microsoft Live Search presenting 'useful advertising' telling you why their service is so fantastic after getting to their search engine via a Google search!]

When the biggest software company in the world is required to participate in pay per click advertising on a competitor’s network to drive traffic to their own search engine, I think it is a pretty sure sign that their competitor is doing something right. I can understand that the likes of Google and Yahoo! might advertise their pay per click marketing services on their competitors’ web sites, but I’m yet to see an advertisement on Google or Yahoo! telling me that I should be using a competitor’s search engine.

Search Engine XML Sitemap Improvements

In December 2006, Google, Yahoo! & Microsoft collaborated and all agreed to support the new XML sitemap protocol that Google released as a beta in 2005.

Implementing an XML sitemap for a web site is a simple way for a webmaster to tell the search engines which content on their site they absolutely want indexed. The XML sitemap does not necessarily need to include every piece of content you want indexed; however, the content that is listed within the XML sitemap is looked upon as a priority for indexing.

When the XML sitemap protocol was initially released by Google as a beta, webmasters needed to inform Google of its existence through the Google Webmaster Tools utility. When Yahoo! and Microsoft joined the party, all vendors accepted a standard HTTP request to a given URL as notification of the XML sitemap’s location. These methods have worked fine; however, they required a little bit of extra work for each search engine. It was recently announced that you can now specify the location of the XML sitemap within a standard robots.txt file.
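
For reference, the HTTP notification method described on sitemaps.org is nothing more than a GET request to each search engine’s ping URL, with the fully qualified (and URL encoded) sitemap location passed as a query string parameter. The sitemap address below is obviously a placeholder:

  <search_engine_URL>/ping?sitemap=http%3A%2F%2Fwww.mydomain.com%2Fsitemap.xml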

It’s a small change to the robots.txt file; however, it’s an improvement that makes so much sense, since the robots.txt file exists specifically for the search engine crawlers. If you want to use this new notification method, simply add the following line to your existing robots.txt file:

  • Sitemap: <sitemap_location>

It is possible to list more than one sitemap using this mechanism; however, if you’re already providing a sitemap index file, a single reference to the index file is all that is required. The sitemap_location should be the fully qualified location of the sitemap, such as http://www.mydomain.com/sitemap.xml.
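
To put it all together, a robots.txt file using the new directive might look something like the following – the domain and paths are purely illustrative:

  User-agent: *
  Disallow: /admin/
  Sitemap: http://www.mydomain.com/sitemap.xml

and the corresponding minimal single-entry sitemap, following the 0.9 protocol (only the loc element is required; lastmod, changefreq and priority are optional hints):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>http://www.mydomain.com/</loc>
      <lastmod>2007-04-20</lastmod>
      <changefreq>weekly</changefreq>
      <priority>0.8</priority>
    </url>
  </urlset>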

EMI Removes Digital Rights Management (DRM)

Music industry heavyweight EMI has announced that it will allow the download of its music catalogue online without Digital Rights Management attached to the files.

DRM was introduced with the intention that it would curb digital media piracy. Unfortunately, implementing DRM in a manner that worked well for the consumer proved to be a constant thorn in the side of the media publishers, as consumers pushed back against the use of digital rights management.

Users would download music encrypted using DRM technology and would then not be able to transfer the song from one media device to another, such as from a computer to an MP3 player. If you could move or transfer the file, only a limited number of transfers were available. This problem was circumvented if you were downloading from the Apple iTunes Store, as it would allow syncing of the downloaded songs onto associated Apple devices.

As expected, Apple will be the first retailer to sell the newly released DRM-free media through the iTunes Store, at a slightly higher price than a DRM-encrypted media file. EMI have stated that they will continue to use DRM technology where appropriate for time- or subscription-based services. In my opinion, that is well within the bounds of what DRM was designed for and is an excellent use of the technology.

From my point of view, this is excellent news. I have refused to download music from the iTunes Store because the files have DRM attached. Sometime in the near future, I might just swing by the iTunes Store and see what they have on offer.

Google Image Labeler Included In Google Webmaster Tools

In September 2006, Google released a new utility in the form of a game named Google Image Labeler.

The game aspect of the Google Image Labeler involves a pair of people. The contestants are chosen at random to play against one another, based on who is online at any point in time. Each game lasts for 90 seconds and the contestants are shown the same series of images, which they have to tag or describe with words or phrases. The contestants gain points when they match words or phrases with their opponent.

Because points are only awarded when the contestants match words, Google can assume that both contestants consider the image to reflect the same object. At some point, Google will end up using this information in Google Images to provide a better quality of service to their customers.
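
As a purely illustrative sketch (my own guess at the general idea in Python, not how Google actually implement it), the agreed-upon labels from many game sessions could be tallied per image, with the most frequently matched labels used to describe that image:

  from collections import defaultdict

  # Tally of labels that pairs of contestants have independently agreed upon,
  # keyed by image identifier. (Illustrative only.)
  agreed_labels = defaultdict(lambda: defaultdict(int))

  def record_match(image_id, label):
      """Record that both contestants in a session suggested `label` for `image_id`."""
      agreed_labels[image_id][label] += 1

  def best_labels(image_id, minimum_agreements=2):
      """Return labels for an image, most frequently agreed upon first."""
      tallies = agreed_labels[image_id]
      candidates = [label for label, count in tallies.items() if count >= minimum_agreements]
      return sorted(candidates, key=lambda label: tallies[label], reverse=True)

  # Two separate sessions matched on "puppy" for the same image, one pair on "dog".
  record_match("img-001", "puppy")
  record_match("img-001", "puppy")
  record_match("img-001", "dog")
  print(best_labels("img-001"))  # ['puppy']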

The service aspect of the Google Image Labeler is of course about providing a higher quality of service to the Google user base. At the moment, Google rely on webmasters providing context around any images that they use on their web sites. As a simple example (see the markup sketch after this list), a webmaster might:

  • provide a meaningful name for the image
  • provide a useful alt attribute, which describes the image in text format
  • provide captions for the image, which might be a more in depth text description of the image
  • talk about the image in the main content on the web page
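
Pulling those points together, the markup for a well described image might look something like the following – the file name, alt text and caption are made up purely for the example:

  <!-- descriptive file name, alt attribute and caption all provide context for the image -->
  <img src="/images/sydney-harbour-bridge-sunset.jpg"
       alt="The Sydney Harbour Bridge silhouetted against an orange sunset" />
  <p class="caption">The Sydney Harbour Bridge at sunset, viewed from Milsons Point.</p>
  <p>The photo above shows the Sydney Harbour Bridge just after sunset ...</p>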

Whilst this mechanism is useful and in most cases accurate, it can also be inaccurate or abused. By having random Google users categorise the images as well, the chances of an image being misrepresented are vastly reduced.

Having humans categorise the images also lends itself to Google producing software that learns how to recognise images. Google could attempt to identify what the images contain automatically and then use the tags or labels provided by the Google user base to compare against and validate the results.

When logged into the Google Webmaster Tools console, you are now able to select whether or not you want the images on your site to be visible to the Google Image Labeler service. At this stage, I’m not quite sure why you would opt out of it – however Google are giving webmasters the option should they want it.

If Google do end up walking the machine learning path, there could be interesting times ahead for the image search service.