Sunday, March 19, 2017

Google Pixel XL Phone Review, Part 2: Google Messes Up Fulfillment

This is Part 2 in a blog series about my new Google Pixel XL phone. In Part 1, I described the ordering process, which was fine except for the long 7-week wait for the phone. I fully expected the next post in this series to be about unpacking and configuring the phone, but unfortunately I need to first post about the fulfillment experience which has not been good.

Original Order

To recap my first post, I ordered my Pixel XL on 2/13/17 and was told to expect it around 4/04/17. I'd ordered with Google Financing on a 2-year term. I was accepted into Google Financing at the time I placed my order, and received confirming paperwork the following week. All set! All I had to do now was wait until April to get my phone.

Order confirmation, which clearly shows payment method Google Financing

A Problem With Your Order

On March 16, 4 weeks in on my 7-week wait, I received an email from Google Store saying that there was a problem with my order. There was no explanation of what the problem was, but the message did have a Fix It button.

Order on Hold Message

Eager to resolve any issue with getting my phone, I clicked Fix It. This took me to a Google Payments screen which wanted me to update my payment method to an updated credit or debit card. This was odd, because my original order had nothing to do with a credit or debit card: it was made through Google Financing. However, I do have my Google account on a debit card for miscellaneous purchases such as Google Music. After confirming that I was in fact dealing with a real Google web site, I cooperated in updating my Google account payment method to a current debit card and completed the form.

And I guess this was the wrong thing to do, as you'll see in the next section. I want to point out two things about this interaction, though: 1) Google provided no other recourse for "correcting the problem with my order", and 2) nothing indicated that any kind of charge was about to be made.

Unauthorized Debit of Full Price of Phone

At this point I was suspicious. Had I just changed the payment method for my phone order? I went back to call up the status of my original order. It still indicated that payment was through Google Financing, so I relaxed.

Order Check, Confirming that Google Financing was Still Method of Payment

But just a few seconds later, I received an alert from my bank that $968.67 had been debited from my bank account! Wait, what!?!

The unauthorized debit

This was unbelievable. Google had TRICKED ME into paying up front for a phone that had already been ordered and approved for financing--and they did it without telling me what was going to happen.

Your Order Has Shipped

On the next day, 3/17/17, I received an email that my order had shipped (unexpected, because original delivery expectations had been set for April). Delivery of the phone was now promised by 3/23/17. It was nice my phone was coming sooner than originally indicated... but I had been charged nearly $1000 up-front for a phone I had ordered with financing, and that wasn't nice at all.

Interactions with Google Support

Rather upset at this series of events, I proceeded to send a support message to Google. 

My complaint email

The support form promised a reply within 24 hours. That window came and went with no response, but a reply did arrive the following day, 3/18/17.

First response from Google Support - didn't understand or address my issue

Unfortunately, it appears the support person did not even read the entire message. I was assured that my order was not held up any longer (still no explanation has ever been given about what the problem was) and would arrive soon. Nothing about the unauthorized charge or resolving it. I checked with my bank, and the charge had not been reversed.

I responded that I expected this unauthorized charge to be reversed, and wanted to pay for my phone the way it had been ordered, through Google Financing.

My first response, making it clear what action was needed

The Charge Came From Synchrony Bank

The next day (3/19/2017), I received another reply from Google. They said they had looked into the matter, and the bank charge actually came from their financing partner, Synchrony Bank. I would have to take the matter up with Synchrony, because Google had no ability to correct matters directly. They provided basic contact information for Synchrony.

Google Support's second response, saying the charge came from Synchrony

This was disappointing, to say the least. Now I would have to sort out a problem with Google's financing partner? When you are setting up a retail experience, it's fine to have partners but you should take responsibility for providing customers with a seamless experience. That includes managing customer service problems, even if they involve your partners. You wouldn't get this kind of treatment from Amazon: they are famous for stellar customer service.

Sigh. I proceed to the Synchrony Bank Contact Us page, where I see Search or Select a Business. Many companies are listed here. Amazon is listed here. Walmart is listed here. Do you know who isn't listed here? Google.

Synchrony web site contact form - no option for Google

I select Other, which only gave me an option to call by phone. Up till now, all my interactions had been through email so there would be a record of the communication, but that's not an option here. So, I pick up the phone.

For the next couple of hours, I am on the phone with Synchrony or Google as they in turn transfer me over to the other party.

  • Lavinia at Synchrony doesn't really seem to understand my problem. She assures me any problem must be a Google problem. She transfers me over to Google.

  • Jill at Google does try to help--she spends a lot of time on the phone with me, but ultimately concludes that it is Synchrony who must reverse the charge. I get transferred back to Synchrony.

  • Tina at Synchrony is of no help whatsoever. She doesn't seem to understand anything about the Google-Synchrony relationship or how to resolve issues.

Synchrony's support people are, well, completely useless. They are 100% clueless. Google's support people do seem to be making an effort, but appear to be powerless to actually correct the issue. All of them assure me they want to help me, but each time I end up getting sent back to the other company.

I have now had enough. Apparently no one is able to correct this, and it is extremely frustrating. I decide to accept the situation: my phone is coming, the logistics were screwy, the financing didn't really materialize. It's no longer worth my time. I don't really know who was at fault here, but I know it wasn't me. I may just get my next phone from Apple.

Google Pixel Scorecard

So far, I have to grade my Google Pixel XL experience as follows:

Google gets an F for the order fulfillment experience. This is disappointing to say the least. I like Google. But this has been anything but pleasant. It shows a lot of immaturity when it comes to inventory logistics, reliable order fulfillment processes, and good customer service. Although I received much better support attention from Google than Synchrony, this is ultimately Google's failure because they own the product/process/partnership: it doesn't really matter if the problem was with Google or Synchrony: the responsibility lies with Google and they blew it.

Hopefully my experience with the actual phone (coming next week) will be much better. Stay tuned, I'll let you know.

Friday, March 10, 2017

Google Pixel XL Phone Review, Part 1: The Long Wait

As the new owner of a Google Pixel XL Android phone, I'll be writing a series of posts to review my experiences with Google's new flagship handset. In this first post, I'll... be talking about the ordering experience, because I don't actually have the phone yet. Mostly I'll be complaining about the incredibly looooooong wait for the phone: I ordered it on February 13th, and am expecting to receive it on April 6th. That's a long 7 weeks of waiting. As I write this, I "only" have 4 weeks to go. Talk about agonizing. What I can do in this first post is discuss why I chose this phone (despite the long wait), and what I've learned about it.
I won't go into the reasons for the delay, since I don't really know them. Google says it's due to high demand. Others speculate it's due to poor planning and management. Since this isn't Google's first phone, it is hard to explain.

Why the Pixel?

Why this phone? Well, my previous two phones have been the Moto X (first generation) and Moto X (second generation), which are also Android phones. I liked the Moto X a great deal, especially the "Moto actions" and other nice touches: I can just wave my hand above the phone to wake it up; I get battery-efficient notification icons; and the phone knows if I am looking at it and won't go to sleep. I also liked that I could design my own customized phone, choosing front (white), back (leather, engraved), and accent (silver). So why not just stay with Moto? Well, I was thrilled when Google bought Motorola Mobility; unfortunately, they've since sold it off to Lenovo. The current Moto phones all seem to have a camera bump, which is something of a deal-breaker for me: I want my phone to lie flat on the desk.

So, I looked around. For a variety of reasons I'm not a big fan of Samsung and LG given past experiences. I'm also quite weary of all the bloatware carriers tend to install on Android phones, often apps that can't be deleted. When Google announced the Pixel it made a lot of sense: no carrier bloatware, a finely-tuned Android experience, high-end hardware, and the best possible integration with Google services. Yes, it's pricey--but it seems like the best choice given what I value (pure Android experience, good hardware, backed by a company I have some respect for). And so, I placed my order and am now waiting (impatiently). In the meantime, I've read dozens of reviews by others so I have a pretty good idea of what to expect.

Having settled on a Pixel, I chose the larger XL model (5.5" screen) - Google's answer to the iPhone Plus. This is a bit of a departure for me, but I'm thinking the larger screen will be easier on the eyes. My wife got an iPhone Plus recently, and it took her about a week to get used to the larger device. I also went with the 128GB storage, because one of my few complaints about the Moto line was that I'm always running low on storage space. As for the color, my first choice would have been silver and after that blue; alas, almost everything was out of stock. When I finally could order something, it was the black. No matter, I'll be putting a skin on it.

The Ordering Experience

The ordering experience was simple enough. Since I'm not going through Verizon (poor coverage where I live, plus refusal to put up with any more carrier bloatware), and since the Pixel isn't available through any other carriers, that means I'm getting it directly from Google. The Google Store ordering experience is what you'd expect from Google: it's simple, and easy to use. Except...

Except that I couldn't actually order one. What's odd about the ordering experience is the backlog, and how it's being handled. When I first visited the Google Store in January, they were sold out, at least in the configurations I was interested in. And you couldn't pre-order the phone. All you could do was get added to a waitlist (an email would come your way when a phone was available for order). After a few weeks, now in February, I still could not order the phone I wanted. Tired of this, I decided to cave on color, and ordered what was available: a black 128GB Pixel XL.

The Google Store shows you a Pixel and lets you change the base unit (Pixel or Pixel XL), color, and storage. As you change the configuration, you can see the phone from front, side, and back perspectives.

Google Store - Pixel

Here's the unit I ended up ordering: a Black Pixel XL 128GB.

Google Store - Pixel XL

Finally, my phone was ordered--but I was still in for a looong wait. A 7 week wait.

What I'll Like about the Pixel XL

👍 Free Unlimited Cloud Backup of Photos and Videos

The Pixel gives you unlimited storage of images and videos in Google Photos (cloud storage). Android users normally get this feature anyway, but only at reduced quality; for Pixel owners, photos and videos are archived at full quality.

👍 Great Battery Life

The battery life is reportedly very good. There's also fast charging: you're supposed to be able to get 7 hours of battery life from 15 minutes of charging.

👍 Speed

The Pixel's speed is reported as fantastic by reviewers - even after the honeymoon period. This is due to a fast processor plus a really well-tuned edition of Android. This is the whole point of the Pixel line: with Google in control of both the hardware and software, the phone really sings.

👍 Great Android Experience

Reviewers are also in agreement that the Android experience is first-rate. There are all sorts of improvements and refinements to be found, some important and some just to be more Apple-like. Given that the rest of my household is on Apple phones, being more Apple-like won't be a bad thing. 

  • New Pixel Launcher.
  • Round icons. 
  • Long-press to get application shortcuts. 
  • Swipe gestures, such as swiping up to get the app tray. Swipe down on the rear fingerprint sensor to see notifications, swipe back up to dismiss.

👍 Camera

Google is advertising the best camera available for a smartphone. Reviews either agree or call it a close second to the iPhone 7. Either way, I'm sure to be happy with this camera.

What I Probably Won't Like About the Pixel XL

👎 Don't Get It Wet

One ding against the Pixel is that it isn't waterproof. That's a big deal mostly because it's becoming standard on high-end phones. For the high price, the Pixel should be waterproof too.

👎 Unique Features that No Longer Are

The features that are supposed to be unique to the Pixel are rapidly becoming available everywhere. You can install the Pixel Launcher on other Android phones. As of this week, the Google Assistant is no longer exclusive to the Pixel.

👎 Pixel 2 Rumors Already Circulating

Each week, more details are coming to light about the Pixel 2, which will apparently be released near the end of 2017. Meanwhile, I'm still waiting for my first-gen Pixel.

👎 Avoiding Damage

Although the Pixel is reportedly well-made, it also seems prone to picking up scratches, especially on the rear glass panel. I'll admit it, I'm one of those people who prefer to use their phone without a case. This of course has bitten me in the past: my first Moto X cracked when I dropped it just a month in; with my Moto X second gen I was ultra-careful and avoided any mishaps. 

Given the expense of the Pixel, I'm going to give in and get some protection. I've decided to go with a skin from ColorWare, which I recently ordered and will receive in a couple of weeks. It's Techno Blue, and looks like this. Of course, I'm told these skins can also get scratched so I'll still need to be careful.
Pixel XL skin by ColorWare

Well, I think that's all I can say for now. I'll share more when I actually get the thing. In the meantime, I'll be practicing how to be more patient. I hear it's a virtue.

Next: Pixel XL Phone Review, Part 2: Google Messes Up Fulfillment

Friday, March 3, 2017

Searching Blob Documents with the Azure Search Service

One of the core services in the Microsoft Azure cloud platform is the Storage Service, which includes Blobs, Queues, and Table storage. Blobs are great for anything you would use a file system for, such as avatars, data files, XML/JSON files, ...and documents. But until recently, documents in blob storage had one big shortcoming: they weren't searchable. That is no longer the case. In this post, we'll examine how to search documents in blob storage using the Azure Search Service.

Azure Blob Basics

Let's quickly cover the basics of Azure blobs:

  • Storage Accounts. To work with storage, you need to allocate a storage account in the Azure Management Portal. A storage account can be used for any or all of the following: blob storage, queue storage, table storage. We're focusing on blob storage in this article. To access a storage account you need its name and a key.
  • Containers. In your storage account, you can create one or more named containers. A container is kind of like a file folder--but without subfolders. Fortunately, there's a way to mimic subfolders (you may include slashes in blob names). Containers can be publicly accessible over the Internet, or have restricted access that requires an access key.
  • Blobs. A blob is a piece of data with a name, content, and some properties. For all intents and purposes, you can think of a blob as a file. There are actually several kinds of blobs (block blobs, append blobs, and page blobs). For our purposes here, whenever we mention blob we mean block blob, which is the type that most resembles a sequential file.

Uploading Documents to Azure Blob Storage

Let's say you have the chapters of a book written in Microsoft Word and saved as pdf files--ch01.pdf, ch02.pdf, ... up to ch10.pdf, along with toc.pdf and preface.pdf--which you would like to store in blob storage and be able to search. Here's an example of what a page of this book chapter content looks like:

In your Azure storage account you can create a container (folder) for documents. In my case, I created a container named book-docs to hold my book chapter documents. In the book-docs container, you can upload your documents. If you upload the 12 pdf documents described above, you'll end up with 12 blobs (files) in your container. 

Structure of Azure Storage showing a Container and Blobs

To upload documents and get at your storage account, you'll need a storage explorer tool. You can either use my original Azure Storage Explorer or Microsoft's Azure Storage Explorer. We'll use Microsoft's explorer in this article because it has better support for one of the features we need: custom metadata properties. After downloading and launching the Storage Explorer and configuring it with our storage account, here's what it looks like with a container created and 12 blobs uploaded.

12 pdf documents uploaded as blobs

Setting Document Properties

It would be nice to search these documents not only based on content, but also based on metadata. We can add metadata properties (name-value pairs) to each of these blobs. In the Microsoft Azure Storage Explorer, right-click a blob and select Properties. In the Properties dialog, click Add Metadata to add a property and enter a name and value. We'll later be able to search these properties. In my example, we've added a property named DocType and a property named Title to each document, with values like "pdf" and "Chapter 1: Cloud Computing Explained".

Blob with several metadata properties
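Setting these properties can also be done in code rather than through the explorer. Here's a sketch using the Windows Azure Storage SDK; the connection string and local file path are placeholders, and `UploadFromFile` takes just a path in recent SDK versions (older versions also required a FileMode argument):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Connect to the storage account (placeholder credentials).
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<key>");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("book-docs");
container.CreateIfNotExists();

// Upload a chapter document as a block blob.
CloudBlockBlob blob = container.GetBlockBlobReference("ch01.pdf");
blob.UploadFromFile(@"C:\book\ch01.pdf");

// Add the custom metadata properties we'll later search on.
blob.Metadata["DocType"] = "pdf";
blob.Metadata["Title"] = "Chapter 1: Cloud Computing Explained";
blob.SetMetadata();
```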

Azure Search Basics

The Azure Search Service is able to search a variety of cloud data sources that include SQL Databases, DocumentDB, Table Storage, and Blob Storage (which is what we're interested in here). Azure Search is powered by Lucene, an open-source indexing and search technology. 

Azure Search can index both the content of blob documents and metadata properties of blobs. However, content is only indexable/searchable for supported file types: pdf, Microsoft Office (doc/docx, xls/xlsx, ppt/pptx), msg (Outlook), html, xml, zip, eml, txt, json, and csv.

To utilize Azure Search, it will be necessary to create three entities: a Data Source, an Index, and an Indexer (don't confuse these last two). These three entities work together to make searches possible.

  • Data Source: defines the data source to be accessed. In our case, a blob container in an Azure storage account.
  • Index: the structure of an index that will be filled by scanning the data source, and queried in order to perform searches.
  • Indexer: a definition for an indexing agent, configured with a data source to scan and an index to populate.

These entities can be created and managed in the Azure Management Portal, or in code using the Azure Search REST API, or in code using the Azure Search .NET API. We'll be showing how to do it in C# code with the .NET API.

Installing the Azure Search API Package

Our code requires the Azure Search package, which is added using nuget. In Visual Studio, right-click your project and select Manage Nuget Packages, then find and install the Microsoft Azure Search Library. You'll also need the Windows Azure Storage Library, also installed with nuget.
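If you prefer the Package Manager Console, the equivalent install commands look something like this (package IDs can change between SDK releases, so verify the names in the nuget gallery):

```shell
Install-Package Microsoft.Azure.Search
Install-Package WindowsAzure.Storage
```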

At the top of our code, we'll need using statements for a number of Microsoft.Azure.Search and Microsoft.WindowsAzure namespaces, and some related .NET namespaces:

using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Microsoft.Azure.Search.Serialization;
using Newtonsoft.Json;
using System.ComponentModel.DataAnnotations;
using System.Web;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

Creating a Search Service

The first step in working with Azure Search is to create a search service using the Azure Management Portal. There are several service tiers you can choose from, including a free service tier which will let you play around with 3 data sources and indexes. To work with your search service in code, you'll need its name and an API admin key, both of which you can get from the management portal. We'll be showing a fake name and key in this article, which you should replace with your actual search service name and key.

Creating a Service Client

To interact with Azure Search in our code, we need to first instantiate a service client, specifying the name and key for our search service:

string searchServiceName = "mysearchservice";
string searchServiceKey = "A65C5028BD889FA0DD2E29D0A8122F46";

SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(searchServiceKey));

Creating a Data Source

To create a data source, we use the service client to add a new DataSource object to its DataSources collection. You'll need your storage account name and key (note this is a different credential from the search service name and key in the previous section). The following parameters are defined in the code below:

  • Name: name for the data source.
  • Type: the type of data source (AzureBlob).
  • Credentials: storage account connection string.
  • Container: identifies which container in blob storage to access.
  • DataDeletionDetectionPolicy: defines a deletion policy (soft delete), and identifies a property (Deleted) and value (1) which will be recognized as a deletion. Blobs with property Deleted:1 will be removed from the index. We'll explain more about this later.
String datasourceName = "book-docs";
if (!serviceClient.DataSources.Exists(datasourceName))
{
    serviceClient.DataSources.Create(new DataSource()
    {
        Name = datasourceName,
        Type = Microsoft.Azure.Search.Models.DataSourceType.AzureBlob,
        Credentials = new DataSourceCredentials("DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=GL3AAN0Xyy/8nvgBJcVr9lIMgCTtBeIcKuL46o/TTCpEGrReILC5z9k4m4Z/yZyYNfOeEYHEHdqxuQZmPsjoeQ=="),
        Container = new Microsoft.Azure.Search.Models.DataContainer(datasourceName),
        DataDeletionDetectionPolicy = new Microsoft.Azure.Search.Models.SoftDeleteColumnDeletionDetectionPolicy()
        {
            SoftDeleteColumnName = "Deleted",
            SoftDeleteMarkerValue = "1"
        }
    });
}

With our data source defined, we can move on to creating our index and indexer.

Creating an Index

Next, we need to create the index that Azure Search will maintain for searches. The code below creates an index named book. It populates the Fields collection with the fields we are interested in tracking for searches. This includes:

  • content: the blob's content.
  • native metadata fields that come from accessing blob storage (such as metadata_storage_name, metadata_storage_path, metadata_storage_last_modified,  ...). 
  • custom metadata properties we've decided to add: DocType, Title, and Deleted.
Once the object is set up, it is added to the service client's Indexes collection, which creates the index.

String indexName = "book";
Index index = new Index()
{
    Name = indexName,
    Fields = new List<Field>()
};

index.Fields.Add(new Field() { Name = "content", Type = Microsoft.Azure.Search.Models.DataType.String, IsSearchable = true });
index.Fields.Add(new Field() { Name = "metadata_storage_content_type", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_storage_size", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_storage_last_modified", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_storage_content_md5", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_storage_name", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_storage_path", Type = Microsoft.Azure.Search.Models.DataType.String, IsKey = true, IsRetrievable = true, IsSearchable = true });
index.Fields.Add(new Field() { Name = "metadata_author", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_language", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "metadata_title", Type = Microsoft.Azure.Search.Models.DataType.String });
index.Fields.Add(new Field() { Name = "DocType", Type = Microsoft.Azure.Search.Models.DataType.String, IsSearchable = true });
index.Fields.Add(new Field() { Name = "Title", Type = Microsoft.Azure.Search.Models.DataType.String, IsSearchable = true });

if (serviceClient.Indexes.Exists(indexName))
    serviceClient.Indexes.Delete(indexName);
serviceClient.Indexes.Create(index);


Let's take note of some things about the index we're creating:
  • Some of the fields are built-in from what Azure Search intrinsically knows about blobs. This includes content and all the properties beginning with "metadata_". Especially take note of metadata_storage_path, which is the full URL of the blob. This is marked as the key of the index. This will ensure we do not receive duplicate documents in our search results.
  • Some of the fields are custom properties we've chosen to add to our blobs. This includes DocType and Title.

Creating an Indexer

And now, we can create an indexer (not to be confused with the index). The indexer is the entity that will regularly scan the data source and keep the index up to date. The Indexer object identifies the data source to be scanned and the index to be updated. It also contains a schedule; in this case, the indexer will run every 30 minutes. Once the indexer object is set up, it is added to the service client's Indexers collection, which creates the indexer. In the background, the indexer will start running to scan the data source and populate the index. Its progress can be monitored using the Azure Management Portal.

String indexName = "book";
String indexerName = "book-docs";
String datasourceName = "book-docs";
Indexer indexer = new Indexer()
{
    Name = indexerName,
    DataSourceName = datasourceName,
    TargetIndexName = indexName,
    Schedule = new IndexingSchedule()
    {
        Interval = System.TimeSpan.FromMinutes(30)
    }
};

indexer.FieldMappings = new List<Microsoft.Azure.Search.Models.FieldMapping>();
indexer.FieldMappings.Add(new Microsoft.Azure.Search.Models.FieldMapping()
{
    SourceFieldName = "metadata_storage_path",
    MappingFunction = Microsoft.Azure.Search.Models.FieldMappingFunction.Base64Encode()
});

if (serviceClient.Indexers.Exists(indexerName))
    serviceClient.Indexers.Delete(indexerName);
serviceClient.Indexers.Create(indexer);

Let's point out some things about the indexer we're creating:

  • The indexer has a schedule, which determines how often it scans blob storage to update the index. The code above sets a schedule of every 30 minutes.
  • There is a field mapping function defined for the metadata_storage_path field, which is the document path and our unique key. Why do we need this? Well, it's possible this path value might contain characters that are invalid for an index column; to avoid failures, it is necessary to Base64-encode the value. We'll have to decode this value whenever we retrieve search results.
Putting this all together, when we run the sample included with this article it takes around half a minute to create the data source, index, and indexer. The index is initially empty, but the indexer is already running in the background and will be ready for searching in about a minute.

Creating data source, index, and indexer

Searching Blob Documents

With all of this setup complete, we're finally ready to perform searches.

BlobDocument Class

As we perform searches, we're going to need a class to represent a blob document. This class needs to be aligned with how we defined our index. Our sample uses the BlobDocument class below.

public class BlobDocument
{
    public String content { get; set; }
    public String metadata_storage_name { get; set; }
    public String metadata_storage_path { get; set; }
    public String metadata_storage_last_modified { get; set; }
    public String metadata_storage_content_md5 { get; set; }
    public String metadata_author { get; set; }
    public String metadata_content_type { get; set; }
    public String metadata_language { get; set; }
    public String metadata_title { get; set; }
    public String DocType { get; set; }
    public String Deleted { get; set; } // A value of 1 is a soft delete
    public String Title { get; set; }
}

Simple Searching

A simple search just specifies some search text, such as "cloud".

Up until now we've been using a Service Client to set up search entities. To perform searches, we'll instead use an Index Client, which is created this way:

String indexName = "book";
ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);

To perform a search, we first define what it is we want to return in our results. We'd like to know the blob document URL, its content, as well as the two custom metadata properties we defined for our blob documents, DocType and Title.

SearchParameters parameters = new SearchParameters()
{
    Select = new[] { "content", "DocType", "Title", "metadata_storage_path" }
};

We call the index client's Documents.Search method to perform a search for "cloud" and return results.

String searchText = "cloud";
DocumentSearchResult<BlobDocument> searchResults = indexClient.Documents.Search<BlobDocument>(searchText, parameters);

Search Results

The result of our search is a DocumentSearchResult<BlobDocument> object. We can iterate through the results using a loop. When we defined our index earlier, we had to give Azure Search instructions to Base64-encode the metadata storage path field when necessary. As a result, we now need to decode the path.

foreach (SearchResult<BlobDocument> result in searchResults.Results)
{
    Console.WriteLine("---- result ----");
    String path = result.Document.metadata_storage_path;
    if (!String.IsNullOrEmpty(path) && !path.Contains("/"))
        path = Base64IUrlDecode(result.Document.metadata_storage_path);
    Console.WriteLine("metadata_storage_path: " + path);
    Console.WriteLine("DocType: " + result.Document.DocType);
    Console.WriteLine("Title: " + result.Document.Title);
}
The path is part of the result that matters the most, because if a user is interested in a particular search result this lets them download/view the document itself. Note this is only true if your blob container is configured to permit public access.

Now that we have enough code to perform a search and view the results, let's try some simple searches. For starters, we search on "pdf". That is not a term that appears in the content of any of the documents, but it is a value in the metadata: specifically, the DocType property that we added to each blob earlier. As a result, all 12 documents match:

Search for "pdf" - 12 matches to document metadata

Now, let's try a search term that should match some of the content within these documents. A search for "safe" matches 3 documents:

Search for "safe" - 3 matches to document content

More Complex Searches

We can use some special syntax in our search query to do more complex queries.

To perform an AND between two search terms, use the + operator. The query storage+security will only match documents that contain both "storage" and "security".

AND query

To perform an OR between two search terms, use the | operator. The query dangerous|roi will match documents containing "dangerous" or "ROI".

OR query

In a future post, we'll explore how to perform advanced searches. 

Deleting Documents

Normally, deleting a blob involves nothing more than selecting it in a storage explorer and clicking Delete (or doing the equivalent in code). However, with an Azure Search index it is a little more complicated: if you just summarily delete a blob that was previously in the index, it will remain in the index: the indexer will not realize the blob is now gone. This can lead to search results being returned about documents that no longer exist.

We can get around this unpleasantness by using a soft delete strategy: we define a property that means "deleted" and tell Azure Search about it. In our case, we'll call the property Deleted. A "soft delete" causes the blob to be removed from the index when the indexer encounters Deleted:1--after which it is safe to actually delete the blob. You might consider scheduling an overnight job that permanently deletes all blobs marked as deleted.
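In the Azure Search REST API, this is configured on the indexer's data source with a soft-delete detection policy. A sketch of the relevant fragment, assuming our Deleted property with a marker value of 1, might look like this:

```json
{
  "dataDeletionDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
    "softDeleteColumnName": "Deleted",
    "softDeleteMarkerValue": "1"
  }
}
```

With this policy in place, the indexer treats any blob whose Deleted metadata equals "1" as removed and drops it from the index on its next run.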


With Azure Search, documents in Azure Blob Storage are finally searchable. Using Azure Search gives you the power of Lucene without requiring you to set up and maintain it yourself, and it has the built-in capability to work with blob storage. Although there are a few areas of Azure Search that are cumbersome (notably, Base64 encoding of document URLs and handling of deleted documents), for the most part it is a joy to use: it's fast and powerful. 

You can download the full sample (VS2013 solution) here.

Monday, January 9, 2017

Developing for Google Home: Building a Knowledge Base in API.AI

In my previous post, I described how to get started developing for Google Home. We looked at the walk-through sample Google provides and created the not-terribly-useful Silly Name Maker agent. In this post, I'll describe my first solo project in API.AI, a knowledge base that provides Bible facts.

When you create projects for Google Home in API.AI, you have the option of also furnishing a back end; the API.AI project can interact with your back end using webhooks. But we won't be using a back end today: for my first solo project, I wanted to see what I could do in API.AI by itself. Accordingly, my knowledge base project will be based solely on intents (an intent is a mapping between what a user says and what action the agent takes in response). Rather than use a database, the intents will contain both the questions asked and the responses returned.

API.AI also has some other goodies we won't be using in this first simple project: entities let you extract parameter values from input; contexts are like properties that help you tailor intents to the context of a conversation. Although today's project doesn't require entities or contexts, you can be sure they'll come into play when we move on to more complex projects.

Knowledge Base Design

Since this knowledge base is about Bible facts, the place to start is the first chapter of Genesis -- which describes the creation account. The approach I'm using is to distill each piece of information I want to provide--such as what God created on Day 1--and create an intent for it.

My first intent, then, is called creation_day_1 and looks like this:

creation_day_1 intent

Similar intents follow for the rest of the creation week:

creation_day_2 intent

creation_day_3 intent
creation_day_4 intent

creation_day_5 intent

creation_day_6 intent

creation_day_7 intent

Since this is a knowledge base agent, the scope is simply providing answers to questions. Accordingly, there's no attempt to guide the user from one intent to another. However, the answers to some questions may naturally cause the user to ask follow-up questions.

Anticipating Multiple Phrasings

The idea is that a user can ask "What did God create on the first day?" and hear back "On the first day, God created the heavens and the earth." However, there are many ways a user might make this inquiry, and the knowledge base won't be very useful unless it anticipates more than one way of asking the question. In my intent, I've got the following inquiries listed, all of which trigger the same intent and the same information:

What did God create on the first day?
What did God create first?
When were the heavens created?
When was the earth created?
What was created first?
What was created on the first day?
What was created on day 1?

By the way, you can also include multiple variations in an intent's response. When there's more than one possible response, API.AI will randomly select one of them. Doing this makes your agent seem more conversational and less predictable--in other words, more alive.
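Conceptually, each intent pairs a list of user phrasings with a pool of responses. A simplified sketch of the creation_day_1 intent--not the exact API.AI export format, just the shape of the idea--might be:

```json
{
  "name": "creation_day_1",
  "userSays": [
    "What did God create on the first day?",
    "What did God create first?",
    "When were the heavens created?",
    "What was created on day 1?"
  ],
  "responses": [
    "On the first day, God created the heavens and the earth, and also light and darkness.",
    "The heavens, the earth, light, and darkness were all created on the first day."
  ]
}
```

Any phrasing in the list triggers the intent, and one of the responses is chosen at random.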

After defining intents for the first couple of chapters of Genesis, I'm able to conduct a dialog like the following:

"Hey Google, go to Bible Answers."
"Welcome to Bible Answers. Ask me a question about Genesis."
"What did God create on the first day?"
"On the first day, God created the heavens and the earth, and also light and darkness."
"What did God create second?"
"On the second day God created the sky, and separated the waters above from the waters below."
"When were the animals created?"
"On the sixth day, God created land animals. He also created Man in His image, and declared it good."
"Who was the first child?"
"Adam and Eve's first child was Cain. Cain was a tiller of the ground."
"Who was the second child?"
"Adam and Eve's second child was Abel. Abel was a keeper of sheep."

Well, after all that I've only covered a tiny portion of the Bible--and there are 1,189 chapters in total! I can see I'll be incrementally adding more and more intents to this project for quite some time.

This simple project, while not yet complete, demonstrates how easy it is to write agents for Google Home that conversationally answer questions.