I was thinking recently about all the work being done around “natural language” search, with several startups (notably powerset.com and textdigger.com) looking to make everyone’s search experience less frustrating. Let’s face it: despite all the work going into the algorithms behind the scenes, I don’t think search today is significantly different than it was five years ago.

You type your word combos into the box and hope and pray that something relevant comes back in the first 10 entries on the page. Or you repeat with a slightly different Boolean combo. And still hope and pray…

But maybe it’s not simply the initial search terms that determine how successful a search for information will be. Maybe, if we acknowledge that it’s really, really hard to get back the results we want on the first try, the way to develop a better search experience is through how you engage with the results to refine that search.

That’s why I was interested in Quintura and, more recently, in silobreaker.com. In each case they display related items in a search visually in order to expose connections to the key search term. Don’t think an item is related? Just get rid of it, either by clicking the ‘x’ or, in the case of Silobreaker, dragging it to the trash.


It makes refining the search more intuitive, but it also exposes linkages between items through proximity and size, which helps you understand what the internet (or at least Quintura/Silobreaker) thinks of you.

Note: another visual search engine, SearchMe, was just launched in beta. TechCrunch covers it here.