Saturday, August 31, 2013

Internals Of Solr/ Lucene Document Scoring

This post continues a discussion on the Solr community mailing list about the efficiency of the Solr/ Lucene scoring algorithm.

The search algorithm given here can be summarized to:

- Query query = build the Query from the user's search terms.
- Collector collector = typically a TopScoreDocCollector.
- Searcher searcher = new IndexSearcher(indexReader);
- searcher.search(query, collector);
- Weight weight = query.weight(searcher);
- Scorer scorer = weight.scorer(indexReader); // typically BooleanScorer2
- scorer.score() => a ConjunctionScorer advances every sub-scorer in a leap-frog/ skip-ahead fashion.

The algorithm leaves room for improvement!

An AND query follows a leap-frog/ skip-ahead pattern implemented at the BooleanScorer2 (ConjunctionScorer) level.

For example, take the query q=A AND B, where A & B match the doc ids:
A -> 1, 3, 5, 7, 11, 15, 17
B -> 2, 6

- The scorer starts with the minimum doc id of each, i.e. A -> 1 & B -> 2, & the current highest doc id is set to 2.

- In the next few iterations:
A is advanced past the current highest (2) to 3 & the current highest is updated to 3.
B is advanced past the current highest (3) to 6 & the current highest is set to 6.
A is advanced past 6 to 7 & the current highest is set to 7.
B has no more docs, so the scorer breaks out without a match.

On the other hand, had the two converged/ agreed on a particular doc id, that doc would have been scored & collected (added to a min-heap of scores).
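
To make the leap-frog concrete, here is a minimal, self-contained sketch of the intersection logic (an illustration of the technique only, not the actual Lucene ConjunctionScorer source; a real scorer would skip ahead via advance(target) over skip lists rather than stepping one doc at a time):

  public class LeapFrogDemo {
      public static void main(String[] args) {
          int[] a = {1, 3, 5, 7, 11, 15, 17};  // doc ids matching term A
          int[] b = {2, 6};                    // doc ids matching term B
          int i = 0, j = 0;
          while (i < a.length && j < b.length) {
              if (a[i] == b[j]) {
                  // Sub-scorers agree on a doc id: score & collect it
                  System.out.println("match: " + a[i]);
                  i++;
                  j++;
              } else if (a[i] < b[j]) {
                  i++;  // advance A past the current highest doc id
              } else {
                  j++;  // advance B past the current highest doc id
              }
          }
          // Either list is exhausted: no match found, exactly as in the
          // A/B walk-through above.
      }
  }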

Thursday, August 15, 2013

Update Apt Repositories Location for Old Ubuntu Versions

When working with an old version of Ubuntu (11.04, 10.04, etc.), the biggest handicap is the lack of a functional package manager such as apt or synaptic. The reason the package managers stop working is that, at the end of support for an old version of Ubuntu, the team behind Ubuntu archives its repositories.

At this point, as a user, you are supposed to Upgrade (the recommended practice) to a more recent version. There are normally enough advance notices and alerts sent out by Ubuntu's Update Manager for the same. If, however, you have a compelling reason to stick with your current version, then here's a way to update your repositories' sources list to be able to install and use the old software that is present in the archival repository. This is based on the recommendation made on this forum discussion.
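
Concretely, the archived repositories for end-of-life releases live at old-releases.ubuntu.com. A sketch of the /etc/apt/sources.list entries for 11.04 (codename natty; substitute the codename of your release), followed by an apt-get update:

  deb http://old-releases.ubuntu.com/ubuntu/ natty main restricted universe multiverse
  deb http://old-releases.ubuntu.com/ubuntu/ natty-updates main restricted universe multiverse
  deb http://old-releases.ubuntu.com/ubuntu/ natty-security main restricted universe multiverse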

Sunday, August 11, 2013

Resume Large Downloads in Mozilla Firefox

When downloading a large file via Firefox over a slow internet connection, you might get disconnected midway and end up with a partially downloaded file (with a .part file extension). Here's a little trick to resume the download after reconnecting, from where it had previously stopped.

1. Open up the Firefox Downloads window (Tools > Downloads, or use the shortcut Ctrl + Shift + Y). Not sure if this works with the recent versions of Firefox.

2. Click on the Resume button next to the file that got partially downloaded. If this works then great, nothing else to do.

3. On the other hand, if step 2 didn't work, then click on the Retry button. This will cause the download to start all over again from the very beginning. Let it start and go over to step 4.

4. Once a few bytes of the file have been downloaded & the progress meter in the Downloads window indicates that the new download has started (it might also give an estimate of the time left), click on the Pause button next to the download.

5. Now go to your Downloads folder (where Firefox was downloading the file). Rename the first, partially downloaded file (the one with the .part file extension) to the name of the new .part file that started downloading in step 4, replacing it.

6. Go back to the Firefox Downloads window and click the Resume button next to the download that was paused in step 4.

That's it. The download should resume from the point where the initial partially downloaded (.part) file had stopped.

Thursday, August 1, 2013

Trees and Graphs

Useful things to know about tree and graph based data structures:

These:
  • Binary Trees vs. Binary Search Trees
  • 2-4 Trees and Red-Black Trees
  • AVL Trees
  • Tries
  • Heaps
  • B & B+ Trees
& these:
  • BFS, DFS
  • Sorting - Quick, Merge, Radix, Timsort
  • Kruskal's & Prim's algorithm for Minimum Spanning Trees
  • Morris Traversal, without extra space or recursion, using Threaded Binary Trees
  • Dijkstra's algorithm for shortest path
  • Topological sorting
Remember that horses for courses applies: pick the structure that suits the use case.

The big-O deal: know the big-O complexities of the common operations on each of these.

With Java, well tested implementations of most of these are readily available (a quick sketch follows the list):
  • TreeSet
  • TreeMap
  • LinkedHashMap
  • ConcurrentSkipListMap
  • PriorityQueue
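
For instance, a quick sketch of two of these in action (TreeMap is a Red-Black tree under the hood; PriorityQueue is a binary min-heap):

  import java.util.PriorityQueue;
  import java.util.TreeMap;

  public class BuiltInTrees {
      public static void main(String[] args) {
          // TreeMap: sorted keys, O(log n) get/put, plus ordered queries
          TreeMap<Integer, String> map = new TreeMap<Integer, String>();
          map.put(5, "five");
          map.put(1, "one");
          map.put(9, "nine");
          System.out.println(map.firstKey());   // 1
          System.out.println(map.floorKey(7));  // 5 (greatest key <= 7)

          // PriorityQueue: O(log n) insert, O(1) peek at the minimum
          PriorityQueue<Integer> heap = new PriorityQueue<Integer>();
          heap.add(5);
          heap.add(1);
          heap.add(9);
          System.out.println(heap.peek());      // 1
      }
  }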

Thursday, July 11, 2013

Solr Analyzers Basics


Solr offers several Analyzers to pre-process document fields being indexed and searched. As part of modelling the schema, one needs to make an informed choice of the specific chain of Analyzers to be applied to every field type (fieldType) defined in schema.xml.

To start off, one needs to understand the different kinds of Analyzers and their purpose:
  1. Char Filters (or CharFilterFactories)
    • Always applied first, i.e. before Tokenizers
    • Operate at the character level (on the field values)
    • Zero or more Char Filters can be chained together; they get applied in the sequence given in schema.xml
  2. Tokenizers (or TokenizerFactories)
    • Convert the stream of characters into a series of Tokens
    • There can be only one Tokenizer in each Analyzer chain
  3. Token Filters (or TokenFilterFactories)
    • Always applied last, i.e. after Tokenizers
    • Operate on the Tokens generated by the Tokenizer
    • Zero or more Token Filters can be chained together; they get applied in the sequence given in schema.xml

To take an example, let's say we have a field title with the value (V1) "Mr. James <b>Bond</b> MI007". Now we run it through the following:

1. Character FilterFactory (One): HTMLStripCharFilterFactory (CF1)

(Output: "Mr. James Bond MI007")

2. Tokenizer (One): StandardTokenizerFactory (T)

(Output: Tokens: [ALPHANUM: "Mr.", ALPHANUM: "James", ALPHANUM: "Bond", ALPHANUM: "MI007"])

3. TokenFilters (Two): WordDelimiterFilterFactory (TF1) & LowerCaseFilterFactory (TF2) 
  • Mr. => WordDelim => Lowercase => mr.
  • James => WordDelim => Lowercase => james
  • Bond => WordDelim => Lowercase => bond
  • MI007 => WordDelim => [MI, 007] => Lowercase => mi, 007
Finally the output text actually indexed: "mr. james bond mi 007"
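
Wired up in schema.xml, the above chain would look roughly like this (the fieldType name and the WordDelimiterFilterFactory flags are illustrative; tune the flags to your needs):

  <fieldType name="text_title" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <!-- CF1: strip the HTML markup before tokenizing -->
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <!-- T: the standard, grammar-based tokenizer -->
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- TF1: split on letter/digit transitions, e.g. MI007 => MI, 007 -->
      <filter class="solr.WordDelimiterFilterFactory"
              generateWordParts="1" generateNumberParts="1" splitOnNumerics="1"/>
      <!-- TF2: lowercase all tokens -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>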

There are several other options and many more Analyzers that one could use. Among them, the different PatternReplace factories, EdgeNGram and the simple WhitespaceTokenizerFactory are the more popular ones. Finally, if none of the standard ones is adequate for a specific use case, there is also the option of writing a custom analyzer.

Monday, June 10, 2013

Solution for making Long GET Request to Solr via SolrNet

Solr has REST APIs available for performing various searches on indexed documents. The client generally issues GET requests to Solr with different parameters (fields, rows, facets, etc.) set. Since there typically are size/ query length limitations on GET requests (imposed by the container, OS, etc.), Solr allows the same queries to be issued to the Solr RequestHandlers as POST requests as well.

We ran into one such issue with long GET requests to Solr from SolrNet and made a few changes to solve it.

Solr Side Changes:
First up, we increased the headerBufferSize of the application server, as explained on SO here, and increased the maxBooleanClauses parameter in solrconfig.xml. This allowed the Solr side to start responding to much longer GET requests. The problem, however, wasn't fully solved: the client side was a dot net application running within IIS, with additional length limitations imposed by the Windows OS & the dot net framework.

SolrNet Side Changes:
In round two, we went for a better fix and switched over to POST requests in place of long GET requests. The solution is largely the same as mentioned on the SolrNet group here & here, the difference being that we switch over to a POST request from within the Get() method of the SolrConnection.cs class whenever the request string is longer than a configurable threshold value.


Update: The PostSolrConnection.cs class has made it to the head branch of SolrNet.

Tuesday, May 28, 2013

Redmine Project Management Tool

While trying to find an open source Agile project management tool, somewhat of an alternative to Rally, I chanced upon Redmine. The initial feel of the tool has been good so far.

We needed a somewhat integrated tool that would allow various teams to collaborate. Redmine does well on this count as it has a task tracker, a bug tracker and a knowledge repository (file/ document management and wiki), all rolled into one.

Additionally, we have been able to migrate our bugs and user accounts from Bugzilla, to get off the ground quickly. Now it is about letting the rubber hit the road, and having the teams start working with Redmine.

Monday, May 27, 2013

SolrNet Separate Highlighting Query - hl.q

Solr allows highlighting of the matched sections in field values. There are several parameters that the caller can set to adjust the highlighting behaviour.

SolrNet, a library to connect to Solr from dot net applications, also has HighlightingParameters exposed in the SolrNet core library. However, only a very small subset of the parameters is currently exposed.

Recently we needed to use the hl.q parameter, to issue a separate/ more specific highlighting query to Solr. The workaround was to make use of the ExtraParams option from the base CommonQueryOptions class.

The same approach could be used for any of the other parameters not exposed by SolrNet, such as hl.boundaryScanner, per field highlighting, maxScan, etc., essentially all the 3.5x onward features mentioned on the Solr Highlighting wiki.

Friday, May 3, 2013

Php Script To Display Process, Vmstat, Disk Usage, Syslog Of A Linux Server Via A Browser

A Php script that executes some standard shell programs for monitoring resource utilization & processes on a given Linux box. The script directs the output to a web-browser.

The Apache web server should be installed on the server. To run, copy the script to the DocumentRoot (/var/www/html). Appropriate rights need to be given to the apache user (which runs this script, but is not its owner) to execute this Php file & to be able to read /var/log/syslog.

Save this file as: showHealth.php in the /var/www/html folder:

Saturday, April 20, 2013

Linux/Unix Shell Function For Date Addition and Subtraction

Here is a small shell script to do date addition and subtraction. This works on the bash shell with GNU Date.

Wednesday, April 17, 2013

Upload to Amazon S3 Bucket via Signed Url with Server Side Encryption

Continuing further from my previous post on upload & download from an Amazon S3 bucket via signed urls, here is how to enable Server Side Encryption (SSE) for the file being uploaded to S3.

Add an x-amz-server-side-encryption request parameter to the GeneratePresignedUrlRequest before getting the signed url:
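
A minimal sketch with the Java SDK ("AES256" is the standard value for S3-managed encryption keys; the request and client objects are assumed to be set up as in the previous post):

  // Ask S3 to encrypt the object at rest with S3-managed keys
  generatePresignedUrlRequest.addRequestParameter(
          "x-amz-server-side-encryption", "AES256");
  URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
  // Note: the client doing the PUT may also need to send the same
  // x-amz-server-side-encryption header along with the upload.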


Monday, April 1, 2013

Upload and Download from Amazon AWS S3 Bucket via Signed Url

While the code snippets here use the Java AWS SDK, in principle these will work with the other SDKs as well.


1. Get hold of FederatedCredentials using your AWS credentials:

Pass in proper access Policy settings for the FederatedCredentials on the S3 Bucket and/ or Item.

E.g.
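
A sketch of this step with the Java SDK (keys, bucket, item and policy contents are illustrative; the classes come from com.amazonaws.auth and com.amazonaws.services.securitytoken):

  // Temporary federated credentials via STS, scoped down by a policy
  AWSSecurityTokenService sts = new AWSSecurityTokenServiceClient(
          new BasicAWSCredentials("<accessKey>", "<secretKey>"));

  String policy = "{\"Statement\":[{"
          + "\"Effect\":\"Allow\","
          + "\"Action\":[\"s3:GetObject\",\"s3:PutObject\"],"
          + "\"Resource\":\"arn:aws:s3:::my-bucket/my-item\"}]}";

  GetFederationTokenRequest ftr = new GetFederationTokenRequest()
          .withName("app-user")         // illustrative federated user name
          .withPolicy(policy)
          .withDurationSeconds(3600);   // 1 hour
  Credentials federated = sts.getFederationToken(ftr).getCredentials();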



For Download, you could additionally set up ResponseHeaderOverrides (withContentDisposition, withContentType, etc.).


2. Get BasicSessionCredentials using the Federated Credentials
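
Continuing the sketch, wrap the returned temporary credentials:

  BasicSessionCredentials session = new BasicSessionCredentials(
          federated.getAccessKeyId(),
          federated.getSecretAccessKey(),
          federated.getSessionToken());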


3. Generate GeneratePresignedUrlRequest
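
For example (bucket and key are illustrative; use HttpMethod.GET for the download case):

  GeneratePresignedUrlRequest req =
          new GeneratePresignedUrlRequest("my-bucket", "my-item")
                  .withMethod(HttpMethod.PUT)
                  .withExpiration(new Date(System.currentTimeMillis() + 3600 * 1000L));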

4. Finally, generate a pre-signed url via the S3Client object:
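
Continuing the sketch:

  // The S3 client is built on the temporary session credentials from step 2
  AmazonS3 s3Client = new AmazonS3Client(session);
  URL url = s3Client.generatePresignedUrl(req);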


5. To test this:
- Download:
Get the url.toString() & hit it from a browser


- Upload:
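
A sketch of the upload test: PUT the file bytes to the pre-signed url (plain HttpURLConnection here; the local file path is illustrative):

  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  conn.setDoOutput(true);
  conn.setRequestMethod("PUT");
  OutputStream out = conn.getOutputStream();
  Files.copy(Paths.get("/tmp/myfile.bin"), out);  // java.nio.file
  out.close();
  System.out.println("HTTP status: " + conn.getResponseCode());  // expect 200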

Wednesday, March 20, 2013

Uploading Large Files In Chunks To Amazon S3

A collection of best practices based on my experience building a scaled-out, server side file upload handler.

1. Authentication/ Authorization

2. Chunking

3. Stateless upload & Session

4. Shared memory for post file operations

5. Retries & Failover

6. Bulk operations

To be completed.. 

Wednesday, March 13, 2013

Autovue Jump To Page

Autovue is a browser based document viewing & markup application.
To open up a specific page of a document in the viewer, simply set up an ONINIT javascript callback method via the applet param.



You can get more info on this from the Advanced Scripting Functionality section of the InstallConfigGuideCS for Oracle Autovue.

Friday, March 1, 2013

Atomic Updates via SolrNet

As of today, the SolrNet api doesn't offer a way to issue atomic updates to a running Solr server. While the SolrNet api is supposed to offer this feature sometime in the future, the following alternative can be used in the interim.

1. Build a custom atomic update XML message:
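
For instance, a message along these lines (field names and values are illustrative; update="set" overwrites a field, update="add" appends to a multi-valued field, update="inc" increments):

  <add>
    <doc>
      <field name="id">doc-123</field>
      <field name="price" update="set">49.99</field>
      <field name="tags" update="add">on-sale</field>
    </doc>
  </add>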


(See: http://wiki.apache.org/solr/UpdateXmlMessages for more details)

2. Get hold of the connection object (via ServiceLocator):


3. Issue a call to Solr via the connection object:


Will be adding sample code snippets soon..

Friday, February 15, 2013

Solr Cell, Tika And Pages

With Solr Cell, aka Tika, you get the power to index content from within a wide set of digital files such as Pdfs, Office documents, text, etc.

Tika, however, doesn't naturally offer any demarcation of page boundaries. So you can search for content matches within a file, but not for the specific pages within the file.

Among several different ways to solve this problem, one could index each page of the file as a separate document in Solr and do a field collapsing/ result grouping of the search results by a common file identifier shared by all pages of the file.

Since there could be performance overheads with result grouping, another way is to index the combined file as one Solr document (of type Combined) & each page as a separate Solr document (of type Page), sharing a common file identifier. The search can then be performed first against the combined documents (type:combined AND text:abc) to identify the files that match, & then against the corresponding page type documents (type:page AND file-id:123 AND text:abc) to identify the pages.

Wednesday, February 6, 2013

Mocking AWS ELB Behaviour Locally For Testing

Once hosted on Amazon, you make use of the AWS Elastic Load Balancer (ELB) for balancing load across your EC2s, within or across Availability Zones (AZ). Since code gets developed and tested locally (outside of Amazon), at times you might want to test load balancer scenarios before deploying to production. Here's one way to mock up the load balancer behaviour for local testing.

Use Apache (you could very well use something like Nginx instead) in a reverse proxy, load balancer set-up via mod_proxy & mod_proxy_balancer. This is fairly simple for anyone with even slight experience of configuring Apache. We used Apache as a load balancer front-end to IIS locally, exactly the way ELB would load balance in front of IIS in production.

Additionally, since ELB was also an SSL end point for our production servers, we set up Apache to be the SSL end point (via mod_ssl) locally, as sketched below. Apache was configured to listen on port 443 (using a self-signed certificate), and would forward all traffic from port 443 to the backend IIS on port 80.
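
A sketch of the relevant Apache configuration (certificate paths and backend member addresses are illustrative; mod_ssl, mod_proxy & mod_proxy_balancer must be enabled):

  <VirtualHost *:443>
      SSLEngine on
      SSLCertificateFile /etc/apache2/ssl/selfsigned.crt
      SSLCertificateKeyFile /etc/apache2/ssl/selfsigned.key

      # Mimic ELB: round-robin across the backend IIS instances
      <Proxy balancer://iis>
          BalancerMember http://iis-host-1:80
          BalancerMember http://iis-host-2:80
      </Proxy>
      ProxyPass / balancer://iis/
      ProxyPassReverse / balancer://iis/
  </VirtualHost>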

Once we had that set-up going, we were quickly able to reproduce an issue with application generated Secure cookies not getting set properly across the client request/ response. Once we had the fix locally (which was to set the flag on the cookies in the request, not the response), the same worked flawlessly on AWS as well.

Wednesday, January 23, 2013

Headless Java Monster

You know you are up against this fellow if you start seeing a java.awt.HeadlessException, typically when running off a virtual server, or in the rarer case of a dedicated server without a monitor (aka head).

The solution is simple. First shut down the application, tomcat, etc. that got the exception.

1. Install an X display manager. On Ubuntu, for instance, you could install the Xvfb virtual framebuffer package (via apt, synaptic, etc.).

2. Start X (in the Xvfb case, e.g. Xvfb :1 &, which starts a virtual display numbered :1).

3. Export the display:
With those done, you should now have entered the simpler "No X11 DISPLAY variable" zone. Simply export the DISPLAY variable to fix this (in the Ubuntu/ Xvfb case above: export DISPLAY=:1).


4. Allow all users to connect to/ use this display (typically via the xhost + command).


Now restart the application, tomcat, etc. that you were trying to run initially & it should work. Hope nothing headless ever troubles no man!

Tuesday, November 20, 2012

C# - A home away from home

For someone with years of experience in Java, a stint coding in C# seems like a cakewalk. The large scale ports of popular Java frameworks to dot net, such as NHibernate, Spring.net, NUnit, etc., make life all that much easier for anyone starting to bridge the gap.

There are, however, some bits that trouble us Java natives no end. Particularly anything & everything to do with the web.config file. This one file has enough traps in it to confuse the hell out of any sane minded developer. The file has hints for the IDE (Visual Studio), the framework (Asp.net), the web server (IIS), & everyone else connected to the runtime.

The other bit that seems bothersome is how dependencies, packages & DLLs get referenced. Particularly with frequently changing DLLs, & the tools optimized for caching, it gets difficult to know which version is really loaded & running.

Anyway, once you get past these issues, the ride ahead is through familiar grounds.

Friday, November 2, 2012

Using Pentaho Kettle to Index Data in Solr

Pentaho Kettle is a fine open source ETL tool written in Java. There are several implementations, hooks and plugins available off the shelf for performing the various Extract (E), Transform (T), Load (L) steps that move data from a source location to a destination location.

Solr, on the other hand, is a rich and powerful production grade search engine written on top of Lucene. So how would it be to get the two to function in tandem, using Kettle to load data into Solr for indexing?

The data load phase for indexing in Solr is very similar to an ETL process. The data is sourced (Extract) from a relational database (MySql, Postgres, etc.). This data is denormalized and transformed into a Solr compatible document (Transform). Finally the transformed data is streamed to Solr for indexing (Load). Kettle excels in performing each of these steps!

A Kettle ETL job to load data into Solr for indexing is a good alternative to using Solr's very own Data Import Handler (DIH). Since the DIH typically runs off the same Solr setup (with a few common dependencies), there's some intermixing of concerns with such a set-up, between what Solr is good at (search & indexing) versus what the DIH is built to do (import documents). The DIH also competes for resources (CPU, IO) with Solr. Kettle has no such drawbacks and can be run off a different set of physical boxes.

There are additional benefits of using Kettle, such as the availability of stable implementations for working across data sources, querying, bulk load, and setting up staged workflows with configurable queues & worker threads. Also Kettle's exception handling, retry mechanism, REST/ WS client, JSON serializer, custom Java code extension, and several handy transformation capabilities all add up in its favour.

On the cons side, given that the call to Solr would be via a standard REST client from Kettle, the set-up would not be Solr Cloud or Zookeeper (ZK) aware, and could not do any smart routing of documents. One option to solve this could be to use the Custom Java Code step in Kettle and delegate the call to Solr via SolrJ's CloudSolrServer client (which is Solr Cloud/ ZK aware), as sketched below.
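
A minimal sketch of what that Custom Java Code step could delegate to, using the SolrJ 4.x-era API (the ZK hosts and collection name are illustrative):

  import org.apache.solr.client.solrj.impl.CloudSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class SolrCloudLoader {
      public static void main(String[] args) throws Exception {
          // CloudSolrServer reads the cluster state from Zookeeper and
          // routes documents to the right shard leaders
          CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181");
          server.setDefaultCollection("collection1");

          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "row-1");
          doc.addField("title", "hello from kettle");
          server.add(doc);
          server.commit();
          server.shutdown();
      }
  }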

Thursday, October 25, 2012

Amdahl's Law for Max Utilization and Speedup With Parallelization


There's a lot of talk these days about the constraints introduced by the CAP theorem. One other equally relevant law for parallel and distributed systems is Amdahl's law. It talks about the amount of speedup that can be achieved when a given single-processor (or single-threaded) task is split and handed over to N processors (or threads) to be executed in parallel.

To take an example, let's work with the typical entrance examination problem: "If one person takes 2 hours to eat up a cake, how long would four people take to eat the same cake?"

Simple: each person eats up one-fourth of the cake. So time taken = time-for-1-person/ N = 2/ 4 = 0.5 hours. Right? Ya, well, unless it's the very same cake that we were referring to, the one that got eaten ;)

What Amdahl's law says is that a single-processor task is made up of sub-tasks or steps, and (unlike in the cake eating example above) not every sub-task/ step can be parallelized. There is some percentage of the work (F%) that needs to run sequentially. As a result the speedup is not N times, but less, computed as follows:
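
For reference, the standard form of the law, with F as the sequential fraction of the work and N the number of processors:

  Speedup(N) = 1 / (F + (1 - F)/N)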



- To go back to our cake eating example: there are no sequential steps, so F = 0 and Speedup = 1/ (0 + 1/4) = 4,
i.e. 4 times speedup. Indeed 4 people took 1/4th the time, i.e. 30 mins, as compared to 2 hours by one person.

- Let's now add some sequential steps before the cake eating. First you have to pay for it at the cashier & then take delivery from the delivery counter.
These have to be done in sequence & only by one person (why pay twice?). Thanks to the monopoly & popularity of the cake vendor, there's invariably a long queue at the cashier. It takes 15 mins to pay and another 15 mins to get the delivery of the cake.
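
Filling in the numbers: for one person the whole job now takes 30 mins (sequential) + 120 mins (eating) = 150 mins, so F = 30/150 = 20%. With N = 4:

  Speedup = 1/ (0.2 + 0.8/4) = 1/ 0.4 = 2.5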


So you see, due to the 20% sequential tasks the speedup has dropped from 4 times to 2.5 times.

Wednesday, October 10, 2012

Brewer's CAP Theorem


Brewer's CAP theorem talks about Consistency (C), Availability (A) and Partition tolerance (P) as the constraints that primarily govern the design of all distributed systems. There's a lot of literature available online explaining the theorem. The summary is that, given that network partitions (P) will happen, pick one of the other two - Consistency (C) or Availability (A) - for designing your system, on a case by case basis (since you can't have all three)!

A partition could be caused by the failure of some kind of component - hardware (routers, gateways, cables, physical boxes/ nodes, disks, etc.) and/ or software. When that happens:

- If you pick Consistency (C) => all your systems, processing, etc. are blocked/ held up until the failed component(s) recover.

This has been the default with traditional RDBMS (thanks to their being ACID compliant). For financial & banking applications this normally has to be the choice.

- On the other hand, if you pick Availability (A) => all systems other than the currently partitioned/ failed ones continue to function as is within their own partitions. Seems good? Well, not quite, because this obviously results in inconsistencies across the two (or more) partitioned sections.

Systems thus designed with Availability (A) as their selection (over C) must be able to live with inconsistencies across different partitions. Such systems also need some automated way to later get back to a consistent state (eventual consistency) once the partitioned/ failed systems have recovered.

This is mostly the design choice with the NoSqls. It is also the choice with services such as Amazon AWS, where eventual consistency within some reasonable time window (of a few seconds to a few minutes) is acceptable.

Monday, September 17, 2012

Workaround for Copy Command from WebHDFS

At the moment the WebHDFS api doesn't offer a Copy command. As a result, the client ends up having to download the file to its local disk and re-upload it via the Create command. Since this means a lot of round trips all the way to the client (typically a non-Java client), the following workaround can be set up to partly alleviate the problem.

Set up an HDFS WebDAV server on one of the DN or NN boxes. Issue the Copy command to the WebDAV server via a REST call. This frees up the client application, while letting the WebDAV server, with much better connectivity & proximity to the HDFS, complete the Copy request.

Wednesday, September 12, 2012

Remotely Debug Solr Cloud in Eclipse Using JPDA, JDWP, JVMTI & JDI


The acronyms first:
JPDA - Java Platform Debug Architecture
JDWP - Java Debug Wire Protocol
JVMTI - JVM Tool Interface
JDI - Java Debug Interface

To debug any of the open source Java projects, such as Solr, using Eclipse, rely on the JDWP feature available within any standard JVM. You can get a lot more info about the terms and architecture here.

At a high level, the concept is that there is a JVM being debugged, the debuggee (Solr), & a debugger (Eclipse). The two communicate over JDWP. Thanks to the standardized wire protocol, the debugger may even be a non-JVM application which subscribes to the protocol.

One of the two JVMs acts as the debug server (the one that waits for the other to connect). The other JVM acts as the debug client, which connects to the debug server to start the debugging process.

In our case, to keep things simple, let Solr be the debug server (it will wait for a connection), while Eclipse is the debug client. The configurations then are as follows.

On the Solr side (assuming Solr Cloud):

java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Djetty.port=7200 -Dhost=myhost -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -Djava.util.logging.config.file=etc/logging.properties -DnumShards=3 -DzkHost=zk1:2171 -jar start.jar

Note: Since we have set suspend=y, the Solr side will stay suspended until the Eclipse debugger client has connected.

On the Eclipse side:
Go to Run > Debug Configurations > Remote Java Application.
Then choose Standard Socket Attach. Host: localhost (or the IP). Port: 8000 (the same as set above).

Also, in Eclipse you should have the Solr source code checked out from the Solr trunk as a project. This will allow you to put breakpoints at appropriate locations to help with the debugging. So go on, give this a shot, and happy debugging!

Friday, August 10, 2012

REST based integration with HDFS via WebHDFS

WebHDFS offers a set of perfectly good REST APIs for any application to integrate with HDFS. This can be particularly advantageous for applications written in languages other than Java, such as Rails, Dot Net and so on.
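
For instance, a minimal sketch of reading a file over WebHDFS from plain Java (the host, the default NameNode HTTP port 50070 and the file path are illustrative; authentication is left out):

  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class WebHdfsRead {
      public static void main(String[] args) throws Exception {
          // OPEN redirects to a DataNode, which then streams the file content
          URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/test.txt?op=OPEN");
          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
          InputStream in = conn.getInputStream();  // redirect followed automatically
          byte[] buf = new byte[22 * 1024];        // cf. the 22K buffer note below
          int n, total = 0;
          while ((n = in.read(buf)) != -1) {
              total += n;
          }
          in.close();
          System.out.println("read " + total + " bytes");
      }
  }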

Within our LAN, with commodity desktop class boxes (2.5 GHz processors, 8 GB RAM) and a replication factor of 2, we saw read/ write speeds of about 27 Mbps via WebHDFS. This was only a shade slower than the 30 Mbps that we were getting via raw file transfers between the same Data Nodes (DN).

Another observation was that our best transfer rates were achieved by setting the buffer size to 22K. We played around with several other buffer size values, but found 22K to be the magic number. Still hoping to find some logical explanation for this observation.

Friday, July 20, 2012