Tuesday, November 20, 2012

C# - A home away from home

For someone with years of experience in Java, a stint coding in C# seemed like a cakewalk. The large-scale ports of popular Java frameworks to .NET, such as NHibernate, Spring.NET and NUnit, make life that much easier for anyone starting to bridge the gap.

There are however some bits that trouble us Java natives no end. Particularly anything & everything to do with the web.config file. This one file has enough traps in it to confuse the hell out of any sane-minded developer. The file has hints for the IDE (Visual Studio), the framework (ASP.NET), the web server (IIS), & everyone else connected to the runtime.

The other bit that seems bothersome is how dependencies, packages & DLLs get referenced. Particularly with frequently changing DLLs, & the tools optimized for caching, it gets difficult to know which version is really loaded & running.

Anyway, once you get past these issues, the ride ahead is through familiar grounds.

Friday, November 2, 2012

Using Pentaho Kettle to Index Data in Solr

Pentaho Kettle is a fine open source ETL tool written in Java. There are several implementations, hooks and plugins available off the shelf for performing various Extract (E), Transform (T), Load (L) processes on data from a source location to a destination location.

Solr, on the other hand, is a rich and powerful production grade search engine written on top of Lucene. So how would it be to get the two to function in tandem, using Kettle to load data into Solr for indexing?

The data load phase for indexing in Solr is very similar to an ETL process. The data is sourced (Extract) from a relational database (MySQL, PostgreSQL, etc.). This data is denormalized and transformed into a Solr-compatible document (Transform). Finally the transformed data is streamed to Solr for indexing (Load). Kettle excels in performing each of these steps!

A Kettle ETL job to load data into Solr for indexing is a good alternative to using Solr's very own Data Import Handler (DIH). Since the DIH typically runs off the same Solr setup (with a few common dependencies), there's some intermixing of concerns between what Solr is good at (search & indexing) and what the DIH is built to do (import documents). The DIH also competes with Solr for resources (CPU, IO). Kettle has no such drawbacks and can be run off a different set of physical boxes.

There are additional benefits of using Kettle such as availability of stable implementations for working across data sources, querying, bulk load, setting up of staged workflows with configurable queues & worker threads. Also Kettle's exception handling, retry mechanism, REST/ WS client, JSON serializer, custom Java code extension, and several handy transformation capabilities, all add up in its favour.

On the cons side, given that the call to Solr would be via a standard REST client from Kettle, the set-up would not be Solr Cloud or ZooKeeper (ZK) aware, and so could not do any smart routing of documents. One option to solve this could be to use the Custom Java Code step in Kettle and delegate the call to Solr via SolrJ's CloudSolrServer client (which is Solr Cloud/ ZK aware). A rough sketch of such a step is below.
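A minimal sketch of what such a Custom Java Code step could delegate to, assuming SolrJ 4.x is on the Kettle classpath (the zkHost, collection and field names below are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrCloudLoader {
    public static void main(String[] args) throws Exception {
        // CloudSolrServer is Solr Cloud/ ZK aware: it reads the cluster state from
        // ZooKeeper and routes documents to the right shard leaders.
        CloudSolrServer server = new CloudSolrServer("zk1:2171");
        server.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "row-1001");                    // placeholder field names
        doc.addField("title", "some denormalized row");
        server.add(doc);

        server.commit();    // in a real job, commit in batches or rely on autoCommit
        server.shutdown();
    }
}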

Thursday, October 25, 2012

Amdahl's Law for Max Utilization and Speedup With Parallelization


There's a lot of talk these days about constraints introduced by the CAP theorem. One other equally relevant law for parallel and distributed systems is Amdahl's law. Amdahl's law talks about the amount of speedup that can be achieved when a given single-processor (or single-threaded) task is split and handed over to N processors (or threads) to be executed in parallel.

To take an example, let's work with the typical entrance examination problem: "if one person takes 2 hours to eat up a cake, how long would four people take to eat the same cake?"

Simple: each person eats up one-fourth of the cake. So time taken = time-for-1-person/ N = 2/ 4 = 0.5 hours. Right? Ya, well, unless it's the very same cake that we were referring to, the one that already got eaten ;)

What Amdahl's law says is that if the single-processor task has sub-tasks or steps, then (unlike the cake eating example above) not every sub-task/ step can be parallelized. There is some fraction of the work (F) that needs to be run sequentially. As a result the speedup is not N times but less, computed as follows.
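With F as the sequential fraction of the work and N parallel processors (or people), the standard form of the law is:

Speedup = 1 / (F + (1 - F)/N)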



- To go back to our cake eating example, there are no sequential steps, so F = 0:
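Speedup = 1 / (0 + (1 - 0)/4) = 4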

i.e. a 4-times speedup. Indeed 4 people took 1/4th the time, i.e. 30 mins, as compared to 2 hours by one person.

- Let's now add some sequential steps before the cake eating. First you have to pay for it at the cashier & then take delivery from the delivery counter.
These have to be done in sequence & only by one person (why pay twice?). Thanks to the monopoly & popularity of the cake vendor, there's invariably a long queue at the cashier. It takes 15 mins to pay and another 15 mins to take delivery of the cake.
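The sequential part is 15 + 15 = 30 mins out of a single-person total of 30 + 120 = 150 mins, so F = 30/150 = 0.2:

Speedup = 1 / (0.2 + (1 - 0.2)/4) = 1 / (0.2 + 0.2) = 2.5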


So you see, due to the 20% sequential tasks the speedup has dropped from 4 times to 2.5 times.

Wednesday, October 10, 2012

Brewer's CAP Theorem


Brewer's CAP theorem talks about Consistency (C), Availability (A) and Partition tolerance (P) as the constraints that primarily govern the design of all distributed systems. There's a lot of literature available online explaining the theorem. The summary is that, given that network partitions (P) will happen, pick one of the other two, Consistency (C) or Availability (A), for your system on a case-by-case basis (since you can't have all three)!

A partition could be caused by the failure of some kind of component - hardware (routers, gateway, cables, physical boxes/ nodes, disks, etc.) and/or software. When that happens:

- If you pick Consistency (C) => All your systems, processing, etc. are blocked/ held up until the failed component(s) recover.

This has been the default with traditional RDBMS (thanks to their being ACID compliant). For financial & banking applications this normally has to be the choice.

- On the other hand, if you pick Availability (A) => All systems, other than the currently partitioned/ failed systems, continue to function as is within their own partitions. Seems good? Well not quite, because this obviously results in inconsistencies across the two (or more) partitioned sections.

Systems thus designed with Availability (A) as their selection (over C) must be able to live with inconsistencies across different partitions. Such systems also need some automated way to later get back to a consistent state (eventual consistency) once the partitioned/ failed systems have recovered.

This is mostly the design choice with the NoSQL stores. Also with services such as Amazon AWS, where eventual consistency within some reasonable time window (of a few seconds to a few minutes) is acceptable.

Monday, September 17, 2012

Workaround for Copy Command from WebHDFS

At the moment the WebHDFS API doesn't offer a Copy command. As a result, the client ends up having to download the file to its local disk and re-upload it via the Create command. Since this means a lot of round trips all the way to the client (typically a non-Java client), the following workaround can be set up to partly alleviate the problem.

Set up an HDFS WebDAV server on one of the DataNode (DN) or NameNode (NN) boxes. Issue the Copy command to the WebDAV server via a REST call. This frees up the client application, while letting the WebDAV server, with its much better connectivity & proximity to HDFS, complete the Copy request. A sketch of such a call is below.
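A bare-bones sketch of the client side call, assuming Apache HttpClient 4.x and a WebDAV server on the NN that honours the standard WebDAV COPY method (RFC 4918); the host, port and paths below are placeholders:

import java.net.URI;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.impl.client.DefaultHttpClient;

// HttpClient has no built-in COPY request, so define a tiny custom method class.
class HttpCopy extends HttpRequestBase {
    HttpCopy(String uri) { setURI(URI.create(uri)); }
    @Override public String getMethod() { return "COPY"; }
}

public class WebdavHdfsCopy {
    public static void main(String[] args) throws Exception {
        HttpCopy copy = new HttpCopy("http://namenode:8085/user/data/source.csv");
        copy.setHeader("Destination", "http://namenode:8085/user/data/copy-of-source.csv");
        HttpResponse resp = new DefaultHttpClient().execute(copy);
        System.out.println(resp.getStatusLine());   // expect 201 Created (or 204 on overwrite)
    }
}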

Wednesday, September 12, 2012

Remotely Debug Solr Cloud in Eclipse Using JPDA, JDWP, JVMTI & JDI


The acronyms first:
JPDA - Java Platform Debug Architecture
JDWP - Java Debug Wire Protocol
JVMTI - JVM Tool Interface
JDI - Java Debug Interface

To debug any open source Java project, such as Solr, using Eclipse, rely on the JDWP support available within any standard JVM. You can get a lot more info about the terms and the architecture in the official JPDA documentation.

At a high level the concept is that there is a JVM to be debugged, the debuggee (Solr), and a debugger (Eclipse). The two communicate over JDWP. Thanks to the standardized wire protocol, the debugger may even be a non-JVM application that speaks the protocol.

One of the two JVMs acts as the debugging server (the one that waits for the other side to connect). The other acts as the debugging client, which connects to the server to start the debugging session.

In our case, to keep things simple let Solr be the debugger server, while Eclipse can be the debugger client. The configurations then are as follows.

On Solr side (assuming Solr Cloud):

java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Djetty.port=7200 -Dhost=myhost -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -Djava.util.logging.config.file=etc/logging.properties -DnumShards=3 -DzkHost=zk1:2171 -jar start.jar

Note: Since we have set suspend=y, the Solr side will stay suspended until the Eclipse debugger client has connected.

On Eclipse side:
Go to Run > Debug Configurations > Remote Java Application
Then choose Standard Socket Attach. Host: localhost (or IP). Port: 8000 (the same as set above)

Also in Eclipse you should have checked out the Solr source code from the Solr trunk as a project. This will allow you to put breakpoints at the appropriate locations to help with the debugging. So go on, give this a shot, and happy debugging!

Friday, August 10, 2012

REST based integration with HDFS via WebHDFS

WebHDFS offers a set of perfectly good REST APIs for any application to integrate with HDFS. This can be particularly advantageous for applications written in languages other than Java, such as Rails, .NET and so on.

Within our LAN, with commodity desktop-class boxes (2.5 GHz processors, 8 GB RAM) and a replication factor of 2, we found read/ write speeds of about 27 Mbps via WebHDFS. This was only a shade slower than the 30 Mbps that we were getting via raw file transfers between the same DataNodes (DN).

Another observation was that our best transfer rates were achieved by setting the buffer size to 22K. We played around with several other buffer size values, but found 22K to be the magic number. Hoping to find some logical explanation for this observation. 
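For reference, a bare-bones sketch of the two step WebHDFS Create call (redirect from the NN to a DN, then stream the bytes) with the buffersize parameter set; the host, port, user and paths are placeholders, and 22K is assumed to mean 22 * 1024 bytes:

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsUpload {
    public static void main(String[] args) throws Exception {
        int bufferSize = 22 * 1024;    // the 22K "magic number"

        // Step 1: ask the NameNode where to write; WebHDFS replies with a 307 redirect to a DataNode.
        URL nn = new URL("http://namenode:50070/webhdfs/v1/user/hadoop/sample.dat"
                + "?op=CREATE&overwrite=true&replication=2&buffersize=" + bufferSize
                + "&user.name=hadoop");
        HttpURLConnection c1 = (HttpURLConnection) nn.openConnection();
        c1.setRequestMethod("PUT");
        c1.setInstanceFollowRedirects(false);
        String dataNodeUrl = c1.getHeaderField("Location");
        c1.disconnect();

        // Step 2: stream the file contents to the DataNode URL returned above.
        HttpURLConnection c2 = (HttpURLConnection) new URL(dataNodeUrl).openConnection();
        c2.setRequestMethod("PUT");
        c2.setDoOutput(true);
        InputStream in = new FileInputStream("sample.dat");
        OutputStream out = c2.getOutputStream();
        byte[] buf = new byte[bufferSize];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
        System.out.println("HTTP " + c2.getResponseCode());   // 201 Created on success
    }
}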

Sunday, June 10, 2012

Triangular Matrix

There are two special kinds of square (n x n) matrices known as the Upper Triangular Matrix & the Lower Triangular Matrix.

Upper Triangular Matrix - Elements below the main diagonal are all zero.
     i.e. M[i,j] = 0, for i > j

Lower Triangular Matrix - Elements above the main diagonal are all zero.
     i.e. M[i,j] = 0, for i < j

where,
Main diagonal - The diagonal running through M[1,1] to M[n,n] for an n x n square matrix.
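A quick sketch of the same definitions as code (Java, 0-based array indices, so the main diagonal runs from m[0][0] to m[n-1][n-1]):

public class TriangularMatrix {
    // Upper triangular: everything below the main diagonal is zero.
    static boolean isUpperTriangular(int[][] m) {
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < i; j++)                  // i > j, i.e. below the diagonal
                if (m[i][j] != 0) return false;
        return true;
    }

    // Lower triangular: everything above the main diagonal is zero.
    static boolean isLowerTriangular(int[][] m) {
        for (int i = 0; i < m.length; i++)
            for (int j = i + 1; j < m[i].length; j++)    // i < j, i.e. above the diagonal
                if (m[i][j] != 0) return false;
        return true;
    }

    public static void main(String[] args) {
        int[][] upper = { {1, 2, 3}, {0, 4, 5}, {0, 0, 6} };
        System.out.println(isUpperTriangular(upper));    // true
        System.out.println(isLowerTriangular(upper));    // false
    }
}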

Sunday, May 20, 2012

Java Classloaders


Key things about Java classloaders:
1. There is a hierarchy among classloaders:
Bootstrap <--- Extension <--- System <--- Custom

- Child classloaders (typically) delegate class loading to parent classloaders. A child classloader's findClass() method is not called if the parent classloader can load the class.
- Custom classloaders can override the default delegation chain to a certain extent.
- Due to the delegation chain of classloaders, ensure classes to be loaded by custom classloaders are not present on the system class path, boot class path, or extension class path.

2. The Bootstrap classloader is a special classloader included with the JVM, written in native code. It is tasked with loading all the core classes that are part of the JRE.
None of the other classloaders can override the Bootstrap classloader's behaviour.

3. All other classloaders are written in Java.

4. Extension classloader loads classes from the extension directories: JAVA_HOME/jre/lib/ext/

5. A more popular alternative to the Extension classloader is the System classloader, which loads classes from the locations on the CLASSPATH environment variable.

6. Finally, Custom classloaders can be written to override certain defaults, like delegating classloading to parents. Custom classloaders are commonly used by application servers (such as Tomcat).

7. Separate Namespaces per Classloader:
The same class loaded by two different classloaders is treated as two different classes. Trying to cast an object of one (loaded by classloader 1) to a reference of the other (loaded by classloader 2, though identical in its fully qualified class name) will result in a ClassCastException.

8. Lazy loading and Caching of Classes:
Classloaders load classes lazily. Once loaded, a classloader caches all previously loaded classes for the lifetime of the JVM.

Key Methods of Classloaders:
To be detailed..

Dynamic Reloading of Classes:
Due to the non-overridable caching of classes by classloaders, reloading classes within a running JVM poses problems. To reload a class dynamically (a common use case for app servers), a new instance of the classloader itself needs to be created.
Once the earlier classloader is orphaned, the classes loaded & cached by it (and reachable only via that classloader) also become garbage, and can then be collected by the GC.
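A minimal sketch of that reload-by-new-classloader idea, assuming the compiled classes for a hypothetical com.example.Plugin sit under ./plugin-classes/:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ReloadDemo {
    public static void main(String[] args) throws Exception {
        URL[] path = { new File("plugin-classes/").toURI().toURL() };

        // First load: a fresh classloader loads (and caches) the class.
        URLClassLoader loader1 = new URLClassLoader(path, null);   // null parent: bypass the usual delegation chain
        Class<?> v1 = loader1.loadClass("com.example.Plugin");

        // "Reload": create a brand new classloader over the same path; it loads the class afresh.
        URLClassLoader loader2 = new URLClassLoader(path, null);
        Class<?> v2 = loader2.loadClass("com.example.Plugin");

        // Same fully qualified name, but two distinct Class objects in two separate namespaces.
        System.out.println(v1 == v2);   // false

        // Once loader1, v1 and all instances of v1 become unreachable,
        // the GC can reclaim the classloader and the classes it cached.
    }
}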

Sunday, May 13, 2012

Reflexive, Symmetric, Transitive, Associative, Commutative, Distributive Laws and Operators

A refresher:

Reflexive: a = a
(Every element is mapped back to itself)

Symmetric:  a = b,  => b = a

Transitive: a = b  & b = c, => a = c

Associative: (x # y) # z = x # (y # z)
(The operator is immune to different grouping of the operands)

Commutative: x $ y = y $ x
(The operator is immune to changing order of operands)

Distributive: x @ (y ^ z) = (x @ y) ^ (x @ z)
(The outer operand x, with its operator @, can be distributed over the other two, y & z, without affecting the result)

Monday, April 2, 2012

NP, NP-Hard & NP-Complete


NP: A problem that has a non-deterministic, polynomial time solution. (Mind you, it is still polynomial time.) The other way to define it is: given a solution (certificate), one can verify the correctness of the solution using a deterministic Turing Machine (TM) in polynomial time.

NP-Hard: At least as hard as the hardest problems in NP, i.e. every problem in NP can be reduced to it in polynomial time. An NP-Hard problem need not itself be in NP.

NP-Complete: ( NP ) <intersection> ( NP-Hard ).

NP & NP-Hard are therefore different sets, and overlap exactly where problems are NP-Complete. Many NP-Hard problems, as a result, are not NP-Complete.

Monday, March 5, 2012

Rails Cheat Sheet

This is a living doc with my notes on getting off the ground and surviving around a Rails app. My experience with Rails is rather clunky, & there's hardly anything to speak of with Ruby.

Set up
Ubuntu OS, Rails 2.0, Ruby 1.8 running on Mongrel behind an Apache web server

Books
A good starting point would be "Four Days on Rails" by John McCreesh.

Installing on Ubuntu:

- Install the following through Synaptic
Ruby 1.8
Gem 1.8
mysql client/ server (should be installed to connect & run mysql server locally)

- Further install all essential gems (you can go about this lazily, installing what you need):
sudo gem install -v=2.0.2 rails

Gems installed on my dev:
gem list 
    actionmailer (2.1.0, 2.0.2)
    actionpack (2.1.0, 2.0.2)
    activerecord (2.1.0, 2.0.2)
    activeresource (2.1.0, 2.0.2)
    activesupport (2.1.0, 2.0.2)
    cgi_multipart_eof_fix (2.5.0)
    daemons (1.0.10)
    fastthread (1.0.1)
    gem_plugin (0.2.3)
    localization_generator (1.0.8)
    mongrel (1.1.5)
    mongrel_cluster (1.0.5)
    mysql (2.7)
    rails (2.0.2)
    rake (0.8.1)
    salted_login_generator (2.0.2)


Creating your first app
    rails <Application Name>
    edit config/database.yml for correct database settings
    ./script/generate model <modelName>
    ./script/generate controller <controllerName>
    edit db/migrate/<record name> for the appropriate insert and delete scripts
    rake db:migrate
    rake db:migrate RAILS_ENV=production
    Scaffold the controller (put the text active_scaffold :<modelName> in the controller)

Starting the server
  Within the project folder type:
 ./script/server

Hit http://localhost:9000/ to test. Monitor the console outputs for any issues. Once you find things are working, you can turn this into a daemon (adding & to the script).
./script/server --env=production &

Sunday, February 26, 2012

Phantom References in Java

Depending upon the tenacity of their grasp on the referent object, the Java language defines four different kinds of references - Strong, Soft, Weak & Phantom.

A simple search leads you to good explanations on the topic. Particularly the article by Ethan, this SO discussion, the official Java docs from Oracle, and this article on reachability are useful.

The focus of this article is to explain the working of the example in KDGregory's post on Phantom References.

You must understand the call sequence in that post and the purpose of the participating classes. The key points are:

(i) PooledConnection: extends the Connection class.

(ii) Client Applications: only interact with instances of the wrapper PooledConnection class. Applications get hold of new connections via ConnectionPool.getConnection().

(iii) ConnectionPool: holds a fixed number of underlying (base) Connections.

(iv) IdentityHashMap: Strong references to the currently assigned/ in-use Connections are maintained in two IdentityHashMaps - ref2Cxt & cxt2Ref. The two maps link the PhantomReference of a newly created PooledConnection with its underlying Connection (ref2Cxt maps reference to Connection, cxt2Ref the reverse).

The code is able to handle the following cases with regard to releasing of connections:

Case 1: A well behaving client, which calls PooledConnection.close():
This in turn calls ConnectionPool.releaseConnection(Connection), passing in the associated Connection object, which is thereby returned to the Connection Pool.

Case 2: Misbehaving client, Crashed client, etc. which does NOT call PooledConnection.close():
Once the client application stops using the PooledConnection object and it becomes unreachable, the PhantomReference associated with the PooledConnection object (see wrappedConnection()) eventually gets enqueued by the GC in the associated ReferenceQueue.

In the ConnectionPool.getConnection() method, the call to ReferenceQueue.remove() returns the PhantomReference to that particular PooledConnection. Passing this PhantomReference to ConnectionPool.releaseConnection(Reference) locates the associated Connection object in the _ref2Cxt IdentityHashMap & returns the Connection back to the Connection Pool.

This automatic reclaiming of the precious (yet orphaned) Connection is the crux of KD Gregory's Phantom Reference example.
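The bare mechanism, stripped of all the pooling logic (an illustrative sketch, not KD Gregory's code):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();

        Object resource = new Object();        // stands in for the PooledConnection
        PhantomReference<Object> ref = new PhantomReference<Object>(resource, queue);

        resource = null;                       // the client "forgets" the object
        System.gc();                           // nudge the collector (not guaranteed)

        // Blocks until the collector enqueues the phantom reference,
        // i.e. until the referent has become phantom reachable.
        PhantomReference<?> enqueued = (PhantomReference<?>) queue.remove();
        System.out.println("reclaim the underlying resource now: " + (enqueued == ref));
    }
}

Note that PhantomReference.get() always returns null, which is exactly why the pool needs the _ref2Cxt map to get from the dequeued reference back to the underlying Connection.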

Sunday, January 22, 2012

Java Generics PECS & Get-Put Principle, Non-Reifiable Types & Erasure, Covariance & Contravariance


Joshua Bloch, in the book Effective Java, introduced the PECS (Producer Extends Consumer Super) mnemonic for managing type hierarchies via Java Generics, i.e. covariance & contravariance for Generics.

Covariance & Contravariance

Covariance is a subtyping principle where a child/ sub-type can be used in place of a parent/ super-type. In Java land this is seen with Arrays, covariant return types in method overriding (Java 5 onwards) & Generics extends.

Contravariance is the reverse, where a parent/ super-type can be used in place of a sub-type. In Java, contravariance is seen with Generics Super.

PECS

Now coming back to PECS, also referred to as the Get-Put principle of Generics. The key thing about PECS is that it is defined from the perspective of the Collection object in focus, and not the caller/ client using the Collection object.

Case 1: When the Collection is being used to retrieve existing (previously added) data, from the Collection's perspective it is a Producer of data. As per PECS it needs to Extend.
The caller in this case can be sure that a Collection<? extends MyParent> holds objects whose parent is MyParent. So the caller can safely iterate over it as a read-only collection of MyParent objects.

However, any kind of addition into the Collection<? extends MyParent> is unsafe. MyParent can have any number of subtypes (MyChild1, MyChild2, etc.) and there is no way for the compiler to know which specific subtype the collection actually holds. So no addition is allowed at this stage.

Case 2: When the Collection is being used to store new data, from the Collection's perspective it is a Consumer of data. As per PECS it needs to define Super. A Collection<? super MyParent> will accept a MyParent (or any of its subtypes), since whatever the actual type parameter is, it is guaranteed to be a supertype of MyParent. A sketch of both cases is below.
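A compact sketch of both cases, using hypothetical MyParent/ MyChild types:

import java.util.ArrayList;
import java.util.List;

public class PecsDemo {
    static class MyParent { }
    static class MyChild1 extends MyParent { }
    static class MyChild2 extends MyParent { }

    // Case 1 - the collection produces data for us: extends.
    static void readAll(List<? extends MyParent> producer) {
        for (MyParent p : producer) {            // safe: every element is-a MyParent
            System.out.println(p);
        }
        // producer.add(new MyChild1());         // compile error: the actual type could be List<MyChild2>
    }

    // Case 2 - the collection consumes data from us: super.
    static void fillWithChildren(List<? super MyParent> consumer) {
        consumer.add(new MyChild1());            // safe: the actual type is some supertype of MyParent
        consumer.add(new MyChild2());
    }

    public static void main(String[] args) {
        List<MyChild1> children = new ArrayList<MyChild1>();
        children.add(new MyChild1());
        readAll(children);                       // a List<MyChild1> is a List<? extends MyParent>

        List<Object> sink = new ArrayList<Object>();
        fillWithChildren(sink);                  // a List<Object> is a List<? super MyParent>
        System.out.println(sink.size());         // 2
    }
}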

Type Erasure & Non-Reifiable Nature of Generics

Java Generics is implemented through erasure, entirely at the compiler level. The type info is discarded from the bytecode (i.e. List<String> & List have identical bytecode). This also causes the non-reifiable behaviour of Generic types, i.e. the inability to determine the type info from the bytecode at runtime. (Note that Arrays, by contrast, are reifiable: an array knows and enforces its component type at runtime.)

The PECS rule is therefore there to make the compiler operate defensively. Once type info has been associated with a given Collection via Generics, the PECS restrictions enable compile-time detection of potentially unsafe usage.

Friday, January 20, 2012

SEDA - Staged Event Driven Architecture

Welsh, Culler & Brewer's paper at SOSP '01 introduces the concepts around SEDA lucidly. SEDA, in my experience, has been a good architectural choice for building scalable back-end systems. SEDA-based systems not only scale well, but also have trackability, failure recovery, load balancing, etc. introduced very easily/ naturally along the stage boundaries.
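A toy sketch of the core idea, stages connected by bounded queues, each stage with its own worker pool (the stage names and sizes here are made up):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MiniSeda {
    public static void main(String[] args) throws Exception {
        // Each stage owns a bounded event queue and a worker pool, so stages can be
        // sized, monitored and throttled independently along the stage boundaries.
        final BlockingQueue<String> parseQueue = new ArrayBlockingQueue<String>(100);
        final BlockingQueue<String> indexQueue = new ArrayBlockingQueue<String>(100);

        ExecutorService parseWorkers = Executors.newFixedThreadPool(2);
        ExecutorService indexWorkers = Executors.newFixedThreadPool(4);

        parseWorkers.execute(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String raw = parseQueue.take();             // Stage 1: parse
                        indexQueue.put(raw.trim().toLowerCase());   // hand off to the next stage
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });

        indexWorkers.execute(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        System.out.println("indexed: " + indexQueue.take());   // Stage 2: index
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });

        parseQueue.put("  Hello SEDA  ");   // feed one event in; kill the process to stop the demo
    }
}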

Monday, January 2, 2012

Intent

The intent of this blog is to show a way forward for others like myself in getting around the complex world of algorithms. We will also be delving into other areas of computer science (distributed systems, architecture, FOSS) every now and then. So happy reading.