Tuesday, 19 August 2014

Do Analog Input Devices Really Need Replacement?

I wanted to watch TED talks on my TV instead of my laptop. The only way I knew was to download the YouTube video, copy it to a USB drive and plug that drive into a USB/CD player. I was not ready to take those extra steps- time-consuming and tedious! When I asked around about laptop-to-TV converters, most people suggested I change my TV: "No one uses a CRT TV these days. Change your TV. New models can connect directly to a laptop." Technology will keep evolving, so I assumed there must be some way out. Besides, the output ports on laptops keep changing every few years.

I started on a mission to connect my laptop's HDMI port to my TV's RCA input, despite the fact that I'm not a hardware person or a technician. When I searched online stores, I found an HDMI to RCA cable. Happiness unlimited! Just when I was about to order the cable, I thought of checking what these input/output types actually are. Here is what Wikipedia gave me-

HDMI (High-Definition Multimedia Interface) is a compact audio/video interface for transferring uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device.

RCA plugs for composite analog video (yellow) and analog audio (white and red).

Logically speaking, how can a simple cable convert a digital output into an analog input? Clearly I needed a converter. I searched for converters in online stores and in shops. All I got was disappointment! Then I learnt about an electronics market some distance away, went there and started asking for a converter. Finally one shopkeeper showed me a converter box. But there was no way to test it- none of those shops had laptops! Yes, they did have CRT TVs. This was what was written on the box:
So I got an HDMI cable and an RCA cable to complete the setup....all of it with no guarantee, no warranty and a no-return policy! I was investing INR 1600 without testing. Risk. I returned home and completed the connection setup.

All I could see were multi-coloured stripes on the TV. Switching off the converter blanked out the TV screen- the only sign that the TV was taking the converter as its input. A couple of days passed with no luck. Finally one morning I made the connections first and then started my laptop. Voila! The laptop screen was replicated on the TV. However, I could hear audio only from the laptop. I tried with a Mac: again only video, no audio from the TV.

Another couple of days passed by. Then I discovered the Win + P shortcut for projecting. Choose "Second screen only".....and there you hear audio from your television.

I'm hoping to learn the Win + P magic for Mac, so that I can connect the Mac to the TV!!

Saturday, 24 May 2014

Top open source projects in Java

Here is what GitHub says (I am surprised to see Spring at #10):

Friday, 9 May 2014

Strategy Guide to Entering into Freelancing

I attended my engineering college's alumni meet last week. Many juniors approached me wanting to know what needs to be done to work as a freelancer, how to get projects and much more. Even students had these questions. So I'm jotting down some important points on breaking into the competitive field of freelancing and excelling in it.

Create a profile which is accessible on the net

LinkedIn is presently a widely used professional networking site. Create a profile and keep it updated. The key features of the portfolio should be- summary, experience (projects/paper presentations/technical competitions/conferences) and academic details. Recommendations and endorsements are added advantages.

Create a project/app and publish it on an app engine or one of the web stores

If one is interested in mobile application development, develop an app and publish it on the respective store- Android or iOS. Similarly for web apps. Also publish the source code on GitHub to showcase coding skills.

Register yourself on freelancing websites

There are multiple websites where one can bid for projects; Elance and Freelancer are two of them.
Add a summary of the points listed above to your profile. Keep an eye on which projects you are interested in or well-versed at. Look for a suitable project and apply.

Participate in networking activities

Groups conduct regular meetups, and attending them helps build a social network. Folks on either side of the need attend- those who have projects they want to outsource, and those who want projects to work on. Try to get projects from people within your network.

Participate in design/development/data sciences contests

Various competitions are held at different levels- local, national, international. These competitions give us a close look at real-world problems and how to solve them. We also learn to think through everything from problem analysis to a working solution.

Contribute to some existing open-source projects

To begin with, add a few projects (usually apps you use often) to your watch list and raise bugs/issues. The next step is suggesting features for the app.

Friday, 25 April 2014

[Big Data] Apache Kafka - Part I

A huge amount of real-time data is continuously being generated these days by various sources. There are many examples. On Facebook, your feed is continuously populated with newer and newer items, and the recent activities of your friends keep updating as and when any activity happens. Similarly, the question-answer site Quora shows your notifications, answers, upvotes, newly asked questions etc. You do not have to click the refresh button to get them. Twitter is another very good example.

On the other hand, there are many applications which want to consume this data. In most cases these data-consuming apps are not connected to the data-producing apps. Since we do not have data producers and consumers under the same umbrella, we need a mechanism which will seamlessly integrate the two ends. Producers then need not even know who the consumers are; they just have to bother about their job of pushing messages to a system as and when they are generated.

The data generated today is Big Data. We already got familiar with Big Data in our previous post- we know its characteristics (volume, velocity and variety) as well as its importance. The size of the data generated poses a big challenge for this integration system. In most cases it is not just about consuming the data, but also performing analytics on it. Real-time analytics on huge amounts of data, producing real-time outputs, is something that has to be catered to. Yes, there are some systems which do not need the data in real time: when they want to consume data, they connect, fetch the data generated till that time, go back offline, and then perform analytics on it.

Kafka is the intermediate system between producers and consumers which seamlessly allows different kinds of applications to consume messages. It is a publish-subscribe commit log system, designed to process real-time activity-stream data such as news feeds and logs.

It was developed at LinkedIn and later open-sourced. The need arose because LinkedIn had to deal with a very large number of events (e.g. updates, user activity) at low latency.

Kafka is a distributed, partitioned system. Logs are saved under various 'topics'. For each topic, Kafka saves messages in partitions, with the intention of enabling scaling, fault-tolerance and parallel consumption. Each partition is an ordered, immutable sequence of messages which keeps getting appended to the log. The log is retained for a predefined amount of time. Consumers can subscribe to multiple topics. Messages are stored in order and each message gets a sequential id, its 'offset'. Each consumer tracks its own offset, which advances as it consumes messages. Usually a consumer will consume messages in order.
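
To make the partition-and-offset idea concrete, here is a toy in-memory sketch (my own illustration, not Kafka's actual code- all class and method names here are made up): a topic holds ordered, append-only partitions, and a consumer reads from its own offset onward.

```java
import java.util.*;

// Toy model of a Kafka topic: a fixed set of partitions, each an
// ordered, append-only list of messages. Offsets are just indices.
class TopicLog {
    private final List<List<String>> partitions = new ArrayList<>();

    TopicLog(int numPartitions) {
        for (int i = 0; i < numPartitions; i++) partitions.add(new ArrayList<>());
    }

    // Append a message to a partition; the returned sequential id is its offset.
    long append(int partition, String message) {
        List<String> log = partitions.get(partition);
        log.add(message);
        return log.size() - 1;
    }

    // A consumer reads from its current offset onward; the log itself never changes.
    List<String> readFrom(int partition, long offset) {
        List<String> log = partitions.get(partition);
        return new ArrayList<>(log.subList((int) offset, log.size()));
    }
}

public class TopicLogDemo {
    public static void main(String[] args) {
        TopicLog feed = new TopicLog(2);
        feed.append(0, "user42 viewed a profile");
        feed.append(0, "user42 posted an update");
        feed.append(1, "user7 logged in");
        // A consumer whose offset in partition 0 is 1 sees only the newer message.
        System.out.println(feed.readFrom(0, 1)); // [user42 posted an update]
    }
}
```

A real broker additionally persists these logs to disk and expires them after the retention period; the sketch only captures the ordering and offset semantics.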


A server in a Kafka cluster is called a Broker. The Kafka cluster retains messages for a predefined period, so even if a consumer is not continuously connected to the cluster, it can connect at intervals and consume the messages published by that time.

It is up to the producer to decide which topic and which partition a message gets published to. Consumers can be grouped together into consumer groups. When a message is published, it is delivered to exactly one consumer within each subscribing consumer group.
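
The delivery rule- one consumer per group receives each message- can be sketched with a hypothetical round-robin dispatcher (this illustrates the semantics only; it is not how Kafka actually assigns work to group members):

```java
import java.util.*;
import java.util.function.Consumer;

// Toy consumer group: each published message is handed to exactly
// one member, balanced round-robin across the group.
class ConsumerGroup {
    private final List<Consumer<String>> members = new ArrayList<>();
    private int next = 0;

    void join(Consumer<String> member) { members.add(member); }

    void publish(String message) {
        members.get(next).accept(message); // exactly one member gets it
        next = (next + 1) % members.size();
    }
}

public class ConsumerGroupDemo {
    public static void main(String[] args) {
        List<String> alice = new ArrayList<>();
        List<String> bob = new ArrayList<>();
        ConsumerGroup group = new ConsumerGroup();
        group.join(alice::add);
        group.join(bob::add);
        group.publish("m1");
        group.publish("m2");
        group.publish("m3");
        System.out.println(alice); // [m1, m3]
        System.out.println(bob);   // [m2]
    }
}
```

In real Kafka the balancing unit is the partition rather than the individual message- each partition is assigned to one member of the group- but the net effect is the same: every message is processed once per group.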

A single Kafka broker can handle hundreds of megabytes of reads and writes per second from many clients simultaneously.
Messages are persisted on disk and replicated within the cluster, so a message can be consumed multiple times and there is no data loss. Kafka is cluster-centric, which allows fault-tolerance and durability.

Sunday, 20 April 2014

AnswerReader : An Awesome App in the Making!

AnswerReader is a powerful app to organize and customize Quora, providing a friendly experience.
AnswerReader acts as a single interface for performing various activities. Using Quora from a browser will most likely result in multiple browser tabs being opened. AnswerReader instead provides a multi-column view of Quora. The user can save which topics he wants in those columns; these saved topics appear in order until the user removes or changes them, so there is no need to set them up every time. Any column can be scrolled into the main readable area by simply clicking on the topic name in the left navigation panel. AnswerReader also provides quick access to most profile-related info, like stats and credits.
  • Create a customized Quora view: Manage columns, shortcuts and much more- all in one app.
  • Boost Productivity: No need to set up what you want to see in the AnswerReader columns every time.
  • All in one interface: Manage answers, comments, replies, upvotes, drafts, posts.
  • Stay Focused: Never miss out on anything related to the topics you are most interested in.
  • Multiple Shortcuts: Shortcuts without leaving the main page. No more opening multiple browser tabs.
  • Follow without actually following: Keep track of the activities of a topic or question even without following it on Quora.
  • Manage What or Whom to Follow: Follow or unfollow a question, topic or person.
  • Become a Power User: Everything that you can do on Quora plus ease and multi-column view.
Download from Chrome Web Store: 
AnswerReader is an open source project. Fork it on GitHub:
Note: This tool is under active development. A lot of new features are coming soon.

Thursday, 10 April 2014

Big Data : An Introduction

What is Big Data?

For years, companies have been making decisions based on analytics performed on huge volumes of data stored in relational databases. Saving data in a structured manner in relational databases used to be very costly. Storing and processing huge amounts of unstructured data (big data) is much cheaper and faster, which is the main reason big data has attracted so much attention. Big data typically comes in the following types:
  • Data from social media sites.
  • Enterprise data including customer related information of CRM applications.
  • Logs of any system- be it related to software or manufacturing.
Big data and its processing are characterised by 3 qualities:
  • Volume : Normally we speak of gigabytes. Here it goes to tera-, peta-, exabytes and beyond, and the amount keeps on increasing.
  • Velocity : Relational databases do not scale linearly. We expect the same performance even when the data is huge and arriving fast.
  • Variety : Most of the data is unstructured, with a small amount of structured data too.
Why is big data important?

The economic value of big data varies a lot. Sometimes the advantages are indirect, as in decision making. Typically there is a good amount of information hidden within this big chunk of unstructured data, and we should be able to figure out precisely which part of the data is valuable and can be used further. This leaves us wondering what we ultimately do with such a huge amount of data, on which analytics should be produced as fast as possible.

There are many use cases: Twitter wants to find the most retweeted or trending tweets, or tweets containing a particular hashtag; Google has to serve the results of countless queries; ad publishers need to know how many new ads have been posted; Quora has to publish newly posted questions and generate a news feed as per every user's topics and people followed; millions of notification emails are sent from so many websites; app stores count the number of applications downloaded; news sites display newly published articles and posts; and a whole lot more. Especially with the increasing use of smartphones and GPS-enabled devices, ad publishers want to display location-specific ads, or ads for stores located near the user's current location. This helps in targeting the right set of customers.


To get the maximum benefit from big data, we should be able to process all of the data together instead of processing it in distinct small sets. Since this data is much larger than our traditional data, the real challenge is to handle it in a way that overcomes the computational and mathematical challenges. The heterogeneous and incomplete nature of the data has to be tackled during processing: when we do manual computations, we can handle such heterogeneity, but when the processing is done by a machine, it expects the data to be complete and homogeneous. Another challenge with big data is its volume, which keeps increasing rapidly. On the hardware front, this is taken care of by increasing the number of cores rather than simply increasing the clock speed, and by replacing traditional hard disk drives with storage offering better I/O performance. Cloud computing also helps handle the volume challenge by being able to process varying workloads. Finally, the large volume of data makes timeliness difficult to achieve: the larger the data to be processed, the more time it takes, yet quick analysis is crucial in most cases.

Wednesday, 19 February 2014

Independent projects I worked on while being a software developer

Note : Everything mentioned here was developed after regular office hours and mostly for fun/learning purpose only.

When I started my career as a software developer (Java), all I knew was OOP concepts, the Collections, I/O and Exception packages, a bit of multi-threading and XML (DOM parser only).

Apart from regular day-to-day development, the first personal project I worked on was a file-search app, very similar to how Windows file search works. After some coding, I was able to: 1. Search in sub-directories 2. Search by file-type/modified-date 3. Search by file-name patterns (*VO.*, notes*.txt) etc.
Next, I wanted to create a UI for this app, so I learnt Swing and created a nice (if I may say so) UI for it.
I couldn't find time to do file-indexing to improve search performance.
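
The original code is long gone, but the core of such a search can be sketched in a few lines with java.nio (a reconstruction under my own naming, not the app's actual code):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Recursive file search by glob pattern (e.g. "*VO.*", "notes*.txt"),
// matching against file names in all sub-directories of root.
class FileSearch {
    static List<Path> find(Path root, String glob) throws IOException {
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + glob);
        try (Stream<Path> paths = Files.walk(root)) { // walks sub-directories too
            return paths.filter(Files::isRegularFile)
                        .filter(p -> matcher.matches(p.getFileName()))
                        .collect(Collectors.toList());
        }
    }
}
```

Filtering by file type or modified date would just be extra predicates on the same stream (e.g. comparing Files.getLastModifiedTime(p) against a cutoff).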

A few months later, I got more interested in Swing and started working on another project- a Java-based IDE. It was just for fun, not with the intention of building something better than Eclipse or NetBeans :) After spending a few weekends on coding, I was able to build and run a Java project through my IDE. Auto-suggest for method names etc. was interesting to develop (I learnt reflection).
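
The reflection part of such an auto-suggest is simple to sketch (a hypothetical helper, not my IDE's real code): given a class and a typed prefix, list the matching public method names.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Suggest public method names of a class that start with a given prefix,
// the way an IDE's completion popup would.
class Suggest {
    static List<String> methodsStartingWith(Class<?> cls, String prefix) {
        return Arrays.stream(cls.getMethods()) // public methods, inherited included
                     .map(Method::getName)
                     .filter(n -> n.startsWith(prefix))
                     .distinct()
                     .sorted()
                     .collect(Collectors.toList());
    }
}

public class SuggestDemo {
    public static void main(String[] args) {
        // e.g. typing "index" on a String receiver
        System.out.println(Suggest.methodsStartingWith(String.class, "index")); // [indexOf]
    }
}
```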

It was the 2nd year of my software development career and I was working on web/enterprise apps, getting introduced to various web technologies like JSP, Servlets, Struts, JSF, GWT etc. Influenced by the magic of web technologies, I decided to build my own social network (you can laugh now :)). I knew I would never launch it, but it helped me think like Mark Zuckerberg. Someone found the project interesting and eventually used it for closed-group networking (with a small user base). To be honest, what I gave was a very basic version (I realised it was a lot of work and not worth spending that much time on), which they later had enhanced by others.

My learning so far from these independent projects- even though I was a software developer at good companies, on my pet projects I was working as a Product Manager, Designer, Architect and Programmer.

My next project started when Google App Engine was launched. It did not take me more than a second to realize that I could now host and run my web applications for free. I was so motivated that I learned Python to create my first web app on GAE (Python was the only supported language at that time). I published one more app a few months later, but gradually lost interest, as I was using Java, Java EE, Spring, Hibernate etc. in my office work.

But hey...wait a second...Google adding Java support to GAE? Is it true?...Yes it is...and GAE with Java support was released. I had a big smile on my face!!!

And then I started again and actually never stopped. Till today I have created around 15 apps (using 4 Google Accounts).

My learning so far- apart from what I highlighted in the first part, I also got the opportunity to learn a new language (and its tech stack) and GAE (and hence cloud computing- IaaS/PaaS/SaaS and other cloud service providers), and I enjoyed seeing my web applications live (at appspot dot com).

Next, I became an API maniac. I got into the habit of breathing APIs. Every week I would pick some APIs from ProgrammableWeb (an API directory) and do something with them. Apart from learning API programming, this also helped me win an iPad in the PayPal X Developer Challenge.

Chrome Apps and Extensions- rolling out my ideas in the form of utilities was quick, easy and interesting. For example, 'Java Populars', which I built in half an hour, has 40K+ users. Similarly, the News-You-Like and Favorite-Bollywood-Tweets apps got featured in Digit magazine. I learnt a lot about HTML5 and JavaScript through this and have built 20+ apps/extensions so far- 'Quick Chart', 'Simple Task Manager', 'TechCrunch Slides' etc., to name a few.

Summary : The entire journey helped me become a better contributor on the main projects (the ones I get paid for).

PS : I'm getting lazy about sharing my interest in mobile apps (and other areas) and what I did there.