Friday, May 4, 2018

Upgrade to angular 6

How to upgrade/update to Angular 6?

We can upgrade to Angular 6 by using the Angular CLI; below are the steps:

  1. Update the Node.js version to 8.9+.
  2. Update the Angular CLI to the latest version by running:

    npm uninstall -g @angular/cli
    npm cache verify
    npm cache clean
    npm install -g @angular/cli@next
    npm install typescript@2.7.2

Monday, April 16, 2018

MongoDB – Allow remote access

By default, MongoDB doesn't allow remote connections. To enable remote access, bind the IP address of the machine in mongod.conf (on Linux: /etc/mongod.conf):

 # network interfaces
  port: 27017
  bindIp: <ip-address of the machine>  # by default it will be (local connections only)
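In the current YAML config format, the full net section looks like the sketch below (the machine address is an example; keeping in the list preserves local connections, and mongod must be restarted afterwards):

```yaml
# /etc/mongod.conf
net:
  port: 27017
  bindIp:,
```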

Wednesday, February 14, 2018

How to make @requestparam optional in spring


Make spring endpoint with optional request parameters


To make a request parameter optional on a Spring controller, use the Java 8 Optional type or set the required attribute of @RequestParam to false.

public void count(@RequestParam(name = "name") Optional<String> name) {
    if (name.isPresent()) {
        String value = name.get();
    }
}
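The Optional handling above can be exercised outside Spring as well; below is a minimal plain-Java sketch (the greet method and its "Guest" default are illustrative, not from the post):

```java
import java.util.Optional;

public class OptionalParamDemo {

    // Mirrors the controller pattern: use the value when present,
    // otherwise fall back to a default.
    static String greet(Optional<String> name) {
        return "Hello " + name.orElse("Guest");
    }

    public static void main(String[] args) {
        System.out.println(greet(Optional.of("Uttesh"))); // Hello Uttesh
        System.out.println(greet(Optional.empty()));      // Hello Guest
    }
}
```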

Sunday, February 11, 2018

@angular/platform-browser/src/browser/transfer_state.d.ts (34,40): ',' expected.


ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:40 TS1005: ',' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:42 TS1139: Type parameter declaration expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:46 TS1109: Expression expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:51 TS1005: ')' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:59 TS1005: ';' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:60 TS1128: Declaration or statement expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:73 TS1005: '(' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:42 TS2532: Object is possibly 'undefined'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:48 TS2304: Cannot find name 'key'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:53 TS2304: Cannot find name 'string'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:62 TS2693: 'StateKey' only refers to a type, but is being used as a value here.


Update your TypeScript dependency to the latest version:

 "devDependencies": {
    "typescript": "^2.6.1"
 }

Monday, February 5, 2018

MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe"




This error is thrown on Windows machines during node module builds. To resolve the issue, install the Windows build tools; run the following from an administrative CMD or PowerShell:

npm install --global --production windows-build-tools

Friday, January 19, 2018

Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain


In the Bitbucket Pipelines or Jenkins console output, the build fails with "Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain".


This error is thrown when the gradlew build is unable to find the gradle folder, which contains the gradle-wrapper.jar file. If the generated gradle-wrapper.jar was not committed to Bitbucket, check it in or add the file.

    gradle folder structure:
    gradle/
      wrapper/
        gradle-wrapper.jar
    gradlew
    gradlew.bat

Friday, September 1, 2017

guacd:"Error: Protocol Security Negotiation Failure"

The Guacamole server is unable to make an RDP connection to the Windows system, and the guacd library throws this exception because the Windows Remote Desktop setting is not enabled.

Enable remote desktop setting:

This should solve the problem; if not, make sure the firewall, antivirus, and Windows Defender are turned off as well.

Sunday, August 20, 2017

error 10 cordova google plus login ionic

Below are the reasons for error 10:
  1. Android key and web client ID not used properly.
  2. Missing debug key in the .android folder of the Windows system.
  3. Missing SHA-1 key in the Firebase console.
  4. APK not signed with the generated debug key.
Before going further, first check that the Firebase account, Google console, and system environment are set up properly:
  1. If a Firebase account is not set up, go to the Firebase console and create a new project.

  2. Copy the config details and use them in the Ionic app.
  3. Go to Authentication, click on SIGN-IN METHOD, and enable the Google provider.
  4. Once Google authentication is enabled, Firebase automatically generates the web client ID and secret for the app.
  5. Note Google's instructions on how to add the SHA-1 key to the Android application.
  6. Now choose the application from the Firebase start page, add and configure the app, and add the generated SHA-1 key to it.
With the basic setup done, assuming you are able to build and install the app on a mobile device but unable to log in through Google Plus (getting error 10), verify that the configuration below is correct as mentioned above.

Validate below steps to solve error 10 issue

1. Android key and web client ID not used properly:

There are two types of keys: the Android key and the web client ID. If the SHA-1 key was generated during environment setup and added to the app in Firebase, only then is an Android key generated and added in the Google console; go to the Google developer console to get the Android key.
Android key: This key is configured in the config.xml and package.json files as the reversed client ID of the application, and the SHA-1 key configured for this Android key must be the one used for signing the app's APK file.
Click on the edit icon of the Android key and check that the SHA-1 key configured is the same as in the Firebase app.

Web client ID: This key is generated by Firebase on creation of the app, or we can create it manually.

Configure the web client ID in the app and use it in the app code to authenticate the user.

2. Missing debug key in the .android folder of the Windows system:

Make sure the debug/release key is generated and placed in the .android folder under the user directory, and that the same copy is placed under the app folder.

3. Missing SHA-1 key in the Firebase console:

Make sure the same SHA-1 key is used across the Firebase app, the Google console, and the .android folder on the build system; any mismatch in the SHA-1 key will lead to error 10. Also check that the same package name is configured in Firebase and in config.xml.

4. APK not signed with the generated debug key:

Validate that the APK was generated using the configured debug/release SHA-1 key; with the command below we can verify which key the APK is signed with:
keytool -list -printcert -jarfile android-debug.apk
If none of the above solved the problem, try the miscellaneous steps below; they are not standard practice, but they resolve the issue in some odd scenarios.

Miscellaneous steps.

  1. If you are not using Android Studio, try adding your app to Android Studio and configuring it there; it will configure the debug key settings properly and build the APK.
  2. Don't reverse your client_id; use it as-is, which sometimes solves the issue.
  3. Use the Android key from the Google developer console in the code as the web client_id and in config.xml/package.json, which is acceptable and solves the issue.

Wednesday, January 13, 2016

Remove or Filter Stopping/Stemming words using java

For better indexing or searching of large chunks of text, we need to filter the unwanted words from the data so that only meaningful words are indexed, which gives better search performance.

What is Stopping Words?

Stop words are the common filler words used to form a sentence, e.g. in "where is my car?" the words "where/is/my" are stop words that are not required for search.
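The Exude library does this filtering for you, but the idea can be sketched in a few lines of plain Java (the stop-word list here is a tiny illustrative sample, not the library's list):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class StopWordFilter {

    // Tiny illustrative stop-word list.
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("where", "is", "my", "the", "a", "of"));

    // Lowercase, split on non-word characters, drop stop words.
    static String filter(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty() && !STOP_WORDS.contains(w))
                .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(filter("where is my car?")); // car
    }
}
```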

What is Stemming Words?

Stemming reduces inflected words to their root by stripping suffixes such as "ing/tion/ational/ization/ation", e.g. going/standing.

I was looking for a library to filter out stop words and stemmed words, but didn't find much by googling, so I went through blogs, white papers, and algorithms on stop words and stemming and wrote a small utility library in Java to do the filtering; it is available in the Maven repository, with the full source on GitHub.

Exude Library

This is a simple library for removing/filtering stop words and stemming words from text data. The library is at a very basic level of development and will need more work for later changes.

It is now part of the Maven repository; add it directly to your pom as follows.

Download the latest version of exude: download

1. Filter stop words from given text/file/link.
2. Filter stemming words from given text/file/link.
3. Get swear words from given text/file/link.

How the Exude library works:
Step 1: Filter the duplicate words from the input data/file.
Step 2: Filter the stop words from the step 1 output.

Step 3: Filter the stemming/swear words from the step 2 output, using the Porter algorithm for suffix stripping.
Exude process sequence flow:
How to use the Exude library
Environment and dependencies:
1. Minimum JDK 1.6 or higher.
2. Apache Tika jar (used to parse the files for data extraction).

Sample code
Sample Text Data
 String inputData = "Kannada is a Southern Dravidian language, and according to Dravidian scholar Sanford Steever, its history can be conventionally divided into three periods; Old Kannada (halegannada) from 450–1200 A.D., Middle Kannada (Nadugannada) from 1200–1700 A.D., and Modern Kannada from 1700 to the present.[20] Kannada is influenced to an appreciable extent by Sanskrit. Influences of other languages such as Prakrit and Pali can also be found in Kannada language.";
 String output = ExudeData.getInstance().filterStoppings(inputData);

Output:
 extent southern influenced divided according halegannada kannada language three 450 found modern influences periods pali steever a middle d languages old nadugannada dravidian sanford history scholar appreciable 1700 1200 conventionally sanskrit prakrit present 20
Sample File Data
String inputData = "any file path";
String output = ExudeData.getInstance().filterStoppings(inputData);
System.out.println("output : "+output);
Sample link Data
String inputData = "";
String output = ExudeData.getInstance().filterStoppings(inputData);
System.out.println("output : "+output);
Get swear words from data/file/link
String inputData = "enter text with bad words";
String output = ExudeData.getInstance().getSwearWords(inputData);

Library source code on github

Sunday, December 27, 2015

Elastic Search 2.x sample CRUD code

What is ElasticSearch?

Elasticsearch is an open-source, RESTful, distributed search engine built on top of Apache Lucene. Lucene is arguably the most advanced, high-performance, and fully featured search engine library in existence today, both open source and proprietary.
Elasticsearch is also written in Java and uses Lucene internally for all of its indexing and searching, but it aims to make full-text search easy by hiding the complexities of Lucene behind a simple, coherent, RESTful API.

Basic Concept and terminologies:

1. Near Realtime (NRT): Elasticsearch is a near real time search platform, meaning there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.
2. Cluster: A cluster is a collection of one or more nodes (servers) that together holds your entire data and provides federated indexing and search capabilities across all nodes. The default cluster name is "elasticsearch".
3. Node: A node is a single server that is part of your cluster, stores your data, and participates in the cluster's indexing and search capabilities.
4. Index: An index is a collection of documents that have somewhat similar characteristics, i.e. like a database.
5. Type: Within an index, you can define one or more types. A type is a logical category/partition of your index, defined for documents that have a set of common fields, i.e. like a table in a relational database.
6. Document: A document is a basic unit of information that can be indexed, i.e. like a row in a table.

The image below shows how we can relate a relational database to an Elasticsearch index, which makes it easy to understand the Elasticsearch terms and API.
In Elasticsearch, a document belongs to a type, and those types live inside an index. You can draw some (rough) parallels to a traditional relational database:

Relational DB ⇒ Databases ⇒ Tables ⇒ Rows ⇒ Columns
Elasticsearch ⇒ Indices ⇒ Types ⇒ Documents ⇒ Fields

Development: Maven library dependency:

Client: Using the Java client we can perform operations on an Elasticsearch cluster/node:
1. Perform standard index, get, delete and search operations on an existing cluster.
2. Perform administrative tasks on a running cluster.
3. Start full nodes when you want to run Elasticsearch embedded in your own application, or when you want to launch unit or integration tests.

There are two types of client for connecting to a cluster to perform operations:
1. Node client.
2. TransportClient.

Node client: Instantiating a node-based client is the simplest way to get a Client that can execute operations against Elasticsearch.

TransportClient: The TransportClient connects remotely to an Elasticsearch cluster using the transport module. It does not join the cluster, but simply gets one or more initial transport addresses and communicates with them.

Sample Elasticsearch CRUD code, node client and transport client:
Node node  = NodeBuilder.nodeBuilder().clusterName("yourclustername").node();
Client client = node.client();
Settings settings = Settings.settingsBuilder()
                    .put(ElasticConstants.CLUSTER_NAME, cluster).build();
TransportClient transportClient = TransportClient.builder().settings(settings).build().
                    addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port));
Create Index: We can create an IndexRequest, or populate the request with an XContentBuilder, to store a document in the index.
 XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
 Map<String, Object> data = new HashMap<String, Object>();
 data.put("FirstName", "Uttesh");
 data.put("LastName", "Kumar T.H.");
 jsonBuilder.map(data);
public IndexResponse createIndex(String index, String type, String id, XContentBuilder jsonData) {
    try {
        return ElasticSearchUtil.getClient().prepareIndex(index, type, id)
                .setSource(jsonData)
                .execute().actionGet();
    } catch (Exception e) {
        logger.error("createIndex", e);
    }
    return null;
}
Find Document By Index:
public void findDocumentByIndex() {
    GetResponse response = findDocumentByIndex("users", "user", "1");
    Map<String, Object> source = response.getSource();
    System.out.println("Index: " + response.getIndex());
    System.out.println("Type: " + response.getType());
    System.out.println("Id: " + response.getId());
    System.out.println("Version: " + response.getVersion());
    System.out.println("getFields: " + response.getFields());
}

public GetResponse findDocumentByIndex(String index, String type, String id) {
    try {
        return ElasticSearchUtil.getClient().prepareGet(index, type, id).get();
    } catch (Exception e) {
        logger.error("findDocumentByIndex", e);
    }
    return null;
}

Find Document By Value:
public void findDocumentByValue() {
    SearchResponse response = findDocument("users", "user", "LastName", "Kumar T.H.");
    SearchHit[] results = response.getHits().getHits();
    System.out.println("Current results: " + results.length);
    for (SearchHit hit : results) {
        System.out.println("Index: " + hit.getIndex());
        System.out.println("Type: " + hit.getType());
        System.out.println("Id: " + hit.getId());
        System.out.println("Version: " + hit.getVersion());
        Map<String, Object> result = hit.getSource();
    }
    Assert.assertSame(response.getHits().totalHits() > 0, true);
}

public SearchResponse findDocument(String index, String type, String field, String value) {
    try {
        QueryBuilder queryBuilder = new MatchQueryBuilder(field, value);
        return ElasticSearchUtil.getClient().prepareSearch(index)
                .setTypes(type)
                .setQuery(queryBuilder)
                .execute().actionGet();
    } catch (Exception e) {
        logger.error("findDocument", e);
    }
    return null;
}
Update Index:
public void updateDocument() throws IOException {
    XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
    Map<String, Object> data = new HashMap<String, Object>();
    data.put("FirstName", "Uttesh Kumar");
    data.put("LastName", "TEST");
    jsonBuilder.map(data);
    UpdateResponse updateResponse = updateIndex("users", "user", "1", jsonBuilder);
}

public UpdateResponse updateIndex(String index, String type, String id, XContentBuilder jsonData) {
    try {
        return ElasticSearchUtil.getClient().prepareUpdate(index, type, id)
                .setDoc(jsonData)
                .execute().actionGet();
    } catch (Exception e) {
        logger.error("updateIndex", e);
    }
    return null;
}
Remove Index:
public void removeDocument() throws IOException {
    DeleteResponse deleteResponse = elastiSearchService.removeDocument("users", "user", "1");
}

public DeleteResponse removeDocument(String index, String type, String id) {
    try {
        return ElasticSearchUtil.getClient().prepareDelete(index, type, id).execute().actionGet();
    } catch (Exception e) {
        logger.error("removeDocument", e);
    }
    return null;
}
Full sample code is available on GitHub: Download full code

Monday, May 11, 2015

ipc.Client: Retrying connect to server: localhost/ Already tried 0 time(s); retry policy is

15/05/08 01:26:12 INFO ipc.Client: Retrying connect to server: localhost/ Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to localhost/ failed on connection exception: Connection refused

Solution: run the "bin/hadoop namenode -format" command to format the NameNode.

Hadoop Set Up on Ubuntu Linux (Single-Node Cluster)

Running Hadoop on Ubuntu Linux (Single-Node Cluster)

Hadoop is a framework written in Java that incorporates features similar to those of the Google File System (GFS) and the MapReduce computing paradigm.

Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets.

This post gets a simple Hadoop installation up and running so that you can play around with the software and learn more about it.

For Windows OS users who want to learn Hadoop, install VirtualBox along with the Ubuntu OS.

Click here for the virtual box and Ubuntu set-up

After the virtual box with Ubuntu set-up is done, follow below for the hadoop set up.

Step 1. Hadoop requires a working Java 1.5+ installation.
Step 2. Adding a dedicated Hadoop system user.
Step 3. Configuring SSH
Step 4. Disabling IPv6
Step 5. Hadoop Installation

Step 1. Hadoop requires a working Java 1.5+ installation:

Run the following commands for the Sun JDK:

# Update the source list
$ sudo apt-get update

# Install Sun Java 7 JDK
$ sudo apt-get install sun-java7-jdk
We can also install the Oracle JDK manually, or by running the following commands:

$ sudo apt-add-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java7-installer
The full JDK will be placed in /usr/lib/jvm/java-7-* (well, this directory is actually a symlink on Ubuntu).

After installation, check whether JDK is correctly set up:
uttesh@uttesh-VirtualBox:~$ java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

Step 2. Adding a dedicated Hadoop system user: *this step is optional and can be skipped; it only helps to separate the Hadoop installation from other software applications and user accounts running on the same machine.

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser

Step 3. Configuring SSH

Hadoop requires SSH access to manage its nodes. For a single-node setup of Hadoop, we therefore need to configure SSH access to "localhost".

a. Install SSH: ssh is pre-packaged with Ubuntu, but we need to install the package so that the sshd server can be started. Use the following command to install ssh and sshd.

$ sudo apt-get install ssh

Verify installation using following commands.

$ which ssh
## Should print '/usr/bin/ssh'

$ which sshd
## Should print '/usr/bin/sshd'

b. Check if you can ssh to the localhost without a password.

$ ssh localhost

Note that if you try ssh to the localhost without installing ssh first, an error message will be printed saying 'ssh: connect to host localhost port 22: Connection refused'. So be sure to install ssh first.

c. If you cannot SSH to the localhost without a password, create an ssh key pair using the following command:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

d. Now that the key pair has been created, note that id_rsa is the private key and is the public key, both in the .ssh directory. We need to append the new public key to the list of authorized keys using the following command:

$ cat ~/.ssh/ >> ~/.ssh/authorized_keys
uttesh@uttesh-VirtualBox:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/uttesh/.ssh/id_rsa): 
Created directory '/home/uttesh/.ssh'.
Your identification has been saved in /home/uttesh/.ssh/id_rsa.
Your public key has been saved in /home/uttesh/.ssh/
The key fingerprint is:
53:e9:c6:d8:0a:7f:3e:7b:b2:36:2d:6c:df:be:16:7c uttesh@uttesh-VirtualBox
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|           .     |
|          o      |
|         *       |
|      . S =  .   |
|       o +    o E|
|        o...   o |
|         oO o..  |
|         o+X.o+. |
e. Try connecting to localhost again and check that you can now ssh without a password.

$ ssh localhost

If the SSH connection fails, these general tips might help:

Enable debugging with ssh -vvv localhost and investigate the error in detail.

Step 4. Disabling IPv6 :

One problem with IPv6 on Ubuntu is that using for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses of the Ubuntu box. There is no practical point in enabling IPv6 on a box that is not connected to any IPv6 network, hence I simply disabled IPv6 on my Ubuntu machine.

To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
You have to reboot your machine in order to make the changes take effect.

You can check whether IPv6 is enabled on your machine with the following command:

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6

A return value of 0 means IPv6 is enabled, a value of 1 means disabled.

Step 5. Hadoop Installation :

1. Download the latest stable Hadoop release from this hadoop-2.5.1.tar.gz

2. Install Hadoop in /usr/local or any preferred directory. Decompress the downloaded file using the following command.

$ tar -xf hadoop-2.5.1.tar.gz -C /usr/local/

or right click on the file and click extract from UI.

3. Add $HADOOP_PREFIX/bin directory to your PATH, to ensure Hadoop is available from the command line.

Add the following lines to the end of the $HOME/.bashrc file of user. If you use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc.


# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
# Requires installed 'lzop' command.
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin

Standalone Mode
Hadoop by default is configured to run as a single Java process, which runs in a non distributed mode. Standalone mode is usually useful in development phase since it is easy to test and debug. Also, Hadoop daemons are not started in this mode. Since Hadoop's default properties are set to standalone mode and there are no Hadoop daemons to run, there are no additional steps to carry out here.

Pseudo-Distributed Mode
This mode simulates a small scale cluster, with Hadoop daemons running on a local machine. Each Hadoop daemon is run on a separate Java process. Pseudo-Distributed Mode is a special case of Fully distributed mode.

To enable Pseudo-Distributed Mode, you should edit following two XML files. These XML files contain multiple property elements within a single configuration element. Property elements contain name and value elements.

1. etc/hadoop/core-site.xml
2. etc/hadoop/hdfs-site.xml

Edit core-site.xml and modify the following properties. The fs.defaultFS property holds the location of the NameNode.
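A minimal core-site.xml for this setup might look like the sketch below (hdfs://localhost:9000 is the conventional single-node value from the Apache docs; adjust the host/port to your environment):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```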


Edit the hdfs-site.xml and modify the following properties. dfs.replication property holds the number of times each HDFS block should be replicated.
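A minimal hdfs-site.xml for a single-node cluster might look like the sketch below (a replication factor of 1 is the usual choice, since there is only one DataNode):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```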


Configuring the base HDFS directory :
hadoop.tmp.dir property within core-site.xml file holds the location to the base HDFS directory. Note that this property configuration doesn't depend on the mode Hadoop runs on. The default value for hadoop.tmp.dir property is /tmp, and there is a risk that some linux distributions might discard the contents of the /tmp directory in the local file system on each reboot, and leads to data loss within the local file system, hence to be on the safer side, it makes sense to change the location of the base directory to a much reliable one.

Carry out following steps to change the location of the base HDFS directory.

1.Create a directory for Hadoop to store its data locally and change its permissions to be writable by any user.
$ mkdir /var/lib/hadoop
$ chmod 777 /var/lib/hadoop

2.Edit the core-site.xml and modify the following property.
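The property to add, matching the directory created above, might look like the sketch below (merge it into the existing configuration element of core-site.xml):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop</value>
</property>
```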

Formatting the HDFS filesystem

We need to format the HDFS file system, before starting Hadoop cluster in Pseudo-Distributed Mode for the first time. Note that formatting the file system multiple times will result deleting the existing file system data.

Execute the following command on command line to format the HDFS file system.
$ hdfs namenode -format

Starting NameNode daemon and DataNode daemon

$ $HADOOP_HOME/sbin/

Now you can access the name node web interface at http://localhost:50070/.

Saturday, May 9, 2015

Install Ubuntu Linux on Virtual Box

It is always good to have VirtualBox with our required OS installed. If you have a Windows box and want to learn Hadoop, it is good to have VirtualBox with Ubuntu.

Prerequisites :

1. Download and install Virtual box
2. Download Ubuntu ISO from

Installation of VirtualBox is simple and easy. After installing VirtualBox, we will install Ubuntu Linux in a VM.

Create the VM instance for the ubuntu OS.

Click on the "New" menu item in VirtualBox; it will pop up the window shown below. Choose a name for the VM along with the system architecture (32/64-bit) and OS type.

Select the RAM for the system; it is always good to have more than 1 GB of RAM.

Select the hard drive.

Select the disk size for the system; it is always good to have more than 15 GB.

The VM is created, and now we need to install Ubuntu Linux on it.

Run the created VM, or double-click on the created VM instance.

Select the downloaded Ubuntu ISO file.

After some time, the Ubuntu installation window will load.

Click on the install button and follow the Ubuntu installation process; it will take 10-15 minutes.

After the successful installation of Ubuntu, install the Guest Additions for full-screen mode in the VM.

Tuesday, April 28, 2015

Analyzing the application code by using the sonarqube ANT/MAVEN

SonarQube™ software (previously known as "Sonar") is an open source project hosted at Codehaus. With it we can analyze source code, and it is very easy to configure and use.

1. Download and unzip the SonarQube distribution ("C:\sonarqube" or "/etc/sonarqube")

2. Start the SonarQube server: under the bin folder, run the executable for your OS.


3. Browse the results at http://localhost:9000

We will use the embedded database for learning.

sonarqube/conf/ holds the database configuration; by default SonarQube uses the embedded H2 database, which is written in Java.

Application level ANT configuration :

Download the sonar-ant-task jar file download

copy the jar file to /lib folder

add following to existing build.xml file of the application.

<taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
    <classpath path="path/to/sonar-ant-task-*.jar" />
</taskdef>

If you don't want to modify the existing build.xml file, then use the xml file below and run "ant -f analyze-code.xml".

After a successful execution, it will provide the URL to access the result.

For a Maven application, simply run the following command:

mvn clean install sonar:sonar

sample result page :

web service client JAXWS by maven

Generate web service client stub classes by using the JAXWS Maven plugin.

The "jaxws-maven-plugin" generates the web service stub classes; using them we can implement a client or test the web service.

The generated stub classes are stored under the src folder, and through these service classes we can communicate with the service and get the response.

For free web services for learning and client implementation, visit

Take any service and generate the client stub classes.

add the WSDL URL in the pom.xml

    enter the wsdl URL here
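As a sketch, a typical jaxws-maven-plugin configuration looks like the following (the plugin coordinates, the example WSDL URL, and the sourceDestDir value are illustrative assumptions; check the plugin documentation for the version matching your project):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jaxws-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>wsimport</goal>
          </goals>
          <configuration>
            <wsdlUrls>
              <wsdlUrl>http://example.com/service?wsdl</wsdlUrl>
            </wsdlUrls>
            <!-- optional: point the generated sources at the src folder -->
            <sourceDestDir>src/main/java</sourceDestDir>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```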

full sample :

Tuesday, April 14, 2015

JMETER load testing by code/ JMETER API implementation sample by java code

This tutorial attempts to explain the basic design, functionality, and usage of JMeter. JMeter is an excellent tool for performing load testing on an application; using the JMeter GUI we can create test samples for requests according to our requirements and execute the samples with a load of any number of users.
As JMeter is fully developed in Java, we can write Java code to do the same without using the JMeter GUI. It is not advisable to implement load testing in Java code; this is just a proof of concept for writing the samples in Java code using the JMeter libraries.
JMeter has very good documentation/APIs. After going through the JMeter source code and other reference resources, I wrote the following sample code.


Before reading the code, we should have basic knowledge of how JMeter works.
Initially we need to load the JMeter properties, which will be used by the JMeter classes/libraries at a later stage:
//JMeter Engine
StandardJMeterEngine jmeter = new StandardJMeterEngine();
//JMeter initialization (properties, log levels, locale, etc)
JMeterUtils.loadJMeterProperties("/path/to/"); // path to your JMeter installation's properties file
JMeterUtils.initLogging();// you can comment this line out to see extra log messages of i.e. DEBUG level
JMeterUtils.initLocale();

1. Create "Test Plan" Object and JOrphan HashTree

//JMeter Test Plan, basically JOrphan HashTree
HashTree testPlanTree = new HashTree();
// Test Plan
TestPlan testPlan = new TestPlan("Create JMeter Script From Java Code");
testPlan.setProperty(TestElement.TEST_CLASS, TestPlan.class.getName());
testPlan.setProperty(TestElement.GUI_CLASS, TestPlanGui.class.getName());
testPlan.setUserDefinedVariables((Arguments) new ArgumentsPanel().createTestElement());

2. Samplers: Add an "HTTP Sampler" Object

Samplers tell JMeter to send requests to a server and wait for a response. They are processed in the order they appear in the tree. Controllers can be used to modify the number of repetitions of a sampler.
// First HTTP sampler - open example.com
HTTPSamplerProxy examplecomSampler = new HTTPSamplerProxy();
examplecomSampler.setDomain("example.com");
examplecomSampler.setPort(80);
examplecomSampler.setPath("/");
examplecomSampler.setMethod("GET");
examplecomSampler.setProperty(TestElement.TEST_CLASS, HTTPSamplerProxy.class.getName());
examplecomSampler.setProperty(TestElement.GUI_CLASS, HttpTestSampleGui.class.getName());

3. Loop Controller

The Loop Controller executes its samplers as many times as the declared loop iteration count.
// Loop Controller - run the samplers once per thread
LoopController loopController = new LoopController();
loopController.setLoops(1);
loopController.setFirst(true);
loopController.setProperty(TestElement.TEST_CLASS, LoopController.class.getName());
loopController.setProperty(TestElement.GUI_CLASS, LoopControlPanel.class.getName());
loopController.initialize();

4. Thread Group

Thread group elements are the beginning points of any test plan. All controllers and samplers must be under a thread group. Other elements, e.g. Listeners, may be placed directly under the test plan, in which case they will apply to all the thread groups. As the name implies, the thread group element controls the number of threads JMeter will use to execute your test.

// Thread Group - one thread, driven by the loop controller above
ThreadGroup threadGroup = new ThreadGroup();
threadGroup.setName("Sample Thread Group");
threadGroup.setNumThreads(1);
threadGroup.setRampUp(1);
threadGroup.setSamplerController(loopController);
threadGroup.setProperty(TestElement.TEST_CLASS, ThreadGroup.class.getName());
threadGroup.setProperty(TestElement.GUI_CLASS, ThreadGroupGui.class.getName());

5. Add the sampler, controller, etc. to the test plan

// Construct the test plan from the previously initialized elements
HashTree threadGroupHashTree = testPlanTree.add(testPlan, threadGroup);
threadGroupHashTree.add(examplecomSampler);
// Save the generated test plan in JMeter's .jmx file format
SaveService.saveTree(testPlanTree, new FileOutputStream("report\\jmeter_api_sample.jmx"));
The code above saves the test plan we built in code as a JMeter .jmx script.

6. Add summariser and reports

// Add a Summariser to print test progress to stdout, e.g.:
// summary =      2 in   1.3s =    1.5/s Avg:   631 Min:   290 Max:   973 Err:     0 (0.00%)
Summariser summer = null;
String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
    summer = new Summariser(summariserName);
}
// Store execution results in a .jtl file; we can also save a .csv file
String reportFile = "report\\report.jtl";
String csvFile = "report\\report.csv";
ResultCollector logger = new ResultCollector(summer);
logger.setFilename(reportFile);
ResultCollector csvlogger = new ResultCollector(summer);
csvlogger.setFilename(csvFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
testPlanTree.add(testPlanTree.getArray()[0], csvlogger);

Finally, execute the test

// Run the test plan
jmeter.configure(testPlanTree);
jmeter.run();
System.out.println("Test completed. See report\\report.jtl file for results");
System.out.println("JMeter .jmx script is available at report\\jmeter_api_sample.jmx");

The full source code of the POC is available on GitHub: click here
Simple source :

The JMX sample file generated by code, opened in the JMeter UI.

Summary Report generated by code after test execution

Wednesday, April 8, 2015

get byte or memory size of array,list,collections in java

In Java we often come across scenarios where we need to find out how much memory a given list uses.

The ArrayList holds a pointer to a single Object array, which grows as the number of elements exceed the size of the array. The ArrayList's underlying Object array grows by about 50% whenever we run out of space.

ArrayList also writes out the size of the underlying array, used to recreate an identical ArrayList to what was serialized.

Sample code to get the memory size of a collection in bytes:
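As a sketch (the class and method names here are my own; note that the serialized size approximates, but is not identical to, the in-heap footprint), one common approach is to serialize the collection and count the bytes written:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class CollectionSize {

    // Serializes the given object and returns the number of bytes written.
    public static long sizeInBytes(Serializable collection) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(collection);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> list = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            list.add("element-" + i);
        }
        // ArrayList also serializes the size of its underlying array,
        // so the byte count reflects the list's actual contents.
        System.out.println("Serialized size: " + sizeInBytes(list) + " bytes");
    }
}
```

This also illustrates the point above: since ArrayList writes out its element count during serialization, an identical list can be recreated from these bytes.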