I will blog about new technologies, issues we face daily while developing web applications, and utility methods that help with code reuse.
By default, MongoDB doesn't allow remote connections. To enable remote access, bind the IP address of the machine in mongod.conf.
On Linux the file is located at /etc/mongod.conf.
# network interfaces
net:
  port: 27017
  bindIp: <ip-address-of-the-machine>  # by default it will be 127.0.0.1
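After editing the file, restart the MongoDB service so the change takes effect. On a systemd-based Linux setup (assumed here) this is typically:

sudo systemctl restart mongod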
`ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:40
TS1005: ',' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:42
TS1139: Type parameter declaration expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:46
TS1109: Expression expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:51
TS1005: ')' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:59
TS1005: ';' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:60
TS1128: Declaration or statement expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:73
TS1005: '(' expected.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:42
TS2532: Object is possibly 'undefined'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:48
TS2304: Cannot find name 'key'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:53
TS2304: Cannot find name 'string'.
ERROR in [at-loader] ./node_modules/@angular/platform-browser/src/browser/transfer_state.d.ts:34:62
TS2693: 'StateKey' only refers to a type, but is being used as a value here.`
Solution:
Update your TypeScript dependency to the latest version.
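A minimal example, assuming npm and a package.json with typescript listed as a dev dependency:

npm install typescript@latest --save-dev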
MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe"
Solution
This error is thrown on Windows machines when building native node modules. To resolve the issue, install the Windows build tools.
Run the command from an administrative CMD or PowerShell prompt!
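One common way to do this (assuming npm is available; the windows-build-tools package installs Python and the Visual C++ build tools):

npm install --global --production windows-build-tools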
Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain.
In the Bitbucket Pipelines or Jenkins console output the build fails with "Could not find or load main class org.gradle.wrapper.GradleWrapperMain".
Solution:
This error is thrown when the gradlew build is unable to find the gradle/wrapper folder that contains the gradle-wrapper.jar file. If the generated gradle-wrapper.jar has not been checked in/committed to Bitbucket, please check in or add the file.
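A sketch of the fix, assuming the jar is being skipped by a .gitignore rule for *.jar files:

git add -f gradle/wrapper/gradle-wrapper.jar gradle/wrapper/gradle-wrapper.properties
git commit -m "Add Gradle wrapper files"
git push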
The Guacamole server is unable to make an RDP connection to a Windows system, and the guacd library throws this exception because the Windows Remote Desktop setting is not enabled.
Enable remote desktop setting:
This will solve the problem; if not, make sure the firewall, antivirus, and Windows Defender are turned off as well, in case they are blocking the connection.
Missing debug key in the .android folder of the Windows system.
Missing SHA-1 key in the Firebase console.
APK not using the generated debug key.
Before going further, first check that the Firebase account, Google console, and system environment are set up properly.
If the Firebase account is not set up, follow these steps:
Go to click here and create a new project.
Copy the config details and use them in the Ionic app.
Go to Authentication, click on SIGN-IN METHOD, and enable the Google provider.
Once Google authentication is enabled, Firebase will automatically generate the web client ID and secret for the app.
Observe the Google note on how to add the SHA-1 key to the Android application. SHA-1 config
Now it's time to choose the application from the Firebase start page, add and configure the app, and add the generated SHA-1 key to the app.
Now the basic setup is done. Assuming you are able to build and install the app on a mobile device but are unable to log in through Google and are getting error 10, check that the configuration below is correct, as mentioned above.
Validate the steps below to solve the error 10 issue.
1. Android key and web client ID not used properly:
There are two types of keys: the Android key and the web client ID. Only if the SHA-1 key generated during environment setup has been added to the app in Firebase will an Android key be generated and added in the Google console.
Go to the Google developer console to get the Android key.
Android key: this key is configured in the config.xml and package.json files as the reversed client ID in the application, and the SHA-1 key configured for this Android key should be used for signing the app's APK file.
Click on the edit icon of the Android key and check whether the same SHA-1 key is configured as in the Firebase app.
config.xml:
package.json:
Web client ID: this key is generated by Firebase on creation of the app, or it can be created manually.
Configure the web client ID with the app and use it in the app code to authenticate the user.
2. Missing debug key in the .android folder of the Windows system:
Make sure you generate the debug/release key and place it in the .android folder under the user directory, and that the same copy is placed under the app folder.
3. Missing SHA-1 key in the Firebase console:
Make sure the same SHA-1 key is used across the Firebase app, the Google console, and the .android folder on the build machine; any mismatch in the SHA-1 key will lead to error 10.
Check that the same package name is configured in Firebase for the app and in the config.xml file.
4. APK not using the generated debug key:
Validate that the APK is generated using the configured debug/release SHA-1 key. Using the command below we can verify that the APK is signed with the proper key.
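For example, with the JDK's keytool (the APK file name below is just a placeholder):

keytool -printcert -jarfile app-debug.apk

To inspect the debug keystore itself (the default Android debug keystore password is "android"):

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android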
If none of the above solved the problem, try the miscellaneous steps below. They are not standard practice or procedure, but they have solved the issue in some odd scenarios.
Miscellaneous steps:
If you are not using Android Studio, try adding your app to Android Studio and configuring it there; it will configure the debug key settings properly and build the APK.
Don't reverse your client_id (i.e. like com.googleusercontent.apps.xxx); use it as it is, as sometimes this solves the issue.
Use the Android key from the Google developer console in code as the web client_id and in config.xml/package.json; this is acceptable and solves the issue in some cases.
For better indexing or searching of data in a big chunk of text, we need to filter the unwanted words from the data, so that only the meaningful words are indexed and search performance improves.
What are stopping words?
Stopping words are the words used to build a sentence along with nouns/verbs, e.g. in "where is my car?" the words "where/is/my" are stopping words, which are not required for the search.
What are stemming words?
Stemming words are words formed by adding a suffix to a base word to make an action word,
e.g. base word + "ing/tion/ational/ization/ation...etc.": going/standing.
I was looking for a library to filter stopping/stemming words, but did not find much on Google, so I went through blogs/white papers and algorithms on stopping/stemming words and started writing a small utility library in Java to do the filtering. It is available on the Maven repository and the full source is on GitHub.
Exude Library
This is a simple library for removing/filtering stopping and stemming words from text data. The library is at a very basic level of development and needs more work for later changes.
It is now part of the Maven repository; add the following directly to your pom.
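The coordinates below are an illustration; verify the exact groupId, artifactId, and latest version on Maven Central before using them:

<dependency>
    <groupId>com.uttesh</groupId>
    <artifactId>exude</artifactId>
    <version>1.0.2</version>
</dependency>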
1. Filter stopping words from given text/file/link
2. Filter stemming words from given text/file/link
3. Get swear words from given text/file/link
How Exude library works:
Step 1: Filter the duplicate words from the input data/file.
Step 2: Filter the stopping words from the step 1 filtered data.
Step 3: Filter the stemmer/swear words from the step 2 filtered data using the Porter algorithm, which is used for suffix stripping.
Exude process sequence flow:
How to use the Exude library
Environment and dependent jar files:
1. Minimum JDK 1.6 or higher
2. Apache Tika jar (used to parse the files for data extraction)
Sample code
Sample Text Data
String inputData = "Kannada is a Southern Dravidian language, and according to Dravidian scholar Sanford Steever, its history can be conventionally divided into three periods; Old Kannada (halegannada) from 450–1200 A.D., Middle Kannada (Nadugannada) from 1200–1700 A.D., and Modern Kannada from 1700 to the present.[20] Kannada is influenced to an appreciable extent by Sanskrit. Influences of other languages such as Prakrit and Pali can also be found in Kannada language.";
String output = ExudeData.getInstance().filterStoppings(inputData);
Output:
extent southern influenced divided according halegannada kannada language three 450 found modern influences periods pali steever a middle d languages old nadugannada dravidian sanford history scholar appreciable 17001200 conventionally sanskrit prakrit present 20
Elasticsearch is an open-source, RESTful, distributed search engine built on top of Apache Lucene. Lucene is arguably the most advanced, high-performance, and fully featured search engine library in existence today, both open source and proprietary.
Elasticsearch is also written in Java and uses Lucene internally for all of its indexing and searching, but it aims to make full-text search easy by hiding the complexities of Lucene behind a simple, coherent, RESTful API.
Basic Concept and terminologies:
1.Near Realtime (NRT)
Elasticsearch is a near real time search platform. What this means is there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.
2.Cluster
A cluster is a collection of one or more nodes (servers) that together holds your entire data and provides federated indexing and search capabilities across all nodes.
The default cluster name is "elasticsearch".
3.Node
A node is a single server that is part of your cluster, stores your data, and participates in the cluster’s indexing and search capabilities.
4.Index
An index is a collection of documents that have somewhat similar characteristics, i.e. like a database.
5.Type
Within an index, you can define one or more types. A type is a logical category/partition of your index and defined for documents that have a set of common fields.
i.e. like a table in a relational database.
6.Document
A document is a basic unit of information that can be indexed, for example a single customer or a single order expressed in JSON.
The image below shows how we can relate a relational database to an Elasticsearch index, which makes it easy to understand the Elasticsearch terms and API.
In Elasticsearch, a document belongs to a type, and those types live inside an index. You can draw some (rough) parallels to a traditional relational database:
Relational DB ⇒ Databases ⇒ Tables ⇒ Rows ⇒ Columns
Elasticsearch ⇒ Indices ⇒ Types ⇒ Documents ⇒ Fields
Client: using the Java client we can perform operations on an Elasticsearch cluster/node.
1.Perform standard index, get, delete and search operations on an existing cluster
2.Perform administrative tasks on a running cluster
3.Start full nodes when you want to run Elasticsearch embedded in your own application or when you want to launch unit or integration tests
There are two types of clients for getting a connection to the cluster to perform operations:
1. Node client
2. Transport client
Node client: instantiating a node-based client is the simplest way to get a Client that can execute operations against Elasticsearch.
TransportClient: the TransportClient connects remotely to an Elasticsearch cluster using the transport module. It does not join the cluster, but simply gets one or more initial transport addresses and communicates with them.
Sample Elasticsearch CRUD code:
Node Client:
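The original node-client snippet is not reproduced here. As an illustration, below is a minimal sketch using the TransportClient instead, assuming an Elasticsearch 5.x cluster named "elasticsearch" listening on the default transport port 9300; the index, type, and field names are placeholders.

import java.net.InetAddress;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class ElasticCrudSample {
    public static void main(String[] args) throws Exception {
        // connect to the cluster (cluster name and address are assumptions)
        Settings settings = Settings.builder()
                .put("cluster.name", "elasticsearch")
                .build();
        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));

        // index (create) a document
        IndexResponse indexResponse = client.prepareIndex("library", "books", "1")
                .setSource(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("title", "Elasticsearch basics")
                        .field("pages", 120)
                        .endObject())
                .get();
        System.out.println("Indexed, version: " + indexResponse.getVersion());

        // get (read) the document back
        GetResponse getResponse = client.prepareGet("library", "books", "1").get();
        System.out.println(getResponse.getSourceAsString());

        // delete the document
        client.prepareDelete("library", "books", "1").get();

        client.close();
    }
}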
Running Hadoop on Ubuntu Linux (Single-Node Cluster)
Hadoop is a framework written in Java that incorporates features similar to those of the Google File System (GFS) and the MapReduce computing paradigm.
Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets.
This is a simple Hadoop installation, up and running so that you can play around with the software and learn more about it.
For Windows OS users who want to learn Hadoop, install VirtualBox along with the Ubuntu OS.
The full JDK will be placed in /usr/lib/jvm/java-6-* (well, this directory is actually a symlink on Ubuntu).
After installation, check whether JDK is correctly set up:
uttesh@uttesh-VirtualBox:~$ java -version
java version "1.7.0_80"Java(TM) SE Runtime Environment (build 1.7.0_80-b15)Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
Step 2. Adding a dedicated Hadoop system user: *this is not required and you can skip it, but it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine.
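A common way to create such a user (the group/user names hadoop and hduser are just conventions, not requirements):

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser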
Step 3. Hadoop requires SSH access to manage its nodes. For a single-node setup of Hadoop, we therefore need to configure SSH access to "localhost".
a. Install SSH: ssh is pre-packaged with Ubuntu, but we need to install the ssh package first to get the sshd server running. Use the following command to install ssh and sshd.
$ sudo apt-get install ssh
Verify installation using following commands.
$ which ssh
## Should print '/usr/bin/ssh'
$ which sshd
## Should print '/usr/sbin/sshd'
b. Check if you can ssh to the localhost without a password.
$ ssh localhost
Note that if you try ssh to the localhost without installing ssh first, an error message will be printed saying 'ssh: connect to host localhost port 22: Connection refused'. So be sure to install ssh first.
c. If you cannot SSH to the localhost without a password, create an ssh key pair using the following command.
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
d. Now the key pair has been created. Note that id_rsa is the private key and id_rsa.pub is the public key; both are in the .ssh directory. We need to add the new public key to the list of authorized keys using the command shown after the key-generation output below.
uttesh@uttesh-VirtualBox:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/uttesh/.ssh/id_rsa):
Created directory '/home/uttesh/.ssh'.
Your identification has been saved in /home/uttesh/.ssh/id_rsa.
Your public key has been saved in /home/uttesh/.ssh/id_rsa.pub.
The key fingerprint is:
53:e9:c6:d8:0a:7f:3e:7b:b2:36:2d:6c:df:be:16:7c uttesh@uttesh-VirtualBox
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|   .             |
|  o              |
| *               |
|. S = .          |
| o + o          E|
| o...  o         |
|  oO o..         |
|  o+X.o+.        |
+-----------------+
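To append the new public key to the list of authorized keys (standard OpenSSH layout assumed):

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys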
e. Try connecting to localhost and check whether you can ssh to the localhost without a password.
$ ssh localhost
If the SSH connection fails, these general tips might help:
Enable debugging with ssh -vvv localhost and investigate the error in detail.
Step 4. Disabling IPv6:
One problem with IPv6 on Ubuntu is that using 0.0.0.0 for the various networking-related Hadoop configuration options results in Hadoop binding to the IPv6 addresses of the Ubuntu box. There is no practical point in enabling IPv6 on a box that is not connected to any IPv6 network, hence I simply disabled IPv6 on my Ubuntu machine.
To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:
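The lines in question are the usual IPv6 sysctl switches (verify the exact keys for your Ubuntu release):

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1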
2. Install Hadoop in /usr/local or any preferred directory. Decompress the downloaded file using the following command.
$ tar -xf hadoop-2.5.1.tar.gz -C /usr/local/
Or right-click on the file and click Extract in the UI.
3. Add the $HADOOP_PREFIX/bin directory to your PATH to ensure Hadoop is available from the command line.
Add the following lines to the end of the user's $HOME/.bashrc file. If you use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc.
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)export JAVA_HOME=/usr/lib/jvm/java-6-sun# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/nullalias fs="hadoop fs"unalias hls &> /dev/nullalias hls="fs -ls"# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command#line; run via:
##$ lzohead /hdfs/path/to/lzop/compressed/file.lzo
## Requires installed 'lzop' command.
#lzohead () { hadoop fs -cat $1 | lzop -dc | head -1000 | less}# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Standalone Mode
Hadoop by default is configured to run as a single Java process, which runs in a non distributed mode. Standalone mode is usually useful in development phase since it is easy to test and debug. Also, Hadoop daemons are not started in this mode. Since Hadoop's default properties are set to standalone mode and there are no Hadoop daemons to run, there are no additional steps to carry out here.
Pseudo-Distributed Mode
This mode simulates a small scale cluster, with Hadoop daemons running on a local machine. Each Hadoop daemon is run on a separate Java process. Pseudo-Distributed Mode is a special case of Fully distributed mode.
To enable Pseudo-Distributed Mode, you should edit following two XML files. These XML files contain multiple property elements within a single configuration element. Property elements contain name and value elements.
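A minimal sketch of the two files for a single-node setup is shown below. The property values are common defaults used as an illustration (not taken from the original post); hadoop.tmp.dir, which is discussed next, also lives in core-site.xml.

core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>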
Configuring the base HDFS directory:
The hadoop.tmp.dir property within the core-site.xml file holds the location of the base HDFS directory. Note that this property configuration doesn't depend on the mode Hadoop runs in. The default value for hadoop.tmp.dir is /tmp, and there is a risk that some Linux distributions discard the contents of the /tmp directory on each reboot, which would lead to data loss within the local file system. To be on the safer side, it makes sense to change the location of the base directory to a more reliable one.
Carry out following steps to change the location of the base HDFS directory.
1.Create a directory for Hadoop to store its data locally and change its permissions to be writable by any user.
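For example (the directory path below is just an illustration, matching the core-site.xml sketch above):

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chmod 777 /app/hadoop/tmp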
We need to format the HDFS file system before starting the Hadoop cluster in Pseudo-Distributed Mode for the first time. Note that formatting the file system multiple times will result in deleting the existing file system data.
Execute the following command on command line to format the HDFS file system.
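The formatting command (assuming the Hadoop bin directory is on the PATH as configured above):

$ hdfs namenode -format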
It is always good to have VirtualBox with the required OS installed. If you have a Windows box and want to learn Hadoop, it is good to have VirtualBox with Ubuntu.
Installation of VirtualBox is simple and easy. After installing VirtualBox, we will install Ubuntu Linux in a VM.
Create the VM instance for the Ubuntu OS.
Click on the "New" menu item in VirtualBox; it will pop up the window shown below. Choose a name for the VM along with the system architecture (32/64 bit) and OS type.
Select the RAM for the system; it is always good to have more than 1 GB of RAM.
Select the hard drive.
Select the disk size for the system; it is always good to have more than 15 GB.
The VM is created, and now we need to install Ubuntu Linux on this VM.
Run the created VM, or double-click on the created VM instance.
Select the downloaded Ubuntu ISO file.
After some time, the Ubuntu installation window will load.
Click on the install button and follow the Ubuntu installation process; it will take 10-15 minutes.
After the successful installation of Ubuntu, install the Guest Additions for full-screen mode of the VM.
SonarQube™ (previously known as "Sonar") is an open source project hosted at Codehaus. Using it we can analyze source code; it is very easy to configure and use.
1. Download and unzip the SonarQube distribution (e.g. "C:\sonarqube" or "/etc/sonarqube").
2. Start the SonarQube server: under the bin folder, run the executable file for your OS.
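For example, with the unzip locations above (the exact folder name under bin depends on your platform and SonarQube version):

# Linux
/etc/sonarqube/bin/linux-x86-64/sonar.sh start

# Windows
C:\sonarqube\bin\windows-x86-64\StartSonar.bat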
This tutorial attempts to explain the basic design, functionality, and usage of JMeter. JMeter is an excellent tool used to perform load testing on an application; using the JMeter GUI we can create test samples for requests according to our requirements and execute the samples with a load of any number of users.
As the JMeter tool is fully developed in Java, we can write Java code to do the same without using the JMeter GUI. It is not advisable to implement load testing in Java code; this is just a proof of concept of writing the samples in Java code using the JMeter libraries.
JMeter has very good documentation/APIs. After going through the JMeter source code and other reference resources, I wrote the following sample code.
Before reading the following code, we must have a basic knowledge of how JMeter works.
Initially we need to load the JMeter properties, which will be used by the JMeter classes/libraries at a later stage in the code.
// JMeter Engine
StandardJMeterEngine jmeter = new StandardJMeterEngine();

// JMeter initialization (properties, log levels, locale, etc.)
JMeterUtils.setJMeterHome(jmeterHome.getPath());
JMeterUtils.loadJMeterProperties(jmeterProperties.getPath());
JMeterUtils.initLogging(); // you can comment this line out to see extra log messages of e.g. DEBUG level
JMeterUtils.initLocale();
1. Create "Test Plan" Object and JOrphan HashTree
// JMeter Test Plan, basically a JOrphan HashTree
HashTree testPlanTree = new HashTree();

// Test Plan
TestPlan testPlan = new TestPlan("Create JMeter Script From Java Code");
testPlan.setProperty(TestElement.TEST_CLASS, TestPlan.class.getName());
testPlan.setProperty(TestElement.GUI_CLASS, TestPlanGui.class.getName());
testPlan.setUserDefinedVariables((Arguments) new ArgumentsPanel().createTestElement());
2. Samplers: Add "HTTP Sampler" Object
Samplers tell JMeter to send requests to a server and wait for a response. They are processed in the order they appear in the tree. Controllers can be used to modify the number of repetitions of a sampler
Thread group elements are the beginning points of any test plan. All controllers and samplers must be under a thread group. Other elements, e.g. Listeners, may be placed directly under the test plan, in which case they will apply to all the thread groups. As the name implies, the thread group element controls the number of threads JMeter will use to execute your test.
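The construction of the sampler and thread group is not shown above; below is a minimal sketch. The variable names examplecomSampler and threadGroup match the snippet that follows, the target domain and thread counts are placeholders, and the classes come from the JMeter core and HTTP components (e.g. org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy, org.apache.jmeter.control.LoopController, org.apache.jmeter.threads.ThreadGroup).

// HTTP Sampler: which request to send
HTTPSamplerProxy examplecomSampler = new HTTPSamplerProxy();
examplecomSampler.setDomain("example.com");
examplecomSampler.setPort(80);
examplecomSampler.setPath("/");
examplecomSampler.setMethod("GET");
examplecomSampler.setName("Open example.com");
examplecomSampler.setProperty(TestElement.TEST_CLASS, HTTPSamplerProxy.class.getName());
examplecomSampler.setProperty(TestElement.GUI_CLASS, HttpTestSampleGui.class.getName());

// Loop Controller: how many times each thread repeats the sampler
LoopController loopController = new LoopController();
loopController.setLoops(1);
loopController.setFirst(true);
loopController.setProperty(TestElement.TEST_CLASS, LoopController.class.getName());
loopController.setProperty(TestElement.GUI_CLASS, LoopControlPanel.class.getName());
loopController.initialize();

// Thread Group: controls the number of concurrent users
ThreadGroup threadGroup = new ThreadGroup();
threadGroup.setName("Example Thread Group");
threadGroup.setNumThreads(1);
threadGroup.setRampUp(1);
threadGroup.setSamplerController(loopController);
threadGroup.setProperty(TestElement.TEST_CLASS, ThreadGroup.class.getName());
threadGroup.setProperty(TestElement.GUI_CLASS, ThreadGroupGui.class.getName());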
// Construct Test Plan from previously initialized elements
testPlanTree.add(testPlan);
HashTree threadGroupHashTree = testPlanTree.add(testPlan, threadGroup);
threadGroupHashTree.add(examplecomSampler);

// save generated test plan to JMeter's .jmx file format
SaveService.saveTree(testPlanTree, new FileOutputStream("report\\jmeter_api_sample.jmx"));
The above code will generate the JMeter script that we built in code.
5. Add Summary and reports
// add Summariser output to get test progress in stdout like:
// summary = 2 in 1.3s = 1.5/s Avg: 631 Min: 290 Max: 973 Err: 0 (0.00%)
Summariser summer = null;
String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
    summer = new Summariser(summariserName);
}

// Store execution results into a .jtl file; we can also save the file as csv
String reportFile = "report\\report.jtl";
String csvFile = "report\\report.csv";
ResultCollector logger = new ResultCollector(summer);
logger.setFilename(reportFile);
ResultCollector csvlogger = new ResultCollector(summer);
csvlogger.setFilename(csvFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
testPlanTree.add(testPlanTree.getArray()[0], csvlogger);
Finally, execute the test.
// Run Test Plan
jmeter.configure(testPlanTree);
jmeter.run();

System.out.println("Test completed. See " + jmeterHome + slash + "report.jtl file for results");
System.out.println("JMeter .jmx script is available at " + jmeterHome + slash + "jmeter_api_sample.jmx");
System.exit(0);
The full source code of the POC is available on GitHub: click here.
Simple source:
Generated JMX sample file from code, opened in the JMeter UI.
Summary report generated by code after the test execution.
In Java, a lot of the time we come across the scenario where we need to find out how much memory is used by a given list.
The ArrayList holds a pointer to a single Object array, which grows as the number of elements exceed the size of the array. The ArrayList's underlying Object array grows by about 50% whenever we run out of space.
ArrayList also writes out the size of the underlying array, used to recreate an identical ArrayList to what was serialized.
Sample code to get the memory size of the collection in bytes:
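The original snippet is not shown here; below is a minimal sketch (not the author's exact code) based on the serialization approach the text describes: it measures how many bytes the list writes out when serialized, which reflects the element data plus the size of the underlying array that ArrayList records.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class ListMemoryDemo {

    // Serialize the object into an in-memory buffer and return the byte count.
    static long serializedSizeInBytes(Serializable object) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(object);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        ArrayList<String> list = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            list.add("element-" + i);
        }
        System.out.println("Serialized size of the list: " + serializedSizeInBytes(list) + " bytes");
    }
}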