Tuesday, August 9, 2016

Validate IP address using Java regex


Step 1. Create a Java class named ValidateIPAddress
Step 2. Write a regex pattern. Learn more about regular expressions at https://docs.oracle.com/javase/tutorial/essential/regex/
public class ValidateIPAddress {

    private static final String PATTERN =
            "^([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\." +
            "([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\." +
            "([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\." +
            "([01]?\\d\\d?|2[0-4]\\d|25[0-5])$";

    public static void main(String[] args) {
        // The input value is hard coded here; it could also be passed in as a program argument
        String ip = "000.12.12.034";
        System.out.println(ip.matches(PATTERN));
    }
}
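To try it out, compile and run the class from a command prompt (assuming the JDK is on the path):

javac ValidateIPAddress.java
java ValidateIPAddress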
Step 3. Description of the regex

1. ^                   # start of line
2. (                   # start of group
3. [01]?\\d\\d?        # one or two digits; if there are three digits, the first must be 0 or 1 (000-199)
4. |                   # or
5. 2[0-4]\\d           # 2, followed by 0-4, followed by any digit (200-249)
6. |                   # or
7. 25[0-5]             # 25, followed by 0-5 (250-255)
8. )                   # end of group
9. \\.                 # followed by a dot "."
10. ...                # the group and the dot are repeated for all four octets (three dots in total)
11. $                  # end of line
 
Step 4. Input:  1. Hello.IP   2. 000.12.12.034
Step 5. Output: 1. false      2. true

Thank you very much for viewing this post.

Monday, July 18, 2016

Getting started with Apache Kafka on a Windows environment. Run Kafka and ZooKeeper on Windows


This post explains how to work with Apache Kafka on a Windows environment, along with ZooKeeper and Java.

Prerequisites
1. Download the latest Java version and install it.
Set up the path variables pointing to where Java is installed.
2. Download the latest ZooKeeper version and install it.
Set up the path variables pointing to where ZooKeeper is installed.
3. Download the latest Apache Kafka version (kafka_2.10-0.10.0.0.tgz) and install it.

ZooKeeper setup

1. Go to the conf directory under the ZooKeeper installation.
2. Rename zoo_sample.cfg to zoo.cfg.
3. Open the zoo.cfg file.
4. Change dataDir=/tmp/zookeeper to dataDir=C:\zookeeper-3.3.6\data

5. Set up the path for ZooKeeper in the environment variables.

6. Open Environment Variables, click System variables, and add the ZooKeeper bin directory (C:\spark\zookeeper-3.3.6\bin) to the path.
7. If required, the default port 2181 can be changed in the zoo.cfg file (see the sample zoo.cfg lines after this list).
8. Run ZooKeeper from the command prompt by executing the zkserver command.
9. The image below shows ZooKeeper after it has started successfully.
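For reference, after the edits above the relevant zoo.cfg entries look roughly like this (dataDir and clientPort as used in this post; tickTime is the value shipped in zoo_sample.cfg):

# zoo.cfg (sample - adjust dataDir to your own installation)
tickTime=2000
dataDir=C:\zookeeper-3.3.6\data
clientPort=2181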


Kafka setup and running Kafka

1. Untar the Kafka download and go to its config directory.
2. Look for server.properties and edit it (see the sample server.properties lines after this list).
3. Change log.dirs=/tmp/kafka-logs to log.dirs=C:\spark\kafka_2.10-0.10.0.0\kafka-logs
4. Now go to the Kafka installation directory and copy the installation path.
5. Open the command prompt and go to the Kafka installation directory:
C:\spark\kafka_2.10-0.10.0.0
6. Execute the below command from the command prompt:
.\bin\windows\kafka-server-start.bat .\config\server.properties

7. Once everything is fine, the Kafka server will start and display output as shown in the image below.
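For reference, the server.properties entries that matter for this setup look roughly like this (broker.id and zookeeper.connect are the shipped defaults; log.dirs is the path edited above):

# server.properties (sample - adjust log.dirs to your own installation)
broker.id=0
log.dirs=C:\spark\kafka_2.10-0.10.0.0\kafka-logs
zookeeper.connect=localhost:2181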




How to create topics

1. Open command prompt and go to C:\spark\kafka_2.10-0.10.0.0\bin\windows
2. Copy the below command and hit enter
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafkatest
         

How to create producer
1. Open command prompt and go to C:\spark\kafka_2.10-0.10.0.0\bin\windows
2. Copy the below command and hit enter
kafka-console-producer.bat --broker-list localhost:9092 --topic kafkatest

How to create consumer
1. Open command prompt and go to C:\spark\kafka_2.10-0.10.0.0\bin\windows
2. Copy the below command and hit enter
kafka-console-consumer.bat --zookeeper localhost:2181 --topic kafkatest
        

Once the producer and consumer are started, we can post messages from the producer and see them reflected in the consumer.
How to replicate data from producer to consumer
1. Type some data in the producer window; the same data will be replicated in the consumer window. A minimal code sketch of the same produce flow is shown below.
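The same flow can also be driven from code instead of the console tools. Below is a minimal, illustrative Scala sketch that uses the Kafka Java client (the kafka-clients jar shipped in the libs folder of the download) to publish one message to the kafkatest topic; the console consumer started above should print whatever it sends. The object name, key and message text are just placeholders.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaTestProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Broker and serializer settings for a local, default installation
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Send a single message to the topic created earlier
    producer.send(new ProducerRecord[String, String]("kafkatest", "key1", "Hello Kafka"))
    producer.close()
  }
}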


More useful commands

1. List all the topics we have created:
kafka-topics.bat --list --zookeeper localhost:2181
2. Describe a particular topic:
kafka-topics.bat --describe --zookeeper localhost:2181 --topic kafkatest
3. Read all messages from a particular topic:
kafka-console-consumer.bat --zookeeper localhost:2181 --topic kafkatest --from-beginning


Thank you very much for viewing this post.

Sunday, July 17, 2016

Spark Closures, Broadcasting, Optimizing and Partitioning


This post explains how to do optimization in Spark and how to work with closures, broadcasting and partitioning.

1. Closures
- A closure is a standalone function that captures at least one variable from its surrounding scope.

var count = 0
val list = 1 to 20
list.foreach(x => {
  count += 1
  println(s"count is currently $count")
})
println(s"Final count is $count")


How do closures behave in Spark?
1. Since Spark is distributed, a variable reference cannot cross node boundaries, so each partition gets its own copy of the variable.

var count = 0
val rdd = sc.makeRDD(1 to 20, 10)
rdd.foreach(x => {
  count += 1
  println(s"count is currently $count")
})
println(s"Final count is $count")


2. The increments happen on the executors, outside the driver, so the final count seen in the driver is not updated.
3. For this we use Spark's built-in accumulators, sketched below.
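A minimal sketch of the same count written with an accumulator, assuming the Spark 1.x sc.accumulator API that matches the other examples in this post:

// Accumulators are Spark's built-in, driver-visible counters
val count = sc.accumulator(0)
val rdd = sc.makeRDD(1 to 20, 10)
rdd.foreach(x => count += 1)               // each executor adds to the accumulator
println(s"Final count is ${count.value}")  // the driver reads the merged value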
2. Broadcasting

val indexer = Map(…) // ~1MB - it will be shipped across the cluster for each execution
rdd.flatMap(rddVal => indexer.get(rddVal))
a. Without broadcasting, the 1 MB map is serialized and shipped with every task sent to the workers, so across the job roughly 10 to 11 MB of data ends up being transferred.
b. To avoid this we use broadcast variables:
val indexer = sc.broadcast(Map(…)) // the ~1MB map is shipped to each worker only once; the indexer handle itself stays small
rdd.flatMap(rddVal => indexer.value.get(rddVal))
3. Optimizing partitioning
a. Make an RDD with a lot of data split into 10000 partitions.
b. Then use a filter that drastically reduces the data set.
c. Then do some more transformations before calling the final collect. The second line below coalesces the tiny filtered result down to 8 partitions first, so the remaining stages do not run 10000 almost-empty tasks.

sc.makeRDD(1 to Int.MaxValue, 10000).filter(x => x < 10).sortBy(x => x).map(x => x + 1).collect
sc.makeRDD(1 to Int.MaxValue, 10000).filter(x => x < 10).coalesce(8, true).sortBy(x => x).map(x => x + 1).collect

We can check the job details in the Spark UI at http://localhost:4040

The images below compare how the job runs with the default partitioning and how it runs with coalesce.


This is how these advanced Spark concepts work.
Thank you very much for viewing this post.
