Spark program to read data from RDBMS

I wanted to figure out how to connect to an RDBMS from Spark and extract data, so I followed these steps. You can download this project from GitHub.
First, I created an Address table in my local MySQL database like this:

CREATE TABLE `address` (
  `addressid` int(11) NOT NULL AUTO_INCREMENT,
  `contactid` int(11) DEFAULT NULL,
  `line1` varchar(300) NOT NULL,
  `city` varchar(50) NOT NULL,
  `state` varchar(50) NOT NULL,
  `zip` varchar(50) NOT NULL,
  `lastmodified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`addressid`),
  KEY `contactid` (`contactid`),
  CONSTRAINT `address_ibfk_1` FOREIGN KEY (`contactid`) REFERENCES `CONTACT` (`contactid`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8;
Then I added 5 sample records to the address table, so that querying it locally returns a handful of rows to work with.
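I have not reproduced the exact rows here, but the sample data looked something like this (the values below are purely illustrative, any five rows will do; contactid is left NULL so the inserts do not depend on rows existing in the CONTACT table):
-- Illustrative sample data only
INSERT INTO address (contactid, line1, city, state, zip) VALUES
  (NULL, '1 Main St', 'San Jose', 'CA', '95101'),
  (NULL, '2 Oak Ave', 'Santa Clara', 'CA', '95050'),
  (NULL, '3 Pine Rd', 'Sunnyvale', 'CA', '94085'),
  (NULL, '4 Elm Blvd', 'Mountain View', 'CA', '94041'),
  (NULL, '5 Lake Dr', 'Palo Alto', 'CA', '94301');
SELECT addressid, contactid, line1, city, state, zip FROM address;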
After that I created a Spark Scala project with mysql-connector-java as one of its dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.spnotes.spark</groupId>
<artifactId>JDBCSpark</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.target>1.7</maven.compiler.target>
<encoding>UTF-8</encoding>
<scala.tools.version>2.10</scala.tools.version>
<scala.version>2.10.4</scala.version>
<spark.version>1.5.2</spark.version>
</properties>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.tools.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_${scala.tools.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.38</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/scala</sourceDirectory>
<plugins>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<version>2.15.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
The last step was to create a simple Spark program like this:
package com.spnotes.spark

import java.sql.{Connection, DriverManager, ResultSet}

import org.apache.spark.rdd.JdbcRDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by sunilpatil on 4/19/16.
  */
object JDBCRDDClient {

  case class Address(addressId: Int, contactId: Int, line1: String, city: String, state: String, zip: String)

  def main(argv: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("HelloJDBC")
    val sparkContext = new SparkContext(sparkConf)
    // Query the first 5 rows of the address table using a single partition
    val jdbcRdd = new JdbcRDD(sparkContext, getConnection,
      "select * from address limit ?,?",
      0, 5, 1, convertToAddress)
    jdbcRdd.foreach(println)
  }

  // Create a JDBC connection to the local MySQL database
  def getConnection(): Connection = {
    Class.forName("com.mysql.jdbc.Driver")
    DriverManager.getConnection("jdbc:mysql://localhost/test1?" + "user=test1&password=test1")
  }

  // Convert the current row of the ResultSet into an Address object
  def convertToAddress(rs: ResultSet): Address = {
    new Address(rs.getInt("addressid"), rs.getInt("contactid"), rs.getString("line1"),
      rs.getString("city"), rs.getString("state"), rs.getString("zip"))
  }
}
My program has 4 main sections:
  1. First is the Address case class, which has the same schema as the Address table minus the lastmodified field
  2. Next is the call that creates a JdbcRDD. The lower bound 0 and upper bound 5 are substituted into the two ? placeholders of the LIMIT clause, and the query runs in a single partition, so it fetches the first 5 rows of the address table: new JdbcRDD(sparkContext, getConnection, "select * from address limit ?,?", 0, 5, 1, convertToAddress)
  3. Then I defined the getConnection() method, which creates a JDBC connection to my database and returns it
  4. Last is the convertToAddress() method, which knows how to take a ResultSet row and convert it into an Address object
When I run this program in the IDE, it prints the five Address records to the console.
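As a side note, if you would rather work with DataFrames than a raw JdbcRDD, the same table can be loaded through Spark SQL's JDBC data source. This is just a sketch against the same test1 database and credentials used above (it is not part of the GitHub project), but it shows the idea:
import org.apache.spark.sql.SQLContext

// Sketch: load the same address table as a DataFrame via the JDBC data source
val sqlContext = new SQLContext(sparkContext)
val addressDF = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://localhost/test1?user=test1&password=test1")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "address")
  .load()
addressDF.show()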

How to implement a cache (LRU Cache) using LinkedHashMap in Java

Recently I wanted to implement a simple least recently used (LRU) cache in one of my applications. My use case was simple enough that, instead of going for something like Ehcache, I decided to build it on my own using java.util.LinkedHashMap.
As you can see from the code, it is very simple. All you have to do is extend java.util.LinkedHashMap and override its protected removeEldestEntry() method so that it returns true whenever the size of the map exceeds the size you specified when creating it; LinkedHashMap then removes the eldest entry automatically.
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<Key, Value> extends LinkedHashMap<Key, Value> {
    private int cacheSize;

    public LRUCache(int cacheSize) {
        // accessOrder = true, so iteration order is least recently accessed first
        super(cacheSize, 0.75f, true);
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Key, Value> eldest) {
        // Evict the eldest entry as soon as the map grows beyond cacheSize
        return size() > cacheSize;
    }

    public static void main(String[] argv) {
        LRUCache<String, String> cache = new LRUCache<String, String>(3);
        // Wrap in a synchronized map if the cache will be shared between threads
        Map<String, String> map = Collections.synchronizedMap(cache);
        cache.put("First", "1");
        cache.put("Second", "2");
        cache.put("Third", "3");
        System.out.println(cache);
        cache.get("First");
        cache.get("First");
        cache.put("Fourth", "4");
        System.out.println(cache);
        cache.put("Fifth", "5");
        System.out.println(cache);
    }
}
Now the question is: when the map is full, which entry will it remove? You have 2 options
  1. Eldest (insertion order): If you just want to remove the first entry that was inserted into the map whenever a new entry is added, use super(cacheSize, 0.75f); in your constructor, so LinkedHashMap won't keep track of when a particular entry was accessed.
  2. Least recently used (LRU): But if you want the entry that was least recently used to be removed, call super(cacheSize, 0.75f, true); from the constructor of your LRUCache, so that LinkedHashMap keeps track of when each entry was accessed and removes the least recently used one. The short sketch after this list makes the difference concrete.
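To make the difference concrete, here is a small sketch of the insertion-order variant (the FIFOCache class below is hypothetical and only exists to contrast with the LRUCache above):
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical insertion-order cache, shown only to contrast with the LRUCache above
public class FIFOCache<Key, Value> extends LinkedHashMap<Key, Value> {
    private final int cacheSize;

    public FIFOCache(int cacheSize) {
        // No accessOrder flag, so iteration (and eviction) order is insertion order
        super(cacheSize, 0.75f);
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Key, Value> eldest) {
        return size() > cacheSize;
    }

    public static void main(String[] args) {
        FIFOCache<String, String> fifo = new FIFOCache<String, String>(3);
        fifo.put("First", "1");
        fifo.put("Second", "2");
        fifo.put("Third", "3");
        fifo.get("First");        // reads do not change insertion order
        fifo.put("Fourth", "4");  // evicts "First", the oldest insert
        System.out.println(fifo); // {Second=2, Third=3, Fourth=4}
        // The LRUCache above, built with accessOrder=true, would have evicted
        // "Second" here instead, because "First" was accessed more recently.
    }
}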

Spark Streaming Kafka 10 API Word Count application Scala

In the Spark Kafka Streaming Java program Word Count using Kafka 0.10 API blog entry, I talked about how to create a simple Java program that uses Spark Streaming's Kafka 0.10 API. This blog entry does the same thing, but in Scala. You can download the complete application from GitHub.
You can run this sample by first downloading Kafka 0.10.* from the Apache Kafka website, then creating and starting a test topic and sending messages to it by following the Kafka quick start document.
package com.spnotes.spark

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}

import scala.collection.mutable

/**
  * Created by sunilpatil on 1/11/17.
  */
object Kafka10 {
  def main(argv: Array[String]): Unit = {
    // Configure Spark to connect to Kafka running on local machine
    val kafkaParam = new mutable.HashMap[String, String]()
    kafkaParam.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    kafkaParam.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    kafkaParam.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    kafkaParam.put(ConsumerConfig.GROUP_ID_CONFIG, "group1")
    kafkaParam.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
    kafkaParam.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")

    val conf = new SparkConf().setMaster("local[2]").setAppName("Kafka10")
    // Read messages in batches of 30 seconds
    val sparkStreamingContext = new StreamingContext(conf, Durations.seconds(30))

    // Configure Spark to listen to messages in topic test
    val topicList = List("test")

    // Read value of each message from Kafka and return it
    val messageStream = KafkaUtils.createDirectStream(sparkStreamingContext,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topicList, kafkaParam))
    val lines = messageStream.map(consumerRecord => consumerRecord.value().asInstanceOf[String])

    // Break every message into words and return list of words
    val words = lines.flatMap(_.split(" "))
    // Take every word and return Tuple with (word,1)
    val wordMap = words.map(word => (word, 1))
    // Count occurrence of each word
    val wordCount = wordMap.reduceByKey((first, second) => first + second)
    // Print the word count
    wordCount.print()

    sparkStreamingContext.start()
    sparkStreamingContext.awaitTermination()
  }
}

Spark Kafka Streaming Java program Word Count using Kafka 0.10 API

The Kafka API went through a lot of changes starting with Kafka 0.9, and the Spark Kafka Streaming API was also changed to better support it. I wanted to try that out, so I built this simple Word Count application using the Kafka 0.10 API. You can download the complete application from GitHub.
You can run this sample by first downloading Kafka 0.10.* from the Apache Kafka website, then creating and starting a test topic and sending messages to it by following the Kafka quick start document. The first thing I did was to include the Kafka 0.10 API dependencies in the Spark project. As you can see, I am using Spark version 2.1:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
Then I created a SparkKafka10.java file that looks like this. Please take a look at the comments inside the code for what each step does. Now if you create the test topic and send messages to it, you should see the word count on the console:
package com.test;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import scala.Tuple2;

import java.util.*;

/**
 * Created by sunilpatil on 1/11/17.
 */
public class SparkKafka10 {
    public static void main(String[] argv) throws Exception {
        // Configure Spark to connect to Kafka running on local machine
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        kafkaParams.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
        kafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        kafkaParams.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);

        // Configure Spark to listen to messages in topic test
        Collection<String> topics = Arrays.asList("test");

        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("SparkKafka10WordCount");
        // Read messages in batches of 30 seconds
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

        // Start reading messages from Kafka and get DStream
        final JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(jssc, LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        // Read value of each message from Kafka and return it
        JavaDStream<String> lines = stream.map(new Function<ConsumerRecord<String, String>, String>() {
            @Override
            public String call(ConsumerRecord<String, String> kafkaRecord) throws Exception {
                return kafkaRecord.value();
            }
        });

        // Break every message into words and return list of words
        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" ")).iterator();
            }
        });

        // Take every word and return Tuple with (word,1)
        JavaPairDStream<String, Integer> wordMap = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<>(word, 1);
            }
        });

        // Count occurrence of each word
        JavaPairDStream<String, Integer> wordCount = wordMap.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer first, Integer second) throws Exception {
                return first + second;
            }
        });

        // Print the word count
        wordCount.print();

        jssc.start();
        jssc.awaitTermination();
    }
}