
Apache Spark Streaming Example

1. Introduction

This article demonstrates Apache Spark Streaming with a worked example.

Apache Spark was created by Matei Zaharia at UC Berkeley’s AMPLab in 2009 and was open-sourced in 2010 under a BSD license. The project was donated to the Apache Software Foundation in 2013 and became a top-level Apache project in 2014.

Apache Spark is a cluster computing framework used for big data processing with near real-time results. Its key feature is in-memory cluster computing, which delivers greater performance than disk-based processing. It provides a programming interface for working with clusters, and data processing is parallelized and fault-tolerant. Performance-intensive tasks such as batch applications, iterative algorithms, interactive queries, and streaming can all be processed as jobs on Apache Spark.

2. Apache Spark Streaming

2.1 Prerequisites

Java 8 is required on Linux, Windows, or Mac operating systems. Apache Spark 3.0.1 can be downloaded from the Apache website. The example is based on Hadoop 2.7.

2.2 Download

Java 8 can be downloaded from the Oracle website. Apache Maven 3.6.1 can be downloaded from the Apache site. Apache Spark can be downloaded from the Apache website.

2.3 Setup

2.3.1 Java Setup

You can set the JAVA_HOME and PATH environment variables as shown below:

Environment Setup for Java

JAVA_HOME="/desktop/jdk1.8.0_73"
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH

The environment variables for Maven are set as shown below:

Environment Setup for Maven

JAVA_HOME="/jboss/jdk1.8.0_73"
export M2_HOME=/users/bhagvan.kommadi/Desktop/apache-maven-3.6.1
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
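
You can verify both installations from a shell. The version commands below are standard:

verification commands

java -version
mvn --version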

2.3.2 Spark Setup

You need to extract the spark-3.0.1-bin-hadoop2.7.tgz archive after downloading it.
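
On Linux or macOS, the archive can be extracted with the standard tar command, assuming the archive is in the current directory:

extract command

tar -xzf spark-3.0.1-bin-hadoop2.7.tgz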

2.4 Spark Streaming Features

Apache Spark is highly performant: it has been benchmarked at up to 100x faster than Hadoop MapReduce for big data processing. Controlled partitioning is another technique it uses to achieve high performance. Spark has caching capability and can also persist data to disk. It can be deployed using Hadoop’s YARN, Mesos, or Spark’s standalone cluster manager. Spark provides near real-time speed and low latency due to its in-memory processing. It has APIs in different languages such as Java, Python, Scala, and R, and ships with interactive shells for Python and Scala.

2.5 Spark Streaming Operations

Apache Spark is open source and has features related to machine learning, SQL query processing, streaming, and graph processing. It is based on a layered architecture with loosely coupled components. Apache Spark's processing model is built on two core abstractions: the Resilient Distributed Dataset (RDD) and the directed acyclic graph (DAG) of operations.

Resilient Distributed Datasets are immutable, distributed, and fault-tolerant collections whose partitions can be spread across multiple nodes and processed in parallel by executors. They can be used from multiple languages such as Java, Scala, and Python. The directed acyclic graph captures the sequence of operations as a set of tasks: RDDs form the vertices of the graph and operations form its edges, and each operation works on a different part of the dataset.
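
As a minimal illustrative sketch (a standalone class with placeholder names, not part of the example project built later in this article), the code below creates an RDD, applies a lazy filter transformation that only extends the DAG, and then runs a count action that triggers execution:

RDDSketch

package org.javacodegeeks.streaming.sketch;
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RDDSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("RDDSketch");
        try (JavaSparkContext sparkContext = new JavaSparkContext(conf)) {
            // An RDD: an immutable, fault-tolerant collection partitioned across nodes.
            JavaRDD<String> events = sparkContext.parallelize(
                    Arrays.asList("customer-1-mobile", "customer-2-laptop", "customer-3-mobile"));
            // A transformation: lazy, it only adds a vertex and an edge to the DAG.
            JavaRDD<String> mobileEvents = events.filter(event -> event.endsWith("mobile"));
            // An action: triggers execution of the whole DAG.
            System.out.println("mobile events: " + mobileEvents.count());
        }
    }
}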

2.6 Spark Streaming Basics

Apache Spark Streaming happens in four different steps, as shown below:

  • Data is streamed from sources
    • real-time sources such as Kafka, Flume, AWS, Parquet, and Akka
    • static/batch streaming sources
  • Machine learning algorithms are executed on the data using the MLlib API
  • Spark SQL supports different data operations
  • Streaming results are persisted in different data systems such as Cassandra, MemSQL, HBase, Kafka, HDFS, Elasticsearch, and file systems

The StreamingContext is used for registering the input data streams (Discretized Streams) and consuming the data stream from sources such as Akka Actor, ZeroMQ, and Twitter. This context holds a connection to the Spark cluster, and you can use it to create RDDs, broadcast variables, and accumulators. Spark Streaming supports the Discretized Stream (DStream), which is continuous. A DStream consists of a series of RDDs, each holding the data of one interval. This stream of data comes from real-time streaming sources. The data received by the receiver associated with each DStream is persisted in Spark’s memory. Operations on a DStream translate into operations on the underlying RDDs. Output operations send results to external data systems such as file systems and databases. DStreams have features for caching and persisting the data stream in memory, and data is replicated to two different nodes by default for fault tolerance.
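
The example application later in this article only prints the stream, but DStream transformations work the same way as RDD transformations. As an illustrative sketch (again a standalone class, not part of the example project; it assumes the simulator's customer-<n>-<device> events on localhost:8888, as in the example below), the code counts events per device type in each 10-second batch:

DeviceCountSketch

package org.javacodegeeks.streaming.sketch;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

public class DeviceCountSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("DeviceCountSketch");
        JavaStreamingContext streamingContext =
                new JavaStreamingContext(conf, Durations.seconds(10));

        // Each batch interval produces one RDD in the DStream.
        JavaReceiverInputDStream<String> lines =
                streamingContext.socketTextStream("localhost", 8888);
        // Transformations on the DStream apply to each underlying RDD.
        JavaPairDStream<String, Integer> countsPerDevice = lines
                .mapToPair(line -> new Tuple2<>(line.substring(line.lastIndexOf('-') + 1), 1))
                .reduceByKey(Integer::sum);
        countsPerDevice.print();

        streamingContext.start();
        streamingContext.awaitTermination();
    }
}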

Accumulators are variables used for associative and commutative operations such as sums and counters. Spark has built-in support for numeric accumulators. Broadcast variables are read-only variables cached on every machine; they help cut down communication costs. Checkpoints help in restoring state during failures.
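
As a minimal sketch of both shared-variable types (a standalone illustration with placeholder names, not part of the example project), the class below uses a LongAccumulator that tasks add to while only the driver reads the total, and a broadcast variable cached read-only on each machine:

SharedVariablesSketch

package org.javacodegeeks.streaming.sketch;
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.util.LongAccumulator;

public class SharedVariablesSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("SharedVariablesSketch");
        try (JavaSparkContext sparkContext = new JavaSparkContext(conf)) {
            // Numeric accumulator: tasks add to it, only the driver reads the total.
            LongAccumulator mobileCount = sparkContext.sc().longAccumulator("mobileCount");
            // Broadcast variable: read-only value cached once per machine.
            Broadcast<String> device = sparkContext.broadcast("mobile");
            sparkContext.parallelize(Arrays.asList("customer-1-mobile", "customer-2-laptop"))
                    .foreach(event -> {
                        if (event.endsWith(device.value())) {
                            mobileCount.add(1);
                        }
                    });
            System.out.println("mobile event count: " + mobileCount.value());
        }
    }
}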

2.7 Spark Streaming Example

Typical streaming data examples are website browsing clickstreams and ad clickstreams. Other examples are based on AWS Kinesis and Apache Kafka streams. In this example, we look at a simulator that creates a stream of events. Let us start by looking at the EventCreationSimulator.

The EventCreationSimulator class code is shown below:

EventCreationSimulator

package org.javacodegeeks.streaming.simulator;
import java.io.*; 
import java.net.*;
import java.util.Random;
import java.util.concurrent.*;

public class EventCreationSimulator {
    private static final Executor SERVER_EXECUTOR = Executors.newSingleThreadExecutor();
    private static final int PORT = 8888;
    private static final String DELIMITER = "-";
    private static final long EVENT_PERIOD_SECONDS = 1;
    private static final Random random = new Random();

    public static void main(String[] args) throws IOException, InterruptedException {
        BlockingQueue<String> eventQueue = new ArrayBlockingQueue<>(100);
        SERVER_EXECUTOR.execute(new EventStreamingServer(eventQueue));
        while (true) {
            eventQueue.put(createEvent());
            Thread.sleep(TimeUnit.SECONDS.toMillis(EVENT_PERIOD_SECONDS));
        }
    }

    private static String createEvent() {
        int customerNumber = random.nextInt(20);
        String event = random.nextBoolean() ? "mobile" : "laptop";
        return String.format("customer-%s", customerNumber) + DELIMITER + event;
    }

    private static class EventStreamingServer implements Runnable {
        private final BlockingQueue<String> eventQueue;

        public EventStreamingServer(BlockingQueue<String> eventQueue) {
            this.eventQueue = eventQueue;
        }

        @Override
        public void run() {
            try (ServerSocket serverSocket = new ServerSocket(PORT);
                 Socket clientSocket = serverSocket.accept();
                 PrintWriter outWriter = new PrintWriter(clientSocket.getOutputStream(), true);
            ) {
                while (true) {
                    String event = eventQueue.take();
                    System.out.println(String.format("outputing \"%s\" to the socket.", event));
                    outWriter.println(event);
                }
            } catch (IOException|InterruptedException exception) {
                throw new RuntimeException("Run Time error", exception);
            }
        }
    }
}

2.7.1 Local Execution

Now let us look at the Spark Streaming application. It connects to the server started by EventCreationSimulator.java. The BasicStreamingApplication class reads the data and logs what has been received every 10 seconds.

The BasicStreamingApplication class code is shown below:

BasicStreamingApplication

package org.javacodegeeks.streaming.app;
import org.apache.log4j.*;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;

public class BasicStreamingApplication {
    private static final String HOST = "localhost";
    private static final int PORT = 8888;

    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("BasicStreaming");
        JavaStreamingContext streamingContext =
                new JavaStreamingContext(conf, Durations.seconds(10));
        Logger.getRootLogger().setLevel(Level.ERROR);

        JavaReceiverInputDStream<String> lines = streamingContext.socketTextStream(HOST, PORT);
        lines.print();

        streamingContext.start();
        streamingContext.awaitTermination();
    }
}

The command below builds the project:

build command

mvn package

The output of the executed command is shown below.

output

apples-MacBook-Air:java bhagvan.kommadi$ mvn package
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.javacodegeeks:spark-streaming-example:jar:1.0
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-compiler-plugin is missing. @ line 13, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] 
[INFO] ---------------------------
[INFO] Building spark-streaming-example 1.0
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for commons-codec:commons-codec:jar:1.15-SNAPSHOT is missing, no dependency information available
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ spark-streaming-example ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /Users/bhagvan.kommadi/Desktop/JavacodeGeeks/Code/sparkstreamingexample/java/src/main/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ spark-streaming-example ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 3 source files to /Users/bhagvan.kommadi/Desktop/JavacodeGeeks/Code/sparkstreamingexample/java/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ spark-streaming-example ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /Users/bhagvan.kommadi/Desktop/JavacodeGeeks/Code/sparkstreamingexample/java/src/test/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ spark-streaming-example ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ spark-streaming-example ---
[INFO] No tests to run.
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ spark-streaming-example ---
[INFO] Building jar: /Users/bhagvan.kommadi/Desktop/JavacodeGeeks/Code/sparkstreamingexample/java/target/spark-streaming-example-1.0.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  6.333 s
[INFO] Finished at: 2020-12-17T20:00:34+05:30
[INFO] ------------------------------------------------------------------------
apples-MacBook-Air:java bhagvan.kommadi$

The command below starts the EventCreationSimulator:

start command for EventCreationSimulator

mvn exec:java -Dexec.mainClass=org.javacodegeeks.streaming.simulator.EventCreationSimulator

The output of the executed command is shown below.

Output

apples-MacBook-Air:java bhagvan.kommadi$ mvn exec:java -Dexec.mainClass=org.javacodegeeks.streaming.simulator.EventCreationSimulator
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.javacodegeeks:spark-streaming-example:jar:1.0
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-compiler-plugin is missing. @ line 13, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] 
[INFO] ---------------------------
[INFO] Building spark-streaming-example 1.0
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for commons-codec:commons-codec:jar:1.15-SNAPSHOT is missing, no dependency information available
[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ spark-streaming-example ---
outputing "customer-19-mobile" to the socket.
outputing "customer-6-mobile" to the socket.
outputing "customer-15-laptop" to the socket.
outputing "customer-4-laptop" to the socket.
outputing "customer-13-laptop" to the socket.
outputing "customer-17-laptop" to the socket.
outputing "customer-10-laptop" to the socket.
outputing "customer-19-mobile" to the socket.
outputing "customer-16-laptop" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-11-laptop" to the socket.
outputing "customer-4-laptop" to the socket.
outputing "customer-17-mobile" to the socket.
outputing "customer-10-laptop" to the socket.
outputing "customer-15-mobile" to the socket.
outputing "customer-8-mobile" to the socket.
outputing "customer-4-laptop" to the socket.
outputing "customer-14-mobile" to the socket.
outputing "customer-9-mobile" to the socket.
outputing "customer-17-laptop" to the socket.
outputing "customer-7-laptop" to the socket.
outputing "customer-12-laptop" to the socket.
outputing "customer-4-mobile" to the socket.
outputing "customer-8-mobile" to the socket.
outputing "customer-9-laptop" to the socket.
outputing "customer-10-mobile" to the socket.
outputing "customer-6-laptop" to the socket.
outputing "customer-2-mobile" to the socket.
outputing "customer-12-mobile" to the socket.
outputing "customer-0-mobile" to the socket.
outputing "customer-7-mobile" to the socket.
outputing "customer-6-laptop" to the socket.
outputing "customer-11-laptop" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-13-mobile" to the socket.
outputing "customer-4-laptop" to the socket.
outputing "customer-12-mobile" to the socket.
outputing "customer-10-laptop" to the socket.
outputing "customer-15-mobile" to the socket.
outputing "customer-0-mobile" to the socket.
outputing "customer-10-mobile" to the socket.
outputing "customer-12-laptop" to the socket.
outputing "customer-16-laptop" to the socket.
outputing "customer-3-mobile" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-11-laptop" to the socket.
outputing "customer-1-laptop" to the socket.
outputing "customer-5-mobile" to the socket.
outputing "customer-12-laptop" to the socket.
outputing "customer-15-laptop" to the socket.
outputing "customer-16-mobile" to the socket.
outputing "customer-16-mobile" to the socket.
outputing "customer-8-mobile" to the socket.
outputing "customer-18-mobile" to the socket.
outputing "customer-5-laptop" to the socket.
outputing "customer-3-mobile" to the socket.
outputing "customer-4-laptop" to the socket.
outputing "customer-6-laptop" to the socket.
outputing "customer-0-laptop" to the socket.
outputing "customer-4-mobile" to the socket.
outputing "customer-9-mobile" to the socket.
outputing "customer-14-mobile" to the socket.
outputing "customer-12-laptop" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-19-laptop" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-5-laptop" to the socket.
outputing "customer-15-mobile" to the socket.
outputing "customer-15-laptop" to the socket.
outputing "customer-17-mobile" to the socket.
outputing "customer-18-laptop" to the socket.
outputing "customer-17-mobile" to the socket.
outputing "customer-17-mobile" to the socket.
outputing "customer-10-mobile" to the socket.
outputing "customer-16-laptop" to the socket.
outputing "customer-13-laptop" to the socket.
outputing "customer-3-mobile" to the socket.
outputing "customer-5-mobile" to the socket.
outputing "customer-8-laptop" to the socket.
outputing "customer-9-mobile" to the socket.
outputing "customer-16-laptop" to the socket.
outputing "customer-14-mobile" to the socket.
outputing "customer-5-laptop" to the socket.
outputing "customer-15-laptop" to the socket.
outputing "customer-17-mobile" to the socket.
outputing "customer-6-mobile" to the socket.
outputing "customer-15-mobile" to the socket.
outputing "customer-9-laptop" to the socket.
outputing "customer-11-laptop" to the socket.
apples-MacBook-Air:java bhagvan.kommadi$

The command below starts the BasicStreamingApplication:

start command for BasicStreamingApplication

mvn exec:java -Dexec.mainClass=org.javacodegeeks.streaming.app.BasicStreamingApplication

The output of the executed command is shown below.

Output

apples-MacBook-Air:java bhagvan.kommadi$ mvn exec:java -Dexec.mainClass=org.javacodegeeks.streaming.app.BasicStreamingApplication
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.javacodegeeks:spark-streaming-example:jar:1.0
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-compiler-plugin is missing. @ line 13, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] 
[INFO] ---------------------------
[INFO] Building spark-streaming-example 1.0
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for commons-codec:commons-codec:jar:1.15-SNAPSHOT is missing, no dependency information available
[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ spark-streaming-example ---
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/12/17 20:06:32 WARN Utils: Your hostname, apples-MacBook-Air.local resolves to a loopback address: 127.0.0.1; using 192.168.1.9 instead (on interface en0)
20/12/17 20:06:33 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/12/17 20:07:03 INFO SparkContext: Running Spark version 2.3.0
20/12/17 20:07:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/12/17 20:07:04 INFO SparkContext: Submitted application: BasicStreaming
20/12/17 20:07:04 INFO SecurityManager: Changing view acls to: bhagvan.kommadi
20/12/17 20:07:04 INFO SecurityManager: Changing modify acls to: bhagvan.kommadi
20/12/17 20:07:04 INFO SecurityManager: Changing view acls groups to: 
20/12/17 20:07:04 INFO SecurityManager: Changing modify acls groups to: 
20/12/17 20:07:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(bhagvan.kommadi); groups with view permissions: Set(); users  with modify permissions: Set(bhagvan.kommadi); groups with modify permissions: Set()
20/12/17 20:07:05 INFO Utils: Successfully started service 'sparkDriver' on port 54935.
20/12/17 20:07:05 INFO SparkEnv: Registering MapOutputTracker
20/12/17 20:07:05 INFO SparkEnv: Registering BlockManagerMaster
20/12/17 20:07:05 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/12/17 20:07:05 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/12/17 20:07:05 INFO DiskBlockManager: Created local directory at /private/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/blockmgr-7ea1adbf-a452-4404-abfd-a77b71f752f5
20/12/17 20:07:05 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/12/17 20:07:05 INFO SparkEnv: Registering OutputCommitCoordinator
20/12/17 20:07:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/12/17 20:07:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.9:4040
20/12/17 20:07:06 INFO Executor: Starting executor ID driver on host localhost
20/12/17 20:07:06 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54938.
20/12/17 20:07:06 INFO NettyBlockTransferService: Server created on 192.168.1.9:54938
20/12/17 20:07:06 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/12/17 20:07:06 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.9, 54938, None)
20/12/17 20:07:06 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.9:54938 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.9, 54938, None)
20/12/17 20:07:06 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.9, 54938, None)
20/12/17 20:07:06 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.9, 54938, None)
-------------------------------------------
Time: 1608215830000 ms
-------------------------------------------
customer-19-mobile
customer-6-mobile
customer-15-laptop
customer-4-laptop
customer-13-laptop
customer-17-laptop
customer-10-laptop
customer-19-mobile
customer-16-laptop
customer-8-laptop
...

-------------------------------------------
Time: 1608215835000 ms
-------------------------------------------
customer-5-mobile
customer-8-laptop
customer-9-mobile
customer-16-laptop
customer-14-mobile

2.7.2 Execution on Apache Spark

Now let us look at how to run the app on Apache Spark. To run the Spark Streaming application on Apache Spark, you can use the code below:

BasicStreamingSparkApplication

package org.javacodegeeks.streaming.app;
import org.apache.log4j.*;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;

public class BasicStreamingSparkApplication {
    private static final String HOST = "localhost";
    private static final int PORT = 8888;

    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setAppName("BasicStreamingSparkApp");
        JavaStreamingContext streamingContext =
                new JavaStreamingContext(conf, Durations.seconds(10));
        Logger.getRootLogger().setLevel(Level.ERROR);

        JavaReceiverInputDStream<String> lines = streamingContext.socketTextStream(HOST, PORT);
        lines.print();

        streamingContext.start();
        streamingContext.awaitTermination();
    }
}

The command below starts the BasicStreamingSparkApplication on Apache Spark:

start command for BasicStreamingSparkApplication

/users/bhagvan.kommadi/downloads/spark-3.0.1-bin-hadoop2.7/bin/spark-submit --class org.javacodegeeks.streaming.app.BasicStreamingSparkApplication target/spark-streaming-example-1.0.jar

The output of the executed command is shown below.

Output

apples-MacBook-Air:java bhagvan.kommadi$ /users/bhagvan.kommadi/downloads/spark-3.0.1-bin-hadoop2.7/bin/spark-submit --class org.javacodegeeks.streaming.app.BasicStreamingSparkApplication target/spark-streaming-example-1.0.jar 
20/12/17 20:13:16 WARN Utils: Your hostname, apples-MacBook-Air.local resolves to a loopback address: 127.0.0.1; using 192.168.1.9 instead (on interface en0)
20/12/17 20:13:16 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/12/17 20:13:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/12/17 20:13:49 INFO SparkContext: Running Spark version 3.0.1
20/12/17 20:13:49 INFO ResourceUtils: ==============================================================
20/12/17 20:13:49 INFO ResourceUtils: Resources for spark.driver:

20/12/17 20:13:49 INFO ResourceUtils: ==============================================================
20/12/17 20:13:49 INFO SparkContext: Submitted application: BasicStreamingSparkApp
20/12/17 20:13:50 INFO SecurityManager: Changing view acls to: bhagvan.kommadi
20/12/17 20:13:50 INFO SecurityManager: Changing modify acls to: bhagvan.kommadi
20/12/17 20:13:50 INFO SecurityManager: Changing view acls groups to: 
20/12/17 20:13:50 INFO SecurityManager: Changing modify acls groups to: 
20/12/17 20:13:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(bhagvan.kommadi); groups with view permissions: Set(); users  with modify permissions: Set(bhagvan.kommadi); groups with modify permissions: Set()
20/12/17 20:13:51 INFO Utils: Successfully started service 'sparkDriver' on port 55029.
20/12/17 20:13:51 INFO SparkEnv: Registering MapOutputTracker
20/12/17 20:13:51 INFO SparkEnv: Registering BlockManagerMaster
20/12/17 20:13:51 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/12/17 20:13:51 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/12/17 20:13:51 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/12/17 20:13:51 INFO DiskBlockManager: Created local directory at /private/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/blockmgr-d64f47c7-a269-469a-9dea-be15a08ecd2e
20/12/17 20:13:51 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
20/12/17 20:13:51 INFO SparkEnv: Registering OutputCommitCoordinator
20/12/17 20:13:52 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/12/17 20:13:52 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.9:4040
20/12/17 20:13:52 INFO SparkContext: Added JAR file:/Users/bhagvan.kommadi/Desktop/JavacodeGeeks/Code/sparkstreamingexample/java/target/spark-streaming-example-1.0.jar at spark://192.168.1.9:55029/jars/spark-streaming-example-1.0.jar with timestamp 1608216232770
20/12/17 20:13:53 INFO Executor: Starting executor ID driver on host 192.168.1.9
20/12/17 20:13:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55030.
20/12/17 20:13:53 INFO NettyBlockTransferService: Server created on 192.168.1.9:55030
20/12/17 20:13:53 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/12/17 20:13:53 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.9, 55030, None)
20/12/17 20:13:53 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.9:55030 with 366.3 MiB RAM, BlockManagerId(driver, 192.168.1.9, 55030, None)
20/12/17 20:13:53 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.9, 55030, None)
20/12/17 20:13:53 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.9, 55030, None)
-------------------------------------------
Time: 1608216235000 ms
-------------------------------------------

-------------------------------------------
Time: 1608216240000 ms
-------------------------------------------
customer-9-mobile
customer-1-laptop
customer-7-mobile
customer-18-mobile
customer-1-laptop
customer-6-mobile
customer-9-laptop
customer-12-laptop
customer-17-laptop
customer-16-mobile
...

-------------------------------------------
Time: 1608216245000 ms
-------------------------------------------
customer-0-mobile
customer-15-mobile
customer-14-laptop
customer-2-laptop
customer-12-mobile

3. Download the Source Code

You can download the full source code of this example here: Apache Spark Streaming Example

Bhagvan Kommadi

Bhagvan Kommadi is the Founder of Architect Corner and has around 20 years’ experience in the industry, ranging from large-scale enterprise development to helping incubate software product start-ups. He holds a Masters in Industrial Systems Engineering from Georgia Institute of Technology (1997) and a Bachelors in Aerospace Engineering from Indian Institute of Technology, Madras (1993). He is a member of the IFX forum and Oracle JCP, and a participant in the Java Community Process. He founded Quantica Computacao, the first quantum computing startup in India; Markets and Markets has positioned Quantica Computacao in the ‘Emerging Companies’ section of its Quantum Computing quadrants. Bhagvan has engineered and developed simulators and tools in the area of quantum technology using IBM Q, Microsoft Q# and Google QScript. He has reviewed the Manning book "Machine Learning with TensorFlow" and is the author of the Packt Publishing book "Hands-On Data Structures and Algorithms with Go". He is a member of the MIT Technology Review Global Panel.