How-To: Write a Kafka Producer Using a Twitter Stream (Twitter HBC Client)

Twitter open-sourced its Hosebird client (hbc), a robust Java HTTP library for consuming Twitter’s Streaming API. In this post, I am going to present a demo of how we can use hbc to create a Kafka producer that tracks a few terms in Twitter statuses and publishes them as a Kafka stream. That stream can then be consumed downstream, for example to count the tracked terms, to feed a Kafka-Storm pipeline, or to move the data from Kafka to HDFS (as we will see in the next post about the Camus API).

You can download and run a complete sample here.

Requirements

  • Apache Kafka 2.6.0
  • Twitter developer account (for the API key, secret, etc.)
  • Apache ZooKeeper (required by Kafka)
  • Oracle JDK 1.8 (64-bit)

Build Environment

  • Eclipse
  • Apache Maven 2/3

How to Generate Twitter API Keys Using Developer Account

  1. Go to https://dev.twitter.com/apps/new and log in, if necessary.
  2. Enter your Application Name, Description, and your website address. You can leave the callback URL empty.
  3. Accept the TOS.
  4. Submit the form by clicking the Create your Twitter Application button.
  5. Copy the consumer key (API key) and consumer secret from the screen into your application.
  6. After creating your Twitter Application, you have to grant it access to your Twitter account. To do this, click the Create my Access Token button.
  7. Now you will have the Consumer Key, Consumer Secret, Access Token, and Access Token Secret to be used in the Streaming API calls.

Steps to Run the Sample

1. Start the ZooKeeper server bundled with Kafka using the following script from your Kafka installation folder –

$bin/zookeeper-server-start.sh config/zookeeper.properties &

and verify that it is running on the default port 2181 using –

$netstat -anlp | grep 2181

2. Start the Kafka server using the following script –

$bin/kafka-server-start.sh config/server.properties  &

and verify that it is running on the default port 9092 using –

$netstat -anlp | grep 9092

If you are on a Mac and have Homebrew installed, both can be done with simple brew commands –

$brew install kafka   # this internally installs zookeeper too

$brew services start zookeeper

$kafka-server-start  /usr/local/etc/kafka/server.properties

3. Create the topic –

$bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic twitter-topic

4. Validate the topic –

$bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic twitter-topic

5. Now that Kafka is running and ready to accept messages on the topic we just created, we will write a Kafka producer that uses the hbc client API to get the Twitter stream for the tracked terms and publishes the statuses to the topic named “twitter-topic”.

  • First, we need to add the Maven dependency for the latest version of hbc-core, along with the Kafka client dependency –

<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>hbc-core</artifactId> <!-- or hbc-twitter4j -->
    <version>2.2.0</version> <!-- or whatever the latest version is -->
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.6.0</version>
</dependency>

  • Then, we need to set the properties to configure our Kafka producer to publish messages to the topic (minimal sketches of the getProducer() helper and the TwitterKafkaConfig holder referenced in these snippets appear after this walkthrough) –

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, TwitterKafkaConfig.SERVERS);
properties.put(ProducerConfig.ACKS_CONFIG, "1");
properties.put(ProducerConfig.LINGER_MS_CONFIG, 500);
properties.put(ProducerConfig.RETRIES_CONFIG, 0);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
  • Set up a StatusesFilterEndpoint, which registers the terms to be tracked in recent status messages, as in the example –

StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
endpoint.trackTerms(Lists.newArrayList(term));
  • Provide the OAuth authentication parameters we generated earlier (this program reads them from command-line parameters, so don’t forget to pass those as VM arguments when you run it in the IDE) and create the client using the endpoint and the auth –

// hbc hands each raw JSON status to this queue via the processor below;
// the capacity of 10000 is an arbitrary choice
BlockingQueue<String> queue = new LinkedBlockingQueue<>(10000);
Authentication auth = new OAuth1(consumerKey, consumerSecret, token, secret);
Client client = new ClientBuilder().hosts(Constants.STREAM_HOST)
        .endpoint(endpoint).authentication(auth)
        .processor(new StringDelimitedProcessor(queue)).build();
  • As the last step, connect the client, fetch messages from the queue, and send them through the Kafka producer –

client.connect();
try (Producer<Long, String> producer = getProducer()) {
    while (true) {
        ProducerRecord<Long, String> message = new ProducerRecord<>(TwitterKafkaConfig.TOPIC, queue.take());
        producer.send(message);
    }
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    client.stop();
}
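
The snippets above lean on two helpers that ship with the sample: the TwitterKafkaConfig holder and the getProducer() method. Here is a minimal sketch of both, assuming the broker address and topic name from the setup steps; the downloadable sample is the authoritative version –

public final class TwitterKafkaConfig {
    // Hypothetical values matching the setup steps above; the actual sample
    // may read these from arguments or a config file instead
    public static final String SERVERS = "localhost:9092";
    public static final String TOPIC = "twitter-topic";
}

private static Producer<Long, String> getProducer() {
    // Builds the producer configuration shown earlier; KafkaProducer is
    // AutoCloseable, which is why the last step uses try-with-resources
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, TwitterKafkaConfig.SERVERS);
    properties.put(ProducerConfig.ACKS_CONFIG, "1");
    properties.put(ProducerConfig.LINGER_MS_CONFIG, 500);
    properties.put(ProducerConfig.RETRIES_CONFIG, 0);
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    return new KafkaProducer<>(properties);
}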

To run the complete example, run the TwitterKafkaProducer.java class as a Java application in your favorite IDE, and don’t forget to pass the arguments with your API keys and terms. Read the detailed instructions here.
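
Once the producer is running, you can check that tweets are landing on the topic with the console consumer that ships with Kafka –

$bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic twitter-topic --from-beginning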

Also, to see how you can integrate Kafka with HDFS using Camus from LinkedIn, you can visit the blog here.

Happy Learning!!


Interesting Reads –

Multithreaded Mappers in Mapreduce
