Tuesday, September 22, 2015

WSO2 IS authenticator - LinkedIn

1) Create a new LinkedIn app

2) The app will be assigned a Client ID and a Client Secret. In the app settings panel, make sure to set Authorized Redirect URLs to https://localhost:9443/commonauth


3) Build the SSO sample from product-is/modules/samples/sso/sso-agent-sample to get the travelocity WAR file, deploy the WAR file to a web server (e.g., Apache Tomcat), and start the web server.
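The build and deploy steps above can be sketched as follows (the Maven build and the Tomcat location are assumptions; adjust to your environment):

```shell
# Build the SSO agent sample (assumes Maven and a product-is checkout).
cd product-is/modules/samples/sso/sso-agent-sample
mvn clean install

# Deploy the resulting travelocity.com.war to Tomcat (CATALINA_HOME is assumed to be set).
cp target/travelocity.com.war "$CATALINA_HOME/webapps/"

# Start Tomcat.
"$CATALINA_HOME/bin/startup.sh"
```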

4) Download the IS distribution and the service pack from http://wso2.com/products/identity-server/ and apply the service pack to the IS pack.


5) Clone the LinkedIn authenticator connector source from https://github.com/katheesR/is-connectors/tree/master/linkedin, build the connector, and copy the resulting JAR to IS_HOME/repository/components/dropins

6) Create an identity provider from the IS management console.


Now you can see the LinkedIn configuration under the Federated Authenticators section. Enable it and fill in the Client ID, Client Secret, and Callback URL values obtained in step 2.

7) Create a service provider from the IS management console.





8) Export the LinkedIn certificate from the browser by navigating to https://www.linkedin.com/ and place the certificate file in the following location:

IS_HOME/repository/resources/security

Navigate to the above location from a command prompt and execute
'keytool -importcert -file CERT_FILE_NAME -keystore client-truststore.jks -alias "LinkedIn"' to import the LinkedIn certificate into the truststore. Give "wso2carbon" as the password.
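As an alternative to exporting the certificate from the browser, it can be fetched from the command line with OpenSSL; a sketch (the output file name is just an example):

```shell
# Fetch the server certificate presented by www.linkedin.com and
# save it in PEM format for keytool to import.
openssl s_client -connect www.linkedin.com:443 -servername www.linkedin.com </dev/null \
  | openssl x509 -outform PEM > www.linkedin.com.crt
```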

Import the following two certificates:

keytool -importcert -file www.linkedin.com -keystore client-truststore.jks -alias "linkedin"
keytool -importcert -file DigiCertSHA2SecureServerCA -keystore client-truststore.jks -alias "Dig"
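To confirm that the imports succeeded, you can list the truststore entries; a sketch, assuming the default "wso2carbon" store password:

```shell
# List truststore entries and filter for the LinkedIn-related aliases.
keytool -list -keystore client-truststore.jks -storepass wso2carbon | grep -i -e linkedin -e dig
```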


9) Go to the travelocity app at http://localhost:8081/travelocity.com/index.jsp and click the SAML2 redirect login link.


10) The page will be redirected to the LinkedIn authentication page.


If the credentials are valid, you will see the LinkedIn account details.



sample LinkedIn authentication connector code - https://github.com/katheesR/is-connectors/tree/master/linkedin

Sunday, September 20, 2015

Apache Kafka Quickstart

Introduction

Kafka is a distributed, partitioned, replicated commit log service that provides the functionality of a messaging system. Kafka maintains feeds of messages in topics: producers write data to topics and consumers read from them. One common use case for Kafka is as a messaging system, and it offers both queuing and publish-subscribe models.

Kafka command line tool

  • Download the Apache Kafka distribution from here 
   
  • Extract it and go to the Kafka home directory  
    
  • Start the Zookeeper   
         bin/zookeeper-server-start.sh config/zookeeper.properties       

  • Start the Kafka server   
          bin/kafka-server-start.sh config/server.properties

  • Create a topic,
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

  • Run a producer and type some messages.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
         This is a message
         This is another message 

  • Run the consumer; the messages appear in the consumer output:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test  --from-beginning
This is a message
This is another message
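After producing and consuming, it can be handy to inspect the topic with the same CLI tools; a sketch:

```shell
# List all topics registered in ZooKeeper.
bin/kafka-topics.sh --list --zookeeper localhost:2181

# Show the partition count, replication factor and leader for the "test" topic.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
```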


Kafka Multi Broker Cluster Setup

We will create a cluster of 3 Kafka brokers (broker0, broker1 and broker2) whose configurations are based on the default server.properties.

First we make a configuration file for each of the brokers:

The default server properties file is used as-is for broker0.

config/server.properties:
    broker.id=0
    port=9092
    log.dir=/tmp/kafka-logs

Copy the server.properties file to create configurations for broker1 and broker2:

> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties


Edit these new files and set the following properties:

config/server-1.properties:
    broker.id=1
    port=9093
    log.dir=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    port=9094
    log.dir=/tmp/kafka-logs-2

Now we have configured a 3-broker Kafka cluster. Start each Kafka server with the appropriate server properties file.

Broker0
> bin/kafka-server-start.sh config/server.properties

Broker1
> bin/kafka-server-start.sh config/server-1.properties

Broker2
> bin/kafka-server-start.sh config/server-2.properties
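With all three brokers running, you can create a replicated topic and check which broker leads each partition; a sketch (the topic name is just an example):

```shell
# Create a topic replicated across all 3 brokers.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 3 --partitions 1 --topic my-replicated-topic

# Show the leader, replicas and in-sync replicas (ISR) for each partition.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
```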

WSO2 ESB 4.9.0 - Kafka Support


The WSO2 ESB Kafka inbound endpoint consumes messages from Kafka brokers. It supports consuming messages at a configurable speed (polling interval), tenant loading, and coordination.

You can download latest ESB version from http://wso2.com/products/enterprise-service-bus/


Kafka Inbound Use Cases
  1. ESB Kafka Inbound as Queue
  2. ESB Kafka Inbound as Topic
  3. ESB Kafka Inbound consumes from the beginning
  4. ESB Kafka Inbound consumes from multiple topics
  5. ESB Kafka Inbound consumes from a specific server and topic partition

USE CASE 1 : WSO2 ESB Kafka Inbound as Queue
If the consumer instances are all in the same consumer group, this works like a traditional queue: only one Kafka inbound endpoint will consume each message.



Consider the following Kafka inbound configurations (KafkaInboundEP1 and KafkaInboundEP2). Since the group.id parameter value is the same in both configurations, both endpoints are in the same consumer group. In that case, only one of these endpoints will consume each message from the topic.
  
KafkaInboundEP1
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP1"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                suspend="false">
  <parameters>
     <parameter name="interval">100</parameter>
     <parameter name="coordination">true</parameter>
     <parameter name="sequential">true</parameter>
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="consumer.type">highlevel</parameter>
     <parameter name="content.type">application/xml</parameter>
     <parameter name="topics">topic</parameter>
     <parameter name="group.id">consumer-group</parameter>
  </parameters>
</inboundEndpoint>

KafkaInboundEP2
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP2"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                suspend="false">
  <parameters>
     <parameter name="interval">100</parameter>
     <parameter name="coordination">true</parameter>
     <parameter name="sequential">true</parameter>
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="consumer.type">highlevel</parameter>
     <parameter name="content.type">application/xml</parameter>
     <parameter name="topics">topic</parameter>
     <parameter name="group.id">consumer-group</parameter>
  </parameters>
</inboundEndpoint>
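To try this out, publish a few messages to the topic the two endpoints subscribe to; each message should be picked up by only one of them. A sketch using the Kafka console producer (broker address assumed from the earlier setup):

```shell
# Publish test messages to the "topic" topic; with both inbound endpoints
# in the same consumer group, each message is delivered to only one of them.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic
```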

USE CASE 2 : WSO2 ESB Kafka Inbound as Topic
If the consumer instances are in different consumer groups, this works like publish-subscribe, and all messages are broadcast to all consumers.



In the following Kafka inbound configurations, the two Kafka inbound endpoints (KafkaInboundEP1 and KafkaInboundEP2) are in different consumer groups since their group.id parameter values differ. So both inbound endpoints can consume the messages from topic1.

KafkaInboundEP1
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP1"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                suspend="false">
  <parameters>
     <parameter name="interval">100</parameter>
     <parameter name="coordination">true</parameter>
     <parameter name="sequential">true</parameter>
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="consumer.type">highlevel</parameter>
     <parameter name="content.type">application/xml</parameter>
     <parameter name="topics">topic1</parameter>
     <parameter name="group.id">test-group1</parameter>
  </parameters>
</inboundEndpoint>

KafkaInboundEP2
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP2"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                suspend="false">
  <parameters>
     <parameter name="interval">100</parameter>
     <parameter name="coordination">true</parameter>
     <parameter name="sequential">true</parameter>
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="consumer.type">highlevel</parameter>
     <parameter name="content.type">application/xml</parameter>
     <parameter name="topics">topic1</parameter>
     <parameter name="group.id">test-group2</parameter>
  </parameters>
</inboundEndpoint>

USE CASE 3 : WSO2 ESB Kafka inbound as message consumer from the beginning

The Kafka inbound endpoint allows consuming the messages from the beginning. 

The following configuration can be used for this use case. KafkaInboundEP1 consumes messages from topic partition 1 on server1, and KafkaInboundEP2 consumes messages from topic partition 2 on server1. Together they consume all messages on that server from the beginning.

KafkaInboundEP1
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP1"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                interval="1000"
                suspend="false">
  <parameters>   
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="group.id">test-group</parameter>  
     <parameter name="content.type">application/xml</parameter>
     <parameter name="consumer.type">simple</parameter>
     <parameter name="simple.max.messages.to.read">5</parameter>
     <parameter name="simple.topic">topic</parameter>
     <parameter name="simple.brokers">localhost</parameter>
     <parameter name="simple.port">9092</parameter>
     <parameter name="simple.partition">1</parameter>
     <parameter name="interval">100000</parameter>
  </parameters>
</inboundEndpoint>

KafkaInboundEP2
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP2"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                interval="1000"
                suspend="false">
  <parameters>   
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="group.id">test-group</parameter>  
     <parameter name="content.type">application/xml</parameter>
     <parameter name="consumer.type">simple</parameter>
     <parameter name="simple.max.messages.to.read">100000</parameter>
     <parameter name="simple.topic">topic</parameter>
     <parameter name="simple.brokers">localhost</parameter>
     <parameter name="simple.port">9092</parameter>
     <parameter name="simple.partition">2</parameter>
     <parameter name="interval">1000</parameter>
  </parameters>
</inboundEndpoint>

USE CASE 4 : WSO2 ESB Kafka Inbound consuming from more than one topic

A Kafka inbound endpoint can consume messages from more than one topic.

The topics are specified as a comma-separated list in the Kafka inbound endpoint configuration. In the configuration below, KafkaInboundEP1 consumes messages from topic1 and topic2.

KafkaInboundEP1
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP1"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                suspend="false">
  <parameters>
     <parameter name="interval">100</parameter>
     <parameter name="coordination">true</parameter>
     <parameter name="sequential">true</parameter>
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="consumer.type">highlevel</parameter>
     <parameter name="content.type">application/xml</parameter>
     <parameter name="topics">topic1,topic2</parameter>
     <parameter name="group.id">test-group</parameter>
  </parameters>
</inboundEndpoint>

USE CASE 5 : WSO2 ESB Kafka Inbound consuming from a specific server and a specific partition

The Kafka inbound endpoint allows consuming messages from a specific Kafka server and a specific topic partition.

In the following configuration, the messages are consumed from the Kafka server at localhost:9092, topic partition 1.

KafkaInboundEP
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                name="KafkaInboundEP"
                sequence="requestHandlerSeq"
                onError="inFaulte"
                protocol="kafka"
                interval="1000"
                suspend="false">
  <parameters>   
     <parameter name="zookeeper.connect">localhost:2181</parameter>
     <parameter name="group.id">test-group</parameter>  
     <parameter name="content.type">application/xml</parameter>
     <parameter name="consumer.type">simple</parameter>
     <parameter name="simple.max.messages.to.read">100000</parameter>
     <parameter name="simple.topic">topic</parameter>
     <parameter name="simple.brokers">localhost</parameter>
     <parameter name="simple.port">9092</parameter>
     <parameter name="simple.partition">1</parameter>
     <parameter name="interval">1000</parameter>
  </parameters>
</inboundEndpoint>
