More Effective Java With Joshua Bloch

Many of us already know and agree on how great the book “Effective Java” by Joshua Bloch is; it’s a must-read for every Java developer out there, whether you have just started or have been working for a while. While reading the book and researching some of the Items listed in it, I came across this interview with Joshua Bloch at Oracle,

Effective Java: Joshua Bloch

in which he speaks about some of the great things in the book and shares his knowledge on several great topics in the language. This should be a good read for anyone interested in exploring more while reading the book or afterwards –

Here is the link –

http://www.oracle.com/technetwork/articles/java/bloch-effective-08-qa-140880.html


Also take a look at –

What can I learn right now in just 10 minutes that could be useful for the rest of my life?

Answer by Vishnu Haridas:

This one I discovered recently: if you have a pair of unusable headphones, don’t throw them away. You can cut and remove the wire, and use the TRS jack as an FM antenna for your smartphone.

All you need to do is plug this TRS jack into your phone’s headphone socket, open the FM radio app, and start listening through your loudspeaker.

How it works: the headphone wire acts as the FM antenna for mobile phones. FM transmissions usually have a very strong signal, so only a small piece of wire is needed to receive them.


How-To: Generate RESTful API Documentation with Swagger?

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
– Martin Fowler

What is Swagger?

Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The goal of Swagger is to enable client and documentation systems to update at the same pace as the server. The documentation of methods, parameters, and models is tightly integrated into the server code, allowing APIs to always stay in sync.

Why is Swagger useful?

The framework simultaneously solves server, client, and documentation/sandbox needs.

With Swagger’s declarative resource specification, clients can understand and consume services without knowledge of server implementation or access to the server code. The Swagger UI framework allows both developers and non-developers to interact with the API in a sandbox UI that gives clear insight into how the API responds to parameters and options.

It happily speaks both JSON and XML, with additional formats in the works.

Now let’s see a working example of how to configure Swagger to generate API documentation for a sample REST API created using Spring Boot.

How to Enable Swagger in Your Spring Boot Web Application?

If you are one of those lazy people who hate reading through configurations, download the complete working example here; otherwise, read on –

Step 1: Include the Swagger-SpringMVC dependency in Maven
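A minimal sketch of what that dependency block can look like, assuming the com.mangofactory swagger-springmvc artifact; the version shown is illustrative and may differ from the one used in the sample project:

```xml
<!-- Swagger-SpringMVC (illustrative coordinates; check Maven Central for the version you need) -->
<dependency>
    <groupId>com.mangofactory</groupId>
    <artifactId>swagger-springmvc</artifactId>
    <version>1.0.2</version>
</dependency>
```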

Step 2: Create the Swagger Java Configuration

  • Use the @EnableSwagger annotation.
  • Autowire SpringSwaggerConfig.
  • Define one or more SwaggerSpringMvcPlugin instances using Spring's @Bean annotation (a sketch follows the gist below).
[gist https://gist.github.com/saurzcode/9dcee7110707ff996784/]
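For reference, here is a minimal sketch of such a configuration class, using the class names from the swagger-springmvc library; the exact code in the gist above may differ, and the include pattern and API title are illustrative:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.mangofactory.swagger.configuration.SpringSwaggerConfig;
import com.mangofactory.swagger.plugin.EnableSwagger;
import com.mangofactory.swagger.plugin.SwaggerSpringMvcPlugin;
import com.wordnik.swagger.model.ApiInfo;

@Configuration
@EnableSwagger // switches on the Swagger-SpringMVC framework
public class SwaggerConfig {

    private SpringSwaggerConfig springSwaggerConfig;

    // SpringSwaggerConfig is provided by the framework and autowired in
    @Autowired
    public void setSpringSwaggerConfig(SpringSwaggerConfig springSwaggerConfig) {
        this.springSwaggerConfig = springSwaggerConfig;
    }

    // One SwaggerSpringMvcPlugin per logical API grouping
    @Bean
    public SwaggerSpringMvcPlugin customImplementation() {
        return new SwaggerSpringMvcPlugin(this.springSwaggerConfig)
                .apiInfo(apiInfo())
                .includePatterns("/api/.*"); // document only endpoints under /api (illustrative)
    }

    private ApiInfo apiInfo() {
        // title, description, terms-of-service URL, contact, license, license URL
        return new ApiInfo("Sample REST API", "API documentation generated with Swagger",
                "", "", "", "");
    }
}
```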

Step 3: Create the Swagger UI using a WebJar

To use the WebJar, add the following repository and dependency, which will auto-configure the Swagger UI for you.
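As a sketch, the dependency looks roughly like this, assuming the org.webjars swagger-ui artifact (the version is illustrative, and the repository entry used in the original example is not shown here):

```xml
<!-- Swagger UI packaged as a WebJar (illustrative version; pick one available in your repository) -->
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>2.1.4</version>
</dependency>
```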

 

That’s it. Now run Application.java as a Java application in your IDE, and you will see the application running in an embedded Tomcat/Jetty server at the default port 8080.

Verify the API configuration by pointing your browser at – http://localhost:8080/api-docs

And finally, you can see the Swagger API docs and test the APIs at http://localhost:8080/index.html

Swagger: API Docs for the Spring Boot RESTful API

 

Also, please note that the default URL in the WebJar files is http://petstore.swagger.wordnik.com/api/api-docs, so you might see an error like this: "Can't read from server. It may not have the appropriate access-control-origin settings." Solution: just replace the URL [http://petstore.swagger.wordnik.com/api/api-docs] on screen with [http://localhost:8080/api-docs] and you will see the UI as above.

 

Again, the complete project is available on GitHub.

https://github.com/saurzcode/saurzcode-swagger-spring/


Do write back in the comments if you face any issues or concerns!


You may also like:

 

How-To: Set up Realtime Analytics over Logs with the ELK Stack: Elasticsearch, Logstash, Kibana?

Once we know something, we find it hard to imagine what it was like not to know it.

– Chip & Dan Heath, Authors of Made to Stick, Switch

 

What is the ELK Stack?

The ELK stack consists of the open-source tools Elasticsearch, Logstash, and Kibana. Together, these three provide a fully working real-time data analytics pipeline for extracting useful information from your data.

ElasticSearch

Elasticsearch, built on top of Apache Lucene, is a search engine with a focus on real-time analysis of data, exposed through a RESTful architecture. It provides standard full-text search functionality as well as powerful query-based search. Elasticsearch is document-oriented: you can store everything you want as JSON, which makes it powerful, simple, and flexible.
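To get a feel for the document-oriented, RESTful model, here is a quick illustration (assuming an Elasticsearch node on the default local port 9200; the index, type, and field names are made up):

```sh
# Index a JSON document into the "logs" index under the "event" type
curl -XPUT 'http://localhost:9200/logs/event/1' -d '{
  "timestamp": "2015-06-01T10:15:30",
  "level": "INFO",
  "message": "order placed"
}'

# Full-text / query-based search over what was just indexed
curl 'http://localhost:9200/logs/_search?q=level:INFO'
```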

Logstash

Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use. In the ELK stack, Logstash plays the important role of shipping and parsing the logs so that they can later be indexed into Elasticsearch.

Kibana

Kibana is a user-friendly way to view, search, and visualize your log data. It presents the data that Logstash has stored in Elasticsearch in a highly customizable interface, with histograms and other panels that provide real-time analysis and search over the data you have parsed into Elasticsearch.

How do I get it?

https://www.elastic.co/downloads

How do they work together?

Logstash is essentially a pipelining tool. In a basic, centralized installation, a Logstash agent known as the shipper reads input from one or more input sources and outputs that text, wrapped in a JSON message, to a broker. The broker, typically Redis, caches the messages until another Logstash agent, known as the collector, picks them up and sends them to another output. In the common example this output is Elasticsearch, where the messages are indexed and stored for searching. The Elasticsearch store is accessed via the Kibana web application, which allows you to visualize and search through the logs. The entire system is scalable: many different shippers may be running on many different hosts, watching log files and shipping the messages off to a cluster of brokers, and many collectors can be reading those messages and writing them to an Elasticsearch cluster.

Realtime Analytics for Logs using the ELK Stack: (E)lasticsearch, (L)ogstash, (K)ibana
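As a rough sketch of that shipper/broker/collector layout, the two Logstash agents might be configured along these lines (host names, file paths, and the Redis key are illustrative, and option names follow the Logstash 1.x plugins):

```conf
# shipper.conf - runs on each application host: tails log files and pushes events to Redis
input {
  file {
    path => "/var/log/myapp/*.log"      # illustrative path
  }
}
output {
  redis {
    host      => "broker.example.com"   # the Redis broker
    data_type => "list"
    key       => "logstash"
  }
}

# collector.conf - pulls events from Redis and indexes them into Elasticsearch
input {
  redis {
    host      => "broker.example.com"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    host => "es.example.com"            # Elasticsearch node
  }
}
```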

How do I fetch useful information out of logs?

Fetching useful information from logs is one of the most important parts of this stack, and it is done in Logstash using grok filters and a rich set of input, filter, and output plugins. These plugins let you take various kinds of inputs (file, tcp, udp, gemfire, stdin, unix sockets, websockets, even IRC and Twitter, and many more), filter them (grok, grep, date filters, etc.), and finally write the output to Elasticsearch, Redis, email, HTTP, MongoDB, Gemfire, Jira, Google Cloud Storage, and so on.

A bit more about Logstash

Realtime Analytics over Logs using ELK Stack

Filters 

Transforming the logs as they go through the pipeline is possible as well using filters. Either on the shipper or collector, whichever suits your needs better. As an example, an Apache HTTP log entry can have each element (request, response code, response size, etc) parsed out into individual fields so they can be searched on more seamlessly. Information can be dropped if it isn’t important. Sensitive data can be masked. Messages can be tagged. The list goes on.

e.g.
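A configuration along those lines might look as follows (reconstructed from the description below; the log file path is illustrative and the original embedded snippet may have differed slightly):

```conf
# Read an Apache access log, parse each line into named fields, and print the result to the console
input {
  file {
    path => "/var/log/apache2/access.log"   # illustrative path
  }
}
filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]   # splits request, response code, size, etc. into fields
  }
}
output {
  stdout {
    codec => rubydebug   # pretty-prints each parsed event
  }
}
```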

 

The above example takes input from an Apache log file, applies a grok filter with %{COMBINEDAPACHELOG} (which indexes the Apache log information into individual fields), and finally writes the output to the standard output console.

Writing Grok Filters

Writing grok filters and extracting the information is the one task that requires some serious effort, but if done properly it will give you great insights into your data, such as the number of transactions performed over time, which types of products get the most hits, and so on. A tiny illustration follows.
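As a purely hypothetical example, a custom grok filter for an application log line such as "2015-06-01 10:15:30 INFO order=1234 amount=49.99" (a made-up format) could be written like this:

```conf
filter {
  grok {
    # Field names (loglevel, orderid, amount) are invented for this illustration
    match => [ "message",
      "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} order=%{NUMBER:orderid} amount=%{NUMBER:amount}" ]
  }
}
```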

The links below will help you a lot in writing grok filters and testing them with ease –

Grok Debugger

Grok Debugger is a wonderful tool for testing your grok patterns before using them in your Logstash filters.

http://grokdebug.herokuapp.com/

Grok Patterns Lookup

You can look up grok patterns for various commonly used log formats here –

https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns

If you like this post, you will love reading my book on the ELK stack – https://www.packtpub.com/big-data-and-business-intelligence/learning-elk-stack. The book covers all the basics of Elasticsearch, Logstash, and Kibana 4 to get you started with the ELK stack. Please find more details of the book here.



Related Articles:

Hadoop Certification

Getting Started with Apache Pig

Hadoop Reading List

 
