Monitor your firewall with ELK – all open source

We all know that real-time monitoring of applications is critical to the overall operation of your infrastructure. As always in the IT world there are hundreds of ways to solve a single problem, and most of them are, in one way or another, a suitable way to solve it. Here I’ll show you one way that is entirely based on open source.


To accomplish this you need a logging server hosting the ELK stack. ELK stands for Elasticsearch, Logstash and Kibana. These three components help you collect logs, analyze them and store them in a database that is searchable from a web GUI. The power of the tools lies mostly in the visualization layer; we all know how hard it can be to crawl through gigabytes of logs to spot errors, or indications of errors, and correlate them over time. Kibana does all that work for you.

As you can imagine this is useful in a number of scenarios. My first ELK task was to crawl through hundreds of gigabytes of webserver logs, covering a period of six months, to find unusual traffic patterns. That’s nothing you simply accomplish with a log parser 🙂

To get started you would need a few tools:

  • A Linux server: RedHat, CentOS, Ubuntu, most distributions will work as the ELK stack is written in Java. Most guides on the net are based on CentOS or Ubuntu, so if either is already used in your organization you would prefer it; otherwise, CentOS is very similar to RedHat.
  • Lots of disk. As we all know logs take space, and an ELK platform is no different. To make it somewhat worse you also need fast disk: if possible, SSDs should be the target here, or if you run ELK on a virtual platform (the optimal scenario) you should go with a SAN volume with as many spindles as you can get, since latency is more important than raw throughput here. (A sketch of pointing Elasticsearch at a dedicated data volume follows this list.)
  • Think big. For a Proof of Concept implementation you will not have a lot of considerations to take into account, but bear in mind that if you start pouring in logs from various sources and your PoC is a success, you might not want to go back and import those logs all over again. It’s also a good idea to define the scope from the beginning in terms of number of sources. As this post focuses on firewall logs, remember that a firewall can have several hundred thousand active sessions, and each one generates a log entry when it is initiated or closed.
  • Sources, obviously. In this post we will focus on firewalls and have picked PFSense 2.2 and Juniper SRX, both popular firewalls at home and in the enterprise world. There are however several guides online for how to set up Logstash configs for Cisco, Palo Alto and others.
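If you do mount a dedicated volume for the log data, Elasticsearch can be pointed at it once it is installed (the installation steps follow below). A minimal sketch, assuming the volume is mounted at /data/elasticsearch, a path picked purely for illustration:

# give the elasticsearch user ownership of the data volume
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch

# then, in /etc/elasticsearch/elasticsearch.yml, point the data path at it:
path.data: /data/elasticsearch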

So in our example we will use two different sources: one based on the open source firewall PFSense, and one virtual Juniper firewall, the vSRX (previously called Firefly Perimeter). In both cases we will rely on syslog for the log shipment from our firewalls.
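To give a flavour of the shipping side (part 2 covers it properly): on the vSRX, remote syslog is set up from the Junos CLI, roughly like the sketch below, where 192.168.1.10 is a placeholder for the ELK server’s address.

set system syslog host 192.168.1.10 any any
commit

On PFSense the equivalent lives in the web GUI, under Status > System Logs > Settings, where you enable remote logging and point it at the same address.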

The installation can easily be done by following a few simple steps (steps originally from here):

  • Create your basic Ubuntu 14.04 server, with at least 4 GB of RAM and a minimum of 2 vCPUs, or physical with at least two cores
  • Consider using SSDs if you will ship a lot of logs, as previously mentioned.
  • Set your server’s hostname, configure IP addresses, DNS etc. (a sketch follows below)
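On Ubuntu 14.04 that last step can look roughly like this (elk01 and the addresses are placeholders):

# set the hostname (takes full effect after a reboot)
echo 'elk01' > /etc/hostname
hostname elk01

# give the server a static address by editing /etc/network/interfaces, e.g.:
# auto eth0
# iface eth0 inet static
#     address 192.168.1.10
#     netmask 255.255.255.0
#     gateway 192.168.1.1
#     dns-nameservers 192.168.1.1
nano /etc/network/interfaces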

The first thing to do is install Java, as ELK relies heavily on it. We want to use Oracle Java, not the open source build, so there are a few steps to follow. In these examples we are not including sudo, as it’s up to you to run the commands with admin rights; we are also picking nano as the editor, but that too is entirely up to you.

Add the Oracle Java PPA to apt

add-apt-repository -y ppa:webupd8team/java

Update your apt package database

apt-get update

Install the latest stable version of Oracle Java 8 (you need to accept its license agreement)

apt-get -y install oracle-java8-installer
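The installer stops to ask about the license. If you would rather script this, the webupd8 installer can be told to accept it up front via debconf; run this before the install command above, and verify afterwards that Java is in place:

# pre-accept the Oracle license so apt-get runs unattended
echo 'oracle-java8-installer shared/accepted-oracle-license-v1-1 select true' | debconf-set-selections

# confirm the installation
java -version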

Import the Elasticsearch public GPG key into apt

wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -

Create the Elasticsearch source list

echo 'deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main' | tee /etc/apt/sources.list.d/elasticsearch.list

Update your apt package database

apt-get update

Install Elasticsearch

apt-get -y install elasticsearch=1.4.4

Let’s edit the configuration

nano /etc/elasticsearch/elasticsearch.yml

You will want to restrict outside access to your Elasticsearch instance (port 9200) so outsiders can’t read your data or shut down your Elasticsearch cluster. Find the line that specifies network.host, uncomment it, and replace its value with “localhost” so it looks like this:

network.host: localhost

Save and exit elasticsearch.yml.

Now start Elasticsearch and make it auto-start

service elasticsearch restart
update-rc.d elasticsearch defaults 95 10
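At this point Elasticsearch should answer locally with a small JSON blob describing the node, which makes for a quick sanity check (apt-get install curl first if it’s missing):

curl http://localhost:9200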

Download Kibana 4 to your home directory and unpack it with the following commands (this is based on the 4.1 release)

cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz

tar xvf kibana-*.tar.gz

Open the Kibana configuration file for editing

nano ~/kibana-4*/config/kibana.yml

In the Kibana configuration file, find the line that specifies host, and replace the IP address (“0.0.0.0” by default) with “localhost”

host: "localhost"

Save and exit. This setting makes it so Kibana will only be accessible from localhost.

Let’s copy the Kibana files to a more appropriate location.
Create the /opt/kibana directory with the following command

mkdir -p /opt/kibana

Now copy the Kibana files into your newly-created directory

cp -R ~/kibana-4*/* /opt/kibana/

Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

cd /etc/init.d && wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4

Now enable the Kibana service, and start it

chmod +x /etc/init.d/kibana4
update-rc.d kibana4 defaults 96 9
service kibana4 start
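A quick check that Kibana came up and is bound to localhost only, as configured above:

netstat -plnt | grep 5601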

Before we can use the Kibana web interface, we have to set up a reverse proxy. Let’s do that now, with Nginx.

apt-get install nginx apache2-utils

Now open the Nginx default server block

nano /etc/nginx/sites-available/default

Delete the file’s contents, and paste the following code block into the file. Be sure to update the server_name to match your server’s name:

server {
    listen 80;

    server_name example.com;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Save and exit. This configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601.
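Since the server now fronts Kibana over plain HTTP, it can be a good idea to put basic auth on it if the network isn’t fully trusted. The apache2-utils package we installed provides htpasswd for exactly this; a sketch, with kibanaadmin as an example username:

# create a password file with one user (prompts for the password)
htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

Then add these two lines inside the location block above, and run nginx -t to validate the file before restarting:

auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;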

Now restart Nginx to put our changes into effect

service nginx restart

Kibana is now accessible via your FQDN or the public IP address of your ELK server, i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page asking you to configure an index pattern. Let’s get back to that later, after we install the other components.

The Logstash package is available from the same repository as Elasticsearch so let’s create the Logstash source list

echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | tee /etc/apt/sources.list.d/logstash.list

Update your apt package database

apt-get update

Install Logstash with this command

apt-get install logstash

Logstash is installed but it is not configured yet.

Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
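To make the three sections concrete, here is a minimal skeleton of what such a file could look like: a syslog listener on UDP port 5140 (an arbitrary unprivileged port chosen for illustration), an empty filter section, and everything forwarded to the local Elasticsearch instance. Treat it as a sketch; the real firewall parsing comes in part 2.

# /etc/logstash/conf.d/10-syslog.conf (example file name)
input {
  udp {
    port => 5140            # firewalls point their syslog output here
    type => "syslog"
  }
}
filter {
  # grok patterns for the PFSense and SRX logs go here in part 2
}
output {
  elasticsearch {
    host => "localhost"     # Logstash 1.5 option name; newer versions use "hosts"
  }
}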

In part 2 we will configure our firewalls to ship logs with syslog and create a Logstash config that reads the logs and ships them to Elasticsearch, which in turn makes them available for Kibana to present. Stay tuned.

Thanks for reading.
