Wednesday, September 18, 2013

Allow host with dynamic ip through iptables firewall

I have a telnet-enabled service which doesn't have any form of authentication. Naturally I don't want to expose it to the public internet - so it's firewalled. Now I want to allow my own host to access it, but my host has a dynamic IP address. iptables only supports static IPs and IP ranges - and for my use case I only want to allow a single host.

My solution is to update the iptables rules from cron, so that only the IP currently found at my dynamic DNS name is allowed through the firewall.
#!/bin/sh
# dnsallow.sh - allow the current IP behind a dynamic DNS name through iptables
HOSTNAME=$1
IP=$(host "$HOSTNAME" | grep -oiE "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" | head -n 1)

# Bail out on failed lookups instead of flushing the rules to nothing
if [ -z "$IP" ]; then
  echo "Could not resolve $HOSTNAME" >&2
  exit 1
fi

# If the chain for the remote host doesn't exist, create it
if ! /sbin/iptables -L "$HOSTNAME" -n >/dev/null 2>&1 ; then
  /sbin/iptables -N "$HOSTNAME" >/dev/null 2>&1
fi

# Flush old rules, and add the new one
/sbin/iptables -F "$HOSTNAME"
/sbin/iptables -I "$HOSTNAME" -s "$IP" -j ACCEPT

# Add the chain to the INPUT filter if it isn't referenced yet
if ! /sbin/iptables -C INPUT -t filter -j "$HOSTNAME" >/dev/null 2>&1 ; then
  /sbin/iptables -t filter -I INPUT -j "$HOSTNAME"
fi

Example usage: 
./dnsallow.sh my.dynamic.dns.com
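To keep the rule fresh, run the script from cron. A crontab entry along these lines does the trick (the five-minute interval and the script path are just examples):

```
# /etc/cron.d/dnsallow - refresh the allowed IP every 5 minutes (example schedule/path)
*/5 * * * * root /usr/local/bin/dnsallow.sh my.dynamic.dns.com
```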

Thursday, March 14, 2013

Bind9 failover using Heartbeat and virtual IP

A Bind9 master-slave setup is designed for failover. However, when the primary DNS server listed in /etc/resolv.conf is down, the resolver waits through a 5 second timeout by default before giving up and trying the secondary server. This is experienced as everything being slow.

To avoid this 5 second lag, a virtual IP can be used to simply move the primary IP over to the secondary server.

There's however one problem with this: Bind9 doesn't really listen on 0.0.0.0, as it looks up the actual interface IPs when loading its configuration (even with listen-on { any; } set) - so an IP that moves onto the box later isn't picked up without a reload.

To solve this, the following configuration handles it quite nicely (thanks to Christoph Berg):
# cat /etc/ha.d/haresources
server01 bind9release IPaddr::10.0.0.3 bind9takeover
# cat /etc/ha.d/resource.d/bind9release
#!/bin/sh
# when giving up resources, reload bind9
case $1 in
        stop) /etc/init.d/bind9 reload ;;
esac
exit 0
# cat /etc/ha.d/resource.d/bind9takeover
#!/bin/sh
# on takeover, reload bind9
case $1 in
        start) /etc/init.d/bind9 reload ;;
esac
exit 0
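With the virtual IP in place, clients simply keep it first in /etc/resolv.conf, so the primary entry follows whichever server currently holds the IP. A sketch, using 10.0.0.3 as in the haresources example above (the second entry is just a placeholder for any additional resolver):

```
# /etc/resolv.conf on the clients
nameserver 10.0.0.3    # virtual IP - follows the active Bind9 server
nameserver 10.0.0.2    # optional: a static secondary as last resort
```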

Wednesday, February 6, 2013

Installing Kibana under Passenger for Apache in Ubuntu 12.04

Kibana is a very nice frontend for Logstash gathered logs sent to ElasticSearch. It's likely to be merged with Logstash at some point, but at the moment it's installed on its own.

Kibana now conforms to Rack conventions, which makes it easy to deploy using Passenger. At the time of writing the downloadable zip file (v0.2.0) does not, so you need to fetch it from GitHub instead.

Install dependencies:
apt-get install libapache2-mod-passenger apache2 ruby ruby1.8-dev libopenssl-ruby rubygems git -y
Get Kibana and the rubygems it needs:
cd /var/www
git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git --depth=1 kibana
cd kibana
gem install bundler
bundle install
Set up a virtualhost for Apache:
<VirtualHost *:80>
    ServerName kibana
    DocumentRoot /var/www/kibana/public
    <Directory /var/www/kibana/public>
        Allow from all
        Options -MultiViews
    </Directory>
</VirtualHost>
Then just restart Apache, and browse to your newly installed Kibana!
Notes:
The Elasticsearch server is assumed to be at localhost:9200 by default. If that's not the case for your setup, the setting is found in /var/www/kibana/KibanaConfig.rb
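For reference, the Elasticsearch setting lives in a Ruby constant, roughly like this (the exact contents vary between Kibana revisions, so treat this as a sketch):

```ruby
# /var/www/kibana/KibanaConfig.rb (excerpt)
module KibanaConfig
  # Your elasticsearch server, host:port
  Elasticsearch = "elasticsearch.example.com:9200"
end
```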

Monday, February 4, 2013

Cisco VOIP devices' syslog into Logstash

This howto explains how to retrieve and parse logs from Cisco video and VOIP equipment.

In particular:
  • Cisco Telepresence Video Communications Server (VCS), formerly Tandberg Video Communications Server (VCS) version X7.2.2
  • Cisco TelePresence Server MSE 8710 version 3.0(2.24)
  • Cisco Unified Communications Manager (CUCM), formerly Cisco Unified CallManager version 9.1.1
  • Cisco Unity Connection version 8.6.1
I'm utilizing Logstash 1.1.9 installed on Ubuntu 12.04, which is set up to read all config files in /etc/logstash - thus I've split my config into several files.

VCS and TelePresence Server


Logstash parsing is straightforward, as these devices utilize the legacy BSD syslog format by default.

To enable remote syslog:
  • VCS:  Maintenance -> Logging
  • MSE: Log -> Syslog

Then on the Logstash server, I'm using the following config to parse it:

# /etc/logstash/syslog.conf
input {
  # The default syslog port 514 requires root privileges, as ports < 1024 are privileged
  tcp {
    port => 514
    type => syslog_relay
    tags => [ "syslog" ]
  }
  udp {
    port => 514
    type => syslog_relay
    tags => [ "syslog" ]
  }
}

filter {
  # strip the syslog PRI part and create facility and severity fields.
  # the original syslog message is saved in field %{syslog_raw_message}.
  # the extracted PRI is available in the %{syslog_pri} field.
  #
  # You get %{syslog_facility_code} and %{syslog_severity_code} fields.
  # You also get %{syslog_facility} and %{syslog_severity} fields if the
  # use_labels option is set True (the default) on syslog_pri filter.
  grok {
    type => "syslog_relay"
    pattern => [ "^<[1-9]\d{0,2}>%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_pri"
    add_field => [ "syslog_raw_message", "%{@message}" ]
  }
  syslog_pri {
    type => "syslog_relay"
    tags => [ "got_syslog_pri" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_pri" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_pri"
  }

  # strip the syslog timestamp and force event timestamp to be the same.
  # the original string is saved in field %{syslog_timestamp}.
  # the original logstash input timestamp is saved in field %{received_at}.
  grok {
    type => "syslog_relay"
    pattern => [ "^%{SYSLOGTIMESTAMP:syslog_timestamp}%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_timestamp"
    add_field => [ "received_at", "%{@timestamp}" ]
  }

  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_timestamp" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_timestamp"
  }
  date {
    type => "syslog_relay"
    tags => [ "got_syslog_timestamp" ]
    # season to taste for your own syslog format(s)
    syslog_timestamp => [ "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
  }

  # strip the host field from the syslog line.
  # the extracted host field becomes the logstash %{@source_host} metadata
  # and is also available in the field %{syslog_hostname}.
  # the original logstash source_host is saved in field %{logstash_source}.
  grok {
    type => "syslog_relay"
    pattern => [ "^%{SYSLOGHOST:syslog_hostname}%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_host"
    add_field => [ "logstash_source", "%{@source_host}" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_host" ]
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_host"
  }

  # Extract the APP-NAME and PROCID if present
  grok {
    type => "syslog_relay"
    pattern => [ "%{SYSLOGPROG:syslog_prog}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_prog"
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_prog" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_prog"
  }

  # Filter to replace source_host IP with PTR (reverse dns) record
  dns {
    type => 'syslog_relay'
    reverse => [ "@source_host", "@source_host" ]
    action => "replace"
  }

  # Remove redundant fields
  mutate {
    type => "syslog_relay"
    tags => [ "syslog" ]
    remove => [ "syslog_hostname", "syslog_timestamp" ]
  }
}

output {
  # If your elasticsearch server is discoverable with multicast, use this:
  elasticsearch { }

  # If you can't discover using multicast, set the address explicitly
  #elasticsearch {
  #  host => "myelasticsearchserver"
  #}
}

The configuration does the following:
  • Sets up a syslog server listening on the default port 514, tagging events with "syslog"
  • Ignores logs without the syslog tag
  • Parses metadata out of the message into separate fields (such as priority)
  • Replaces @message with the message remainder, stripped of metadata
  • Does a reverse DNS lookup of the source IP
  • Sends the parsed log event to Elasticsearch
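To verify the pipeline end to end without waiting for a real device, you can hand-craft a legacy BSD syslog line and push it at the UDP input with netcat (the hostname vcs01 and program tvcs are made-up examples):

```shell
# PRI 134 = facility local0 (16 * 8) + severity informational (6)
MSG="<134>$(date '+%b %e %H:%M:%S') vcs01 tvcs: test event from netcat"
echo "$MSG"

# Send it to the Logstash UDP input (uncomment with a listener running on 514)
# echo "$MSG" | nc -u -w1 localhost 514
```

The event should then show up in Elasticsearch with syslog_facility set to local0.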

CUCM and Unity

For the legacy Cisco devices, the log messages are in what appears to be a custom format, and thus require slightly more parsing. They also only seem to support the default syslog port 514, which requires Logstash to run with root privileges in order to bind to a port below 1024. In addition, the legacy BSD parsing would fail on these logs, so we have to ensure that only the legacy Cisco parsing is applied to them. The tricky part is having multiple formats arrive on the same input (type): you must be careful to ensure the right filters are applied to the right log messages.
This can be solved using tags and grep in combination with grok. Config files are read lexicographically, so you have to ensure the syslog tag is removed before the "main" syslog filter kicks in - hence the 10- prefix in the filename. The resulting configuration is below:

# /etc/logstash/10-cisco_legacy.conf
filter {
  # Filter for:
  #  - Cisco Unified Communications Manager (CUCM)
  #  - Cisco Unity Connection
  #
  # Legacy Cisco Unified Communications Solutions use a slightly different
  # syslog syntax and order than BSD, which require the use of a tailored
  # grok filter to parse it.
  #
  # Examples:
  #
  # CUCM: MSGID: : PROGRAM: MESSAGE
  # <187>578739: : ccm: 439053: cucm.cisco.com: Jan 25 2013 14:31:58.235 UTC :  %UC_CALLMANAGER-3-DbInfoError: %[DeviceName=][AppID=Cisco CallManager][ClusterID=CUCM1][NodeID=cucm]: Configuration information may be out of sync for the device and Unified CM database
  # <30>578740: : snmpd[20884]: Connection from UDP: [127.0.0.1]:56722
  # <85>578741: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  # <86>578642: : crond[29895]: pam_unix(crond:session): session closed for user root
  # <85>578643: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  #
  # Unity:
  # <85>77071: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  # <85>77072: : sudo:  servmgr : TTY=console ; PWD=/ ; USER=ccmservice ; COMMAND=/usr/local/cm/bin/soapservicecontrol.sh perfmonservice PerfmonPort status 8443
  # <85>77073: : sudo: cusysagent : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/opt/cisco/connection/lib/config-modules/dbscripts/mailstore/refreshmbxdbsizes.sh
  # <78>77074: : crond[2510]: (root) CMD (  /etc/rc.d/init.d/fiostats show)
  # <86>77075: : crond[2507]: pam_unix(crond:session): session opened for user root by (uid=0)

  # First use grep to tag messages as Cisco legacy format
  grep {
    match => [ "@message", "^<[1-9]\d{0,2}>\d*: : (?:[\w._/%-]+)(?:\[\b(?:[1-9][0-9]*)\b\])?:" ]
    add_tag => "CISCO_LEGACY"
    drop => false

    # Halt default syslog parsing (make sure other syslog filters require this tag)
    remove_tag => [ "syslog" ]
  }

  # Parse header
  grok {
    type => "syslog_relay"
    tags => [ "CISCO_LEGACY" ]
    pattern => [ "^<(?:\b(?:[1-9]\d{0,2})\b)>%{POSINT:msgid}:%{SPACE}:%{SPACE}%{SYSLOGPROG}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
    add_tag => "got_legacy_header"
    add_field => [ "syslog_raw_message", "%{@message}" ]
  }
  syslog_pri {
    type => "syslog_relay"
    tags => [ "got_legacy_header" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_legacy_header" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_legacy_header"
  }
  # Look for additional fields if present
  grep {
    match => [ "@message", "\b(?<syslog_hostname>[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b):\s*(?<syslog_timestamp>\b\w\w\w \d\d \d\d\d\d \d\d:\d\d:\d\d\.\d+\b)\s*UTC" ]
    drop => false
    add_tag => "extra_legacy_info"
  }
  grok {
    type => "syslog_relay"
    tags => [ "extra_legacy_info" ]
    break_on_match => true
    pattern => [ "%{SYSLOGHOST:syslog_hostname}:%{SPACE}(?<syslog_timestamp>\b\w\w\w \d\d \d\d\d\d \d\d:\d\d:\d\d\.\d+\b)%{SPACE}UTC%{SPACE}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "extra_legacy_info", "CISCO_LEGACY" ]
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{message_remainder}" ]
    replace => [ "syslog_timestamp", "%{syslog_timestamp}+00:00"]
    remove => [ "message_remainder" ]
  }
  date {
    type => "syslog_relay"
    tags => [ "extra_legacy_info", "CISCO_LEGACY" ]
    match => [ "syslog_timestamp", "MMM dd yyyy HH:mm:ss.SZZ", "MMM dd yyyy HH:mm:ss.SSZZ", "MMM dd yyyy HH:mm:ss.SSSZZ" ]
    add_field => [ "received_at", "%{@timestamp}" ]
    remove_tag => [ "extra_legacy_info" ]
  }

  # Remove redundant fields
  mutate {
    type => "syslog_relay"
    tags => [ "CISCO_LEGACY" ]
    remove => [ "syslog_hostname" ]
    remove => [ "syslog_timestamp" ]
  }
}

This configuration ensures:
  • Any log message determined to be in the legacy Cisco format loses its syslog tag (this must happen before the "default" syslog filter kicks in)
  • Those messages are parsed using a tailored filter
  • Multiple log formats can share the default syslog port 514
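The grep expression can be sanity-checked from the shell against one of the sample lines quoted in the config comments; the same pattern works with grep -E once the \d and \w shorthands are expanded:

```shell
# One of the CUCM sample lines from the comments above
LINE='<30>578740: : snmpd[20884]: Connection from UDP: [127.0.0.1]:56722'

# Equivalent of the logstash grep match, expanded for grep -E
if echo "$LINE" | grep -Eq '^<[1-9][0-9]{0,2}>[0-9]*: : [A-Za-z0-9._/%-]+(\[[1-9][0-9]*\])?:'; then
  echo "legacy Cisco format"
fi
```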

Thursday, January 17, 2013

Installing Logstash as syslog server on Ubuntu Server 12.04 LTS

Centralized log server

Server for receiving logs in legacy BSD format

ElasticSearch

Install Java dependency (java 6 or newer)
apt-get install default-jre -y
Get Elasticsearch .deb from: http://www.elasticsearch.org/download/ and install it + dependencies
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.deb
dpkg -i elasticsearch-0.90.7.deb
apt-get install -f

Logstash

Logstash comes with a ready-to-run monolithic jar file, but I prefer a .deb package including an init script and sample configs, since I find .deb more familiar to deploy and upgrade (e.g. using Puppet).

Install dependencies for creating .deb
 apt-get install git rubygems -y
 gem install fpm
Create .deb:
git clone https://github.com/Yuav/logstash-packaging.git --depth=1
cd logstash-packaging
./package.sh
Install:
cd ..
dpkg -i logstash_1.2.2.deb
This installs the Logstash init scripts and a sample config. For a quick test to see if it's working, start Logstash and access the web interface on port 9292 once it's done spinning up.
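Before wiring up real inputs, a minimal config makes it easy to smoke-test the install - something along these lines, run with logstash agent -f test.conf:

```
# test.conf - echo stdin back out as parsed events
input  { stdin  { type => "test" } }
output { stdout { debug => true } }
```

Type a line, and a structured event should be printed back.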

At this point, you might want to optimize ElasticSearch to minimize storage footprint, depending on your setup.

Also, you probably want to install Kibana as a web frontend, which is due to replace the default web interface in logstash core at a later time.

Monday, November 19, 2012

High-Availability NFS Pseudo Load-Balanced Using Round Robin DNS, Virtual IP and Heartbeat on Ubuntu 12.04 LTS

Goal of this setup is to provide fault tolerance, and increased read performance for an NFS share.

Prerequisites:

  • Two servers with equal content to be shared using NFS
  • DNS server 

 The setup:

  • Server1: server1.example.com, IP address: 10.0.0.100
  • Server2: server2.example.com, IP address: 10.0.0.101
  • Virtual IP address: 10.0.0.200 preferred server1, failover at server2
  • Virtual IP address: 10.0.0.201 preferred server2, failover at server1
  • Virtual IP address: fe80::f1ee:dead:beef preferred server1, failover at server2
  • Virtual IP address: fe80::baad:dead:beef preferred server2, failover at server1
  • DNS entry with A (and AAAA) records pointing to both virtual IPs (multihomed, resolved using round robin)
For my particular setup, I have replicated storage using DRBD with OCFS2 to ensure equal content across my two servers.

Result:

  • Fault tolerant NFS share behind a single DNS name
  • If one server goes down, the other server will take over the virtual IP, providing high-availability
  • Round robin balanced NFS mounts across the two servers

Set up two virtual ips

Virtual ips are set up and managed by Heartbeat. Set it up on both servers:
apt-get install heartbeat -y
echo -e "auth 3 \n3 md5 secretpasswordhere" > /etc/ha.d/authkeys
chmod 600 /etc/ha.d/authkeys

Server1:
nano /etc/ha.d/ha.cf
# How many seconds between heartbeats
keepalive 2

# Seconds before declaring host dead
deadtime 10
 
# What UDP port to use for udp or ppp-udp communication?
udpport 694

bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.101

# What interfaces to heartbeat over?
udp     eth0
logfacility     local0

# Allow ip to float back when server recovers
auto_failback on

# Tell what machines are in the cluster
# node must match uname -n
node    server1.example.com
node    server2.example.com

  Server2:
nano /etc/ha.d/ha.cf  
#
#       keepalive: how many seconds between heartbeats
#
keepalive 2
#
#       deadtime: seconds-to-declare-host-dead
#
deadtime 10
#
#       What UDP port to use for udp or ppp-udp communication?
#
udpport        694
bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.100
#       What interfaces to heartbeat over?
udp     eth0
#
#       Facility to use for syslog()/logger (alternative to log/debugfile)
#
logfacility     local0
 
# Allow ip to float back when server recovers
auto_failback on 
 
#
#       Tell what machines are in the cluster
#       node    nodename ...    -- must match uname -n
node    server1.example.com
node    server2.example.com
Now, on both servers - set up the virtual ips:

nano /etc/ha.d/haresources 
server1 10.0.0.200 IPv6addr::fe80:0000:0000:0000:0000:f1ee:dead:beef
server2 10.0.0.201 IPv6addr::fe80:0000:0000:0000:0000:baad:dead:beef
 Note: Be sure the name matches 'uname -n' output.

Restart heartbeat to make the changes take effect:

/etc/init.d/heartbeat restart 

Now server1 is the primary node for 10.0.0.200, and will take over 10.0.0.201 if server2 stops responding (and vice versa)

Configuring NFS

Install NFS and share /var/www on both servers:

apt-get install nfs-kernel-server -y
mkdir -p /var/www
echo "/var/www 10.0.0.0/19(rw,async,no_subtree_check)" >> /etc/exports
exportfs -ra


Now your clients can mount via the DNS name pointing to both virtual IPs, and enjoy the benefits of a highly available NFS server!
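On the clients, the mount references the round robin DNS name instead of either server directly - nfs.example.com below is a placeholder for whatever multihomed record you created:

```
# /etc/fstab on an NFS client (example name and mount point)
nfs.example.com:/var/www  /var/www  nfs  defaults  0  0
```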

Friday, November 9, 2012

High-Availability Mysql with MyISAM support on Ubuntu 12.04

As it turns out, there are quite a few approaches to providing high-availability MySQL - the majority of which don't support MyISAM. This is quite annoying when the main use for the MySQL server is hosting out-of-the-box open source applications, which utilize MyISAM full-text search more often than not.

For my solution, I've utilized:

  • Circular MySQL master-master replication
  • mysql-mmm to manage replication roles and virtual IPs
  • Heartbeat to keep the mmm monitor running on exactly one node

 The setup:

  • Server1: server1.example.com, IP address: 10.0.0.100
  • Server2: server2.example.com, IP address: 10.0.0.101
  • Virtual IP address: 10.0.0.200 preferred server1, failover at server2
  • Virtual IP address: 10.0.0.201 preferred server2, failover at server1

Setting up circular Mysql master replication managed by mmm

The official mmm site has a good tutorial which will get you started. There are however a couple of things I've done differently:
  • Ubuntu 12.04 comes with mysql-mmm in package repos: apt-get install mysql-mmm-agent mysql-mmm-monitor
  • I only utilize two servers, one for primary write, and one for primary read - thus I have agent and monitor on both servers
  • My config for replication is done in /etc/mysql/conf.d/replication.cnf which is included by default for mysql-server package in Ubuntu 12.04
  • Remove the startup links for mysql-mmm-monitor, so that only Heartbeat will start it: update-rc.d -f mysql-mmm-monitor remove
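For completeness, the circular replication part of my replication.cnf looks roughly like this on server1 (the server-ids and increment offsets are illustrative; server2 mirrors it with server-id = 2 and auto_increment_offset = 2):

```
# /etc/mysql/conf.d/replication.cnf on server1 (sketch)
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
log_slave_updates        = 1
# keep the two masters from handing out colliding auto-increment ids
auto_increment_increment = 2
auto_increment_offset    = 1
```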

Setting up high availability for the mmm monitor

To avoid collisions, the monitor must only run on one node at a time - I'm using Heartbeat to ensure that.
apt-get install heartbeat -y
echo -e "auth 3 \n3 md5 secretpasswordhere" > /etc/ha.d/authkeys
chmod 600 /etc/ha.d/authkeys

Server1:
nano /etc/ha.d/ha.cf
# How many seconds between heartbeats
keepalive 2

# Seconds before declaring host dead
deadtime 10
 
# What UDP port to use for udp or ppp-udp communication?
udpport 694

bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.101

# What interfaces to heartbeat over?
udp     eth0
logfacility     local0

# Don't allow ping pong effect on masters
auto_failback off

# Tell what machines are in the cluster
# node must match uname -n
node    server1.example.com
node    server2.example.com
  Server2:
nano /etc/ha.d/ha.cf  
#
#       keepalive: how many seconds between heartbeats
#
keepalive 2
#
#       deadtime: seconds-to-declare-host-dead
#
deadtime 10
#
#       What UDP port to use for udp or ppp-udp communication?
#
udpport        694
bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.100
#       What interfaces to heartbeat over?
udp     eth0
#
#       Facility to use for syslog()/logger (alternative to log/debugfile)
#
logfacility     local0
 
# Don't allow ping pong effect on masters
auto_failback off 
 
#
#       Tell what machines are in the cluster
#       node    nodename ...    -- must match uname -n
node    server1.example.com
node    server2.example.com
Now, I'll set up Heartbeat to prefer running the mysql-mmm-monitor init script (located in /etc/init.d/mysql-mmm-monitor) on server1 if possible; otherwise it will be spawned on any other node in the Heartbeat cluster. Heartbeat utilizes the init script's return codes for start and stop to determine whether it's running.

nano /etc/ha.d/haresources 
server1 mysql-mmm-monitor 
 Note: Be sure the name matches 'uname -n' output.

Restart heartbeat to make the changes take effect:

/etc/init.d/heartbeat restart