Sunday, August 8, 2021

Dynamic DNS using ChangeIP on Ubiquiti Security Gateway (USG)

ChangeIP has been my favorite dynamic DNS provider for many years now, because they allow FREE, NON-EXPIRING domains with what seems like an unlimited number of subdomains! This appears to be quite unique in the market, as many other popular vendors are either paid-only or require the user to click a link in an email every month or so to keep the domains from expiring.

Navigating to the UniFi interface, you can configure dynamic DNS as shown in the screenshot below.

The USG uses ddclient to perform the actual update, but unfortunately changeip.com is not listed as a supported service. However, since the server option of ddclient is exposed directly in the UI, we can input the ChangeIP API URL. ChangeIP supports both Basic Auth and username/password as query parameters. By specifying the dyndns service, the USG (through ddclient) will use Basic Auth with the supplied username and password, and the query parameters are kept as-is.

Example configuration

Service: dyndns
Hostname: your-hostname-with.changeip.com
Username: YourChangeIpUsername
Password: YourChangeIpPassword
Server: nic.changeip.com/nic/update?hostname=your-hostname-with.changeip.com
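
Before committing the credentials to the USG, you can verify them against the same endpoint with curl. A minimal sketch, using the placeholder hostname and credentials from above and assuming the endpoint accepts HTTPS:

# Placeholders: substitute your own ChangeIP credentials and hostname
curl -u 'YourChangeIpUsername:YourChangeIpPassword' \
  'https://nic.changeip.com/nic/update?hostname=your-hostname-with.changeip.com'

A success response means the credentials and hostname are valid, and the USG configuration above should work.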

 

Wednesday, November 9, 2016

Make Travis-CI publish your module to Puppet forge automatically

Typically you want releases of your module on the Puppet Forge to match your git tags. Example:
git tag '0.1.0' -m 'Version 0.1.0'
To have Travis publish this tag as a new version on the Puppet Forge, you can add a deploy step to .travis.yml:
---
language: ruby
script:
  - 'bundle exec rake $CHECK'
matrix:
  fast_finish: true
  include:
  - rvm: 2.1.9
    env: PUPPET_VERSION="~> 3.0" STRICT_VARIABLES="yes" CHECK=test
  - rvm: 2.1.9
    env: PUPPET_VERSION="~> 4.0" CHECK=test
  - rvm: 2.3.1
    env: PUPPET_VERSION="~> 4.0" CHECK=rubocop
  - rvm: 2.3.1
    env: PUPPET_VERSION="~> 4.0" CHECK=test DEPLOY_TO_FORGE=yes
deploy:
  provider: puppetforge
  user: yuav
  password:
    secure: "ZjnryQvoKXpvL8mCF+4VSQDsE"
  on:
    tags: true
    # all_branches is required to use tags
    all_branches: true
    # Only publish the build marked with "DEPLOY_TO_FORGE"
    condition: "$DEPLOY_TO_FORGE = yes"
With the above configuration, Travis will trigger the deploy step when a tag is pushed, using the credentials supplied. Naturally, you don't want your Puppet Forge credentials in clear text in .travis.yml. Fortunately, Travis supports encrypting variables:
gem install travis
travis encrypt secretpassword --add deploy.password
This will add/update the password.secure section in .travis.yml. The key will be valid for this module's git repo only. For more information, see https://docs.travis-ci.com/user/environment-variables/#Encrypting-environment-variables

Now you're good to go! To release to the Forge, just do the following:
git tag '0.1.0' -m 'Version 0.1.0'
git push origin master --tags
Wait for the tests to pass, and Travis will publish to the Puppet Forge!

Wednesday, September 18, 2013

Allow host with dynamic IP through iptables firewall

I have a telnet-enabled service which doesn't have any form of authentication. Naturally I don't want to expose this to the public internet, so it's firewalled. Now I want to allow my host to access it, but my host has a dynamic IP address. iptables only supports static IPs and IP ranges, but for my use case I only wanted to allow a single (changing) IP.

My solution is to update the iptables rules using cron, such that only the IP currently found at my dynamic DNS name is allowed through the firewall.
#!/bin/sh
# dnsallow.sh - allow the current IP behind a dynamic DNS name through iptables
HOSTNAME=$1
IP=$(host "$HOSTNAME" | grep -iE "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" | cut -f4 -d' ' | head -n 1)

# If a chain for the remote host doesn't exist, create it
if ! /sbin/iptables -L "$HOSTNAME" -n >/dev/null 2>&1 ; then
  /sbin/iptables -N "$HOSTNAME" >/dev/null 2>&1
fi

# Flush old rules, and add the new one
/sbin/iptables -F "$HOSTNAME"
/sbin/iptables -I "$HOSTNAME" -s "$IP" -j ACCEPT

# Add the chain to the INPUT filter if it isn't referenced yet
if ! /sbin/iptables -C INPUT -t filter -j "$HOSTNAME" >/dev/null 2>&1 ; then
  /sbin/iptables -t filter -I INPUT -j "$HOSTNAME"
fi

Example usage: 
./dnsallow.sh my.dynamic.dns.com
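
To keep the rule current, run the script periodically from cron. A minimal sketch, assuming the script is saved as /usr/local/bin/dnsallow.sh (the path and five-minute interval are arbitrary choices):

# /etc/cron.d/dnsallow
*/5 * * * * root /usr/local/bin/dnsallow.sh my.dynamic.dns.com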

Thursday, March 14, 2013

Bind9 failover using Heartbeat and virtual IP

A Bind9 master-slave setup is designed for failover. However, when the primary DNS server listed in /etc/resolv.conf is down, there's a 5 second timeout by default before the resolver gives up and tries the secondary server. This is experienced as everything being slow.

To avoid this 5 second lag, a virtual IP can be used to simply move the primary DNS IP over to the secondary server.
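
Clients then only ever point at the virtual IP, so they never hit the timeout. A minimal sketch of the client side, using the 10.0.0.3 virtual IP from the Heartbeat configuration below:

# /etc/resolv.conf on clients
# 10.0.0.3 is the virtual IP, which follows whichever server is active
nameserver 10.0.0.3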

There's however one problem with this: Bind9 doesn't simply bind to 0.0.0.0. Even with listen-on { any; } set, it looks up the machine's actual IPs when loading its configuration, so an address that arrives later (such as the virtual IP during takeover) isn't served until the configuration is reloaded.

To solve this, the following configuration handles the reloads quite nicely (thanks to Christoph Berg):
# cat /etc/ha.d/haresources
server01 bind9release IPaddr::10.0.0.3 bind9takeover
# cat /etc/ha.d/resource.d/bind9release
#!/bin/sh
# when giving up resources, reload bind9
case $1 in
        stop) /etc/init.d/bind9 reload ;;
esac
exit 0
# cat /etc/ha.d/resource.d/bind9takeover
#!/bin/sh
# on takeover, reload bind9
case $1 in
        start) /etc/init.d/bind9 reload ;;
esac
exit 0

Wednesday, February 6, 2013

Installing Kibana under Passenger for Apache in Ubuntu 12.04

Kibana is a very nice frontend for logs gathered by Logstash and sent to ElasticSearch. It's likely to be merged with Logstash at some point, but at the moment it's installed on its own.

Kibana now conforms to Rack conventions, which makes it easy to deploy using Passenger. At the time of writing, the downloadable zip file (v0.2.0) does not, so you need to fetch it from GitHub instead.

Install dependencies:
apt-get install libapache2-mod-passenger apache2 ruby ruby1.8-dev libopenssl-ruby rubygems git -y
Get Kibana and the rubygems it needs:
cd /var/www
git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git --depth=1 kibana
cd kibana
gem install bundler
bundle install
Set up a virtualhost for Apache:
<VirtualHost *:80>
    ServerName kibana
    DocumentRoot /var/www/kibana/public
    <Directory /var/www/kibana/public>
        Allow from all
        Options -MultiViews
    </Directory>
</VirtualHost>
Then just restart apache, and browse to your newly installed Kibana!
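For example, assuming the virtualhost above was saved as /etc/apache2/sites-available/kibana (the filename is my choice):

a2ensite kibana
service apache2 restart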
Notes:
The Elasticsearch server is set to localhost:9200 by default. If this is not the case, the configuration is found in /var/www/kibana/KibanaConfig.rb

Monday, February 4, 2013

Cisco VOIP devices' syslog into Logstash

This how-to explains how to retrieve and parse logs from Cisco video and VOIP equipment.

In particular:
  • Cisco Telepresence Video Communications Server (VCS), formerly Tandberg Video Communications Server (VCS) version X7.2.2
  • Cisco TelePresence Server MSE 8710 version 3.0(2.24)
  • Cisco Unified Communications Manager (CUCM), formerly Cisco Unified CallManager version 9.1.1
  • Cisco Unity Connection version 8.6.1
I'm utilizing Logstash 1.1.9 installed on Ubuntu 12.04, which is set up to read all config files in /etc/logstash, and thus I've split my config into multiple files.

VCS and TelePresence Server


Logstash parsing is straightforward, as these devices use the legacy BSD syslog format by default.

To enable remote syslog:
  • VCS:  Maintenance -> Logging
  • MSE: Log -> Syslog

Then on the Logstash server, I'm using the following config to parse it:

# /etc/logstash/syslog.conf
input {
  # The default syslog server port requires root privileges (ports < 1024)
  tcp {
    port => 514
    type => syslog_relay
    tags => [ "syslog" ]
  }
  udp {
    port => 514
    type => syslog_relay
    tags => [ "syslog" ]
  }
}

filter {
  # strip the syslog PRI part and create facility and severity fields.
  # the original syslog message is saved in field %{syslog_raw_message}.
  # the extracted PRI is available in the %{syslog_pri} field.
  #
  # You get %{syslog_facility_code} and %{syslog_severity_code} fields.
  # You also get %{syslog_facility} and %{syslog_severity} fields if the
  # use_labels option is set True (the default) on syslog_pri filter.
  grok {
    type => "syslog_relay"
    pattern => [ "^<[1-9]\d{0,2}>%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_pri"
    add_field => [ "syslog_raw_message", "%{@message}" ]
  }
  syslog_pri {
    type => "syslog_relay"
    tags => [ "got_syslog_pri" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_pri" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_pri"
  }

  # strip the syslog timestamp and force event timestamp to be the same.
  # the original string is saved in field %{syslog_timestamp}.
  # the original logstash input timestamp is saved in field %{received_at}.
  grok {
    type => "syslog_relay"
    pattern => [ "^%{SYSLOGTIMESTAMP:syslog_timestamp}%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_timestamp"
    add_field => [ "received_at", "%{@timestamp}" ]
  }

  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_timestamp" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_timestamp"
  }
  date {
    type => "syslog_relay"
    tags => [ "got_syslog_timestamp" ]
    # season to taste for your own syslog format(s)
    syslog_timestamp => [ "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
  }

  # strip the host field from the syslog line.
  # the extracted host field becomes the logstash %{@source_host} metadata
  # and is also available in the field %{syslog_hostname}.
  # the original logstash source_host is saved in field %{logstash_source}.
  grok {
    type => "syslog_relay"
    pattern => [ "^%{SYSLOGHOST:syslog_hostname}%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_host"
    add_field => [ "logstash_source", "%{@source_host}" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_host" ]
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_host"
  }

  # Extract the APP-NAME and PROCID if present
  grok {
    type => "syslog_relay"
    pattern => [ "%{SYSLOGPROG:syslog_prog}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
    tags => [ "syslog" ]
    add_tag => "got_syslog_prog"
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_syslog_prog" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_syslog_prog"
  }

  # Filter to replace source_host IP with PTR (reverse dns) record
  dns {
    type => 'syslog_relay'
    reverse => [ "@source_host", "@source_host" ]
    action => "replace"
  }

  # Remove redundant fields
  mutate {
    type => "syslog_relay"
    tags => [ "syslog" ]
    remove => [ "syslog_hostname", "syslog_timestamp" ]
  }
}

output {
  # If your elasticsearch server is discoverable with multicast, use this:
  elasticsearch { }

  # If you can't discover using multicast, set the address explicitly
  #elasticsearch {
  #  host => "myelasticsearchserver"
  #}
}

The configuration does the following:
  • Sets up a syslog server listening on the default port 514, tagging events with "syslog"
  • Ignores logs without the syslog tag
  • Parses metadata from the message and puts it in separate fields (such as priority)
  • Replaces @message with the message remainder, stripped of metadata
  • Does a reverse DNS lookup of the source IP
  • Sends the parsed log event to Elasticsearch
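
To verify the pipeline end to end, you can hand-craft a legacy BSD syslog line and push it into the UDP input with netcat. A minimal sketch, assuming Logstash runs on the local machine; the hostname, program and message are made up:

echo '<13>Feb  6 12:00:00 testhost myapp[42]: hello from netcat' | nc -u -w1 localhost 514

The event should then show up in Elasticsearch with the fields described above.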

CUCM and Unity

For the legacy Cisco devices, the log messages are in what appears to be a custom format, and thus require slightly more parsing. Also, they only seem to support sending to the default syslog port 514, which requires Logstash to run with root privileges in order to bind to a port below 1024. In addition, the legacy BSD parsing would fail on these logs, so we have to ensure that only the legacy Cisco parsing is applied to them. The tricky part is having multiple kinds of events sent to the same input (type): you must be careful to ensure the right filters are applied to the right log messages.
This can be solved using tags and grep in combination with grok. Config files are read lexicographically, so you have to ensure the syslog tag is removed before the "main" syslog filter kicks in, which is achieved by choosing an appropriate filename. The resulting configuration is below:

# /etc/logstash/10-cisco_legacy.conf
filter {
  # Filter for:
  #  - Cisco Unified Communications Manager (CUCM)
  #  - Cisco Unity Connection
  #
  # Legacy Cisco Unified Communications Solutions use a slightly different
  # syslog syntax and order than BSD, which require the use of a tailored
  # grok filter to parse it.
  #
  # Examples:
  #
  # CUCM: MSGID: : PROGRAM: MESSAGE
  # <187>578739: : ccm: 439053: cucm.cisco.com: Jan 25 2013 14:31:58.235 UTC :  %UC_CALLMANAGER-3-DbInfoError: %[DeviceName=][AppID=Cisco CallManager][ClusterID=CUCM1][NodeID=cucm]: Configuration information may be out of sync for the device and Unified CM database
  # <30>578740: : snmpd[20884]: Connection from UDP: [127.0.0.1]:56722
  # <85>578741: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  # <86>578642: : crond[29895]: pam_unix(crond:session): session closed for user root
  # <85>578643: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  #
  # Unity:
  # <85>77071: : sudo: database : TTY=console ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/logrotate /usr/local/cm/bin/ccmlogRotate
  # <85>77072: : sudo:  servmgr : TTY=console ; PWD=/ ; USER=ccmservice ; COMMAND=/usr/local/cm/bin/soapservicecontrol.sh perfmonservice PerfmonPort status 8443
  # <85>77073: : sudo: cusysagent : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/opt/cisco/connection/lib/config-modules/dbscripts/mailstore/refreshmbxdbsizes.sh
  # <78>77074: : crond[2510]: (root) CMD (  /etc/rc.d/init.d/fiostats show)
  # <86>77075: : crond[2507]: pam_unix(crond:session): session opened for user root by (uid=0)

  # First use grep to tag messages as Cisco legacy format
  grep {
    match => [ "@message", "^<[1-9]\d{0,2}>\d*: : (?:[\w._/%-]+)(?:\[\b(?:[1-9][0-9]*)\b\])?:" ]
    add_tag => "CISCO_LEGACY"
    drop => false

    # Halt default syslog parsing (make sure other syslog filters require this tag)
    remove_tag => [ "syslog" ]
  }

  # Parse header
  grok {
    type => "syslog_relay"
    tags => [ "CISCO_LEGACY" ]
    pattern => [ "^<(?:\b(?:[1-9]\d{0,2})\b)>%{POSINT:msgid}:%{SPACE}:%{SPACE}%{SYSLOGPROG}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
    add_tag => "got_legacy_header"
    add_field => [ "syslog_raw_message", "%{@message}" ]
  }
  syslog_pri {
    type => "syslog_relay"
    tags => [ "got_legacy_header" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "got_legacy_header" ]
    replace => [ "@message", "%{message_remainder}" ]
    remove => [ "message_remainder" ]
    remove_tag => "got_legacy_header"
  }
  # Look for additional fields if present
  grep {
    match => [ "@message", "\b(?[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b):\s*(?\b\w\w\w \d\d \d\d\d\d \d\d:\d\d:\d\d\.\d+\b)\s*UTC" ]
    drop => false
    add_tag => "extra_legacy_info"
  }
  grok {
    type => "syslog_relay"
    tags => [ "extra_legacy_info" ]
    break_on_match => true
    pattern => [ "%{SYSLOGHOST:syslog_hostname}:%{SPACE}(?\b\w\w\w \d\d \d\d\d\d \d\d:\d\d:\d\d\.\d+\b)%{SPACE}UTC%{SPACE}:%{SPACE}%{GREEDYDATA:message_remainder}" ]
  }
  mutate {
    type => "syslog_relay"
    tags => [ "extra_legacy_info", "CISCO_LEGACY" ]
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{message_remainder}" ]
    replace => [ "syslog_timestamp", "%{syslog_timestamp}+00:00"]
    remove => [ "message_remainder" ]
  }
  date {
    type => "syslog_relay"
    tags => [ "extra_legacy_info", "CISCO_LEGACY" ]
    match => [ "syslog_timestamp", "MMM dd yyyy HH:mm:ss.SZZ", "MMM dd yyyy HH:mm:ss.SSZZ", "MMM dd yyyy HH:mm:ss.SSSZZ" ]
    add_field => [ "received_at", "%{@timestamp}" ]
    remove_tag => [ "extra_legacy_info" ]
  }

  # Remove redundant fields
  mutate {
    type => "syslog_relay"
    tags => [ "CISCO_LEGACY" ]
    remove => [ "syslog_hostname" ]
    remove => [ "syslog_timestamp" ]
  }
}

This configuration ensures:
  • Any log message determined to be in legacy Cisco format loses the syslog tag (this must happen before the "default" syslog filter kicks in)
  • Those messages are parsed using a tailored filter
  • Multiple senders with different log formats can share the default syslog port 514
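
As with the BSD input, you can replay one of the sample lines from the config comments above to confirm it picks up the CISCO_LEGACY tag instead of the default syslog treatment (again assuming Logstash listens locally):

echo '<86>578642: : crond[29895]: pam_unix(crond:session): session closed for user root' | nc -u -w1 localhost 514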

Thursday, January 17, 2013

Installing Logstash as syslog server on Ubuntu Server 12.04 LTS

Centralized log server

The goal is a central server for receiving logs in the legacy BSD syslog format.

ElasticSearch

Install the Java dependency (Java 6 or newer):
apt-get install default-jre -y
Get the Elasticsearch .deb from http://www.elasticsearch.org/download/ and install it together with its dependencies:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.deb
dpkg -i elasticsearch-0.90.7.deb
apt-get install -f
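
To confirm Elasticsearch came up, query it on its default HTTP port; this returns a small JSON blob with the node name and version:

curl http://localhost:9200/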

Logstash

Logstash comes with a ready-to-run monolithic jar file, but I prefer a .deb package with an init script and sample configs, since I find .deb more familiar to deploy and upgrade (e.g. using Puppet).

Install the dependencies for creating the .deb:
apt-get install git rubygems -y
gem install fpm
Create .deb:
git clone https://github.com/Yuav/logstash-packaging.git --depth=1
cd logstash-packaging
./package.sh
Install:
cd ..
dpkg -i logstash_1.2.2.deb
This will install the logstash init script and a sample config. For a quick test to see if it's working, try starting logstash and accessing the web interface on port 9292 after it's done spinning up.
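For example, assuming the init script installed by the package is named logstash:

service logstash start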

At this point, you might want to optimize ElasticSearch to minimize storage footprint, depending on your setup.

Also, you probably want to install Kibana as a web frontend, which is due to replace the default web interface in logstash core at a later time.