Outbound Postfix with SASL Authentication against LDAP (Dovecot)

I recently had to set up an outbound Postfix server with SASL authentication against LDAP.

I’m a huge fan of Dovecot, so I went with it instead of Cyrus, which was a pain to set up a few years back. Not sure about now.

I hadn’t done this in a while, and if you search this site, you’ll see I actually did SASL auth against MySQL a couple of years ago. Since then, Dovecot has reached v2, which brought a lot of changes to the configuration files.

Tested on RHEL 6.3 + Postfix 2.6.6 + Dovecot 2.0.9 + Thunderbird 17 and Outlook 2003/7/10

I assume you installed Postfix and Dovecot with yum from the RHEL repositories.

Just add the following to the configuration files and you should be fine. Tune to your liking: some settings may not be the most sensible for your environment, so please do not blindly copy-paste and be done with it.


# SASL Auth
mynetworks =
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

# Display the auth user in the headers
smtpd_sasl_authenticated_header = yes

# Next hop

# Postfix TLS
smtpd_tls_auth_only = no
smtp_use_tls = no
smtpd_use_tls = no
smtp_tls_note_starttls_offer = no
smtpd_tls_cert_file = /etc/ssl/certs/mycert.pem
smtpd_tls_key_file = /etc/ssl/certs/mycert.pem
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_session_cache
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom


smtps     inet  n       -       n       -       10      smtpd
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_tls_wrappermode=yes
  -o smtpd_client_connection_count_limit=5
  -o smtpd_client_connection_rate_limit=5


auth_mechanisms = plain login
disable_plaintext_auth = no

# I'm only using Dovecot for SASL auth, not POP or IMAP
protocols = none

ssl = no

# Debug

# Postfix SASL auth
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  user = root
}

passdb {
  driver = ldap
  args = /etc/dovecot/ldap.conf.ext
}

I owe you a bit of explanation here though.

I’m using the auth_bind method, which means the user is validated by performing an LDAP bind with the provided credentials. This is nice: you don’t need access to the password string in LDAP (which I didn’t have), nor do you have to deal with the hashing method used.

Also, something specific to my environment: users configure their client with their username, which in LDAP is stored under “companyLoginName”. Dovecot, however, needs to bind against the CN, so you have to look up the companyLoginName string in order to retrieve the cn; you can do so with pass_filter. Then you assign the resulting CN as the actual username (see: pass_attrs).


hosts = ldap.example.org
dn = cn=some,ou=string,ou=from,o=ldap
dnpass = password

debug_level = 0

auth_bind = yes
base = ou=somewhere,o=ldap
scope = subtree

pass_attrs = cn=user
pass_filter = (&(objectClass=companyAccount)(companyLoginName=%u))
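Once Postfix and Dovecot are restarted, you can check the whole chain by hand. The AUTH PLAIN token is simply base64 of "\0username\0password"; a sketch with placeholder credentials (user01/password) and a placeholder hostname:

```shell
# Build the AUTH PLAIN token: base64 of "\0username\0password"
printf '\0user01\0password' | base64
# -> AHVzZXIwMQBwYXNzd29yZA==

# Paste the token into an SMTP session on the smtps port, for example:
#   openssl s_client -quiet -connect smtp.example.org:465
#   EHLO client.example.org
#   AUTH PLAIN AHVzZXIwMQBwYXNzd29yZA==
```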

Evaluating Ansible

I’m currently actively working with Salt; I have a dozen production servers at work running critical services through it, and I commit new things into the production branch every couple of days.
Since Ansible seems to get all the rage (it at least convinced a couple of fellow FOSS friends, Fabian, Serge, etc.), I decided to give it a try and compare the two solutions.

I’m detailing here how you can start working with Ansible in about 3 minutes. I’m limiting this post to remote execution and won’t cover playbooks. That’s really for my own future reference.

By default, Ansible uses Paramiko as its transport, but in this example I’m using regular SSH. I’m also working with two “minions” (Salt terminology).

master# apt-get -y install python-yaml python-jinja2
master# ssh-keygen
master# ssh-copy-id -i ansible root@host1
master# ssh-copy-id -i ansible root@host2
master# git clone git://github.com/ansible/ansible.git

In your .bashrc add :

source /root/ansible/hacking/env-setup
export ANSIBLE_HOSTS=/root/ansible/hosts

Edit /root/ansible/hosts and add :
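A minimal inventory matching the two minions used below could look like this (the group name is an assumption):

```ini
[test]
host01
host02
```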


Now let’s test a simple command :

master# ansible all -m ping
host02 | success >> {
    "changed": false,
    "ping": "pong"
}
host01 | success >> {
    "changed": false,
    "ping": "pong"
}

master# ansible all -m command -a uptime
host02 | success | rc=0 >>
 20:27:33 up  1:50,  2 users,  load average: 0.00, 0.00, 0.00

host01 | success | rc=0 >>
 20:27:33 up  1:50,  2 users,  load average: 0.00, 0.00, 0.00

Tomcat 6 webapp authentication against AD

Tested on RHEL6

Add the following to /etc/tomcat6/server.xml (before the closing </Host> tag) :

<Realm className="org.apache.catalina.realm.JNDIRealm" debug="99"
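The attribute list continues with the AD connection and search settings; a typical sketch (all values are placeholders — the server, bind DN, and password are assumptions), using roleName="cn" as described below:

```xml
<Realm className="org.apache.catalina.realm.JNDIRealm" debug="99"
       connectionURL="ldap://ad.example.org:389"
       connectionName="CN=binduser,OU=Service Accounts,DC=example,DC=org"
       connectionPassword="secret"
       userBase="OU=Users,DC=example,DC=org"
       userSearch="(sAMAccountName={0})"
       userSubtree="true"
       roleBase="OU=Groups,DC=example,DC=org"
       roleName="cn"
       roleSearch="(member={0})"
       roleSubtree="true" />
```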



Add your users to the group (role in Tomcat terms, which we’ll call “myapplication” in this example) in AD.

Now edit /etc/tomcat6/tomcat-users.xml with the users :

<user name="user01" roles="myapplication" />

So here we have a group “myapplication” (matched by the query ‘roleName=cn’) with member=user01.

Your webapp must be configured to require authentication and to define which roles are allowed. Here is an example :

WEB-INF/web.xml :

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Entire Application</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <!-- The role allowed in the app -->
      <role-name>myapplication</role-name>
    </auth-constraint>
  </security-constraint>
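web.xml also needs to declare the authentication method and the role; a minimal sketch, assuming BASIC auth:

```xml
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
  <security-role>
    <role-name>myapplication</role-name>
  </security-role>
```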

VLAN trunking with Cisco Catalyst 2950 + WAP4410N


On the 2950, configure the port to the WAP4410N as trunk :

switch#conf t
switch(config)#interface fastEthernet 0/12
switch(config-if)#description WAP4410N
switch(config-if)#switchport trunk native vlan 30
switch(config-if)#switchport trunk allowed vlan 10,20,30
switch(config-if)#switchport mode trunk

By default, all VLANs are allowed on a trunk. It is recommended to specify which VLANs you want on the trunk.

The native VLAN is the VLAN of any untagged frame. This is somewhat useless here, as the WLANs are tagged.

Review the configuration :

switch#show interfaces trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/12      on           802.1q         trunking      30

Port      Vlans allowed on trunk
Fa0/12      10,20,30

Port        Vlans allowed and active in management domain
Fa0/12      10,20,30

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/12      10,20,30

Configure the WAP4410N as such :
[Screenshot: WAP4410N wireless VLAN and SSID settings]

The IP in “Setup > Basic Setup” should be in the subnet of VLAN 20. It allows you to manage the unit remotely.

Under “Administration > Management” make sure you enable “Wireless web access”.

Debian installation over PXE and dnsmasq

The DHCP/TFTP server holds the IP

All commands as root :

mkdir -p /srv/tftp

cd /srv/tftp

wget http://ftp.nl.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/netboot/netboot.tar.gz

tar xvzf netboot.tar.gz

chown dnsmasq. * -R

vim /etc/dnsmasq.conf
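The relevant dnsmasq settings enable the built-in TFTP server and point DHCP clients at the pxelinux boot file; a minimal sketch (the DHCP range is an assumption, adjust to your subnet):

```ini
# /etc/dnsmasq.conf -- PXE-relevant parts
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0
dhcp-range=192.168.1.50,192.168.1.150,12h
```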


/etc/init.d/dnsmasq restart

Repurposing a Barracuda Spam & Virus Firewall

I got my hands on an out-of-warranty/subscription/whatever Barracuda unit.

This unit is a Spam & Virus Firewall 400 model from 2009 or so. Basically, it’s regular computer hardware in a 1U rack, with a Barracuda logo on it.

The mainboard is an MSI MS-7309, the CPU is an Athlon clocked at 2.7 GHz (VT available and enabled by default), and there is 2 GB of RAM. Storage is two 250 GB drives in a software RAID setup.

At boot, press DEL.

BIOS password was bcndk1.

Enable boot from other device and remove the BIOS password if you wish. That’s about it.

Grab your favorite Linux distribution. I picked the Debian 7.1.0 netinstall ISO and put it on a USB drive (sudo dd if=debian.iso of=/dev/sdb bs=1M)

I plugged it in, started the appliance and could proceed with the installation.

As you can see, I didn’t even have to flash the BIOS or unlock anything.

Before you proceed with the installation of your favorite distribution, you may want to check the filesystem. It is not encrypted, and you can get a look at what makes up a Barracuda Spam & Virus Firewall. It’s basically Postfix, probably with Amavis and Clam, with some proprietary stuff on top, sold as a black box.

References :

CPU specs : http://www.cpu-world.com/CPUs/K10/AMD-Athlon%20II%20X2%20235e%20-%20AD235EHDK23GQ%20(AD235EHDGQBOX).html

Mainboard specs : http://www.msi.com/product/mb/K9N6SGM-V—K9N6PGM-FI—K9N6PGM-F.html


If that previous link disappears from the web, the possible passwords are :
BIOS PW: 322232 32232 BCNDK1 ADMINBN99
DEFAULT PASSWORD (GUI) admin or adminbn99


mod_proxy_balancer on RHEL6

Tested on RHEL 6. This is the simplest setup possible, for my own reference. I may come up with a Salt state in the future.

Reference : http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html

/etc/httpd/conf.d/balancer-manager.conf :

<Location /balancer-manager>
    SetHandler balancer-manager
    Order Deny,Allow
    Deny from all
    Allow from
</Location>

/etc/httpd/conf.d/vhost.conf :

<VirtualHost *:80>
        ServerAdmin someadmin@example.org
        ServerName xyz.example.org
        <Proxy balancer://xyz_example_org>
                BalancerMember http://backend01.example.org:80
                BalancerMember http://backend02.example.org:80
        </Proxy>
        ProxyPass /balancer-manager !
        ProxyPass / balancer://xyz_example_org
</VirtualHost>
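The balancer defaults to the byrequests scheduler; if you want to make the method explicit or switch to another one, a ProxySet line inside the Proxy block does it (a sketch, not part of the original setup):

```apache
<Proxy balancer://xyz_example_org>
        BalancerMember http://backend01.example.org:80
        BalancerMember http://backend02.example.org:80
        ProxySet lbmethod=bytraffic
</Proxy>
```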

Access the cluster through http://xyz.example.org
Manage the cluster through http://xyz.example.org/balancer-manager (as we prevent that path from being ‘reverse proxied’)

Salt Stack, a (serious) alternative to Puppet

I couldn’t write it better : see http://www.lecloud.net/post/29325359938/salt-to-the-rescue

So basically, Salt is a configuration management system (à la Puppet) and allows remote execution (à la Rundeck).

First things first: it is very easy to install. I know Puppet now offers repositories and is probably just as easy, but Salt is just one package with a couple of dependencies. Actually, to achieve the same tasks you need both Puppet and MCollective, which are still two distinct products. Salt does the job from one package.

Then, it’s based on Python, YAML and Jinja.

The documentation is very good, and the community very active (got answers within 30 seconds in #salt on Freenode).

The last thing I like: minions keep a constant connection to the master, so you can push changes to minions immediately. I attended the Puppet Fundamentals training late last year and asked about a “push” of changes instead of a “pull”. It seems there is a solution, but the trainer couldn’t get it working.

One thing they could improve is the frontpage of their site. When you go to http://www.saltstack.org you are redirected to http://saltstack.com/community.html instead of http://saltstack.com/about.html which explains what the product does.

Installation (RHEL) :

Server :

yum --enablerepo=epel install salt-master

Edit /etc/salt/master

file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev
  prd:
    - /srv/salt/prd

pillar_roots:
  base:
    - /srv/pillar

service salt-master restart

Client :

yum --enablerepo=epel install salt-minion

Edit /etc/salt/minion

master: your.master.server.example.org

service salt-minion restart

Now you should see a pending key with “salt-key”. See “salt-key -h” for more info.

Basically, modules are called “states”.

Pillars are kind of like variables you can use in your files.

This is the content of /srv on my master :
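Presumably one directory per environment plus the pillar root, along these lines (the exact layout is an assumption):

```text
/srv
├── pillar
└── salt
    ├── acc
    ├── common
    ├── dev
    ├── prd
    ├── sandbox
    └── top.sls
```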


I have 5 environments :
- sandbox : where I develop states
- dev : development servers
- acc : staging servers
- prd : production servers
- common : states common to all environments (sshd, snmpd, etc.)

If you look in /etc/salt/master, you’ll see there’s a “base” environment. This is where your top.sls (the key component of your salt architecture) will reside :

# cat /srv/salt/top.sls
sandbox:
  '*':
    - packages
    - users
    - groups
    - files
    - sudo

dev:
  '*':
    - dev

acc:
  '*':
    - acc

prd:
  '*':
    - prd

common:
  '*':
    - motd
    - apache
    - ntpd
    - snmpd
    - sshd

You can see I started working with Salt only a couple of days ago. My states are still in the “sandbox” environment.

Here is how you push states to minions :

salt '*' state.highstate


# cat /srv/pillar/top.sls
base:
  '*':
    - convention-os


# cat /srv/pillar/convention-os.sls
convention-os:
  pkg:
    {% if grains['os_family'] == 'RedHat' %}
    apache: httpd
    snmpd: net-snmp
    vim: vim-enhanced
    {% elif grains['os_family'] == 'Debian' %}
    apache: apache2
    snmpd: snmpd
    vim: vim
    {% endif %}
  service:
    {% if grains['os_family'] == 'RedHat' %}
    apache: httpd
    ntpd: ntpd
    sshd: sshd
    {% elif grains['os_family'] == 'Debian' %}
    apache: apache2
    ntpd: ntp
    sshd: ssh
    {% endif %}

States can be named either /srv/salt/<env>/motd.sls or /srv/salt/<env>/motd/init.sls.
I tend to prefer the latter.

Here’s an example of a state calling pillars :

apache:
  pkg:
    - installed
    - name: {{ pillar['convention-os']['pkg']['apache'] }}
  service:
    - running
    - name: {{ pillar['convention-os']['service']['apache'] }}

This is a pretty rough post, sorry about that. I just wanted to spread the word about Salt and hope you’ll consider joining in.

Documentation :
Online : http://docs.saltstack.com/
PDF : http://media.readthedocs.org/pdf/salt/latest/salt.pdf