Monday, July 28, 2014

Multiple Domain LDAP or DS389

Having recently been tasked with setting up a new LDAP system that takes sub-domains into account, and that allows users from different domains access to systems in specific domains, I thought I'd write up how it was done. Most LDAP set-ups on the web only deal with one domain, and those that cover more than one still show a single domain and then use organisational units to do the rest of the work.
This set up will use the following sub-domains with a root;
root: dc=example,dc=com
sub0: dc=dev,dc=example,dc=com
sub1: dc=stg,dc=example,dc=com
sub2: dc=prod,dc=example,dc=com

Now that we have the above domains, we can set about creating the LDAP configuration.  During the project, Puppet manifests were used to create the general LDAP configuration files as per many of the examples on the web (e.g. http://ostechnix.wordpress.com/2013/02/05/setup-ldap-server-389ds-in-centosrhelscientific-linux-6-3-step-by-step/), but to create the master LDIF files I used Puppet file resources to load the initial entries for each domain along with any data for it, such as users and groups, and then 389 DS was started.

Each sub-domain was placed in the LDIF file as a DC, not as the OU that most examples show.  This keeps in line with the sub-domains of our environment.
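As a hedged sketch of what that means (the objectClass values here are assumptions, and in 389 DS each sub-suffix also needs its own database and mapping-tree entry), the suffix entries look like this when expressed as DCs:

```ldif
# Root suffix
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

# A sub-domain expressed as a dc entry, not an ou
dn: dc=dev,dc=example,dc=com
objectClass: top
objectClass: domain
dc: dev
```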

The diagram for the set up shows a single master with several replicas; the arrows in the diagram represent replication from the master to the replicas.

In this environment the master held all domains, allowing centralised administration and backups.

The most important part of configuring LDAP replicas through the LDIF files is to remove the following line, leaving an empty top-level domain in the LDIF so that replication can add the relevant configuration and data;

aci: (targetattr = "*")(version 3.0; acl "SIE Group"; allow (all) groupdn = "ldap:///cn=slapd-ldaplocal,cn=389 Directory Server,cn=Server Group,cn=master.example.com,ou=example.com,o=NetscapeRoot";)

This line defines the server and top-level domain; it is only required on the master server and must be removed from all replica configurations.

Your master server should have the following line in its LDIF file;

aci: (targetattr ="*")(version 3.0;acl "Directory Administrators Group";allow (all) (groupdn = "ldap:///cn=Directory Administrators, dc=example,dc=com");)

Master Replication Script

The following script enables you to set up the server replication.  Note that you will need to make substitutions based on your hostnames, etc., and that this is a Puppet template that generates a shell script.

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

# Create the change log
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=changelog5,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
cn: changelog5
nsslapd-changelogdir: /var/lib/dirsrv/slapd-<%= @hostname %>/changelogdb

_END_

# Create the replica to share
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicaid: 7
nsds5replicatype: 3
nsds5flags: 1
nsds5ReplicaPurgeDelay: 604800
nsds5ReplicaBindDN: cn=replication manager,cn=config

_END_

<% @consumers.each do | consumer | -%>
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_
<% end -%>

Adding a New Consumer

For each LDAP replica you wish to allow to replicate with the master, you will need to add it on the master.  Here is some Puppet template code that generates a shell script to add further replicas.
NOTE: The dn: cn=STGAgreement will need to change for each replica, so ensure that you name them accordingly (one agreement per replica host).

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_
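Since each agreement cn must be unique per consumer, a small sketch (the hostnames here are hypothetical examples) for deriving one agreement name per consumer host could look like this:

```shell
# Derive a unique replication agreement cn from each consumer hostname
# (these hostnames are hypothetical examples).
for consumer in ldap1.stg.example.com ldap2.prod.example.com; do
  echo "cn=Agreement-${consumer%%.*}"   # first label of the hostname
done
```

Each generated cn then replaces STGAgreement in the dn and cn lines of the agreement above.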

Authenticating The Replica/Consumer

The consumer must also participate in the replication.  The following Puppet template generates a shell script that is run on the replica to enable it to replicate with the master;

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicatype: 2
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5flags: 0
nsds5replicaid: 2

_END_

Final Steps

Once you have the replication agreements in place it is likely that the consumers are failing and reporting that they have different generation IDs.  To fix this you need to re-initialise the replication.

First check the log file on the master;

tail -f /var/log/dirsrv/slapd-<instance>/errors

The error may look something like;
[28/Mar/2014:10:54:49 +0000] NSMMReplicationPlugin - agmt="cn=STGAgreement" (master.example.com:389): Replica has a different generation ID than the local data.
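To pick these messages out of a busy log, something like the following works; the heredoc below just replays the sample line so the sketch is self-contained, but in practice you would grep the errors file itself:

```shell
# Filter a 389 DS errors log for replication generation-ID problems.
# The heredoc stands in for /var/log/dirsrv/slapd-<instance>/errors.
grep "different generation ID" <<'LOG'
[28/Mar/2014:10:54:49 +0000] NSMMReplicationPlugin - agmt="cn=STGAgreement" (master.example.com:389): Replica has a different generation ID than the local data.
[28/Mar/2014:10:55:02 +0000] slapd started.
LOG
```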

If you do see an error such as the above then run the following command from any LDAP server;

ldapmodify -h <hostname> -p 389 -D "cn=directory manager" -w <password> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<base_dn>",cn=mapping tree,cn=config
changetype: modify
replace: nsds5beginreplicarefresh
nsds5beginreplicarefresh: start
_END_


The Clients

The clients must also participate in LDAP authentication.  This means the following configurations will need changing;
  • /etc/ldap.conf
  • /etc/nslcd.conf
  • /etc/nsswitch.conf
  • /etc/ssh/sshd_config

/etc/ldap.conf

The key information in this file for enabling multiple domains and restricting log-ons is as follows;

# The search base must be a DN, e.g. dc=example,dc=com
base <%= base_dn %>

port 389
ldap_version 3
ssl no
pam_filter objectclass=posixAccount
pam_password md5
pam_login_attribute uid
pam_member_attribute uniquemember

# For each domain that you wish to access the host add the following;
nss_base_passwd ou=People,dc=mgm,dc=example,dc=com
nss_base_shadow ou=People,dc=mgm,dc=example,dc=com
nss_base_group  ou=Groups,dc=mgm,dc=example,dc=com
nss_base_netgroup  ou=Netgroup,dc=mgm,dc=example,dc=com
nss_base_passwd ou=People,dc=dev,dc=example,dc=com
nss_base_shadow ou=People,dc=dev,dc=example,dc=com
nss_base_group  ou=Groups,dc=dev,dc=example,dc=com
nss_base_netgroup  ou=Netgroup,dc=dev,dc=example,dc=com
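Rather than hand-maintaining those lines per domain, a short shell sketch (the domain list here is a hypothetical example) can generate them:

```shell
# Emit the four nss_base_* lines for each domain allowed on this host.
# The domain DNs below are hypothetical examples.
for dc in "dc=mgm,dc=example,dc=com" "dc=dev,dc=example,dc=com"; do
  echo "nss_base_passwd ou=People,$dc"
  echo "nss_base_shadow ou=People,$dc"
  echo "nss_base_group  ou=Groups,$dc"
  echo "nss_base_netgroup  ou=Netgroup,$dc"
done
```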

nss_initgroups_minimum_uid 1000
nss_initgroups_minimum_gid 1000
nss_reconnect_tries 5
nss_reconnect_maxconntries 5
nss_reconnect_sleeptime 1
nss_reconnect_maxsleeptime 5
rootbinddn cn=Directory Manager
uri ldap://ldap.dev.example.com/
tls_cacertdir /etc/openldap/cacerts
URI ldap://ldap.dev.example.com/
BASE dc=example,dc=com
bind_policy soft

# Tweaking some timeout values
# The following options are documented in man nss_ldap
sizelimit                       1000
idle_timelimit                  5
timelimit                       10
bind_timelimit                  5

/etc/nslcd.conf

The following attributes need to be set;

uri ldap://ldap.dev.example.com

binddn cn=Directory Manager

bindpw <%= bind_dn_password %>


# One base line for each domain that can log on to the host
base dc=mgm,dc=example,dc=com
base dc=dev,dc=example,dc=com

# Map-specific search bases
base   group  ou=Groups,dc=mgm,dc=example,dc=com
base   passwd ou=People,dc=mgm,dc=example,dc=com
base   shadow ou=People,dc=mgm,dc=example,dc=com
base   group  ou=Groups,dc=dev,dc=example,dc=com
base   passwd ou=People,dc=dev,dc=example,dc=com
base   shadow ou=People,dc=dev,dc=example,dc=com

/etc/nsswitch.conf

passwd:     files ldap
shadow:     files ldap
group:      files ldap

/etc/ssh/sshd_config

To enable LDAP authentication through ssh and to allow ssh keys, the following is required in this file;

AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-wrapper
<%- if @operatingsystem == 'Fedora' -%>
AuthorizedKeysCommandUser nobody
<%- else -%>
AuthorizedKeysCommandRunAs nobody
<%- end -%>

NOTE: The above is Puppet template code that determines whether the O/S is Fedora or Red Hat, since newer versions of sshd require AuthorizedKeysCommandUser.

Testing

You will need to restart sshd and start nslcd.
Stay logged on as root in a separate session in case something fails, as you will want to disable LDAP if you are unable to log on to the host for some reason.

If all is working you should be able to log on as an LDAP user.

From this point onward you should add your users to the domain they belong to.  Note that a user can only have an account in one domain, so it is useful to have an admin domain that is included on all hosts (e.g. the MGM domain shown above) alongside one of dev, stg or prod.  The dev, stg and prod domains should only appear on the hosts for those domains, whilst MGM appears across all of them.

Disclaimer

This is a shortened overview of what I implemented; I would be glad to consult with any organisation that would like a centralised multi-domain 389 DS/LDAP system with replicas and backups.

Monday, June 2, 2014

Puppet Issues

Some interesting issues today to resolve with Puppet.

First was an issue with obtaining the ca.pem from the master server.  This occurred as follows;

Could not request certificate: Neither PUB key nor PRIV key: header too long.

A quick Google search turned up https://groups.google.com/forum/#!topic/puppet-users/IDg9Qmm3n4Q, which states that the time or timezone is wrong.  Handy, but not what happened in this case.  Further investigation showed that the file system was full, which explains why it wouldn't work at all: the agent was unable to save the certificate.

Next, a time-out issue.  The bigger the Puppet configuration, the longer the agent takes to obtain its catalogue.  To resolve this use;

--configtimeout nnn

This tells the agent to wait longer than the default time for the server to build the catalogue.
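Alternatively, the same setting can live in the agent's puppet.conf (the 600-second value here is just an example):

```ini
# /etc/puppet/puppet.conf on the agent; value in seconds (example only)
[agent]
configtimeout = 600
```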

A helpful tip here: don't put your entire stack onto one host.

Tuesday, April 15, 2014

Cross Module Dependency in Puppet

Most Puppet documentation shows dependencies within a module or within a manifest, but a good Puppet configuration has a modular design, and when building hosts or roles you may need to order how the modules are applied.

It's well known that Puppet will apply modules in whatever order it likes while keeping dependencies within modules intact.  You may think that the following would ensure that your classes happen in the specified order;

Class['abc'] -> Class['efg'] -> Class['xyz']

You would think this ensures that module abc happens before efg, followed by xyz.  This is not the case, even if you have require dependencies within your modules.  I have seen actions in one module happen after another module that should have completed first, despite using this method.

To ensure that a particular action in one module happens before another, or better still that a whole module completes before the next, do the following;

1. Use require or the dependency arrows within each module so that the internal order is exact and Puppet cannot perform those actions at random; leave actions unordered only where the order genuinely does not matter.

2. Identify the last essential action in each module and use those as the dependency links in the node or manifest that brings your modules together.

Example

Module abc has an exec { 'first one': ... }
Module efg has a service { 'second one': ... }
Module xyz has an exec { 'third one': ... }

In your manifest, the relationship dependency that ensures the modules complete in exact order would be;

Exec['first one'] -> Service['second one'] -> Exec['third one']

This works because Puppet builds the catalogue first, so the resource types are already loaded.  A namespace is not required unless there is a clash of names; Puppet would complain if you had two or more services with the same name, so cross-module dependencies require unique names for your resources.
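Putting the pieces together, a minimal sketch of a node manifest enforcing the order (module and resource names are the hypothetical ones from the example above):

```puppet
# Sketch only: abc, efg and xyz are hypothetical modules, and the named
# resources are the last essential action in each module.
node 'web01.example.com' {
  include abc
  include efg
  include xyz

  # Chain the final resource of each module so whole modules complete in order
  Exec['first one'] -> Service['second one'] -> Exec['third one']
}
```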

Now you can perform real module dependency order within puppet.

Wednesday, January 15, 2014

Bug in firewall-config-0.3.9-1.fc20 and firewalld-0.3.9-1.fc20

Having (luckily) only run the upgrade on one of my FC20 hosts, I found that it broke the firewall completely, so upgrade at your own peril of losing connectivity to your hosts.

It was a %x format error; I can't remember the full details, as I wanted to get back to a working firewall asap, so I reinstalled firewall-config-0.3.8-1.fc20 and the matching version of firewalld.

This is to advise you NOT to update to the 0.3.9 version of the software and wait for the next release!

Thursday, November 28, 2013

Fedora 17 - Mobility Radeon HD5430 HDMI And Audio

Having recently upgraded to a nice new shiny HDMI TV/monitor for my Fedora 17 server (soon to be 19), I needed to get the HDMI audio working.  The TV I purchased claimed to have VGA (even though I wanted to eventually use HDMI) but did not, so HDMI audio was the only way of getting sound out of the TV, and I definitely didn't want extra speakers cluttering up the office space.

Since the system had already identified the necessary hardware through System Settings --> Sound, it was obvious that no additional drivers were required.  Many sites tell you to download the Nvidia drivers, but you don't need to if your monitor is working correctly and your device is identified in the sound settings.

A simple configuration change is required to GRUB to make the HDMI audio respond.

The only real change required is to add the following to the linux line.

radeon.audio=1

Making your line look as follows;

linux   /vmlinuz-3.9.10-100.fc17.x86_64 root=UUID=ff744af5-abdf-4696-a87b-3e3a5e5e055e ro rd.md=0 rd.lvm=0 rd.dm=0 KEYTABLE=uk rd.luks=0 LANG=en_US.UTF-8 quiet nouveau.modeset=0 radeon.audio=1

Wednesday, November 13, 2013

Bio-metrics, why are we still chasing a dead end?

The following article on the BBC caught my eye today;
http://www.bbc.co.uk/news/business-24898367

It amazes me that there is still research into an area that has more security flaws than any other system already presented.  Currently the trusted third party (Kerberos) and two-factor auth are in most cases still the strongest methods we have to date, until a human leaks the most essential part of the system.  At this point we should note that the weakness in any security system is its users and the people responsible for them.  We only need to look at certain government agencies around the world to know this, and they have their own tests that claim these people are "trustworthy".  Again, another topic for another day.

Although in most circumstances bio-metrics seem like a great idea, they are at the end of the day something Hollywood created to make films look good.  In reality, unless they have a backup system (which is essential), you could be wiped off the face of the earth, unrecognised, in the space of seconds.

The saying "make sure you've got clean underpants on as you might be hit by a bus" springs to mind here.

Bio-metrics assumes that everyone is a healthy human being, that they don't grow or change, and that they will never develop an illness that disfigures them in any way.  At this point I could stop, as you already understand why this is not safe as a security mechanism, but I won't; I'll add some more weight behind it, so that those in the bio-metrics industry understand why they should only develop these systems with an immediate backup, which in reality ought to be the primary system, as it would be better (and is yet to be discovered).

So the flaws in Bio-metrics;
1. Fingerprints have been used on laptops, phones and many other devices.  Although I joked earlier about Hollywood, they have proven the point that it is easy to obtain someone's fingerprint, and the police have been doing it for years.  Ah, but ... I hear you say.  No, no but.  Even a heat detector to make sure that the person is alive can be fooled with a warm heat source at the right temperature.  Further buts?  Well OK, let's check for a pulse; see the next point.
2. The heartbeat was one of the interesting ones recently announced, stating that the heart has a unique signature.  True, it does, but have we really done exhaustive tests?  Pacemakers have a similar signature, so already we've failed our security test.  Have we checked a person's heart after a heart attack or stroke to see if the rhythm remains the same?  Still, this is not secure enough, and we only need a recording device to generate the relevant beat.
3. As for voice, and the BBC link saying that there is no recording equipment that does uncompressed recording: I'm sorry, but I don't need to record you over the phone; I can do it face to face and get full uncompressed audio directly from you.  So no, voice is not a safe mechanism; it can easily be recorded and used to fool these systems.
4. Retinal scans.  Eyes can change too, even the unique pattern at the back.  Blood clots, cataracts, and more, not to mention losing them.

So I beg you, stop trying to link humans up to machines, or trying to find parts of the body to use as a security mechanism.  The body is a fragile thing, fragile things can be broken, and broken things won't allow users back into a system.

At the end of the day, and as was done in the old days, if someone wants something badly enough they will get it; they will always find a way.  The safest way to deal with things in today's hi-tech world is to do it face to face.  I believe that too many places have tried to make things too convenient, and it appears that with convenience comes higher risk.

Tuesday, November 12, 2013

Fedora 19 Custom Bootable DVD

I'm a stickler for scripted installs.  They're quick to produce provided you have a package list, and just as quick at installation as a LiveDVD.
The majority of web sites these days talk about creating a Live distribution, which is close to performing a ghost of your system: you have to build the system first and then use tools to create the DVD image for burning.

Even before PXE I was an avid fan of network installations (Solaris JumpStart and IBM NIM on AIX).  When PXE arrived and Linux was able to perform the same thing, I was over the moon.  The fact that you could take an equivalent (with some modifications) PXE-style install and apply it to a DVD image, by modifying the isolinux.cfg file, made automated DVD builds really easy, especially for one company that asked me to help modify their ability to build custom DVD images for Fedora boxes.  This was easy on FC14 and closely matched the values in a pxelinux.cfg file.

Fedora 19, on the other hand, needed some extra work, since GRUB2 brought a complete change to the attributes required in the isolinux.cfg file.  However, having sat down and hacked about with the isolinux.cfg file, I can safely say that you can still build a scripted install of Fedora without all those excess tools and without having to install an OS first.

The kickstart files remain unchanged in their answers (or so I've found so far).

To make the isolinux.cfg file recognise your kickstart file you need to do the following;

1. Copy the DVD contents to a folder
2. Create your kickstart at the top level of the DVD rom directory structure
3. In the isolinux directory edit the isolinux.cfg file

The file I created had the following content;
default install

label install
  kernel vmlinuz
  append load_ramdisk=1 ramdisk_size=9216 initrd=initrd.img network ks=cdrom::/myInstall.ks inst.repo=cdrom inst.text

This will perform a text-based installation rather than a GUI one.  The key change is that the ks argument requires the device containing the kickstart file: between the two colons goes either nothing, as above (no spaces), for the system to find the cdrom device, or the full path of the device, e.g. /dev/sr0.  The next change is inst.repo, which in this case tells the boot loader that the install packages are on the cdrom and that it should be mounted.  Finally, instead of just typing text we now have to type inst.text to perform a non-GUI installation.

Once isolinux is changed you can create an iso image of the directory structure and then burn to disk.
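As a sketch of that final step (the directory name, output filename and option set are assumptions, and mkisofs/genisoimage options vary by version), the command can be assembled and echoed for review before actually running it:

```shell
# Build a bootable ISO from the copied DVD tree.  ISO_DIR and OUT are
# hypothetical names; the command is echoed so it can be checked first.
ISO_DIR=dvd
OUT=fedora19-custom.iso
CMD="mkisofs -o $OUT -b isolinux/isolinux.bin -c isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table -R -J $ISO_DIR"
echo "$CMD"
# eval "$CMD"   # uncomment to actually create the image
```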

All the relevant arguments to the append line can be found at http://wwoods.fedorapeople.org/doc/boot-options.html#_inst_stage2