Saturday, July 4, 2015

XBMC iPlayer and other tv

Scenario

In this page I will take you through getting iPlayer working in XBMC through the Chromium web browser, in full screen mode and with audio.
The BBC have decided that the public funding they receive doesn't warrant providing their services to those who use other types of computer system to access what should be a public service (shame on you BBC, I don't know why we have to pay you so much money, it's disgusting). The systems left out include XBMC and many other 3rd party applications.
The good news is that web browsers are still valid.
Another problem is that Adobe (most likely being paid by the beeb) have stopped supporting Flash Player for Linux. Don't they realise the number of people using Linux is growing and only getting bigger?
This page is mainly for those using the XBMC o/s, but may contain useful elements if you've installed it on other operating systems (o/s).
Note that this method can be used for any of the on demand sites.

Requirements

  • XBMC plugin Advanced Launcher
  • the Pepper Flash plugin for the o/s (to get Flash audio working)
  • chromium-browser
  • a wrapper script to start Chromium from XBMC nicely
  • http://www.bbc.co.uk/iplayer, or any other urls you want to view

Download and install Advanced Launcher

Download the plugin from:
http://ftp.heanet.ie/mirrors/xbmc/addons/eden/plugin.program.advanced.launcher/plugin.program.advanced.launcher-1.7.6.zip

Install the plugin through XBMC using the install from zip file through System -> Add-ons

The Adobe Flash plugin is no longer supported on Linux platforms, so you need to obtain the Pepper Flash plugin for Chromium. This can be done using apt-get as follows;

Install the plugin for Chromium with;
  sudo apt-get install pepperflashplugin-nonfree
  sudo update-pepperflashplugin-nonfree --install

Wrapper scripts to run chromium within XBMC

I wrote 2 scripts to make it possible to run the web browser through XBMC without having to exit to a different window manager. I set various options with Chromium to make it run in full screen mode. The key difference between the two scripts is whether we want the address bar or not. You will need to access the command line on your XBMC system for this, which you can do with CTRL+ALT+F2. You will then be asked to log in; you will need the user name and password that were created during install. I created a directory called bin in that user's home directory and put the scripts in there.

Full screen kiosk mode script, called browser.sh (remember to chmod +x browser.sh after creating);

#!/bin/bash
CHROMIUM_FLAGS=""
flashso="/usr/lib/pepperflashplugin-nonfree/libpepflashplayer.so"
# extract the Flash version string from the shared object
flashversion=`strings $flashso|grep ^LNX|sed -e "s/^LNX //"|sed -e "s/,/./g"`
CHROMIUM_FLAGS="$CHROMIUM_FLAGS --ppapi-flash-path=$flashso --ppapi-flash-version=$flashversion"
# start a lightweight window manager so Chromium can take the full screen
openbox &
/usr/bin/chromium-browser --start-maximized --disable-new-tab-first-run --no-default-browser-check --no-first-run --kiosk $CHROMIUM_FLAGS $*
# when the browser exits, stop the window manager and return to XBMC
killall -9 openbox

Full screen with address bar, called browser-nonfull.sh;
#!/bin/bash
CHROMIUM_FLAGS=""
flashso="/usr/lib/pepperflashplugin-nonfree/libpepflashplayer.so"
flashversion=`strings $flashso|grep ^LNX|sed -e "s/^LNX //"|sed -e "s/,/./g"`
CHROMIUM_FLAGS="$CHROMIUM_FLAGS --ppapi-flash-path=$flashso --ppapi-flash-version=$flashversion"
openbox &
/usr/bin/chromium-browser --start-maximized --disable-new-tab-first-run --no-default-browser-check --no-first-run $CHROMIUM_FLAGS $*
#/usr/bin/firefox
killall -9 openbox
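If you want to check a script before adding it to XBMC you can run it by hand. This is only a rough sanity check and assumes XBMC's X server is running on display :0 (adjust if yours differs);

export DISPLAY=:0
chmod +x ~/bin/browser.sh ~/bin/browser-nonfull.sh
~/bin/browser.sh http://www.bbc.co.uk/iplayer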

Adding the scripts directory as a source

You must make your bin directory available to the Advanced Launcher so that you can use your scripts.
  • Select Advanced Launcher
  • Right click Default
  • Select Manage Sources
  • Select Add Source
  • Select Browse and locate your bin directory

Creating the menu items

Create the menu items to launch the web browser with the right starting page using the Advanced Launcher. Since we used $* in the scripts we can pass extra parameters, one of which is the URL we require.
To do this go to the Programs menu in XBMC.
  • Select Advanced Launcher
  • Using the mouse right click the Default
  • Select add new launcher
  • Select standalone (in my example below it is the xbmc-scripts)
  • Locate your script.
  • Provide any arguments, e.g. the url, such as http://www.bbc.co.uk/iplayer
  • Give the program a name, e.g. BBC iPlayer
  • Select Linux as the operating system
  • Select ok for the next 2 items which are thumbnails and fanarts
  • The final link
You now have a link that will start iPlayer in a web browser and go straight to the web page. You will however still need a keyboard and mouse, since external applications don't use the remote. On the upside you do have access to lots of other online services whose add-ons are reported as broken in XBMC or Kodi.

One thing you need to be aware of though is that the audio will take a little while to become available, as XBMC has to work out that it's no longer using the audio device before the web browser can use it. This can take several minutes, so be patient.

Thursday, June 18, 2015

Red Hat 6 to Red Hat 7 system start up configuration

The Linux start up sequence for many years had been a hybrid of BSD and SVR4 start-up files. These files consisted of /etc/inittab, /etc/rc.local and the /etc/init.d/* and /etc/rc[0-6].d/* locations. The sequence was sequential and in most cases required each command to complete before moving on to the next. Debian based systems attempted to tackle this issue and speed up the start up process by introducing upstart, which enabled milestones and dependencies, but it was cumbersome and not as effective as the Solaris 10 start up method. Red Hat 7 took its startup sequence from the Fedora project, with Fedora 18 and 19 making the change to systemd, which by Fedora 20 was firmly in place and saw the demise of the old mechanism; like Red Hat 7 it still allows the old style SVR4 scripts to be used.
In the rest of this document I will take you through the necessary changes from v6 to v7 start up, covering;
  • Essential commands to start and stop services
  • Commands to disable or enable a service at boot
  • How to change the default run level
  • How to configure your own boot script

Essential service commands

In this section we will look at how you start, stop and view the status of services in Red Hat 7.

Starting a service Red Hat 6

The service command in Red Hat 6 is used to control daemon processes and associated files, such as the pid file, subsystem lock file and others. In the event that a service (daemon) terminates unexpectedly these files remain in place as a footprint to let us know that the service didn't stop cleanly. To get the service back up and running you would need to remove these files from within the /var subdirectories. They are normally in /var/run and /var/lock.
service serviceName start
E.g. service dhcpd start
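If dhcpd had previously died uncleanly you might clear its leftover files first. The paths below are the usual Red Hat 6 locations and are only an illustration; check what your particular service leaves behind;

rm -f /var/run/dhcpd.pid
rm -f /var/lock/subsys/dhcpd
service dhcpd start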

Starting a service Red Hat 7

systemctl start serviceName
E.g. systemctl start dhcpd.service

Stopping a service Red Hat 6

service serviceName stop
E.g. service network stop

Stopping a service Red Hat 7

systemctl stop serviceName
E.g. systemctl stop network.service
You'll notice that the syntax of the systemctl command leads to better history editing, since the name of the service comes after the action, unlike the old service command, allowing us to simply recall the command, remove the service name and put in the next.

Systemctl syntax

systemctl action serviceName
systemctl [start|stop|status] serviceName

Listing services Red Hat 6

chkconfig --list [serviceName]
This will list whether a service is set to run at boot time if the service name is supplied, or list the boot-time setting of every service.
If you wish to see the current state of all processes then you would require a shell loop like the following;
for serviceName in $(chkconfig --list | cut -f1 -d' ')
do
  service $serviceName status
done

Listing services Red Hat 7

systemctl -a
This one command tells us whether the service is running or is set to run at boot time.
You can also see if an individual service is enabled with;
systemctl is-enabled serviceName
Another useful command for checking a daemon's status is the new journalctl command, which shows the information that has been written to the journal. systemctl points you at this command (and its -xn option) if a service fails to start from the command line.
To list all available services;
systemctl list-unit-files --type=service
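As a rough end to end check of a single service on a v7 host (sshd is just an example service name), the following commands are useful; journalctl -u gives the per-unit view of the journal mentioned above;

systemctl status sshd.service      # current state and recent log lines
systemctl is-enabled sshd.service  # will it start at boot?
systemctl is-active sshd.service   # is it running right now?
journalctl -u sshd.service -n 20   # last 20 journal entries for the unit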

Red Hat 6 enable or disable services at boot

The chkconfig command was used up to and including Red Hat 6 systems to define whether a service should be included in a run level as part of its start up sequence. This command, like the service command, can still be used in v7, but you should become familiar with the new commands.
Enabling a service for any runlevel was performed with;
chkconfig serviceName on
And to disable it;
chkconfig serviceName off
To specify the runlevel you would use the --level option
E.g. chkconfig --level 23 network on
These commands simply created the correct S or K script in the relevant rc.d directories.
To add your own boot script to a system you used;
chkconfig --add scriptName
Where scriptName is the name of the boot script in the /etc/rc.d/init.d directory that you created. There is also a corresponding --del to remove it from the boot sequence, but not the init.d directory.
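Putting that together for a hypothetical script called myservice on a v6 host, the sequence would look something like this;

cp myservice /etc/rc.d/init.d/myservice
chmod +x /etc/rc.d/init.d/myservice
chkconfig --add myservice
chkconfig --level 35 myservice on
chkconfig --list myservice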

Red Hat 7 enable or disable services at boot

In the v7 systems we continue to use the one command, systemctl.
Enable a service or resource with;
systemctl enable serviceName
E.g. systemctl enable cupsd.service
Disable a service with;
systemctl disable serviceName

Modifying the boot sequence

In this section we will look at how to set the default runlevel, add your own service and change the runlevel on boot.

Red Hat 6 setting default runlevel

The /etc/inittab file is used up to and including v6. One line in the file needed the numerical value changed. The initdefault line required the number in column 2 to be set to your desired run level. E.g.
id:3:initdefault:
Note that with the lines in the inittab it was always essential to have the correct number of : characters on the line.
The above example sets the default run level to 3.

Red Hat 7 setting default runlevel

The inittab has now been retired and a symlink is now in use that points to the relevant target file defining what is to be started on boot. The files that define the levels are located in the /lib/systemd/system directory and end in .target, e.g. graphical.target.
To find out the default target (the equivalent of the default runlevel) use the following command;
systemctl get-default
To change the resources to start on boot you use the set-default option as follows;
systemctl set-default multi-user.target
If you wish to use run levels in v7 you can by specifying;
runlevel?.target
The ? should be changed for the number of the run level you want to set. You can then use;
systemctl set-default runlevel5.target
The above example is the same as graphical.target.
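As a quick sketch, checking and changing the default on a v7 host looks like this; systemctl isolate switches target immediately without a reboot;

systemctl get-default                   # e.g. graphical.target
systemctl set-default multi-user.target
systemctl isolate multi-user.target     # switch now, without rebooting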

Red Hat 6 temporarily change runlevel at boot

Through the grub boot menu you would press a key to halt the countdown. You then press e to edit the kernel boot line you wish to start the system with, add the runlevel number to the end of the kernel line and press the relevant key to boot the system using your current settings.
This of course can be used to alter any kernel values.

Red Hat 7 temporarily change runlevel at boot

The process is the same as v6, but instead of specifying the numerical run level you will need to use the target name at the end of the kernel line. For example to use rescue mode;
systemd.unit=rescue.target
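For illustration, the end of the linux16 line in the GRUB 2 editor might then look something like this (the kernel version and the options before the target will differ on your system);

linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/rhel-root ro quiet systemd.unit=rescue.target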

Red Hat 6 boot scripts

These are simple bash shell scripts which make use of the case statement, but require 2 special comments to appear in the file. The 2 comments required are;
#description:
#chkconfig:
Note that they must have the colon (:) symbol immediately after the keyword.
Following the colon are the directives to define what the script is for and what the default start and stop run levels are.
Example:
#!/bin/bash
#description: Describes what your service does, used by apps that can provide more info
#chkconfig: 235 99 01
# the chkconfig line above sets the service to start at
# run levels 2, 3 and 5 as the 99th item, e.g. S99xxx,
# and to stop as the 1st, e.g. K01xxx
case $1 in
  'start')
    # start the service here and create pid and lock files
    ;;
  'stop')
    # stop the service using pid file, remove lock file
    ;;
  'status')
    # use pid file to check process is running
    # and if lock file exists
    ;;
esac

Red Hat 7 boot scripts

The equivalent of the init.d script is to create a .service file in the /lib/systemd/system directory. Note that you can still use your old style init.d script by placing it into /usr/lib/systemd/scripts.
[Unit]
Description=Describe your service

[Service]
Type=oneshot
ExecStart=/usr/lib/systemd/scripts/yourInitScript start
ExecStop=/usr/lib/systemd/scripts/yourInitScript stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
You change the relevant parts, e.g. what this service needs before it can start, and the daemon or script to run.
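Once the .service file is in place, systemd needs to re-read its unit files and the service can be enabled and started. Assuming the file was saved as yourInitScript.service;

systemctl daemon-reload
systemctl enable yourInitScript.service
systemctl start yourInitScript.service
systemctl status yourInitScript.service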



Tuesday, April 14, 2015

XBMC useful repos and add ons

This page is my list of useful XBMC links for add-ons.

Live sport streams
http://www.besttvbox.com/best-xbmc-sports-channel-list/

Navi-x from http://code.google.com/p/navi-x/downloads/list
Super repo
Instructions to install https://superrepo.org/get-started/
The repo link http://srp.nu

Live tv streams from around the world including UK
https://kinkin-xbmc-repository.googlecode.com/files/repository.Kinkin-1.2.zip

Unofficial addons
http://addons.tvaddons.ag/

If you want to stream video direct from the web, XBMC can make use of various streaming protocols, e.g. rtmp;
Add the following line to a file called bt-sport1.strm;
rtmp://80.82.78.87:443/liverepeater playpath=28 swfUrl=http://popeoftheplayers.eu/atdedead.swf live=1 pageUrl=http://popeoftheplayers.eu/crichd.php?id=28&width=530&height=370 token=#atd%#$ZH

As long as the strm file is in your videos directory it will attempt to play if the link is good and the options correct.

Tuesday, November 25, 2014

XBMC 13.2 Gotham random changing resolution

Having recently updated to XBMC Gotham I have found some interesting issues, such as;
- the screen keeps changing size while watching videos, which messes up the GUI and overscans too far
- some settings that used to be under system settings have gone.

To clear this up I eventually worked out where these can be fixed.

Screen changing size randomly
Most web sites talk about setting the overscan in the system settings, which works until one of your videos changes the resolution; then your overscan is too far out and the video goes beyond the screen. Many others refer to setting the resolution through xorg.conf, which on Gotham does not exist by default and does not need to.

You should be able to set the resolution to the highest possible for your HDMI TV, and it may be that XBMC shows a narrow view of the GUI as though it were wide screen. Also, if the video changes size during viewing and the GUI is too big after the video finishes, then it's probably not XBMC, as I found out, but your TV.

My TV had an auto size setting which could choose 16:9, 4:3 and so on. By changing this to 16:9 the screen stopped changing resolution, so check your TV for an aspect ratio setting.

Black lines
The black lines, and videos having different aspect ratios, are dealt with by playing a video and then pressing your remote's OK button.

Select the video reel icon, and this presents various video options. Here adjust your aspect ratio to remove black lines, and tweak other settings until your videos look good; then toward the end of the list you'll find "Set as default", which will apply it to all your videos.

You can also use the same technique for audio to boost the volume, by selecting the speaker icon next to the movie reel.

Now I can enjoy my videos without the screen constantly changing. So watch out for conflicts between the TV and XBMC aspect ratio settings.

Building Vagrant Base Boxes

In this page we will look at how to create Vagrant base boxes of any type and store them so that multiple developers can use them. This will be a step by step guide and I will build a CentOS 7 console VM and an Ubuntu 14.04 desktop VM. You will learn how to configure the system through VirtualBox and then create a Vagrantfile to allow others to deploy your base boxes.
Step 1
Download the versions of the operating systems you wish to use with Vagrant. You can use either the live ISOs or the full install media, the difference being that you have greater flexibility with the full install for creating minimal builds. The live ISOs may include too much.
NOTE: you can customise your base builds using a provisioning script as part of Vagrant, so don't put too much on them to start with.
Your base box is exactly what it says, the core common components that everyone needs, which means you would have console only and/or GUI base boxes for each OS.
Downloads are obtainable for many ISO images from the main Linux distros.
- http://www.centos.org
- http://www.ubuntu.com
- http://www.linuxmint.com/
- http://www.slackware.com/
- https://access.redhat.com/downloads
- https://www.suse.com/
- http://www.opensuse.org/en/
Step 2
All users of your system will want to have the same virtualisation environment installed; in this guide we will use VirtualBox, and it is important to install the guest additions too (known as the extension pack on the download page).
- https://www.virtualbox.org/wiki/Downloads
Once you have VirtualBox installed go to File --> Preferences. Select Extensions, and add the extension pack. This will ensure that your VMs can make use of the vboxsf file system, which Vagrant uses for mounting directories, running provision scripts and more.
Step 3
Create your base VM in VirtualBox.
First let's create the base box for a CentOS console server build. For this we will install a minimal Linux build.
VM requirements
To build a CentOS host you will need the following minimal hardware requirements;
- 1024 MB RAM
- make the virtual disk dynamically allocated, and the bigger the better. The VM will grow to use the space and will be small to start with.
NOTE: the disk should be VMDK; this allows Vagrant to make the disk available to different VM hosting software.
- cdrom iso image
- for a console build you don't need much in the way of video RAM, so the default 12 MB is sufficient
- attach the CentOS 7 iso to the cdrom
- the 1st network adapter should be NAT. All other NICs can be added in the Vagrantfile when building new VMs from the base box.
Once you have configured the VM hardware, install the operating system. In this case choose a minimal install for CentOS.
One key difference in the OS choice is whether ssh daemon is installed by default. You should ensure that ssh is available on the system either during or after installing.
Another difference is whether the C compiler and kernel headers are installed.  You should include these in the install so the guest additions can be installed.
During the install set the root password to vagrant.
If you can add users during the install, create a user called vagrant with a password of vagrant.
Step 4
Preparing the VM for Vagrant requires some extra configuration. In this step we will cover all the steps necessary to configure the VM to be a base box, regardless of the OS.
Log in as root.
NOTE: on Debian type systems you may need to install dkms;
$ apt-get install -y dkms linux-kernel-headers
NOTE: on Red Hat type systems you may need kernel headers and development tools;
$ yum -y install kernel-headers
$ yum -y groupinstall "Development tools"
Update the system;
For Debian;
$ apt-get update -y
$ apt-get upgrade -y
For Red Hat;
$ yum -y update
Install guest additions.
Select Devices --> Install guest additions
In a console only VM this will insert the CD into the virtual drive, so you'll need to mount it;
$ mount /dev/cdrom /mnt
Run the VBoxLinuxAdditions.run command to install;
$ /mnt/VBoxLinuxAdditions.run
After installation is complete unmount and eject the cdrom;
$ umount /mnt
$ eject cdrom
Add the vagrant user if you were not able to during install;
$ useradd -m vagrant
$ passwd vagrant
Set the password to vagrant
Add the vagrant user to the /etc/sudoers file by adding the following line;
vagrant  ALL=(ALL)   NOPASSWD:   ALL
You will also need to comment out the line that requires a tty for sudo.
In /etc/sudoers find this line;
Defaults  requiretty
Change it to;
#Defaults  requiretty
Add the ssh key for the vagrant user;
$ mkdir -p /home/vagrant/.ssh
$ curl -k https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub > /home/vagrant/.ssh/authorized_keys
# Ensure we have the correct permissions set
$ chmod 0700 /home/vagrant/.ssh
$ chmod 0600 /home/vagrant/.ssh/authorized_keys
$ chown -R vagrant /home/vagrant/.ssh
NOTE: here we have used the generic Vagrant public key, which means your users will need to configure their ssh to use the Vagrant private key, available from the same site that the curl command above points to.
NOTE: you may also need to install curl (or wget) for the above download step.
You may need to install the ssh service.
On Debian based systems;
$ apt-get install -y openssh-server
On Red Hat systems;
$ yum -y install openssh-server
Edit the /etc/ssh/sshd_config file and ensure the following is set;
- Port 22
- PubkeyAuthentication yes
- AuthorizedKeysFile %h/.ssh/authorized_keys
- PermitEmptyPasswords no
Restart the ssh daemon (the service is called sshd on Red Hat systems and ssh on Debian based systems);
$ service sshd restart
At this stage you can add any further software to this base build before we package it, or extra software can be installed later using Vagrant.
Shut down the VM;
$ init 0
Step 5
Now we are ready to package the box using the vagrant command.
$ vagrant package --base "Name of VirtualBox VM"
NOTE: replace "Name of VirtualBox VM" with the name of the VM shown in your VirtualBox list of VMs.
This will export the VM and turn it into a compressed package.box that vagrant understands.
You can then move the package.box to the web server that hosts your Vagrant base boxes so that other users can use them.
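Before publishing it you can sanity check the box locally; the box name centos7-base below is just an example;

$ vagrant box add centos7-base package.box
$ mkdir boxtest && cd boxtest
$ vagrant init centos7-base
$ vagrant up
$ vagrant ssh
$ vagrant destroy -f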
Step 6
If you use a Web server to host your vagrant base boxes you can then create Vagrantfile environments.  For example;
Vagrant::Config.run do |config|
  config.vm.define :default_fail do |failconfig|
    # This is here to fail the config if all machines are started by mistake
    failconfig.vm.provision :shell, :inline => "echo 'FAIL: This box is invalid, use vagrant up BOXTYPE'; exit 1"
  end
  # Test build
  config.vm.define :testme do |testme|
    testme.vm.customize ["modifyvm", :id, "--name", "testme", "--memory", "1024"]
    testme.vm.network :hostonly, "33.33.33.54"
    testme.vm.host_name = "testme"
  end
  # GUI build
  config.vm.define :gui do |gui|
    gui.vm.customize ["modifyvm", :id, "--name", "gui", "--memory", "1024"]
    gui.vm.network :hostonly, "33.33.33.55"
    gui.vm.host_name = "gui"
    gui.vm.provision :shell, :path => 'bin/gui_profile.sh'
  end
 
  ###
  # General config
  ###
  config.ssh.timeout = 60
  config.vm.box     = "default.box"
  config.vm.box_url = "http://myvagrant.websrv.net/baseboxes/CentOS7_x86-64.box"
  config.ssh.username = "vagrant"
  config.ssh.private_key_path= "/home/user/.ssh/vagrant_id"
end
The above configuration builds a basic minimal install through the testme definition. If the Vagrantfile is in your current directory and you run;
vagrant up testme
it will build the minimal CentOS VM.
The gui configuration will use the minimal build and then execute a shell script to install more software and configure the system further. You can use this method to run Puppet or other orchestration tools.
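The bin/gui_profile.sh script referenced above isn't shown in this post; as a rough idea, a provisioning script of that sort might look like the following for a CentOS 7 guest (the package names are only illustrative);

#!/bin/bash
# runs inside the guest as root when vagrant up provisions the gui build
yum -y groupinstall "GNOME Desktop"
yum -y install firefox
systemctl set-default graphical.target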
The general configuration area specifies what applies to any build unless you override it, e.g. if you want a different base box for one machine then you would add a vm.box_url to its vm define block.
And finally
I haven't discussed the web server build here, as it simply requires the ability to list files; any simple web server where you can add the base box files will do.
Now you should go play.

Monday, July 28, 2014

Multiple Domain LDAP or DS389

Having recently been tasked with setting up a new LDAP system that takes sub-domains into account, and that enables users from different domains to be granted access to systems in specific domains, I thought I'd write up how it was done, since most LDAP set-ups on the web only deal with one domain, and those that claim to cover more than one only show a single domain and then use organisational units to do the rest of the work.
This set up will use the following sub-domains with a root;
root: dc=example,dc=com
sub0: dc=dev,dc=example,dc=com
sub1: dc=stg,dc=example,dc=com
sub2: dc=prod,dc=example,dc=com

Now that we have the above domains, we can set about creating the LDAP configuration. During the project Puppet manifests were used to create the general LDAP configuration files as per many of the examples on the web (e.g. http://ostechnix.wordpress.com/2013/02/05/setup-ldap-server-389ds-in-centosrhelscientific-linux-6-3-step-by-step/), but to create the master LDIF files I used Puppet file resources to load the initial domain structure and any data for the domain, such as users and groups, and then DS 389 was started.

Each sub-domain was placed in the LDIF file as a DC, and not as an OU which most examples show. This keeps in line with the sub-domains of our environment.
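For illustration, a sub-domain entry written as a DC rather than an OU looks roughly like this in the initial LDIF (using the example domains above);

dn: dc=dev,dc=example,dc=com
objectClass: top
objectClass: domain
dc: dev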

The diagram for the set up looks as follows;
The arrows in the diagram represent replication from the master to the replicas.

In the case of the environment the master had all domains made available to it, thus allowing centralised administration and backups.

The most important part of configuring LDAP replicas through the LDIF files is to remove the following line, leaving an empty top level domain in the LDIF, so that replication can add the relevant configuration and data;

aci: (targetattr = "*")(version 3.0; acl "SIE Group"; allow (all) groupdn = "ldap:///cn=slapd-ldaplocal,cn=389 Directory Server,cn=Server Group,cn=master.example.com,ou=example.com,o=NetscapeRoot";)

This line defines the server and top level domain, and is only required in the master server, but removed from all replica configurations.

Your master server should have the following line in its LDIF file;

aci: (targetattr ="*")(version 3.0;acl "Directory Administrators Group";allow (all) (groupdn = " ldap:///cn=Directory Administrators, dc=example,dc=com");)

Master Replication Script

The following script enables you to set up the server replication. Note you will need to make substitutions based on your hostnames, etc., and that this is a Puppet template that generates a shell script.

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

# Create the change log
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=changelog5,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
cn: changelog5
nsslapd-changelogdir: /var/lib/dirsrv/slapd-<%= @hostname %>/changelogdb

_END_

# Create the replica to share
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicaid: 7
nsds5replicatype: 3
nsds5flags: 1
nsds5ReplicaPurgeDelay: 604800
nsds5ReplicaBindDN: cn=replication manager,cn=config

_END_

<% @consumers.each do | consumer | -%>
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_
<% end -%>

Adding New Consumers

For each LDAP replica you wish to allow to replicate with the master you will need to add it to the master. Here is some Puppet code that will generate the shell script to add further replicas.
NOTE: The dn: cn=STGAgreement will need to change for each replica, so ensure that you name them accordingly (one per replica host).

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_

Authenticating The Replica/Consumer

The consumer must also participate in the replication. The following Puppet code generates a shell script that is run on the replica/consumer to enable it to replicate with the master;

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicatype: 2
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5flags: 0
nsds5replicaid: 2

_END_

Final Steps

Once you have the replication agreements in place it is likely that the consumers are failing and reporting that they have different generation IDs. To fix this you need to resync the replication.

First check the log file on the master. This can be seen by;

tail -f /var/log/dirsrv/slapd-/errors

The error may look something like;
[28/Mar/2014:10:54:49 +0000] NSMMReplicationPlugin - agmt="cn=STGAgreement" (master.example.com:389): Replica has a different generation ID than the local data.

If you do see an error such as the above then run the following command from any LDAP server;

ldapmodify -h <hostname> -p 389 -D "cn=directory manager" -w <password> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<base_dn>",cn=mapping tree,cn=config
changetype: modify
replace: nsds5beginreplicarefresh
nsds5beginreplicarefresh: start
_END_


The Clients

The clients must also participate in the LDAP authentication. This will mean that the following configuration files need changing;
  • /etc/ldap.conf
  • /etc/nslcd.conf
  • /etc/nsswitch.conf
  • /etc/ssh/sshd_config

/etc/ldap.conf

The information in this file that is key to enabling multiple domains, and to restricting log-ons, is as follows;

base example.com


base <%= base_dn %>

port 389
ldap_version 3
ssl no
pam_filter objectclass=posixAccount
pam_password md5
pam_login_attribute uid
pam_member_attribute uniquemember

# For each domain that you wish to access the host add the following;
nss_base_passwd ou=people,dc=mgm.example.com
nss_base_shadow ou=people,dc=mgm.example.com
nss_base_group  ou=Groups,dc=mgm.example.com
nss_base_netgroup  ou=Netgroup,dc=mgm.example.com
nss_base_passwd ou=people,dc=dev.example.com
nss_base_shadow ou=people,dc=dev.example.com
nss_base_group  ou=Groups,dc=dev.example.com
nss_base_netgroup  ou=Netgroup,dc=dev.example.com

nss_initgroups_minimum_uid 1000
nss_initgroups_minimum_gid 1000
nss_reconnect_tries 5
nss_reconnect_maxconntries 5
nss_reconnect_sleeptime 1
nss_reconnect_maxsleeptime 5
rootbinddn cn=Directory Manager
uri ldap://ldap.dev.example.com/
tls_cacertdir /etc/openldap/cacerts
URI ldap://ldap.dev.example.com/
BASE dc=example,dc=com
bind_policy soft

# Tweaking some timeout values
# The following options are documented in man nss_ldap
sizelimit                       1000
idle_timelimit                  5
timelimit                       10
bind_timelimit                  5

/etc/nslcd.conf

The following attributes need to be set;

uri ldap://ldap.dev.example.com

binddn cn=Directory Manager

bindpw <%= bind_dn_password %>


# One for each domain that can log on to the host
base dc=mgm.example.com
base dc=dev.example.com

base   group  ou=Groups,<%= dc=mgm,dc=example,dc=com %>
base   passwd ou=People,<%= dc=mgm,dc=example,dc=com %>
base   shadow ou=People,<%= dc=mgm,dc=example,dc=com %>
base   group  ou=Groups,<%= dc=dev,dc=example,dc=com %>
base   passwd ou=People,<%= dc=dev,dc=example,dc=com %>
base   shadow ou=People,<%= dc=dev,dc=example,dc=com %>

/etc/nsswitch.conf

passwd:     files ldap
shadow:     files ldap
group:      files ldap

/etc/ssh/sshd_config

To enable LDAP authentication through ssh, and to allow ssh keys the following is required in this file;

AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-wrapper
<%- if @operatingsystem == 'Fedora' -%>
AuthorizedKeysCommandUser nobody
<%- else -%>
AuthorizedKeysCommandRunAs nobody
<%- end -%>

NOTE: The above is Puppet code that determines if the O/S is Fedora or Red Hat, since the newer version of sshd requires AuthorizedKeysCommandUser.

Testing

You will need to restart sshd, and start nslcd.
Ensure that you are logged on as root in case something fails, as you will want to disable LDAP if you are unable to log on to the host for some reason.

If all is working you should be able to log on as an LDAP user.
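A few quick checks from a client are worthwhile before relying on it; jbloggs below is just a hypothetical test account that exists in one of the domains;

getent passwd jbloggs
id jbloggs
ssh jbloggs@localhost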

From this point onward you should add your users to the domain that they belong to. Note that a user can only have a user account in one domain, so it is good to have an admin domain which is included on all hosts (e.g. the MGM domain shown above) along with either the dev, stg or prod domain. The dev, stg or prod domains should only appear on the hosts for those domains, whilst MGM appears across all domains.

Disclaimer

This is a shortened overview of what I implemented; I would be glad to consult for any organisation that would like a centralised multi-domain DS 389 or LDAP system with replicas and backups.

Monday, June 2, 2014

Puppet Issues

Some interesting issues today to resolve with Puppet.

First was an issue with obtaining the ca.pem from the master server.  This occurred as follows;

Could not request certificate: Neither PUB key nor PRIV key: header too long.

A quick Google search came up with https://groups.google.com/forum/#!topic/puppet-users/IDg9Qmm3n4Q which states that the time or timezone is wrong. Handy, but not what happened in this case. Further investigation showed that the file system was full, which is a great reason why it wouldn't work at all: it couldn't write the certificate.

Next, a timeout issue. The bigger the Puppet config the worse the system will be at obtaining its catalogue in time. To resolve this use;

--configtimeout nnn

This will tell the agent to wait longer than the default time for the server to build the catalogue.
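For example, a one-off agent run with a longer timeout (the 600 seconds here is only a suggestion); the same value can also be set permanently as configtimeout in the [agent] section of puppet.conf;

puppet agent --test --configtimeout 600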

A helpful tip here is not to put your entire stack onto one host.