Tuesday, November 25, 2014

XBMC 13.2 Gotham random changing resolution

Having recently updated to XBMC Gotham I have found some interesting issues, such as;
- the screen keeps changing size while watching videos, which messes up the GUI and overscans too far
- some settings that used to be under system settings have moved.

To clear this up I eventually worked out where these can be fixed.

Screen changing size randomly
Most web sites talk about setting the overscan in the system settings, which works until one of your videos changes the resolution; then your overscan is too far out and the video goes beyond the screen. Many others refer to setting the resolution through xorg.conf, which on Gotham does not exist by default and does not need to.

You should be able to set the resolution to the highest possible for your HDMI TV, and it may be that XBMC shows a narrow view of the GUI as though it is widescreen. Also, if the video changes size during viewing and the GUI is too big after the video finishes, then it's probably not XBMC, as I found out, but your TV.

My TV had an auto size setting which could choose 16:9, 4:3 and so on. By changing this to 16:9 the screen stopped changing resolution, so check your TV for an aspect ratio setting.

Black lines
The black lines, and videos having different aspect ratios, are dealt with by playing a video and then pressing the OK button on your remote.

Select the video reel icon, which presents various video options. Here adjust your aspect ratio, remove black lines, and tweak other settings until your videos look good; then toward the end of the list you'll find "set as default", which will apply the settings to all your videos.

You can also use the same technique for audio, to boost the volume, by selecting the speaker icon next to the video reel.

Now I can enjoy my videos without the screen constantly changing. So watch for conflicts between your TV's and XBMC's aspect ratio settings.

Building Vagrant Base Boxes

In this post we will look at how to create Vagrant base boxes of any type and store them so that multiple developers can use them. This will be a step by step guide in which I will build a CentOS 7 console VM and an Ubuntu 14.04 desktop VM. You will learn how to configure the system through VirtualBox and then create a Vagrantfile to allow others to deploy your base boxes.
Step 1
Download the versions of the operating systems you wish to use with Vagrant. You can use either the live ISOs or the full install media, the difference being that you have greater flexibility with the full install for creating minimal builds. The live ISOs may include too much.
NOTE: you can customise your builds using a provisioning script as part of Vagrant to add to your base boxes, so don't put too much on them to start with.
Your base box is exactly what it says, the core common components that everyone needs, which means you would have console only and/or GUI base boxes for each OS.
Downloads are obtainable for many ISO images from the main Linux distros.
- http://www.centos.org
- http://www.ubuntu.com
- http://www.linuxmint.com/
- http://www.slackware.com/
- https://access.redhat.com/downloads
- https://www.suse.com/
- http://www.opensuse.org/en/
Step 2
All users of your system will want to have the same virtual environment installed; in this guide we will use VirtualBox, and it is important to install the guest additions too (known as the Extension Pack on the download page).
- https://www.virtualbox.org/wiki/Downloads
Once you have VirtualBox installed go to File --> Preferences. Select Extensions, and add the extension pack. This will ensure that your VMs can make use of the vboxsf file system, which Vagrant uses for mounting directories, running provisioning scripts and more.
Step 3
Create your base VM in VirtualBox.
First let's create the base box for a CentOS console server build. For this we will install a minimal Linux build.
VM requirements
To build a CentOS host you will need the following minimal hardware requirements;
- 1024 MB RAM
- a dynamically allocated virtual disk; the bigger the better, as the VM will grow to use the space and will be small to start with.
NOTE: the disk should be VMDK; this allows Vagrant to make the disk available to different VM hosting software.
- a cdrom ISO image
- for a console build you don't need much in the way of video RAM, so the default 12 MB is sufficient
- attach the CentOS 7 ISO to the cdrom
- the first network adapter should be NAT. All other NICs can be added in the Vagrantfile when building new VMs from the base box.
Once you have configured the VM hardware, install the operating system. In this case choose a minimal install for CentOS.
One key difference between OS choices is whether the ssh daemon is installed by default. You should ensure that ssh is available on the system either during or after installation.
Another difference is whether the C compiler and kernel headers are installed. You should include these in the install so the guest additions can be installed.
During the install set the root password to vagrant.
If you can add users during the install, create a user called vagrant with a password of vagrant.
Step 4
Preparing the VM for Vagrant requires some extra configuration.  In this step we will cover everything necessary to configure the VM as a base box, regardless of the OS.
Log in as root.
NOTE: on Debian type systems you may need to install dkms;
$ apt-get install -y dkms linux-kernel-headers
NOTE: on Red Hat type systems you may need kernel headers and development tools;
$ yum -y install kernel-headers
$ yum -y groupinstall "Development tools"
Update the system;
For Debian;
$ apt-get update -y
$ apt-get upgrade -y
For Red Hat;
$ yum -y update
Install guest additions.
Select Devices --> Install guest additions
In a console only VM this will insert the CD into the virtual drive, so you'll need to mount it;
$ mount /dev/cdrom /mnt
Run the VBoxLinuxAdditions.run command to install;
$ /mnt/VBoxLinuxAdditions.run
After installation is complete unmount and eject the cdrom;
$ umount /mnt
$ eject /dev/cdrom
Add the vagrant user if you were not able to during install;
$ useradd -m vagrant
$ passwd vagrant
Set the password to vagrant
Add the vagrant user to the /etc/sudoers file by adding the following line;
vagrant  ALL=(ALL)   NOPASSWD:   ALL
You will also need to comment out the line that requires a tty for sudo.
In /etc/sudoers make sure the line;
Defaults  requiretty
is changed to;
#Defaults  requiretty
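Both sudoers changes can be scripted; here is a minimal sketch (the fix_sudoers function name is mine; you would normally run it as root against /etc/sudoers, after taking a backup):

```shell
# fix_sudoers FILE - comment out the requiretty line and grant the
# vagrant user passwordless sudo. A sketch; FILE is normally /etc/sudoers.
fix_sudoers() {
  # Comment out the requiretty line, if present
  sed -i 's/^Defaults[[:space:]]\{1,\}requiretty/#&/' "$1"
  # Add the passwordless rule for the vagrant user, once only
  grep -q '^vagrant' "$1" || \
    echo 'vagrant  ALL=(ALL)   NOPASSWD:   ALL' >> "$1"
}

# Usage (as root): fix_sudoers /etc/sudoers
```

Running it twice is safe, as the rule is only appended when absent.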
Add the ssh key for the vagrant user;
$ mkdir -p /home/vagrant/.ssh
$ curl -k https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub > /home/vagrant/.ssh/authorized_keys
# Ensure we have the correct permissions set
$ chmod 0700 /home/vagrant/.ssh
$ chmod 0600 /home/vagrant/.ssh/authorized_keys
$ chown -R vagrant /home/vagrant/.ssh
NOTE: here we have used the generic vagrant public key, which means your users will need to configure their ssh to use the vagrant private key, available from the same site that the curl command points to.
NOTE: you may also need to install curl (or use wget instead) for the above step.
You may need to install the ssh service.
On Debian based systems;
$ apt-get install -y openssh-server
On Red Hat systems;
$ yum -y install openssh-server
Edit the /etc/ssh/sshd_config file and ensure the following is set;
- Port 22
- PubkeyAuthentication yes
- AuthorizedKeysFile %h/.ssh/authorized_keys
- PermitEmptyPasswords no
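Those four settings can be enforced non-interactively with a small idempotent helper (a sketch; ensure_setting is my own name, and you would run it as root against /etc/ssh/sshd_config):

```shell
# ensure_setting FILE KEY VALUE - set KEY to VALUE in a config file,
# replacing any existing (possibly commented-out) line, or appending it.
ensure_setting() {
  if grep -qE "^#?$2\b" "$1"; then
    sed -i -E "s|^#?$2\b.*|$2 $3|" "$1"
  else
    echo "$2 $3" >> "$1"
  fi
}

# Usage (as root):
# ensure_setting /etc/ssh/sshd_config Port 22
# ensure_setting /etc/ssh/sshd_config PubkeyAuthentication yes
# ensure_setting /etc/ssh/sshd_config AuthorizedKeysFile '%h/.ssh/authorized_keys'
# ensure_setting /etc/ssh/sshd_config PermitEmptyPasswords no
```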
Restart the sshd service (named sshd on Red Hat systems, ssh on Debian);
$ service sshd restart
At this stage you can add any further software to this base build before we package it,  or extra software can be installed using vagrant.
Shut down the VM;
$ init 0
Step 5
Now we are ready to package the box using the vagrant commands.
$ vagrant package --base "Name of VirtualBox VM"
NOTE: replace "Name of VirtualBox VM" with the name of the VM shown in your VirtualBox list of VMs.
This will export the VM and turn it into a compressed package.box that vagrant understands.
You can then move the package.box to the Web server that hosts your vagrant base boxes so that other users can use them.
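Publishing is just a file copy; here is a sketch (the paths are hypothetical, adjust to your web server layout) that also records a checksum so users can verify their download:

```shell
# publish_box SRC DEST - copy a packaged box into the web server's
# document root and record a checksum alongside it. A sketch.
publish_box() {
  cp "$1" "$2"
  # Users can verify their download with: sha256sum -c DEST.sha256
  sha256sum "$2" > "$2.sha256"
}

# Usage (paths are examples):
# publish_box package.box /var/www/html/baseboxes/CentOS7_x86-64.box
```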
Step 6
If you use a Web server to host your vagrant base boxes you can then create Vagrantfile environments.  For example;
Vagrant::Config.run do |config|
  config.vm.define :default_fail do |failconfig|
    # This is here to fail the config if all machines are started by mistake
    failconfig.vm.provision :shell, :inline => 'echo "FAIL: This box is invalid, use vagrant up BOXTYPE" && exit 1'
  end
  # Test build
  config.vm.define :testme do |testme|
    testme.vm.customize ["modifyvm", :id, "--name", "testme", "--memory", "1024"]
    testme.vm.network :hostonly, "33.33.33.54"
    testme.vm.host_name = "testme"
  end
  # GUI build
  config.vm.define :gui do |gui|
    gui.vm.customize ["modifyvm", :id, "--name", "gui", "--memory", "1024"]
    gui.vm.network :hostonly, "33.33.33.55"
    gui.vm.host_name = "gui"
    gui.vm.provision :shell, :path => 'bin/gui_profile.sh'
  end
 
  ###
  # General config
  ###
  config.ssh.timeout = 60
  config.vm.box     = "default.box"
  config.vm.box_url = "http://myvagrant.websrv.net/baseboxes/CentOS7_x86-64.box"
  config.ssh.username = "vagrant"
  config.ssh.private_key_path= "/home/user/.ssh/vagrant_id"
end
The above configuration builds a basic minimal install through the testme definition: if the Vagrantfile is in your current directory and you run;
vagrant up testme
it will build the minimal CentOS VM.
The gui configuration uses the same minimal build and then executes a shell script to install more software and configure the system further. You can use this method to run Puppet or other orchestration tools.
The general config area specifies what applies to any build unless you override it, e.g. if you want a different base box for one machine then you would add a vm.box_url to its vm.define block.
And finally
I haven't discussed the Web server build here, as that simply requires the ability to serve files; any simple web server where you can add the base box files will do.
Now you should go play.

Monday, July 28, 2014

Multiple Domain LDAP or DS389

Having recently been tasked with setting up a new LDAP system that takes sub-domains into account, and enables users from different domains to access systems in specific domains, I thought I'd write up how it was done. Most LDAP set-ups on the web only deal with one domain, and those that cover more than one show a single domain and then use organisational units to do the rest of the work.
This set up will use the following sub-domains with a root;
root: dc=example,dc=com
sub0: dc=dev,dc=example,dc=com
sub1: dc=stg,dc=example,dc=com
sub2: dc=prod,dc=example,dc=com

Now that we have the above domains, we can set about creating the LDAP configuration.  During the project, Puppet manifests were used to create the general LDAP configuration files as per many of the examples on the web (e.g. http://ostechnix.wordpress.com/2013/02/05/setup-ldap-server-389ds-in-centosrhelscientific-linux-6-3-step-by-step/), but to create the master LDIF files I used Puppet file resources to load the initial domain and any data for the domain, such as users and groups, and then DS 389 was started.

Each sub-domain was placed in the LDIF file as a DC and not an OU which most examples show.  This keeps in line with the sub-domains of our environment.
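For example, a sub-domain entry in the master LDIF is a plain domain (dc) entry rather than an organisational unit; a minimal sketch for the dev sub-domain would look like this:

```ldif
dn: dc=dev,dc=example,dc=com
objectClass: top
objectClass: domain
dc: dev
```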

The diagram for the set up looks as follows;
The arrows in the diagram represent replication from the master to the replicas.

In the case of the environment the master had all domains made available to it, thus allowing centralised administration and backups.

The most important part of configuring LDAP replicas through the LDIF files is to remove the following line and leave an empty top-level domain in the LDIF, allowing replication to add the relevant configuration and data;

aci: (targetattr = "*")(version 3.0; acl "SIE Group"; allow (all) groupdn = "ldap:///cn=slapd-ldaplocal,cn=389 Directory Server,cn=Server Group,cn=master.example.com,ou=example.com,o=NetscapeRoot";)

This line defines the server and top level domain, and is only required on the master server; it must be removed from all replica configurations.

Your master server should have the following line in its LDIF file;

aci: (targetattr ="*")(version 3.0;acl "Directory Administrators Group";allow (all) (groupdn = " ldap:///cn=Directory Administrators, dc=example,dc=com");)

Master Replication Script

The following script enables you to set up the server replication.  Note you will need to make substitutions based on your hostnames, etc., and that this is a Puppet template that generates a shell script.

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

# Create the change log
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=changelog5,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
cn: changelog5
nsslapd-changelogdir: /var/lib/dirsrv/slapd-<%= @hostname %>/changelogdb

_END_

# Create the replica to share
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicaid: 7
nsds5replicatype: 3
nsds5flags: 1
nsds5ReplicaPurgeDelay: 604800
nsds5ReplicaBindDN: cn=replication manager,cn=config

_END_

<% @consumers.each do | consumer | -%>
ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_
<% end -%>

Adding New Consumers

For each LDAP replica you wish to allow to replicate with the master you will need to add it to the master.  Here is some Puppet code that will generate the shell script that will add further replicas.
NOTE: The dn: cn=STGAgreement will need to change for each replica, so ensure that you name them accordingly (one per replica host).

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replicationagreement
cn: STGAgreement
nsds5replicahost: <%= consumer %>
nsds5replicaport: 389
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
description: agreement between <%= @hostname %> and <%= consumer %>
nsds5replicaupdateschedule: 0001-2359 0123456
nsds5replicatedattributelist: (objectclass=*) $ EXCLUDE authorityRevocationList
nsds5replicacredentials: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
nsds5BeginReplicaRefresh: start

_END_

Authenticating The Replica/Consumer

The consumer must also participate in the replication.  The following Puppet code generates a shell script that is run on the replica to enable it to replicate with the master;

# Create the Bind DN user that will be used for replication

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replication manager,cn=config
changetype: add
objectClass: inetorgperson
objectClass: person
objectClass: top
cn: replication manager
sn: RM
userPassword: <%= scope.function_hiera(['example_389_hiera::root_user_password']) %>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

_END_

# DS389 needs to be restarted after adding this user

service dirsrv restart

ldapmodify -v -h <%= @hostname %> -p 389 -D "cn=directory manager" -w <%= scope.function_hiera(['example_389_hiera::root_user_password']) %> <<_END_
dn: cn=replica,cn="<%= scope.function_hiera(['example_389_hiera::base_dn']) %>",cn=mapping tree,cn=config
changetype: add
objectclass: top
objectclass: nsds5replica
objectclass: extensibleObject
cn: replica
nsds5replicaroot: <%= scope.function_hiera(['example_389_hiera::base_dn']) %>
nsds5replicatype: 2
nsds5ReplicaBindDN: cn=replication manager,cn=config
nsds5flags: 0
nsds5replicaid: 2

_END_

Final Steps

Once you have the replication agreements in place it is likely that the consumers are failing and reporting that they have different generation IDs.  To fix this you need to resync the replication.

First check the log file on the master. This can be seen by;

tail -f /var/log/dirsrv/slapd-<instance>/errors

The error may look something like;
[28/Mar/2014:10:54:49 +0000] NSMMReplicationPlugin - agmt="cn=STGAgreement" (master.example.com:389): Replica has a different generation ID than the local data.

If you do see an error such as the above then run the following command from any LDAP server, substituting your master hostname, Directory Manager password and base DN for the placeholders;

ldapmodify -h <master> -p 389 -D "cn=directory manager" -w <password> <<_END_
dn: cn=STGAgreement,cn=replica,cn="<base dn>",cn=mapping tree,cn=config
changetype: modify
replace: nsds5beginreplicarefresh
nsds5beginreplicarefresh: start
_END_


The Clients

The clients must also participate in the LDAP authentication.  This means that the following configurations will need changing;
  • /etc/ldap.conf
  • /etc/nslcd.conf
  • /etc/nsswitch.conf
  • /etc/ssh/sshd_config

/etc/ldap.conf

The information in this file that is key to enabling multiple domains, and restricting log-ons, is as follows;

base <%= base_dn %>

port 389
ldap_version 3
ssl no
pam_filter objectclass=posixAccount
pam_password md5
pam_login_attribute uid
pam_member_attribute uniquemember

# For each domain that you wish to access the host add the following;
nss_base_passwd ou=people,dc=mgm,dc=example,dc=com
nss_base_shadow ou=people,dc=mgm,dc=example,dc=com
nss_base_group  ou=Groups,dc=mgm,dc=example,dc=com
nss_base_netgroup  ou=Netgroup,dc=mgm,dc=example,dc=com
nss_base_passwd ou=people,dc=dev,dc=example,dc=com
nss_base_shadow ou=people,dc=dev,dc=example,dc=com
nss_base_group  ou=Groups,dc=dev,dc=example,dc=com
nss_base_netgroup  ou=Netgroup,dc=dev,dc=example,dc=com

nss_initgroups_minimum_uid 1000
nss_initgroups_minimum_gid 1000
nss_reconnect_tries 5
nss_reconnect_maxconntries 5
nss_reconnect_sleeptime 1
nss_reconnect_maxsleeptime 5
rootbinddn cn=Directory Manager
uri ldap://ldap.dev.example.com/
tls_cacertdir /etc/openldap/cacerts
URI ldap://ldap.dev.example.com/
BASE dc=example,dc=com
bind_policy soft

# Tweaking some timeout values
# The following options are documented in man nss_ldap
sizelimit                       1000
idle_timelimit                  5
timelimit                       10
bind_timelimit                  5

/etc/nslcd.conf

The following attributes need to be set;

uri ldap://ldap.dev.example.com

binddn cn=Directory Manager

bindpw <%= bind_dn_password %>


# One for each domain that can log on to the host
base dc=mgm,dc=example,dc=com
base dc=dev,dc=example,dc=com

base   group  ou=Groups,dc=mgm,dc=example,dc=com
base   passwd ou=People,dc=mgm,dc=example,dc=com
base   shadow ou=People,dc=mgm,dc=example,dc=com
base   group  ou=Groups,dc=dev,dc=example,dc=com
base   passwd ou=People,dc=dev,dc=example,dc=com
base   shadow ou=People,dc=dev,dc=example,dc=com

/etc/nsswitch.conf

passwd:     files ldap
shadow:     files ldap
group:      files ldap

/etc/ssh/sshd_config

To enable LDAP authentication through ssh, and to allow ssh keys the following is required in this file;

AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-wrapper
<%- if @operatingsystem == 'Fedora' -%>
AuthorizedKeysCommandUser nobody
<%- else -%>
AuthorizedKeysCommandRunAs nobody
<%- end -%>

NOTE: The above is Puppet template code that determines whether the O/S is Fedora or Red Hat, since newer versions of sshd use AuthorizedKeysCommandUser.

Testing

You will need to restart sshd, and start nslcd.
Keep a root session open in case something fails, as you will want to disable LDAP if you are unable to log on to the host for some reason.

If all is working you should be able to log on as an LDAP user.
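A quick way to confirm that NSS is resolving accounts is getent; it resolves local and LDAP users alike, so test with a user that only exists in LDAP. A small sketch (the function name is mine):

```shell
# check_user USERNAME - verify a user resolves through NSS (files or ldap).
check_user() {
  if getent passwd "$1" > /dev/null; then
    echo "user $1 resolves"
  else
    echo "user $1 NOT found" >&2
    return 1
  fi
}

# Usage: check_user some_ldap_user
```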

From this point onward you should add your users to the domain that they belong to.  Note, a user can only have a user account in one domain, so it is good to have an admin domain which is included on all hosts (e.g. the MGM domain shown above) along with either dev, stg or prod.  The dev, stg and prod domains should only appear on the hosts for those domains, whilst MGM appears across all domains.

Disclaimer

This is a shortened overview of what I implemented; I would be glad to consult for any organisation that would like a centralised multi-domain DS 389 or LDAP system with replicas and backup.

Monday, June 2, 2014

Puppet Issues

Some interesting issues today to resolve with Puppet.

First was an issue with obtaining the ca.pem from the master server.  The error was as follows;

Could not request certificate: Neither PUB key nor PRIV key: header too long.

A quick Google search came up with https://groups.google.com/forum/#!topic/puppet-users/IDg9Qmm3n4Q which states that the time or timezone is wrong.  Handy, but not what happened in this case.  Further investigation showed that the file system was full, which is a great reason why it wouldn't work at all: the agent was unable to save the cert.

Next, a timeout issue.  The bigger the Puppet config, the longer the agent takes to obtain its catalogue.  To resolve this use;

--configtimeout nnn

This tells the agent to wait longer than the default time for the server to build the catalogue.
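The same timeout can be made permanent in the agent's puppet.conf rather than passing the flag on every run (the value is in seconds; 600 here is just an example):

```ini
# /etc/puppet/puppet.conf
[agent]
configtimeout = 600
```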

A helpful tip here is not to put your entire stack onto one host.

Tuesday, April 15, 2014

Cross Module Dependency in Puppet

Most Puppet documentation shows dependencies within a module or within a manifest, but a good Puppet configuration has a modular design for building hosts or roles, which means you may need to order how the modules are actioned.

It is well known that Puppet will apply modules in whatever order it likes, but will keep dependencies within modules intact. You may think that using the following would ensure that your classes happen in the specified order;

Class['abc'] -> Class['efg'] -> Class['xyz']

This, you would think, ensures that module abc will happen before efg, followed by xyz. This is not the case, even if you have require dependencies within your modules.  I have seen actions in one module happen after a module that should have completed first using this method.

To ensure that a particular action happens in one module before another, or better still that a whole module completes before the next, you should do the following;

1. Ensure the use of require or the dependency arrows within each module so that there is an exact order and Puppet cannot perform other actions at random. Only force the order where it is actually required, as forcing unnecessary ordering can cause issues.

2. Identify the last essential action that happens in each module and use those as the dependency links in the node or manifest that brings your modules together.

Example

Module abc has an exec{'first one': ... }
Module efg has a service{'second one': ... }
Module xyz has an exec{'third one': ... }

In your manifest your relationship dependency to ensure modules complete in exact order would be;

Exec['first one'] -> Service['second one'] -> Exec['third one']

This works because Puppet loads the catalogue first, so the types are already loaded. Namespacing is not required unless you have a clash of names; Puppet would complain if you had two or more resources of the same type and name, so cross-module dependency does require unique names for your resources.
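Putting it together, the node manifest might look like this (the module and resource names are the hypothetical ones from the example above):

```puppet
# Sketch: force whole-module ordering by chaining each module's
# final resource in the node that includes them.
node 'myhost' {
  include abc
  include efg
  include xyz

  # 'first one', 'second one' and 'third one' are the last essential
  # resources declared inside abc, efg and xyz respectively.
  Exec['first one'] -> Service['second one'] -> Exec['third one']
}
```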

Now you can perform real module dependency order within puppet.

Wednesday, January 15, 2014

Bug in firewall-config-0.3.9-1.fc20 and firewalld-0.3.9-1.fc20

Having luckily run the upgrade on only one of my FC20 hosts, I found that it broke the firewall completely, so upgrade at your own peril of losing connectivity to your hosts.

A %x error; I can't remember the full details as I wanted to get back to a working firewall asap, so I reinstalled firewall-config-0.3.8-1.fc20 and firewalld of the same version.

This is to advise you NOT to update to the 0.3.9 version of the software; wait for the next release!