Monday, June 2, 2014

Puppet Issues

Some interesting issues today to resolve with Puppet.

The first was an issue obtaining the ca.pem from the master server.  It presented as follows:

Could not request certificate: Neither PUB key nor PRIV key: header too long.

A quick Google search turned up https://groups.google.com/forum/#!topic/puppet-users/IDg9Qmm3n4Q, which states that the time or timezone is wrong.  Handy, but not what happened in this case.  Further investigation showed that the file system was full, which explains why it wouldn't work at all: the agent could not write the certificate to disk.
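When this error appears, it is worth checking disk space before chasing the SSL message itself. A quick sketch; the paths assume a stock Puppet 3 agent layout, and the rm step is my usual remediation rather than something from the post (only ever do it on an agent, never on the master):

```shell
# Is the filesystem holding the agent's SSL directory full?
df -hP /var/lib/puppet 2>/dev/null || df -hP /var/lib

# If the disk filled mid-download, the agent may be left holding a
# truncated certificate; after freeing space, clear it and request
# a fresh one (agent only, never the master):
# rm -rf /var/lib/puppet/ssl
# puppet agent --test
```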

Next, a timeout issue.  The bigger the Puppet configuration, the longer the server takes to compile the catalogue, and the more likely the agent is to give up waiting.  To resolve this, use:

--configtimeout nnn

This will tell the agent to wait longer than the default time for the server to build the catalogue.
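The same value can also be set permanently in the agent's puppet.conf rather than passed on every run. A minimal sketch; the path and section assume a stock Puppet 3 agent, and 600 is just an example value:

```ini
# /etc/puppet/puppet.conf
[agent]
# Wait up to 10 minutes for the master to compile the catalogue
# (the default on Puppet 3 was 120 seconds).
configtimeout = 600
```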

A helpful tip here is not to put your entire stack onto one host.

Tuesday, April 15, 2014

Cross Module Dependency in Puppet

Most Puppet documentation shows dependencies within a module or within a manifest, but a good Puppet configuration should have a modular design for building hosts or roles, which means you may need to order how the modules are actioned.

It's a known fact that Puppet will apply modules in whatever order it likes, but will keep dependencies within modules intact. You may think that the following would ensure that your classes happen in the specified order:

Class['abc'] -> Class['efg'] -> Class['xyz']

You would think this ensures that module abc happens before efg, followed by xyz. This is not the case, even if you have require dependencies within your modules.  Using this method, I have seen resources in one module applied after a module that should have completed first.

To ensure that a particular action in one module happens before another, or better still that a whole module completes before the next one begins, do the following:

1. Use require or the dependency arrows within each module so that there is an exact order and Puppet cannot perform other actions at random, unless the order genuinely doesn't matter.  Chaining everything this way can cause issues of its own, though, so:

2. Identify the last essential action that happens in each module and use those as the dependency links in the node or manifest that brings your modules together.

Example

Module abc has an exec    { 'first one':  ... }
Module efg has a service  { 'second one': ... }
Module xyz has an exec    { 'third one':  ... }

In your manifest, the relationship dependency to ensure the modules complete in exact order would be:

Exec['first one'] -> Service['second one'] -> Exec['third one']

This works because Puppet compiles the whole catalogue first, so the resource types are already loaded. Namespacing is not required unless you have a clash of names; Puppet will complain if you have two or more resources of the same type and title, so cross-module dependencies require unique names for your resources.
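Put together, a node manifest using this approach might look like the following sketch. The class names and resource titles are the placeholders from the example above, and the node name is mine, not a real configuration:

```puppet
node 'web01.example.com' {
  include abc
  include efg
  include xyz

  # Chain the last essential resource of each module so that each
  # module completes before the next one starts.
  Exec['first one'] -> Service['second one'] -> Exec['third one']
}
```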

Now you can perform real module dependency order within puppet.

Wednesday, January 15, 2014

Bug in firewall-config-0.3.9-1.fc20 and firewalld-0.3.9-1.fc20

Having luckily run the upgrade on only one of my FC20 hosts, I found that it broke the firewall completely, so upgrade at your own peril of losing connectivity to your hosts.

It was a %x formatting error; I can't remember the full details, as I wanted to get back to a working firewall ASAP, so I reinstalled firewall-config-0.3.8-1.fc20 and firewalld of the same version.

This is to advise you NOT to update to the 0.3.9 version of the software; wait for the next release!
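For reference, the rollback amounts to a package downgrade. A sketch, shown as a dry run, since the exact yum invocation is my assumption rather than from the post (Fedora 20 still used yum):

```shell
# Known-good versions from the post; run as root on the affected host.
pkgs="firewalld-0.3.8-1.fc20 firewall-config-0.3.8-1.fc20"
# Printed as a dry run -- drop the echo to actually downgrade,
# then restart the service.
echo yum downgrade $pkgs
echo systemctl restart firewalld
```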

Thursday, November 28, 2013

Fedora 17 - Mobility Radeon HD5430 HDMI And Audio

Having recently upgraded to a nice new shiny HDMI TV/monitor for my Fedora 17 server (soon to be 19), I needed to get the HDMI audio working.  The TV I purchased claimed to have VGA (even though I wanted to eventually use HDMI) but did not, so HDMI audio was the only way of getting sound out of the TV, and I definitely didn't want extra speakers cluttering up the office space.

The system had already identified the necessary hardware under System Settings --> Sound, so it was obvious that no extra drivers needed to be installed.  Many sites tell you to download the Nvidia drivers, but you don't need to if your monitor is working correctly and your device is identified in the sound settings.

A simple configuration change is required to GRUB to make the HDMI audio respond.

The only real change required is to add the following to the linux line.

radeon.audio=1

Making your line look as follows:

linux   /vmlinuz-3.9.10-100.fc17.x86_64 root=UUID=ff744af5-abdf-4696-a87b-3e3a5e5e055e ro rd.md=0 rd.lvm=0 rd.dm=0 KEYTABLE=uk rd.luks=0 LANG=en_US.UTF-8 quiet nouveau.modeset=0 radeon.audio=1
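To make the option survive kernel updates, you can add it to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate grub.cfg (Fedora 17 ships GRUB2). A sketch of that edit, demonstrated on a scratch copy of the file; the existing command-line contents below are placeholders:

```shell
# Work on a scratch copy; on the real system edit /etc/default/grub
# in place and then run: grub2-mkconfig -o /boot/grub2/grub.cfg
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet nouveau.modeset=0"' > "$cfg"
# Append radeon.audio=1 inside the quoted kernel command line.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 radeon.audio=1"/' "$cfg"
cat "$cfg"
```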

Wednesday, November 13, 2013

Bio-metrics, why are we still chasing a dead end?

The following article on the BBC caught my eye today:
http://www.bbc.co.uk/news/business-24898367

It amazes me that there is still research into an area that has more security flaws than any other system already presented.  Currently the trusted third party (Kerberos) and two-factor auth are in most cases still the strongest methods we have to date, until a human leaks the most essential part of the system.  At this point we should note that the weakness in any security system is its users and the people responsible for them.  We only need look at certain government agencies around the world to know this, and they have their own tests that claim these people are "trustworthy". Again, another topic for another day.

Although in most circumstances bio-metrics seem like a great idea, they are at the end of the day something Hollywood has created to make films look good.  In reality, unless they have a backup system (which is essential), you could be wiped off the face of the earth, unrecognised, in the space of seconds.

The saying "make sure you've got clean underpants on as you might be hit by a bus" springs to mind here.

Bio-metrics assume that everyone is a healthy human being, that they don't grow or change, and that they will never develop an illness that will disfigure them in any way or form.  I could stop at this point, as you now already understand why this is not safe as a security mechanism, but I won't; I'll add some more weight behind this, so that those in the bio industry understand why they should only develop these systems with an immediate backup, which in reality should be the primary system, as it would be the better one (as yet to be discovered).

So, the flaws in bio-metrics:
1. Fingerprints have been used on laptops, phones and many other devices.  Although I joked earlier about Hollywood, films have proven the point that it is easy to obtain someone's fingerprint, and the police have been doing it for years. Ah, but ... I hear you say.  No, no buts.  Even a heat detector that checks the person is alive can be fooled with a warm heat source at the right temperature.  Further buts?  Well OK, let's check for a pulse; see the next point.
2. The heartbeat was one of the more interesting ones recently announced, with the claim that the heart has a unique signature. True, it does, but have we really done exhaustive tests?  Pacemakers have a similar signature, so already we've failed our security test.  Have we checked a person's heart after a heart attack or stroke to see if the rhythm remains the same?  Still this is not secure enough, and we only need a recording device to generate the relevant beat.
3. As for the voice, and the BBC link saying that there is no recording equipment that does uncompressed recording: I'm sorry, but I don't need to record you over the phone; I can do it face to face and get full uncompressed audio directly from you.  So no, voice is not a safe mechanism, and it can easily be recorded and used to fool these systems.
4. Retinal scans.  Eyes can change too, even the unique pattern at the back: blood clots, cataracts and more, not to mention losing them.

So I beg you, stop trying to link humans up to machines, or trying to find parts of the body to use as security mechanisms.  The body is a fragile thing, fragile things can be broken, and broken things won't allow users back into a system.

At the end of the day, as in the old days, if someone wants something badly enough they will get it; they will always find a way.  The safest way to deal with things in today's hi-tech world is to do it face to face.  I believe too many places have tried to make things too convenient, and it appears that with convenience comes higher risk.

Tuesday, November 12, 2013

Fedora 19 Custom Bootable DVD

I'm a stickler for scripted installs. They're quick to produce provided you have a package list, and just as quick at installation as a LiveDVD.
The majority of web sites these days talk about creating a Live distribution, which is close to performing a ghost of your system, as you have to build the system first and then use tools to create the DVD image for burning.

Even before PXE I was an avid fan of network installations (Solaris Jumpstart and IBM NIM on AIX).  When PXE arrived and Linux was able to do the same thing, I was over the moon.  The fact that you could take an equivalent (with some modifications) PXE-style install and apply it to a DVD image, with changes to the isolinux.cfg file, made automated DVD builds really easy, especially for one company that asked me to help modify their ability to build custom DVD images for Fedora boxes.  This was easy on FC14 and matched closely the values in a pxelinux.cfg file.

Fedora 19, on the other hand, needed some extra work, since GRUB2 brought a complete change to the attributes required in the isolinux.cfg file.  However, having sat down and hacked about with isolinux.cfg, I can safely say that you can still build a scripted install of Fedora without having to use all those excess tools and without having to install an OS first.

The kickstart files remain unchanged in their answers (or so I've found so far).
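For reference, a minimal text-mode kickstart of the sort referred to here might look like the following sketch; every value in it is a placeholder of mine, not from the original build:

```
# myInstall.ks -- minimal text-mode kickstart (placeholder values)
install
cdrom
lang en_US.UTF-8
keyboard uk
timezone Europe/London
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@core
%end
```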

To make the isolinux.cfg file recognise your kickstart file, you need to do the following:

1. Copy the DVD contents to a folder
2. Create your kickstart at the top level of the DVD rom directory structure
3. In the isolinux directory edit the isolinux.cfg file

The file I created had the following content:
default install

label install
  kernel vmlinuz
  append load_ramdisk=1 ramdisk_size=9216 initrd=initrd.img network ks=cdrom::/myInstall.ks inst.repo=cdrom inst.text

This will perform a text-based installation rather than a GUI one.  The key change is that the ks argument now requires the device containing the kickstart file: between the two colons you either leave it blank, as above (no spaces), for the system to find the cdrom device itself, or give the full path of the device, e.g. /dev/sr0.  The next change is inst.repo, which here tells the boot loader that the install packages are on the cdrom and that it should be mounted.  Finally, instead of just typing text we now have to type inst.text to perform a non-GUI installation.

Once isolinux.cfg is changed, you can create an ISO image of the directory structure and then burn it to disk.
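The ISO build step can be done with mkisofs (or genisoimage) and the usual isolinux boot flags. A sketch, shown as a dry run; the output name, volume label and tree path are placeholders of mine:

```shell
# Printed as a dry run -- drop the leading echo to actually build.
echo mkisofs -o fedora19-custom.iso \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -J -R -V Fedora-19-custom \
  /path/to/dvd-tree
```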

All the relevant arguments to the append line can be found at http://wwoods.fedorapeople.org/doc/boot-options.html#_inst_stage2


Wednesday, October 30, 2013

Linux Automatic Login At Command Line

Since I'm about to build a tiny little media player with the Raspberry Pi, I thought I'd write myself some notes on set-up features I might require.  Creating an automated log-on for the GUI is fairly straightforward, as that generally relies on messing around with the GDM files.

It turns out it is also easy for the command line too.

/etc/init/tty.conf is used to control what happens with console log on.

The line:
exec /sbin/mingetty $TTY
is the default line that tells the getty process to present the log-on prompt. You can change this to suit your needs, so if you wanted root to log on automatically you could use:

exec /sbin/mingetty --autologin root $TTY

This will log the system in as root at a command prompt.

Other options to mingetty;
--loginprog=/sbin/someprogram
--chdir=/somedirectory
--chroot=/jaildirectory

man mingetty will tell you the rest

For systemd-based versions you need to do the following:
Change to the /etc/systemd/system folder.
If you need more terminals, then do the following:
cp /lib/systemd/system/getty@.service /etc/systemd/system/autologin@.service
ln -sf /etc/systemd/system/autologin@.service  /etc/systemd/system/getty.target.wants/getty@tty1.service

In the file getty.target.wants/getty@tty1.service, change the ExecStart line to the following:
ExecStart=-/sbin/agetty --autologin root %I

Obviously, substitute your specific user for root.
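The ExecStart rewrite from the steps above can be scripted. A sketch, demonstrated on a scratch copy of the unit file; 'alice' is a placeholder user, and on a real host the file would be /etc/systemd/system/autologin@.service:

```shell
unit=$(mktemp)
# Stand-in for the copied getty@.service unit.
printf '[Service]\nExecStart=-/sbin/agetty --noclear %%I $TERM\n' > "$unit"
# Swap in the autologin invocation, as described in the post.
sed -i 's|^ExecStart=.*|ExecStart=-/sbin/agetty --autologin alice %I|' "$unit"
grep ^ExecStart "$unit"
```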