Sunday, November 12, 2023

Why you should NOT run Docker containers as root

Why a Docker container should never run as root!

Most organisations ensure that you need sudo access to run Docker commands on your systems, but even if you're using Kubernetes or OpenShift, or if you have added users to the docker group, you still run the risk of being compromised.

As a simple demonstration, try out the following steps, which expose your /etc/shadow file to a Docker container run by an ordinary user;

1. Make sure your ordinary user is added to the docker group:
       sudo usermod -aG docker steve   # -a appends, so the user's existing groups are kept
2. If you are logged in as that user you will need to log out and back in for the group to take effect
3. Check the user can run the docker command:
       docker ps
    You should see the docker ps header and any running containers on that system
4. Now let's cat the /etc/shadow file using a container from Docker Hub:
       docker run -it --rm -v /etc:/hack python:3.12 cat /hack/shadow
5. You'll notice that what you're looking at is your host's shadow file, not the container's own copy, since that still lives at /etc/shadow inside the container.
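
For contrast, a quick sketch using the same image but forcing a non-root user; on most hosts the read should now fail, because /etc/shadow is not readable by ordinary users:

       docker run -it --rm --user 1000:1000 -v /etc:/hack python:3.12 cat /hack/shadow
       # expected on most systems: cat: /hack/shadow: Permission denied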

If you're not doing the following to secure your environment then you will be at risk of people being able to crack the root password on your systems.

1. Create and use only your own organisation's private Docker registry
2. Block access to docker.io, public.ecr.aws, quay.io, and any other public registries you know of.  Access to these should only be allowed for those who will be building the base container images for your organisation
3. Any base images downloaded from the registries in step 2 should be rebuilt as new images that run with a specific user ID greater than 1000 (see the example Dockerfile after this list)
4. If developers need to install software onto containers then this should be performed in the developer environments only, or through CI pipelines which perform the root-level installs and then switch to the non-root user at the end of the build.
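
As a minimal sketch of point 3, assuming python:3.12 as the upstream base image and a hypothetical appuser account, a rebuilt base image might look something like this:

    FROM python:3.12
    # Create an unprivileged account with a UID above 1000
    RUN groupadd -g 1001 appuser && useradd -m -u 1001 -g appuser appuser
    # Everything from here on, including at runtime, uses the non-root user
    USER 1001
    WORKDIR /home/appuser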

Containers running as root should only be used where software needs to be installed that a basic user cannot install, which is why it is important to have the necessary base images available for developers, so that they only need to worry about their code and libraries.

Tuesday, July 5, 2022

Disable DNF Dragora on Fedora

Synopsis

Being a person who likes to manage updates on my own schedule, rather than being reminded, and who also dislikes processes running that don't need to be, I searched around to find out how to disable the DNF Dragora application.

This application is a Fedora GUI application and the background task is not a systemd task, but an autostart task.

Locations

Autostart directories are located in 1 of 2 places:
  • /etc/xdg/autostart
  • $HOME/.config/autostart
The first is system-wide since it is under /etc, whereas the other is personal to you and holds the things that you have decided you want running once you've logged into the GUI.

You should always start by checking your home version first and renaming the file if you're not sure if you want it stopped, or deleting it if you want it permanently gone.  In most cases you can also deal with start up applications through gnome-tweaks.
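
For a quick look at what is currently set to autostart in both locations, something like the following will list the entries:

    ls -l /etc/xdg/autostart ~/.config/autostart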

Finding DNF Dragora

Having checked my home directory $HOME/.config/autostart for dnfdragora as follows:

grep -ri drag ~/.config/autostart

I found no files containing the dnfdragora command.

Locating other autostart folders with:

sudo find / -name autostart

We find the /etc/xdg/autostart folder.

Using the following grep command, we find the file responsible:

grep -ri drag /etc/xdg/autostart

/etc/xdg/autostart/org.mageia.dnfdragora-updater.desktop

Disabling DNF Dragora

To disable it (rather than removing it, in case you change your mind) simply do the following:

cd /etc/xdg/autostart

mv org.mageia.dnfdragora-updater.desktop org.mageia.dnfdragora-updater.desktop.old

Simply making sure that .desktop is not the last part of the file name will prevent the file being seen.
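
An alternative that avoids touching /etc at all, sketched here from the XDG autostart convention (so verify on your own desktop), is to shadow the system entry with a per-user file of the same name that contains Hidden=true:

    cat <<'EOF' > ~/.config/autostart/org.mageia.dnfdragora-updater.desktop
    [Desktop Entry]
    Hidden=true
    EOF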

Log out and back in and DNF Dragora should no longer be there.

Friday, July 1, 2022

How to simply build a Jenkins server and Agent

While working on another DevOps Academy, and seeing the research the students did on how to build a Jenkins server with an agent, it was surprising how complicated most people made it.

This example is based on using AWS with 2 EC2 instances, but would work on-prem and other clouds.

Steps to build

1. Create 2 Ubuntu instances both Medium

  • 1 is Controller
  • 1 is the Agent
  • Settings
    • t2.medium
    • 20GB disk
    • Ubuntu image
      • Select or create a security group (e.g. jenkinssg) that has the following inbound ports
        • 8080
        • 22
        • 8200

2. Install Jenkins on the controller

    • ssh onto the Jenkins controller
    • sudo apt update # It's Ubuntu after all
    • sudo apt -y install openjdk-11-jdk # Install Java, Jenkins needs it
    • wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
    • sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
    • sudo apt update
    • sudo apt install jenkins
    • sudo cat /var/lib/jenkins/secrets/initialAdminPassword   # Get the login password

3. Web browser

    • Point your web browser at http://yourPublicIPforInstance:8080
      • Paste in the text from the file
      • Click Continue button
    • Click Install suggested plugins
    • Fill in the form to create the main Jenkins user
      • Username: admin
      • Password: secret123
      • Confirm password: secret123
      • Fullname: Administrator
      • E-mail address: root@nowhere.com
    • Click Save and Continue button
    • Change the IP address to the Private IP address of your Jenkins instance
      • Leave the http:// and the :8080/
    • Click Save and Finish button
    • Click Start using Jenkins button

4. Configuring Jenkins to see the new node that will be the agent

    • After doing step 3 you should be logged in as Administrator
      • If not log in using admin and the password you set
    • Click Manage Jenkins in the left menu
    • Click Manage Nodes and Clouds
    • Click New Node in the new menu
    • Set the Node name to Worker1
    • Select Permanent Agent
    • Click Create button
    • New screen
      • Number of executors: 2
      • Remote root directory: /home/ubuntu
      • Labels: all
      • Leave all other as default
      • Click Save
    • The worker node will show as not connected
    • Click on Worker1 link
    • You'll notice an error message
      • JNLP agent port is disabled and agents cannot connect this way. Go to security configuration screen and change it.
      • Click the link Go to security configuration screen and change it.
      • Scroll down to Agents
        • Select Fixed
        • Set the Port to 8200 (This is already allowed by jenkinssg)
      • Scroll to the bottom and click the Save button
    • Click Manage Nodes and Clouds
    • Click Worker1
    • Right click the blue agent.jar link and Copy link address

5. Now ssh on to your Worker/Agent instance

    • sudo apt update
    • sudo apt -y install openjdk-11-jdk
    • wget http://52.213.211.75:8080/jnlpJars/agent.jar
      • Where the http:// link is pasted from the Copy link address
    • On the Jenkins web page copy the 2nd box echo line
      • Paste this line into the terminal of the Worker/Agent ssh session
    • On the Jenkins web page copy the 2nd box java -jar line
      • In the terminal of the Worker/Agent ssh session
      • Type the word   nohup
        • Then paste the java -jar line after this
        • Type a space and then   &    at the end of the line
      • e.g.
        • nohup java -jar agent.jar -jnlpUrl http://172.31.16.36:8080/computer/Worker1/jenkins-agent.jnlp -secret @secret-file -workDir "/home/ubuntu" &

6. Back to the Jenkins web site

    • Click Back to List if you are in the Agent Worker1 screen
    • or
    • Click Dashboards top left of the page
      • Click Manage Jenkins
      • Click Manage Nodes and Clouds
    • Note your agent is now connected

NOTE:
This configuration doesn't create a service to start the agent on reboot; it is purely an example to get it running.  To make the agent a proper service you would need to create the appropriate file in /etc/systemd/system (call it jenkins-agent.service) or, for older systems, the service script in /etc/init.d.  A sketch of such a unit file follows.
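
A minimal sketch, assuming the agent.jar, secret file and work directory from step 5 (adjust the URL, secret and paths to your own values):

    [Unit]
    Description=Jenkins inbound agent
    After=network-online.target

    [Service]
    User=ubuntu
    WorkingDirectory=/home/ubuntu
    # The same command as the nohup example in step 5
    ExecStart=/usr/bin/java -jar /home/ubuntu/agent.jar -jnlpUrl http://172.31.16.36:8080/computer/Worker1/jenkins-agent.jnlp -secret @secret-file -workDir "/home/ubuntu"
    Restart=always

    [Install]
    WantedBy=multi-user.target

Then enable it with: sudo systemctl daemon-reload && sudo systemctl enable --now jenkins-agent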

Wednesday, April 7, 2021

The non-conforming shell (zsh)

Today, whilst working with some graduates who are all using Apple Macs, I found out that zsh, now the default command-line shell, does not follow the common shell conventions!

This particular nugget will cause you a problem if you're a real system administrator who knows how to clear out log files without having to use rm, reboot the system or restart the process.

The issue I refer to is the use of the redirection symbols > and >>.

Most of you are familiar with doing things such as;

echo "Hello" >somefile

ps -ef > allprocesses

But the real system administrators reading this also know that you should be able to do;

>/var/log/messages

Obviously as root.

This command should empty the log file without removing it, freeing up disk space on that partition.

This is the conforming standard for the use of redirection in the majority of Unix and Linux (GNU) shells.


zsh, however, does not do this by default!  So beware.

Instead, when you do the following in zsh;

>somefile

it will wait for you to type something in until you press ^D on an empty line.  This, as all sysadmins know, is the equivalent of;

cat >somefile

So in zsh, to perform the same action as the other shells, you now need to do;

>somefile

^D
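
If you want your zsh to behave like the other shells here, two workarounds, sketched from zsh's NULLCMD handling (so check them against your own setup):

    : > /var/log/messages    # give the redirection an explicit no-op command
    setopt SH_NULLCMD        # or, in ~/.zshrc, make bare redirections behave as in sh/ksh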


So the question begs, who did this and why?


Yet another reason I tell people to buy a regular laptop without an operating system, install a version of Linux that you like the look of, and customise it to your preferred look and feel.


Mac OS != Unix

Mac OS == Broken Unix.

Wednesday, July 22, 2020

Apache Restricting Content From Download

The Scenario

After releasing our latest YouTube video https://youtu.be/QWjub-nKNL4 we had some extra content that we wanted to share, but not allow to be downloaded, since it was mentioned in the video.

What to do, what to do.  Previously I had looked at options for stopping the right-click download, but that wouldn't stop people using other ways to get the content, and since I own the web server I wanted a way to block the straightforward methods of downloading the content and only allow it to be streamed using the player on the web site.

A few Apache configuration lines later, plus the use of JWPlayer, and I had solved my problem.  The joys of having your own web server (which also means you can't do this if you're on a hosted platform).

Most players are client-based, so the requirement not to expose direct access to the file is something I'm very familiar with, but I didn't want to write my own player.  In the past I've streamed Word documents without the URL being visible, so that clients had to be logged in to access the document and never knew the real URL, but this time I wanted to make sure the content is not directly downloadable.

The Research

First I needed to see if anyone had worked out how to prevent download of files in Apache (httpd), so a quick Google search for "apache htaccess deny download of specific files" came up with;
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourwebsite\.com/ [NC]
RewriteCond %{REQUEST_URI} !hotlink\.(mp3|mp4|mov) [NC]
RewriteCond %{HTTP_COOKIE} !^.*wordpress_logged_in.*$ [NC]
RewriteRule .*\.(mp3|mp4|mov)$ http://yourwebsite.com/ [NC]
From this the lines I needed would be;
  • RewriteEngine On
  • RewriteCond %{HTTP_REFERER} !^http://......
  • RewriteRule .*\.(mp3|wav)$ ....
I would need to make some modifications to this, but the .htaccess file was the way to go, since there were other directories I still wanted people to be able to download from.  So I took some extra help from https://300m.com/stupid-htaccess-tricks/ to understand what goes on in the square brackets.

Secondly, because I wanted to use the .htaccess file I would need to modify my Apache configuration to AllowOverride All for the root directory.

Last but not least a player that would allow the media to be played through the web browser - https://www.jwplayer.com/.

Configuring Apache

To ensure Apache allows you to use .htaccess files you need to change the main configuration file.  If you intend to do different things in different directories then you should set your DocumentRoot so that its Directory setting has AllowOverride All instead of the usual AllowOverride None.

Excerpt from the httpd.conf file;

DocumentRoot "/usr/local/apache2/htdocs"
<Directory "/usr/local/apache2/htdocs">
    Options Indexes FollowSymLinks
    IndexOptions FancyIndexing
    DirectoryIndex index.html

    AllowOverride All

    Require all granted
</Directory>

You should also ensure that the following module is enabled in the httpd.conf file;
LoadModule rewrite_module modules/mod_rewrite.so
This will ensure that the directives we add to the .htaccess file will work. 
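
To double-check that the module is actually loaded, a quick sanity check (the binary may be called httpd or apachectl depending on your distribution):

    httpd -M | grep rewrite
    # expected output: rewrite_module (shared)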

Denying Download

Now locate the directory in your web server where you want to restrict the download of the content.  In this directory we will add the .htaccess file to prevent our MP3 and WAV files from being downloaded.

The content of the .htaccess file;

RewriteEngine on

RewriteCond %{HTTP_REFERER} !^http://(www\.)?tps\.local [NC]

RewriteRule .*\.(mp3|wav)$ - [NC,F,L] 


The Lines Explained

RewriteEngine on

This line turns on the rewrite engine for this directory so that the following two directives are processed.

RewriteCond

This line checks the Referer header against tps.local in this example, but it should be your own domain name.  The leading ! negates the match, so the condition is true (and the rule below fires) when the request does not come from your own domain; the net effect is that only requests from your own pages are allowed.  The (www\.)? states that the www part might not be supplied.
The [NC] at the end of the line makes the whole statement case insensitive = No Case.

RewriteRule

This line defines which extensions we will deny downloads of, in this case .mp3 and .wav.  The regular expression uses the or | notation to allow us to supply a list, mp3|wav, which is the same as saying mp3 or wav.  The .*\. before this matches any file name before the final full stop.  The $ at the end specifies that mp3 or wav must be the last characters of the requested path.
The - symbol means no substitution is performed; the requested URL is left alone, because the F flag is going to refuse it anyway.

Again at the end we see NC for case insensitive, F for forbidden, which drives the 403 response, and L meaning this is the last rule, so don't process anything more.

Testing Download

Firstly we can try a successful download by using wget or curl or our web browser.

We should ensure there is an HTML file in the directory, and that should be displayed;

curl http://www.tps.local/index.html
It Works!

Now the mp3
curl http://www.tps.local/my.mp3
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /media/meditation/body.mp3
on this server.</p>
</body></html>
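
As a further check, and a sketch using the tps.local names from above, sending a matching Referer header should satisfy the RewriteCond, which is exactly what requests coming from the in-page player do:

    curl -s -o /dev/null -w "%{http_code}\n" -H "Referer: http://www.tps.local/index.html" http://www.tps.local/my.mp3
    # expected: 200 rather than 403

It also shows that the check can be spoofed, so treat this as a deterrent for casual downloads rather than real protection.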

Using JWPlayer

First you need to get yourself an account on JWPlayer.

Log in and from the dashboard menu on the right select Players;

By default you will have 2 example players.  Select one that suits your needs and make modifications to it;

Change items such as;

  • Size
  • Playback
  • etc


Save your player and make a note of Cloud-Hosted Player Library URL as you will need this in your JavaScript code.

The following HTML is an example of using the player with a single file; let's say this file is called index.html, to make it easier to have the player launch when people enter the restricted directory;

<html>
<body>
<div id="myElement"></div>
<script type="text/javascript" src="https://cdn.jwplayer.com/libraries/example.js"></script>
<script type="text/javascript">
    jwplayer("myElement").setup({
        "playlist": [{
                "file": "my.mp3"
        }]
    });
</script>
</body>
</html>

Change the src to the URL in your JWPlayer account.  The file will play using the player.

For further customisation of JWPlayer, see the JWPlayer documentation.

Sunday, May 31, 2020

Terraform Runtime Data

Terraform and Runtime values


The other day I was asked whether we can use operating system variables in Terraform like we can in Ansible.

An interesting question.  Obviously those who know Terraform will shout out and say: of course you can, you just put TF_VAR_ in front of the variable name in the OS environment and Terraform will find it.

However, that's not the same as being able to get hold of, say, HOME or PWD, as can be done in Ansible using the lookup function;

vars:
  cwd: "{{ lookup('env', 'PWD' }}"

The above in Ansible obtains the value of the PWD operating system environment variable and stores it in an Ansible variable called cwd at runtime.

So, how on earth do I do such a thing in Terraform, rather than relying on people configuring variables up front, or using a Makefile or wrapper script?

The data sources

A data source allows you to obtain data from various places (see https://www.terraform.io/docs/configuration/data-sources.html), and the best fit for our requirement is the external data source.

Example


The downside is that your external data source needs to return JSON data, so you can't just run a command or echo a variable.  But you can create the relevant JSON data easily enough.  I wrote a simple example at https://bitbucket.org/stevshil/terraform/src/master/envvar/.

To explain that code directly here I took the following steps;

  1. Create a shell script that will return JSON data, this way you don't have to work out how to escape characters, etc.  In the example code we created a script called mypwd containing the following;

    #!/bin/bash
    cat <<_END_
    {
      "dir": "$PWD"
    }
    _END_

  2. Create the Terraform code (getvar.tf) to grab the printed output;

    data "external" "example" {
     program = ["bash","./mypwd"]
    }
    output "pwd" {
      value = data.external.example.result.dir
    }

You'll notice that using the external data source with a given name (example in this instance) we call our shell script.  The bash is there just in case someone forgets to make the script executable.

The output in this case is just to show that the data was returned in an attribute called dir, as you'll note in the shell script code where we output JSON data with a key called dir containing the value of the operating system variable $PWD.  We could just as easily have written Python code, or run another shell command that outputs the data we desire.

To retrieve the value output from our script we use the normal Terraform object attribute reference, but because it is a data source we prepend the word data to the resource - data.external.example.result.dir - with the result element being part of the data source and dir being our JSON data key.  This reference can be used anywhere within your code when you need the runtime value.
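
As a small sketch, assuming the hashicorp/local provider is available, the same reference can be used inside a resource rather than just an output:

    resource "local_file" "runtime_dir" {
      filename = "${data.external.example.result.dir}/runtime-dir.txt"
      content  = "Terraform was run from ${data.external.example.result.dir}\n"
    }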

Goodbye to error-prone TF_VAR_ values when you need to make use of attributes from the operating system.

Monday, November 25, 2019

SSH config

A simple user-based configuration file with lots of possible combinations is the $HOME/.ssh/config file.
This file lives in the user's home directory, if the user has created one. If not, you can create your own and start to define the SSH keys required to log on to particular hosts, the user you log on as, and lots more.

Example of defining a key and user to a specific host;

Host jenkins.tps.co.uk
  User ec2-user
  IdentityFile ~/.ssh/steve-jenkins.pem
  StrictHostKeyChecking no

The above entry would log you on as ec2-user using the steve-jenkins.pem key located in the user's .ssh directory inside their home directory. It also skips the host fingerprint prompt through StrictHostKeyChecking no.
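
With that stanza in place the connection details come from the config file automatically, so the command line stays short:

    ssh jenkins.tps.co.uk    # picks up User and IdentityFile from ~/.ssh/config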


Example of using a bastion/jump host;

Host bastion.tps.co.uk
  User admin
  StrictHostKeyChecking no
  ControlPersist 5m
  IdentityFile ~/.ssh/bastion.pem
Host 172.31.10.20
  User admin
  StrictHostKeyChecking no
  ProxyJump bastion.tps.co.uk

This sets up the ability to SSH to the 172.31.10.20 host in the cloud through the host called bastion.tps.co.uk, logging on as admin with the bastion.pem file in the user's .ssh directory. ControlPersist 5m keeps a shared master connection to the bastion open in the background for 5 minutes after the last session using it closes (it only takes effect when combined with ControlMaster connection sharing).

Using SSH command line through bastion to another host;
ssh -i ${privatesshkeyfile} -A user@${bastionnameorip} ssh ${farsidehost}
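
With the config entries above you shouldn't need that long form at all; a plain ssh to the far-side host goes via the bastion, and -J is the one-off command-line equivalent of ProxyJump:

    ssh 172.31.10.20                                    # uses ProxyJump from ~/.ssh/config
    ssh -J admin@bastion.tps.co.uk admin@172.31.10.20   # one-off jump without the config file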