Introduction

If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux, then you might benefit from the DNF local plugin. Examples of people who would benefit include an enthusiast running a cluster of Raspberry Pis and someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines see a significant performance improvement when running dnf with the DNF local plugin enabled.

I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in a 2018 fedoraforum.org post. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux, and on several container-based services, I winced every time a DNF update on a Pi or in a container downloaded a duplicate set of rpms across my expensive internet connection. To improve this situation, I searched for a solution that would cache rpms for local reuse without requiring changes to the repository configuration files on every machine. I also wanted it to continue using the network of Fedora Linux mirrors rather than a single mirror for all updates.

Prior art

An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set – create a private Fedora Linux mirror or set up a caching proxy.

Fedora provides guidance on setting up a private mirror. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes.

The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit the repository definitions stored in /etc/yum.repos.d on each virtual and physical machine or container so that they all use the same mirror. Second, I would need to use http rather than https connections, which would introduce a security risk.

After reading Glenn’s 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin.

About the DNF local plugin

The online documentation provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use dnf as before. dnf will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source – an official Fedora Linux repository or a third-party RPM repository – and make them available for the next run of dnf.

Install and configure the DNF local plugin

Install the plugin using dnf. The createrepo_c package will be installed as a dependency; it is used, if needed, to create the local repository.

sudo dnf install python3-dnf-plugin-local

The plugin configuration file is stored at /etc/dnf/plugins/local.conf. An example copy of the file is provided below. The only change required is to set the repodir option. The repodir option defines where on the local filesystem the plugin will keep the RPM repository.

[main]
enabled = true
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local

# Createrepo options. See man createrepo_c
[createrepo]
# This option lets you disable createrepo command. This could be useful
# for large repositories where metadata is periodically generated by cron
# for example. This also has the side effect of only copying the packages
# to the local repo directory.
enabled = true

# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir

# quiet = true

# verbose = false

Change repodir to the filesystem directory where you want the RPM repository stored. For example, change repodir to /srv/repodir as shown below.

...
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
repodir = /srv/repodir
...

Finally, create the directory if it does not already exist. If the directory is missing, dnf will display some errors the first time it tries to access it. The plugin will create the directory despite those initial errors.

sudo mkdir -p /srv/repodir

Repeat this process on any virtual machine or container that you want to share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below.
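Since these same steps get repeated on every machine, it can be convenient to script them. The following is a minimal sketch; the sed expression is my own shortcut rather than anything from the plugin documentation, and it assumes the default commented-out repodir line shown above.

# install the plugin, point it at /srv/repodir, and create the directory
sudo dnf -y install python3-dnf-plugin-local
sudo sed -i 's|^#\s*repodir = /var/lib/dnf/plugins/local.*|repodir = /srv/repodir|' /etc/dnf/plugins/local.conf
sudo mkdir -p /srv/repodir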

How to use the DNF local plugin

After you have installed the plugin, you do not need to change how you use dnf. The plugin will cause a few additional steps to run transparently behind the scenes whenever dnf is called. After dnf determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After dnf has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repository’s metadata. The downloaded rpms will then be available in the local repository for the next dnf client.
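A quick way to confirm that caching is working is to look at the repository directory after an update and at the repository list that dnf reports. The plugin registers the local repository under the id _dnf_local (mentioned again in the bonus section near the end of this article), so it should appear alongside the usual fedora and updates repositories.

# rpms and repodata accumulate here after the first update
$ ls /srv/repodir

# the local repository should be listed as _dnf_local
$ dnf repolist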

There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device, there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated, which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is when multiple dnf clients update the repository metadata concurrently. I therefore run dnf from multiple machines serially rather than in parallel. This may not be a real concern, but I want to be cautious.
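For a small number of machines, a simple shell loop is one way to keep updates serial; the hostnames below are placeholders for your own machines.

# update each machine one at a time so only one dnf client
# writes to the shared repository at any given moment
for host in pi1.lan pi2.lan pi3.lan; do
    ssh "$host" sudo dnf -y update
done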

The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the Code button at https://github.com/buckaroogeek/dnf-local-plugin-examples to clone the repository or to download a zip file.

Use case 1: networked physical machines

The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from. Which one you use will probably be influenced by the devices already on your network.

For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up a NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named nfs-dnf and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important, even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an informative post that covers both client and server configurations on Red Hat Enterprise Linux; the instructions translate well to Fedora Linux.
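If your NFS server is a Fedora Linux machine rather than a NAS appliance, the equivalent export would be a line in /etc/exports along the following lines. This is only a sketch; adjust the path, the client pattern, and the options (in particular the root squashing behavior, since dnf writes to the share as root) to match your environment.

# /etc/exports on the NFS server - illustrative only
/volume1/nfs-dnf pi*.lan(rw,sync)

Run sudo exportfs -ra afterwards to publish the export.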

I configured the NFS client on each of my Fedora Linux machines using the steps shown below. In the below example, quga.lan is the hostname of my NAS device.

Install the NFS client on each Fedora Linux machine.

$ sudo dnf install nfs-utils

Get the list of exports from the NFS server:

$ showmount -e quga.lan
Export list for quga.lan:
/volume1/nfs-dnf pi*.lan

Create a local directory to be used as a mount point on the Fedora Linux client:

$ sudo mkdir -p /srv/repodir

Mount the remote file system on the local directory. See man mount for more information and options.

$ sudo mount -t nfs -o vers=4 quga.lan:/volume1/nfs-dnf /srv/repodir

The DNF local plugin will now work as long as the client remains up. If you want the NFS export to be automatically mounted when the client is rebooted, then you must edit /etc/fstab as demonstrated below. I recommend making a backup of /etc/fstab before editing it. You can substitute vi with nano or another editor of your choice if you prefer.

$ sudo vi /etc/fstab

Append the following line at the bottom of /etc/fstab, then save and exit.

quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0

Finally, notify systemd that it should rescan /etc/fstab by issuing the following command.

$ sudo systemctl daemon-reload

NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use firewall-cmd to allow NFS-related network traffic through each Fedora Linux machine’s firewall.

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
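Rules added with --permanent do not take effect until the firewall configuration is reloaded, so follow up with:

$ sudo firewall-cmd --reload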

As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem.

In the rpi-example directory of the github repository I’ve included an example Ansible playbook (configure.yaml) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (update.yaml) that runs a DNF update across all devices. See this recent post in Fedora Magazine for more information about Ansible.

To use the provided Ansible examples, first update the inventory file (inventory) to include the list of Fedora Linux machines on your network that you want to manage (a minimal example is shown below). Next, install two Ansible roles in the roles subdirectory (or another suitable location).

$ ansible-galaxy install --roles-path ./roles -r requirements.yaml
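For reference, a minimal inventory file might look like the following; the group name and hostnames are placeholders, and the actual example in the repository may differ.

[pis]
pi1.lan
pi2.lan
pi3.lan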

Run the configure.yaml playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via /etc/fstab but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information.

$ ansible-playbook -i inventory configure.yaml
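If you would rather configure the automount by hand instead of through the role, an /etc/fstab entry along the following lines achieves the same effect. The options are illustrative; see man systemd.mount for details.

quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=300 0 0

As with the earlier /etc/fstab change, run sudo systemctl daemon-reload afterwards so systemd picks up the new automount unit.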

Finally, Ansible can be configured to execute dnf update on all the systems serially by using the update.yaml playbook.

$ ansible-playbook -i inventory update.yaml

Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics.

Use case 2: virtual machines running on the same host

Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides Fedora Cloud base images for use as virtual machines. Vagrant is a tool for managing virtual machines. Fedora Magazine has instructions on how to set up and configure Vagrant. Add the following line in your .bashrc (or other comparable shell configuration file) to inform Vagrant to use libvirt automatically on your workstation instead of the default VirtualBox.

export VAGRANT_DEFAULT_PROVIDER=libvirt

In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available):

$ vagrant init fedora/33-cloud-base

This creates a Vagrantfile in the project directory. Edit the Vagrantfile to look like the example below. DNF will likely fail with the default memory settings for libvirt, so the example Vagrantfile below provides additional memory to the virtual machine. The example also shares the host /srv/repodir with the virtual machine. The shared directory will have the same path, /srv/repodir, in the virtual machine. The Vagrantfile can be downloaded from github.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# define repo directory; same name on host and vm
REPO_DIR = "/srv/repodir"

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/33-cloud-base"

  config.vm.provider :libvirt do |v|
    v.memory = 2048
    # v.cpus = 2
  end

  # share the local repository with the vm at the same location
  config.vm.synced_folder REPO_DIR, REPO_DIR

  # ansible provisioner - commented out by default
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles-path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
# config.vm.provision "ansible" do |ansible|
# ansible.verbose = "v"
# ansible.playbook = "ansible/playbook.yaml"
# ansible.extra_vars = {
# repo_dir: REPO_DIR,
# dnf_update: false
# }
# ansible.galaxy_role_file = "ansible/requirements.yaml"
# ansible.galaxy_roles_path = "ansible/roles"
# end
end

Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine:

$ vagrant ssh

When you are at a command prompt in the virtual machine, repeat the steps from the Install and configure the DNF local plugin section above. The Vagrant configuration file should have already made /srv/repodir from the host available in the virtual machine at the same path.

If you are working with several virtual machines or repeatedly re-creating a virtual machine, then some simple automation becomes useful. As with the network example above, I use Ansible to automate this process.

In the vagrant-example directory on github, you will see an ansible subdirectory. Edit the Vagrantfile and remove the comment marks in the ansible provisioner section. Make sure the ansible directory and its contents (playbook.yaml, requirements.yaml) are in the project directory.

After you’ve uncommented the lines, the ansible provisioner section in the Vagrant file should look similar to the following:

....

  # ansible provisioner
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles-path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "ansible/playbook.yaml"
    ansible.extra_vars = {
      repo_dir: REPO_DIR,
      dnf_update: false
    }
    ansible.galaxy_role_file = "ansible/requirements.yaml"
    ansible.galaxy_roles_path = "ansible/roles"
  end
....

Ansible must be installed (sudo dnf install ansible). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use sudo dnf install ansible-base ansible-collections*).

If you run vagrant up now (or reprovision with vagrant provision), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can vagrant ssh into the virtual machine to verify that the plugin is installed and that rpms are coming from the DNF local repository instead of a mirror.

Use case 3: container builds

Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use dnf to update the container during the development/build process. Application development is iterative and can result in repeated executions of dnf pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers podman and buildah for managing the container build, run and test life cycle. See the Fedora Magazine post How to build Fedora container images for more about managing containers on Fedora Linux.

Note that the fedora_minimal container uses microdnf by default which does not support plugins. The fedora container, however, uses dnf.

A script that uses buildah and podman to create a custom Fedora Linux image named myFedora is provided below. The script creates a mount point for the local repository at /srv/repodir. The below script is also available in the container-example directory of the github repository. It is named base-image-build.sh.

#!/bin/bash
set -x

# bash script that creates a 'myfedora' image from fedora:latest.
# Adds dnf-local-plugin, points plugin to /srv/repodir for local
# repository and creates an external mount point for /srv/repodir
# that can be used with a -v switch in podman/docker

# custom image name
custom_name=myfedora

# scratch conf file name
tmp_name=local.conf

# location of plugin config file
configuration_name=/etc/dnf/plugins/local.conf

# location of repodir on container
container_repodir=/srv/repodir

# create scratch plugin conf file for container
# using repodir location as set in container_repodir
cat <<EOF > "$tmp_name"
[main]
enabled = true
repodir = $container_repodir
[createrepo]
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
EOF

# pull registry.fedoraproject.org/fedora:latest
podman pull registry.fedoraproject.org/fedora:latest

# start the build
mkdev=$(buildah from fedora:latest)

# tag author
buildah config --author "$USER" "$mkdev"

# install dnf-local-plugin, clean
# do not run update as local repo is not operational
buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c
buildah run "$mkdev" -- dnf -y clean all

# create the repo dir
buildah run "$mkdev" -- mkdir -p "$container_repodir"

# copy the scratch plugin conf file from host
buildah copy "$mkdev" "$tmp_name" "$configuration_name"

# mark container repodir as a mount point for host volume
buildah config --volume "$container_repodir" "$mkdev"

# create myfedora image
buildah commit "$mkdev" "localhost/$custom_name:latest"

# clean up working image
buildah rm "$mkdev"

# remove scratch file
rm $tmp_name

Given normal security controls for containers, you will usually run this script with sudo, and also use sudo when you later run containers based on the myFedora image in your development process.

$ sudo ./base-image-build.sh

To list the images stored locally and see both fedora:latest and myfedora:latest run:

$ sudo podman images

To run the myFedora image as a container and get a bash prompt in the container run:

$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

Podman also allows you to run containers rootless (as an unprivileged user). Run the script without sudo to create the myfedora image and store it in the unprivileged user’s image repository:

$ ./base-image-build.sh

In order to run the myfedora image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to /srv/repodir on the host.

$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on top of it will be faster.
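For example, a minimal Containerfile that builds on the custom image might look like the following. The httpd package and the file name are purely illustrative.

# Containerfile.example - use the local myfedora image as the base
FROM localhost/myfedora:latest

# rpms already cached in /srv/repodir come from the local repository
RUN dnf -y install httpd && dnf -y clean all

Build it with the local repository mounted, for example:

$ sudo buildah bud -v /srv/repodir:/srv/repodir:Z -f Containerfile.example .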

Bonus Points – for even better dnf performance, Dan Walsh describes how to share dnf metadata between a host and container using a file overlay (see https://www.redhat.com/sysadmin/speeding-container-buildah). This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The dnf metadata cache includes metadata for the local repository under the name _dnf_local.

I have created a container file that uses buildah to do a dnf update on a fedora:latest image. I’ve also created a container file to repeat the process using a myfedora image. The dnf update involves 111 rpms totaling 53 MB. The only difference between the images is that myfedora has the DNF local plugin installed. Using the local repository cut the elapsed time by more than half in this example and saved 53 MB of internet bandwidth.

With the fedora:latest image the command and elapsed time is:

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -f Containerfile.3 .
128 Elapsed Time: 0:48.06

With the myfedora image the command and elapsed time is less than half of the base run. The :Z on the -v volume below is required when running the container on a SELinux-enabled host.

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 .
133 Elapsed Time: 0:19.75

Repository management

The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The dnf repomanage command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the dnf repomanage command for Fedora Magazine.
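As a starting point, something along the following lines should list, and then remove, every rpm in the local repository that is not the newest version of its package. This is only a sketch; review man dnf-repomanage and double-check the file list before deleting anything.

# list the outdated rpms
$ dnf repomanage --old /srv/repodir

# remove them and rebuild the repository metadata
$ dnf repomanage --old /srv/repodir | xargs --no-run-if-empty sudo rm -f
$ sudo createrepo_c --update /srv/repodir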

Finally, I keep the x86_64 rpms for my workstation, virtual machines and containers in a local repository that is separate from the aarch64 local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures.

An important note about Fedora Linux system upgrades

Glenn Johnson has more than four years of experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends setting the enabled attribute in the plugin configuration file /etc/dnf/plugins/local.conf to false before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, a NFS server might export /volume1/dnf-repo/33 for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org – an independent online resource for Fedora Linux users.
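One way to toggle the plugin around a system upgrade is shown below. It is a sketch that assumes the enabled line in the [main] section appears before the one in the [createrepo] section, as in the example configuration earlier in this article.

# disable the plugin before running the release upgrade
$ sudo sed -i '0,/^enabled = true/s//enabled = false/' /etc/dnf/plugins/local.conf

# ...perform the Fedora Linux system upgrade...

# re-enable the plugin afterwards
$ sudo sed -i '0,/^enabled = false/s//enabled = true/' /etc/dnf/plugins/local.conf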

Summary

The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of rpms multiple times which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation!

Thanks to Glenn Johnson for his post on the DNF local plugin which started this journey, and for his helpful reviews of this post.
