Beyond APT: Software Management with Flatpak on Ubuntu

2 months 1 week ago
by George Whittaker

Ubuntu has long relied on APT and DEB packages for software management, with Snap becoming increasingly prevalent in recent releases. However, a third contender has risen to prominence in the Linux world: Flatpak. Designed as a universal software packaging and distribution framework, Flatpak offers a fresh, sandboxed approach to application management that works seamlessly across distributions. In this article, we’ll dive into how to manage software with Flatpak on Ubuntu, providing everything you need to get started, optimize your workflow, and compare it with existing solutions.

What is Flatpak?

Flatpak is a modern application packaging system hosted by the freedesktop.org project. Its goal is to enable the distribution of desktop applications in a sandboxed environment, ensuring greater security, consistency, and compatibility across Linux distributions.

Key Benefits of Flatpak
  • Cross-distribution compatibility: A single Flatpak package works on any Linux distribution with Flatpak support.

  • Sandboxing: Applications run in isolation, reducing the risk of affecting or being affected by other software or the host system.

  • Bundled dependencies: Flatpak packages include all necessary dependencies, reducing compatibility issues.

  • Version control: Developers can ship and maintain multiple versions easily.

Limitations
  • Storage overhead: Applications may use more disk space due to bundled runtimes.

  • Redundancy: Ubuntu users already have Snap, which can lead to confusion or duplication.

Installing Flatpak on Ubuntu

Although Flatpak isn't pre-installed on Ubuntu, setting it up is straightforward.

Step 1: Install Flatpak

Open a terminal and run:

sudo apt update
sudo apt install flatpak

Step 2: Install GNOME Software Plugin (Optional)

To integrate Flatpak apps into the Ubuntu Software GUI:

sudo apt install gnome-software-plugin-flatpak

This step allows Flatpak apps to appear alongside APT and Snap apps in GNOME Software.

Step 3: Reboot or Log Out

Restart your session to apply system changes and enable Flatpak integration fully.

Adding the Flathub Repository

Most Flatpak applications are hosted on Flathub, the central repository for Flatpak packages.

To add Flathub:
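
The remote-add command below is the one published in Flathub's own setup guide; the install and run lines show typical follow-up usage (the Firefox application ID is just an example):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox   # example: install an application from Flathub
flatpak run org.mozilla.firefox               # launch the installed application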

Go to Full Article
George Whittaker

Linux Boot Process? Best Geeks Know It!

2 months 1 week ago
by Nawaz Abbasi

The Linux boot process is a sequence of events that initializes a Linux system from a powered-off state to a fully operational state. Knowledge of the Linux boot process is essential when it comes to technical interviews, but it can be difficult to remember or recall the key steps. This article discusses a quick and easy way to remember it - Best Geeks Know It! Yes, that phrase is all you need to remember.

Best Geeks Know It -> B – G – K – I -> BIOS – GRUB – KERNEL – INIT

This BGKI acronym provides a high-level overview of the Linux boot process. Each step builds upon the previous one, gradually bringing the system to a fully operational state. Of course, there are more detailed processes within each step, but this simplified version should give you a good foundation for understanding and remembering the Linux boot sequence.

 

Here's a concise expansion of B-G-K-I:

B - BIOS/UEFI

  • Performs Power-On Self-Test (POST)
  • Checks hardware: CPU, RAM, storage
  • Loads MBR (Master Boot Record) or GPT (GUID Partition Table)
  • Transfers control to bootloader

G - GRUB

  • Its first stage lives in the first 512 bytes (the boot sector) of the boot drive
  • Reads /boot/grub/grub.cfg (grub.conf on legacy GRUB)
  • Shows menu with kernel options
  • Loads selected kernel + initramfs (temporary root filesystem) into RAM
  • Passes boot parameters to kernel
  • Can handle multiple OS boot options
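
As a practical aside (standard GRUB 2 tooling on Debian/Ubuntu, not part of the mnemonic), the menu and kernel options come from a generated configuration file that is rebuilt after editing the defaults:

sudo nano /etc/default/grub   # adjust defaults such as GRUB_TIMEOUT or kernel parameters
sudo update-grub              # regenerate /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d/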

K - KERNEL

  • Decompresses itself into RAM
  • Initializes hardware and drivers
  • Uses the initramfs as a temporary root, then mounts the real root filesystem
  • Sets up memory management
  • Starts device detection
  • Creates kernel threads

I - INIT (systemd in modern systems)

  • PID 1 (first process)
  • Reads /etc/inittab (traditional) or unit files (systemd)
  • Sets default runlevel/target
  • Starts essential services in order:
    • System services
    • Network services
    • Display manager
    • User interface (CLI/GUI)
  • Reaches default target state
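
On a systemd machine, the default target and boot timings can be inspected with standard systemctl and systemd-analyze commands (a hedged illustration, not part of the mnemonic):

systemctl get-default                          # show the default boot target (e.g. graphical.target)
sudo systemctl set-default multi-user.target   # example: boot to a text console by default
systemd-analyze                                # rough breakdown of boot time (kernel vs. userspace)
systemd-analyze blame                          # slowest services during the last boot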

 

Key files to remember

/boot/grub/grub.cfg   - GRUB 2 configuration (grub.conf on legacy GRUB)

/etc/systemd/system/  - systemd unit files

/etc/inittab          - Init configuration (traditional SysV init)

Go to Full Article
Nawaz Abbasi

How to Build Custom Distributions from Scratch

2 months 2 weeks ago
by George Whittaker

Introduction

In a world teeming with Linux distributions — from Ubuntu to Arch, Debian to Fedora — the idea of building your own may seem daunting, if not redundant. Yet, for many technologists, enthusiasts, and developers, creating a custom Linux distribution isn't just an exercise in reinvention; it's an act of empowerment. Whether your goal is to tailor a lightweight OS for embedded devices, create a secure workstation, develop an education-focused system, or simply understand Linux more intimately, building your own distribution is one of the most fulfilling journeys in open-source computing.

This guide walks you through every stage of creating your own Linux distribution — from selecting core components to building, customizing, and distributing your personalized operating system.

Understanding the Basics

What is a Linux Distribution?

A Linux distribution (or "distro") is a complete operating system built on the Linux kernel. It includes:

  • Kernel – The core interface between hardware and software.

  • Init System – Handles booting and service management (e.g., systemd, OpenRC).

  • Userland Tools – Basic utilities from projects like GNU Coreutils and BusyBox.

  • Package Manager – Tool to install, upgrade, and remove software (e.g., APT, Pacman, DNF).

  • Optional GUI – A desktop environment or window manager (e.g., GNOME, XFCE, i3).

Why Create Your Own Distribution?

Reasons vary, but common motivations include:

  • Learning – Deepen your understanding of system internals.

  • Performance – Remove bloat for a leaner, faster system.

  • Branding – Create a branded OS for an organization or product.

  • Customization – Tailor software stacks for specific use-cases.

  • Embedded Applications – Create firmware or OS images for hardware devices.

Planning Your Custom Linux Distro

Define Your Goals

Start by asking:

  • Who is the target user?

  • What hardware should it support?

  • Will it be a desktop, server, or headless system?

  • Should it boot live or be installed?

Choose a Foundation

You can either:

  • Build from scratch: Using projects like Linux From Scratch (LFS).

Go to Full Article
George Whittaker

Linux Data Recovery: How to Salvage Lost or Corrupted Files

2 months 2 weeks ago
by George Whittaker

Data loss is a nightmare for any computer user, and Linux users are no exception. Despite the robust architecture of Linux operating systems, disasters can strike in the form of accidental deletions, corrupted partitions, or failing storage devices. Whether you're a system administrator, developer, or everyday Linux user, understanding how to recover data can be the difference between a minor inconvenience and a major setback.

This guide will walk you through the practical strategies and essential tools for recovering lost or corrupted files on Linux.

Understanding Data Loss on Linux

Common Causes of Data Loss

Data loss can occur for various reasons:

  • Accidental Deletion: Files removed with rm or cleared trash.

  • Filesystem Corruption: Caused by improper shutdowns, power failures, or software bugs.

  • Partition Issues: Misconfigured or overwritten partition tables.

  • Hardware Failures: Hard drive degradation, bad sectors, or failing SSDs.

How Deletion Works on Linux

Linux filesystems like ext4 don’t immediately erase data when a file is deleted. Instead, the filesystem marks the file's space as free. Until that space is overwritten, the data may be recoverable. This behavior is the cornerstone of most recovery techniques.

First Steps After Data Loss

The most critical step is to minimize system activity on the affected drive. Any write operation can potentially overwrite recoverable data.

Disconnect and Mount Read-Only

If the loss happened on a secondary drive, physically disconnect it and mount it read-only on another machine:

sudo mount -o ro /dev/sdX1 /mnt/recovery

Create a Disk Image

Use tools like dd or ddrescue to create a complete image of the drive for analysis:

sudo dd if=/dev/sdX of=/mnt/external/backup.img bs=4M status=progress

Or with ddrescue, which handles read errors more gracefully:

sudo ddrescue /dev/sdX /mnt/external/recovery.img /mnt/external/logfile

Work from the image to preserve the original drive.
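
A hedged sketch of attaching the image read-only so the original disk is never touched (the loop device name printed will vary; paths are the ones used above):

sudo losetup --find --show --read-only -P /mnt/external/backup.img   # prints a loop device, e.g. /dev/loop0
sudo mount -o ro /dev/loop0p1 /mnt/recovery                          # mount the first partition from the image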

Boot from a Live Environment

To avoid using the target system, boot into a Live Linux distribution like:

  • SystemRescueCD – tailored for system repair.

  • Ubuntu Live CD – user-friendly and widely available.

Go to Full Article
George Whittaker

Setting Up a Secure Mail Server with Dovecot on Ubuntu Server

2 months 3 weeks ago
by George Whittaker

Introduction

Email remains a cornerstone of modern communication. From business notifications to personal messages, having a robust and reliable mail server is essential. While cloud-based solutions dominate the mainstream, self-hosting a mail server offers control, customization, and learning opportunities that managed services can't match.

In this guide, we will explore how to set up a secure and efficient mail server using Dovecot on an Ubuntu Server. Dovecot is a lightweight and high-performance IMAP and POP3 server that provides secure access to mailboxes. When paired with Postfix, it forms a powerful mail server stack capable of sending and receiving messages seamlessly.

Whether you're a system administrator, a DevOps enthusiast, or simply curious about running your own mail infrastructure, this article provides a deep dive into configuring Dovecot on Ubuntu.

Prerequisites

Before we dive into configuration and deployment, ensure the following requirements are met:

  • Ubuntu Server (20.04 or later recommended)

  • Root or sudo access

  • Static IP address assigned to your server

  • Fully Qualified Domain Name (FQDN) pointing to your server

  • Proper DNS records:

    • A record pointing your domain to your server IP

    • MX record pointing to your mail server’s FQDN

    • Optional: SPF, DKIM, and DMARC for email authentication
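
Once the records are published, you can verify them from the server with dig, assuming it is installed (example.com is a placeholder domain):

dig +short A mail.example.com   # should print the server's public IP
dig +short MX example.com       # should print e.g. "10 mail.example.com."
dig +short TXT example.com      # SPF (and later DMARC) policies appear here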

You should also ensure that your system is up-to-date:

sudo apt update && sudo apt upgrade -y

Understanding the Mail Server Stack

A modern mail server is composed of several components:

  • Postfix: SMTP server responsible for sending and routing outgoing mail.

  • Dovecot: Handles retrieval of mail via IMAP/POP3 and secure authentication.

  • SpamAssassin / ClamAV: For filtering spam and malware.

  • TLS/SSL: Provides encrypted communication channels.

Here's how they work together:

  1. Postfix receives email from external sources.

  2. It stores messages into local mailboxes.

  3. Dovecot lets users access their mail securely using IMAP or POP3.

  4. TLS/SSL encrypts the entire process, ensuring privacy.

Step 1: Installing Postfix and Dovecot

Install Postfix

sudo apt install postfix -y

During installation, you will be prompted to choose a configuration. Select:
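
For a typical standalone server, the appropriate choice at this prompt is "Internet Site", with the system mail name set to your server's FQDN (a hedged note based on standard Postfix packaging rather than this excerpt). The Dovecot half of this step can be sketched with the standard Ubuntu package names:

sudo apt install dovecot-core dovecot-imapd dovecot-pop3d -y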

Go to Full Article
George Whittaker

Debugging and Profiling Linux Applications with GDB and strace

2 months 3 weeks ago
by George Whittaker

Debugging and profiling are critical skills in a developer's toolbox, especially when working with low-level system applications. Whether you're tracking down a segmentation fault in a C program or understanding why a daemon fails silently, mastering tools like GDB (GNU Debugger) and strace can dramatically improve your efficiency and understanding of program behavior.

In this guide, we’ll dive deep into these two powerful tools, exploring how they work, how to use them effectively, and how they complement each other in diagnosing and resolving complex issues.

The Essence of Debugging and Profiling

What is Debugging?

Debugging is the systematic process of identifying, isolating, and fixing bugs—errors or unexpected behaviors in your code. It’s an integral part of development that ensures software quality and stability. While high-level languages may offer interactive debuggers, compiled languages like C and C++ often require robust tools like GDB for line-by-line inspection.

What is Profiling?

Profiling, on the other hand, is about performance analysis. It helps you understand where your application spends time, which functions are called frequently, and how system resources are being utilized. While GDB can aid in debugging, strace provides a view of how a program interacts with the operating system, making it ideal for performance tuning and root cause analysis of runtime issues.
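
As a hedged taste of strace in practice (the flags are standard strace options; myapp and the PID are placeholders):

strace -o trace.log ./myapp                 # record every system call to trace.log
strace -c ./myapp                           # summary: counts and time spent per system call
strace -e trace=openat,read,write ./myapp   # limit output to selected system calls
strace -p 1234                              # attach to an already running process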

Getting Hands-On with GDB

What is GDB?

GDB is the standard debugger for GNU systems. It allows you to inspect the internal state of a program while it’s running or after it crashes. With GDB, you can set breakpoints, step through code, inspect variables, view call stacks, and even modify program execution flow.

Preparing Your Program

To make your program debuggable with GDB, compile it with debug symbols using the -g flag:

gcc -g -o myapp myapp.c

This embeds symbol information like function names, variable types, and line numbers, which are essential for meaningful debugging.

Basic GDB Commands

Here are some fundamental commands you'll use frequently:

gdb ./myapp          # Start GDB with your program
run                  # Start the program inside GDB
break main           # Set a breakpoint at the 'main' function
break filename:line  # Break at a specific line
next                 # Step over a function
step                 # Step into a function
continue             # Resume program execution
print varname        # Inspect the value of a variable
backtrace            # Show the current function call stack
quit                 # Exit GDB

Go to Full Article
George Whittaker

Debian Package Management: Aptitude vs. Apt-Get in Ubuntu

3 months ago
by George Whittaker

Package management is at the heart of every Linux system. It’s what makes installing, updating, and managing software on Linux-based distributions not just possible but streamlined and elegant. For users of Debian and its popular derivative Ubuntu, two powerful tools often stand at the center of debate: apt-get and aptitude. Though both are capable of managing packages effectively, they have unique characteristics that make them better suited to different use cases.

This article provides a comparison of apt-get and aptitude, helping you understand their roles, differences, and when to use one over the other.

Understanding the Debian Package Management Ecosystem

Before diving into the specifics, it's helpful to understand the ecosystem in which both tools operate.

What is a Package Manager?

A package manager is software that automates the process of installing, upgrading, configuring, and removing software packages from a computer. In Debian-based systems, packages are distributed in .deb format.

The APT System

APT, or Advanced Package Tool, is the foundation of package management in Debian-based systems. It works with core components such as:

  • dpkg – the base tool that installs and manages .deb files

  • apt-get / apt – command-line front-ends for retrieving and managing packages from repositories

  • apt-cache – used for searching and querying package information

  • aptitude – a higher-level package manager that interacts with APT and dpkg under the hood
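
A brief hedged illustration of those layers working together (the package name is a placeholder):

dpkg -l vim                  # query the dpkg database directly
apt-cache policy vim         # show the candidate version and which repository it comes from
sudo apt-get install vim     # apt-get resolves dependencies, then drives dpkg
sudo aptitude safe-upgrade   # aptitude's conservative full-system upgrade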

What is apt-get?

A Brief History

apt-get has been a trusted part of Debian since the late 1990s. It was designed to provide a consistent command-line interface to the APT system and has been widely used in scripts and system automation.

Core Features
  • Handles package installation, upgrade, and removal

  • Fetches and resolves dependencies automatically

  • Interacts directly with APT repositories

Common Commands

Here are some frequently used apt-get commands:
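
(A representative sketch of standard apt-get usage; the package name is a placeholder.)

sudo apt-get update          # refresh package lists from the configured repositories
sudo apt-get install <pkg>   # install a package and its dependencies
sudo apt-get upgrade         # upgrade installed packages without removing anything
sudo apt-get dist-upgrade    # upgrade, allowing additions/removals to satisfy dependencies
sudo apt-get remove <pkg>    # remove a package, keeping its configuration files
sudo apt-get purge <pkg>     # remove a package together with its configuration
sudo apt-get autoremove      # remove dependencies that are no longer needed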

Go to Full Article
George Whittaker

Ubuntu Unity vs. GNOME: Choosing the Right Ubuntu Experience for Your Workflow

3 months 1 week ago
by George Whittaker

Ubuntu is one of the most popular Linux distributions, renowned for its ease of use, extensive community support, and frequent updates. While the core of Ubuntu remains consistent, the desktop environment—what users interact with visually—can vary. Two prominent options for Ubuntu users are Unity and GNOME. Each offers a distinct experience with unique design philosophies, features, and workflows.

Whether you're a seasoned Linux user or a curious newcomer, understanding the differences between Unity and GNOME can help you tailor your Ubuntu setup to better suit your needs. This article explores both environments to help you make an informed choice.

A Tale of Two Desktops: History and Evolution

Unity: Canonical's Custom Vision

Unity was first introduced by Canonical in 2010 with the release of Ubuntu 10.10 Netbook Edition. It was developed to create a consistent user experience across desktop and mobile devices, long before convergence became a buzzword.

Unity became Ubuntu’s default desktop starting with Ubuntu 11.04. Its vertical launcher, global menu, and Dash search aimed to improve efficiency and streamline user interaction. However, despite its innovation, Unity had its critics. Performance issues on lower-end hardware and resistance to change from GNOME users caused friction in the community.

In 2017, Canonical made the unexpected decision to abandon Unity development and return to GNOME, starting with Ubuntu 17.10. But Unity didn’t disappear—it was adopted by the open source community and lives on in the form of Ubuntu Unity, an official Ubuntu flavor.

GNOME: The Linux Standard

GNOME is one of the oldest and most respected desktop environments in the Linux ecosystem. Launched in 1999, it focuses on simplicity, accessibility, and ease of use. The release of GNOME 3 in 2011 marked a major redesign, introducing GNOME Shell, which departed from the traditional desktop metaphor in favor of a more modern and minimal interface.

GNOME became the default Ubuntu desktop again in 2017 and has since seen continuous refinement. With support from major distributions like Fedora, Debian, and Ubuntu, GNOME enjoys a broad user base and robust development activity.

Interface Design and User Experience

Unity: Efficiency Meets Innovation

Unity's interface is distinct and immediately recognizable. Here are some key components:

  • Launcher (Dock): Positioned vertically on the left side, the Launcher holds pinned and running applications. It’s space-efficient and easily navigated via mouse or keyboard.

Go to Full Article
George Whittaker

The Power of Linux Shell Environment Variables

3 months 1 week ago
by George Whittaker

If you're working in a Linux environment, chances are you've encountered environment variables—even if you didn’t realize it at the time. They quietly power much of what goes on behind the scenes in your shell sessions, influencing everything from what shell prompt you see to which programs are available when you type a command. Whether you're an experienced sysadmin or a new Linux user, mastering environment variables is essential for customizing and controlling your shell experience.

In this guide, we'll take a deep dive into environment variables in the Linux shell. By the end, you'll not only know how to view and set these variables, but also how to persist them, use them in scripts, and troubleshoot issues effectively.

What Are Environment Variables?

At a basic level, environment variables are dynamic named values that affect the behavior of running processes on your Linux system. Think of them as configuration settings that your shell (like Bash or Zsh) and applications refer to in order to understand how they should operate.

For example:

  • The PATH variable tells the shell where to look for executable files.

  • The HOME variable stores the path to your home directory.

  • The LANG variable defines your system’s language and character encoding.

Environment Variables vs Shell Variables

There is an important distinction between shell variables and environment variables:

  • Shell variables are local to the shell session in which they are defined.

  • Environment variables are shell variables that have been exported, meaning they are inherited by child processes spawned from the shell.

Viewing Environment Variables

Before you can modify or use environment variables, it's important to know how to inspect them.

View All Environment Variables

printenv

or

env

Both commands list environment variables currently set for the session.

View a Specific Variable

echo $HOME

This will display the current user's home directory.

View All Shell Variables

set

This command displays all shell variables and functions. It's broader than printenv.

Setting and Exporting Environment Variables

You can define your own variables or temporarily change existing ones within your shell.
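
A minimal hedged sketch of the usual pattern (variable names and paths are placeholders):

MY_PROJECT_DIR="$HOME/projects/demo"   # shell variable: visible only to the current shell
export MY_PROJECT_DIR                  # export it so child processes inherit it
export EDITOR=vim                      # define and export in one step
export PATH="$PATH:$HOME/bin"          # temporarily extend PATH for this session
echo "$MY_PROJECT_DIR"                 # verify the value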

Go to Full Article
George Whittaker

Git on Linux: A Beginner’s Guide to Version Control and Project Management

3 months 2 weeks ago
by George Whittaker

Version control is a fundamental tool in modern software development, enabling teams and individuals to track, manage, and collaborate on projects with confidence. Whether you're working on a simple script or a large-scale application, keeping track of changes, collaborating with others, and rolling back to previous versions are essential aspects of development. Among various version control systems, Git has emerged as the most widely used and trusted tool — especially on Linux, where it integrates seamlessly with the system's workflow.

This guide will walk you through the basics of Git on Linux, explaining what Git is, how to install it, and how to start using it to manage your projects efficiently. Whether you're a new developer or transitioning from another system, this comprehensive introduction will help you get started with Git the right way.

What Is Git and Why Use It?

Git is a distributed version control system (DVCS) originally created by Linus Torvalds in 2005 to support the development of the Linux kernel. It allows developers to keep track of every change made to their source code, collaborate with other developers, and manage different versions of their projects over time.

Key Features of Git:
  • Distributed Architecture: Every user has a full copy of the repository, including its history. This means you can work offline and still have full version control capabilities.

  • Speed and Efficiency: Git is optimized for performance, handling large repositories and files with ease.

  • Branching and Merging: Git makes it easy to create and manage branches, allowing for efficient parallel development and experimentation.

  • Integrity and Security: Every change is checksummed and stored securely using SHA-1 hashing, ensuring that your project’s history cannot be tampered with.

Compared to older systems like Subversion (SVN) or CVS, Git offers far greater flexibility and is better suited to both small personal projects and large collaborative efforts.

Installing Git on Linux

Installing Git on Linux is straightforward thanks to package managers available in every major distribution.

For Ubuntu/Debian-based Systems:

sudo apt update
sudo apt install git

For Fedora:

sudo dnf install git

For Arch Linux:

sudo pacman -S git

After installation, verify it with:

git --version
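
A hedged sketch of the usual first-time setup that follows (the name, email, and project name are placeholders):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global init.defaultBranch main   # optional: default branch name for new repositories
git init my-project                           # create a repository to experiment with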

Go to Full Article
George Whittaker

Mastering Linux File Permissions and Ownership

3 months 2 weeks ago
by George Whittaker

In the world of Linux, where multi-user systems and server security are foundational principles, understanding file permissions and ownership is crucial. Whether you're a beginner exploring your first Linux distribution or a seasoned system administrator managing critical servers, knowing how permissions work is key to ensuring the integrity, privacy, and functionality of your system.

This guide will take you deep into the core of Linux file permissions and ownership—what they are, how they work, how to modify them, and why they matter.

Why File Permissions and Ownership Matter in Linux

Linux is built from the ground up as a multi-user operating system. This means:

  • Multiple users can operate on the same system simultaneously.

  • Different users have different levels of access and control.

Without a permissions system, there would be no way to protect files from unauthorized access, modification, or deletion. File permissions and ownership form the first layer of defense against accidental or malicious activity.

Linux Permission Basics: Read, Write, Execute

Each file and directory in Linux has three basic types of permissions:

  • Read (r) – Permission to view the contents of a file or list the contents of a directory.

  • Write (w) – Permission to modify a file or create, rename, or delete files within a directory.

  • Execute (x) – For files, allows execution as a program or script. For directories, allows entering the directory (cd).

Permission Categories: User, Group, Others

Permissions are assigned to three distinct sets of users:

  • User (u) – The file's owner.

  • Group (g) – A group associated with the file.

  • Others (o) – Everyone else.

So for every file or directory, Linux evaluates nine permission bits, forming three sets of rwx, like so:

rwxr-xr--

This breakdown means:

  • rwx for the owner

  • r-x for the group

  • r-- for others
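
A hedged illustration of applying exactly that permission set (754 in octal corresponds to rwxr-xr--; the file, user, and group names are placeholders):

chmod 754 script.sh                # octal form: rwx (7) for owner, r-x (5) for group, r-- (4) for others
chmod u+x,g-w,o-w script.sh        # symbolic form: change individual bits instead
chown alice:developers script.sh   # set owner and group (typically needs sudo)
ls -l script.sh                    # confirm the resulting permission string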

Understanding the Permission String

When you list files with ls -l, you’ll see something like this:

-rwxr-xr-- 1 alice developers 4096 Apr 4 14:00 script.sh

Let’s dissect it:

Go to Full Article
George Whittaker