How I switched from macOS to Pop!_OS — and why an AI made all the difference
I'm a software engineer with 20+ years of experience. I've been running macOS for most of that time. Not because I loved it, but because it worked. Linux was always the system I wanted to use but couldn't justify the time investment to make it work.
That changed this year. Not because Linux suddenly got better at hardware support (it did, a bit). Not because the software ecosystem matured (it didn't need to — more on that later). It changed because I now have an AI that can debug kernel driver issues at 2am without complaining.
But first, let me tell you why I left macOS.
macOS: Death by a Thousand Paper Cuts
Window management on macOS is a dumpster fire. I don't use that phrase lightly — I've been patient with it for years. But it's 2026 and here we are.
I use a vertical monitor. macOS can split two fullscreen apps on one screen — but only side by side. On a vertical monitor, that gives you two windows that are comically tall and unusably narrow. This has been the case for years. There's no option to stack them vertically. There's no workaround. It's just broken for this use case, and nobody at Apple seems to care.
Programs open on random monitors. If you use multiple desktops or fullscreen apps, launching something yanks you to a random monitor and a random desktop. It's disorienting every single time.
Spotlight — the app launcher on a machine with an M1 Max, a chip that would have ranked among the world's top supercomputers twenty years ago — needs multiple seconds after typing two letters to match against a list of maybe 50 application names. Multiple seconds. For string matching. On a supercomputer.
I run multiple identical external monitors. After sleep, macOS swaps the windows between them. Every. Single. Time. You think Linux is the only OS that can't get multi-monitor right? Try macOS.
And then, by contrast, there's COSMIC — Pop!_OS's new desktop environment with a built-in tiling window manager. Keyboard-driven window control, workspaces that make sense, tiling that actually tiles. After a week with it, macOS feels like a fossil.
So the motivation was there. The question was: can I actually survive on Linux?
My 15-Year Linux History (in 4 Stages)
I didn't jump in blind. I used Linux until 15 years ago, when I got tired of regularly spending days fiddling with my OS just to keep basic things from falling apart. But I've been test-driving it every couple of years since: buy a laptop, install Linux, see how far I get. My experiences roughly followed these stages:
Stage 1: Hardware support is bad, software is mediocre. Hours of wrestling with wpa_supplicant to get WiFi working. Bluetooth? Forget it. You have a dGPU and want to switch between integrated and discrete? Enjoy your evening of X11 and kernel debugging. And the battery life — let's not talk about the battery life.
Stage 2: Hardware support is okay, software is still mediocre. Just when an email client reaches maturity, development stalls and someone starts a new one from scratch. GNOME introduces Vala as the new language for desktop apps? Sure, let's rewrite everything in Vala. Result: unmaintained app in language X, half-baked app in Vala. Classic developer disease — finished, working products are boring, let's throw them away and rebuild.
Stage 3: Hardware support is good, software becomes irrelevant. Because I barely use native software anymore. Everything runs in the browser. Email? Webmail. Calendar? Browser. Messaging? WhatsApp Web, Discord, Slack — all browser or Electron (which is also a browser). My daily drivers are SaaS apps in Firefox, VS Code (also basically a browser), and the terminal. Those three things work great on Linux. So forget native software — the OS just needs to handle hardware, a terminal, and a browser. But I still stumbled over the usual traps: suspend/wake killing USB controllers, webcam not working, microphone silent, external monitors refusing to connect over USB-C. All solvable in theory, all weeks of frustration in practice. This is where I'd usually give up and return the laptop.
Stage 4 (now): Hardware and software status unchanged. But generative AI is good enough to help me tackle those small but deeply technical problems without disappearing into the basement for weeks and emerging as a full-time Linux sysadmin. This is the stage that finally got me to not return that laptop again.
My 4-Day Setup Journey (Which Would Have Been 4 Weeks a Few Years Ago)
The Webcam
My Dell 14 Premium uses Intel's IPU6 camera subsystem where the sensor's power management runs over a USB bridge. New hardware, incomplete mainline support. The stock kernel couldn't even detect the sensor. The error messages in dmesg were cryptic — GPIO pin error, sensor probe failures, ACPI dependency issues.
Claude and I went through four kernel versions, tracing the issue from ACPI tables through GPIO driver initialization to sensor probing. The AI brought deep Linux internals knowledge — reading kernel source, understanding driver initialization sequences, interpreting ACPI dumps. I brought the judgment calls: when I noticed we were going in circles on the mainline kernel, I researched Dell's Ubuntu-certified models with similar hardware. That led us to the Dell Pro MA14250 — which didn't hand us a fix on a silver platter, but shifted the problem from "hopelessly broken" to "solvable with the right kernel version." From there, it was a matter of testing OEM kernel builds until we found one where all the pieces aligned.
After getting the kernel right, we still had to fix the upside-down camera image (a v4l2-relayd config), the silent microphone (wrong alsa-ucm-conf version), and a handful of smaller issues. Each sub-problem would have been its own research rabbit hole.
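For anyone retracing this kind of debugging, the inspection loop looked roughly like the guarded sketch below. The grep patterns and tool names are my assumptions for an IPU6 machine, not a transcript of the actual session — adjust for your hardware.

```shell
#!/usr/bin/env bash
# Illustrative IPU6 camera diagnostics. Each step is guarded so it is
# skipped cleanly when the tool is missing or dmesg is restricted.
check_camera() {
    # Kernel messages from the IPU6 stack and the INT3472 power-management glue:
    dmesg 2>/dev/null | grep -iE 'ipu6|int3472' \
        || echo "no IPU6 messages (or dmesg restricted)"
    # V4L2 devices the kernel actually exposed:
    command -v v4l2-ctl >/dev/null 2>&1 && v4l2-ctl --list-devices
    # Sound cards, to check which alsa-ucm-conf profile the mic ended up with:
    command -v aplay >/dev/null 2>&1 && aplay -l
    echo "camera check done"
}
check_camera
```

None of these commands change anything; they only read state — which is exactly what you want to hand an AI as context before touching a kernel.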
The External Monitor
After a suspend/wake cycle, one of my two identical USB-C monitors went dark. First assumption: USB port is dead. Claude helped me work through the obvious candidates — USB controller resets, xHCI module reloading, PCI-level resets. None of it worked, but each attempt systematically ruled out potential causes. We checked DRM connector states, TypeC port status, Thunderbolt domains — and found that the physical USB-C connection was established for both monitors, but the DisplayPort tunnel was only being created for one of them. The Thunderbolt stack had failed to rebuild the DP tunnel after suspend.
In the past, this is exactly where I would have given up and returned the laptop. "Something with the display output is broken, probably hardware, back to macOS." Instead, the systematic debugging — even the steps that didn't directly fix anything — narrowed the problem from "something is broken" to "this specific subsystem didn't properly reinitialize after sleep," which pointed us to a fix: forcing a Thunderbolt state refresh and reconnecting the monitor.
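Condensed into commands, that narrowing-down looked something like the sketch below. The sysfs paths and tools are standard Linux interfaces, but this is an illustrative reconstruction, not our exact script — and the commented-out PCI rescan at the end is one generic lever for forcing re-enumeration, not necessarily the precise Thunderbolt refresh we used.

```shell
#!/usr/bin/env bash
# Illustrative post-suspend display diagnostics, guarded so each step is
# skipped when the tool or path is absent.
check_displays() {
    # DRM connector states: each external output should read "connected".
    for s in /sys/class/drm/card*-DP-*/status; do
        [[ -r "$s" ]] && printf '%s: %s\n' "$s" "$(cat "$s")"
    done
    # Thunderbolt/USB4 devices and their state (needs bolt installed):
    command -v boltctl >/dev/null 2>&1 && boltctl list
    # USB-C partner entries: confirms the physical connection is up even
    # when the DisplayPort tunnel is missing.
    for p in /sys/class/typec/port*-partner; do
        [[ -d "$p" ]] && echo "typec partner present: $p"
    done
    echo "display check done"
}
check_displays
# Generic lever to force re-enumeration of the bus (uncomment to try):
# echo 1 | sudo tee /sys/bus/pci/rescan
```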
The Keyboard Layout
I use a US keyboard layout but wanted less clumsy shortcuts for German umlauts and proper text-cursor control — jumping to line start/end, word-wise movement, that sort of thing. We went through xkb custom layouts (which don't apply the same way on Wayland), COSMIC's own config system (which ignores gsettings/dconf), and finally landed on keyd — a kernel-level key remapper that works everywhere, including the bare TTY console. When the first config didn't work because keyd had recently shipped a v2 with breaking syntax changes, I pointed Claude at the current manpage and we had it sorted in minutes.
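For reference, a keyd v2 config is a plain INI-style file at /etc/keyd/default.conf. The fragment below is a minimal sketch of the idea, not my actual config: the section and key names are real keyd syntax, but the specific bindings are illustrative, and unicode macros like macro(ä) depend on your keyd version and its compose setup being installed.

```ini
# /etc/keyd/default.conf — illustrative sketch (keyd v2 syntax)
[ids]
*

[main]
# Hold Caps Lock for a navigation layer; tapping it still sends Escape.
capslock = overload(nav, esc)
# Hold Right Alt for umlauts.
rightalt = layer(umlauts)

[nav]
# Cursor control without leaving the home row (example bindings).
h = left
l = right
a = home
e = end

[umlauts]
# Requires keyd's unicode/compose support.
a = macro(ä)
o = macro(ö)
u = macro(ü)
s = macro(ß)
```

Because keyd sits at the evdev level, the same remaps work in Wayland, X11, and the raw console — which is exactly why the three half-working higher-level approaches stopped mattering.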
The Rest
If you've ever set up remote devcontainers — automated environment setup, credential forwarding, dotfiles so your terminal in the container feels like your terminal on the host, port forwarding, tunneling, the whole stack — you can probably imagine how much time an AI saves when it knows all of these tools and how they interact.
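To make the "whole stack" a bit more concrete: much of it collapses into a devcontainer.json. The fields below are standard Dev Container spec fields, but the values — image, ports, script path — are illustrative placeholders, not my actual setup.

```jsonc
// .devcontainer/devcontainer.json — illustrative sketch
{
  "name": "example-devcontainer",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Ports the editor forwards from container to host automatically:
  "forwardPorts": [3000, 5432],
  // Hypothetical setup script run once after the container is created:
  "postCreateCommand": "./scripts/setup.sh",
  // Bind-mount SSH credentials so git inside the container just works:
  "mounts": [
    "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,type=bind"
  ]
}
```

Dotfiles are handled separately (VS Code can clone a dotfiles repository into every container), but the point stands: each of these pieces is documented somewhere, and the AI's value is knowing how they interact.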
Why Claude Changed the Equation
Every fix I applied was based on publicly available information. There are a quadrillion CLI tools for debugging kernel and hardware state. The kernel bug was documented in GitHub issues. The Thunderbolt behavior is in kernel docs. The keyd syntax is in its manpage. None of this is secret knowledge.
But the gap between "the information exists somewhere" and "a normal person can find it, understand it, and apply it correctly" is enormous.
Historically, Linux on the desktop failed for many reasons: missing commercial software, poor hardware vendor support, ecosystem fragmentation. Most of those blockers are fading. Software moved to the browser. Hardware support improved dramatically. Fragmentation matters less when your workflow is terminal + browser + VS Code. What remains is the long tail of small, deeply technical issues — the webcam that needs a specific kernel, the monitor that dies after suspend, the keyboard layout that requires three different config systems to get right. And that long tail is a place where generative AI proved to be of incredible help.
It's interactive — it responds to my specific dmesg output, not a generic wiki page. It holds context across a multi-day debugging session, remembering which kernels we already tried and what each one broke. It bridges knowledge domains — the webcam issue alone required understanding ACPI tables, kernel driver initialization, GPIO pin configurations, USB bridge architecture, media pipeline setup, and audio codec profiles. No single human is an expert in all of those simultaneously, but the AI navigates comfortably across all of them. And it's patient. It never says "RTFM." It doesn't close your issue as a duplicate.
But it's not a replacement for the human in the loop. It gets things wrong. It suggests approaches based on outdated information. It sometimes goes in circles. The human needs to maintain the big picture, question whether the current direction makes sense, and know when to reframe the problem entirely. The combination is what works.
The Vision
Imagine this AI assistance built into the desktop itself. Your webcam doesn't work — the system reads its own dmesg, identifies the issue, and either fixes it or walks you through the fix in plain language. Your monitor doesn't reconnect after sleep — the Thunderbolt workaround applies automatically before you even notice. Every fix I described was a series of terminal commands that could be scripted. The AI's role was figuring out which commands to run — and that's exactly what AI is good at. And it's a workflow that works especially well on Linux.
AI won't make Linux perfect. But for me, it is the bridge between Linux's current state — where everything is fixable but rarely easy — and a genuinely usable desktop experience for everyone.
Maybe 2026 really is the year.
Disclaimer: Of course, Claude also helped write this article. I'm not a native English speaker and I sound considerably less eloquent in English than in German — Claude fixed that. It also reviewed multi-day debugging conversations and condensed them to the relevant parts so you don't have to read the equivalent of a book. It's helpful. Get used to it.