Hello! I am Cem, and this is my little blog that I rarely ever use. I am a Software Engineering student at CODE University in Berlin, the maintainer of Carbs Linux, and I occasionally compose music (mostly jazz and classical).
My repositories are really scattered, so see the software section if you are looking for one of my works.
This page is curl-friendly! Just replace ‘.html’ with ‘.txt’, and you will be able to view this site in your favourite pager! In your terminal, simply type:
curl -sL cemkeylan.com/index.txt | less
Date: Oct 12 2021
With the growing popularity of s6-rc, I have recently decided to check out execline. It is a shell-like scripting language built around chaining commands together. There is no interpreter, and there is no ‘built-in’ functionality, even though the docs might make you think there is. Execline is best described as a suite of tools that imitate the functionality of a shell.
There is a ton of information on execline’s page, especially on why skarnet prefers execline over /bin/sh. Those points are mostly valid: shells are a nightmare, and they are bad at complying with the POSIX specification. What I don’t agree with on the ‘why not sh’ page, however, is the part about performance. Even though execline does not have an interactive shell implementation of its own, it is still much slower than other shells simply by design. Since execline is built on process chaining, it requires spawning new processes for things as basic as variable declaration. Variable manipulation is the cheapest operation you would expect from a shell, but in execline, every operation costs the same regardless of how simple it is.
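For example, even something as small as defining a variable and printing it goes through several execs. This is only a rough execlineb sketch to illustrate the point (not a benchmark, and the interpreter path may differ on your system):

#!/bin/execlineb -P
# execlineb parses the script and execs into define; define substitutes
# ${msg} in the rest of the command line and execs into foreground;
# foreground forks echo, waits for it, then execs into the final echo.
define msg "hello"
foreground { echo ${msg} }
echo "done"

A POSIX shell does the equivalent with a builtin assignment and never has to leave its own process.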
Throughout my weeks of toying around with execline, I have come to the conclusion that execline is only really better for simple scripts. Execline is as usable as any shell out there, but even with its advantages over sh, it only wins when the script stays simple. Execline is really good in certain specific situations, such as service scripts (as used in s6-rc), or places where you were already going to chain a couple of commands together. After all, skarnet already acknowledges these limitations on the execline website.
Execline can be leveraged the way s6-rc does when compiling service databases, together with other utilities, but I don’t think it can be used as a shell replacement itself. It’s not the next shiny thing to jump on and replace all your shell scripts with (unless your shell scripts are really basic). It has neither the flexibility nor the performance of the shell for scripts that go even a little beyond the “basic”.
sysmgr in C
Date: Oct 02 2020
For a while, I have been thinking about implementing sysmgr in C. It started with thinking about the inefficiencies of sysmgr. POSIX sh isn’t particularly designed to give you full control over child processes; it only has basic job management features, which are just enough for sysmgr to do its job. The biggest pain is having to use tools like sleep(1) and kill(1). Calling sleep every second and using the kill command to check whether a process is alive is extremely inefficient. Some shells do include these commands as built-ins, but POSIX does not require that, and one should never take it for granted.
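To illustrate, the core of such a polling loop looks roughly like this (just a sketch of the pattern described above, not sysmgr’s actual code; ./run stands in for a service’s run script):

#!/bin/sh
# Naive supervision loop: start the service, then poll once a second to
# see whether it is still alive, and restart it when it dies. Without
# built-in sleep/kill, every iteration forks external processes just to
# wait and to check the pid.
while :; do
    ./run & pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        sleep 1
    done
done

Each of those kill and sleep calls is potentially a fork and exec, once per second, per supervised service.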
Lately, I have been adding C utilities to sysmgr to make it more efficient. This defeats the initial purpose of sysmgr, which was to be a service manager in pure POSIX shell. My main goal, however, is making sysmgr efficient and simple. It mostly imitates runit without dealing with all the complexity of the overthought supervise directory or the logging machinery. Most of that can be handled by the service script itself anyway. That’s why, instead of this ugly C/POSIX sh hybrid, I decided to implement it all in C.
I am not a C expert or anything; I am learning a lot as I write the program. I want it to be C99 and portable (to the BSDs). It’s currently not functional at all, but you can see its current state here.
EDIT Oct 10 2020:
I made the initial release of the C version of sysmgr, which is more stable and performant than the POSIX sh version. It still has rough edges, but it is completely usable.
Date: Sep 08 2020
A few days ago, in the #kisslinux IRC channel, jedahan mentioned implementing a notion of trust in the package manager. While I was intrigued by the idea initially, I decided not to implement it for the upcoming 4.0.0 release. That is because the package manager and the distribution itself are already centered on trust. However, the idea made me think a lot about “trust” in distributed environments.
Who and what would you trust? Would you trust Microsoft? Would you trust a binary? Would you only trust a so-called “reproducible” build?
Jedahan mentioned the possibility that a repository maintainer could create a package that would normally come from the distribution itself, in order to mess your system up. He suggested a “source” system where you know where each package comes from, so the package manager can warn you when the source of a package changes. As I said, this idea intrigued me at first, but here is why it is complex and unnecessary.
The package manager would warn you every time you fork a package and apply your own changes. With both kiss and CPT, you already see git logs when the repositories are fetched. Those logs show each and every file that has been edited, added, removed, or renamed. CPT also has support for rsync, which is called verbosely. While not as descriptive, rsync also shows what has been changed, added, and deleted.
Also, back in April, I added submodule support to my fork of kiss, which Dylan adapted on May 19. I added this feature because it solves a similar issue: I want to pull only some packages from a repository and leave out the rest. This way, I am the one in control of what goes into my repositories.
Minor annoyances aside, would this solve the issue of trust? Maybe this evil repository maintainer decides to botch a package that was already in their repository and not provided by your distribution. Should we then track the source files and build files as well? But those change all the time.
I believe that this environment is as trustworthy as it can get: a repository system with package build instructions that are easy to read and understand, easy to check the history of, easy to limit, and easy to allow. KISS and Carbs Linux each have a single maintainer; I maintain Carbs and Dylan maintains KISS. You are not trusting an organization, you are trusting individuals you can easily converse with on the internet. The same goes for most community repository maintainers out there. Trying to implement more would be “security theater” that would be a hassle for the maintainers, the users, and the package manager, without a noticeable benefit to anyone.
Date: Aug 28 2020
I have this script named wpa_add, which I use to easily add new WiFi networks when I am outside, usually in a cafe. I wrote this script because I don’t like the way my girlfriend looks at me while thinking that I am an absolute moron for not using Windows 10, and that the entirety of Linux is a circlejerk. It is only natural that she thinks this way: I use my own distribution that doesn’t have things like dbus, NetworkManager, or one of those common desktop environments. You could install them by creating a simple package, but I am happy not to have any of them on my system.
This script uses wpa_supplicant to add a new network and reconfigure. It uses dmenu for input; however, you could replace the dmenu calls with command line prompts. I am making the following assumptions:
- You can manipulate wpa_supplicant without root access.
- The configuration is at /etc/wpa_supplicant.conf.
- You can edit /etc/wpa_supplicant.conf.
If you want to ensure the above, just do the following (as root):
# Add yourself to the wheel group if you aren't already.
adduser user wheel
# Change the ownership of /etc/wpa_supplicant.conf
chown root:wheel /etc/wpa_supplicant.conf
# Make sure the configuration can be edited by the wheel group.
chmod 664 /etc/wpa_supplicant.conf
Your wpa_supplicant configuration must include the following line (or something similar):
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
Here is the script:
#!/bin/sh
# Script to add wpa_supplicant networks through dmenu
if [ "$1" ]; then
    name=$1
else
    name=$(dmenu -p "Please enter network name, leave empty if you want to search" <&-)
fi

[ "$name" ] || {
    wpa_cli scan
    name=$(
        wpa_cli scan_results | sed 1,2d | while read -r _ _ _ _ ssid _; do
            # Hidden wifi are not to be returned
            [ "$ssid" ] || continue
            echo "$ssid"
        done | sort -u | dmenu -l 10 -p "Please choose WiFi")
    [ "$name" ] || exit 1
}

pass=$(dmenu -P -p "Please enter your password, leave empty if the network has open access.")

if [ "$pass" ]; then
    wpa_passphrase "$name" <<EOF>> /etc/wpa_supplicant.conf
$pass
EOF
else
    printf 'network={\n\tssid="%s"\n\tkey_mgmt=NONE\n\tpriority=-999\n}\n' "$name" >> /etc/wpa_supplicant.conf
fi

wpa_cli reconfigure
As I have said, you could do something similar as a command-line-only tool as well. This one uses fzf for WiFi selection.
#!/bin/sh -e
stty="$(stty -g)"
trap "stty $stty" EXIT INT TERM HUP

if [ "$1" ]; then
    name=$1
else
    printf 'Network Name, leave empty if you want to search: '
    read -r name
fi

[ "$name" ] || {
    wpa_cli scan >/dev/null
    name=$(
        wpa_cli scan_results | sed 1,2d | while read -r _ _ _ _ ssid _; do
            # Hidden wifi are not to be returned
            [ "$ssid" ] || continue
            echo "$ssid"
        done | sort -u | fzf --prompt "Please choose WiFi: ")
}

[ "$name" ] || exit 1

stty -echo
printf 'Please enter your password, leave empty if the network has open access.\nPassword: '
read -r pass

if [ "$pass" ]; then
    wpa_passphrase "$name" <<EOF>> /etc/wpa_supplicant.conf
$pass
EOF
else
    printf 'network={\n\tssid="%s"\n\tkey_mgmt=NONE\n\tpriority=-999\n}\n' "$name" >> /etc/wpa_supplicant.conf
fi

wpa_cli reconfigure
These scripts can be found as a gist here.
Date: Aug 28 2020
While I was working on a new initramfs generator for Carbs, I was once again reminded of the advantages of statically linking software. Previously, I was using a really dumb script that basically used the package manager as a library to build the whole initramfs from scratch. That system was completely statically linked, and the whole thing weighed around 1.3MiB.
Now, while rd (the small script that I had written) was good enough for me, it wouldn’t be a good fit to distribute with the system. It doesn’t deal with dynamic binaries, kernel modules, or library installation. So I have written a new script that deals with those (kernel modules aren’t done yet, though).
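The gist of the “deal with dynamic binaries and libraries” part is roughly the following (a simplified sketch under my own assumptions, not the actual generator; the paths and the awk parsing are illustrative):

#!/bin/sh -e
# Copy a dynamically linked binary and the shared libraries it needs
# into an initramfs root directory.
root=${1:-rootfs}
bin=${2:-/sbin/e2fsck}

mkdir -p "$root/bin" "$root/lib"
cp "$bin" "$root/bin/"

# ldd prints lines such as 'libext2fs.so.2 => /lib/libext2fs.so.2 (0x...)';
# grab every absolute path (including the dynamic loader) and copy it over.
ldd "$bin" | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i}' |
while read -r lib; do
    cp "$lib" "$root/lib/"
done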
The issue with build systems today is that binaries are built dynamically unless you build the whole thing statically. As long as shared libraries are present, the binaries will be dynamic as well. That’s why the core repository of Carbs still contains dynamic binaries for gcc, binutils, util-linux, and some other packages.
The size of the new image with exactly the same binaries is a whopping 1.9MiB. While a size increase of 600KiB might not seem like a huge deal, keep in mind that busybox is static in both images, leaving ONLY TWO binaries that I install into the image: fsck and e2fsck. By switching from static to dynamic binaries plus their libraries for only two programs, you need 600KiB more space, and I have been talking about a gzip-compressed cpio archive throughout this whole post.
Date: Aug 12 2020
Most people who don’t use a desktop environment use the startx command to initialize their X windowing system. Now, startx is a shell script that runs the C program xinit, which basically runs xorg-server. Using xinit obviously has some nice perks: it makes some checks and runs your .xinitrc file. We don’t need any of that, though. Here is my X launcher:
#!/bin/sh
# Start the X server on vt1 and run ~/.xinitrc once the server is ready.
export DISPLAY=${DISPLAY:-:0}

# The X server sends SIGUSR1 to its parent as soon as it is ready to
# accept connections, provided it inherits SIGUSR1 set to ignore.
trap "$HOME/.xinitrc" USR1
(
    trap '' USR1
    exec X -keeptty :0 vt1
) &
wait
Keep in mind that your .xinitrc needs to be executable.
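For reference, a minimal executable .xinitrc can be as simple as the following (dwm is just an example; put whatever window manager or session command you actually use):

#!/bin/sh
# Example ~/.xinitrc: exec into the window manager so it becomes the
# process that keeps the X session alive.
exec dwm

Make it executable with chmod +x ~/.xinitrc.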
Date: May 08 2020
Over the years, I have used many, many Linux distributions. The reason I now use a distribution maintained by me is that I am never truly satisfied with other people’s work. Not that they are bad at what they do; it’s just that they don’t share my vision. And I have felt this way with almost every distribution I have used.
Arch Linux itself feels like it has become a ‘meme distribution’. Its user base is a cult-like community that thinks it is superior for using Arch Linux. Now, you might be an Arch user and not be like this; I used Arch for a long time and never felt this way myself. I only see this level of cultism around Arch and systemd.
If you ever call Arch bloated on an online community website, you will get lots of crap for it. But Arch Linux is, in fact, bloated. This isn’t because the base installation has too many packages; it’s because of how they package software:
- Arch enables almost every single option in the package configuration, which means lots of unnecessary dependencies and packages with humongous sizes.
- Pacman is a rather slow package manager, and it is missing an alternatives system. For me, an alternatives system is a must.
If you want to use a better binary distribution, use Void Linux. They have saner package configurations, and the environment just feels more UNIXy. xbps is really fast, and has an alternatives system.
This will be the longer part, because my dislike for Gentoo is greater than my dislike for Arch. If you want to see how NOT to maintain a distribution, check out Gentoo.
I used Gentoo for a few months, and I’m saying this right out of the gate: Portage is the slowest piece of software I have ever used on Linux. Maybe that’s because I deliberately avoid software written in Python, but Portage is most probably the slowest package manager in active use.
Portage depends on Bash, Python, and GNU wget. I got a line count from cloc by running:
find . \( -name '*.sh' -o -name '*.py' \) -exec cloc {} +
The *.sh and *.py files alone add up to over 100k lines of code. Then I got curious and ran cloc against the whole repository. Here is the output:
--------------------------------------------------------------------------------
Language                     files          blank        comment           code
--------------------------------------------------------------------------------
Python                         703          20009          21411         102180
Bourne Shell                    13            643            678           3911
Bourne Again Shell              44            583            434           3172
diff                            17             31            298            574
YAML                             6             32             80            573
XSD                              1             27             27            494
C                                2             56            128            291
make                             1              7              6             19
INI                              1              1              0             15
reStructuredText                 1              5              4              9
XSLT                             1              0              0              5
--------------------------------------------------------------------------------
SUM:                           790          21394          23066         111243
--------------------------------------------------------------------------------
That’s quite a lot.
Portage is a package manager that tries to ease the configuration of packages, but in the process it makes composing packages terribly complex and adds countless Portage configuration options. Configuring your first kernel is literally easier than configuring Portage the way you want. Users just don’t realize that they would be better off doing an LFS build for a much more stable system. My system broke countless times while I was using Gentoo. Maintaining a Gentoo system is honestly harder than maintaining my own distribution.
EAPI is probably the worst thing about the Portage ecosystem. It is the most complex, hardest-to-read, hardest-to-learn packaging system ever made. Creating a USE flag system shouldn’t have been this hard.
Date: Apr 13 2020
To this day, I have tried lots of IDEs and text editors. Visual Studio, PyCharm, Sublime, Notepad++, Vim, Emacs, Pico, Atom, etc. The list goes on. I have even unironically used ed, and ironically used cat for a while.
I have settled down after years and years of “editor-hopping”. I now have three main editors that I use on a daily basis! Yeah, you read that correctly: I use three editors daily. Those are Emacs, busybox vi, and sed.
Emacs is a beast. Defining Emacs as a text editor is wrong; it is a Lisp interpreter with text manipulation abilities.
Now, I do like the concept of Integrated Development Environments. It’s a shame that all of them suck. With Emacs I can fine-tune everything according to my taste, install the packages I need, configure them the way I like. With IDEs you get some nice plugins, and a tiny bit of customization, but that’s it. You get an environment limited by the vision of someone else. Not to mention that most IDEs are proprietary software.
I have stopped using Vim because it is only meant to be a text editor. You can extend its features with plugins, but you can really feel the impact with just a few of them. Vimscript is also really primitive, which is why people write plugins in Python, JavaScript, and such; this slows Vim down even further. Most Emacs packages I have encountered are written in pure Lisp. I have over 70 packages, yet my load time and overall speed are better than when I had Vim with 8 plugins.
Also, let’s not forget that Emacs uses an ancient Lisp dialect.
I mostly use Emacs when I am working on projects. If I just want to make simple changes while I am in the terminal, I pop open the vi provided by busybox. I like that it is fast and featureless; it barely gets the job done, and that’s why I like it.
A few quirks of busybox vi:
- It doesn’t read an rc file; it uses the $EXINIT variable instead.
- The available options are limited. For example, you cannot make the “tab” action insert spaces instead of tabs.
- v/V isn’t implemented in busybox vi.
I use sed when I am making small changes to small files, because it is faster than opening a file, making a change, saving, and exiting. Regular expressions are much faster and more efficient for such things.
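For example, a typical quick edit looks like this (the file name and pattern are made up for illustration; busybox sed supports -i for in-place editing):

# Bump a version string in place instead of opening an editor.
sed -i 's/VERSION=1\.0/VERSION=1.1/' config.mk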