I got a Q1 Steam Deck, mostly for testing my own games on it, and making them work well. Finally, a stable target for Linux gaming!
Much to my surprise, the regular Steam Deck does not seem to have all the devkit services set up right away: I set my Deck to Developer Mode and installed the SteamOS Devkit Client on my main computer (as per the official instructions), but my Deck was nowhere to be seen! Now, I may have just missed something — and I’d definitely like to know if there was an easier way to do this — but I ended up installing the needed service manually and setting it up like I would a “hackendeck” Manjaro system. Since I didn’t find any guides for doing this with the Steam Deck specifically, I decided to write my own.
First, open Desktop Mode (press the Steam button, select Power, then Switch to Desktop) and then Firefox. Open this blog (or the Steamworks article, which I found by googling “How to load and run games on Steam Deck”) and open the direct install link; it’ll install the service via Steam.
Now the SteamOS Devkit Service can be found in your Steam Library, but it probably doesn’t launch yet. This is because the setup script checks if you’re running Manjaro, which the Steam Deck is not, and stops there. Now, let’s set it up so that it works:
cd .steam/root/steamapps/common/SteamOSDevkitService
kwrite configure-hackendeck.py
Then run the devkit tool:

../steam-devkit-tool

It should print “Only supported by Manjaro - please check documentation” but continue running after that.

Now the service should run as expected: you can launch it from Gaming Mode and see your Deck on your main PC. However, this is not ideal, as the service runs as a game in the foreground, which means that it’ll often get cut off, and it isn’t very automatic. To fix that, set it up as a systemd service:
kwrite /home/deck/.config/systemd/user/steam-devkit-service.service
[Unit]
Description=Manually created SteamOS Devkit Service
Wants=network.target
After=network-online.target
[Service]
Restart=always
WorkingDirectory=/home/deck/.steam/root/steamapps/common/SteamOSDevkitService
ExecStart=/home/deck/.steam/root/steamapps/common/SteamOSDevkitService/steamos-devkit-service.py --hooks hooks
[Install]
WantedBy=default.target
Then enable and start the service:

systemctl --user enable --now steam-devkit-service
Now, pretty much whenever your Steam Deck is powered on, it should be accessible to other computers on the same network which are running the SteamOS Devkit Client. You can install it for your main computer with this direct install link in case you still need it.
I have officially gone off the deep end. I am writing a C program just for fun. And it has been fun! I will admit, the experience of writing and debugging the code is much more frustrating than Rust, but the results are very rewarding. My client is currently clocking in at 99 KB, dynamically linking against libssl, libcrypto, and SDL2. And it’s a fully visual Gemini client! It’s always pleasant to write dense software.
Find it here, though at the time of writing, it’s not usable yet:
=> https://git.sr.ht/~neon/nemini
Here’s a bag of assorted thoughts I came across during development.
About a week into development, I learned of Lagrange.
=> https://gmi.skyjake.fi/lagrange/
It uses SDL and OpenSSL, is written in C, and is made by a Finnish person as well. It’s one thing to make “yet another program to do X”, and another thing to make “yet another program to do X based on Y written in Z”. Oh well. Good thing I’m writing this client for fun, then ;)
Here’s a trick I learned many moons ago, which I finally got to use during this project:
#!/usr/bin/tcc -run
You know, if you want to write a quick little script, but want to do it in style, with C. I wrote a short script to encode a binary file into a C source file with this:
#!/usr/bin/tcc -run
/* Reads bytes from stdin and prints them out as C source code to stdout. */
#include <stdio.h>
#include <unistd.h>
int main(void) {
    int bytes_read;
    unsigned char buf[75 / 5]; /* 5 chars per byte: 0xFF, */
    printf("const unsigned char data[] = {");
    do {
        printf("\n");
        bytes_read = read(STDIN_FILENO, buf, sizeof(buf));
        for (int i = 0; i < bytes_read; i++) {
            unsigned char byte = buf[i];
            printf("0x%02X,", byte);
        }
    } while (bytes_read > 0);
    printf("};\n");
    return 0;
}
Usage (after marking it executable with chmod +x):

./clangifyer.c <binaryfile.bin >cfilewiththedata.c
There might be bugs, but it seemed to work for me. I included the Atkinson Hyperlegible Font into my client with it!
=> https://www.brailleinstitute.org/freefont
You might ask: is this really the first time you had the opportunity to use C for scripting, after having learned of this “trick” “many” “moons” ago? No, not really, but I had C on my mind this time.
OpenSSL is actually quite a simple library to use. I based my code completely on gmni’s TLS code, and I’m not sure I know how the whole thing works yet, so I won’t write a tutorial here. But it wasn’t that much code! Don’t be intimidated by the fact that it’s a big scary system library like I was initially.
=> https://git.sr.ht/~sircmpwn/gmni
Writing C sure has been an adventure. Here are some highlights:
Do you have a Gemini client handy? Are you from the future? You could read this blog post on my gemlog too:
If there’s nothing there, I haven’t set it up yet. Likely because I’m still hacking away at Nemini instead. I’ll probably make another post when I set that up, so sign up to my Atom feed if you’re interested.
To start this off, I would like to make the disclaimer that I am not a professional Quake player. In fact, I have only played Quake 1 and Quake Live, and my combined playtime is on the order of hours, not tens or hundreds.
That said, every now and then, I get the urge to play Quake. One should play the classics, right? The urge never seems to last, but I do always enjoy Quake’s movement. It is delightful to zoom around that starting chamber. Since I am currently working on a first-person puzzle game, I figure I should find out what makes Quake feel so good, to replicate that feeling in my game.
The most visually distinct “effect” in Quake is the way the camera leans when strafing. This might also be happening when you move forward and turn—which could imply that this lean is actually based on the player’s velocity on the right/left axis, and that turning has a bit of inertia—but the effect is hard to detect by just eyeballing. It should also be noted that this lean does not apply to the gun in the middle: it actually accentuates the lean by staying upright.
The camera bobs slightly on the up/down axis, while the gun has a more pronounced bob, down and towards the player. The gun doesn’t seem to bob in a sine wave either, like the camera: it seems to stay relatively still, until it dips noticeably whenever the camera dips. Modern gun bobs are more elaborate, but I do think there is a certain charm to Quake’s bob.
When releasing the movement keys, the player will slow to a stop over a short period of time. I can not tell if there is acceleration when you start moving, but if there is, it is very sharp. Still, moving forward does feel very smooth, so maybe it does accelerate over a few frames. This effect is even more pronounced in Quake Live: the movement feels really smooth. I hope the source will enlighten me further on this.
If you would like to check out the source for yourself, it is on id’s GitHub.
Lucky me, the leaning code is the first function of view.c. The lean is based on the player’s velocity on the right axis, as I suspected, and it seems linear. Simple to implement, for a nice effect.
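Paraphrasing V_CalcRoll from memory rather than quoting it verbatim, the lean boils down to something like this (a sketch; roll_angle and roll_speed stand in for the cl_rollangle and cl_rollspeed cvars, which default to 2.0 and 200):

```c
#include <math.h>

/* Sketch of Quake's view roll, paraphrased from view.c's V_CalcRoll.
 * side_velocity: the player's velocity along the camera's right vector. */
float calc_roll(float side_velocity, float roll_angle, float roll_speed)
{
    float sign = side_velocity < 0.0f ? -1.0f : 1.0f;
    float side = fabsf(side_velocity);
    /* The lean grows linearly with sideways speed... */
    if (side < roll_speed)
        side = side * roll_angle / roll_speed;
    else
        side = roll_angle; /* ...and caps out at roll_angle degrees. */
    return side * sign;
}
```

So strafing at full speed tilts the camera a couple of degrees at most, which is why the effect is so subtle to eyeball.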
Head bobbing is implemented in the very next function of view.c. The code is pretty much what you’d expect, with a bobbing period of 0.6 seconds. The bobbing intensity is based on the player’s velocity on the XY plane. This is good for fading out the effect, taking into account how quickly the player accelerates / brakes.
Turns out, the gun does not actually bob up and down with varying speed: it bobs in sync with the camera, forwards and backwards. Because of how perspective projection works, the effect is more pronounced as it gets closer to the camera, which led me to believe the bob is not a sine wave, assuming the motion was up/down. This is why it is great to have the source available!
Next up, from sv_user.c: SV_UserFriction.
The friction function is probably the spiciest code I have reviewed for this post; it’s easiest to demonstrate by just showing the code (edited for readability):
float speed = length(velocity.xz);
if (speed == 0) return; /* standing still: nothing to slow down, and avoids dividing by zero */
float control = speed < sv_stopspeed.value ? sv_stopspeed.value : speed;
float newspeed = (speed - host_frametime * control * friction) / speed;
if (newspeed < 0) newspeed = 0;
velocity *= newspeed;
Is that not wild? Maybe not. It is basically just friction, but what gets me is the control variable. The amount of friction applied goes down as your velocity decreases, until you pass the “stop speed” threshold, after which the friction stays constant. This causes a sort of braking effect in game, which I really enjoy.
The acceleration function is what you would expect, and unlike the friction, has no extra spice: SV_Accelerate calculates an acceleration value based on the target speed, then applies that acceleration to the player’s velocity. The acceleration is 10 * frametime * wishspeed, so wishspeed is reached in about a tenth of a second, according to my napkin calculations. 100 milliseconds is many frames! I am not sure how I was fooled into thinking the movement was so sharp that I could not tell if there was acceleration, but testing it out now, the acceleration is quite clear. Never trust eyeballed observations!
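Reduced to the speed along the wish direction, the acceleration step can be sketched like this (paraphrased, not the verbatim SV_Accelerate; accel corresponds to the sv_accelerate cvar, default 10):

```c
/* Sketch of Quake's SV_Accelerate (sv_user.c), reduced to the player's
 * speed along the wish direction. Returns the new speed. */
float accelerate(float current_speed, float wish_speed,
                 float accel, float frametime)
{
    float add_speed = wish_speed - current_speed;
    if (add_speed <= 0.0f)
        return current_speed; /* already at or past the target speed */
    float accel_speed = accel * frametime * wish_speed;
    if (accel_speed > add_speed)
        accel_speed = add_speed; /* never overshoot the target */
    return current_speed + accel_speed;
}
```

With accel = 10, the speed climbs from zero to wishspeed over roughly a tenth of a second’s worth of frames, which matches the napkin math above.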
I originally intended to put the demo right here, but I thought it’d be better to warn you before loading up a game in your browser. So be warned: the demo link will open a web game written in JavaScript, with three.js. Controls: WASD to move, arrow keys to look around, space to jump. Mind the non-axis-aligned walls; they are not quite solid. Demo.
What did I learn? That I should never trust my own judgement on movement. Also, that a good acceleration function combined with a good friction function make for some good movement! With a little bobbing and leaning sprinkled on top, you get some excellent movement out of a few lines of code.
So screens today are quite high resolution. For example, I have a 2560x1440 resolution screen as my main viewport to the digital world. While lots of pixels can be used to make very sharp graphics, they also present a problem: old applications are still drawing letters that are 16 pixels high, which are very small on these screens! There are at least two solutions to this:
Stop working in pixels, and start working in physical units instead when making graphics, and let the underlying graphics systems handle the conversion into pixels. This way, one string of text is the same size on every screen, but that might not be what we want: desktop screens are viewed from a further distance than phones, which would make text either too big on phones, or too small on desktops.
Keep working in pixels, and keep thinking in terms of “this text should be printed at a font size of 16 pixels,” but secretly multiply all values by some factor depending on how high resolution your screen is, and from how far away people are generally going to view it. So, you might use a multiplier of 2x for a 4K desktop screen, or even 3x for a 2K phone screen. I’ll be calling this multiplier the “DPI scaling factor” from now on.
It should seem relatively obvious that we have gone the second route. People are doing everything on phones nowadays, so whatever system we come up with has to fit that need with minimal developer effort. I kind of wish we had gone the first route, as it would’ve annihilated any hopes of even trying to think in pixels, which might’ve led us to some vector-graphics-based paradise where every graphic is smooth and well antialiased. Sadly, this is not the case, and everyone is still rendering bitmaps that may or may not be the same resolution as the picture that ends up on the screen of the user.
Based on my experience, it seems that the current way to get nice looking sprites on screens of differing DPIs is to have manually scaled versions of every asset for every possible DPI scaling factor. This is a bit bothersome, but it looks great! The artists can make sure every pixel is just right, and the sprites will end up rendered on the user’s screen exactly how they intended. It would, however, be really bothersome to also write all graphics layouts for each DPI scaling factor, but that can thankfully be generalized (sort of): just work in some “logical” pixel coordinate space, which gets converted into physical pixels later on.
Dun dun dunn.
Turns out, logical pixels aren’t the savior they’re made out to be. Well, they are, to a point. As long as your DPI scaling factor is an integer, you’ll be fine. Every logical pixel will correspond to some specific physical pixel, and no shenanigans will happen that could ruin your artist’s pixel-perfect asset. But! Some systems, like Windows, can set the DPI scaling to a fractional number. I have set my Windows to scale everything by 1.25x, because that makes text comfortable to read on my 1440p screen, while still giving me the working space I want on a desktop screen. But this presents a problem, which I will demonstrate via three examples.
This is the normal, easy situation. The artist has drawn an 8x8 sprite, which I then render at the logical coordinate (2, 2), at a logical resolution of 8x8. Because there is no scaling, this results in a nice, crisp sprite rendered at (2, 2), with a resolution of 8x8, displaying every pixel of the sprite in all their glory.
The artist has now drawn me a 1.25x version of a sprite for my game. The unscaled version of the sprite is 8x8, and so the 1.25x version is 10x10. I render it at the logical coordinates (0, 0) with the logical size 8x8 to test out that it looks good on my 1.25x scaled screen, and voilà: I get a 10x10 (which is 8x8 scaled by 1.25x) sprite rendered at (0, 0) (which is (0, 0) scaled by 1.25x), which looks great, pixel-perfect!
Now that I have the 1.25x sprite, I decide to playtest my game a bit. The player ends up at the logical coordinates (10, 10), but wait! The resolution of the sprite is still correct, just as it was in the previous example, but what about the coordinates? Well, (10, 10) scaled by 1.25x is (12.5, 12.5).
Well.
What happens now depends on how you handle fractional physical pixels. In my case, rendering with OpenGL and using floats to describe the pixel coordinates, you end up with a mushed sprite that’s sampling the texture in all the wrong spots, resulting in a sprite with a bad 1px blur. But is that okay? Is that what Microsoft intended to be displayed when the coordinates don’t match up? Or should I round the coordinates, and occasionally end up with weird 1px gaps between objects that are really just 0.01 pixels apart, enough to make them round to different sides? I don’t really know, and as such I ended up adding this as a parameter to the quad-drawing function in the library I’m writing. On one hand, the blur could be fine: you’re on a high DPI screen anyway, so you probably won’t mind a very slight blur. On the other hand, if you’re working with very fine details, that blur might cause havoc! I’d be interested to hear if you have come up with a rigorous solution to this, heard of one, or just have opinions about it.
Not my pet of the genus Mammut, no. Just my old fediverse instance. Sometimes you want to move domains, sometimes software. Of course, you could migrate your old instance over to the new software, but what if there is no migration path, or you’d like to start anew? This is the situation that inspired this post: I moved to Pleroma when they finally released 1.0, and started a new instance on another subdomain, on my own hardware. I would like to avoid paying for my Mastodon instance’s server costs now that I am not using it anymore.
Disclaimer: I have not contributed to any fediverse software, and I have no idea about how they really work. My fedi experience consists of hosting an instance for two years. So, consume this document with much salt.
You may ask, how do you put an extinct proboscidean to sleep? I do not know either, so I asked on the fedi, but nobody came up with a solid battle-plan. After some searching, I found the tootctl self-destruct command, which seems like a good way to make everyone forget your server ever existed (it sends account delete activities to everyone). That is not quite the magnitude I want: I would prefer to leave a gravestone, not absolutely annihilate my old persona. But hey, that might be what you want, so there you go.
Imagine a few hours of system administration before reading onwards; that’s what I did at this point in the blog. I do so enjoy writing an article as I do the stuff I write about.
Based on the previous information, I had a plan to cache the /users/neon endpoint, and respond with error 410 everywhere else. I did that, and then realized that it probably is not good to pretend to be an active instance (through providing a cached response on one endpoint), and ended up just serving 410 on the whole domain.
So, how do you put a big furry creature to sleep? Make it respond to everything with error 410. That’s what I did in any case, I hope it’s asleep now.
When I have been looking for a new language to learn, or gauging the usefulness of a language, I tend to appreciate build systems as part of the language. I think an important property of a tool is its simplicity of use, and for a long time I avoided C because it did not have a mvn, cargo, or npm. I thought the lack of an in-built build tool made C unnecessarily complicated to get started with. After reading bits and pieces of the K&R, making an almost whole game in C, and still not having touched makefiles, I have come to the conclusion that I was wrong. I have been happily writing C and building it with shell-scripts that build the program for each specific system. For example, I have a batch script for Windows that builds the program with cl.exe, providing the correct flags, one for Linux, and so on.
The reason I wrote the shell scripts instead of makefiles is because I already knew how shell scripts work, and learning to use cc and cl.exe was quite similar to learning to use the aforementioned build tools. If the compilers did not do something I needed from a build tool, I would just write that part in the script myself. In addition to this, I have heard a lot about makefiles not being super portable, and the existence of qmake and cmake seems to prove that idea. And in my own experience, when I have had to call make manually, it has stumbled on my system being misconfigured somehow, which makes it seem prone to breakage.
That said, I definitely feel like I should learn how to write a Makefile, if for no other reason than to make my negative opinion of them more valid. Or invalid, as the case may be ;)
A few days ago, I got a copy of The Unix Programming Environment by Brian W. Kernighan and Rob Pike from the library. Why? The main reason was because I wanted to explore the Unix philosophy a bit further than what I had learned from hearing it repeatedly in tech articles and social media (not much). The other reason was because I imagined it would have a bit on make.
It does, sort-of. The eighth chapter is about writing a big program, and has three digressions on make. Based on that, I got the following impression of it: makefiles define dependencies between generated and manually edited files, and how those dependencies manifest (that is, how you get from the manually written files to the generated ones). The connection to C becomes obvious once you think about C programming in those terms: first you manually edit the C source code, then you generate object files out of that, and then you generate a final executable from those.
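A minimal sketch of what those dependency descriptions look like in a makefile (the file names are made up for illustration):

```make
# The final executable depends on the generated object files...
prog: main.o util.o
	cc -o prog main.o util.o

# ...and each object file depends on a manually edited source file.
main.o: main.c
	cc -c main.c

util.o: util.c
	cc -c util.c
```

Running make then regenerates only the files whose dependencies have changed since the last build. (Mind that the indented command lines must start with a tab character, not spaces.)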
In this light, make seems like an obvious fit for C, but very general as well. I did not realize what the specific role of make was before, probably because other programming languages have built similar functionality into their build tools, but after this realization, it is obvious.
On one hand, I appreciate how cc and make split the job of making lots of source code into an executable (something something Unix philosophy), but on the other hand, I can not help but feel like it is pointlessly complicated for the end-user. Almost every program follows a similar build-pipeline, or could at least be refactored to fit, so why not combine the functionalities into a single build tool? Why do I need to learn yet another syntax just to build my C? I imagine that is what a lot of people thought, since at least a few trillion different tools have been written to build a C program. I think. Ninja just sounds so out-there that all the good names must have been taken already, which would imply many such tools. I do not actually know if ninja is a build tool. But I digress.
Make(1) is neat. I wrote a Makefile for this blog to test it out, and I enjoyed the experience after ironing out a few misunderstandings. It is a shame that make is the most universal tool to build C programs; I would much prefer a more cargo/go build-like experience, but it is not that bad either. If you are of a similar mindset to mine at the beginning of this post, just go ahead and use make. It is not evil. At the very least, it can not be as bad as picking one of the million other C build tools, because you do not get anywhere by making yet another standard, and you learn a new tool you can use for many other things as well. And make sure to be POSIX-compatible, just to be a good citizen :)
As a final note, The Unix Programming Environment has been a pretty fun read, if a bit obsolete. At least the C is: did you know functions looked like this in the 1970s?
main(argc, argv)
char *argv[];
{
    /* ... */
}