

Avoiding “$f2bV_matches” in fail2ban reports to AbuseIPDB

I just set up AbuseIPDB with one of my fail2ban instances, mostly out of curiosity and because it seemed simple enough. However, following that guide with an older version of fail2ban left me with “$f2bV_matches” as the report comment, which doesn’t look too good.

A quick search led me to this GitHub issue, which is quite a long and somewhat confusing read. But long story short: there was a bug in the provided “action.d/abuseipdb.conf” configuration file prior to a somewhat unclear fail2ban version (0.10.3?). Note that I suspect it could also affect a later version if you somehow keep old configuration files around when upgrading.

Anyhow, since it’s all in that configuration file, you can just grab the appropriate line in the fixed version, which I’ll copy here as well:
actionban = lgm=$(printf '%%.1000s\n...' "<matches>"); curl -sSf "https://api.abuseipdb.com/api/v2/report" -H "Accept: application/json" -H "Key: <abuseipdb_apikey>" --data-urlencode "comment=$lgm" --data-urlencode "ip=<ip>" --data "categories=<abuseipdb_category>"
Then use that line to replace the existing one in /etc/fail2ban/action.d/abuseipdb.conf
Note that while you’re at it, you can set your API key in this file as well (abuseipdb_apikey = ... at the very bottom). This way, you don’t have to put it in every single jail, which helps make things more readable and maintainable, IMO.
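For reference, the key goes at the very bottom of the action file; a sketch, assuming the standard [Init] section layout of action.d/abuseipdb.conf (use your own key, of course):

```ini
[Init]
# Your AbuseIPDB API key, so individual jails don't need to repeat it
abuseipdb_apikey = <your-api-key-here>
```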

And that’s about it, if you’ve followed the rest of the setup instructions provided by AbuseIPDB. Don’t forget to at least reload fail2ban (sudo systemctl reload fail2ban), although in my case restarting it seemed to work better (if you restart, keep in mind it may submit duplicate reports, which you should then delete).

A few other useful commands (mostly for my own copy-pasting convenience 👀):

sudo tail /var/log/fail2ban.log
sudo tail /var/log/fail2ban.log > /home/export.txt
sudo tail /var/log/auth.log
sudo nano /etc/fail2ban/fail2ban.local
sudo nano /etc/fail2ban/jail.conf

And while I’m at it, let’s get that contributor badge going (I hope it works with subdomains) (edit: yes it does):

AbuseIPDB Contributor Badge

Update 2024-03-19

If you get fail2ban [490]: ERROR Failed during configuration: Have not found any log file for sshd jail, it probably means logs are not being written to /var/log/auth.log because no syslog daemon is installed. A fix could be to either install syslog (or rsyslog), or to configure fail2ban to use systemd as a backend, by adding backend = systemd to the jail configuration. Cf. also this ticket on GitHub.
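As a sketch, the systemd-backend option can go in a jail override file such as /etc/fail2ban/jail.local (path per the standard fail2ban layout):

```ini
[sshd]
enabled = true
# Read ban candidates from the systemd journal instead of /var/log/auth.log
backend = systemd
```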

Some more useful commands

sudo systemctl status fail2ban
sudo systemctl status ssh.service
sudo apt policy openssh-server
sudo fail2ban-client unban --all

Last but not least, make sure these packages are installed, otherwise the ban action will fail to execute fully:

  • iptables
  • curl

Posted in servers, web filtering.


Bitcoin and the blockchain for dummies

I often see people asking how Bitcoin/the blockchain works, and all the resources I’ve seen are either too technical or way too superficial. So I thought I’d finally try to fill the gap.

In this guide I’ll try to explain the fundamental principles of Bitcoin (and more generally, of any blockchain), while leaving out the details that are not necessary for understanding. So you won’t find here all the details needed to build a working implementation (that’s definitely not the idea), and I will take shortcuts, as long as they don’t negatively affect the logical coherence of the whole system. But hopefully you’ll find a sufficient amount of detail to say “OK, I get how the magical blockchain works – and it’s definitely not magical, nor even that complex”.

Concepts that you need to know

And by “know”, I mean that you don’t need to know the details (though you might find it interesting to dig into them by yourself), but you do need some basic knowledge of these concepts. Namely: asymmetric cryptography, digital signatures and hashing.

Concept 1: asymmetric cryptography

By the way, “crypto” is short for “cryptography”, not for “cryptocurrency” (which is already short for “cryptography-based currency”).

Imagine Alice wants anyone in the world to be able to send her encrypted messages, without her having to provide each single person a unique key (as she would need if she used symmetric encryption). She can generate a key pair, with on one side a private key (secret, Alice keeps it) and on the other side a public key, which she can publish on her website – or her Tiktok profile or wherever.
Then if Bob wants to write to Alice, he can encrypt his message with Alice’s public key, and send it, even publicly: only Alice, with her private key, will be able to decrypt it. That’s the point of asymmetric encryption: anyone can encrypt with the public key, but only the person who has the corresponding private key can decrypt.

Such characteristics are obtained via complex math problems. For instance, for RSA algorithms, if I oversimplify a lot, the private key is a pair of very big prime numbers, P and Q, and the public key is their product N. The encryption’s asymmetry is based on the fact that finding P and Q given only N is a very hard problem, so in practice you can create a function that encrypts based on N, and another function that decrypts based on P and Q. Again, this is very oversimplified, but that’s the concept.
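To make that concrete, here is a toy sketch in Python using the classic textbook numbers (p = 61, q = 53) – real keys use primes that are hundreds of digits long, and real RSA adds padding and other machinery on top:

```python
# Toy RSA, for intuition only: tiny primes, no padding.
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120, derived from the secret primes
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: 2753

message = 65                 # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt

print(ciphertext, decrypted)  # 2790 65
```

Breaking this toy key only requires factoring 3233 back into 61 × 53; with real key sizes, that factorization is what’s considered infeasible.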

If you want to study this further, see Wikipedia’s RSA page and Wikipedia’s public-key cryptography page

Concept 2: digital signature

If you skipped asymmetric cryptography, I’m afraid you’ll need to go back to it first.

As we saw, Alice has her private key which she keeps secret. Imagine now that she wants to sign a message. First, she writes the message. Then, using her private key, she can apply a signing algorithm on the message, and obtain a signature. She then sends the message with the signature (note that she could encrypt it on top, or not).
Bob receives the message. Using the public key, he can apply the corresponding signature verification algorithm on the message and signature to confirm that the signature is valid and was produced with Alice’s private key – even though he doesn’t know the private key, since this is all based on asymmetric cryptography.

And if Alice decides to publish her signed message publicly, anyone can use the public key to confirm that she did indeed sign it. The “message” doesn’t have to be text; it can be any data. For instance, on this page, KeePass publishes signatures for their software releases (they are in the “[OpenPGP ASC]” files). They also publish hashes, which will be our next concept.

If you want to study this further, see Wikipedia’s digital signature page

Concept 3: hashing

If you’ve ever heard of MD5, that’s a hashing function, and if you know the concept, you probably know enough for this part. If not, read on.

A hash function can be applied to input data of any size (it can be an empty string but it can also be a several gigabyte file), and returns a fixed-size value (some hash functions return variable size hashes, but these are not common), named “hash digest”, or just “digest” or “hash”. For instance, an MD5 hash is 128 bits long (i.e. 16 bytes), and a SHA-256 hash is, as the name suggests, 256 bits long.

For cryptographic use, a simple hash function isn’t enough: we need a cryptographic hash function, which is basically any hash function that has these important properties:
– given random inputs, the output values must have a uniform probability distribution (i.e., it looks random)
– given a hash value, it is “impossible” to find the input value (except by brute force)
– given an input, it is “impossible” to find another input that has the same hash
– it is “impossible” to find two inputs that have the same hash

This means, notably, that 2 messages with only 1 letter changed will have totally different hashes. For instance, the MD5 of “hello” is 5d41402abc4b2a76b9719d911017c592 and the MD5 of “Hello” is 8b1a9953c4611296a827abf8c47804d7.
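You can check these values yourself; a quick sketch with Python’s standard hashlib:

```python
import hashlib

# One changed letter produces a completely different digest
for text in ("hello", "Hello"):
    print(text, hashlib.md5(text.encode()).hexdigest())

# Digests are fixed-size regardless of input: SHA-256 is always 256 bits
print(len(hashlib.sha256(b"x" * 1_000_000).digest()) * 8)  # 256
```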
MD5 and SHA-256 were both meant as cryptographic hash algorithms, but MD5 has been known to be quite vulnerable for a long time now, so don’t use it in cases where you do need a cryptographic hash. I used MD5 for my example simply because it’s shorter than state-of-the-art cryptographic hashes.

If you want to study this further, see Wikipedia’s page on cryptographic hash functions

I didn’t mention it earlier and this isn’t a necessary detail for the purpose of this guide, but hashing is notably used in the process of digital signing: typically, a signing function will first produce a hash of the message to be signed, and then sign the hash instead of signing the whole message. This is because signing algorithms typically have limitations that make them weaker (or even vulnerable) if you use them on a large amount of data, and also they are much slower than a hashing function, so by signing just a hash you go faster.

On to the Bitcoin blockchain

Before building our blockchain, let’s summarize the problem: build a system to store and move “money” (bitcoins), in a context where you can’t trust people (so the transactions must be verifiable based on math / cryptography / hard proofs). Also, reward the people who make the system run (hence the “miners” get BTC from fees plus a block reward) but avoid infinite inflation (hence the block reward gets lower and lower as time passes).

Part 1: spending money

We have to start building somewhere. And what better place to start than what money is for: spending!

Alice, again, has a private key, and obviously the matching public key. Let’s call this key pair “ALICE001”. This is also her Bitcoin address. That’s right, a Bitcoin address is simply a public key. Or more precisely, the hash of a public key (this difference doesn’t matter much, except that it means you can keep your address secret until you use it).
Alice can have as many Bitcoin addresses as she wants, and she stores them in her wallet: basically, a Bitcoin wallet is just a file containing all the private keys of your addresses. It actually stores more stuff, but the strict minimum is the private keys, and notably it doesn’t contain coins. The coins don’t really move, they just get assigned to different Bitcoin addresses.

Let’s say Alice owns 1 bitcoin on this address. Because this information is publicly written in the blockchain, everyone in the world knows that “ALICE001” owns 1 bitcoin.

Alice wants to send 0.5 Bitcoin to Bob. She finds that Bob’s address is “BOB001”. She then creates a message that says: “I send 0.5 BTC to BOB001, I give 0.0001 BTC as network fee, and the remaining 0.4999 BTC go back to ALICE001” (NB: she could send to more addresses at the same time, and also she could send the remainder to a new ALICE002 address instead of ALICE001 to improve security and privacy a bit). She uses her ALICE001 secret key to sign the message, and she sends all this to the Bitcoin network.

Step 1 complete: now everyone knows that ALICE001 transferred 0.5 BTC to BOB001. But this isn’t over.

Part 2: recording the spend

The world knows that ALICE001 sent 0.5 BTC to BOB001, but this isn’t enough, because nobody can (or should) be trusted. For instance, based on her original balance, ALICE001 could send 1 BTC to CAROL001 and 1 BTC to DAVE001, and both would be potentially valid transactions: we cannot trust the time at which ALICE001 made the transactions, so we don’t know which one is “first” and valid, and which ones are later and to be rejected.

Let’s go back to the end of part 1, so Alice didn’t try to make a mess and only sent one transaction (0.5 BTC to BOB001), and she broadcast it to the Bitcoin network.
The Bitcoin network consists of nodes, which are people running Bitcoin Core. Among those, some are people who just run it in order to have a local copy of the blockchain (for instance to send transactions themselves), and some are “miners”.
What miners do is that they gather all new transactions, they select the ones they like (typically those with the highest fees, or fee/size ratio), they stash them into a block (in Bitcoin’s case, the maximum size of a block is 1,000,000 bytes, but a miner can decide to not fill it), and then… they try to get their block accepted.

Since Alice included a decent fee, most miners will probably put her transaction in their next block soon.

But then, how to decide which block gets accepted as the next block?

Part 3: building and inserting a block

Schematically, a block contains (this isn’t the full list, to keep things concise):
– the hash of the previous block (this is where the “chain” part is, by the way: each block refers to another parent block… and just like that we have a chain of blocks!)
– the timestamp
– the target difficulty (more on that a few lines below)
– all the valid transactions the miner decided to include (up to the maximum size)
– a transaction that rewards the miner’s address with all the block’s fees plus the fixed block reward (initially 50 BTC but divided by 2 every 210,000 blocks: this is how, as time passes, the maximum number of BTC added into the system will get very close to 21 million)
– plus a small arbitrary part

Then the miner computes a SHA-256 hash of all this.
And here let me introduce the target difficulty: Bitcoin was designed with the idea that a new block should be inserted, on average, every 10 minutes. With that in mind, every 2016 blocks, the Bitcoin network decides a target difficulty that the next blocks should fulfill, based on the time it took to mine those last 2016 blocks, and also based on the previous difficulty (to avoid having too strong fluctuations).
Long story short, the difficulty corresponds to how many zeroes the hash should start with. As we saw earlier, the hash is unpredictable, so the only way to find a hash that starts with enough zeroes is to modify the block and try again. The block can change when a new transaction is added in the meantime, or otherwise simply by incrementing the “small arbitrary part” at the end of the list.
So what a miner does, to get their block accepted, is increment the “small arbitrary part” many, many times, compute the hash, and hope it starts with enough zeroes.
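That loop is simple enough to sketch. Here is a heavily simplified stand-in in Python – a real Bitcoin block header is a fixed 80-byte binary structure, double-SHA-256 hashed, and the real difficulty target is far stricter than a few hex zeroes; the block contents below are made up for illustration:

```python
import hashlib
import json

block = {
    "prev_hash": "(hash of the parent block)",  # placeholder, made up here
    "timestamp": 1231006505,
    "transactions": ["ALICE001 -> BOB001: 0.5 BTC"],
    "nonce": 0,  # the "small arbitrary part"
}

difficulty = 4  # require 4 leading hex zeroes (real difficulty is much higher)
while True:
    serialized = json.dumps(block, sort_keys=True).encode()
    digest = hashlib.sha256(serialized).hexdigest()
    if digest.startswith("0" * difficulty):
        break
    block["nonce"] += 1  # change the block and try again

print(block["nonce"], digest)
```

With 4 hex zeroes, this takes about 16⁴ ≈ 65,000 attempts on average; every extra zero multiplies the work by 16, which is exactly the knob the network turns to keep blocks roughly ten minutes apart.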

Once a miner finds a new block with a hash that has enough zeroes, it sends it as fast as it can to the network (in case another miner finds a good block at more or less the same time, the one that is lucky enough to spread first eventually wins), where each node (miner or not) will verify it and pass it on.

Part 4: disagreements

As I briefly discussed, it’s possible that 2 different new blocks are “mined” by 2 different miners almost at the same time, and it’s also possible that Alice sends several valid transactions at the same time (this is referred to as a double spend).

In case of 2 new blocks, first, the one that is based on the longest chain wins.
If they both have the same parent (so they have the same chain length), a fork will occur, where a part of the network will work on adding on top of one block while part of the network will work on adding on top of the other block. Eventually though, one of the 2 chains will grow longer than the other, and the shorter chain will be dropped.

As for Alice, if she sends multiple conflicting transactions, eventually one of them will be included in the blockchain. Once this has happened, the other transactions won’t be valid anymore (as they are now impossible) and will eventually be dropped. If a fork occurs at the same time, though, it’s possible that one transaction is included in one chain, and a different transaction in another chain. Eventually though, only one chain will remain and will be the one that decides which transaction is kept.
The number of blocks added on top of the block containing a transaction is commonly referred to as the number of confirmations. The higher this number is, the surer you can be that the transaction is definitely inserted in the main chain. Nowadays, many exchanges and merchants require 6 confirmations before considering a transaction final. This is completely arbitrary, but also a reasonable value, though for small transactions just one confirmation is probably good enough.

Part 5: getting it started

Just for the fun of it, I asked ChatGPT to tell us about the first block. I’ll put the whole answer, uncut, because I found it amusing how even GPT gets a bit philosophical in its answers.

The first Bitcoin block, also known as the “genesis block,” was mined by Bitcoin’s creator, Satoshi Nakamoto, on January 3, 2009. This block, with the block height of 0, served as the starting point of the Bitcoin blockchain.

The content of the genesis block was unique compared to subsequent blocks. It contained a single transaction, which awarded 50 bitcoins to a specific Bitcoin address. This transaction is often referred to as the “coinbase transaction” and served as the issuance of the initial bitcoins.

In addition to the coinbase transaction, the genesis block included a message embedded in the block’s coinbase parameter, which reads: “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks.” This message is widely interpreted as a commentary on the instability of the traditional financial system and serves as a timestamp for the creation of the Bitcoin network.

Overall, the genesis block laid the foundation for the decentralized and trustless system of peer-to-peer electronic cash that Bitcoin represents.

So all in all, just make a somewhat arbitrary first block, then pile up new blocks on top of it as described above, and there you have the Bitcoin blockchain.

I used ChatGPT for that part mainly as a suggestion that, in case you need more details on a specific point, you can ask some of these large language models, as this is something they should be good at detailing, given that it’s something technical, with source code and documentation available, and that it’s old enough (way older than the cut-off point in training data for those models). But the rest was, as Aldo Stérone would point out, “written by a human enjoy it while it lasts”.

Closing remarks

Except for the first one about storage, these are mostly random rants, so feel free to skip.
If you have questions, or if you think I missed something important, the comments are, as always, down there.
See you in another book-lengthy post.

Storing all this

Nowadays, most blocks are close to (or at) their maximum capacity of 1MB. Meaning the blockchain grows roughly by 1MB every 10 minutes, or 144MB per day, or… more than 50GB per year.
We definitely need to have a full record of all that: otherwise we can’t verify the transaction history, so we can’t verify who owns which bitcoin, and either the system collapses or we have to trust someone with starting values that are different from the very first block. But the whole concept is “trust no one”.
So we need many people running Bitcoin Core and opting to store the whole blockchain.
So we need many people running Bitcoin Core and dedicating 500 GB to it as of January 2024 + 50 additional GB every year.
Enough said.

This also means that the Bitcoin network currently operates pretty much at max capacity. A workaround to this is the Lightning Network, which is basically an extra layer on top of the blockchain, to perform smaller and faster transactions outside of the blockchain.
If you want to study this further, see Wikipedia’s page on Lightning Network

Binance, Coinbase, Kraken, and the like

These services were originally created to exchange bitcoins for dollars, euros, etc. Nowadays however, many people don’t keep their keys themselves, instead they trust these platforms to hold their bitcoins for them. Remember how the initial problem was to create a trust-less system? OOPS.

Ponzis and shitcoins

As I mentioned at the top, and as you now have (hopefully) realized, the blockchain is, in itself, a simple concept. Very ingenious and creative (hence hard to invent) but simple (thus easy to implement). It does use complex cryptographic stuff, but in practice, you use ready-made, open-source (and FLOSS) libraries for these. On top of that, Bitcoin Core itself is open-source, as cryptography-centered software should be.
As a result, making a copycat is easy.

As you may have guessed, whoever creates the first block gets “some” spare change, and then that person gets to mine “a few” more easy blocks alone, and later with a slowly growing number of early adopters. As such, people have long criticized Bitcoin as being a bit of a Ponzi scheme. Barely a few years ago, I think I even heard that as an argument in favor of the “G1” (pronounced “June”) alternative currency, which claims to avoid this pitfall itself. I agree with that criticism, but on the other hand, 1) how else could they have gotten it started and 2) this is now part of a more and more distant past.

Anyway, since making a copycat is easy and being the first (or an early adopter) is a jackpot, there is an incessant influx of “alternative coins”, aka “altcoins”, aka shitcoins. All of them with founders hoping to be the head of their own Ponzi. While Bitcoin couldn’t avoid starting a bit like a Ponzi, those altcoins certainly could: by not being created in the first place. But eh, that’s the modern-day casino, where people feel like they’re “trading” rather than gambling on which shitcoin will still have buyers next week… but I must be getting old and grumpy 🤷

Posted in cryptography.


Setting up an OpenVPN server on Linux: notes and comments

A few days ago, I posted a “quick” guide on how to set up an OpenVPN server on a VPS. Despite my efforts to keep it short, it wasn’t that short so I chose to keep my comments to a minimum, and to put the biggest chunk of them in this companion post.
If you don’t want the blah-blah, background story, setup improvement ideas, philosophical considerations, the meaning of life, etc., go straight to the guide there. Otherwise here it goes. Ah, and the bibliography is here too, though. Maybe that’s a reason to scroll all the way to the bottom before leaving 👀

Motivations / background story

And no, this isn’t just for the sake of telling my life story. Although, coming back here after a couple hours of writing, I guess I did tell it quite a bit. Oops.
TL;DR: this explains, in more detail than it should, why the security aspects were (almost) not a concern, and more globally why I wanted to go fast, straight to the minimum working setup.

A long time ago, this “not-a-blog” was hosted on a 1&1 VPS. Due to some “issues“, I then moved to OVH (10 years ago already, damn…). Funny thing about that, I noticed a few days ago that Ionos (new brand for the 1&1 hosting) finally added the ability to register a secondary e-mail to receive invoices. That change was made almost exactly 10 years after I posted about the “issues”. Happy 10-year anniversary to me, lol.
The not-so-cool thing they did when they implemented that feature, though, is that now I get a duplicate copy of the invoice messages even in my internal Ionos message box… What can I say, when you’re not smart, you’re not smart 🤷

About a year ago, I noticed that Ionos now had some interestingly cheap VPSs, so in January of this year-soon-to-be-last-year, I picked one. To give it a spin, and maybe more later. With just 512 MiB of RAM though, I could only do so much with it, and I basically used it as a proxy (SSH tunnels are so great and underrated, underappreciated, under…thing, etc) and to run some scripts that I wanted to keep running even when my own computer had to be off.
A great feature compared to my OVH VPSs was (and still is) that the IP I got was less often flagged as “EVIL FROM A DATA-CENTER ME BLOCK YOU HAHAHA” by the usual Internet shitheads such as CloudFlare and the like, compared to all the IPs I ever got from OVH. Here I want to point out that I don’t do weird traffic from my VPSs: I only host my own stuff, run very specific scripts such as Steam idling bots, and use them as personal proxies. So when the assholes from CloudFlare, Reddit, and similar services (sorry, I forgot their names but I despise them just like CloudFlare – I’ll try to remember to come back and add them to this post as I encounter them) give me captcha Hell (if they don’t just purely block me), it’s not because of any “bad” activity, it’s because they suck and just blanket-ban anything with “datacenter” on it. On a side-side-side note: that’s why we can’t have competition against Google, thanks very much!
Another nice feature is that the speed was better: officially 400Mbps, compared to 100Mbps at OVH, and even though I don’t think I ever reached that speed, it was in that ballpark.
So, I was happy to keep it for the duration of my initial contract period (initially 1 year, then it would be monthly) even though it appeared clear that it wouldn’t make sense to use it for hosting.

As the initial contract came close to an end, a few days ago (with a bit more than a month remaining), I browsed their VPS offering again. And I was very pleased to find that they had improved it significantly: twice the RAM (1 GiB) and more than twice the speed (1Gbps), for the same price. Yay.
Unfortunately, it wasn’t possible to migrate my old VPS to the new offer. No big deal, I canceled (it’s still running, until the end of the yearly contract) and picked up a new VPS. I actually did that over the phone (as I wanted to ask them about the migration), and they very kindly zeroed the last bill for my former VPS, considering that I was upgrading and signing again for a year.

So here I am, with my new VPS, reinstalling it mere minutes after getting it, because Ubuntu 22.04 felt a bit too old after all compared to the 5-month-old Debian 12 (keeping up with recent versions isn’t Ionos’s strong suit). Another point in favor of Debian 12 was that, right after installation, df -h reported 1.5 GiB used space for Ubuntu 22.04 but only 688 MiB for Debian 12 (up to 950 MiB after just running apt-get update && apt-get upgrade). Considering the VPS only comes with 10 GiB of disk space, that’s always good to take.

I then proceeded to install Nodejs and my first script, a home-made Steam idler and name changer/randomizer. And boom, the letdown: the script times out when logging in… After trying a few more times, it eventually works, but in the process I realized that the timeouts are due to an excessive CPU usage, or to put it another way: the CPU of the new VPS sucks. Well, I guess Steam’s login process sucks too (on a side note, maybe now I know one of the many reasons why their client is so bloody slow), but I can’t quite change it.
If you are curious, cat /proc/cpuinfo for my “old” VPS gives (I kept just the most relevant parts – hopefully I didn’t cut too much):

cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
stepping        : 0
microcode       : 0x5003302
cpu MHz         : 2099.999
cache size      : 36608 KB

and the new VPS gives:

cpu family      : 25
model           : 1
model name      : AMD EPYC-Milan Processor
stepping        : 1
microcode       : 0x1000065
cpu MHz         : 1996.250
cache size      : 512 KB

I assume there’s a little difference in the way the cache size is computed between both cases ^^
But besides that, I wasn’t aware that the cores of AMD’s Epycs were so rubbish. Unless Ionos decided to cut costs and make it so that a “vCPU” for their VPS is less than half a core (I don’t know virtualization well enough to know how feasible this is).
Or maybe it’s a result of having less cache, since, as far as I know, this Xeon(R) Gold 6230R has 22 cores (44 threads) so that would be 36608 / 44 = 832 KiB of cache per thread, significantly more than the 512 KiB of that EPYC-Milan 🤔

I then tried running another script, which I was running on one of my OVH VPSs so far. It worked for a while, but crashed at some point. Not sure why because I didn’t monitor it closely, but I assume it’s due to a lack of RAM this time (the server where it ran before had 2 GiB, and that’s quite a wasteful script RAM-wise).

So I was left with the question: what to do with that VPS, where I can’t seem to run anything useful? Should I get it refunded and keep the old one? As I understood it, despite the 1-year contract, a refund is possible. But considering I was offered the last month on the other server for giving it up, won’t it create trouble (although now maybe I understand why they were happy about my switch…)?
I can still use it as a proxy with my dear SSH tunnels though. I’ve had a lot of fun with SSH tunnels in the past (tags are great, I should use them more – I edited one of these posts for the first time since June 2010 just to add it the tunnel tag 👀).
I also had a soon-to-be-over-and-definitely-not-renewed NordVPN subscription, with no replacement yet. And while SSH tunnels are great for Firefox on desktop, they are sadly not really an option for some other software, and particularly for the stupid phones.

So, how about… setting up my own OpenVPN server? On that VPS running Debian 12?
I had looked into this (briefly) a few times in the past already, and from what I gathered, setting this up looked like quite a painful process. Particularly if you compare it with my dear SSH tunnels. This would also be 1) just for my own use and 2) most likely temporary, because I don’t really expect to be happy to stick to a single IP rather than happily hop from a server to another at a commercial VPN/proxy. I also had in mind that WireGuard should hopefully replace this horrible OpenVPN one day (I say horrible because, as far as I understood and observed, at least on Windows, it writes all the bloody network data to the disk, wearing out your SSD for no good reason), so I’m not that interested in learning how to set up OpenVPN perfectly.
Which is why I opted to go for the minimum effort installation (where “minimum” still isn’t quite tiny), keeping default settings and taking shortcuts as often as I could.

Comments on my setup

For this section, you may want to have the other post (the one with the guide) open next to this one.

To not root or not to root

(that title doesn’t contain a typo)
I’ll start with the obvious: I did everything as root, and “it’s very bad”. For a clean setup, you’ll want to never use the root account but one with sudo privileges, and you’ll want to create a user (and group) just for openvpn. I wish you a lot of fun with that! (and with the permission hell, hahaha)

EasyRSA

The easy-rsa package is automatically installed when you install the openvpn package. But trying to run “easyrsa” gave nothing. All in all, it’s mostly just a single script file, so it just works if you fetch the script as I did. But if you want your certificate to “look good”, you’ll probably want to get the sample configuration file, edit it, and place it wherever it should be (don’t ask me…), so as to put a name, company name, country, city, etc. in your signing certificate.

I looked at multiple guides to come up with my “quick and dirty” process, and I’ll put all the links in a separate section at the end of this post, but for creating (and signing) the certificates, I found that the Easy-RSA page on the Archlinux wiki was the most helpful. You may want to use SHA512 instead of the default SHA256, or elliptic curves (or a Twisted Edwards curve) instead of RSA, and they tell you how. I haven’t tried though, since as I wrote earlier I had issues with my Easy-RSA setup, including finding/placing its configuration file(s).
If using RSA, you may also want to use a 4096 bits key instead of 2048, and same for the Diffie-Hellman (DH) prime number of your DH parameters file.
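With Easy-RSA 3, these choices would go in the vars file; a sketch (variable names taken from the vars.example shipped with Easy-RSA – double-check against your version):

```ini
# Larger RSA keys for the CA and issued certificates
set_var EASYRSA_KEY_SIZE  4096
# Sign with SHA-512 instead of the default SHA-256
set_var EASYRSA_DIGEST    "sha512"
```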

Keys and certificates

To save time, and also because I only had one machine to trash anyway, I generated all the keys on the same machine: the Certificate Authority (CA), the OpenVPN server, the OpenVPN client.
In an ideal configuration, you’ll want to keep the CA private key on a separate machine (maybe even offline, why not, you’re the one doing it in hard mode), and keep the OpenVPN server private key on the machine that will run the server, and ditto for the client private key. And to sign, you’ll generate requests (the .req files) on the machines where the private keys are, then transfer these request files to the CA machine, sign them there, then transfer the signed certificates (.crt files) back to the appropriate machine. Not really complicated, but more tedious than doing everything in the same place, isn’t it?

Generating the client profile

Not much to say here: ovpngen is a simple tool, for once, where all options are specified with command line arguments. Compared to what I put in the guide, you also have 2 additional, optional arguments: port number and protocol (default is 1194, which matches the default port for the OpenVPN server).
The generated configuration file is just a text file that concatenates certificates and parameters, so you can easily edit it later, either to change something or add more parameters.
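For illustration, the generated profile roughly has this shape (a sketch with placeholder values — the exact directives depend on the ovpngen arguments, and the certificate/key blocks are inlined between XML-style tags):

```
client
dev tun
proto udp
remote 203.0.113.10 1194
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
...
</cert>
<key>
...
</key>
<tls-auth>
...
</tls-auth>
```

That inline format is exactly why editing it later is trivial: it’s all one text file.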

Configuring OpenVPN on the server

Some of the things you configure here will have to be reflected in the client configuration, for instance, obviously, the port and protocol (UDP vs TCP).

The sample configuration file is quite well-made, with many comments that should make it rather self-explanatory for most fields. Something you might want to change in particular is the default DNS servers, as OpenDNS is, unlike the name suggests, not quite “open”. They’ve even censored domains they didn’t like in the past, something that is usually done by ISPs and a motivation to switch from your ISP’s DNS to “some other DNS” like OpenDNS, not something you expect from the latter, and yet…
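If you want to swap them out, it’s only a matter of changing the IPs in the two DNS push lines of the server configuration — for example, to use Quad9 and Cloudflare instead (IPs to the best of my knowledge; pick whichever resolvers you trust):

```
push "dhcp-option DNS 9.9.9.9"
push "dhcp-option DNS 1.1.1.1"
```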

Other possibly interesting things, at first glance: the keepalive settings, the cipher, maybe enable compression too (unless you have more CPU issues than bandwidth issues), the log file verbosity.

Tweaking the server’s networking parameters

I don’t have much to say about this part, except that it’s some wicked mysterious voodoo which should convince any sensible person that the VPN technology was never primarily meant to be a proxy. And yet that’s the mainstream usage these days (NordVPN, Cyberghost VPN, Surfshark VPN, etc, etc). Go figure 🤷

2 tutorials were of great help for this: this one from DigitalOcean, in particular to find the name of my network interface, and this one from an obscure web host I had never heard of before (MonoVM), for the magic iptables command. I’m really not a fan of their verbiage, though (“With this command, you’re staking your claim in the digital realm, preparing to issue certificates like a digital monarch”… are you high or what? :s)

I guess that now is a good place to repeat that a SOCKS proxy with an SSH tunnel is so much easier. You don’t have anything to configure on the server at all, as long as you’re able to connect via SSH (something that you’d better be able to do if you want to do anything at all with your server, meaning that generally it’s just a given).

That’s about it for my constructive comments. I thought I had more, maybe I already forgot them. Onto more rambling, if you wish (and the promised links are, as promised, at the very bottom).

Even more off-topic “bonus”: what happened to that crappy VPS in the end

TL;DR: I got a refund.

While I was still writing this post, I took the time to run a home-made “benchmark”. Simply put, in the evening I started a slow AV1 encoding process on several machines (using ffmpeg with libaom-av1), and when I got up I compared progress. The results were as clear-cut as it gets:
– the old Ionos VPS (1.2€/month, yes it’s cheap): 1179 frames (yup, that lib is slow, at least with its default settings) (a day and a half later: 4875)
– the new Ionos VPS (1.2€/month too): 521 frames, so around 55% slower… proof that new or “progress” doesn’t necessarily mean better (a day and a half later: 1531 so even 68% slower)
– just for fun, some other OVH VPS (3.84€/month): 1055 frames, so the old and cheap Ionos VPS does provide good value, CPU-wise (it has a quarter of the RAM and half the disk space of this one though, so the price difference does have reasons). This particular VPS was also running a web crawler (MJ12node), so obviously this must have affected (note how I don’t say “impacted”) performance.
The cpuinfo of that last VPS is below. Fairly close to the old Ionos it seems.

cpu family      : 6
model           : 60
model name      : Intel Core Processor (Haswell, no TSX)
stepping        : 1
microcode       : 0x1
cpu MHz         : 2394.454
cache size      : 16384 KB

I also ran the “benchmark” on my laptop, but forgot to write down the exact number. It was around 4000. Note that all the VPSs are single core while my laptop obviously isn’t, however that encoding is very poorly multi-threaded (at least with the default settings), so I’d say it used barely a core and a half (in all this, I talk about logical cores since everything here uses HT or SMT). I also had quite a bit of stuff running at the same time and had configured my CPU with a very low power limit (15W total), so roughly I could increase that to 60W and have the same power per used core (but a lot more cooling needs).

So the 2 fast VPS run around 60% slower (1100/(4000/1.5)), per core, than my laptop. Which I guess is not too horrible given their price and the fact that my laptop is quite recent (Ryzen 6800H).
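For the curious, that percentage can be reproduced with a quick awk one-liner (ballpark math, not a measurement — the 1.5-core figure is the rough estimate from the previous paragraph):

```shell
# Per-core throughput comparison, using the overnight frame counts above:
awk 'BEGIN {
  laptop_per_core = 4000 / 1.5        # about 2667 frames per logical core
  ratio = 1100 / laptop_per_core      # about 0.41
  printf "VPS per-core speed: %.0f%% of the laptop => about %.0f%% slower\n",
         ratio * 100, (1 - ratio) * 100
}'
```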

The slow new VPS, though, is 80% slower, which is getting quite bad, particularly considering it’s the newest of all (the offer started in summer 2023). That new VPS also had worse network speed than the old one, even though it was supposed to be faster in that respect (1Gbps vs 400Mbps, the reality was more 200Mbps vs 350Mbps…)

So a week after getting it, I eventually used my right of withdrawal to cancel the 1-year contract. Ionos was very fast and processed my request in less than 20 minutes (I sent an e-mail and they called back), so nothing to complain about; they do have very responsive customer service, and I certainly wish my banks were so easy to reach.
One small negative detail is that they filed it under their 30-day money-back guarantee instead of the right of withdrawal, which could have implications should I need to cancel a similar bad surprise in the future (the right of withdrawal applies all the time but for 14 days only; their money-back guarantee applies for 30 days but only once ever per customer, according to their ToS). Not that I plan to use it again, as it’s never pleasant to invest time in a new server only to drop it because it’s too bad… but it’s even worse if you then get stuck paying for that zombie server for a year 🤔

That was definitely too long

I should write a novel
… oh wait, I just did!

Bibliography

As promised, my sources, in no particular order (although I tried to put the most interesting guides at the top):

Posted in Internet, servers.


Pale Moon’s developers’ strange conception of privacy

I don’t often talk about Pale Moon (or is it Palemoon?), but I still have it installed, even on my last PC, which I set up only a year ago. I don’t use it much, but I find it convenient to have it as a default browser, so that shitty surprise link-openers all end up in a dedicated space that never has direct Internet access without my firewall asking me for permission first. The silly things we have to do to compensate for Windows’s faults… 🙄

Today was no different, and some random crapware fired up Palemoon to open a surprise link. But for some reason I hung around a bit, and ended up on the Palemoon forums. And a topic caught my eye: “Remain active to keep your account”. Besides my interest in privacy, a reason it caught my eye is that they use phpBB, and I’ve run phpBB for a long time in the past, and as far as I remember, account deletion has always been a tricky matter with it (like with most bulletin board software). Although maybe it’s different now with the GDPR, I don’t know, my use predates this law.

The first post explains, to put it simply, that they now purge inactive accounts after roughly 2 to 3 years (in January every year, accounts inactive for over 2 years are goners). They describe the “purge” as removing all account data except the nickname and the posts, and explain that this comes from their privacy policy.
Thankfully, someone eventually called them out on this:

I actually don’t […] understand what this has to do with privacy. If an account […] is removed and the only two things remaining is said nickname and the post(s) attached to it, this doesn’t fulfill the users “Right to be forgotten” but only removes his opportunity to have control over his posted content, eg. deleting or adapting his posts later on.
Thus […] it actually decreases the adherence to the principle of “My data belongs to me” because you lock out the owner […]

The lead developer’s (I think) response is… wild:

There is no “right to be forgotten” as in a right to erase your entire footprint from history. There is only the “right to have your personally-identifiable information removed”. A nickname is not personally-identifiable information.

First, GDPR doesn’t talk about “personally-identifiable information” but simply about “personal data”, plain and simple. Let’s not make up big obscure expressions to try and get people confused. Personal data. Two words, 12 letters, no dash.

Second, Article 4 of the GDPR (Chapter 1) defines personal data very clearly (but broadly): ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

With that in mind, a nickname taken purely alone isn’t personal data. But first, it is never alone: at the very least, it is accompanied by “a person with this nickname had an account there”. Second, it is an online identifier which might be linked to an identifiable natural person, for instance if the nickname looks like “FirstnameLastname”, or is something the user used elsewhere too, or if the user included content that made them identifiable (even indirectly) in their forum posts, or if the forum software keeps IPs and timestamps after the purge (something not indicated; a long time ago phpBB did keep IPs forever, but I don’t know if this is still the case).
In all these cases where the nickname can be linked to a natural person, then all the other data that are kept along with it (and there is always at least a tiny bit of those) are personal data.

The response continues:

Even users with active accounts can’t delete their own posts after a short grace period, so in that respect there is 0 difference, and nothing is “taken away” from users who have their personally-identifiable information removed from the database in a purge.

So this is even worse, and it looks like someone never heard that two wrongs don’t make a right.

And it goes on to my favorite:

On top, as with any website you use with a posted privacy policy, it is your responsibility to be aware of the practices of the websites you make use of, including any data purges that may be part of account management. Ignorance of our privacy policy is no excuse.

That’s very American: “My Terms of Use are Law”. But in some countries, notably where the GDPR comes from, you can’t strip people of their rights via crappy Terms of Service. “Ignorance of our laws is no excuse”, I could say.

Another user then raised other interesting points (but less privacy-oriented), the developer also insisted a bit more on “we do that primarily to protect you / your data / your privacy”, so go have a read if you wish and haven’t already.

That said, that last quote did contain a valid remark: I should read that privacy policy, shouldn’t I? But… where the hell is it? I looked a bit everywhere in the forums, even went to the registration form, because of all places, that’s the one where you are usually welcomed with the whole legal jibber-jabber, and… Nope, not here, not there. Not even a mention in the board rules. So much for “you should know our privacy policy” if you keep it hidden, huh?
I did not give up though, and I cheated. It didn’t get me directly there, but I found this post, with a promising link to http://www.palemoon.org/privacy.shtml, but… it’s dead. Ugh. But from here I found a privacy link in the footer, and that was it, yay.

Now I won’t comment on it in full because 1) that wasn’t quite the point of this post, which just started as a rant about how stupid it is to delete user accounts while retaining their username and posted content and proudly claim you do that for the sake of protecting their privacy, 2) that privacy policy seems to cover Pale Moon operations as a whole and more particularly the browser, and 3) I’m not a lawyer anyway. But 2 things caught my attention.

First, the data pruning part, since this is what started all this. The funny thing about it is that the privacy policy never says precisely what data is purged vs what is kept. You have to go to the forum post I’ve been rambling about here in order to find the details. So much (again) for “you’re supposed to know our privacy policy by heart”: even if you do, you don’t really know what’s going on with the purges…

Second, a juicy part because it’s plain and simple in violation of GDPR. The rest of my rant points out questionable ethics, but nothing absolutely broken, as I assume it may ultimately be treated via human intervention for the most problematic cases. And precisely this part is about human intervention:

You may instruct us to provide you with any personal information we hold about you. Providing said information will be subject to (in that order):
1. The payment of a non-refundable fee (fixed at €10).

I will just copy here Paragraph 5 of Article 12 of GDPR (Chapter 3, Section 1):

Information provided under Articles 13 and 14 and any communication and any actions taken under Articles 15 to 22 and 34 shall be provided free of charge. Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:
(a) charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or
(b) refuse to act on the request.
The controller shall bear the burden of demonstrating the manifestly unfounded or excessive character of the request.

In case you didn’t know (and didn’t guess either), those Articles 13 to 22 are basically the ones about the right to access/modify/delete your data. Basically, if you don’t do it too often (and in particular if you do it only once) and you didn’t flood the forums with your personal data, it MUST be free.

A positive note

Yup, I put the only header in this wall of text here just to catch your attention down there.

All that being said, the privacy policy is otherwise globally sensible and no worse than many, many others, and in particular better than those that massively screw you despite following GDPR to the letter – hurray for the “legitimate interest” loophole. The browser is globally good (otherwise I wouldn’t have it installed…) and, even more importantly, is part of the very exclusive club of “Browsers that are not yet-another-ChromeCrap but are not Lynx either”.
If you had never heard of it, you should give it a try. If it was for you a distant memory, maybe have a quick look at it again. If you’re under 20, maybe run it to see what browsers used to look like when you were in elementary school or kindergarten (or below). If you’re a web developer, try running your projects in it to see how far you’ve strayed from the good old simple web (Pale Moon’s rendering isn’t that outdated, but still you can tell it doesn’t like fancy-fatty front-end frameworks much).

My (much longer than planned) rant was mostly about the attitude rather than big privacy issues. This paternalistic “we do [insert crap here] to protect you because we know what’s good for you [and you don’t]” mindset is bad and terribly annoying, and should remind us all that the road to Hell is paved with good intentions.

Update on 2023-12-29

Something quite hilarious happened to me a few minutes ago. Well it’s not hilarious in itself, but it is when you consider I wrote this lengthy post not even 48h before.
While we have these guys here who find it normal to delete (or, more accurately, deadlock) forum accounts after 2 years, I just visited a forum for the first time in 2 years, 5 months and 5 hours (give or take a few minutes), on a different computer at that, and I… was still logged in!!
And to think that if this had been the Pale Moon forums, my account would be not just logged out, but gone*. Choices, eh. (also congrats Flag Counter, you get the “don’t bother me with forms” platinum medal, please do keep it up)

* okay, this was for the dramatic effect, as technically it might be gone or not be gone, because the purge occurs “around the beginning of the year”

Update part 2: I realized, while writing the small note right above, that they use CloudFlare and they block Tor. But they “care about privacy”. That’s hilarious too.

Posted in privacy, Totally pointless.


How to set up an OpenVPN server on Linux

Due to how long this guide is, despite the fact that I made it as minimalist as possible, I have kept my remarks and comments in a separate post, except for the bare minimum.
Here I will just point out that I tried to make this as straight to the point as I could, with the sole objective of getting “something that works”, with, in particular, little to no consideration for security aspects and nothing that isn’t strictly necessary. This guide will (should) get you to a working setup (including connecting your OpenVPN client to it) as fast as possible, but it leaves a lot of room for post-install improvements.

Prerequisites

My objective was to set up an OpenVPN server on a VPS running Debian 12, so obviously having exactly that would be ideal. But any machine or virtual machine with Debian or Ubuntu should do, possibly with some tweaks. Any decent Linux distro should do too, but then with even more modifications.
Have root access on it.
We’ll do everything as root here for the sake of simplicity (again and for the last time, this guide is not security-focused). You’ll have to “sudo” most of the commands if you don’t. And, I guess, work from a different folder.

Installing OpenVPN

apt-get install openvpn

Installing EasyRSA

I didn’t manage to use the one from the package manager, so I just grabbed this 3.1.7 release, extracted just the “easyrsa” file, and uploaded it inside the /root/easyrsa folder (so the file’s full name is /root/easyrsa/easyrsa)
Alternatively, you can just wget a slightly different version.

Assuming you are currently in /root:

mkdir easyrsa
cd easyrsa
wget https://raw.githubusercontent.com/OpenVPN/easy-rsa/v3.1.8/easyrsa3/easyrsa
chmod 744 easyrsa

Generating keys and certificates

Initialize a new Public Key Infrastructure (PKI), generate a Certificate Authority (CA) keypair, and copy the CA public key to the OpenVPN config folder:

./easyrsa init-pki
./easyrsa build-ca
cp pki/ca.crt /etc/openvpn/server/

Make sure you take note of the password you choose for the CA, as you can’t leave it empty and you’ll need it later. Besides that, you can leave all default values for the rest of the prompts.

Generate the server key and certificate, the Diffie-Hellman (DH) parameters file, and the Hash-based Message Authentication Code (HMAC) key:

./easyrsa gen-req server01 nopass
openssl dhparam -out /etc/openvpn/server/dh.pem 2048
openvpn --genkey secret /etc/openvpn/server/ta.key

Sign the server certificate, and copy the server key and certificate to the OpenVPN settings folder:

./easyrsa sign-req server server01
cp pki/issued/server01.crt /etc/openvpn/server/
cp pki/private/server01.key /etc/openvpn/server/

Generate the client key and certificate, sign it and copy it (just the certificate) to the OpenVPN settings folder:

./easyrsa gen-req client01 nopass
./easyrsa sign-req client client01
cp pki/issued/client01.crt /etc/openvpn/client/

Generating the client profile

Get ovpngen:

wget https://raw.githubusercontent.com/graysky2/ovpngen/master/ovpngen
chmod 744 ovpngen

Then generate the profile. Note that we are still working in the /root/easyrsa folder, which we haven’t left since we created it at the beginning of the guide:

./ovpngen [server IP] pki/ca.crt pki/issued/client01.crt pki/private/client01.key /etc/openvpn/server/ta.key > client01.ovpn

I won’t detail how to configure your OpenVPN client, but basically you just need to install an OpenVPN client like the one here https://openvpn.net/community-downloads/, then import that client01.ovpn file in it and you can connect.
Except we’re not quite done configuring the server yet.

Configuring and starting OpenVPN

Copy the sample configuration file into the OpenVPN settings folder, then open it with nano:

cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf /etc/openvpn/
nano /etc/openvpn/server.conf

In this file, set these values:

port 1194
proto udp
dev tun
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server01.crt
key /etc/openvpn/server/server01.key
dh /etc/openvpn/server/dh.pem
tls-auth /etc/openvpn/server/ta.key 0

Also uncomment these lines:

push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"

Finally, start OpenVPN:

systemctl start openvpn@server

First test

You can skip this as we’re not fully done yet, but you now have enough to be able to connect your OpenVPN client to your OpenVPN server. If you are unable to connect, you should probably double-check that you didn’t miss anything.
However, if you check it, you’ll notice that your IP is still your client machine’s IP, not your server’s IP… So on to the next part.

Setting up the server’s networking parameters

Enable IP forwarding:

nano /etc/sysctl.conf

and add at the end:

net.ipv4.ip_forward=1
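Note that editing sysctl.conf alone doesn’t change the running kernel: the file is read at boot. To check the current state (and, as root, apply the new value immediately):

```shell
# 1 means IP forwarding is enabled, 0 means disabled:
cat /proc/sys/net/ipv4/ip_forward
# To apply the sysctl.conf change right away, without rebooting (as root):
# sysctl -p
```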

Find the name of your server’s public network interface. It may often be “eth0”, but for me it wasn’t:

ip route | grep default

it will output something like

default via [default gateway IP] dev ens6 proto dhcp src [server IP] metric 100

In this, the value of interest is what’s after “dev”, so in my case, “ens6”
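If you’d rather script that step than eyeball the output, the word after “dev” can be extracted with a bit of awk (a sketch, shown here on a sample line; in practice you’d pipe `ip route | grep default` into the same awk program):

```shell
# Sample line, as output by `ip route | grep default`:
line='default via 192.0.2.1 dev ens6 proto dhcp src 192.0.2.10 metric 100'
# Print the field that follows "dev":
iface=$(printf '%s\n' "$line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$iface"   # ens6
```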

Set up a firewall rule to enable “masquerading”, a network address translation (NAT) setup allowing traffic from the VPN network (10.8.0.0/24) to exit via your server’s public network interface:

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o ens6 -j MASQUERADE

(of course, replace ens6 with the value you found above)

Final test

That’s it, you should now have enough to connect to your OpenVPN server, and then actually use the server’s IP address from your client machine.

Further improvements

As indicated at the start, here I focused on just getting something working as fast as possible, and (some) further improvements will be discussed in the complementary post that is to come in order to keep this post short(-ish).
But I feel that there is still something important missing, even for “something that just works”: making sure the described setup still works… after a reboot.

For this, 2 things, which I actually haven’t tested as I’m writing these lines:
1) Save the iptables parameters (note that on Debian, the /etc/iptables folder and the restore-at-boot mechanism come from the iptables-persistent package, which you may need to install first):

iptables-save > /etc/iptables/rules.v4

2) Make the OpenVPN service run at startup:

systemctl enable openvpn@server

And now, that should be it.

Posted in Internet, privacy, servers.


More various drafts again

Well it’s been a long while (pretty much over a decade) since the last similar post, so I guess I can allow myself to post this and empty my todo list from these posts I’ll never find time to properly wrap up.

Most of the links I put down there are not clickable, that’s on purpose because, as I don’t have time to compose the post properly, I don’t have time to decide what’s really relevant and what should rather be dropped. And I don’t want to end up with a ton of not-so-relevant links, thanks stupid search engines (looking at you big G) and the ridiculous SEO constraints they put on us.


Why iframes are kind of dead now (not sure why I had this pending, unlike other crap Firefox did this one seems to mostly make sense): https://support.mozilla.org/en-US/kb/xframe-neterror-page


Firefox 98 totally fucked up how downloads are saved by default: https://support.mozilla.org/en-US/kb/manage-downloads-preferences-using-downloads-menu
That one is annoying: as far as I remember, now every time I set up Fx I have to manually set, for each file type, that I want Fx to ask where to save it, while before you had a global toggle for it. Not 100% sure they haven’t improved it since then, but still, it shows they do employ moronic designers.

How to prevent Windows from turning off idle hard drives

This one deserves a bit of context.
On my previous computer, I was able to configure when to turn off idle hard drives right from the Windows power management settings (I won’t go into details here, it’s easy to find by yourself in the settings, and if not you can find plenty of written or video guides elsewhere, including in the 2 links I post below as they cover both methods). But on my newest one, running Windows 10 just like the other, the power management settings somehow have far fewer options, and in particular nothing about how long to wait before turning off an idle hard drive. Worse, the default settings felt incredibly short, and indeed it turned out they were like 10 to 30 seconds (I don’t remember the exact value, but that’s the order of magnitude and it was definitely NOT the 20-minute default that I read about in one of the linked articles).
So needless to say it was hard on my external HD, and also hard on me because any time I waited a tiny bit between 2 file accesses on it, I had to waste precious seconds waiting for the HD to start spinning again.

So I had to look up how to configure that directly, via console commands. I don’t remember exactly how I found the proper commands, as the links I saved don’t have them all. Maybe I just figured them out myself by reading the help (with command powercfg /?).
First the links:
– https://www.tenforums.com/tutorials/21454-turn-off-hard-disk-after-idle-windows-10-a.html
https://www.top-password.com/blog/prevent-windows-from-turning-off-hard-drive-after-idle/ ⇐ this one is shown in a code tag because WordPress somehow tried to include a miniature of that post in my post… FFS when will they stop forcing stupid shit on us by default?

Then the commands:
powercfg /LIST ⇒ list power schemes
powercfg /Q 381b4222-f694-41f0-9685-ff5bb260df2e ⇒ show details of the current power scheme. The GUID can be different for you, just copy the appropriate ID from the list. You may want to do that for several power schemes if you use several power schemes. I just use the one.
That list is quite big, but should hopefully not exceed the console history size (at least in ConEmu that was fine for me)
powercfg /Q SCHEME_CURRENT ⇒ same as above but using the alias. Should work, but in case it does not, you know how to get the GUID
powercfg /Q SCHEME_BALANCED SUB_DISK DISKIDLE ⇒ to just get the value that interests us, with aliases and supposing the active plan is balanced (you can just use SCHEME_CURRENT, I used balanced here just for variety)
powercfg /Q 381b4222-f694-41f0-9685-ff5bb260df2e 0012ee47-9041-4b5d-9b77-535fba8b1442 6738e2c4-e8a5-4a42-b16a-e040e769756e ⇒ same with GUIDs (you can mix GUIDs and aliases, too)
powercfg /SETDCVALUEINDEX SCHEME_BALANCED SUB_DISK DISKIDLE 0x8ca0 ⇒ sets the idle off timer to 10h (36000 seconds).
I noted that I used hexadecimal because, at the time I first tried, decimal values didn’t seem to work, but as I’m writing this post while messing with the settings again, I realize that decimal values now work… Also worth noting, it seems that my external HD now never turns off; I’m not sure why, as I realize that it should turn off after “only” 10h with those settings (before looking into this today, I believed I had set this to something much higher, given that I don’t care much about this drive staying on all the time when it’s plugged in). Maybe the computer touches it more than once every 10h and thus it’s never idle that long. Or maybe I screwed up another setting.
powercfg /SETACVALUEINDEX SCHEME_BALANCED SUB_DISK DISKIDLE 36000 ⇒ sets the idle off timer to 10h, for when the computer is connected to external power (yup, that’s a laptop, and the previous setting was for battery power)
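To double-check that hexadecimal value (an aside, purely for the arithmetic): DISKIDLE is a number of seconds, and 0x8CA0 is indeed 36000, i.e. 10 hours:

```shell
# Convert the hex value passed to powercfg into decimal seconds:
printf '%d\n' 0x8CA0            # prints 36000
# And into hours:
echo "$(( 0x8CA0 / 3600 ))"     # prints 10
```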

Last but not least, reboot. The settings won’t apply until then (even though they show as modified if you run powercfg /Q SCHEME_CURRENT).

That’s all for now, I hope this helped.

Posted in published drafts.


aToad #33: OneClickFirewall & simplewall

More free firewalls, one being open-source

I’ll be brief because I haven’t tested them (yet), as I’m still on Comodo Firewall.

I was starting my computer and noticed, as usual, a notification from my firewall asking what to do about freaking NVDisplay.Container.exe, which keeps trying to connect. But this time I figured I’d try to look a bit more into it – not really hoping for much, but rather a bit curious about what other people say/do about it. One of my first results was this forum (yay) post on Techpowerup (yay again, forums are not all dead, and memories from this site that I used to visit a lot more in the past).

There wasn’t much to see though, just basically a confirmation of what I suspected: NVDisplay.Container.exe tries to connect to update stuff like its DLSS libraries and can’t be prevented from doing so, because NVIDIA. The more interesting part of this thread, though, was that a couple of people went on to talk about their own current firewalls. So I figured, the thread is recent (less than 2 months old) and involves people who care about properly blocking telemetry crap, so let’s pocket the list of those firewalls for future use. So the 3 nominees are:

  • Comodo Firewall, which I already mentioned
  • OneClickFirewall, which seems to have been last updated in 2016, but is also reportedly still working
  • simplewall, which is FLOSS / free software, was first released in 2016 (a few months after OneClickFirewall’s last update), and seems to be really actively maintained

I don’t have time to mess around with my firewall now, and Comodo works fine and is all set up, but next time I set up a computer I’ll be sure to give simplewall a try first. Beside being free and open source, it’s incredibly lightweight (<1 MB), has a portable mode, blocks everything by default and more generally seems quite targeted at power users who don’t want obscure shenanigans on their network.

Posted in A Tool A Day, Internet.


How to permanently disable Windows Defender in Windows 10 21H2

It appears that Microsoft made it harder to get rid of Windows Defender in the latest versions of their out-of-control OS.

Previously, in version 1607 for instance, you could simply disable it by opening the Local Group Policy Editor (just start typing it in the Start Menu to find it), going to Computer Configuration => Administrative Templates => Windows Components => Microsoft Defender Antivirus and setting “Turn off Microsoft Defender Antivirus” to “Enabled”.
Later on, they added a “Tamper Protection” in Windows Security settings that you must first turn off in order to be able to enable the above-mentioned policy (see below for more details).

Now, they made it so that if you only do those 2 things, or equivalent stuff described for instance there or there, Windows Defender will eventually (and much sooner than later) re-enable itself, removing your added registry keys and/or policies. It took me a few days of “trial and error” (or should I say, trial and getting screwed by MS) to figure it out, and maybe what I ended up doing is a bit overkill, but here is what worked for me:

Step 1: disable everything in the Virus & threat protection settings (you should be able to search for these straight from the start menu or from the settings “app”). Which, as I’m writing those lines, is:

  • Real-time protection (the one that said “yay you can disable me but f*** you I’ll re-enable myself very fast anyway, haha, screw you, user”)
  • Cloud-delivered protection
  • Automatic sample submission
  • Tamper protection

Probably only the 4th one is truly needed here, as step 2 should take care of the rest, but it doesn’t cost much to click a few extra buttons, does it?

Step 2: go lock all this in the Local Group Policy Editor. Run it by typing its name in the start menu, or also via Windows key + R then “gpedit.msc”, then navigate to Computer Configuration => Administrative Templates => Windows Components => Microsoft Defender Antivirus and:

  • Set “Turn off Microsoft Defender Antivirus” to “Enabled” (yup, so intuitive, you need to enable a “disable-ation”… Microsoft is still Microsoft…)
  • Set “Turn off routine remediation” to “Enabled” too

Then go into the “Real-Time Protection” subfolder and:

  • Set “Turn off real-time protection” to “Enabled”
  • Set “Turn on behavior monitoring” to “Disabled”
  • Set “Scan all downladed files and attachments” to “Disabled”
  • Set “Monitor file and program activity on your computer” to “Disabled”

And that’s “all”. So simple. So user-friendly.
With all this, you’re still able to run quick scans manually if you wish, but they shouldn’t run on their default daily schedule anymore. And the virus database should still be kept up-to-date.
Reverting the changes is easily done by removing those policies and changing the settings back to what they were. I guess just removing some of the policies might even be enough, considering Windows Defender’s tendency to turn itself back on spontaneously.
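For reference, those Group Policy settings are backed by registry values under HKLM\SOFTWARE\Policies\Microsoft\Windows Defender, so you can also apply (or script, or audit) them as a .reg file. The value names below are what those policies set to the best of my knowledge; double-check them against your Windows build before relying on this sketch:

```
Windows Registry Editor Version 5.00

; "Turn off Microsoft Defender Antivirus" and "Turn off routine remediation"
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender]
"DisableAntiSpyware"=dword:00000001
"DisableRoutinelyTakingAction"=dword:00000001

; The four "Real-Time Protection" policies, in the same order as above
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection]
"DisableRealtimeMonitoring"=dword:00000001
"DisableBehaviorMonitoring"=dword:00000001
"DisableIOAVProtection"=dword:00000001
"DisableOnAccessProtection"=dword:00000001
```

Note that Tamper Protection (step 1) must still be turned off first, or Defender will just ignore/undo these values.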

That’s all folks, until the next stupid update that changes stuff you don’t want changed.

Posted in Windows 10.


aToad #32: JDiskReport

Quickly visualize which folders are taking the most disk space

I’m currently migrating to a new computer, and in the process I have to move (or, if it appears to be a better choice, drop) all my browser profiles. And as it turns out, after some years, they get big. Huge, I’d even say. Notably Vivaldi (which turned out impossible to move properly because the idiots will drop both cookies and extensions while pretending it’s a useful feature), even though I didn’t use it much: 1.5 GiB profile size, mind you. That’s a bit more than my main Firefox profile, which I’ve used a lot a lot a lot more and which, very notably, includes around 800 MiB just for Telegram local storage (or should I say included, now that I’ve removed it once and for all).

Anyway, I thought that even though I didn’t want to start from scratch, it would be nice to tidy up a little bit. But that profile contains so many folders… That’s where JDiskReport becomes useful. Even though the UI of the new version 2 isn’t quite finished (the top PITA IMO is that you can’t copy/paste a folder path but you have to browse to it), it’s a pretty convenient and light tool to visualize the respective size of subfolders. Great to target the few big ones so that you don’t waste time on the tiny ones.
I’m aware that there are some more integrated tools, but since it’s not something I use more than once or twice a year, I like the fact that it’s purely portable. Nothing to install (except see below), just download and run the 3 MiB JAR file.

There really isn’t much to say, except maybe that it’s written in Java, so yup, you’ll need that annoying JRE (but you probably already have it, and in this day and age it’s not that big nor slow anyway). My current favorite is Adoptium / Eclipse Temurin, just don’t forget to pick the JRE, because the JDK, which the download page defaults to, is much fatter (yes, I said fatter, not faster).
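Running it is then a one-liner. A minimal sketch (the filename is hypothetical here, adjust it to whatever the downloaded JAR is actually called):

```shell
# Hypothetical filename: rename to match your actual download
JAR="jdiskreport.jar"

# Launch JDiskReport if a Java runtime is available
if command -v java >/dev/null 2>&1 && [ -f "$JAR" ]; then
  java -jar "$JAR"
else
  echo "Need a JRE (e.g. Eclipse Temurin) and $JAR in the current directory"
fi
```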

Update (2023-03-01): I just ran this on my new computer, and it’s actually super fast. Analyzed a folder with 3k+ sub-folders and 20k+ files within a split second. I guess it was slow-ish on my previous computer because I had a really slow SSD on it.

Posted in A Tool A Day.


How to get rid of Booking’s permanent notification alert

There’s something very wrong with Booking.com‘s notification system. They notify you for a lot of useless crap, and you can’t even disable all notifications (talking about the website or in-app notifications here; obviously push notifications can at least be killed at the system or browser level).
In particular, I have a notification about 1 person liking my review more than 1 year ago, and this notification always comes back as soon as I just reload the page. Just to clarify, I’m talking about this little number:
[Screenshot: Booking.com notification badge counter, always showing]

I find this extremely distracting. Unfortunately, I didn’t find a way to get rid of this notification without getting rid of all of them, but I figured removing them all would be better than nothing. Particularly since important (and truly new) notifications arrive by e-mail and/or push, so not having that red number shouldn’t make you miss anything important.

My (imperfect) solution is then to set up a custom filter for uBlock Origin. In uBlock Origin, go to the settings, then “My Filters”, and add the following filter:
booking.com##.js-uc-notifications-bell-count.bui-bubble--destructive.bui-bubble-container__value.bui-bubble
This removes just the little red number, so you can still access the notifications by clicking the bell, only you’re not being constantly nagged about it.

You could also remove the whole bell if you prefer, with this filter:
booking.com##.js-uc-notifications-toggle.bui-button--large.bui-button--light.bui-button
But that’s a bit overkill IMO, as it will make it impossible to reach the notifications without disabling the filter every time. Considering how worthless this menu item is most of the time, I guess that’s no big deal, but still, as long as it doesn’t have the red number it just doesn’t catch my attention anyway.

Anyhow, special thanks to the idiots in charge of UX/UI at Booking! Smart people working in Big Tech, as usual…

Bonus: earn a bit of space at the top

If like me you don’t care about flights, car rentals and whatever is not hotels, you can get rid of that menu and reclaim a bit of vertical space in that space-wasting UI, with the following filter:
booking.com##.bui-tab--rounded.bui-tab--light.bui-tab--borderless.bui-header__tab.bui-tab > .bui-tab__nav

Note that, as you see, these are pretty long names, so I assume they may change slightly over time. As of 13 February 2023 they work. Maybe I’ll update the post from time to time, or maybe I’ll forget, but uBlock makes it easy to create your own filter anyway: just use the picker from the menu.

Posted in Totally pointless.