

How to catch wild pigs

You catch wild pigs by finding a suitable place in the woods and putting corn on the ground. The pigs find it and begin to come every day to eat the free corn. When they are used to coming every day, you put a fence down one side of the place where they are used to coming. When they get used to the fence, they begin to eat the corn again and you put up another side of the fence.

They get used to that and start to eat again; you continue until you have all four sides of the fence up, with a gate in the last side. The pigs, who are used to the free corn, start to come through the gate to eat, and you slam the gate on them and catch the whole herd.

Suddenly the wild pigs have lost their freedom. They run around and around inside the fence, but they are caught. Soon they go back to eating the free corn. They are so used to it that they have forgotten how to forage in the woods for themselves, so they accept their captivity.

It works on humans too: the government keeps pushing us toward communism/socialism and keeps spreading the free corn out in the form of programs such as supplemental income, tax credit for unearned income, tobacco subsidies, dairy subsidies, payments not to plant crops (CRP), welfare, medicine, drugs, etc. while we continually lose our freedoms just a little at a time.

There is no such thing as a free lunch.

Source: http://www.crossroad.to/Victory/stories/wild-pigs.htm, but it’s actually a pretty common story

Posted in Uncategorized.


Cutting off work-related digital distractions at work

I recently realized that I wasn’t as productive as I wished I was at work. Sure, the colleagues playing pool at any random time of the day right next to my desk, or the whistling and singing (seriously!) in the open space don’t help, but I noticed I was also distracted by something sneakier: some of my very work tools. Namely, Slack and e-mails.

Slack

Slack’s business consists of empowering users to replace their too numerous short e-mails that span long threads with… a hundredfold more numerous instant messages that fill a screenful of channels. Gee, what an improvement! Even with desktop notifications off and my phone most often in airplane mode, the red icon in the Slack browser tab, and the e-mail notification if I ignore it for too long, guarantee regular distractions. I eventually resorted to some drastic measures:

  • Leaving some channels where I really wasn’t relevant. Like that channel where designers configured Zeplin to send notifications every time they commit a change
  • Muting chitchat channels like #random or #music
  • Starring as few important channels as possible, and hiding by default all channels except the starred ones and those with unread stuff
  • Limiting notifications to mentions and direct messages (and keywords, but I don’t have any), when I have to have notifications on (when working remotely)

I’m down to 6 starred channels and 4 muted channels out of 25+. I also starred 3 private messaging channels, with small groups of people I exchange with regularly. I didn’t leave that many channels, I’d say about 3 or 4. But even then, Slack is now a lot less distracting. Unread stuff flashes way less often, and whenever I do check updates in those less important channels, as soon as I leave them they disappear again. Out of sight, out of mind.

Note that muted channels will reappear when you have unread messages in them, only they won’t be highlighted (unlike non-muted channels). Now that I think of it, this seems logical, but at first I was a bit surprised by this.

E-mails

That may be a bit trickier depending on your setup and habits. When I last changed my e-mail provider, from the start I added folders and set up filters so that habitual incoming e-mails end up right where they belong, rather than flooding my inbox. Try to do that. But not all at once: every time a new e-mail arrives, see if it’s a regular one that should fall into a folder. By regular, I don’t necessarily mean newsletters: it could also be, for instance, a contact with whom you exchange regularly.

Since I mentioned newsletters: ditch them. Seriously, if you do just one thing about your e-mails, I think that’s the one, and it’s easy enough. As with the incoming e-mail filters, don’t try to do it all at once, do it as they come. When a newsletter arrives, ask yourself: does it really interest me? Has this newsletter interested me at any time within the last X months? If no, hit that unsubscribe button. If yes, ask yourself if you really need to have that information pushed into your inbox, or if you can just actively consult it in your own time.
Unsubscribing is easier than ever now, as GDPR prompted newsletter managers to make sure it is easy. Since I started the draft of this post, I think I unsubscribed from about 20 newsletters. My mailbox feels so much quieter now 🙂

A last idea about your e-mails, although that one is harder to achieve: try to keep your inbox empty. The previous tips are more important, and kind of a prerequisite, in order not to waste time moving e-mails around. Also, achieving a truly empty inbox might be a bad goal if you focus on it so much that it becomes a distraction in itself. But an empty, near-empty, or at least short enough inbox that you can see the bottom of the list without scrolling feels quite relaxing to me. So I do try to keep my inbox to less than a screenful, even if it means moving some e-mails into a “todo” folder that I process regularly: the inbox is where I land whenever I open my e-mail tab, and a little stash out of sight in a todo folder feels better than a crowded inbox.

TL;DR

  • Slack: leave and mute channels, star the few important channels, hide non-starred channels, tune down (or fully turn off) notifications
  • E-mails: unsubscribe from newsletters, auto-sort regular incoming e-mails into folders, move the rest manually out of the inbox

Posted in Uncategorized.


Using Freenet with OpenJDK (AdoptOpenJDK) on Windows

Oracle recently rolled out a new licensing policy for Java. Frankly, I find it a mess and I don’t really understand what is and what isn’t allowed. It seems personal use and development use are both allowed, but still, downloading the JDK now requires creating an Oracle account. That was the straw that broke the camel’s back. So I looked into alternatives.

AdoptOpenJDK seemed nice. It provides builds that seem regularly updated, for OpenJDK 8, 11 and 12, and it even lets you choose which Java VM you want, between HotSpot and OpenJ9. That JVM choice doesn’t seem to matter that much, from the few benchmarks I found, but still it’s appreciated.

Installation is straightforward, and I was soon able to get this in my console:

> java --version
openjdk 12.0.1 2019-04-16
OpenJDK Runtime Environment AdoptOpenJDK (build 12.0.1+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 12.0.1+12, mixed mode, sharing)

A nice upgrade from Java 8u201.

But, to my surprise, Freenet wasn’t able to find Java (so wasn’t able to run at all). After a brief search, I found that I was missing the registry entries for Java. Maybe I messed up during setup, but anyway it can be fixed quite trivially, by defining the following keys via Regedit (you could also just put this in a .reg file and “run” it):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft]

[HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment]
"CurrentVersion"="12.0.1"

[HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment\12.0.1]
"JavaHome"="C:\\Program Files\\AdoptOpenJDK\\jdk-12.0.1.12-hotspot"

Note that, depending on your version, you’ll want to replace “12.0.1” with whatever you have (and of course, adapt the path too). I’m actually not sure the exact version number matters, as long as both occurrences match.
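If you prefer a command prompt over Regedit, the same keys should be creatable with reg add from an elevated console (I haven’t tested this exact variant, so consider it a sketch; adjust the version and path just like above):

reg add "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v CurrentVersion /t REG_SZ /d "12.0.1" /f
reg add "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment\12.0.1" /v JavaHome /t REG_SZ /d "C:\Program Files\AdoptOpenJDK\jdk-12.0.1.12-hotspot" /f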

Freenet should now be able to start.

Edit: I tried on another computer, and I tried the “Javasoft (Oracle) registry keys” option during setup. It created some keys automatically, but not the ones needed for Freenet to work: it created keys under HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JDK, which I guess might turn out useful for development, but not under HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment, which is what Freenet needs.

Posted in software, Windows.


Installing Rust in a custom location on Windows

It’s actually pretty well described in Rust’s documentation. I’m just putting it here because the documentation is large and I appear to have a hard time finding those specific instructions in a timely manner every time I need them.

First, you could grab a GUI installer from there, but Visual Studio Code doesn’t seem to like it much, and I also remember having some issues running rustup in that context.
This is why I quickly decided to use the “recommended” rustup-init.exe.

Before running said rustup-init.exe:
1) Set the CARGO_HOME environment variable to where you want cargo to be. I picked D:\PROG\PROGRAMMING\Rust\cargo. (NB: for a convenient way to edit environment variables, I recommend Rapid Environment Editor)
2) Set the RUSTUP_HOME environment variable to where you want rustup to be. I picked D:\PROG\PROGRAMMING\Rust\rustup (I’m so creative, I know). If you’d rather set both variables from a console instead of a GUI, see the sketch after this list.
3) If you’re using Rapid Environment Editor, make sure you SAVE (until you do, the environment variables that you created/modified/deleted/etc are NOT actually changed)
4) Make sure you start a new console to run rustup-init.exe. If you use a console that was already running before you added the environment variables, that console won’t have them. If by any chance you are using ConEmu, you need to close and reopen the whole ConEmu: just opening a new tab won’t do if ConEmu was already running before you added the variables. I insist on this point, because rustup-init.exe will give you NO warning/notification as to where the install will be performed, until it’s all over. So if you’re not careful, you’ll end up with Rust installed in its default location before you can say “God fucking dammit”.
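For reference, here is what setting those two variables from a console could look like, using setx (the paths are just my example locations; setx writes the variables for the current user but does not affect consoles that are already open, which is consistent with point 4):

setx CARGO_HOME "D:\PROG\PROGRAMMING\Rust\cargo"
setx RUSTUP_HOME "D:\PROG\PROGRAMMING\Rust\rustup"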

Now, you can (finally) run rustup-init.exe. Make sure to pick option 2) if you want to install the GNU/GCC version rather than the default MSVC, or if you want nightly rather than the default stable. I’m not a fan of using nightly, because it contains features that could get removed at any time, but sadly big frameworks like Rocket require it.

Posted in programming, Windows.


aToad #25: Rapid Environment Editor

A user-friendly GUI to edit Windows’s environment variables

A dozen years ago, some guy decided he couldn’t stand Windows’s default, painful environment variables editor and created this program, Rapid Environment Editor (or “RapidEE”), with a really nice interface to edit environment variables. It’s been regularly updated since then, and even though Windows 10 significantly improved the default UI for editing environment variables, RapidEE is still quite superior IMO.

Notably, it provides a usable browsing interface when adding a folder to the PATH env variable (I just tried this with Win 10’s native UI, even with the improvements it’s hell, and I accidentally overwrote another folder name – I was able to cancel, though), and it highlights in bold red whenever an env variable points to a folder that no longer exists. If you start it as non-administrator, you’ll only be able to edit your user variables, but there’s a convenient button to directly restart RapidEE as Administrator.
Among the extra goodies, the website provides downloads for many older builds, in particular builds of the last versions that supported Windows NT/2000 and Windows 95/98/ME, and the software is available as a portable version, which consists of just a .exe file with a companion rapidee.ini.

Development seems to be pretty slow nowadays, but it’s simply feature-complete… It makes sense to have just some maintenance and translation updates.

Posted in A Tool A Day, Windows.


aToad #24: Bulk Rename Utility and Duplicate Files Finder

The names are pretty self-explanatory… A mass file renamer and a tool to find duplicate files

Bulk Rename Utility is a Windows freeware that makes it “easy” to mass rename files.
On the plus side, it has plenty of features: you can use regular expressions, use ID3 tags and EXIF metadata, change the case, add numbering, preview file name changes, change the files’ creation date, etc. On the minus side, that feature-richness comes at a price: it’s not very simple to use, particularly when you don’t use it very often. Since I rarely use it, I usually have to check the help file first. Still, I find it pretty great.

Duplicate Files Finder is a multi-platform (Windows/BSD/Linux) free and open source program that detects duplicate files and lets you delete them. Pick the directories you want to include in the scan and hit Go. Optionally, you can include or exclude specific file names, and filter your search with minimal and maximal file sizes (excluding tiny files can speed up the search and make the results more readable).
The scan is pretty fast, as it first matches files with their size, and only then compares those that have the same size. The only big weakness of this software is that deletion of duplicates can only be done one by one, i.e. for every file that has duplicates, you have to manually select which one you want to keep. So, it’s good to deal with a few large duplicated files, but it’s not very effective to deal with many small duplicates (all the more reason to exclude smaller files from the search, IMO). Still, my typical use case is mostly on large-ish files that are not too numerous (or whole duplicate folders), so it does the job efficiently enough.

Posted in A Tool A Day.


Upgrading Ubuntu Server from an LTS to the next (non-LTS) version

The configuration file to edit should be mentioned when you run “do-release-upgrade”, as long as there is a new version after your LTS. But for some reason it wasn’t mentioned (or I missed it) several times when I previously ran this command, so now that it’s written down here I won’t lose it anymore.

sudo apt-get update && sudo apt-get upgrade (because the release upgrade command requires you to have all current updates installed)
sudo nano /etc/update-manager/release-upgrades
Change Prompt=lts to Prompt=normal
sudo do-release-upgrade
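If you’d rather not open an editor for the Prompt change, a one-liner like this should make the same edit (assuming the file still contains the default Prompt=lts line):

sudo sed -i 's/^Prompt=lts/Prompt=normal/' /etc/update-manager/release-upgrades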

That’s pretty much it. Note that you can’t skip a version, so for instance if you’re on 17.10 and you want 18.10, you’ll have to upgrade first to 18.04 LTS and then to 18.10.

Posted in Linux.


Renewing the Thecus N7510’s TLS certificate

The Thecus N7510 is a cheap NAS that used to be popular for its large number of disk bays (7) while still being as cheap as (or even cheaper than) most 4-bay NAS units.

It is powered by ThecusOS, but sadly it seems that its version of ThecusOS isn’t maintained very actively anymore. In particular, the SSL/TLS certificate used for FTP over TLS expired about a month ago. Which is pretty annoying, because FileZilla refuses to let you permanently ignore a certificate expiration alert (for stupid reasons, but this isn’t the first time the FileZilla developers provide poor explanations for equally poor choices – we can only live with that).

So the only option I had left was to upgrade the NAS’s certificate myself. Thankfully, this turned out to be fairly easy, as I wrote a guide before on how to create your own self-signed certificate. So the only new (and minor) difficulty was to find where the current SSL/TLS certificate of the N7510 lives. I quickly found that it’s /etc/ssl/private/pure-ftpd.pem, which contains both the server private key and the signed certificate (something very slightly different from my previous guide: you just need to stash the 2 files into one .pem file).

If they’re not already enabled, you need to enable SSH and SFTP from the ThecusOS control panel (the SSH & SFTP toggles are in Network Service > SSH)

Once this is done, here are the commands I used (cf the linked guide if you need more details) to generate the certificate:

cd /etc/ssl/private
# generate a passphrase-protected 4096-bit RSA private key
openssl genrsa -des3 -out servPriv.key 4096
# create a certificate signing request from that key
openssl req -new -key servPriv.key -out servRequest.csr
# keep a copy of the protected key, then strip the passphrase from the working copy
cp servPriv.key servPriv.key-passwd
openssl rsa -in servPriv.key-passwd -out servPriv.key
# self-sign the request, valid for about 10 years
openssl x509 -req -days 3650 -in servRequest.csr -signkey servPriv.key -out signedStartSSL.crt

At this stage, you have everything you need except the combined .pem file.
At first, I tried to use nano to create it, but the Thecus N7510 doesn’t have nano 😡 So I connected via SFTP (with FileZilla) as root (that’s why I told you to enable SFTP along with SSH earlier). Then I grabbed servPriv.key and signedStartSSL.crt, and put them both into a single text file (not sure if the order matters) named newcert.pem.
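In hindsight, since both files already sit on the NAS, concatenating them directly over the SSH session should produce the same combined file. A minimal sketch, assuming you kept the file names from the commands above:

cd /etc/ssl/private
# private key first, then the signed certificate, matching the layout shown below
cat servPriv.key signedStartSSL.crt > newcert.pem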

Just for the sake of clarity, newcert.pem looks like:

-----BEGIN RSA PRIVATE KEY-----
[base64 stuff]
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
[more base64 stuff]
-----END CERTIFICATE-----

Finally, I uploaded newcert.pem into /etc/ssl/private, renamed pure-ftpd.pem to pure-ftpd.pem.bak, and renamed newcert.pem to pure-ftpd.pem.
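If you prefer to do that last swap from the SSH session rather than through SFTP, it boils down to:

cd /etc/ssl/private
# keep the original certificate around, just in case
mv pure-ftpd.pem pure-ftpd.pem.bak
mv newcert.pem pure-ftpd.pem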

All is now ready; the last thing you need to do is restart the FTP server. The easiest way to do that is to disable then re-enable it via the ThecusOS control panel (Network Service > FTP).

Now, when you connect with FileZilla to the FTP server, you’ll see your new, non-expired, certificate, and will be able to trust it permanently (that is, until it expires in about 10 years).

Posted in FTP, security, servers.


Buffing your Apache HTTPS configuration

Setting up HTTPS on Apache with a basic configuration is now both trivial and cheap. Optimizing it for a (slightly) better security level requires a bit more digging though, and a small trade-off: you’ll have to sacrifice fossil browsers, like MSIE pre-11, and generally most old versions of just about any browser. Spoiler: no one really uses those anyway.

First, here is my old configuration. It still gets an A on SSL Labs as I’m writing this, but it’s starting to have issues.

<VirtualHost *:443>
   ServerName gal.patheticcockroach.com
   DocumentRoot "/home/gal/"
   <Directory "/home/gal/">
   allow from all
   Options -Indexes
   </Directory>
   SSLEngine on
   SSLProtocol all -SSLv2 -SSLv3
   SSLHonorCipherOrder On
   SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!CAMELLIA:!RC4:!MD5:!aNULL:!EDH
   SSLCertificateFile /etc/letsencrypt/live/gal.patheticcockroach.com/cert.pem
   SSLCertificateKeyFile /etc/letsencrypt/live/gal.patheticcockroach.com/privkey.pem
   SSLCertificateChainFile /etc/letsencrypt/live/gal.patheticcockroach.com/fullchain.pem
   SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
</VirtualHost>

Now, here is my new one:

<VirtualHost *:443>
   ServerName gal.patheticcockroach.com
   DocumentRoot "/home/gal/"
   <Directory "/home/gal/">
   Require all granted
   Options -Indexes
   AllowOverride All
   </Directory>
   Header always set Strict-Transport-Security "max-age=31536000"
   SSLEngine on
   SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
   SSLHonorCipherOrder On
   SSLCompression Off
   SSLSessionTickets Off
   SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
   SSLCertificateFile /etc/letsencrypt/live/gal.patheticcockroach.com/cert.pem
   SSLCertificateKeyFile /etc/letsencrypt/live/gal.patheticcockroach.com/privkey.pem
   SSLCertificateChainFile /etc/letsencrypt/live/gal.patheticcockroach.com/fullchain.pem
</VirtualHost>

Note that this is using Apache version 2.4.29, while the old one was using something-older-not-sure-which-one. So, “allow from all” became “Require all granted”, and some new algorithms became available. But TLS 1.3 isn’t here yet.

First, I ditched SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown. Doesn’t really impact security, but is just useless now since the cipher suites we’ll pick aren’t supported by the MSIE versions that required this tweak.

Then, I disabled all SSL protocols but TLS 1.2. A more elegant way would be SSLProtocol -all +TLSv1.2, but I just wanted to keep the list for the moment. I’m actually not even sure if Apache still supports SSL v2, or even v3.
I handpicked some of the most modern cipher suites from here and there, disabled compression and session tickets (because reasons), and added a Strict-Transport-Security header. About this last one, I believe a value of “max-age=31536000; includeSubDomains; preload” might be even better: 1) it allows preloading, and 2) I’m not sure about includeSubDomains, but I’ve seen it used in a bunch of guides.
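For reference, that stricter variant would be a one-line change in the VirtualHost above (only consider preload if you intend to submit the site to the HSTS preload list, and includeSubDomains only if every subdomain is served over HTTPS):

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"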

And that’s basically it, already. With this I’m getting an A+ on SSL Labs and in other places, most of which insist heavily on setting a very long HSTS max-age (watch out: once you set it, you have to keep maintaining the HTTPS version of your site, or people who already visited it won’t be able to access it anymore for a long while).

Last but not least, here’s a little list of services that you can use to test your HTTPS setup:
SSL Labs
HT Bridge
Cryptcheck

And here’s an even longer list, but sites other than those I already listed seem vastly inferior to me, with the exception of a few services that focus essentially on the “administrative” details of your certificate. Notably, this one will let you download the certificates that are missing from your chain, if any (it shouldn’t be useful, but it’s a fun feature still).

Posted in security, servers, web development.


An important bias to know about consumer reviews

In a previous life, I used, among other things, to search and avoid biases in scientific studies. Not the SJW kind of bias, the statistical kind of bias. Once you’ve acquired that mindset, you never fully abandon it and you just tend to casually check for biases everywhere they may exist.
A domain prone to bias is consumer reviews. If only because unhappy consumers tend to be more vocal than the happy ones. But that’s obvious and not what I’ll be writing about here.
A year and a half ago, I posted a short post about why Tomtop’s products all have high ratings. Long story short, there is what you could call a “selection bias” in the customer reviews posted on Tomtop: all the reviews below 4 stars (or maybe the threshold is 3, I don’t remember, but it’s certainly not lower) are not “selected” (well, they’re plainly never published). I don’t know if it still applies today, but you could probably find out for yourself rather easily (just look for negative reviews for a while, and stop once you find one… or get tired of not finding any). End of the introductory bias story.

A few months ago, a UPS (uninterruptible power supply) I bought about 4 years ago became faulty. I know those things don’t last forever, notably because the battery wears out, but it wasn’t a “the battery is worn-out” issue, and anyway 4 years seemed a bit too short-lived. So I thought, eh I’ll leave a review on that product, which only had a few, all very positive reviews. I was thinking something along the lines of “well, it works, but it’s probably not as reliable as you’d expect”. And I wasn’t able to: the site where I bought it (a local site named LDLC) said, in its error message, that only customers who bought the product are able to review it. They still have the bill corresponding to my purchase, which I was able to download in my customer area, but the review error message tells me I didn’t buy it.
I assume their error message is at fault there, and that they actually mean there is a time limit between the moment you make a purchase and the moment you can no longer review it. But then it means, you guessed it… there is a bias in the reviews: reviews can only be posted during a limited time after the purchase, meaning all potentially negative but important feedback about lifespan or reliability issues will be underrepresented.

So I thought okay, never mind reviewing the product, but I could still review the site that prevents me from reviewing an old purchase, right? Here comes Trustpilot, where I had already left a review about LDLC a long time ago (possibly after they suggested it, I’m not sure, but I’m really not a big user of such review sites). So I left a review on Trustpilot, explaining that LDLC didn’t let me review a product I bought long ago and which turned out to not last long. All done, or so I thought.

The following day, I received an e-mail from Trustpilot, with a rather cold and standardized note from LDLC saying (I tried to translate as faithfully as I could from French here): “For this review to be considered, an order number allowing to justify a consumer experience is required” (“Pour la prise en compte de cet avis, un numéro de commande permettant de justifier d’une expérience de consommation est requis”). It was followed by a standard message from Trustpilot, much more friendly, saying (again, translated from French, but it was easier to translate because unlike the other part it doesn’t sound robot-like): “Do you wish to send the information they asked? No pressure, you decide what you want to share.” I thought no thanks, I’m done talking to them, so I just let it slide. I got an automated follow-up 3 days later, which I let slide too.
8 days later, I received another e-mail from Trustpilot, much different this time, saying LDLC reported my review because they “don’t think I had an authentic buying or service experience in the last 12 months”. As a result, the review was unpublished, pending additional information. I guess those LDLC assholes decided to go for it and gamble I wouldn’t react, in order to try to remove an embarrassing, truthful review (you don’t need to take my word for it, you can just check for yourself by trying to rate a product you ordered more than 4 years ago). Tough luck, I sent the required info… and made this post too, because I felt I had ended up with enough material to talk a bit about bias in reviews.

Not sure what to call this bias… “time bias” maybe? To sum it up:
– LDLC doesn’t allow reviews on old purchases (I could find no information about the delay during which you’re allowed to post a review after a purchase; all I know is that reviewing my 4-year-old purchase was impossible)
– Trustpilot also doesn’t allow reviewing old experiences (“an authentic buying or service experience in the last 12 months”, they say). It is unclear how this applies to my case: I want to review an old purchase today and I can’t; my experience of not being able to review happened less than 12 months ago, but the purchase that caused me to want to write the initial review is older. They asked me to provide proof of a purchase (not just use of the service) less than 12 months old. So most likely, if I hadn’t purchased something else in that timeframe, LDLC would have been able to remove the review even though it’s related to a (dis)service that happened less than a month ago.

All in all, when reading reviews, be aware of the “time bias” that the reviewing system may introduce. For instance, if a company screws someone over a purchase they did 12 months and 1 day ago, you won’t hear about that in a Trustpilot review (except maybe if the person had a more recent purchase at the same company). If LDLC sells products that last 4 years but not 5, you won’t hear about that in a LDLC review (except if the person buys the same product again just to give it a zero?).
Surely, other sites have similar time constraints. And sadly, it may be hard to be aware of them. In this case, neither LDLC nor Trustpilot prominently states that consumers are unable to leave reviews after a certain period of time. I can understand that choice in Trustpilot’s case, as verifying old proofs can be tedious. But as for LDLC, that’s just bloody convenient to avoid products’ ratings being affected by poor mid-/long-term reliability. And in both cases, I see no valid reason why the pages presenting the reviews don’t clearly state something like: “Beware, customers can only post reviews within X months/years after their purchase”.

Avoiding biases is hard, sometimes even impossible. I would even say that you can never get rid of all biases in a study. But the right attitude is to disclose and discuss them, not to conceal them and hope no one will notice.

Posted in reviews.