
How to enable SSH (+/- root) password login

In recent Linux server distributions provided by dedicated server or VPS hosts, root login is often disabled, and so is password login. This doesn’t really improve security as long as you don’t use crappy passwords, and it’s one hell of an inconvenience. Restoring both is however easy (it’s also easy to find online, but I prefer to keep my own copy here ^^):

sudo nano /etc/ssh/sshd_config

In it, set:
PasswordAuthentication yes
PermitRootLogin yes

Then restart the SSH daemon. There are two possible commands for this:
– I’ve always used sudo /etc/init.d/ssh restart
– I’ve also seen sudo service ssh restart

Also, don’t forget sudo passwd root to set the root password 😉
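Put together, the whole procedure looks roughly like this (a sketch assuming a Debian-style system; the sed patterns are mine and handle both the commented and uncommented forms of each line):

```shell
# Enable password and root login in sshd_config
sudo sed -i -E 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i -E 's/^#?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Restart the SSH daemon (one of these two should exist)
sudo service ssh restart || sudo /etc/init.d/ssh restart
# And set the root password
sudo passwd root
```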

Posted in Linux.

Optimizing the Windows paging file

A long time ago, when RAM began to be cheap enough to stuff more than enough in any of my PCs, I started to systematically completely disable the Windows paging file, with the objective to make sure I never end up swapping. I found this all the more important when switching to SSD, where swapping sure wouldn’t be such a bad performance drag but would wear out the SSD.

However, in Windows 10 (not sure about Windows 7; as I recall, the issue at least wasn’t as visible), I noticed that programs would start crashing / refusing to run due to “low memory” way before I reached my maximum capacity. Roughly, with 32 GiB of RAM (which is a large-ish amount, but not as “huge” by today’s standards as when I started using that amount five years ago), I started getting “low memory” warnings at barely 22-24 GiB of RAM used. What the hell was wrong with Windows so that it seemed to pretend the remaining 8-10 GiB didn’t exist?

There are tons of random advice about swap file / page file management on the Internet, and most of it provides little justification for its choices. Globally, the advice ranges from “set it to at least 1.5 (or 2) times the size of your RAM” to “you should absolutely disable it”. Which recurrently leads to quite a bit of confusion. The “no page file” arguments always appeared clearer and more compelling to me (just remove it, see that things still work fine, and enjoy no disk writes)… until those mysterious errors.

Eventually, I found the answer: when a program allocates memory, Windows commits some amount of RAM, larger than what will actually be used. And if there isn’t enough commit capacity left (physical RAM plus free space on the page file) – counting by committed space rather than actually used space – it will fail with this “not enough RAM” error.

From this, my takeaway was that I should add a page file roughly the size of the difference between my real RAM (32 GiB) and the real RAM usage at which I started getting that error (~22-24 GiB), so I set it to 8 GiB. With a varying size: since that file will +/- never actually be written to, only allocated, I don’t care about fragmenting it.
As for your own strategy, I’d say:
– If you don’t care about disk writes (and don’t mind a potential slowness when you start actually using the page file), set it to some high value.
– If you have an HDD and not an SSD, use a fixed-size page file.
– If you want to minimize disk writes as much as possible, first set no page file, then use your computer normally and see when you start getting that not enough RAM error. Set your page file size to roughly the difference between your total RAM and the RAM actually used at that moment. A bit less if you really want to minimize disk writes as much as possible, a bit more if you want to be sure to be able to use all your RAM, even if it may mean you can have a few disk writes. In the former case, you can probably use a variable-size pagefile even on an HDD. In the latter, hard to decide. Why are you still using an HDD for your OS anyway? ^^
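To make that last sizing rule concrete, here’s a toy calculation (the function name is mine, purely illustrative – this isn’t a Windows API):

```typescript
// The page file only needs to cover the "commit overhead": the gap between
// physical RAM and the real usage at which "low memory" errors start appearing.
function suggestedPageFileGiB(totalRamGiB: number, usedAtFirstErrorGiB: number): number {
  return Math.max(0, totalRamGiB - usedAtFirstErrorGiB);
}

console.log(suggestedPageFileGiB(32, 24)); // 8 GiB, the value picked above
```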

Posted in Windows.

Disabling Vivaldi auto-update, and a generic list of possible auto-start locations

How to disable Vivaldi’s auto-update check

I’ve been using Vivaldi since its beta, as a backup browser for poorly designed websites that require Google Chrome’s engine, because it combines Blink with a UI that I find much better than Chrome’s. Another thing I like is the customization options, notably the control over auto-updates. Sadly, that last point has evolved a few times. And again no later than a few days ago.

Yesterday, I upgraded to Vivaldi 4. No particularly visible change for me, except that my speed dial background picture got nuked and replaced with a super bright new one (ouch, my eyes, why inflict this on people who use the dark theme? :s), and I noticed the added e-mail client and stuff, but I couldn’t care less.
Something I cared about, however, was that when I restarted my computer today, my firewall caught an unexpected piece of junk: update_notifier.exe, which I knew well from previous versions but had always kept at bay, automatically started. Without me even starting Vivaldi. What. The. Hell.
I guess at Vivaldi, just like at Google, “no” doesn’t mean “no”. It’s as if they can’t figure out what kind of nightmare it would be if every single piece of software did that crap. Imagine that: every time you start your computer, and/or every 24 hours, 300 auto-update tasks running on your computer. How lovely. How speedy. How environment-friendly.

Anyway, to disable this, the official thing to do is to go to Vivaldi’s settings, search for “update” (it used to have its own submenu; now I have no idea how to find it other than by typing its name…), click on “Show Update Settings” and uncheck “Notify About Updates”. By the way, nice way to bury the setting. I mean, there’s only ONE setting, why show/hide it via another button instead of just always showing the setting, if not to make it harder to reach?

So, this should prevent getting a notification asking you to update. But the wording made me suspicious: after all, it’s a checkbox for “notifying” about updates, not “checking” for updates. If I were a sneaky bastard like most browser vendors, I’d still run the check but just not make a popup about it. I poked around a bit, and eventually checked the Windows Task Scheduler. What a nice thing, it’s one of those “new” auto-startup places that evade good old WinPatrol’s surveillance. And frankly, I don’t remember to check it often enough. And there it was, right in the root folder, a big fat “VivaldiUpdateCheck-[bunch of random characters]” task. Still active, despite my above-mentioned unchecked auto-update box. Isn’t this nice? The only question left is how soon Vivaldi will say “fuck you” to my choices again. Time will sure tell.

So to wrap this up:
– in Vivaldi’s settings, search for “update” and uncheck “Notify About Updates”
– open Windows’s Task Scheduler and disable (or delete, I guess that works too – unless Vivaldi then auto-restores it) the VivaldiUpdateCheck task
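If you prefer the command line over the Task Scheduler UI, something like this should do the trick from an elevated command prompt (the exact task name varies per install, hence the query first; shown for reference, not runnable here):

```
schtasks /Query /FO LIST | findstr /I "VivaldiUpdateCheck"
schtasks /Change /TN "VivaldiUpdateCheck-[bunch of random characters]" /Disable
```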

Where to check for programs that automatically run at startup

That little issue prompted me to do a more global checkup; more specifically, I launched WinPatrol and looked at what it listed. I found a few things that I wanted to remove, but WinPatrol failed to remove them (apparently its feature to remove entries just doesn’t work – maybe a Windows 10 thing), so I went there manually and made a list.
Here are some common locations used to make a program run at startup (PS: thanks Microsoft for making such a mess – would it kill you to be more user-friendly and pick a single location?):

In regedit (type “regedit” in the start menu), the usual Run keys:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Run (on 64-bit Windows, for 32-bit programs)
(NB: all these have a sibling “RunOnce”, for running only once, which I guess should generally be empty)

The Startup folder of the Start Menu. Note a little trick: there are 2 of those, one for the current user, and one for everyone:
C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp
C:\Users\[username]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup, aka %appdata%\Microsoft\Windows\Start Menu\Programs\Startup

The Task Scheduler (type “task scheduler” in the start menu), which contains both tasks that can run at any time and tasks that run at startup (also, I believe a task that was scheduled to run while the computer was off will in most cases run as soon as it’s powered on)

The Services (type “services” in the start menu), which can be started automatically or “manually”, or disabled. The latter completely prevents a service from running. “Manual” allows it to get triggered by “something” else, and on numerous occasions I’ve found that “manual” start services would still end up running despite me not realizing I ever launched them. So I tend to be heavy-handed with the “disabled” option, and switch back to manual if I notice I broke something. Note that there are many system services (and quite a few other legit services, notably drivers), so don’t just go and disable everything, that would most likely make a big mess.

To wrap up this part, here’s a partial list of what I busted today hiding there:
– Adobe ARM in some regedit Run key
– Adobe Update in Services
– a CorsairGamingAudioConfig service (but I don’t have any Corsair audio hardware…)
– Vivaldi in the Task Scheduler (as said above)
– Intel Telemetry in Task Scheduler (pretty sure I deleted it already earlier, I guess that crap adds itself back every single time you run Intel XTU)
– a couple of Xbox Live tasks (XblGameSaveTask & XblGameSaveTaskLogon – no kidding)

Posted in Windows.

aToad #29: FileSeek

Freeware to search in huge text files

Big text files are a recurrent issue for me. Not that I deal with them often, but when I do, they are indeed huge, and the tools I usually use, such as Visual Studio Code or Notepad++, just can’t deal with them.

I eventually found a blog post listing a bunch of software supposedly able to cope with large text files. Some have apparently vanished, some can open the files but can’t really search in them. The one I found best for my needs was FileSeek, which can’t open the files but can search in them (and display a bit of content around the found text – the whole line, actually – which was pretty much what I wanted).

FileSeek free version, searching in multiple SQL backup files

On the minus side, it’s neither open source nor free. On the plus side, it does have a free version, with very few limitations compared to the paid one (I don’t even notice them). Also it’s able to search in multiple files at once, a feature I didn’t need at first but which turned out to be quite convenient.

Posted in A Tool A Day.

aToad #28: ReqBin

Online API testing tool

The last tool I wrote about was a desktop API testing tool, ARC; well, here’s an online one this time, if you don’t want to bother with running software locally. ReqBin allows you to send pretty much any HTTP request you want, from their website, with their IP (you get to choose between, at the moment, 2 server locations: US or Germany).

Compared to ARC, I find it more convenient if I need to just run one quick request. Notably, the UI is more comfortable IMO. But if I want to run (and save) a bunch, I stick with ARC.
A big thing to be aware of is that requests go through their servers (as I already wrote above). It’s convenient to hide your IP, but it also means that whatever you send (API keys…), you send to them first. It’s up to you to see if you want to use production keys there…

Another issue is that, like Postman, they add a bunch of extra headers compared to your original request. Including your user agent (yup, they hide your IP, but not your browser). For instance, here are the headers in a request I made to httpbin (which is another nice service, that will mirror whatever request you send them) with ARC:
"headers": {
"Content-Length": "0",
"Host": "",
"X-Amzn-Trace-Id": "xxxx"

And here are the headers from the same request made using ReqBin:
"headers": {
"Accept": "application/json",
"Accept-Encoding": "deflate, gzip",
"Content-Length": "0",
"Content-Type": "application/json",
"Host": "",
"User-Agent": "xxxx",
"X-Amzn-Trace-Id": "xxxx"

As you can see, 4 extra junk headers… “Content-Length” and “Host” appear unavoidable, and “X-Amzn-Trace-Id” seems to be added by httpbin.
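For the record, a quick diff of the two header dumps above confirms that count (a toy TypeScript snippet):

```typescript
// Header names observed with ARC vs with ReqBin, from the dumps above
const arcHeaders = ["Content-Length", "Host", "X-Amzn-Trace-Id"];
const reqbinHeaders = [
  "Accept", "Accept-Encoding", "Content-Length",
  "Content-Type", "Host", "User-Agent", "X-Amzn-Trace-Id"
];

const extra = reqbinHeaders.filter((h) => !arcHeaders.includes(h));
console.log(extra); // ["Accept", "Accept-Encoding", "Content-Type", "User-Agent"]
```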

So well, keeping those little caveats in mind, this service can be useful on occasions.

Posted in A Tool A Day.

SumatraPDF dark mode

SumatraPDF is a lightweight PDF reader (here). To give you an idea, its installer is less than 10 MB (around 5 MB for version 3.1.2, which I still use due to a bug in 3.2, and 9 MB for version 3.2), where Adobe Acrobat Reader’s installer is over 150 MB.

I wanted to read some books in it, but the (usual) white background is a bit hard on the eyes. SumatraPDF doesn’t provide a dark mode as-is, however it provides a way to customize a lot of things in its UI… including background and font colors. Long story short, you can make your own dark mode. Quite easily, if you have some very basic knowledge of HTML colors codes: go to Sumatra’s menu, then Settings -> Advanced Options. This will open a text file, SumatraPDF-settings.txt, containing a bunch of settings (possibly all?), and even your recent file history, if you chose to keep it.

In this file, go to the FixedPageUI section (which should be quite close to the top of the list). Then edit TextColor and BackgroundColor at will. I chose #cccccc for text and #333333 for background. Note that 3-letter short color codes (like #ccc) won’t work here (yes, I tried, I’m that lazy ^^). In Sumatra 3.2, you also have GradientColors, which is the color of the windows background outside of the page, but in Sumatra 3.1.2 that setting is absent (and I tried adding it and it didn’t work).
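For reference, the relevant block in SumatraPDF-settings.txt should end up looking roughly like this with my dark values (other fields of that section left out):

```
FixedPageUI [
	TextColor = #cccccc
	BackgroundColor = #333333
]
```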

The settings should apply as soon as you save the file. If not, I guess you can always restart Sumatra.

Note that this does have an annoying side effect: if the PDF you read contains pictures with a transparent background, they’ll probably look very weird. I don’t think there’s a fix for this, so I just skip the pictures, or if I really must, I put the text and background color back to respectively black and white.

Update (2021-05-03): “pale” mode

As I mentioned above, this dark mode is often problematic when viewing pictures. I also had, on occasion, text that would remain black over my dark background, making it impossible to read.
So I eventually gave a shot to another configuration, this time using some kind of dark-ish grey for text and some not-as-dark grey for background. This results in a pale but globally eye-relaxing (IMO) theme, although I noticed its lack of contrast forces me to zoom in a bit more to read comfortably. The big plus is that I haven’t encountered any text or picture rendered totally unreadable by this setup. My values are:
TextColor = #333333
BackgroundColor = #bbbbbb

Posted in Uncategorized.

vsftpd quick installation cheat sheet

I recently had to set up an FTP server. I know right, who still uses FTP nowadays? Well apparently, some big people still do, and switching them to SFTP wasn’t an option. Luckily, I had an old self-made documentation from 2013 on how to set up all my server things, which at the time did include an FTP server, vsftpd. A quick search showed me that it still was the go-to software for this, so hurray, and here is what it said:

apt-get install vsftpd
Config file: /etc/vsftpd.conf
In this config file, uncomment the lines local_umask=022 and write_enable=YES.
At the end, add:

Command to restart: service vsftpd restart
guide: (RIP :/)

I suppose some things had changed, as this left me with a couple of errors/warnings.

First, I got an error message saying “vsftpd: refusing to run with writable root inside chroot()”. My quick fix was to add this to the above-mentioned config file:
allow_writeable_chroot=YES
But for more details, you may want to read this

Second, I got a message in FileZilla saying “Server sent passive reply with unroutable address. Using server address instead”. As the message suggests, it’s not breaking for FileZilla, which still managed to connect. However, it’s a problem for some clients. My final fix was to add this to the config:
pasv_enable=YES
pasv_min_port=[port]
pasv_max_port=[port]
#pasv_address=[server IP]

Turns out passive mode wasn’t enabled by default in my case; pasv_enable solves that.
Then I had a firewall issue, as I used to believe FTP uses just port 21, but I learned on this occasion that passive mode will automatically use a random port between pasv_min_port and pasv_max_port. Since the server where I’m setting this up is behind a paranoid firewall and I have to open ports one by one, I set them both to the same value. Not sure what the implications are compared to multiple random ports.
The last line I just kept commented out for safe-keeping, as I found it as a possible solution but it turned out it didn’t help, and I found there that it seems best to keep it unset.

And that’s about it, all working now. Although it could probably use some security tweaks. My priority here was “just make that damned thing work”.

Update 2021-03-31

It was brought to my attention that FTP needs one port per concurrent transfer, so having pasv_min_port = pasv_max_port means the server will only accept one concurrent transfer. Good enough for my use case, but you may want to keep a wider range for yours.

Posted in FTP, servers.

Github got a dark mode, yay :)

Since it’s been ages since I last posted, and since I don’t really have any idea in the pipeline (nor any time to find one), I thought this would be a nice way to say I’m still alive ^^

So, Github finally got a dark theme. You can enable it in your account settings, under Appearance.

If like me, you’ve been using Dark Reader, it won’t make much of a visual difference, but still, I find Dark Reader to be tremendously detrimental to browser performance, so any time I can disable it and replace it with a native dark mode is very, very much appreciated.

Posted in Internet, programming.

TIL WebRTC fully bypasses SOCKS proxies

I knew WebRTC could (or rather, would) leak your real IP if you were trying to hide it behind a SOCKS proxy (or even behind a VPN), and this is why I disabled it in my main browsing profiles. But with the world-wide Covid confinement and all, I’ve had to use it quite a bit more (yay, let’s not use Mumble, let’s use Google freaking Hangout/Meet…), so what was bound to happen happened: while in a meeting, I eventually had connection trouble. Not a total connection loss, though, just interruptions short enough to cut off my SOCKS proxies while keeping everything else running fine.

And it struck me: despite my SOCKS proxies being all suddenly disconnected, the meeting went on. The freaking WebRTC had been ignoring the freaking proxy all the time. So it doesn’t just leak your IP here and there, it “leaks” it (not sure I should say “leaks” rather than just “uses”) constantly. Yikes. Not that it matters much, but still WTF. What a crappy protocol, thanks W3C (and all the creeps that work on or advocate for that plague).

Maybe a setting that would prevent this in Firefox:
No idea why the hell they don’t enable it by default though.

Posted in privacy.

Migrating from request to got (part 2)

As I said in a previous post, I recently had to ditch request. I found out about got as a replacement because it was mentioned a couple of times in the request deprecation ticket.

One of the first things I liked about got is that they provide a pretty extensive comparison table between them and alternatives. To be honest, when reading this table I was first drawn to ky, as I’m always interested in limiting dependencies (and dependencies of dependencies) as well as package size. But ky is browser-side only. Damn.

So I had a quick look at got’s migration guide, and it seemed easy enough (plus it had promises, yay). You’ll probably find it’s a great place to start, and maybe it will even be all you need. I still had to dig a bit more for a few things though.

The first one is that, by default, got will throw an exception whenever the response HTTP code isn’t a success code (2xx). It might sound like perfectly normal behavior to you, but I was using at least one API that would return a 404 code during normal operations (looking at you, MailChimp), and I certainly didn’t expect that to throw an exception!
The solution to that (apart from rewriting your code to deal with exceptions in cases where you used to deal with return data another way) is to set throwHttpErrors to false in your request parameters (cf example code at the end of this post).

The second one was to get binary data, but actually in this case I find got much clearer than request: in order to get binary data with request, you need to set encoding to null in the parameters. With got, you’ll replace that with responseType: "buffer".

Some other small things: the typings (@types/got) are better than request’s, at least concerning the request parameters, which means you’ll probably have to cast some fields (for instance, method isn’t a string but a "GET" | "POST" | "PUT" ...). And the timeout is, as I understood it, different in got and request: in request, it’s a timeout before the server starts responding, while in got I believe it’s the timeout before the server finishes responding. Interesting for me, as I sometimes use request/got to download files, and I’m interested in dealing with slow download speeds if they happen to occur.

Finally, here are a couple of functions as they were with request and how they became with got:

function requestFile(url: string): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: 'GET',
      url: url,
      headers: {
        'Content-Type': 'binary/octet-stream'
      },
      // Note: to get binary data with request, encoding must be set to null
      encoding: null,
      // todo: find a way to have timeout applied to read and not just connection
      timeout: 2_000
    };
    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        resolve(body);
      }
    });
  });
}
async function requestFile(url: string): Promise<Buffer> {
  const reqParameters = {
    method: <'GET'>'GET',
    headers: {
      'Content-Type': 'binary/octet-stream'
    },
    responseType: <'buffer'>'buffer',
    timeout: 3_000,
    throwHttpErrors: true
  };
  const response = await got(url, reqParameters);
  return response.body;
}

function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray): Promise<any> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: method,
      url: this.API_ROOT + route,
      body: body ? JSON.stringify(body) : null,
      headers: <{'API-Authorization': string; [key: string]: string}>{
        'API-Authorization': this.apiKey
      }
    };
    if (headers) {
      // headersArray is assumed to be a list of {name, value} pairs
      for (const h of headers) {
        requestParameters.headers[h.name] = h.value;
      }
    }
    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        const parsedBody = JSON.parse(body);
        let res: any;
        // "data" is an assumption here: this API wraps its results in a data field
        if (parsedBody.data) res = parsedBody.data;
        else res = parsedBody;
        resolve(res);
      }
    });
  });
}

async function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray
): Promise<any> {
  const reqUrl = this.API_ROOT + route;
  let reqParameters = {
    method: method,
    headers: <{[key: string]: string; 'API-Authorization': string}>{
      'API-Authorization': this.apiKey
    },
    body: body ? JSON.stringify(body) : undefined,
    retry: 0,
    responseType: <'text'>'text',
    throwHttpErrors: false // seriously what a shitty default
  };
  if (headers) {
    // headersArray is assumed to be a list of {name, value} pairs
    for (const h of headers) {
      reqParameters.headers[h.name] = h.value;
    }
  }
  const response = await got(reqUrl, reqParameters);
  const parsedBody = JSON.parse(response.body);
  let res: any;
  // "data" is an assumption here: this API wraps its results in a data field
  if (parsedBody.data) res = parsedBody.data;
  else res = parsedBody;
  return res;
}

Posted in JavaScript / TypeScript / Node.js, programming, web development.