
Livedrive: a backup disaster story

About 8 years ago, I posted about large and cheap storage/backup solutions, and around that time I subscribed to my favorite pick from that list. It seemed pretty great: lots of space for manual storage and “unlimited” space for automatic backups (NB: “unlimited” for automatic backups is actually quite limited, capped by the storage size of your backed-up computers plus the limited set of folders you choose to back up). All good, so far.

It all began with a payment processing issue…

Fast forward 8 years. I’m a bit late for my renewal, as usual. Well, not late-late, but “just a few days before the deadline” late, because I use single-use debit cards (“e-cards”), and the core concept of those is “you create it, you use it immediately”.
And for the first time in 8 years, the payment is rejected. Not “not enough funds” rejected, not “bank declined for some reason” rejected, just “we failed to process your payment, try again” rejected. I tried lots of things: other e-cards, even physical cards, completing my billing address (which had been partial from the beginning without ever being an issue), etc. Nothing worked.
Contacted support and waited.

… but support was unresponsive

It was Friday, more or less in the middle of the day… You guessed it, they didn’t reply before the weekend. Yikes.
But as we’ll see later on, it turned out to be a blessing in disguise.

So I waited, during the weekend.
But as we’ll see later on, that was a bit stupid.

Monday morning now. The last paid day of my current subscription. Getting a bit nervous there…
Midday. Getting a bad feeling about this. If you have a customer writing to you because they want to pay you, you wouldn’t leave them hanging, would you? I start thinking about holidays for some reason, so I look a bit online, and…
Spring Bank Holiday
Bloody Hell. Hurray for a UK company 😡

Made in Britain (from The IT Crowd)

So I started recovering my backup…

Sounds like a good time to start panicking. I assume they have a grace period, but I decide to download as much of my backup as I can. A quick calculation shows I won’t have time to download everything before the end of the day (told you waiting had been a bit stupid). Great, now I have to pick. Good thing I mostly have large files: downloading the first few will give me plenty of time to think about the next ones. I start downloading.

And this is where things go incredibly south. Let’s say nothing about SFTP being so unusably slow that I had to use FTP (“Welcome to 2001… wait, aren’t we in 2021?”), and let a picture be worth a thousand words. Or a few thousand files:


… only to find it was largely unrecoverable

Did you see it? No? Look at the bottom. Still no? Bottom-left. That’s right: out of 3944 total files, 262 failed! That’s over 6.6% (almost 6.66%, the number of the Beast 👀). And I retried a bunch of them: it didn’t change a thing. A lost file was a lost file. For. Fuck’s. Sake.

I also gave downloading from the web interface a try. Admire how a 10 GB file turns into a 0-byte file, without the slightest error message. “Everything normal, SNAFU”. Splendid:

Systems normal - all fucked up

Fun fact: file size didn’t seem to matter in how likely a file was to be lost. If random clusters were getting lost, dooming whichever large file they were part of, I would expect to lose almost all my huge files and not nearly as many small files. But no: I lost many “small” files (a few MB) and not that many “huge” files (more than 5 GB). I seriously wonder how the hell you lose random files rather than random clusters… My best guess is 1) no data replication at all and 2) each file placed on a single, random hard drive (independently of upload time, as files I had uploaded together didn’t necessarily have the same conservation status).

So I left

From that moment on, I was obviously determined not to renew my subscription. So I kept downloading, even faster. I actually managed to get back most of my data, as my account wasn’t cut off at midnight but rather late in the following afternoon. Had I let it run overnight (silly me… :/), I would probably have gotten everything back. That is, everything that hadn’t already been lost.

So, as I said earlier, that payment failure was a blessing in disguise, since it made me realize I was paying for a backup that pretty much had no value. 168€/year for 5TB of storage (plus the automated backups, which I haven’t replaced yet).

Let’s talk a bit about what happened afterwards. After the account was suspended, I regularly received e-mail reminders: within 1-2 hours of suspension, then at days 5, 10 and 20, and at 1 month. That last e-mail mentioned it was a final reminder and that the account would be erased after 30 days. I logged into my account just 4 weeks after receiving that e-mail, and it was indeed still there (suspended, with just a payment form to reactivate it).
All in all, they give the impression of an honest company doing its best to make sure data doesn’t get wiped by accident (although 2 months isn’t that long, I guess they can’t keep the data forever either; it does have a cost). Only their best is far, far below reasonable expectations as far as data integrity goes.

A new dawn

Moving on, I’m currently giving Backblaze a spin. I had been very hesitant about it for a while, notably because of their unusual transfer protocol: they use their own. Cyberduck can handle it, but that’s pretty much it as far as open-source clients go, I believe.
Still, at $0.005/GB/month, for the price of Livedrive I get 2.8TB of pure storage, with lots of redundancy this time. Note that they bill for lots of little things: if you know AWS S3, Azure or other “cloud” providers, it’s much the same concept, every single action / API call is billable. In Backblaze’s case, there’s a free daily allowance that should help you avoid most of these little extra costs, if you spread your activity regularly rather than doing tons of things one day and then nothing for ages. And there’s another billable item, a big one this time: downloads, at $0.01/GB (1GB free per day). That can pile up pretty quickly: taking my emergency escape from Livedrive as an example, downloading 2TB would cost $20. Which is both “not nothing” and “not that much”. Better than losing 5%+ of my files, anyway.
Another nice thing with Backblaze is that it gives me an incentive to minimize my storage, aka to stop freaking hoarding ^^. Livedrive’s pricing was, on the contrary, an incentive to hover around 5TB…
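To put numbers on that, here’s a back-of-the-envelope sketch of those two costs (rates as quoted above; they may have changed since, so check Backblaze’s current pricing before relying on this):

```typescript
// B2 rates as quoted in this post (subject to change).
const STORAGE_PER_GB_MONTH = 0.005; // $ per GB stored, per month
const DOWNLOAD_PER_GB = 0.01;       // $ per GB downloaded (first GB/day free)

function monthlyStorageCost(gb: number): number {
  return gb * STORAGE_PER_GB_MONTH;
}

function downloadCost(gb: number): number {
  return gb * DOWNLOAD_PER_GB;
}

// ~2.8TB stored is ~$14/month; an emergency 2TB pull is ~$20.
console.log(monthlyStorageCost(2800), downloadCost(2000));
```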

Posted in backups.

aToad #29: FileSeek

Freeware to search in huge text files

Big text files are a recurrent issue for me. Not that I deal with them often, but when I do, they are indeed huge, and the tools I usually use, such as Visual Studio Code or Notepad++, just can’t deal with them.

I eventually found a blog post listing a bunch of software supposedly able to cope with large text files. Some have apparently vanished; some can open the files but can’t really search in them. The one I found best for my needs was FileSeek, which can’t open the files but can search in them (and displays a bit of content around the found text, the whole line actually, which was pretty much what I wanted).

FileSeek free version, searching in multiple SQL backup files

On the minus side, it’s not open source, and the full version is paid. On the plus side, there is a free version, with very few limitations compared to the paid one (I don’t even notice them). It’s also able to search in multiple files at once, a feature I didn’t need at first but which turned out to be quite convenient.

Posted in A Tool A Day.

aToad #28: ReqBin

Online API testing tool

The last tool I wrote about was ARC, a desktop API testing tool; here’s an online one this time, if you don’t want to bother with running software locally. ReqBin allows you to send pretty much any HTTP request you want, from their website, with their IP (you currently get to choose between 2 server locations: US or Germany).

Compared to ARC, I find it more convenient if I need to just run one quick request. Notably, the UI is more comfortable IMO. But if I want to run (and save) a bunch, I stick with ARC.
A big thing to be aware of is that requests go through their servers (as I already wrote above). That’s convenient for hiding your IP, but it also means that whatever you send (API keys…), you send to them first. It’s up to you to decide whether you want to use production keys there…

Another issue is that, like Postman, they add a bunch of extra headers to your original request. Including your user agent (yup, they hide your IP, but not your browser). For instance, here are the headers in a request I made with ARC to httpbin (another nice service, which mirrors back whatever request you send it):
"headers": {
  "Content-Length": "0",
  "Host": "",
  "X-Amzn-Trace-Id": "xxxx"
}

And here are the headers from the same request made using ReqBin:
"headers": {
  "Accept": "application/json",
  "Accept-Encoding": "deflate, gzip",
  "Content-Length": "0",
  "Content-Type": "application/json",
  "Host": "",
  "User-Agent": "xxxx",
  "X-Amzn-Trace-Id": "xxxx"
}

As you can see, 4 extra junk headers (Accept, Accept-Encoding, Content-Type and User-Agent)… “Content-Length” and “Host” appear unavoidable, and “X-Amzn-Trace-Id” seems to be added by httpbin.

So, keeping those little caveats in mind, this service can be useful on occasion.

Posted in A Tool A Day.

SumatraPDF dark mode

SumatraPDF is a lightweight PDF reader (here). To give you an idea, its installer is less than 10 MB (around 5 MB for version 3.1.2, which I still use due to a bug in 3.2, and 9 MB for version 3.2), whereas Adobe Acrobat Reader’s installer is over 150 MB.

I wanted to read some books in it, but the (usual) white background is a bit hard on the eyes. SumatraPDF doesn’t provide a dark mode as-is, but it does provide a way to customize a lot of things in its UI… including background and font colors. Long story short, you can make your own dark mode. Quite easily, if you have some very basic knowledge of HTML color codes: go to Sumatra’s menu, then Settings -> Advanced Options. This opens a text file, SumatraPDF-settings.txt, containing a bunch of settings (possibly all of them?), and even your recent file history, if you chose to keep it.

In this file, go to the FixedPageUI section (which should be quite close to the top). Then edit TextColor and BackgroundColor at will; I chose #cccccc for text and #333333 for background. Note that 3-letter short color codes (like #ccc) won’t work here (yes, I tried, I’m that lazy ^^). In Sumatra 3.2 you also have GradientColors, the color of the window background outside of the page, but in Sumatra 3.1.2 that setting is absent (I tried adding it and it didn’t work).
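For reference, the relevant block of SumatraPDF-settings.txt would look something like this with my values (surrounding settings omitted; the exact layout may vary slightly between versions):

```
FixedPageUI [
	TextColor = #cccccc
	BackgroundColor = #333333
]
```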

The settings should apply as soon as you save the file. If not, I guess you can always restart Sumatra.

Note that this does have an annoying side effect: if the PDF you read contains pictures with a transparent background, they’ll probably look very weird. I don’t think there’s a fix for this, so I just skip the pictures or, if I really must, I temporarily set the text and background colors back to black and white respectively.

Update (2021-05-03): “pale” mode

As I mentioned above, this dark mode is often problematic when viewing pictures. I also had, on occasion, text that would remain black over my black background, making it impossible to read.
So I eventually gave another configuration a shot, this time using a darkish grey for text and a not-as-dark grey for background. The result is a pale but globally eye-relaxing (IMO) theme, although its lack of contrast forces me to zoom in a bit more to read comfortably. The big plus is that I haven’t encountered any text or picture rendered totally unreadable by this setup. My values are:
TextColor = #333333
BackgroundColor = #bbbbbb

Posted in Uncategorized.

vsftpd quick installation cheat sheet

I recently had to set up an FTP server. I know, right, who still uses FTP nowadays? Well apparently some big players still do, and switching them to SFTP wasn’t an option. Luckily, I had some old self-made documentation from 2013 on how to set up all my server things, which at the time included an FTP server, vsftpd. A quick search showed it’s still the go-to software for this, so hurray; here is what my notes said:

apt-get install vsftpd
Config file: /etc/vsftpd.conf
In this config file, uncomment the lines local_umask=022 and write_enable=YES.
At the end, add:

Command to restart: service vsftpd restart
guide: (RIP :/)

I suppose some things had changed, as this left me with a couple of errors/warnings.

First, I got an error message saying “vsftpd: refusing to run with writable root inside chroot()”. My quick fix was to add this to the above-mentioned config file:
But for more details, you may want to read this
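For the record, the usual fix for that error in vsftpd 3.x is the allow_writeable_chroot option (note: I’m filling this in from memory of the standard fix, not from the original note):

```
allow_writeable_chroot=YES
```

This tells vsftpd to tolerate a writable chroot root; the stricter alternative is to make the root directory non-writable.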

Second, I got a message in FileZilla saying “Server sent passive reply with unroutable address. Using server address instead”. As the message suggests, it’s not breaking for FileZilla, which still managed to connect. However, it’s a problem for some clients. My final fix was to add this to the config:
#pasv_address=[server IP]

Turns out passive mode wasn’t enabled by default in my case, pasv_enable solves that.
Then I had a firewall issue, as I used to believe FTP uses just port 21, but I learned on this occasion that passive mode will automatically use a random port between pasv_min_port and pasv_max_port. Since the server where I’m setting this up is behind a paranoid firewall and I have to open ports one by one, I set them both to the same value. Not sure what the implications are compared to multiple random ports.
The last line I just kept commented out for safekeeping: I had found it suggested as a possible solution, but it turned out not to help, and I read there that it seems best to keep it unset.
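Putting the pieces together, the passive-mode additions to vsftpd.conf presumably looked something like this (the port number is an arbitrary example; pick whatever your firewall allows):

```
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40000
#pasv_address=[server IP]
```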

And that’s about it, all working now. Although it could probably use some security tweaks. My priority here was “just make that damned thing work”.

Update 2021-03-31

It was brought to my attention that FTP needs one port per concurrent transfer, so having pasv_min_port = pasv_max_port means the server will only accept one concurrent transfer. Good enough for my use case, but you may want to keep a wider range for yours.

Update 2021-09-03

Coming back to this machine half a year later, I got that “Server sent passive reply with unroutable address. Using server address instead” message from Hell again. Despite no change in the config.
So I dug up some more, and eventually found this. Long story short, my pasv_address was IPv4, so I had to set listen_ipv6=NO and listen=YES (the default is the other way around, for some strange reason).
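In config terms, that means:

```
listen=YES
listen_ipv6=NO
```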
Problem solved (again). Until it reappears again?

Posted in FTP, servers.

GitHub got a dark mode, yay :)

Since it’s been ages since I last posted, and since I don’t really have any ideas in the pipeline (nor any time to find any), I thought this would be a nice way to say I’m still alive ^^

So, GitHub finally got a dark theme. You can enable it there:

If, like me, you’ve been using Dark Reader, it won’t make much of a visual difference. Still, I find Dark Reader tremendously detrimental to browser performance, so any time I can disable it in favor of a native dark mode is very, very much appreciated.

Posted in Internet, programming.

TIL WebRTC fully bypasses SOCKS proxies

I knew WebRTC could (or rather, would) leak your real IP if you were trying to hide it behind a SOCKS proxy (or even behind a VPN), which is why I disabled it in my main browsing profiles. But with the worldwide Covid confinement and all, I’ve had to use it quite a bit more (yay, let’s not use Mumble, let’s use Google freaking Hangouts/Meet…), so what was bound to happen happened: while in a meeting, I eventually had connection trouble. Not a total connection loss, though, just interruptions short enough to cut off my SOCKS proxies while keeping everything else running fine.

And it struck me: despite my SOCKS proxies all being suddenly disconnected, the meeting went on. Freaking WebRTC had been ignoring the freaking proxy the whole time. So it doesn’t just leak your IP here and there, it “leaks” it (not sure I should say “leaks” rather than just “uses”) constantly. Yikes. Not that it matters much, but still, WTF. What a crappy protocol; thanks, W3C (and all the creeps who work on or advocate for that plague).

Maybe a setting that would prevent this in Firefox:
No idea why the hell they don’t enable it by default though.
(cf also

Posted in privacy.

Migrating from request to got (part 2)

As I said in a previous post, I recently had to ditch request. I found out about got as a replacement because it was mentioned a couple of times in the request deprecation ticket.

One of the first things I liked about got is that they provide a pretty extensive comparison table between them and alternatives. To be honest, when reading this table I was first drawn to ky, as I’m always interested in limiting dependencies (and dependencies of dependencies) as well as package size. But ky is browser-side only. Damn.

So I had a quick look at got’s migration guide, and it seemed easy enough (plus it had promises, yay). You’ll probably find it’s a great place to start, and maybe it will even be all you need. I still had to dig a bit more for a few things though.

The first one is that, by default, got throws an exception whenever the response HTTP code isn’t a success code. That might sound like perfectly normal behavior to you, but I was using at least one API that returns a 404 code during normal operations (looking at you, MailChimp), and I certainly didn’t expect that to throw an exception!
The solution (apart from rewriting your code to deal with exceptions in cases where you used to deal with return data another way) is to set throwHttpErrors to false in your request parameters (cf. the example code at the end of this post).

The second one was getting binary data, but in this case I actually find got much clearer than request: to get binary data with request, you need to set encoding to null in the parameters; with got, you replace that with responseType: "buffer".

Some other small things: the typings (@types/got) are stricter than request’s, at least concerning the request parameters, which means you’ll probably have to cast some fields (for instance, method isn’t a string but a "GET" | "POST" | "PUT" ...). And the timeout is, as I understand it, different in got and request: in request, it’s a timeout before the server starts responding, while in got I believe it’s a timeout before the server finishes responding. Interesting for me, as I sometimes use request/got to download files and want to deal with slow download speeds when they occur.

Finally, here are a couple of functions as they were with request and how they became with got:

function requestFile(url: string): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: 'GET',
      url: url,
      headers: {
        'Content-Type': 'binary/octet-stream'
      },
      // *Note:* if you expect binary data, you should set encoding: null
      encoding: null,
      // todo: find a way to have timeout applied to read and not just connection
      timeout: 2_000
    };
    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        resolve(body);
      }
    });
  });
}

async function requestFile(url: string): Promise<Buffer> {
  const reqParameters = {
    method: <'GET'>'GET',
    headers: {
      'Content-Type': 'binary/octet-stream'
    },
    responseType: <'buffer'>'buffer',
    timeout: 3_000,
    throwHttpErrors: true
  };
  const response = await got(url, reqParameters);
  return response.body;
}

function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray): Promise<any> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: method,
      url: this.API_ROOT + route,
      body: body ? JSON.stringify(body) : null,
      headers: <{'API-Authorization': string; [key: string]: string}>{
        'API-Authorization': this.apiKey
      }
    };
    if (headers) {
      for (const h of headers) {
        requestParameters.headers[] = h.value;
      }
    }

    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        const parsedBody = JSON.parse(body);
        let res: any;
        if ( res =;
        else res = parsedBody;
        resolve(res);
      }
    });
  });
}

async function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray
): Promise<any> {
  const reqUrl = this.API_ROOT + route;
  let reqParameters = {
    method: method,
    headers: <{[key: string]: string; 'API-Authorization': string}>{
      'API-Authorization': this.apiKey
    },
    body: body ? JSON.stringify(body) : undefined,
    retry: 0,
    responseType: <'text'>'text',
    throwHttpErrors: false // seriously what a shitty default
  };
  if (headers) {
    for (const h of headers) {
      reqParameters.headers[] = h.value;
    }
  }

  const response = await got(reqUrl, reqParameters);
  const parsedBody = JSON.parse(response.body);
  let res: any;
  if ( res =;
  else res = parsedBody;
  return res;
}

Posted in JavaScript / TypeScript / Node.js, programming, web development.

Migrating from tslint to eslint and from request to got (part 1)

Last month was unexpectedly busy, update-wise, as 2 major dependencies I use for work-related projects suddenly announced, barely a few days apart, that they were being discontinued. The first was tslint, which probably every TypeScript developer has at least heard of once (~3.4M weekly downloads at the moment). The second was request, which probably every Node developer who’s ever had to make an HTTP request has already heard of.

Both projects had become pretty quiet, update-wise, so I probably could have seen this coming had I paid closer attention. Anyway, replacements were easy to find: tslint clearly suggested eslint; request didn’t suggest a particular replacement, but got and a few others seemed obviously widespread.

Part 1: From tslint to eslint

I installed eslint globally, which isn’t recommended for some reason but works perfectly fine for me. You’ll most likely also need a couple of plugins for TypeScript, namely:
npm i -g eslint @typescript-eslint/eslint-plugin @typescript-eslint/parser

A key tool in the migration was tslint-to-eslint-config. Running it is as simple as npx tslint-to-eslint-config (after, if you don’t have it already, npm install -g npx). It produces an .eslintrc.js file that’s a good basis for your new eslint rules. Except it’s in JavaScript and I definitely wanted JSON. I also created a base config file from eslint itself, via the eslint --init mentioned in Getting Started. And then I basically combined the two.

Finally, I went through my code with the new rules, either fixing the new violations or adjusting rules for them. In the end, I couldn’t match my tslint rules perfectly, but only about 20 lines (in a ~30k-line codebase) required a change I was unhappy with, mostly around indenting and typecasting.

Last but not least, here are my config files:

Old tslint.json:

{
  "defaultSeverity": "warning",
  "extends": [
  ],
  "jsRules": {},
  "rules": {
    "array-type": [true, "array"],
    "arrow-parens": [true, "ban-single-arg-parens"],
    "curly": [true, "ignore-same-line"],
    "eofline": true,
    "max-classes-per-file": [true, 3],
    "max-line-length": [true, {"limit": 140, "ignore-pattern": "^import |^export {(.*?)}|class [a-zA-Z] implements |//"}],
    "no-angle-bracket-type-assertion": false,
    "no-consecutive-blank-lines": [true, 2],
    "no-console": false,
    "no-empty": false,
    "no-shadowed-variable": false,
    "no-string-literal": true,
    "no-string-throw": true,
    "no-trailing-whitespace": true,
    "object-literal-key-quotes": [true, "as-needed"],
    "object-literal-shorthand": [true, "never"],
    "object-literal-sort-keys": false,
    "one-line": [true, "check-catch", "check-finally", "check-else", "check-open-brace"],
    "only-arrow-functions": [true, "allow-named-functions"],
    "prefer-const": false,
    "quotemark": [true, "single", "avoid-escape"],
    "semicolon": [true, "always"],
    "trailing-comma": [true, {"multiline": "never", "singleline": "never"}],
    "triple-equals": [true, "allow-null-check", "allow-undefined-check"],
    "variable-name": [true, "ban-keywords", "check-format", "allow-leading-underscore"],
    "whitespace": [true, "check-branch", "check-decl", "check-operator", "check-module",
      "check-separator", "check-rest-spread", "check-type", "check-type-operator", "check-preblock"]
  },
  "rulesDirectory": []
}

New .eslintrc.json (yes, it’s a hell of a lot bigger):

{
  "env": {
    "es6": true,
    "node": true
  },
  "extends": [
  ],
  "globals": {
    "Atomics": "readonly",
    "SharedArrayBuffer": "readonly"
  },
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 2018,
    "sourceType": "module"
  },
  "plugins": [
  ],
  "rules": {
    "@typescript-eslint/adjacent-overload-signatures": "warn",
    "@typescript-eslint/array-type": "warn",
    "@typescript-eslint/ban-types": "warn",
    "@typescript-eslint/camelcase": ["warn", {
      "properties": "always",
      "ignoreDestructuring": false,
      "ignoreImports": true,
      "genericType": "never",
      "allow": ["child_process"]
    }],
    "@typescript-eslint/class-name-casing": "warn",
    "@typescript-eslint/consistent-type-assertions": "off",
    "@typescript-eslint/interface-name-prefix": ["warn", { "prefixWithI": "always" }],
    "@typescript-eslint/member-delimiter-style": ["warn", {
      "multiline": {
        "delimiter": "semi",
        "requireLast": true
      },
      "singleline": {
        "delimiter": "semi",
        "requireLast": false
      }
    }],
    "@typescript-eslint/member-ordering": "warn",
    "@typescript-eslint/no-empty-function": "off",
    "@typescript-eslint/no-empty-interface": "warn",
    "@typescript-eslint/no-explicit-any": "off",
    "@typescript-eslint/no-extra-parens": "off",
    "@typescript-eslint/no-misused-new": "warn",
    "@typescript-eslint/no-namespace": "warn",
    "@typescript-eslint/no-parameter-properties": "off",
    "@typescript-eslint/no-use-before-define": "off",
    "@typescript-eslint/no-useless-constructor": "warn",
    "@typescript-eslint/no-var-requires": "warn",
    "@typescript-eslint/prefer-for-of": "warn",
    "@typescript-eslint/prefer-function-type": "warn",
    "@typescript-eslint/prefer-namespace-keyword": "warn",
    "@typescript-eslint/quotes": ["warn", "single", {
      "avoidEscape": true
    }],
    "@typescript-eslint/semi": ["warn", "always", {"omitLastInOneLineBlock": true}],
    "@typescript-eslint/triple-slash-reference": "warn",
    "@typescript-eslint/unified-signatures": "warn",
    "array-bracket-spacing": ["warn", "never"],
    "arrow-parens": ["warn", "as-needed"],
    "arrow-spacing": ["warn", {"after": true, "before": true}],
    "brace-style": ["warn", "1tbs", {"allowSingleLine": true}],
    "camelcase": "off",
    "comma-dangle": ["warn", "never"],
    "comma-spacing": ["warn", {"before": false, "after": true}],
    "complexity": "off",
    "computed-property-spacing": ["warn", "never", { "enforceForClassMembers": true }],
    "comma-style": ["warn", "last"],
    "consistent-this": ["error", "self"],
    "constructor-super": "warn",
    "curly": ["warn", "multi-line", "consistent"],
    "dot-notation": "warn",
    "eol-last": "warn",
    "eqeqeq": ["warn", "always"],
    "func-call-spacing": ["warn", "never"],
    "guard-for-in": "off",
    "id-blacklist": [
    ],
    "id-match": "warn",
    "indent": ["warn", {
      "SwitchCase": 1,
      "MemberExpression": "off",
      "FunctionDeclaration": {
        "parameters": 1,
        "body": 1
      },
      "FunctionExpression": {
        "parameters": 1,
        "body": 1
      },
      "CallExpression": {"arguments": "first"},
      "ArrayExpression": "off",
      "ObjectExpression": 1,
      "flatTernaryExpressions": true,
      "ignoredNodes": []
    }],
    "keyword-spacing": ["warn", {"after": true, "before": true}],
    "linebreak-style": ["warn", "unix"],
    "max-classes-per-file": ["warn", 3],
    "max-len": ["warn", {"code": 140, "ignoreComments": true}],
    "new-parens": "warn",
    "no-bitwise": "warn",
    "no-caller": "warn",
    "no-cond-assign": "warn",
    "no-console": "off",
    "no-constant-condition": "off",
    "no-debugger": "warn",
    "no-empty": "off",
    "no-eval": "warn",
    "no-fallthrough": "off",
    "no-invalid-this": "off",
    "no-multi-spaces": "warn",
    "no-multiple-empty-lines": ["warn", {"max": 2}],
    "no-new-wrappers": "warn",
    "no-shadow": ["off", {"hoist": "all"}],
    "no-throw-literal": "off",
    "no-trailing-spaces": "warn",
    "no-undef-init": "warn",
    "no-underscore-dangle": "off",
    "no-unsafe-finally": "warn",
    "no-unused-expressions": "warn",
    "no-unused-labels": "warn",
    "no-unused-vars": "off",
    "no-useless-constructor": "off",
    "no-useless-rename": "warn",
    "no-var": "warn",
    "no-whitespace-before-property": "warn",
    "object-curly-spacing": ["warn", "never"],
    "object-shorthand": ["warn", "never"],
    "one-var": ["warn", "never"],
    "prefer-arrow-callback": "warn",
    "prefer-const": "off",
    "quote-props": ["warn", "as-needed"],
    "quotes": ["warn", "single", {"avoidEscape": true, "allowTemplateLiterals": true}],
    "radix": "warn",
    "semi": "off",
    "semi-spacing": ["warn", {"before": false, "after": true}],
    "space-before-blocks": ["warn", { "functions": "always", "keywords": "always", "classes": "always" }],
    "space-before-function-paren": ["warn", {"anonymous": "never", "named": "never", "asyncArrow": "always"}],
    "space-in-parens": ["warn", "never"],
    "space-infix-ops": ["warn", { "int32Hint": false }],
    "space-unary-ops": [2, {"words": true, "nonwords": false, "overrides": {}}],
    "spaced-comment": "off",
    "switch-colon-spacing": ["error", {"after": true, "before": false}],
    "unicode-bom": ["warn", "never"],
    "use-isnan": "warn",
    "valid-typeof": "off"
  }
}

Note that some ESLint rules have been enhanced with TypeScript support in typescript-eslint. If you want to use the typescript-eslint version of such a rule, you should set the plain ESLint version to “off” to avoid conflicts.
List of ESLint rules:
List of typescript-eslint rules:

Part 2: From request to got

To be continued in another post, as this one got huge enough already!

Posted in JavaScript / TypeScript / Node.js, programming, web development.

Linux bash script to update and start multiple MJ12node crawlers

This post is more of a note to myself. First here is the script, and explanations follow. For a first-time installation, refer to How to install MJ12node on Ubuntu 18.04.

wget [new version URL]
tar xf mj12node_linux_v1721_net471.tgz
cp -r MJ12node/* MJ12nodeA
cp -r MJ12node/* MJ12nodeB
cp -r MJ12node/* MJ12nodeC
cp -r MJ12node/* MJ12nodeD
cp -r MJ12node/* MJ12nodeE
cp -r MJ12node/* MJ12nodeF
screen -dm bash -c 'cd MJ12nodeA; ./ -t; exec sh'
screen -dm bash -c 'cd MJ12nodeB; ./ -t; exec sh'
screen -dm bash -c 'cd MJ12nodeC; ./ -t; exec sh'
screen -dm bash -c 'cd MJ12nodeD; ./ -t; exec sh'
screen -dm bash -c 'cd MJ12nodeE; ./ -t; exec sh'
screen -dm bash -c 'cd MJ12nodeF; ./ -t; exec sh'

The first line downloads the file containing the new version (obviously, adapt with the proper URL).
Then we unpack it: its contents all go into a “MJ12node” folder (again, adapt with the proper name).

Then we copy the contents of that new MJ12node folder into existing nodes. By default, cp should just overwrite without asking for confirmation. If not, you may want to look into this guide.

Finally, we launch each node in a dedicated, detached screen session. For this, this screen reference was pretty helpful, as well as this StackExchange answer (which wasn’t picked as the accepted answer, as so often happens with the real best answers; meh).

Posted in servers, software.