

GitHub got a dark mode, yay :)

Since it’s been ages since I last posted, and since I don’t really have any ideas in the pipeline (nor any time to find one), I thought this would be a nice way to say I’m still alive ^^

So, GitHub finally got a dark theme. You can enable it here: https://github.com/settings/appearance

If, like me, you’ve been using Dark Reader, it won’t make much of a visual difference, but still, I find Dark Reader to be tremendously detrimental to browser performance, so any time I can disable it and replace it with a native dark mode is very, very much appreciated.

Posted in Internet, programming.


TIL WebRTC fully bypasses SOCKS proxies

I knew WebRTC could (or rather, would) leak your real IP if you were trying to hide it behind a SOCKS proxy (or even behind a VPN), which is why I disabled it in my main browsing profiles. But with the world-wide Covid confinement and all, I’ve had to use it quite a bit more (yay, let’s not use Mumble, let’s use Google freaking Hangouts/Meet…), so what was bound to happen happened: while in a meeting, I eventually had connection trouble. Not a total connection loss, though, just very short interruptions: enough to cut off my SOCKS proxies, but short enough that everything else kept running fine.

And it struck me: despite my SOCKS proxies all being suddenly disconnected, the meeting went on. The freaking WebRTC had been ignoring the freaking proxy the whole time. So it doesn’t just leak your IP here and there, it “leaks” it (not sure I should say “leaks” rather than just “uses”) constantly. Yikes. Not that it matters much, but still, WTF. What a crappy protocol, thanks W3C (and all the creeps who work on or advocate for that plague).

Update:
Maybe a setting that would prevent this in Firefox: https://www.wilderssecurity.com/threads/media-peerconnection-ice-proxy_only-true.416692/
No idea why the hell they don’t enable it by default though.
(cf also https://wiki.mozilla.org/Media/WebRTC/Privacy)
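
For reference, this is roughly what that setting would look like in a user.js file (a sketch, assuming the pref names from the links above are still current):

// Force WebRTC ICE candidates to go through the configured proxy only
user_pref("media.peerconnection.ice.proxy_only", true);
// Or disable WebRTC entirely (what I already do in my main browsing profiles)
user_pref("media.peerconnection.enabled", false);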

Posted in privacy.


Migrating from request to got (part 2)

As I said in a previous post, I recently had to ditch request. I found out about got as a replacement because it was mentioned a couple of times in the request deprecation ticket.

One of the first things I liked about got is that they provide a pretty extensive comparison table between got and the alternatives. To be honest, when reading this table I was first drawn to ky, as I’m always interested in limiting dependencies (and dependencies of dependencies) as well as package size. But ky is browser-side only. Damn.

So I had a quick look at got’s migration guide, and it seemed easy enough (plus it had promises, yay). You’ll probably find it’s a great place to start, and maybe it will even be all you need. I still had to dig a bit more for a few things though.

The first one is that, by default, got will throw an exception whenever the returned HTTP code isn’t a success (2xx) code. It might sound like perfectly normal behavior to you, but I was using at least one API that would return a 404 code during normal operation (looking at you, MailChimp), and I certainly didn’t expect that to throw an exception!
The solution (apart from rewriting your code to deal with exceptions in cases where you used to deal with return data another way) is to set throwHttpErrors to false in your request parameters (cf the example code at the end of this post).
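
As a quick illustration before the full examples at the end of this post, here is a hypothetical helper written against got v10-style options (the function name and the MailChimp-like scenario are just for the sketch):

async function fetchListMember(url: string): Promise<any> {
  // With throwHttpErrors disabled, a 404 is just data to inspect, not an exception
  const response = await got(url, {responseType: <'text'>'text', throwHttpErrors: false});
  if (response.statusCode === 404) return null; // e.g. MailChimp saying the member doesn't exist
  return JSON.parse(response.body);
}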

The second one was getting binary data, but in this case I actually find got much clearer than request: to get binary data with request, you need to set encoding to null in the parameters. With got, you replace that with responseType: "buffer".

Some other small things: the typings (@types/got) are better than request’s, at least concerning the request parameters, which means you’ll probably have to cast some fields there (for instance, method isn’t a string but a "GET" | "POST" | "PUT" ...). Also, the timeout is, as I understand it, different in got and request: in request, it’s a timeout on the server starting to respond, while in got I believe it’s a timeout on the server finishing its response. Interesting for me, as I sometimes use request/got to download files and want to handle slow download speeds when they occur.
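
For what it’s worth, and depending on your got version, the timeout option can also take an object targeting specific phases instead of a single number, which is handy for exactly that slow-download case (a sketch; the helper name is made up):

async function requestFileSlowOk(url: string): Promise<Buffer> {
  const response = await got(url, {
    responseType: <'buffer'>'buffer',
    timeout: {
      response: 3_000,  // time allowed for the server to start responding (headers received)
      request: 60_000   // overall budget for the whole transfer, download included
    }
  });
  return response.body;
}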

Finally, here are a couple of functions as they were with request and how they became with got:

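// requestFile, original version using request: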
function requestFile(url: string): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: 'GET',
      url: url,
      headers: {
        'Content-Type': 'binary/octet-stream'
      },
      // *Note:* if you expect binary data, you should set encoding: null (https://www.npmjs.com/package/request)
      encoding: null,
      // todo: find a way to have timeout applied to read and not just connection
      timeout: 2_000
    };
    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        resolve(body);
      }
    });
  });
}

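// requestFile, rewritten with got: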
async function requestFile(url: string): Promise<Buffer> {
  const reqParameters = {
    method: <'GET'>'GET',
    headers: {
      'Content-Type': 'binary/octet-stream'
    },
    responseType: <'buffer'>'buffer',
    timeout: 3_000,
    throwHttpErrors: true
  };
  const response = await got(url, reqParameters);
  return response.body;
}


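// sendApiRequest, original version using request: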
function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray): Promise<any> {
  return new Promise((resolve, reject) => {
    let requestParameters = {
      method: method,
      url: this.API_ROOT + route,
      body: body ? JSON.stringify(body) : null,
      headers: <{'API-Authorization': string; [key: string]: string}>{
        'API-Authorization': this.apiKey
      }
    };
    if (headers) {
      for (const h of headers) {
        requestParameters.headers[h.name] = h.value;
      }
    }

    request(requestParameters, (error, response, body) => {
      if (error) {
        reject(error);
      } else {
        const parsedBody = JSON.parse(body);
        let res: any;
        if (parsedBody.data) res = parsedBody.data;
        else res = parsedBody;
        resolve(res);
      }
    });
  });
}

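// sendApiRequest, rewritten with got: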
async function sendApiRequest(
  method: 'DELETE' | 'GET' | 'POST',
  route: string,
  body?: any,
  headers?: headersArray
): Promise<any> {
  const reqUrl = this.API_ROOT + route;
  let reqParameters = {
    method: method,
    headers: <{[key: string]: string; 'API-Authorization': string}>{
      'API-Authorization': this.apiKey
    },
    body: body ? JSON.stringify(body) : undefined,
    retry: 0,
    responseType: <'text'>'text',
    throwHttpErrors: false // seriously what a shitty default
  };
  if (headers) {
    for (const h of headers) {
      reqParameters.headers[h.name] = h.value;
    }
  }

  const response = await got(reqUrl, reqParameters);
  const parsedBody = JSON.parse(response.body);
  let res: any;
  if (parsedBody.data) res = parsedBody.data;
  else res = parsedBody;
  return res;
}

Posted in JavaScript / TypeScript / Node.js, programming, web development.


Migrating from tslint to eslint and from request to got (part 1)

Last month was unexpectedly busy, update-wise, as 2 major dependencies I use for work-related projects suddenly announced, barely a few days apart, that they were being discontinued. The first was tslint, which probably every TypeScript developer has at least heard of once (~3.4M weekly downloads at the moment). The second was request, which probably every Node developer who’s ever had to make an HTTP request has already heard of.

Both projects had become pretty quiet, update-wise, so I probably could have seen this coming earlier had I paid attention more carefully. Anyway, replacements were easy to find: tslint clearly suggested eslint; request didn’t suggest a particular replacement, but got and a few others seemed obviously widespread.

Part 1: From tslint to eslint

I installed eslint globally, which isn’t recommended for some reason, but works perfectly fine for me. You’ll most likely also need a couple of plugins for TypeScript, namely:
npm i -g eslint @typescript-eslint/eslint-plugin @typescript-eslint/parser

A key tool in the migration was tslint-to-eslint-config. Running it is as simple as npx tslint-to-eslint-config (after, if you don’t have it already, npm install -g npx). It produces an .eslintrc.js file that’s a good basis for your new eslint rules. Except that it’s in JavaScript and I definitely wanted JSON. I also created a base config file from eslint itself, via the eslint --init mentioned in its getting started guide. And then I basically combined the two.

Finally, I went through my code with the new rules, either fixing the new violations or adding rules for them. In the end, I couldn’t match my tslint rules perfectly, but I had just about 20 lines (on a ~30k-line codebase) that required a change I was unhappy with, mostly around indenting and typecasting.

Last but not least, here are my config files:

Old tslint.json:

{
  "defaultSeverity": "warning",
  "extends": [
    "tslint:recommended"
  ],
  "jsRules": {},
  "rules": {
    "array-type": [true, "array"],
    "arrow-parens": [true, "ban-single-arg-parens"],
    "curly": [true, "ignore-same-line"],
    "eofline": true,
    "max-classes-per-file": [true, 3],
    "max-line-length": [true, {"limit": 140, "ignore-pattern": "^import |^export {(.*?)}|class [a-zA-Z] implements |//"}],
    "no-angle-bracket-type-assertion": false,
    "no-consecutive-blank-lines": [true, 2],
    "no-console": false,
    "no-empty": false,
    "no-shadowed-variable": false,
    "no-string-literal": true,
    "no-string-throw": true,
    "no-trailing-whitespace": true,
    "object-literal-key-quotes": [true, "as-needed"],
    "object-literal-shorthand": [true, "never"],
    "object-literal-sort-keys": false,
    "one-line": [true, "check-catch", "check-finally", "check-else", "check-open-brace"],
    "only-arrow-functions": [true, "allow-named-functions"],
    "prefer-const": false,
    "quotemark": [true, "single", "avoid-escape"],
    "semicolon": [true, "always"],
    "trailing-comma": [true, {"multiline": "never", "singleline": "never"}],
    "triple-equals": [true, "allow-null-check", "allow-undefined-check"],
    "variable-name": [true, "ban-keywords", "check-format", "allow-leading-underscore"],
    "whitespace": [true, "check-branch", "check-decl", "check-operator", "check-module",
      "check-separator", "check-rest-spread", "check-type", "check-type-operator","check-preblock"]
  },
  "rulesDirectory": []
}

New .eslintrc.json (yes, it’s a hell of a lot bigger):

{
  "env": {
    "es6": true,
    "node": true
  },
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/eslint-recommended"
  ],
  "globals": {
    "Atomics": "readonly",
    "SharedArrayBuffer": "readonly"
  },
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 2018,
    "sourceType": "module"
  },
  "plugins": [
    "@typescript-eslint"
  ],
  "rules": {
    "@typescript-eslint/adjacent-overload-signatures": "warn",
    "@typescript-eslint/array-type": "warn",
    "@typescript-eslint/ban-types": "warn",
    "@typescript-eslint/camelcase": ["warn",
      {
        "properties": "always",
        "ignoreDestructuring": false,
        "ignoreImports": true,
        "genericType": "never",
        "allow": ["child_process"]
      }
    ],
    "@typescript-eslint/class-name-casing": "warn",
    "@typescript-eslint/consistent-type-assertions": "off",
    "@typescript-eslint/interface-name-prefix": ["warn", { "prefixWithI": "always" }],
    "@typescript-eslint/member-delimiter-style": [
      "warn",
      {
          "multiline": {
              "delimiter": "semi",
              "requireLast": true
          },
          "singleline": {
              "delimiter": "semi",
              "requireLast": false
          }
      }
    ],
    "@typescript-eslint/member-ordering": "warn",
    "@typescript-eslint/no-empty-function": "off",
    "@typescript-eslint/no-empty-interface": "warn",
    "@typescript-eslint/no-explicit-any": "off",
    "@typescript-eslint/no-extra-parens": "off",
    "@typescript-eslint/no-misused-new": "warn",
    "@typescript-eslint/no-namespace": "warn",
    "@typescript-eslint/no-parameter-properties": "off",
    "@typescript-eslint/no-use-before-define": "off",
    "@typescript-eslint/no-useless-constructor": "warn",
    "@typescript-eslint/no-var-requires": "warn",
    "@typescript-eslint/prefer-for-of": "warn",
    "@typescript-eslint/prefer-function-type": "warn",
    "@typescript-eslint/prefer-namespace-keyword": "warn",
    "@typescript-eslint/quotes": [
      "warn",
      "single",
      {
          "avoidEscape": true
      }
    ],
    "@typescript-eslint/semi": ["warn", "always", {"omitLastInOneLineBlock": true}],
    "@typescript-eslint/triple-slash-reference": "warn",
    "@typescript-eslint/unified-signatures": "warn",
    "array-bracket-spacing": ["warn", "never"],
    "arrow-parens": ["warn", "as-needed"],
    "arrow-spacing": ["warn", {"after": true, "before": true}],
    "brace-style": ["warn", "1tbs", {"allowSingleLine": true}],
    "camelcase": "off",
    "comma-dangle": ["warn", "never"],
    "comma-spacing": ["warn", {"before": false, "after": true}],
    "complexity": "off",
    "computed-property-spacing": ["warn", "never", { "enforceForClassMembers": true }],
    "comma-style": ["warn", "last"],
    "consistent-this": ["error", "self"],
    "constructor-super": "warn",
    "curly": ["warn", "multi-line", "consistent"],
    "dot-notation": "warn",
    "eol-last": "warn",
    "eqeqeq": ["warn", "always"],
    "func-call-spacing": ["warn", "never"],
    "guard-for-in": "off",
    "id-blacklist": [
        "warn",
        "any",
        "Number",
        "number",
        "String",
        "string",
        "Boolean",
        "boolean",
        "Undefined"
    ],
    "id-match": "warn",
    "indent": [
      "warn",
      2,
      {
        "SwitchCase": 1,
        "MemberExpression": "off",
        "FunctionDeclaration": {
          "parameters": 1,
          "body": 1
        },
        "FunctionExpression": {
          "parameters": 1,
          "body": 1
        },
        "CallExpression": {"arguments": "first"},
        "ArrayExpression": "off",
        "ObjectExpression": 1,
        "flatTernaryExpressions": true,
        "ignoredNodes": []
      }
    ],
    "keyword-spacing": ["warn", {"after": true, "before": true}],
    "linebreak-style": ["warn", "unix"],
    "max-classes-per-file": ["warn", 3],
    "max-len": ["warn", {"code": 140, "ignoreComments": true}],
    "new-parens": "warn",
    "no-bitwise": "warn",
    "no-caller": "warn",
    "no-cond-assign": "warn",
    "no-console": "off",
    "no-constant-condition": "off",
    "no-debugger": "warn",
    "no-empty": "off",
    "no-eval": "warn",
    "no-fallthrough": "off",
    "no-invalid-this": "off",
    "no-multi-spaces": "warn",
    "no-multiple-empty-lines": ["warn", {"max": 2}],
    "no-new-wrappers": "warn",
    "no-shadow": ["off", {"hoist": "all"}],
    "no-throw-literal": "off",
    "no-trailing-spaces": "warn",
    "no-undef-init": "warn",
    "no-underscore-dangle": "off",
    "no-unsafe-finally": "warn",
    "no-unused-expressions": "warn",
    "no-unused-labels": "warn",
    "no-unused-vars": "off",
    "no-useless-constructor": "off",
    "no-useless-rename": "warn",
    "no-var": "warn",
    "no-whitespace-before-property": "warn",
    "object-curly-spacing": ["warn", "never"],
    "object-shorthand": ["warn", "never"],
    "one-var": ["warn", "never"],
    "prefer-arrow-callback": "warn",
    "prefer-const": "off",
    "quote-props": ["warn", "as-needed"],
    "quotes": ["warn", "single", {"avoidEscape": true, "allowTemplateLiterals": true}],
    "radix": "warn",
    "semi": "off",
    "semi-spacing": ["warn", {"before": false, "after": true}],
    "space-before-blocks": ["warn", { "functions": "always", "keywords": "always", "classes": "always" }],
    "space-before-function-paren": ["warn", {"anonymous": "never", "named": "never", "asyncArrow": "always"}],
    "space-in-parens": ["warn", "never"],
    "space-infix-ops": ["warn", { "int32Hint": false }],
    "space-unary-ops": [2, {"words": true, "nonwords": false, "overrides": {}}],
    "spaced-comment": "off",
    "switch-colon-spacing": ["error", {"after": true, "before": false}],
    "unicode-bom": ["warn", "never"],
    "use-isnan": "warn",
    "valid-typeof": "off"
  }
}

Note that some ESLint rules have a TypeScript-aware counterpart in typescript-eslint. If you want to use the typescript-eslint version of a rule, set the base ESLint rule to “off” to avoid conflicts.
List of ESLint rules: https://eslint.org/docs/rules/
List of typescript-eslint rules: https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/eslint-plugin/docs/rules
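
For instance, that’s exactly what the config above does with semicolons: the base rule is turned off and the typescript-eslint one does the work:

"semi": "off",
"@typescript-eslint/semi": ["warn", "always", {"omitLastInOneLineBlock": true}]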

Part 2: From request to got

To be continued in another post, as this one got huge enough already!

Posted in JavaScript / TypeScript / Node.js, programming, web development.


Linux bash script to update and start multiple MJ12node crawlers

This post is more of a note to myself. First, here is the script; explanations follow. For a first-time installation, refer to How to install MJ12node on Ubuntu 18.04.

wget https://example.com/mj12node_linux_v1721_net471.tgz
tar xf mj12node_linux_v1721_net471.tgz
cp -r MJ12node/* MJ12nodeA
cp -r MJ12node/* MJ12nodeB
cp -r MJ12node/* MJ12nodeC
cp -r MJ12node/* MJ12nodeD
cp -r MJ12node/* MJ12nodeE
cp -r MJ12node/* MJ12nodeF
cp -r MJ12node/* MJ12nodeG
cp -r MJ12node/* MJ12nodeH
cp -r MJ12node/* MJ12nodeI
screen -dm bash -c 'cd MJ12nodeA; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeB; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeC; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeD; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeE; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeF; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeG; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeH; ./run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeI; ./run.py -t; exec bash'

The first line downloads the archive containing the new version (obviously, adapt the URL).
Then we unpack it: its contents all go into a “MJ12node” folder (again, adapt the name if needed).

Then we copy the contents of that new MJ12node folder into existing nodes. By default, cp should just overwrite without asking for confirmation. If not, you may want to look into this guide.

Finally, we launch each node in a dedicated, detached screen session. For this, this screen reference was pretty helpful, as well as this StackExchange answer (which wasn’t picked as the accepted answer, as often happens with the real best answers – meh).
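
Since the node folders only differ by their letter suffix, the same update can also be written as a loop (same commands as above, just less copy-pasting):

# Same update procedure as the script above, as a loop over the node suffixes
wget https://example.com/mj12node_linux_v1721_net471.tgz
tar xf mj12node_linux_v1721_net471.tgz
for suffix in A B C D E F G H I; do
  cp -r MJ12node/* "MJ12node${suffix}"
  screen -dm bash -c "cd MJ12node${suffix}; ./run.py -t; exec bash"
done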

Update (2021-11-23)

Since I recently had to set everything up again, here are some additions for a first-time installation:

mkdir MJ12nodeA
mkdir MJ12nodeB
mkdir MJ12nodeC
mkdir MJ12nodeD
mkdir MJ12nodeE
mkdir MJ12nodeF
mkdir MJ12nodeG
mkdir MJ12nodeH
mkdir MJ12nodeI
chmod 744 MJ12nodeA/run.py
chmod 744 MJ12nodeB/run.py
chmod 744 MJ12nodeC/run.py
chmod 744 MJ12nodeD/run.py
chmod 744 MJ12nodeE/run.py
chmod 744 MJ12nodeF/run.py
chmod 744 MJ12nodeG/run.py
chmod 744 MJ12nodeH/run.py
chmod 744 MJ12nodeI/run.py

(to combine these with the commands listed above: it’s first mkdir, then cp, then chmod, then screen)

Additionally, I found a slightly more comfortable way to run things in screen: the original version of this post used commands like screen -dm bash -c 'cd MJ12nodeH; ./run.py -t; exec sh', but replacing sh with bash gives more features (the script above already reflects this).

Also, here’s an updated list of dependencies for Ubuntu 21.10:
sudo apt-get install python mono-runtime libmono-sqlite4.0-cil bind9
It appears that libmono-corlib4.5-cil is no longer necessary and that Python isn’t installed by default now.
Bind isn’t strictly mandatory, but if you don’t have it or an equivalent, you’ll get a much higher DNS error rate.

Lastly, a tip to speed up configuring your nodes: you can just start and configure one node, then copy the config file to the other (non-running) nodes. You then just need to change the node name in the config file (config.xml => PeerNodeCfg => NodeName), and you’re good to go.
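
A possible sketch of that last step, assuming the node name really appears as a <NodeName> element in config.xml and that your node names match the folder names (both are assumptions, adapt as needed):

# Reuse node A's config for node B, then rename the node inside it
cp MJ12nodeA/config.xml MJ12nodeB/config.xml
sed -i 's|<NodeName>MJ12nodeA</NodeName>|<NodeName>MJ12nodeB</NodeName>|' MJ12nodeB/config.xml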

Update (2022-11-27)

Reinstalling this on Ubuntu 22.04 LTS, the “python” package is no more, so I had to install python2 instead (yup, MJ12 isn’t on Python 3 yet 🙁 ). After a brief search, I couldn’t find a trivial way to fix “bash: ./run.py: /usr/bin/python: bad interpreter: No such file or directory”, so I also replaced the start command with “python2 run.py”. Let’s recap:

sudo apt-get install python2 mono-runtime libmono-sqlite4.0-cil bind9

screen -dm bash -c 'cd MJ12nodeA; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeB; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeC; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeD; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeE; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeF; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeG; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeH; python2 run.py -t; exec bash'
screen -dm bash -c 'cd MJ12nodeI; python2 run.py -t; exec bash'

Posted in servers, software.


aToad #27: Advanced REST Client

API testing tool

When I first looked for an API testing tool a few years ago, I initially found some stuff with really, really poor UX, and then someone recommended Postman to me. It wasn’t very satisfying (seriously, all those tools aimed at developers or power users that don’t let you choose their installation folder… so stupid), but since I didn’t need to do much with it, I settled for making a portable version of it and using that.

That was until the day I had to make an HTTP request with very specific headers. I found out that Postman adds a bunch of crap default headers, and… you just can’t turn that off. People have ranted about that major issue for years, and Postman hasn’t given a crap about it. Yup. There too.

Anyway, so I looked for something else, and I eventually found Advanced REST Client. Like Postman, it’s Electron-based, so it works pretty much everywhere. Unlike Postman, it’s free, libre, open-source software. And it allows crafting requests with no automatically enforced headers (note that by default it will add user-agent: advanced-rest-client and accept: */*, but this can be disabled in the settings).
Sadly, they don’t provide a portable package but, on Windows, you can open setup.exe with, for instance, 7-Zip, and then manually extract $PLUGINSDIR/app-64.7z, which contains everything needed to run the 64-bit version of ARC.

Apart from finally having full control over requests and headers, and the fact that it’s FLOSS, here are other things I like about it:

  • it seems to work well with RESTful API Modeling Language (RAML) (haven’t tried it yet)
  • it seems to work well with OpenAPI Specification (OAS) (ditto)
  • it allows setting up hosts rules (more convenient than editing the hosts file for testing purposes)
  • it seems to be pretty privacy-friendly, although it should be noted that data sharing is opt-out rather than opt-in

Last but not least, when you create a request, it shows you code to run it with cURL, JavaScript, Python, C and Java, as well as the raw HTTP request. That’s convenient for copy/pasting code, but more importantly it’s great for knowing exactly what request was sent, without resorting to annoying tricks like running Wireshark (which is a bit ridiculous when you’re the one supposed to be in full control of the request in the first place).

Posted in A Tool A Day.


aToad #26: WinMerge

A feature-rich file comparison tool

I was just trying to compare the file lists of 2 folders (just by filename, as the files were meant to be different, but with matching names). The first relevant thing I found was WinMerge. Since it seemed to be able to do the job, plus it’s open source, I gave it a spin. And I found out it does so much more. I haven’t actually tried any of those other features, but I noticed that it includes a hexadecimal editor (Frhed), as well as an image comparison component. So it seems to be able to do detailed comparisons of binary files and pictures.

From the little I tried, it did the job (although I didn’t find a way to just compare files by name, it also forced me to compare by modification time or size, as far as I remember). But the UI is quite intense and using the more complex features will probably require some trial and error… or just reading the manual a bit.

Posted in A Tool A Day.


Firefox Armag-addon fix script

It’s rather useless now, but it’s been sitting in my notepad for ages and I don’t have the heart to discard it. It’s still interesting stuff for certificate management.

For those of you who live in a cave (or are still using Chrome despite knowing better), about 5 months ago Firefox had a huge snafu: the certificate used to sign add-ons expired and they didn’t see it coming (ikr), and since they are geniuses who pretend to know better than their users what’s best for them, signing is mandatory for add-ons. Long story short: all add-ons were disabled as soon as Firefox ran its daily(-ish) signature check.

One of the early fixes consisted of delivering an add-on, via the channel used for telemetry/studies, which installed a new certificate. Some smart cookie extracted the certificate from said add-on, and someone else posted instructions on how to import said certificate just by using the browser console, which I found pretty cool.
So here you go:

// Firefox Armag-addon fix script
// Sources:
// - https://www.reddit.com/r/firefox/comments/bkspmk/addons_fix_for_5602_older/
// - https://www.velvetbug.com/benb/icfix/

// Just run this in the browser console (Ctrl+Shift+J) (if you can't, set devtools.chrome.enabled to true in about:config, although the console available via Ctrl+Shift+I should do the trick too)

// 1) add the certificate from the hotfix
let intermediate = "MIIHLTCCBRWgAwIBAgIDEAAIMA0GCSqGSIb3DQEBDAUAMH0xCzAJBgNVBAYTAlVTMRwwGgYDVQQKExNNb3ppbGxhIENvcnBvcmF0aW9uMS8wLQYDVQQLEyZNb3ppbGxhIEFNTyBQcm9kdWN0aW9uIFNpZ25pbmcgU2VydmljZTEfMB0GA1UEAxMWcm9vdC1jYS1wcm9kdWN0aW9uLWFtbzAeFw0xNTA0MDQwMDAwMDBaFw0yNTA0MDQwMDAwMDBaMIGnMQswCQYDVQQGEwJVUzEcMBoGA1UEChMTTW96aWxsYSBDb3Jwb3JhdGlvbjEvMC0GA1UECxMmTW96aWxsYSBBTU8gUHJvZHVjdGlvbiBTaWduaW5nIFNlcnZpY2UxJjAkBgNVBAMTHXNpZ25pbmdjYTEuYWRkb25zLm1vemlsbGEub3JnMSEwHwYJKoZIhvcNAQkBFhJmb3hzZWNAbW96aWxsYS5jb20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC/qluiiI+wO6qGA4vH7cHvWvXpdju9JnvbwnrbYmxhtUpfS68LbdjGGtv7RP6F1XhHT4MU3v4GuMulH0E4Wfalm8evsb3tBJRMJPICJX5UCLi6VJ6J2vipXSWBf8xbcOB+PY5Kk6L+EZiWaepiM23CdaZjNOJCAB6wFHlGe+zUk87whpLa7GrtrHjTb8u9TSS+mwjhvgfP8ILZrWhzb5H/ybgmD7jYaJGIDY/WDmq1gVe03fShxD09Ml1P7H38o5kbFLnbbqpqC6n8SfUI31MiJAXAN2e6rAOM8EmocAY0EC5KUooXKRsYvHzhwwHkwIbbe6QpTUlIqvw1MPlQPs7Zu/MBnVmyGTSqJxtYoklr0MaEXnJNY3g3FDf1R0Opp2/BEY9Vh3Fc9Pq6qWIhGoMyWdueoSYa+GURqDbsuYnk7ZkysxK+yRoFJu4x3TUBmMKM14jQKLgxvuIzWVn6qg6cw7ye/DYNufc+DSPSTSakSsWJ9IPxiAU7xJ+GCMzaZ10Y3VGOybGLuPxDlSd6KALAoMcl9ghB2mvfB0N3wv6uWnbKuxihq/qDps+FjliNvr7C66mIVH+9rkyHIy6GgIUlwr7E88Qqw+SQeNeph6NIY85PL4p0Y8KivKP4J928tpp18wLuHNbIG+YaUk5WUDZ6/2621pi19UZQ8iiHxN/XKQIDAQABo4IBiTCCAYUwDAYDVR0TBAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwFgYDVR0lAQH/BAwwCgYIKwYBBQUHAwMwHQYDVR0OBBYEFBY++xz/DCuT+JsV1y2jwuZ4YdztMIGoBgNVHSMEgaAwgZ2AFLO86lh0q+FueCqyq5wjHqhjLJe3oYGBpH8wfTELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE01vemlsbGEgQ29ycG9yYXRpb24xLzAtBgNVBAsTJk1vemlsbGEgQU1PIFByb2R1Y3Rpb24gU2lnbmluZyBTZXJ2aWNlMR8wHQYDVQQDExZyb290LWNhLXByb2R1Y3Rpb24tYW1vggEBMDMGCWCGSAGG+EIBBAQmFiRodHRwOi8vYWRkb25zLm1vemlsbGEub3JnL2NhL2NybC5wZW0wTgYDVR0eBEcwRaFDMCCCHi5jb250ZW50LXNpZ25hdHVyZS5tb3ppbGxhLm9yZzAfgh1jb250ZW50LXNpZ25hdHVyZS5tb3ppbGxhLm9yZzANBgkqhkiG9w0BAQwFAAOCAgEAX1PNli/zErw3tK3S9Bv803RV4tHkrMa5xztxzlWja0VAUJKEQx7f1yM8vmcQJ9g5RE8WFc43IePwzbAoum5F4BTM7tqM//+e476F1YUgB7SnkDTVpBOnV5vRLz1Si4iJ/U0HUvMUvNJEweXvKg/DNbXuCreSvTEAawmRIxqNYoaigQD8x4hCzGcVtIi5Xk2aMCJW2K/6JqkN50pnLBNkPx6FeiYMJCP8z0FIz3fv53FHgu3oeDhi2u3VdONjK3aaFWTlKNiGeDU0/lr0suWfQLsNyphTMbYKyTqQYHxXYJno9PuNi7e1903PvM47fKB5bFmSLyzB1hB1YIVLj0/YqD4nz3lADDB91gMBB7vR2h5bRjFqLOxuOutNNcNRnv7UPqtVCtLF2jVb4/AmdJU78jpfDs+BgY/t2bnGBVFBuwqS2Kult/2kth4YMrL5DrURIM8oXWVQRBKxzr843yDmHo8+2rqxLnZcmWoe8yQ41srZ4IB+V3w2TIAd4gxZAB0Xa6KfnR4D8RgE5sgmgQoK7Y/hdvd9Ahu0WEZI8Eg+mDeCeojWcyjF+dt6c2oERiTmFTIFUoojEjJwLyIqHKt+eApEYpF7imaWcumFN1jR+iUjE4ZSUoVxGtZ/Jdnkf8VVQMhiBA+i7r5PsfrHq+lqTTGOg+GzYx7OmoeJAT0zo4c=";
let certDB = Cc["@mozilla.org/security/x509certdb;1"].getService(Ci.nsIX509CertDB);
certDB.addCertFromBase64(intermediate, ",,");

// 2) recheck all add-ons
ChromeUtils.defineModuleGetter(this, "XPIDatabase", "resource://gre/modules/addons/XPIDatabase.jsm");
XPIDatabase.verifySignatures();

Posted in Firefox.


Using DNSCrypt or DoH (or both) on Windows

A long time ago in a galaxy far, far the same, I set up my previous laptop with whatever was needed to send DNS queries to a DNSCrypt resolver instead of using my ISP’s.
At the time, it was kind of complicated (or at least tedious): I had to install dnscrypt-proxy, and because it had no caching mechanism, I had to also install Unbound on top of it. Both had to be installed as services and run at startup, Unbound had to listen on port 53 so that I could tell Windows to use 127.0.0.1 as a DNS server, dnscrypt-proxy had to listen on some arbitrary port, and Unbound had to be configured to query that. Not very fun. And as I was short on time, I never bothered formalizing all this into something that looks like a proper-ish guide. Which discouraged me from doing the same setup on other computers. Until today.
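
For the record, the Unbound side of that old setup looked roughly like this (a sketch from memory, with 5353 standing in for the arbitrary dnscrypt-proxy port):

# unbound.conf: listen locally on port 53 and forward everything to dnscrypt-proxy
server:
    interface: 127.0.0.1
    port: 53
    do-not-query-localhost: no
forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5353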

I decided to give it another shot. First surprise: Unbound now has much better documentation, with a whole guide (in PDF) for Windows. Before downloading it, I checked out DNSCrypt / dnscrypt-proxy, fearing the worst: last time I checked, the project was abandoned and it wasn’t very clear what would replace it. Nice surprise there: there are now a bunch of clients, with dnscrypt-proxy at the top (probably a full rewrite, since it’s at version 2.x and written in Go). And yet another nice surprise: as I was checking the documentation, I was directed to Simple DNSCrypt, which seems to be the recommended way to install dnscrypt-proxy if you want to avoid getting a headache.

I don’t have much to say about Simple DNSCrypt: it’s really easy to use indeed (as long as you have a vague idea of how DNS works), and if I didn’t want an excuse to keep the links above somewhere safe, I could have just made a short “aToad” post about it. The default configuration is generally fine, I’ll just mention a few points/tweaks:

  • You’ll probably have to manually toggle on the DNSCrypt Service, and configure your network card(s) to use it (no need to dig into your Windows network settings; Simple DNSCrypt provides a one-click button for that, and I don’t think you can miss it).
  • By default, DNSCrypt will be configured to automatically select any resolver with DNSSEC support + no logs + no filter. This includes a Cloudflare server, so you may want to disable that one. It also includes both servers that use DNSCrypt and servers that use DNS over HTTPS, which I find pretty neat.
  • The query log (off by default) can be useful to check that your computer is actually using dnscrypt-proxy (but you may want to turn it back off as soon as you’ve checked, as I guess it will grow big pretty fast). It also shows which DNS resolver each request is sent to, so you should notice that dnscrypt-proxy rotates between your chosen servers. Which is great for privacy… and makes it more harmless if you choose to keep Cloudflare in your list.
  • The advanced settings tab lets you enable/disable a DNS cache. It’s great because it means I don’t need Unbound on top of it. However, the default value for the cache (256 entries) isn’t appropriate for me (I run a web crawler, so a value of 2048, for instance, sounds better) and it cannot be edited from the UI. To change it, you need to shut down Simple DNSCrypt (and possibly dnscrypt-proxy too, not sure about that), then modify the cache_size line in C:\[path to Simple DNSCrypt]\dnscrypt-proxy\dnscrypt-proxy.toml (see the snippet just below).
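
For reference, the relevant lines in dnscrypt-proxy.toml look something like this (assuming current dnscrypt-proxy 2.x option names; cache_size is the one Simple DNSCrypt doesn’t expose in its UI):

cache = true
cache_size = 2048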

It also has more advanced features, like a domain blacklist, which might be more comfortable than using the good old HOSTS file (although beware: it obviously won’t block anything anymore if your system somehow switches back to your ISP’s DNS), and a “cloak and forward” feature, which I haven’t looked into (and I’m not sure what it does ^^).

I’ve been using this for a few days. So far, no issues, no extra latency, no abnormal DNS error rate in my web crawler… Looking good! And since it’s so easy to set up, I’ll probably put it on all my other computers soon 🙂

Posted in Internet, software, Windows.


A guide to shitty web designs (please don’t do this)

“Things used to be better in the past.” Sound familiar? While it may be a simple sign of nostalgia, sometimes… it’s just true.
Website designs have improved in many ways over the last couple of decades. For instance, see a blast from the past here if you want to hurt your eyes with some ancient designs that are still online. Although they aren’t the worst (those have been taken down ^^). But not all changes are good, and modern websites increasingly tend to have UX flaws that barely existed, if at all, in the past.

Here’s a list of 8 such annoyances. It’s not a “top 8”, it’s just things I wrote down as I encountered them. They’re not sorted, not exhaustive, and you’ll see they mostly revolve around login procedures for some reason.

1) The hidden login form

Some websites place an extreme focus on the sign-up form, and neglect/hide/bury the login form. Combined with the tendency to make sign-up forms as simple/short as possible, it makes it very easy to get mixed up. These days, I regularly fill in sign-up forms by accident, when what I meant was to log in. Simply because some stupid UX designer put a 2-field registration form right where you would expect a login form. And then the form tells me “this account already exists”. Yes, I know, I was trying to log in.
I don’t get the logic in their twisted minds: you only register once, you log in many times, so why the hell make it longer to log in than to register?! To me, it signals that they desperately need more registrations… Thankfully, it’s still rather rare.

2) The 2-step login form

When I started web development, a good practice, on a failed login attempt, was to show a generic error message like “invalid credentials”, giving the user no indication of whether or not the ID they entered was a valid ID in the first place.
I don’t know what the hell happened, but at some point this commonsense practice became an oddity. And then some morons started designing 2-step login forms like this: 1) type your e-mail, 2) we tell you whether it’s a valid ID, and if so, now you type your password. I don’t know who started it all, but the first time I saw that was for the… Gmail login. Kudos, Google!

When this was introduced, I remember a bunch of discussions on IT/dev forums, basically all agreeing that this was not just silly, but a security issue. With such a system, you can typically check whether e-mail address X@Y.Z is a registered user on site shitdesign.com. Random example: imagine if Pornhub did that? (NB: I just had a look, they don’t*)
Some websites have thought about that. But they tried to be smart: instead of reverting to the good, old-fashioned way of doing a login form, they had the genius idea of keeping the 2 steps and adding… a CAPTCHA! And not just one, but 2: one after entering your ID, then another one after entering your password. Isn’t that brilliant? What did you say? You can’t believe people can be that stupid? Well, believe me now:

[Screenshots: 2-step login with a CAPTCHA, step 1 and step 2]

Oh, and although it prevents mass-checking whether a list of e-mail addresses have accounts, it doesn’t prevent manually checking a few of them.

Apart from this problem, which people who “have nothing to hide” maybe won’t care about (although CAPTCHAs are never very fun), those 2-step login forms have the highly annoying characteristic of making it a pain to use a password manager. RIP credentials autotype! Thanks, smart UX designer!

3) Other BS that breaks password managers

Two-step login isn’t the only thing that breaks password managers. Some sites show cute modals and stuff, but sometimes those decorative features use weird JavaScript that makes the login form vanish as soon as it loses focus (say… when you want to switch to your password manager to trigger auto-typing). On the plus side, that’s not a deliberate “feature”, so you can expect it to be fixed, eventually (Namecheap was the example I had in mind for this, but I just checked and they did fix it, hurray). On the downside, there’s little chance the website operator bothers fixing it just because you asked, so you’ll probably have to wait a while for a fix.

Another password manager annoyance comes from most banking sites, which provide a virtual keyboard (well, numpad) that you must use to enter your passcode. No copy-pasting, no auto-typing, you must use their damned numpad. For your safety. From a banking site that generally forces you to use a 6-digit password (but not your birthday – yes, we made it just the right size for a birthday, but don’t use that). Meh.

4) The non-working “remember me / stay logged in” feature

Not a big annoyance here, but when there’s a checkbox on the login form that says “stay connected”, then when you check it you do expect to stay connected for “a while”, i.e. at least until you come back to your browser the day after. I’ve seen a few websites, typically financial ones, where “stay logged in” would still result in your session being terminated the day after, or even after just a few hours. I get that they want to disconnect people “for security reasons”, but then maybe… just drop the “stay logged in” checkbox?

5) The CAPTCHA on the first login attempt

When CAPTCHAs became standard good practice on login forms, most of the places that used them would let you try to log in a few times (maybe 4 or 5) without any CAPTCHA. Only then, after a few failures, would you get a CAPTCHA. Basically, at that time, the CAPTCHA was a quality-of-life improvement, as it came as a replacement for things like “after 5 failed login attempts, lock the account for a while”.

But eh, this still required counting failed login attempts. Too much work. Eventually, webmasters gave in to laziness: why bother counting failed attempts when you can just shove a CAPTCHA down the user’s throat every single time? And here we are now: I can’t remember where or when I last saw a site that lets you log in without a CAPTCHA on your first attempt (while showing one after a few failures).

6) 2FA with a mandatory phone number

I’ve seen some websites recommend an authenticator app for 2-factor authentication rather than SMS because “SMS is not secure”. That’s true, so fair enough. Yet those same sites still force people to use SMS to set up their 2FA… How rational is that?

6b) Mandatory 2FA

Just… Don’t… Combined with 6) (which applies to absolutely all 2FA implementations I’ve seen so far), it’s nothing more than an excuse to require / collect phone numbers.

7) Mobile-centric design

A punch in the face for PC users. “Yeah, we know you’ve got a better device, but we decided we only care about shit devices and we want you to have the same shitty user experience as mobile users.”
Having a design that works nicely on mobile is nice. But it shouldn’t come at the cost of destroying the user experience on larger clients that are better suited to displaying web pages. No matter what designers tell you, it’s not possible to have the same experience on a 6″ screen as on a 20″ screen. Unless you decide to waste 14″ of it.

8) Not showing the date in blog posts / news articles

Seriously, wtf? Someone posts an article on Reddit, and you can’t figure out whether it was published yesterday or 2 years ago.

Footnotes

* They even show a message saying “We have sent you an email with your username and a link in order to reset your password” for any password reset request (whether the e-mail entered actually has an account or not). Which is the proper way to do things.

Posted in web development.