

TRENDnet TEW-805UB review (spoiler: it’s not good)


I usually use Ethernet, but for this machine I didn’t have a choice, so I went with the TRENDnet TEW-805UB, on a colleague’s recommendation.

First, it comes without a USB cable, unlike my previous TP-Link TL-WDN4200 (no longer available), so expect to spend an extra $10 on a good USB 3 cable if you don’t want it to block four USB ports when you plug it in directly.

Second, it has been discontinued, which wasn’t mentioned on the site where I bought it, and the drivers haven’t been updated since summer 2017 on the manufacturer’s website.

Finally, it works only when it feels like it on Windows 10. Sometimes it works right away; sometimes I have to unplug and re-plug it 2-3 times before it finally manages to connect to the WiFi network without instantly crashing. Another colleague, who uses Linux, has the same problem (to the point that he stopped using the adapter and now uses his phone as a WiFi dongle; not sure how he does that, though). So it seems this adapter only works properly on Mac.
And don’t get me started on the fluctuating and generally low speed: sometimes I get around 200 Mbps, but usually I’m stuck around 10 Mbps, on a 1 Gbps connection.


About the French version: I was going to post this review on LDLC, but I find their terms of service absolutely unacceptable: they grant themselves editorial rights (and every conceivable intellectual property right, really) over the reviews while at the same time (like Macron) leaving the author the entire legal responsibility. What a deal!

Original title: “Works every other time”


Posted in reviews.


How to remove the Twitch Prime loot notifications

Twitch has this big fat Twitch Prime icon with a crown in their top menu, and very regularly they add a notification to it about some “free” shit that people who pay a hefty subscription fee are entitled to in various boring games. I find that red notification pretty distracting, and the menu item itself is useless to me, as I know I’ll never subscribe to this. So here is how to hide it.

Since I already use uBlock Origin, I will make good use of it for a trivial solution:
1) open the uBlock Origin settings (if you don’t have uBlock Origin, obviously start by installing it)
2) go to the “My filters” tab
3) add a line containing: twitch.tv##.prime-offers
4) click “Apply changes” (or actually, “Ctrl + S” also works)

Et voilà, when you reload Twitch, the Prime menu item will be gone.

If for some reason you want to get rid of just the red icon (the red number) without removing the whole menu item, in step 3 change your line to: twitch.tv##.prime-offers__pill. This way you can still easily check out the offers whenever you feel like it, but you won’t get pestered by the notification number every time something new is added.

While we’re at it: if, like me, you only drop by occasionally, you’ve probably noticed that you get a popup with the chat rules every-single-time in every-bloody-channel. You can remove that nuisance too, by adding a filter for: twitch.tv##.chat-rules
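For reference, here is what ends up in “My filters” if you go for both tweaks (the class names were current when I wrote this; Twitch may rename them at any point, in which case you’ll need to inspect the page again):

! hide the whole Twitch Prime menu item
twitch.tv##.prime-offers
! hide the chat rules popup
twitch.tv##.chat-rules

If you prefer the notification-only option, replace the first filter with twitch.tv##.prime-offers__pill.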

Posted in Internet.


How to install MJ12node (the Majestic-12 distributed crawler) on Ubuntu 17.10 / 18.04

A long time ago, I posted a guide to install MJ12node on Debian 7.

Since then, the process has become a lot simpler, because Linux got better, Mono got better, and Majestic got better too. Still, I always run into minor difficulties when setting up a new node, and apparently I had always forgotten to take notes and save them here… until now. Because it’s pretty trivial, I’ll just list the commands with very minimalist comments. They may need some adjustments for newer versions, but should remain quite stable, at least for a while:

Dependencies:

sudo apt-get install mono-runtime libmono-corlib4.5-cil libmono-sqlite4.0-cil

(that may seem small, but it will actually install a bunch of packages, for a total of about 40 MiB)

Get the latest node, unpack it and make it executable (for some reason, out-of-the-box it isn’t):

wget https://www.majestic12.co.uk/files/mj12node/mono/mj12node_linux_v1716_net45.tgz
tar xf mj12node_linux_v1716_net45.tgz
cd MJ12node
sudo chmod 744 run.py

That’s it, you’re ready to start your node. Use this command if you don’t want the web interface to be launched (it has no password and is on by default, hurray for security…):

./run.py -t

Note that in the console, you can start and stop the web interface at any time by pressing S for Start and T for sTop (but the point of running with -t is that the interface stays off when the node auto-restarts, which is a setting I use and recommend you use too, to avoid crashes).
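One thing that isn’t in my original notes: if you start the node over SSH, you’ll probably want it to survive the end of your session. Assuming you have screen installed, a detached session should do the trick:

screen -dmS mj12node ./run.py -t

You can then reattach at any time with screen -r mj12node to reach the console commands mentioned above.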

Credit to myself, lol: I actually found a forum post I had made earlier while writing this post, although judging by the look of it, it was a summary of helpful tips provided by other forum users 😉

On a side note, I also found another thread with instructions to run a more up-to-date Mono version. I guess you could follow it if you really want cutting-edge Mono, but unlike a few years ago, the Mono version shipped with recent Ubuntu releases is stable enough. With MJ12node 1.7.16, whatever Mono is the default in Ubuntu 17.10, and my node configured to restart every 12h, I’ve never had the kind of crash that used to stop my node from crawling for good.

Posted in servers, software.


A Freenet disk space mystery

I’ve been running a bunch of Freenet nodes on and off for a while, and I’ve never had issues with them. But a couple of months ago, a server where I had installed it pretty much as usual started behaving strangely: its available disk space was being consumed steadily, at a fairly high rate of 4-5 GiB/day, as I could see from the ever-decreasing free space reported by Webmin and df -h.

I tried looking for what could be causing it, learning a few interesting commands in the process, for instance this one:
find / -mmin -2 -ls
to list all files modified in the last 2 minutes, and this one:
du -h / | grep -P '^[0-9\.]+[GT]'
to list all folders with a size measured in TiB or GiB.
The latter returned just the /usr folder, at a bit more than 1 GiB, and Freenet’s folder (actually a chain of subfolders down to the Freenet datastore), at the expected (considering my setup) and non-growing size of 1.6 TiB. Meanwhile, my 2 TB disk was almost 100% full, because I hadn’t had time to investigate sooner. All in all, I had about 250 GiB unaccounted for.
I also tried a tool called ncdu, which didn’t give interesting results either.

Oh, and by the way, if you’re wondering what happens when your disk is full: it’s not really an enjoyable experience. Apache HTTPd goes down (Webmin too, as far as I remember), the computer is globally slow (for instance, running du -h / | grep -P '^[0-9\.]+[GT]' took ages, while it had taken seconds when I ran it earlier with some space left), and some basic features like TAB auto-completion in the console stop working (you suddenly realize that, for some reason, they require a bit of disk space, even for what you would assume to be read-only, namely fetching the file/folder names in the current folder).

Anyhow, I was pretty puzzled and, since asking for help didn’t work, I decided I would just free some space, make an up-to-date backup of the single service I was running there, and reinstall, updating Ubuntu in the process. How to free some space? Well, Freenet was the obvious choice, as deleting just one file from the datastore would buy me more than a month, assuming the disk kept filling up at 5 GiB a day.
But I wanted to do it cleanly, so first I shut down Freenet the clean way, using ./run.sh stop. To my surprise, it worked without a hitch. I assumed that shutting down would require writing a few things held in RAM, so I expected at least a slowdown, but no: no error, not even an abnormally long delay.
Then I had to choose what to delete. I listed all the files and picked CHK-cache.hd, because 1) it was big, and 2) I figured I might want to restart the node later, and having a flushed cache sounded better than a flushed store or anything else. ls -la said CHK-cache.hd was 730 GiB.

I ran rm CHK-cache.hd. Something else about having a full drive: it makes deleting a file slow as hell. Via df -h, I could watch first MiBs and then GiBs slowly being freed, faster and faster as more space was freed up. And then, maybe half an hour later, the file was finally fully deleted. The whole 730 GiB file. And I now had 971 GiB of free space. Which, obviously, was 241 GiB too much. So this is the mystery: where were those 241 GiB hiding? How could ls -la report a size of 730 GiB for a file that was actually taking up 971 GiB? Not sure I’ll ever know. Was I lucky to pick this very file, or would another big Freenet datastore file have freed mysterious extra space too? (actually, only CHK-store.hd was as big; the other big files were all smaller than 40 GiB)
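If it ever happens again, the first thing I’ll try is comparing a file’s apparent size with the space it actually occupies on disk. This isn’t something I did at the time, but the standard tools can report both (using CHK-cache.hd as the example):

ls -la CHK-cache.hd
du -h --apparent-size CHK-cache.hd
du -h CHK-cache.hd
stat -c '%s bytes, %b blocks of %B bytes' CHK-cache.hd

The first two report the apparent size, the last two the actual allocation; files where the two disagree (sparse files, heavy fragmentation, preallocation) would be exactly the kind of suspect I was looking for.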

This was the first time I experienced that, after running Freenet on maybe a dozen other setups without a single disk space issue… I hope it won’t happen again, but at least now, if it does, I’ll know where to look, what to stop, and what to delete.

Posted in Linux, software.


Generating a large (>8KB) key with GnuPG 2.x

A long while ago, I posted a guide on how to compile GnuPG 1.x, for Windows, to generate a large key.
This time, here is a guide on how to compile GnuPG 2.x (2.2.7 at the time of writing) to generate a large key. However, because GnuPG 2 is so hard to compile for Windows, I’ll compile it “only” for Linux. If you’re not on Linux, you can just do this in a virtual machine; that’s convenient enough nowadays.

Starting remarks

Before going further, you may have noticed that the title mentions >8KB, not >4KB, even though 4096 bits is the first limit you’ll hit when trying to create a big key. This is because there is a simple solution that doesn’t require compiling if you “only” want an 8192-bit key: simply use the --enable-large-rsa flag, i.e.:

gpg --full-generate-key --enable-large-rsa

This guide might miss some dependencies, as I used a Linux install that wasn’t completely fresh (Kubuntu 18.04, upgraded from 17.04 via two rounds of do-release-upgrade), although I hadn’t used it much either. Notably, I already had Kleopatra and GnuPG installed (from the distribution-provided packages). If at some point you’re missing dependencies, maybe check out the list in the mini-guide there, and also simply go for apt-get install gcc. Hopefully, any error messages you encounter about missing dependencies will be helpful enough to point you to the right packages.

Onto the guide per se now.

Compiling

First, grab the source code for the required packages on this page:
https://www.gnupg.org/download/index.html
I needed: GnuPG, Libgpg-error, Libgcrypt, Libksba, Libassuan, nPth (surprisingly, I didn’t need Pinentry)

Compile and install everything but GnuPG. For instance, for Libgcrypt:

tar xf libgcrypt-1.8.2.tar.bz2
cd libgcrypt-1.8.2
./configure
make && sudo make install

(NB: you might also want to run make check)
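If you’d rather script the whole thing, a loop along these lines should work. The version numbers below are placeholders from around the time of writing, so grab the actual file names from the download page; note that libgpg-error has to be built first, since the other libraries depend on it:

for p in libgpg-error-1.31 npth-1.5 libgcrypt-1.8.2 libksba-1.3.5 libassuan-2.5.1
do
  tar xf "$p".tar.bz2
  (cd "$p" && ./configure && make && sudo make install)
done
sudo ldconfig

The final sudo ldconfig refreshes the shared-library cache so the freshly installed libraries are found.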

Now extract GnuPG’s source code, but don’t compile just yet. Open gnupg-2.2.7/g10/keygen.c: there are 2 lines you need to edit in order to remove the hard-coded 4096-bit cap.
Line 1642, change:

  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 4096);

for instance into:

  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 409600);

(NB: 409600 is way too large, but it doesn’t hurt either)

Line 2119, change:

      *max = 4096;

for instance into:

      *max = 409600;

Those line numbers are given for version 2.2.7, which is the current version as I write this. They might move around a bit in future versions; see the end of this post for larger snippets.
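If you’re on exactly version 2.2.7, the two edits can also be applied with sed. This is just a convenience sketch, so double-check the result afterwards (e.g. with grep -n 409600 g10/keygen.c), as the surrounding code will differ in other versions:

cd gnupg-2.2.7
sed -i 's/large_rsa ? 8192 : 4096/large_rsa ? 8192 : 409600/' g10/keygen.c
sed -i 's/\*max = 4096;/*max = 409600;/' g10/keygen.c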

Then you can compile and install GnuPG. The commands are similar to those used for the libraries we dealt with earlier, except for one tweak:

cd gnupg-2.2.7
./configure --enable-large-secmem
make
make check
sudo make install

The --enable-large-secmem flag is what allows GnuPG to allocate a large enough amount of secure memory, and hence deal with large keys without crashing.

Generating the key

Run gpg --version to make sure you’re running the compiled version, since your distribution’s version will most likely always be a bit behind (for instance, I just compiled version 2.2.7 and the version from my distribution, which I can still obtain via gpg2 --version, is 2.2.4).

Then you can move on to the key generation, as usual:

gpg --full-generate-key

(if you edited keygen.c like I did, do not use the --enable-large-rsa flag, as that path still caps the size at 8192 bits)

If you use the gpg-agent from your distribution’s installation, you’ll get a warning saying gpg-agent is older than gpg: it’s not a problem (and it can be avoided by running something like killall gpg-agent && gpg-agent --daemon --pinentry-program /usr/bin/pinentry).

The key generation will take _a lot_ of time. You can speed it up by using the computer (downloading a large file, moving the mouse, typing stuff…), which will help generate entropy.

Once your key is generated, you may want to edit its preferences to make sure compression is enabled: maybe the GnuPG I compiled lacked compression support, because my key ended up with “uncompressed” as the only accepted compression algorithm.

gpg --edit-key [keyID]
setpref AES256 AES192 AES 3DES SHA512 SHA384 SHA256 SHA224 SHA1 BZIP2 ZLIB ZIP Uncompressed MDC
save

(the save command is what actually writes the new preferences to the key before exiting the editor)

Appendixes

Fixing “agent_genkey failed: No pinentry”

If you get an error message saying:

gpg: agent_genkey failed: No pinentry
Key generation failed: No pinentry

it means that, for some reason, gpg-agent failed to load pinentry. Make sure pinentry is installed (apt-get install pinentry-qt should do the trick, or download, compile, and install it from GnuPG’s site, as we did for the other dependencies), then:

killall gpg-agent
gpg-agent --daemon --pinentry-program /usr/bin/pinentry

You might want to use locate pinentry first, to make sure /usr/bin/pinentry is the right path. (thanks to this ticket for pointers to this fix)

Larger code snippets

Here are longer snippets for more context:

Line 1642 is in function static int gen_rsa (int algo, unsigned int nbits, KBNODE pub_root, u32 timestamp, u32 expireval, int is_subkey, int keygen_flags, const char *passphrase, char **cache_nonce_addr, char **passwd_nonce_addr), in the following block:

  int err;
  char *keyparms;
  char nbitsstr[35];
  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 4096);

  log_assert (is_RSA(algo));

Line 2119 is in function static unsigned int get_keysize_range (int algo, unsigned int *min, unsigned int *max), in the following block:

    default:
      *min = opt.compliance == CO_DE_VS ? 2048: 1024;
      *max = 4096;
      def = 2048;
      break;

Why no ECC / Elliptic Curves?

ECC allows for much smaller keys and faster computation than RSA at an equivalent security level. For instance, 128-bit AES has roughly the same security as 256-bit ECC and 3300-bit RSA. But both RSA and ECC are vulnerable to quantum computers and, from what I understand, a quantum computer will need a number of qubits proportional to the key size in order to crack it. Tada! What used to be a weakness of RSA, the key length, turns out to be (kind of) a strength.

This is why I’m still generating an RSA key and not an ECC one. Hopefully, next time I renew my key, I’ll have a post-quantum option.
Sorry for not digging a bit deeper into the details here; I’m writing this from memory, as struggling with GnuPG’s compilation and writing the rest of this post drained me more than I expected.

Why don’t we also change SECMEM_BUFFER_SIZE?

A link I mentioned earlier suggests increasing SECMEM_BUFFER_SIZE. I suspect this would allow creating even larger keys without running into memory allocation failures. However, this would also allow you to create keys so big that only your modified GnuPG can handle them. I don’t think that’s an acceptable option, but if you really, really want a huge key and don’t care if it’s hard to use practically, I suppose you can go ahead and increase SECMEM_BUFFER_SIZE.

Posted in privacy, programming.


Saving Corsair mouse settings into the mouse

When my nice Logitech MX510 mouse died, I couldn’t find an equivalent replacement at Logitech and eventually went for a Corsair Sabre RGB.

It’s not quite the same, but overall it has as many usable buttons (the one right under the wheel is just unusable to me), and it looks like the wheel won’t catch dirt as easily. It also has lots of lights, which is quite a change coming from a mouse with no LEDs at all (although I’d rather skip those and pay less…). The settings (speed, lighting, button mapping) are quite complete; however, they only kick in when the control software (“Corsair Utility Engine”) starts, which I find a bit annoying. But this was also the case, to some extent, with my previous mice.

Thankfully, there is a feature to store said settings in the mouse. The only trick is that it can be bloody hard to find: it’s a little button with an icon that looks kind of like a memory card, located to the right of the left menu, and it only appears when certain conditions are met. So here’s a little “map” to find it easily:

[Screenshot: Corsair Utility Engine - saving a hardware profile]

Those are the 3 key points: make sure the mouse is selected (the right-most highlighted button, in yellow), and that the profiles list is displayed too (toggled by clicking the left-most highlighted button). Then the third, tiny button should appear and let you save your profile into the mouse.

I saved these URLs because these posts helped me, but to be honest I don’t really remember what their interesting points were… Still, credit where due ^^
https://www.reddit.com/r/Corsair/comments/702x1j/corsair_k95_problem_creating_hardware_profiles/
http://forum.corsair.com/v3/showthread.php?t=175568

Posted in hardware, Windows.


Linux Bash scripting: multi-line command and iterating over an array

It’s simple, really, but I rarely use bash so I’m always a little lost when I try to do something not absolutely trivial in it…
I’ll just put my script here and explain below:

images=("pic 1.jpg" \
"pic 2.jpg" \
"pic 3.jpg")

for i in "${images[@]}"
do
   echo "$i"
   node ../path/to/myscript.js --source "$i"
done

What it does is:
– define an array: myArray=( elem1 elem2 ) (careful: no spaces around the =)
– use multiple lines. To do so, end each non-final line with “\” to indicate continuation on the next line (inside an array definition the backslashes are actually optional, since newlines are allowed there, but they are required when splitting a regular command)
– then iterate over it, echoing each element and running a command (with "$i" quoted because the elements of my array contain spaces; see the glob variant just below)
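By the way, if the file names follow a pattern, you don’t have to type them all out: a glob can build the array, and each match remains a single element even with spaces in the names. A small variant of the script above:

images=( *.jpg )

for i in "${images[@]}"
do
   echo "$i"
   node ../path/to/myscript.js --source "$i"
done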

This (very recent) source was helpful: Bash For Loop Array: Iterate Through Array Values

Posted in Linux.


How to compare 2 folders using Windows PowerShell

Just copying here the Windows PowerShell script from this blog post because I really don’t want to ever lose it:

$folderA = Get-ChildItem -Recurse -path C:\folderA\
$folderB = Get-ChildItem -Recurse -path C:\folderB\
Compare-Object -ReferenceObject $folderA -DifferenceObject $folderB

I use 2 different computers to deploy a Serverless project I work on, and I noticed the deployment artifacts had different sizes depending on the computer I deployed from :s So I grabbed an archive from each deployment, unzipped them, and used this to compare them.
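Since my problem was specifically about sizes, a variant that compares explicitly on name and size would have flagged same-named files with different sizes, which the default comparison (based on the objects’ string representation) can miss:

$folderA = Get-ChildItem -Recurse -Path C:\folderA\
$folderB = Get-ChildItem -Recurse -Path C:\folderB\
Compare-Object -ReferenceObject $folderA -DifferenceObject $folderB -Property Name, Length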

It turned out the culprit was Ava, which puts a bunch of js and js.map files in node_modules/.cache/ava. Excluding the whole node_modules/.cache/** from the Serverless deployment (serverless.yml > package > exclude, sketched below) allowed me to shrink the deployment artifact by a few hundred kB. On a side note, it appears that Ava leaves behind a lot of trash in this folder, so you may want to purge it manually from time to time.
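For reference, the exclusion looks something like this in serverless.yml (a sketch; merge it into your existing package section if you already have one):

package:
  exclude:
    - node_modules/.cache/**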

Posted in JavaScript / TypeScript / Node.js, Windows.


How to disable OneSyncSvc (and other services)

I recently set up a new computer at work, and it uses Windows 10, that famous Windows that gives you even less control over your computer than the previous ones, with the control shrinking further with every new “Creators Update” or whatever fancy name they give those Service Packs.
As usual, I went to Services to locate and disable the ones that seem useless. One stood out: “OneSyncSvc_381d9”. When I tried to disable it via the “Services” control panel, it wouldn’t let me, saying “The parameter is incorrect” whenever I tried to set its startup type to “Disabled” or even just “Manual”.

So I looked for another way to disable it, and the registry (if you still don’t know it, just run “regedit”) did the trick. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OneSyncSvc_381d9, then modify the Start DWORD value: 0 means “boot”, 1 means “system” (not sure what those two do; I don’t see any service using those values), 2 means start automatically, 3 means start manually, and 4, the one that interests us here, means disabled.
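If you prefer the command line, the same change can be made with reg.exe from an elevated prompt. Note that the _381d9 suffix is specific to my machine (these per-user services get a pseudo-random suffix), so check the exact key name in regedit first:

reg add HKLM\SYSTEM\CurrentControlSet\Services\OneSyncSvc_381d9 /v Start /t REG_DWORD /d 4 /f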

All other services can be configured the same way, by going to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\[service name], although OneSyncSvc is actually the only service I didn’t manage to tame via the Services panel.

Bonus: little list of useless services that you can most likely smash:
– Connected User Experiences and Telemetry
– Downloaded Maps Manager
– dmwappushsvc

Posted in Windows 10.


How to hide processes from other users in Linux’s “top”

A few months ago, I had to set up a server that a bunch of people would connect to in order to directly access a MariaDB SQL database, along with SSH access for tunneling. A few users would also use that server for other purposes, and I didn’t want everyone to be able to view everyone else’s processes, which to my surprise was possible by default (if any user runs top, they can see everyone’s running processes :s).

Starting with Linux kernel 3.2, a setting was (finally) added to prevent unprivileged users from seeing each other’s processes. Basically, you need to set the hidepid option to 2 for the /proc filesystem:

nano /etc/fstab
– Find the line starting with “proc”
– Add hidepid=2 to the options

For instance, the line:

proc            /proc   proc    defaults      0       0

Becomes

proc            /proc   proc    defaults,hidepid=2      0       0

Then don’t forget to save and reboot (or remount /proc on the fly, as shown below).

Note that sometimes the proc line can be missing entirely (I have this case on a VPS); I’m not sure what the proper fix is then… maybe adding the proc line as quoted above would work(?)
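Whether or not your fstab has a proc line, the option can also be applied immediately, without rebooting. This wasn’t in my original notes, but remounting /proc with the option should work:

mount -o remount,hidepid=2 /proc

Then run top as an unprivileged user to check that other users’ processes are no longer visible.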

Posted in Linux, servers.