
How to enable IPv6 on Ubuntu Server 18.04

Last week or so, I migrated this site to a new server (OVH has this strange habit of pushing clients to migrate from older offers to new ones not only by releasing upgraded offers but also by raising the prices of the old ones for current subscribers :x). In the process, I noticed that IPv6 was put forward more (it used to be mentioned just in the control panel; now it's also in the server activation e-mail, right below the IPv4 address). So I figured, let's use it this time.

I first thought it was as simple as adding an AAAA record to the DNS zone in Bind. So I did. Though, it didn't work: the server doesn't actually reply to queries sent to its IPv6 address. After a quick search, I found that this was because IPv6 wasn't enabled/configured on the server.
At first I tested that with an online tool, but then I got a more convenient way using the console:
ping6 -c 1 ipv6.google.com
The reply I got was:
connect: Network is unreachable

OVH provides a guide to configure IPv6. Sadly, as of today, it’s outdated and doesn’t work with Ubuntu 18.04.
So I kept looking and eventually found that I had to use netplan, as follows:

1) Go to folder /etc/netplan

2) Create a file named (for instance) 90-ipv6.yml with the following content:

network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: ab:cd:ef:12:34:56
            set-name: ens3
            addresses:
              - 1234:5678:9:3100:0:0:0:abc/64
            gateway6: 1234:5678:9:3100:0000:0000:0000:0001

NB: obviously, replace the interface name (ens3), the MAC address, the address and the gateway with your values. You should be able to find the interface name and MAC address in file /etc/netplan/50-cloud-init.yaml, and the address and gateway should be provided to you by your host. Note that even if your host only provides a /128, you need to enter it as a /64 in order for this to work for some reason.
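If 50-cloud-init.yaml isn't there, the interface names and MAC addresses can also be read straight from the system (a quick sketch; `ip link show` gives the same information if you have iproute2):

```shell
# One entry per interface under /sys; prints each interface's MAC address.
for i in /sys/class/net/*; do
  printf '%s %s\n' "$(basename "$i")" "$(cat "$i/address")"
done
```

Look for your main interface (ens3 in the example above) and its `link/ether` address.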

3) This is not over yet, you need to run these commands in order to apply your changes:

netplan generate
netplan apply

And that’s it. It should work without a reboot (but if it doesn’t, I guess you can try rebooting), and ping6 should now work:

root@vps123456:/etc/netplan# ping6 -c 1 ipv6.google.com
PING ipv6.google.com (2607:f8b0:4004:810::200e) 56 data bytes
64 bytes from 2607:f8b0:4004:810::200e: icmp_seq=1 ttl=50 time=91.2 ms

--- ipv6.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 91.214/91.214/91.214/0.000 ms

And your AAAA record should work too.
Note that you might need to adjust your Apache HTTPd virtual hosts configuration. I didn’t need to, because my virtual hosts don’t use the IP:

<VirtualHost *:80>
 DocumentRoot "/path/to/docs/"
 RewriteEngine On
 RewriteCond %{HTTPS} off
 # RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L]
 RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
 <Directory "/path/to/docs/">
  Require all granted
  Options -Indexes
  AllowOverride All
 </Directory>
</VirtualHost>

<VirtualHost *:443>
 DocumentRoot "/path/to/docs/"
 <Directory "/path/to/docs/">
  Require all granted
  Options -Indexes
  AllowOverride All
 </Directory>
 SSLEngine on
 SSLProtocol all -SSLv2 -SSLv3
 SSLHonorCipherOrder On
 SSLCertificateFile /etc/letsencrypt/live/
 SSLCertificateKeyFile /etc/letsencrypt/live/
 SSLCertificateChainFile /etc/letsencrypt/live/
 SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
</VirtualHost>

But if yours do, you might find this guide useful.

– How to add an IPv6 address and default route with netplan in Ubuntu 17.10 artful? – Ask Ubuntu
– (FR) Impossible de configurer IPv6 (Netplan <3 et Ubuntu 18.04) – Cloud / VPS – OVH Community

Posted in servers.


Still cleaning out my closet, I found that interesting piece that I saved back in August 2013.
Back in the days, I stumbled upon a creepy site called “bannasties.com”, which is now defunct (the domain name was abandoned and it’s actually not registered at all anymore). Not sure how I found it, probably by browsing some other site that used it for “protection” against “evil bots”. I always use some kind of proxy so I tend to trigger those kinds of paranoid protections, as well as getting a fair share of “Access denied” pages and a truckload of captchas – thanks Google and Cloudflare, which, fun fact, was spelled CloudFlare with a capital F back in 2013.

Anyway, all I kept in my draft was a verbatim copy of the site’s front page (or maybe it was the about page). And it’s a fun (although creepy) read. Even more so when you think that now there’s that little something called General Data Protection Regulation (GDPR).

The purpose of bannasties.com is to collect data on spammers/hackers/scrapers/crawlers and other pests that hammer web sites so that I, Lucia, can look at aggregated information. Others can find pages because it is indexed by google and I think it’s nice for people who were looking for a specific IP or host to be able to learn a few details. But it is not my intention to create a resource that permits the world to do “research”. Please don’t try to use this to conduct your own research, the filters on this site are really really tight and constantly changing and you are likely to get banned. Seriously, the rules at bannasties are draconian. The reason for this is that the site is boring to normal humans but candy for bots. If you are human, wish to search a little bit and also want to avoid getting banned:
Don’t be a bot. Don’t even ‘look’ like a bot. Bot visits are strictly prohibited.
Always start your search from here or by way of a search engine. It’s currently fairly safe to search for banned IPs, user agents or Hosts by using the appropriate search form. Click a link to load the search form:
Search Form to Find an IP. Example:
Search Form to Find a User Agent. This searches for partial strings provided they contain at least 4 characters (e.g. “majestic”, “80legs”, “mozilla”).
Search Form to Find a Host. This searches for partial strings provided they contain at least 4 characters (e.g. “kimsufi”, “server”).
Do not try to search by guessing URI’s. Just don’t do it. The probability you will look like a bot is too high.
Don’t submit more than 3 queries an hour. Queries are submitted if you load a page containing a ‘?’ in the address or press a submit button. Once you’ve submitted more than 3, wait an hour then come back. This site is not intended to permit anyone other than the owner to do extensive personal research.
Don’t use a proxy server or vpn. At. All. (If I detect a proxy, you will be banned. )
Do pass referrers in the default way. That is: No blank referrers. No spoofed referrers. No fake referrers.
Don’t be from a spammy country. The list I consider spammy is constantly changing. But these countries will always be on it: China and Brazil. So are some other countries.
Don’t originate your request from a server or web hosting company (e.g. dreamhost, bluehost, hostgator etc.). Use an ISP whose main service is providing internet connectivity (e.g. comcast, at&t, verizon etc.). By the way: if this rule means you can’t visit bannasties from work, so be it.
Don’t use a mobile device. I know lots of people use those, but spammers, hackers and scrapers use them. If I detect them here, I ban them.
Accept cookies from my site. (You may reject third party cookies.)
If you want permission to visit more, I suggest you do something that will send you the ‘banned’ message (for example, running too many queries in an hour). You will be presented “the scary page”. Then find the email link, click and email me. Explain to me why you want to run numerous searches and if I approve of the reason I might be able to arrange something. But.. really… the number of crawlers here is ridiculous. This site is mostly intended to be for me and provides a very limited amount of access to others who might have been sent here by Google.
Even following these rules does not guarantee you won’t get banned. There are bots ‘out there’ doing really ‘interesting’ things and I write rules based on behavior I observe. If your browser or bot does something that looks really weird, you are likely to get banned.
Privacy policy

This site grants you zero privacy. All connections are monitored. I set cookies; I check them. I change the information set on cookies all the time. Those caught by the spam/hack software are filtered and logged. Data collected when you visit this site is not kept private, could and likely will be discussed publicly especially if you turn out to be a server, vpn, violate any rules described above or even any I dream up in the future.

The End ^^

Posted in privacy, web filtering.

Fixing a couple of TypeScript compilation errors

This is an old draft that I never got to finish but that I don’t want to throw away either. I guess this is where this site is really used as a notepad ^^

Error 1:
node_modules/@types/graphql/subscription/subscribe.d.ts(17,4): error TS2304: Cannot find name 'AsyncIterator'.

=> add “esnext.asynciterable” lib to file tsconfig.json

Error 2:
node_modules/aws-sdk/lib/config.d.ts(39,37): error TS2693: 'Promise' only refers to a type, but is being used as a value here.

=> add “es2015.promise” lib to file tsconfig.json

My tsconfig.json file as it was when I started writing this (it should still be good, I’ve just added more stuff since then):

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "noImplicitAny": true,
    "lib": [
      "es5",
      "dom",
      "es2015.promise",
      "esnext.asynciterable"
    ],
    "skipLibCheck": false,
    "alwaysStrict": true,
    "removeComments": true
  },
  "exclude": [
    "node_modules"
  ],
  "typeRoots": [ "node_modules/@types" ]
}

Posted in JavaScript / TypeScript / Node.js, published drafts.

Blackjack In Space score cheat

A bit off-topic I guess, although this notepad doesn’t have a very targeted topic aside from “computer stuff” (and a bit of stats, and a bit of health).

Blackjack In Space is this game and I found it has an interesting/unusual way of storing its score. The score is stored in the registry, in value HKEY_CURRENT_USER\SOFTWARE\JDRumble\Blackjack In Space\BlackjackBalance_h[some_number] (I suspect some_number is unique so I edited it out), which regedit identifies as an “invalid DWORD (32-bit) value”.

I didn’t search much, but I didn’t find an obvious way to compute the stored value from the actual score value. Instead, I played a few hands and noted the correspondence between score and stored value, here they are:
(format: [score] = [regedit value])
100 = 5940
200 = 6940
400 = 7940
800 = 8940
1600 = 9940
3200 = A940
6400 = B940
16000 = 40CF40

Feel free to post in the comments if you find how this is encoded, I’d be curious to know ^^
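For what it’s worth, here is one encoding that reproduces every row above: the registry value looks like the score packed as a little-endian IEEE 754 double with the leading zero bytes dropped (which would also explain why regedit flags it as an “invalid DWORD” – it’s 8 bytes of binary data, not 4). A quick check, sketched via a python3 heredoc since shell can’t do float bit-twiddling:

```shell
python3 - <<'EOF'
import struct
# Pack each score as a little-endian IEEE 754 double, drop the leading
# zero bytes, and print the rest as hex - this matches the table above.
for score in (100, 200, 400, 800, 1600, 3200, 6400, 16000):
    print(score, '=', struct.pack('<d', float(score)).lstrip(b'\x00').hex().upper())
EOF
```

Running it prints exactly the table above (100 = 5940 … 16000 = 40CF40). Incidentally, the _h[some_number] suffix on the value name is typical of Unity’s PlayerPrefs registry storage, which would fit this picture, though that part is a guess.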

Posted in games.

TRENDnet TEW-805UB review (spoiler: it’s not good)

(version française à la fin)

I usually use Ethernet, but for this machine I didn’t have a choice, so I took the TRENDnet TEW-805UB, on a colleague’s recommendation.

First, it comes without a USB cable, unlike my previous TP-Link TL-WDN4200 (not available anymore), so expect to spend an extra $10 for a good USB 3 cable if you don’t want to neutralize 4 USB ports when plugging it.

Second, it’s been discontinued (which wasn’t mentioned on the site where I bought it). The drivers haven’t been updated since summer 2017 on the manufacturer’s website.

Finally, it randomly works or doesn’t on Windows 10. Sometimes it works right away, sometimes I have to unplug and re-plug it 2-3 times before it finally manages to connect to the WiFi network without instantly crashing. Another colleague, who uses Linux, has the same problem (to the point that he stopped using the adapter and now uses his phone as a WiFi dongle – not sure how he does that though). So it seems that this adapter only works properly on Mac.
And don’t get me started on the fluctuating, and generally low speed: sometimes I get around 200Mbps, but usually I’m around 10Mbps, on a 1Gbps connection.

French version:

J’allais poster cette évaluation sur LDLC, mais je trouve leurs CGUs absolument inacceptables : à la fois ils s’octroient des droits éditoriaux (et tous les droits intellectuels possibles et imaginables, en fait) sur les évaluations et en même temps (comme Macron) ils laissent à l’auteur l’entière responsabilité juridique. Quel bon deal !

Titre initial: Marche une fois sur 2

J’utilise habituellement ethernet mais pour cette machine je n’avais pas le choix et j’ai donc pris cette clé conseillée par un collègue.

Tout d’abord, elle est fournie sans cable USB, contrairement à ma précédente TP-Link TL-WDN4200 (plus en vente), donc prévoir 10€ de plus pour un bon cable USB 3 si vous ne voulez pas neutraliser 4 ports USB en la branchant.

Ensuite, c’est une fin de série, ce qui n’apparaît nulle part sur sa fiche. Les drivers n’ont pas été mis à jour depuis l’été 2017 sur le site du constructeur.

Enfin, elle marche quand elle veut sous Windows 10. Des fois elle marche directement, des fois je dois la débrancher et rebrancher 2-3 fois avant qu’elle daigne se connecter au réseau WiFi sans planter instantanément. Un autre collègue sous Linux a le même problème, bref on dirait qu’elle ne marche correctement que sous Mac.
Et je ne parle même pas du débit fluctuant et généralement bas (il m’arrive d’avoir du 200Mbps, mais en général je me traîne dans les 10Mbps, sur une connection 1Gbps)

Posted in reviews.

How to remove the Twitch Prime loot notifications

Twitch has this big fat Twitch Prime icon with a crown in their top menu, and very regularly they add some notification in it about some “free” shit that people who pay a hefty subscription fee are entitled to in various boring games. I find that red notification pretty distracting, and the menu item itself is actually useless as I know I’ll never subscribe to this. So here is how to hide it.

Since I already use uBlock Origin, I will make good use of it for a trivial solution:
1) open the uBlock Origin settings (if you don’t have uBlock Origin, obviously start by installing it)
2) go to the “My filters” tab
3) add a line containing:
4) click “Apply changes” (or actually, “Ctrl + S” also works)
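As a reminder of the general shape, uBlock Origin cosmetic filters are `domain##CSS-selector` lines; the selector below is a hypothetical placeholder (Twitch’s markup changes often), not necessarily the one I used:

```adblock
! Hypothetical example only: hide an element on twitch.tv by CSS selector
twitch.tv##div[data-a-target="prime-offers-icon"]
```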

Et voilà, when you reload Twitch, the Prime menu item will be gone.

If for some reason you want to get rid of just the red icon (the red number) without removing the whole menu item, in step 3 use a filter targeting only the notification badge instead. This way you can still easily check out the offers whenever you feel like it, but you won’t get pestered by the notification number every time something new is added.

While we are at it, if like me you only come by very occasionally, you probably noticed that you get a popup with chat rules every-single-time in every-bloody-channel. You can remove that nuisance too, by adding a similar filter for the chat rules popup element.

Posted in Internet.

We won’t have a decentralized Internet, and it’s your fault

Well, possibly it’s not your own fault (you’re here, after all), but it’s most people’s fault. Here’s why.

1) People are lazy

Maintaining a contact book with names and e-mails? That’s so 2010! With half the internet population on Facebook, or on a few other social networks each more populated than a couple of continents, almost all your contacts are in your “apps”, with name and picture. So convenient. And those who aren’t there? Well you just don’t bother talking to them much, peer-pressuring them into becoming part of the Big Data horde.

2) People crave censorship

Seriously, they do. Of course, they don’t like to be censored themselves. But they love censoring contents they don’t like. They can’t imagine not censoring contents they don’t like.
A few months ago, I went to a presentation (well, it was more of a workshop actually) about ZeroNet. One of the first question from the (small and select) audience was “Can I block contents?”. Not as in “can I hide it from my view?”, but as in “can I prevent my node – if not the network – from spreading it?”. By the way, ZeroNet makes this possible (and actually really trivial and accessible) so if you’re committed to content-neutrality you’ll want to prefer things like Tor and Freenet.

And the “problem” with a decentralized Internet is that you can’t censor it. So people just won’t support it: as soon as they see something they don’t like and can’t get removed, they run back to Big Tech, which can trivially be bullied into removing anything deemed insufficiently politically correct (if they don’t just do it by themselves before anyone even asks).

3) People cheer for monopolies

Well, not monopolies, but quasi-monopolies or ultra-dominant actors. Which is pretty much the same, apart from the fact that it provides the “it’s not a monopoly” defense, in addition to “it’s okay as long as it’s cheap” (where the dollar-equivalent value of privacy is zero). They want to use a service that all their friends already use, making it a nightmare for new actors to pop up, and an even worse nightmare for companies to remain mid-sized: either you take all the market, or you get just a few customers who are only here because they care about diversity (and who likely won’t be numerous enough to make the business sustainable).

NB: Just stumbled on this old draft and figured I’ll never really finish it, so here it is, as is

Posted in privacy.

How to install MJ12node (the Majestic-12 distributed crawler) on Ubuntu 17.10 / 18.04

A long time ago, I posted a guide to install MJ12node on Debian 7.

Since then, the process became a lot simpler because Linux got better, Mono got better and Majestic got better too. But still, I always have minor difficulties when trying to set up a new node, and apparently I’ve always forgotten to take notes and save them here… so far. Because it’s pretty trivial, I’ll just list the commands with very minimalist comments. They may need some adjustments for versions, but should remain quite stable, at least for a while:


sudo apt-get install mono-runtime libmono-corlib4.5-cil libmono-sqlite4.0-cil

(that may seem small, but it will actually install a bunch of packages, for a total of about 40 MiB)

Get the latest node, unpack it and make it executable (for some reason, out-of-the-box it isn’t):

tar xf mj12node_linux_v1716_net45.tgz
cd MJ12node
sudo chmod 744

That’s it, you’re ready to start your node. Use this command if you don’t want the web interface to be launched (it has no password and is on by default – hurray for security…):

./ -t

Note that in the console, you can at any time start and stop the web interface by pressing S for Start and T for sTop (but the point of running with -t is that it will remain off when the node auto-restarts, which is a setting I use and I recommend you to use, to avoid crashes)

Credits to myself lol – I actually found a forum post I made earlier while writing this post, although I assume from the look of it that it was a summary of helpful tips provided by other forum users 😉

On a side note, I also found another thread with instructions to run a more up-to-date Mono version. I guess you could do it if you really want to use cutting-edge Mono, but unlike a few years ago, the Mono version provided in recent Ubuntu distributions is stable enough. With MJ12node 1.7.16 and whatever Mono is the default in Ubuntu 17.10, and my node configured to restart every 12h, I’ve never had any crash that would cause my node to stop crawling forever, like I used to have before.

Posted in servers, software.

A Freenet disk space mystery

I’ve been running, on and off, a bunch of Freenet nodes for a while, and I’ve never had issues with them. But a couple of months ago, a server where I’d installed it pretty much as usual started behaving strangely: its available disk space was getting consumed steadily, at a fairly high rate of 4-5 GiB/day, as I could see from the ever-decreasing available space shown by Webmin and df -h.

I tried looking for what could be causing it, learning a few interesting commands in the process, for instance this one:
find / -mmin -2 -ls
to see all written files in the last 2 minutes, and this one:
du -h / | grep -P '^[0-9\.]+[GT]'
to list all folders with a size measured in TiB or GiB.
The latter returned just the /usr folder, with a bit more than 1 GiB, and Freenet’s folder (actually a subfolders chain to the Freenet datastore), with the expected (considering my setup), non-growing size of 1.6 TiB. This, while my 2TB disk was almost 100% full, because I didn’t have time to investigate it sooner. All-in-all, I had about 250 GiB unaccounted for.
I also tried a tool called ncdu, which didn’t give interesting results either.
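The same hunt can also be done without pattern-matching on size units, since sort -h understands human-readable suffixes (a sketch):

```shell
# Top 20 biggest directories, largest first; -x stays on one filesystem.
du -xh / 2>/dev/null | sort -rh | head -n 20
```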

Oh and by the way, if you’re wondering what happens when your disk is full, it’s not really an enjoyable experience: Apache HTTPd goes down (Webmin too, as far as I remember), the computer is globally slow (for instance, running du -h / | grep -P '^[0-9\.]+[GT]' took ages, while it took seconds when I ran it earlier with some space left), and some basic features like TAB auto-completion in the console don’t work (you suddenly realize that for some reason they require a bit of disk space, even for what could be assumed to be a read-only operation, namely fetching the file/folder names in the current folder)

Anyhow, I was pretty puzzled and, since asking for help didn’t work, I decided I would just free some space, do an up-to-date backup of the single service I was running there, and reinstall, updating Ubuntu in the process. How to free some space? Well, Freenet appeared to be the obvious choice, as deleting just one file from the datastore would give me more than a month of time, assuming the disk would keep filling up at 5 GiB a day.
But I wanted to do it cleanly, so first I tried shutting down Freenet the clean way, using ./ stop. To my surprise, it worked without a hitch. I assumed that shutting down would require writing a few things that were in RAM, so I expected at least a slowdown, but no: no error, not even an abnormally large delay.
Then I had to choose what to delete. I listed all files and I picked CHK-cache.hd, because 1) it was big and 2) I thought maybe I’d want to restart the node later and having a flushed cache sounded better than having a flushed store or other things. ls -la said CHK-cache.hd was 730 GiB.

I ran rm CHK-cache.hd. Something else about having a full drive: it makes deleting a file slow as hell. I could follow, via df -h, first the MiBs and then the GiBs slowly getting freed, which became faster and faster the more space was already freed up. And then, maybe half an hour later, the file was finally fully deleted. The whole 730 GiB file. And I now had 971 GiB of free space. Which, obviously, was 241 GiB too much. So this is the mystery, where did those 241 GiB vanish? How come ls -la returned a size of 730 GiB for a file which was actually taking 971 GiB? Not sure I’ll ever know. Was I lucky I picked this very file, or would another big Freenet datastore file have freed mystery extra space too? (actually, only CHK-store.hd was as big, the other big files were all smaller than 40 GiB)
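For reference, ls -la reports a file’s apparent size, while df and du count allocated blocks, and the two can legitimately differ. Sparse files are the classic demonstration (in the opposite direction, apparent &gt; allocated); a sketch with a hypothetical /tmp path:

```shell
# Create a 1 GiB sparse file: huge apparent size, almost nothing allocated.
truncate -s 1G /tmp/sparse-demo
stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' /tmp/sparse-demo
du -h /tmp/sparse-demo                    # actual disk usage: ~0
du -h --apparent-size /tmp/sparse-demo    # apparent size: 1.0G
rm /tmp/sparse-demo
```

Whether something like this explains a file taking 241 GiB more than its apparent size, I can’t say.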

This was the first time I experienced that, after running Freenet on maybe a dozen other setups without ever a single disk space issue… I hope it won’t happen again, but at least now if it does I’ll know where to look, what to stop, and what to delete.

Posted in Linux, software.

Generating a large (>8kb) key with GnuPG 2.x

A long while ago, I posted a guide on how to compile GnuPG 1.x, for Windows, to generate a large key.
This time, here is a guide on how to compile GnuPG 2.x (2.2.7 at the time of writing) to generate a large key. However, because GnuPG 2 is so hard to compile for Windows, I’ll compile it “only” for Linux; you’ll find at the end a link to another guide that covers cross-compiling on Linux for Windows. If you’re not on Linux, you can just do this in a virtual machine, it’s practical enough nowadays.

Starting remarks

Before going further, you may have noticed that the title mentions >8KB, not >4KB, even though 4096 bits is the first limit you’ll hit when trying to create a big key. This is because there is a simple solution that doesn’t require compiling if you “only” want an 8192-bit key: simply use the --enable-large-rsa flag, i.e.:

gpg --full-generate-key --enable-large-rsa

This guide might miss dependencies, as I used a Linux install that wasn’t completely fresh (Kubuntu 18.04, upgraded from 17.04 via 2x do-release-upgrade), although I hadn’t used it much either. Notably, I already had Kleopatra and GnuPG installed (from the distribution-provided packages). If at some point you have missing dependencies, maybe check out the list in the mini-guide there and also simply go for apt-get install gcc. Hopefully, the error messages you may encounter about missing dependencies will be helpful enough in guiding you to the right packages.

Onto the guide per se now.


First, grab the source code for the required packages from the GnuPG download page:
I needed: GnuPG, Libgpg-error, Libgcrypt, Libksba, Libassuan, nPth (surprisingly, I didn’t need Pinentry)

Compile and install everything but GnuPG. For instance, for Libgcrypt:

tar xf libgcrypt-1.8.2.tar.bz2
cd libgcrypt-1.8.2
./configure
make && make install

(NB: you might want to also run make test)

Now extract GnuPG’s source code but don’t compile just yet. Open gnupg-2.2.7/g10/keygen.c, and there are 2 lines you need to edit in order to remove the hard-coded 4096 bits cap:
Line 1642, change:

  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 4096);

for instance into:

  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 409600);

(NB: 409600 is way too large, but it doesn’t hurt either)

Line 2119, change:

      *max = 4096;

for instance into:

      *max = 409600;

Those line numbers are given for version 2.2.7, which is the current version as I’m writing these lines. They might move around a bit in future versions; see the end of this post for larger snippets.

Then you can compile and install GnuPG. The commands are similar to those for the libraries we dealt with earlier, except for one tweak:

cd gnupg-2.2.7
./configure --enable-large-secmem
make
make test
make install

The --enable-large-secmem flag is what will allow GnuPG to allocate a large enough amount of memory, hence to deal with large keys without crashing.

Generating the key

Run gpg --version to make sure you’re running the compiled version, since your distribution’s version will most likely always be a bit behind (for instance, I just compiled version 2.2.7 and the version from my distribution, which I can still obtain via gpg2 --version, is 2.2.4).

Then you can move on to the key generation, as usual:

gpg --full-generate-key

(if you edited the source like I did, do not use the --enable-large-rsa flag, as it will still enforce a size limit of 8192 bits)

If you use the gpg-agent from your distribution’s installation, you’ll get a warning saying gpg-agent is older than gpg: it’s not a problem (but it can be avoided using something like killall gpg-agent && gpg-agent --daemon --pinentry-program /usr/bin/pinentry).

The key generation will take _a lot_ of time. You can speed it up by using the computer (downloading a large file, moving the mouse, typing stuff…), which will help generate entropy.

Once your key is generated, you may want to edit its preferences to make sure compression is enabled. Maybe the GnuPG build I compiled didn’t have compression support, but the result was that my key had “uncompressed” as the only accepted compression:

gpg --edit-key [keyID]
setpref AES256 AES192 AES 3DES SHA512 SHA384 SHA256 SHA224 SHA1 BZIP2 ZLIB ZIP Uncompressed MDC
save


Fixing “agent_genkey failed: No pinentry”

If you get an error message saying:

gpg: agent_genkey failed: No pinentry
Key generation failed: No pinentry

it means that for some reason gpg-agent failed to load pinentry. Make sure pinentry is installed (apt-get install pinentry-qt should do the trick, or downloading and compiling and installing it from GnuPG’s site, like we did for the other dependencies), then:

killall gpg-agent
gpg-agent --daemon --pinentry-program /usr/bin/pinentry

You might want to use locate pinentry first, to make sure /usr/bin/pinentry is the right path. (thanks to this ticket for pointers to this fix)

Larger code snippets

Here are longer snippets for more context:

Line 1642 is in function static int gen_rsa (int algo, unsigned int nbits, KBNODE pub_root, u32 timestamp, u32 expireval, int is_subkey, int keygen_flags, const char *passphrase, char **cache_nonce_addr, char **passwd_nonce_addr), in the following block:

  int err;
  char *keyparms;
  char nbitsstr[35];
  const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 4096);

  log_assert (is_RSA(algo));

Line 2119 is in function static unsigned int get_keysize_range (int algo, unsigned int *min, unsigned int *max), in the following block:

      *min = opt.compliance == CO_DE_VS ? 2048: 1024;
      *max = 4096;
      def = 2048;

Why no ECC / Elliptic Curves?

ECC allows for much smaller keys and faster computation than RSA for an equivalent security level. For instance, AES 128 bits has roughly the same security as ECC 256 bits and RSA 3300 bits. But both RSA and ECC are vulnerable to quantum computers and, from what I understood, a quantum computer needs a number of qubits proportional to the size of the key to be able to crack it. Tada! What used to be a weakness of RSA, the key length, turns out to be (kind of) a strength.

This is why I’m still generating an RSA key and not an ECC one. Hopefully, by the next time I renew my key, I’ll have a post-quantum option.
Sorry for not digging into more details here; I’m writing this from memory, as struggling with GnuPG’s compilation and writing the rest of this post drained me more than I expected.

Update (2018-08-12): I randomly bumped into a paper titled Quantum Resource Estimates for Computing Elliptic Curve Discrete Logarithms, and it gives some interesting figures: notably, solving a 256-bit Elliptic Curve Discrete Logarithm Problem (ECDLP) would take 2330 qubits, vs 4719 qubits for a 521-bit ECDLP, 6146 qubits for 3072-bit RSA and 30722 qubits for 15360-bit RSA. So yup, definitely worth sticking to 8kb+ RSA in my opinion.

Why don’t we also change SECMEM_BUFFER_SIZE?

A link I mentioned earlier suggests increasing SECMEM_BUFFER_SIZE. I suspect this would allow creating even larger keys without running into memory allocation failures. However, this would also allow you to create keys so big that only your modified GnuPG can handle them. I don’t think that’s an acceptable option, but if you really, really want a huge key and don’t care if it’s hard to use practically, I suppose you can go ahead and increase SECMEM_BUFFER_SIZE.

Update (2019-01-24)

I just found this nice guide explaining how to cross-compile GnuPG on Linux for Windows. I tried it and it worked nicely. I still didn’t need to compile pinentry, nor ntbTLS. Zlib is optional too, but if you don’t compile it, your build will only have support for uncompressed data. I didn’t find how to support bzip2 (didn’t really look hard though, as my only interest was in making a build for exporting my key, not for daily use).
An important note if you want to play around with big keys: in order to successfully export my 16kb private key, I had to use both the customized gpg.exe and gpg-agent.exe

Posted in privacy, programming.