

How to export a whole DynamoDB table (to S3)

For full details, you’ll want to read the documentation, which devotes a whole section to this here. I’ll try to be a lot more concise while still keeping the relevant details.

In your AWS console, go to https://console.aws.amazon.com/datapipeline/. Create a new pipeline. Set a meaningful name you like. In “Source”, select “Build using a template” and pick the template “Export DynamoDB table to S3”.

The “Parameters” section should be obvious: indicate which table to read, and which S3 bucket and subfolder you want the backup saved in. “DynamoDB read throughput ratio” is an interesting parameter: it configures the percentage of the table’s provisioned read capacity that the export job is allowed to consume (for instance, with 100 read units provisioned, a ratio of 0.25 lets the export consume up to 25 units). The default is 0.25 (25%); you may want to increase it a bit to speed up the export.

The “Schedule” section is useful if you want to run an export regularly, but if you don’t, pick “Run on pipeline activation”.

In “Pipeline Configuration”, I chose to disable logging. (Note that every operation is “free” in itself, but you’ll still have to pay any incurred costs, like the EC2 instance that runs the job and the S3 storage used by your backup, logs, etc.)

In “Security/Access”, I just left IAM roles to “Default”. Not sure what the use cases are for this section.

You can add tags if you like, and click on “Edit in Architect” if you want to customize it further, but I’ll just click “Activate” here. It may tell you “Pipeline has activation warnings” (notably, there’s a warning if you disable logs); you can pick “Edit in Architect” to review the warnings, or just pick “Activate” again anyway.
If you do so, you’ll be redirected to your pipeline’s page, and it will start running shortly after (you may have to refresh the page, or go back to your pipelines list and then to the new pipeline’s page again). The “WAITING_FOR_RUNNER” status will probably last almost 10 minutes before the job is actually “RUNNING”.
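If you’d rather watch the progress from a terminal, the AWS CLI has a datapipeline subcommand you can use instead of refreshing the console. A minimal sketch (the pipeline ID below is a placeholder; list-pipelines will show you your real one):

aws datapipeline list-pipelines
aws datapipeline list-runs --pipeline-id df-0123456789ABC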

[Screenshot: AWS Data Pipeline, job waiting for runner]

Posted in web development.



Getting Collabora Online to work in Nextcloud

Collabora Online is basically an open source Google Docs replacement with a very ugly UI and questionable performance. But it Just Works™, and at least it doesn’t spy on you.
I helped set up a Nextcloud instance, and the people there wanted Collabora Online in it. It was tougher than expected, and none of the instructions I found were exhaustive (although these ones are pretty complete), so here’s a recap.

Prerequisites:

  • A Linux server
  • Nextcloud up and running
  • Apache and some knowledge about configuring it (or knowing how to replicate what I’ll describe on your HTTP server of choice)
  • Let’s Encrypt (certbot) or knowing how to obtain a TLS certificate otherwise

First, use Docker. It’s theoretically possible to install Collabora the classic way with your package manager, but I just couldn’t get it to work that way.
apt-get install docker.io
Then
docker pull collabora/code
We’ll start it later. For now, you need to configure a dedicated subdomain, ideally with HTTPS.

In your Apache configuration, make sure the following modules are enabled: proxy, proxy_wstunnel, proxy_http, and ssl.
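On a Debian-style system (assuming your distribution ships the a2enmod helper), that would be:

a2enmod proxy proxy_wstunnel proxy_http ssl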
Then add an HTTP virtual host (it will be used to validate your TLS certificate with Let’s Encrypt) as follows (of course, adapt it to your domain and paths):

<VirtualHost *:80>
   ServerName nextcloud.example.com
   ServerAlias collabora.example.com
   DocumentRoot "/home/example/www"
   # RewriteEngine On
   # RewriteCond %{HTTPS} off
   # RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L] 
   <Directory "/home/example/www">
   Require all granted
   Options -Indexes
   AllowOverride All
   </Directory>
</VirtualHost>

and restart (or reload) Apache: /etc/init.d/apache2 restart

Note that I set up the HTTP virtual host to accept 2 subdomains at the same time in order to use it to validate a certificate for both Nextcloud and Collabora at once.
To obtain your certificate (via Let’s Encrypt, assuming it’s already installed):

certbot certonly --webroot -w /home/example/www/ -d nextcloud.example.com -d collabora.example.com

You can now add the proxy virtual host (again, adapt it to your domain and paths):

<VirtualHost collabora.example.com:443>
  ServerName collabora.example.com:443

  # SSL configuration, you may want to take the easy route instead and use Let's Encrypt!
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/nextcloud.example.com/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/nextcloud.example.com/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/nextcloud.example.com/chain.pem
  SSLProtocol             all -SSLv2 -SSLv3
  SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
  SSLHonorCipherOrder     on

  # Encoded slashes need to be allowed
  AllowEncodedSlashes NoDecode

  # Container uses a unique non-signed certificate
  SSLProxyEngine On
  SSLProxyVerify None
  SSLProxyCheckPeerCN Off
  SSLProxyCheckPeerName Off

  # keep the host
  ProxyPreserveHost On

  # static html, js, images, etc. served from loolwsd
  # loleaflet is the client part of LibreOffice Online
  ProxyPass           /loleaflet https://127.0.0.1:9980/loleaflet retry=0
  ProxyPassReverse    /loleaflet https://127.0.0.1:9980/loleaflet

  # WOPI discovery URL
  ProxyPass           /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0
  ProxyPassReverse    /hosting/discovery https://127.0.0.1:9980/hosting/discovery

  # Main websocket
  ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon

  # Admin Console websocket
  ProxyPass   /lool/adminws wss://127.0.0.1:9980/lool/adminws

  # Download as, Fullscreen presentation and Image upload operations
  ProxyPass           /lool https://127.0.0.1:9980/lool
  ProxyPassReverse    /lool https://127.0.0.1:9980/lool
</VirtualHost>

And restart Apache again: /etc/init.d/apache2 restart

Now, you should be good to start up the Collabora Docker container:
docker run -t -d -p 127.0.0.1:9980:9980 -e "domain=nextcloud\\.example\\.com" --restart always --cap-add MKNOD collabora/code
Note that you need to indicate the Nextcloud domain here, not the Collabora one. If you don’t indicate the proper domain, you’ll get an “Unauthorized WOPI host” error somewhere in your Nextcloud logs (FYI, they are in nextcloud/data/nextcloud.log).
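Before moving on to the Nextcloud side, you can check that the container answers on the WOPI discovery URL that the Apache configuration proxies to (-k is needed because of the container’s self-signed certificate):

curl -k https://127.0.0.1:9980/hosting/discovery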

You can now install the Collabora Online plugin in Nextcloud.
Then, in Settings → Administration → Collabora Online, set “Collabora Online server” to https://collabora.example.com

Posted in LibreOffice & OpenOffice, servers, software.


How DynamoDB counts read and write capacity units

I happen to use AWS DynamoDB at work (ikr), and one of the things that are way harder to grasp than they should be is the way it counts consumed read and write capacity. It is however pretty simple, once you manage to find the right pages (with an s) of the documentation. I’ll try to summarize it here:

Read capacity

A read capacity unit (RCU) allows you one strongly consistent read per second, if your read is up to 4KB in size. If your read is larger than 4KB, it will consume more units (the size is always rounded up to the nearest 4KB multiple). If you use eventually consistent reads, each read counts for half a unit. The default read mode is the eventually consistent one.
If you get an item (< 4KB), it counts as one read (or half a read if eventually consistent). If you get X items (each < 4KB), it counts as 1 read per item, no matter whether you do X Gets or 1 BatchGet (so I’m not sure how useful BatchGet is, compared to the code complexity it adds).
If you query items, only the total size matters.

If you “just” count items (e.g., a query with Count: true and Select: 'COUNT'), you will still consume as much capacity as if you had returned all the items.
Note that if your result set is larger than 1MB, it will be cut off at 1MB. To read more than 1MB of data, you’ll have to perform multiple queries, with pagination.

Practical examples:
– Get a 6.5KB item + get a 1KB item = 3 reads (if strongly consistent) or 1.5 reads (if eventually consistent)
– Query 54 items for a total of 39KB = 10 reads (if strongly consistent) or 5 reads (if eventually consistent)
– Count 748 items that have a total size of 1.1MB = 250 reads (if strongly consistent) or 125 reads (if eventually consistent) for the first 1MB + another count query for the remaining 100KB.
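By the way, you don’t have to do this math blindly: DynamoDB can report the capacity an operation actually consumed if you ask for it. For instance with the AWS CLI (table name and key are made up here):

aws dynamodb get-item --table-name MyTable --key '{"id": {"S": "abc"}}' --return-consumed-capacity TOTAL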

Write capacity

A write capacity unit (WCU) allows you one write per second, if your write is up to 1KB in size (yup, that’s not the same size as for the reads… how not confusing!). Multiple items or items larger than 1KB work just as for reads. Also, I don’t remember where I read that, but I’m pretty sure delete operations count like writes, and update operations count like writes too, using as reference the size of the larger version of the modified item.

Practical examples:
– Write a 1.5KB item + write a 200-byte item = 3 writes
– Delete a 2.9KB item = 3 writes
– Update a 1.7KB item with a new version that’s 2.1KB = 3 writes
– Update a 1.1KB item with a new version that’s 0.7KB = 2 writes
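Writes can report their consumed capacity the same way (again, table and item are made up):

aws dynamodb put-item --table-name MyTable --item '{"id": {"S": "abc"}, "data": {"S": "hello"}}' --return-consumed-capacity TOTAL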

On a side note, I’m not really sure if DynamoDB uses 1KB = 1000 bytes or 1KB = 1024 bytes.

Burst capacity

At the moment (apparently it may change in the future), DynamoDB retains up to 300 seconds of unused read and write capacity. So, for instance, with a provision of 2 RCU, if you do nothing for 5 minutes, you can then perform 1200 Get operations at once: 2 RCU × 300 seconds = 600 banked units, and at half a unit per eventually consistent read of an item < 4KB, that’s 1200 reads.

Sources and more details

I tried to focus on the most important points about read and write units. You can find more details about this topic in particular and, of course, about DynamoDB in general, in the docs. Notably, I used these pages a lot here:
AWS DynamoDB Documentation – Throughput Settings for Reads and Writes
AWS DynamoDB Documentation – Best Practices for Designing and Using Partition Keys Effectively
AWS DynamoDB Documentation – Working with Queries

Posted in web development.


Fixing letsencrypt’s “expected xxx.pem to be a symlink”

Apparently, last time I migrated my server, I messed up my Let’s Encrypt configuration. Or maybe Let’s Encrypt changed its way of storing it. Anyway, renewing my certificates failed with this error:

expected /etc/letsencrypt/live/notepad.patheticcockroach.com/cert.pem to be a symlink
Renewal configuration file /etc/letsencrypt/renewal/notepad.patheticcockroach.com.conf is broken. Skipping.

Obviously, a file was supposed to be a symlink and it wasn’t. Which is strange, because I migrated just like the previous times, and a migration never caused that issue before. Anyway, I found a suggested solution that said to turn said .pem file into a symlink manually. Sounds a bit hackish to me.

I chose to just reissue new certificates for the same domain name. But if you do so, you must clean up properly, otherwise you’ll end up with new paths to your certificates, something like /etc/letsencrypt/live/yourdomain.com-0001/cert.pem, which would require you to also update your HTTP server configuration.

To clean up:

rm -rf /etc/letsencrypt/{live,renewal,archive}/{yourdomain.com,yourdomain.com.conf}

(source)
(NB: watch out, you should probably make a backup before running this)
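For instance (a quick and dirty backup; adapt the destination to your taste):

cp -a /etc/letsencrypt /root/letsencrypt-backup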

Then you should be able to get a new certificate, under the same file and folder names, with the usual command:

certbot certonly --webroot -w /home/www/path -d yourdomain.com

Posted in security, servers, web development.


Normalizing audio with ffmpeg

Some videos have really messed up audio, with a globally low volume and/or a very wide gap between the quietest parts (say, some quiet voice) and the loudest (say, some loud music or an explosion).

You can analyze the audio track and normalize it directly via ffmpeg or, more conveniently, with a Python program called ffmpeg-normalize. NB: in case you don’t know about Python, you need to install Python first, then you can run pip. As I’m writing, Python 3.7 works fine with this. Oh, and on a side note, Python is run by a bunch of sick SJWs, so I encourage you to avoid it like the plague and just use it as a last resort.

pip install ffmpeg-normalize
ffmpeg-normalize input.mp4 -tp 0 -vn -c:a aac -b:a 256k -p -o output.mkv

The command as I customized it will skip the video track in the output file and encode the audio as 256kbps AAC, which is most likely too much. I do that because I actually want to re-encode the audio properly afterwards: I use Opus (libopus), and ffmpeg-normalize doesn’t support all the Opus options I want to use. So then I would run, for instance:

ffmpeg -i output.mkv -ac 2 -codec:a libopus -b:a 64k -vbr on -compression_level 10 -frame_duration 60 -application audio output2.mkv

If you just want to analyze a file:

ffmpeg -i input.mp4 -filter:a volumedetect -f null /dev/null

or on Windows, using NUL:

ffmpeg -i input.mp4 -filter:a volumedetect -f null NUL
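In both cases, volumedetect prints its findings (notably mean_volume and max_volume, in dB) among ffmpeg’s console output, which gives you an idea of how much headroom the audio track has.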

Posted in multimedia, software.



Hardening Tor Browser (or Firefox) a bit more

Tor Browser (I’ll later refer to it as “TBB”, short for Tor Browser Bundle) comes with lots of privacy / anti-tracking tweaks out of the box. But you can add even more. And make search a bit more convenient, too. Tor Browser is basically a very patched and tweaked Firefox and, for a few years now, there’s been an ongoing effort at Mozilla to uplift some Tor Browser patches into Firefox (ticket 1260929 on Bugzilla). So, many of the tweaks used in Tor Browser can be used in normal Firefox too.

Customizing (fully) the search engine

First, the search engine. Contrary to, say, Vivaldi, Firefox doesn’t provide a way for end users to easily edit search engines. This is particularly a problem in Tor Browser, because I want to use Duckduckgo via their .onion URL and without JavaScript, something which just can’t be done via the easy but locked-down process of adding a search engine. Search engines can (and usually do) provide an OpenSearch XML file, which you can then use to add them to your list of search engines in Firefox. Duckduckgo provides such a file in 2 versions, one with JS and one without, but neither supports their .onion URL (the versions served via the onion URL still point to the non-onion domain).

Thankfully, I found a website (Mycroft Project) where you can submit or create a custom OpenSearch file and then “install” it. Even better, many contributors regularly submit OpenSearch files, so you’ll most likely find the one you want without needing to create it yourself. For instance, files for Duckduckgo’s .onion URL, with HTTPS and no JS, can be found here.
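For reference, an OpenSearch file is just a small XML document, so it’s easy to review (or write) by hand. A minimal sketch of what the no-JS onion version could look like (the .onion address below is Duckduckgo’s old v2 address from around the time of writing; grab an up-to-date file from Mycroft instead of trusting this one):

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>DDG Onion (no JS)</ShortName>
  <Description>Duckduckgo via its onion address, HTML-only version</Description>
  <Url type="text/html" method="GET" template="https://3g2upl4pq6kufc4m.onion/html/?q={searchTerms}"/>
</OpenSearchDescription>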

Privacy/security-related about:config parameters

Next, the detailed settings. The Firefox Privacy Task Force provides a list of settings that can be modified, via about:config, to enhance privacy (and also security, for instance webgl.disabled = true to disable WebGL). So, for Firefox, you can start with those. As for Tor Browser, I believe all these settings are already set to the most private value in Tor Browser, so if you’re using TBB I don’t think you’ll find more than an interesting read there.

I recently found a GitHub repository, user.js, which goes into a lot more detail, to the point that it goes farther than Tor Browser, meaning it will be interesting whether you use plain Fx or TBB. For instance, they disable the keyword.enabled setting, which can accidentally leak what you type in the address bar to your default search engine and which isn’t disabled in TBB by default, and they empty breakpad.reportURL, which is used to send crash reports to Mozilla. If you’re using TBB, you might be particularly interested in ticket #367, which focuses on the differences between this user.js and the TBB default settings.
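If you’d rather cherry-pick a few of these settings than adopt the whole repository, the user.js format is simple: put a user.js file in your Firefox profile directory and it’s applied at startup. A minimal sketch with just the three preferences mentioned above:

// disable WebGL
user_pref("webgl.disabled", true);
// don't send what you type in the address bar to the default search engine
user_pref("keyword.enabled", false);
// don't send crash reports to Mozilla
user_pref("breakpad.reportURL", "");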

Posted in Firefox, privacy, security, Tor.


Why surveillance is not OK

This text comes from here, where it was posted 4 years ago by an anonymous poster. It’s a verbatim copy, nothing new, and there’s no particular reason to post it today rather than a year ago or a year from now; I just happened to stumble upon it and wanted to keep it somewhere I’d find it easily for future reference.

I live in a country generally assumed to be a dictatorship. One of the Arab spring countries. I have lived through curfews and have seen the outcomes of the sort of surveillance now being revealed in the US. People here talking about curfews aren’t realizing what that actually FEELS like. It isn’t about having to go inside, and the practicality of that. It’s about creating the feeling that everyone, everything is watching. A few points:

1) the purpose of this surveillance from the governments point of view is to control enemies of the state. Not terrorists. People who are coalescing around ideas that would destabilize the status quo. These could be religious ideas. These could be groups like anon who are too good with tech for the governments liking. It makes it very easy to know who these people are. It also makes it very simple to control these people.

Lets say you are a college student and you get in with some people who want to stop farming practices that hurt animals. So you make a plan and go to protest these practices. You get there, and wow, the protest is huge. You never expected this, you were just goofing off. Well now everyone who was there is suspect. Even though you technically had the right to protest, you’re now considered a dangerous person.

With this tech in place, the government doesn’t have to put you in jail. They can do something more sinister. They can just email you a sexy picture you took with a girlfriend. Or they can email you a note saying that they can prove your dad is cheating on his taxes. Or they can threaten to get your dad fired. All you have to do, the email says, is help them catch your friends in the group. You have to report back every week, or you dad might lose his job. So you do. You turn in your friends and even though they try to keep meetings off grid, you’re reporting on them to protect your dad.

2) Let’s say number one goes on. The country is a weird place now. Really weird. Pretty soon, a movement springs up like occupy, except its bigger this time. People are really serious, and they are saying they want a government without this power. I guess people are realizing that it is a serious deal. You see on the news that tear gas was fired. Your friend calls you, frantic. They’re shooting people. Oh my god. you never signed up for this. You say, fuck it. My dad might lose his job but I won’t be responsible for anyone dying. That’s going too far. You refuse to report anymore. You just stop going to meetings. You stay at home, and try not to watch the news. Three days later, police come to your door and arrest you. They confiscate your computer and phones, and they beat you up a bit. No one can help you so they all just sit quietly. They know if they say anything they’re next. This happened in the country I live in. It is not a joke.

3) Its hard to say how long you were in there. What you saw was horrible. Most of the time, you only heard screams. People begging to be killed. Noises you’ve never heard before. You, you were lucky. You got kicked every day when they threw your moldy food at you, but no one shocked you. No one used sexual violence on you, at least that you remember. There were some times they gave you pills, and you can’t say for sure what happened then. To be honest, sometimes the pills were the best part of your day, because at least then you didn’t feel anything. You have scars on you from the way you were treated. You learn in prison that torture is now common. But everyone who uploads videos or pictures of this torture is labeled a leaker. Its considered a threat to national security. Pretty soon, a cut you got on your leg is looking really bad. You think it’s infected. There were no doctors in prison, and it was so overcrowded, who knows what got in the cut. You go to the doctor, but he refuses to see you. He knows if he does the government can see the records that he treated you. Even you calling his office prompts a visit from the local police.

You decide to go home and see your parents. Maybe they can help. This leg is getting really bad. You get to their house. They aren’t home. You can’t reach them no matter how hard you try. A neighbor pulls you aside, and he quickly tells you they were arrested three weeks ago and haven’t been seen since. You vaguely remember mentioning to them on the phone you were going to that protest. Even your little brother isn’t there.

4) Is this even really happening? You look at the news. Sports scores. Celebrity news. It’s like nothing is wrong. What the hell is going on? A stranger smirks at you reading the paper. You lose it. You shout at him “fuck you dude what are you laughing at can’t you see I’ve got a fucking wound on my leg?”

“Sorry,” he says. “I just didn’t know anyone read the news anymore.” There haven’t been any real journalists for months. They’re all in jail.

Everyone walking around is scared. They can’t talk to anyone else because they don’t know who is reporting for the government. Hell, at one time YOU were reporting for the government. Maybe they just want their kid to get through school. Maybe they want to keep their job. Maybe they’re sick and want to be able to visit the doctor. It’s always a simple reason. Good people always do bad things for simple reasons.

You want to protest. You want your family back. You need help for your leg. This is way beyond anything you ever wanted. It started because you just wanted to see fair treatment in farms. Now you’re basically considered a terrorist, and everyone around you might be reporting on you. You definitely can’t use a phone or email. You can’t get a job. You can’t even trust people face to face anymore. On every corner, there are people with guns. They are as scared as you are. They just don’t want to lose their jobs. They don’t want to be labeled as traitors.

This all happened in the country where I live.

You want to know why revolutions happen? Because little by little by little things get worse and worse. But this thing that is happening now is big. This is the key ingredient. This allows them to know everything they need to know to accomplish the above. The fact that they are doing it is proof that they are the sort of people who might use it in the way I described. In the country I live in, they also claimed it was for the safety of the people. Same in Soviet Russia. Same in East Germany. In fact, that is always the excuse that is used to surveil everyone. But it has never ONCE proven to be the reality.

Maybe Obama won’t do it. Maybe the next guy won’t, or the one after him. Maybe this story isn’t about you. Maybe it happens 10 or 20 years from now, when a big war is happening, or after another big attack. Maybe it’s about your daughter or your son. We just don’t know yet. But what we do know is that right now, in this moment we have a choice. Are we okay with this, or not? Do we want this power to exist, or not?

You know for me, the reason I’m upset is that I grew up in school saying the pledge of allegiance. I was taught that the United States meant “liberty and justice for all.” You get older, you learn that in this country we define that phrase based on the constitution. That’s what tells us what liberty is and what justice is. Well, the government just violated that ideal. So if they aren’t standing for liberty and justice anymore, what are they standing for? Safety?

Ask yourself a question. In the story I told above, does anyone sound safe?

I didn’t make anything up. These things happened to people I know. We used to think it couldn’t happen in America. But guess what? It’s starting to happen.

I actually get really upset when people say “I don’t have anything to hide. Let them read everything.” People saying that have no idea what they are bringing down on their own heads. They are naive, and we need to listen to people in other countries who are clearly telling us that this is a horrible horrible sign and it is time to stand up and say no.

Posted in privacy.


How to enable IPv6 on Ubuntu Server 18.04

Last week or so, I migrated this site to a new server (OVH has this strange habit of pushing clients to migrate from older offers to newer ones not only by releasing upgraded offers but also by raising the prices of the old ones for current subscribers :x). In the process, I noticed that IPv6 was put forward more than it used to be (it used to be mentioned just in the control panel; now it’s also in the server activation e-mail, right below the IPv4 address). So I figured, let’s use it this time.

I first thought it was as simple as adding an AAAA record to the DNS zone in Bind. So I did. However, it didn’t work: the server didn’t actually reply to queries sent to its IPv6 address. After a quick search, I found that it was because IPv6 wasn’t enabled/configured on the server.
At first I tested that with this online tool, but then I found a more convenient way using the console:
ping6 -c 1 ipv6.google.com
The reply I got was:
connect: Network is unreachable

OVH provides a guide to configure IPv6. Sadly, as of today, it’s outdated and doesn’t work with Ubuntu 18.04.
So I kept looking and eventually found that I had to use netplan, as follows:

1) Go to folder /etc/netplan

2) Create a file named (for instance) 90-ipv6.yml with the following content:

network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: ab:cd:ef:12:34:56
            set-name: ens3
            addresses:
              - 1234:5678:9:3100:0:0:0:abc/64
            gateway6: 1234:5678:9:3100:0000:0000:0000:0001

NB: obviously, replace the interface name (ens3), the MAC address, the address and the gateway with your own values. You should be able to find the interface name and MAC address in /etc/netplan/50-cloud-init.yaml, and the address and gateway should be provided by your host. Note that even if your host only provides a /128, you need to enter it as a /64 for this to work, for some reason.

3) You’re not done yet: you need to run these commands in order to apply your changes:

netplan generate
netplan apply
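If you’re doing this over SSH, netplan try (assuming your netplan version ships it) may be a safer alternative to a direct apply, as it rolls the configuration back automatically unless you confirm it within a timeout:

netplan try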

And that’s it. It should work without a reboot (but if it doesn’t, I guess you can try to reboot), so ping6 should now work:

root@vps123456:/etc/netplan# ping6 -c 1 ipv6.google.com
PING ipv6.google.com(iad23s63-in-x0e.1e100.net (2607:f8b0:4004:810::200e)) 56 data bytes
64 bytes from iad23s63-in-x0e.1e100.net (2607:f8b0:4004:810::200e): icmp_seq=1 ttl=50 time=91.2 ms

--- ipv6.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 91.214/91.214/91.214/0.000 ms

And your AAAA record should work too.
Note that you might need to adjust your Apache HTTPd virtual hosts configuration. I didn’t need to, because my virtual hosts don’t use the IP:

<VirtualHost *:80>
 ServerName www.patheticcockroach.com
 DocumentRoot "/path/to/docs/"
 RewriteEngine On
 RewriteCond %{HTTPS} off
 # RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L]
 RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
 <Directory "/path/to/docs/">
  Require all granted
  Options -Indexes
  AllowOverride All
 </Directory>
</VirtualHost>
<VirtualHost *:443>
   ServerName www.patheticcockroach.com
   DocumentRoot "/path/to/docs/"
   <Directory "/path/to/docs/">
   Require all granted
   Options -Indexes
   AllowOverride All
   </Directory>
   SSLEngine on
   SSLProtocol all -SSLv2 -SSLv3
   SSLHonorCipherOrder On
   SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!CAMELLIA:!RC4:!MD5:!aNULL:!EDH
   SSLCertificateFile /etc/letsencrypt/live/www.patheticcockroach.com/cert.pem
   SSLCertificateKeyFile /etc/letsencrypt/live/www.patheticcockroach.com/privkey.pem
   SSLCertificateChainFile /etc/letsencrypt/live/www.patheticcockroach.com/chain.pem
   SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
</VirtualHost>

But if yours do, you might find this guide useful.

Sources:
– How to add an IPv6 address and default route with netplan in Ubuntu 17.10 artful? – Ask Ubuntu
– (FR) Impossible de configurer IPv6 (Netplan <3 et Ubuntu 18.04) – Cloud / VPS – OVH Community

Posted in servers.


“Bannasties”

Still cleaning out my closet, I found this interesting piece that I saved back in August 2013.
Back in the day, I stumbled upon a creepy site called “bannasties.com”, which is now defunct (the domain name was abandoned and isn’t actually registered at all anymore). I’m not sure how I found it, probably by browsing some other site that used it for “protection” against “evil bots”. I always use some kind of proxy, so I tend to trigger those kinds of paranoid protections, as well as getting a fair share of “Access denied” pages and a truckload of captchas – thanks Google and Cloudflare, which, fun fact, was spelled CloudFlare with a capital F back in 2013.

Anyway, all I kept in my draft was a verbatim copy of the site’s front page (or maybe it was the about page). And it’s a fun (although creepy) read. Even more so when you think that now there’s that little something called General Data Protection Regulation (GDPR).

The purpose of bannasties.com is to collect data on spammers/hackers/scrapers/crawlers and other pests that hammer web sites so that I, Lucia, can look at aggregated information. Others can find pages because it is indexed by google and I think it’s nice for people who were looking for a specific IP or host to be able to learn a few details. But it is not my intention to create a resource that permits the world to do “research”. Please don’t try to use this to conduct your own research, the filters on this site are really really tight and constantly changing and you are likely to get banned. Seriously, the rules at bannasties are draconian. The reason for this is that the site is boring to normal humans bot candy for bots. If you are human, you wish search a little bit and also want to avoid getting banned:
Don’t be a bot. Don’t even ‘look’ like a bot. Bot visits are strictly prohibitted.
Always start your search from here or by way of a search engine. It’s currently fairly safe to search for banned IPs, user agents or Hosts by using the appropriate search form. Click a link to load the search form:
Search Form to Find an IP. Example: 78.46.93.87..
Search Form to Find a User Agent. This searches for partial strings provided they contain at least 4 characters: (e.g. “majestic”, “80legs” “mozilla”). .
Search Form to Find a Host. This searches for partial strings provided they contain at least 4 characters: (e.g. “kimsufi”, “corbina.ru” “server”). .
Do not try to search by guessing URI’s. Just don’t do it. The probability you will look like a bot is too high.
Don’t submit more than 3 queries an hour. Queries are submitted if you load a page containing a ‘?’ in the address or press a submit button. Once you’ve submitted more than 3, wait an hour then come back. This site is not intended to permit anyone other than the owner to do extensive personal research.
Don’t use a proxy server or vpn. At. All. (If I detect a proxy, you will be banned. )
Do pass referrers in the default way. That is: No blank referrers. No spoofed referrers. No fake referrers.
Don’t be from a spammy country. The list I consider spammy is constantly changing. But these countries will always be on it: China and Brazil. So are some other countries.
Don’t originate your request from server or web hosting company (e.g. dreamhost, bluehost, hostgator etc.). Use an ISP whose main service is providing internet connectivity (e.g. comcast, at&t, verison etc.) By the way: if this rule means you can’t visit bannasties from work, so be it.
Don’t use a mobile device. I know lots of people use those, but spammers, hackers and scrapers use them. If I detected them here I ban them.
Accept cookies from my site. (You may reject third party cookies.)
If you want permission to visit more I suggest you do something that will send you the ‘banned’ message. (For example, running to many queries times in an hour). You will be presented “the scary page”. The find the email link, click and email me. Explain to me why you want to run numerous searches and if I approve of the reason I might be able to arrange something. But.. really…the number of crawlers here is ridiculous. This site is mostly intended to be for me and provides a very limited amount of access to others who might have been sent here by Google.
Even following these rules does not guarantee you won’t get banned. There are bots ‘out there’ doing really ‘interesting’ things and I write rules based on behavior I observe. If your browser or bot does something that looks really weird, you are likely to get banned.
Privacy policy

This site grants you zero privacy. All connections are monitored. I set cookies; I check them. I change the information set on cookies all the time. Those caught by the spam/hack software are filtered and logged. Data collected when you visit this site is not kept private, could and likely will be discussed publicly especially if you turn out to be a server, vpn, violate any rules described above or even any I dream up in the future.

The End ^^

Posted in privacy, web filtering.


Fixing a couple of TypeScript compilation errors

This is an old draft that I never got around to finishing but that I don’t want to throw away either. I guess this is where this site really gets used as a notepad ^^

Error 1:
node_modules/@types/graphql/subscription/subscribe.d.ts(17,4): error TS2304: Cannot find name 'AsyncIterator'.

=> add “esnext.asynciterable” lib to file tsconfig.json
(source: https://stackoverflow.com/questions/45810696/ionic2-apollo-graphql-eror-cannot-find-name-asynciterator#45812540)

Error 2:
node_modules/aws-sdk/lib/config.d.ts(39,37): error TS2693: 'Promise' only refers to a type, but is being used as a value here.

=> add “es2015.promise” lib to file tsconfig.json

Bonus:
My tsconfig.json file as it was when I started writing this (it should still be good, I’ve just added more stuff since then):

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "noImplicitAny" : true,
    "lib": [
      "es5",
      "es2015.promise",
      "esnext.asynciterable"
    ],
    "skipLibCheck": false,
    "alwaysStrict": true,
    "removeComments": true
  },
  "exclude": [
    "node_modules"
  ],
  "typeRoots": [ "node_modules/@types" ]
}

Posted in JavaScript / TypeScript / Node.js, published drafts.