
Fixed indent style

master · Diane committed 2 years ago · commit 3e35e25566
No known key found for this signature in database. GPG Key ID: B13153D92467FCC1
  1. .editorconfig (6)
  2. README.md (11)
  3. articles/20171019-nexus-school-time-projects.md (52)
  4. articles/20171025-microrl-school-time-projects.md (74)
  5. articles/20171027-paste-school-time-projects.md (83)
  6. articles/20171108-an-open-thanks-letter-to-developers.md (208)
  7. articles/20171121-fancyindex-better-directory-listing.md (415)
  8. articles/20180129-a-very-simple-and-fast-url-shortener.md (37)
  9. articles/20180312-work-discipline.md (55)
  10. articles/20180501-pix-artemix-is-up.md (66)
  11. articles/20180507-fix-your-shit-apple.md (28)
  12. articles/20180806-google-amp.md (170)
  13. articles/20180810-kickstart.md (83)
  14. articles/20180814-learn-english-asshole.md (13)
  15. articles/20180914-fuck-amazon-web-services-docs.md (21)
  16. articles/20180921-thanks-medium.md (21)
  17. articles/20180922-wtf-js-dates.md (34)
  18. articles/20181004-read-schooltime-projects.md (122)
  19. articles/20181018-css-re-work.md (177)
  20. articles/20181204-go-viper-config-but-why.md (12)
  21. articles/20190210-2019-and-this-blog.md (19)
  22. articles/20190219-banks-and-security.md (19)
  23. articles/20190422-new-section.md (13)
  24. articles/20190601-htmlspecialchars.md (16)
  25. articles/20190602-dynamic-data-and-sql-statements.md (38)
  26. articles/20190603-password-storing.md (31)
  27. articles/20190812-regions-and-pragma-mark-on-idea.md (19)
  28. articles/20190818-identification-and-authentication.md (6)
  29. articles/20190829-global-variables.md (16)
  30. articles/20190911-conditional-dns-for-multiple-intranet-upstreams-with-dnsmasq.md (52)
  31. articles/20191103-e-mail-validation.md (26)
  32. articles/20191111-dns-blocking.md (12)
  33. articles/20191112-pipes-for-dns-stats.md (57)
  34. articles/20191210-rust-is-fun.md (4)
  35. articles/20191212-paste-v2.md (18)
  36. articles/20191215-file-type-verification.md (20)
  37. articles/20191224-barebones-git.md (86)
  38. articles/20200212-linux-softwares-on-windows.md (15)
  39. articles/20200225-more-on-barebones-git.md (0)
  40. articles/20200225-sinx-for-dumb-data-aggregation.md (17)
  41. builder/article-manager.ts (163)
  42. builder/config.ts (30)
  43. builder/dates.ts (90)
  44. builder/feed-manager.ts (57)
  45. builder/file.ts (255)
  46. builder/renderer.ts (485)
  47. builder/serve-handler.d.ts (48)
  48. builder/server.ts (48)
  49. builder/tasks.ts (14)
  50. builder/tasks/articles.ts (92)
  51. builder/tasks/basic-tasks.ts (48)
  52. builder/tasks/css.ts (55)
  53. builder/tasks/js.ts (42)
  54. builder/tasks/search.ts (77)
  55. builder/tasks/sub-build.ts (74)
  56. builder/template-manager.ts (262)
  57. builder/util.ts (3)
  58. config.ts (28)
  59. gulpfile.ts (114)
  60. package.json (98)
  61. site/css/normalize.css (290)
  62. site/css/style.css (98)
  63. site/js/focusSearch.js (32)
  64. site/templates/data-less/about.ejs (98)
  65. site/templates/data-less/dead.ejs (43)
  66. site/templates/data-less/portfolio.ejs (131)
  67. site/templates/data-less/search.ejs (41)
  68. site/templates/pages/article-list.ejs (52)
  69. site/templates/pages/article.ejs (58)
  70. site/templates/pages/error.ejs (26)
  71. site/templates/pages/homepage.ejs (54)
  72. site/templates/partials/footer.ejs (2)
  73. site/templates/partials/head.ejs (12)
  74. site/templates/partials/headers.ejs (32)
  75. tsconfig.json (28)

.editorconfig (6)

@@ -5,9 +5,9 @@ root = true
 end_of_line = lf
 insert_final_newline = true
 charset = utf-8
-indent_style = space
-indent_size = 4
+indent_style = tab
+indent_size = tab
 
-[package.json]
+[keybase.txt]
 indent_style = space
 indent_size = 2

README.md (11)

@@ -6,9 +6,9 @@ My blog.
 - `gulpfile.ts`: Entry-point for building
 - `builder/`: Static site generator
-  - `tasks/`: Every gulp async task to be used in the `gulpfile.ts`
+  - `tasks/`: Every gulp async task to be used in the `gulpfile.ts`
 - `apps/`: Every web-app based on `artemix.org`
-  - `search/`: `search.html` web app
+  - `search/`: `search.html` web app
 
 ## How to work in development

@@ -22,6 +22,7 @@ To obtain that, run `yarn` then `npx gulp prepareForDev`.
 ## TODO
 
 - define the app-specific build step for the production build.
   It'll probably require a dedicated script, or an entire dedicated
   step to invoke build scripts inside every app.
 - redesign the inline `<code></code>` block to be clearer
 - implement per-directory grouping of articles
 - try to add support for asciidoc
+- reindent every app

articles/20171019-nexus-school-time-projects.md (52)

@@ -5,50 +5,66 @@ published: true
 My first project of this type is called "Nexus".
-It is, as its name shows, a nexus to join every of my websites/projects/tools in a single central page, under the form of a list.
+It is, as its name shows, a nexus to join every of my websites/projects/tools in
+a single central page, under the form of a list.
-Aimed to be easily used, it's simply a static page with a list and a bit'o'informations.
+Aimed to be easily used, it's simply a static page with a list and a
+bit of info'.
 
 ## How it works
 
-A project, website and whatever you want can be added by adding or linking a JSON config file in the /websites/ subdirectory.
+A project, website and whatever you want can be added by adding or linking a
+JSON config file in the /websites/ subdirectory.
 Of course, the JSON file must follow a few "norms"!
 Must be provided 3 fields, which are the following:
 
-- Project name ;
-- Project description ;
-- Project URL.
+- Project name;
+- Project description;
+- Project URL.
 
 Alternatively, can be added the following two fields:
 
-- Project "category" (else, it'll show "no category") ;
-- Project author (else, "Anonymous").
+- Project "category" (else, it'll show "no category");
+- Project author (else, "Anonymous").
 
 But simply making the JSON files by hand is quite boring.
 Instead, I wrote a small python script to auto generate the JSON files.
 
-You'll note that the script is quite minimalistic and prevent almost nothing, so you kinda must get every field right or else you'd need to restart from scratch.
+You'll note that the script is quite minimalistic and prevent almost nothing,
+so you kinda must get every field right or
+else you'd need to restart from scratch.
-This can be problematic, and that's one of the things I'll work on, to make the script a bit more "intelligent", so it'll be easier to check and change some infos while creating the script.
+This can be problematic, and that's one of the things I'll work on, to make the
+script a bit more "intelligent", so it'll be easier to check and change some
+info while creating the script.
 
 ## What about the website?
 
-Keeping the minimalistic idea in mind, and since I'm a pretty bad designer, I decided to go with the cool Skeleton CSS library, as-is (without modifying anything).
+Keeping the minimalistic idea in mind, and since I'm a pretty bad designer,
+I decided to go with the cool Skeleton CSS library,
+as-is (without modifying anything).
 
-Well, I must say it works wonders!
+Well, I must say it works wonderfully well!
 
-> And what about the core website functionalities? What if I have a shitton of projects and want to search for a specific one?
+> And what about the core website functionalities?
+> What if I have a shitton of projects and want to search for a specific one?
 
-I'd say `ctrl+F` but that would be annoying. Right?
+I'd say `ctrl+F` but that would be annoying. Right?
 
-Instead, I wrote a bit of vanilla javascript in-page, to add some search'n'sort functionality.
+Instead, I wrote a bit of vanilla javascript in-page, to add some
+search & sort functionality.
 
-I sacrificed a few milliseconds of loading time (growing exponentially following how many projects you have) to add a sorter (by alphabetical, author name and category) and a search bar.
+I sacrificed a few milliseconds of loading time (growing exponentially following
+how many projects you have) to add a sorter (by alphabetical, author name
+and category) and a search bar.
-And, even if the light-loading aspect has been lost a bit, I think that now the website is also more useable, which is an important criteria too.
+And, even if the light-loading aspect has been lost a bit, I think that now the
+website is also more usable, which is an important criteria too.
 
 UX and performance, OK!
 
 ## Conclusion
 
-In short, a small and pretty cool project that have been done on my spare time, and now, thanks to that, it's a bit easier for me to manage my websites on my server!
+In short, a small and pretty cool project that have been done on my spare time,
+and now, thanks to that, it's a bit easier for me
+to manage my websites on my server!
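The Nexus article in the diff above describes project descriptors as JSON files with three required fields (name, description, URL) and two optional ones with defaults ("no category" / "Anonymous"), generated by a small Python script. A minimal sketch of such a generator follows; the field names and function name are illustrative assumptions, since the diff never shows the actual schema or script:

```python
import json

# Field names below are assumed for illustration -- the article only says
# three fields are required and two are optional with defaults.
REQUIRED = ("name", "description", "url")

def make_project_entry(name, description, url, category=None, author=None):
    """Build one Nexus-style project descriptor, applying the defaults
    the article mentions ("no category" / "Anonymous")."""
    entry = {
        "name": name,
        "description": description,
        "url": url,
        "category": category or "no category",
        "author": author or "Anonymous",
    }
    # A bit of validation, unlike the original script,
    # which "prevents almost nothing".
    for field in REQUIRED:
        if not entry[field]:
            raise ValueError(f"missing required field: {field}")
    return json.dumps(entry, indent=4)
```

The returned string would then be written to a file under `/websites/` for the static page to pick up.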

articles/20171025-microrl-school-time-projects.md (74)

@@ -3,58 +3,82 @@ title: MicroRL - URL shortening, made simple
 published: true
 ---
 
-[siler]: https://github.com/leocavalcante/siler "Siler github"
-[git]: https://gitlab.com/MicroRL/MicroRL "µRL git"
+[siler]: https://github.com/leocavalcante/siler
 
-Jumping directly in the presentations from my first finished project that was Nexus, let me present my latest project: MicroRL (Real name: `µRL`).
+Jumping directly in the presentations from my first finished project that was
+Nexus, let me present my latest project: MicroRL (Real name: `µRL`).
 
-> Sooo... What's that ?
+> Sooo... What's that ?
 
-I recently made a small a basic URL shortener, still keeping the simplicity and lightness aspect in mind.
+I recently made a small a basic URL shortener, still keeping the simplicity and
+lightness aspect in mind.
-I first got the idea when I was publishing some links on IRC, but using the Tmux IRC client, some URLs were cut by line return.
+I first got the idea when I was publishing some links on IRC, but using the Tmux
+IRC client, some URLs were cut by line return.
 
-Being quite bothered by this bug, I thought for myself "Hey, what if I made a URL shortener for myself ? After all, it's quite the basic and classic project!".
+Being quite bothered by this bug, I thought for myself "Hey, what if I made a
+URL shortener for myself ? After all, it's quite the basic
+and classic project!".
 
 Then..., I forgot about it.
 
-Later on, I got this problem again, and when I wanted to send some links on twitter, it also limited me.
+Later on, I got this problem again, and when I wanted to send some links on
+twitter, it also limited me.
-This time, I decided to keep the idea from when I had some free time to spend, and I sometimes thought a bit about it.
+This time, I decided to keep the idea from when I had some free time to spend,
+and I sometimes thought a bit about it.
 
-At the sime time, I started tinkering with a small PHP routing library, called [siler][siler] (library I made a small contribution on, for documentation purposes).
+At the sime time, I started tinkering with a small PHP routing library, called
+[siler][siler] (library I made a few PRs on).
 
-Naturally, when I started working on what I'd use and how I'd make my URL shortener, I decided I wanted those "new" fancy URLs (you know these URLs that have no "file extension" and are all treated by the same file, like "https://test.abc/this/is/a/fancy") !
+When I decided to take on the project, and when I started wondering about what
+I'd use and how I'd make my URL shortener, I decided I wanted those "new" fancy
+URLs (you know these URLs that have no "file extension" and are all treated by
+the same file, like `https://test.abc/this/is/a/fancy`)!
 
-As I wanted my tool to be lightweight, I decided to use Siler. For the database, like always, I've been working with PostgreSQL, as I have a nice DB setup running and it kinda suits my needs here.
+As I wanted my tool to be lightweight, I decided to use Siler. For the database,
+like always, I've been working with PostgreSQL, as I have a nice DB setup
+running and it kinda suits my needs here.
 
-My main concern was about keeping it simple (Remember, KISS) while still fully functional by its primary function. So no over-complicated interface, a simple form field and button would be way enough for me !
+My main concern was about keeping it simple (Remember, KISS) while still fully
+functional by its primary function. So no over-complicated interface, a simple
+form field and button would be way enough for me!
-For such a tool, no fancy-looking javascript or CSS library would be useful right ?
+For such a tool, no fancy-looking javascript or
+CSS library would be useful right?
-So I decided to go with in-page CSS and JS with only 10 CSS item changes. Barely enough !
+So I decided to go with in-page CSS
+and JS with only 10 CSS item changes.
+Barely enough!
 
-A simple form with one field and a submission button, a small form request redirection and voilà ! The entire client magic is done !
+A simple form with one field and a submission button, a small form request
+redirection and voilà! The entire client magic is done!
 
 ## Where can I find the project?
 
-Like on the nexus article, the project is freely available on Gitlab ([Here][git]).
+The project is sadly not up anymore, due to some git spring cleaning, which
+reaped the long-dead repository.
 
-Note though that the URLs you pass aren't cyphered and there's a good chance they won't be (or they may, following a later update. To see...).
-A following update I'll work on will be better "random" tokens instead of primary keys.
+The "upgrades" listed below never came to light, sadly.
 
 ## Some upgrades that may be done later on
 
-I have some ideas to make the µRL tool "better". Some small and (almost) lightweight features to add. Here's a small list of these features:
+I have some ideas to make the µRL tool "better". Some small and (almost)
+lightweight features to add. Here's a small list of these features:
 
 - More "random" tokens.
 - Cyphered URL storage, with random key generation.
-- Uploaded URL removal. That would require some work to secure the call, as we don't want someone to simply remove an existing URL.
-- Stats for URLs count, most shortened websites etc. . Spam wouldn't be prevented, but an API key would be required with each stats request, as gathering and calculating all the data can cost some time.
+- Uploaded URL removal. That would require some work to secure the call,
+  as we don't want someone to simply remove an existing URL.
+- Stats for URLs count, most shortened websites etc.
+  Spam wouldn't be prevented, but an API key would be required with each
+  stats request, as gathering and calculating all the data can cost some time.
 
 ## Conclusion
 
-As I could see by myself, making a URL shortener is quite simple, fun and can really come handy later !
+As I could see by myself, making a URL shortener is quite simple,
+fun and can really come handy later on!
-I still have a lot of ideas for this URL shortener. Some are easy to implement, some are harder, but all of them are definitely nice to do !
+I still have a lot of ideas for this URL shortener.
+Some are easy to implement, some are harder,
+but all of them are definitely nice to do!
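The MicroRL upgrade list above mentions "more random tokens" instead of sequential primary keys. The article shows no code for this, but the idea can be sketched in a few lines of Python (an illustrative stand-in for the PHP project; the function name and token length are assumptions):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def short_token(length: int = 6) -> str:
    """Generate an unguessable short-URL token. Unlike an auto-incremented
    primary key, this leaks no creation order and cannot be enumerated
    by simply counting upwards."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

On insert, the shortener would check the generated token against the database and retry on the (rare) collision, trading a possible extra query for unguessable links.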

articles/20171027-paste-school-time-projects.md (83)

@@ -3,20 +3,27 @@ title: Paste - Quick'n'easy online pastebin tool
 published: true
 ---
 
+> Note that, while not a completely identical version, I made a new paste
+> with slightly different goals.
+>
+> The biggest differences are around persistence and security.
+>
+> It can be found [here](https://gitlab.com/Artemix/paste).
+
 [defuse]: https://github.com/defuse/php-encryption
 [random]: https://stackoverflow.com/questions/6101956/generating-a-random-password-in-php/31284266#31284266
 [source]: https://gitlab.com/Arte-paste/Paste
 
-Between my first and last project, I took the time to try and make a small pastebin tool.
+Between my first and last project, I took the time to try and make
+a small pastebin tool.
 As always, and once more, my main concern was about lightness.
 Small'n'easy !
 
 I also had the goal to "securely" store the text that was sent.
-After some bumps and fails, I finally managed to make it work !
+After some bumps and fails, I finally managed to make it work!
 
-> > Why do you talk about this project after the µRL project, that came later ?
+> > Why do you talk about this project after the µRL project, that came later?
 >
 > Tbh, I only finished this pastebin project recently,
 > due to some mistakes I made during the development.

@@ -26,31 +33,39 @@ After some bumps and fails, I finally managed to make it work !
 My first steps were, like with every project, easy:
 
 - A quick composer init ;
-- Adding Siler as a dependency (Lightness~) ;
-- Making the default config ;
+- Adding Siler as a dependency (Lightness~);
+- Making the default config;
 - Adding the default route: `/`.
 
-I wanted a trusted and secure PHP cryptography library, and, obviously, my first thoughts came to the Paragonie Initiative's library, LibSodium.
+I wanted a trusted and secure PHP cryptography library, and, obviously, my first
+thoughts came to the Paragonie Initiative's library, LibSodium.
 Since it'll be natively integrated in PHP7.2 (Yay~), it could be a good choice.
 
 The only constraint I found was manually compiling and installing the extension.
-On my linux servers, I don't say. Windows ? Such a pain to use for compilation.
+On my linux servers, I don't say. Windows? Such a pain to use for compilation.
 
-So, after that, I tried and searched a bit more for a composer-dependency-managed, secured library.
+So, after that, I tried and searched a bit more for a
+composer-dependency-managed, secured library.
-I searched and searched, always looking for an up-to-date solution, until I found [this one][defuse].
-Clean code, nice reviews, seems to be quite rock solid and a very approached look on security.
+I searched and searched, always looking for an up-to-date solution,
+until I found [this one][defuse].
+Clean code, nice reviews, seems to be quite rock solid and a very approached
+look on security.
 
-Let's try that !
+Let's try that!
 
 ## Making the flow~
 
-After choosing the first requirements (of course, for database, I'd go with PostgreSQL, as it's list-ordered entries), I started working on the core workflow: Routes and base logic thinking.
+After choosing the first requirements (of course, for database, I'd go with
+PostgreSQL, as it's list-ordered entries), I started working on the core
+workflow: Routes and base logic thinking.
 
-Once the first base `/` route was setup and running (not something very hard...), I started thinking in-depth on how I wanted the upload/storing and download flow to work.
+Once the first base `/` route was setup and running
+(not something very hard...), I started thinking in-depth on how I wanted the
+upload/storing and download flow to work.
 
-## Security ?
+## Security?
 
 The two flows (send/retrieve) are described below.

@@ -58,22 +73,29 @@ The two flows (send/retrieve) are described below.
 The send flow has a few "security" steps, to allow retrieval key checking.
 
-- Generating the random UID and Key using a cryptographically-secure random key generation ;
-- Using the key to cypher the sent text ;
-- Calculating the decyphered text's hashsum (using the RipeMD160 hash algorithm) ;
-- Hashing this hashsum using the native `password_hash` function, only passing `PASSWORD_BCRYPT` as a restriction (when PHP7.2 will be released, and Argon2I with it, I'll change this line) ;
-- Inserting the generated uid, the hashed text hashsum and the cyphered content in database ;
+- Generating the random UID and Key using a cryptographically-secure
+  random key generation algorithm;
+- Using the key to cipher the sent text;
+- Calculating the deciphered text's hashsum
+  (using the RipeMD160 hash algorithm);
+- Hashing this hashsum using the native `password_hash` function,
+  only passing `PASSWORD_BCRYPT` as a restriction (when PHP7.2 will be
+  released, and Argon2I with it, I'll change this line);
+- Inserting the generated uid, the hashed text hashsum and
+  the ciphered content in database;
 - And that's all.
 
 ### Retrieve flow (aka. Download)
 
 The retrieve flow have a bit less work: only removing the generation part.
 
-- Extracting the first (and technically, only) entry ;
-- If no result, well... Fuck off ! Else, ... ;
-- Decyphering the text with the key ;
-- Generating a hashsum of the decyphered text using the same RipeMD160 algorithm ;
-- Verifying the two hashes (the checksum and the new one) using the `password_verify` function ;
+- Extracting the first (and technically, only) entry;
+- If no result, well... Fuck off ! Else, ...;
+- Deciphering the text with the key;
+- Generating a hashsum of the deciphered text using the same
+  RipeMD160 algorithm;
+- Verifying the two hashes (the checksum and the new one)
+  using the `password_verify` function;
 - And that's all.

@@ -82,10 +104,13 @@ As shown, the workflow is pretty straightforward and identical.
 No hidden magic, no complicated craft with the data.
 Only using secure systems and libraries.
 
-> Note that the random key generation library was taken and not modified from [here][random] and I couldn't use RandomLib.
+> Note that the random key generation library was taken and not modified
+> from [here][random] as I couldn't use RandomLib.
 
-## Now the only thing left is to make some tools to interact with the server !
+## Now the only thing left is to make some tools to interact with the server!
 
-As I could discover (and as you could discover by looking at the sourcecode [here][source]), it's quite easy to make a basic text storing service, even when security's one of the most important concerns !
+As I could discover, it's quite easy to make a basic text storing service,
+even when security's one of the most important concerns!
 
-Something I wanted to make was an upload client tool, like the µClient one, but a bit more able.
+Something I wanted to make was an upload client tool,
+like the µClient one, but a bit more able.
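The send/retrieve flows described in the Paste diff above can be sketched end-to-end. This is a loose Python illustration, not the project's PHP code: SHA-256 stands in for RipeMD160, PBKDF2 stands in for PHP's `password_hash`/`password_verify`, a toy XOR keystream (not secure, purely illustrative) stands in for the authenticated encryption that defuse/php-encryption actually provides, and a dict stands in for the PostgreSQL table:

```python
import hashlib
import hmac
import os
import secrets

# In-memory stand-in for the PostgreSQL table the article describes:
# uid -> (salted slow-hash of the plaintext hashsum, ciphertext).
DB: dict[str, tuple[bytes, bytes]] = {}

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" so the sketch stays stdlib-only. The real project
    # uses defuse/php-encryption (authenticated encryption) instead.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def send(text: str) -> tuple[str, bytes]:
    """Send flow: random uid + key, cipher the text, hash the plaintext,
    slow-hash that hashsum, store everything under the uid."""
    uid = secrets.token_urlsafe(8)          # cryptographically-secure uid
    key = secrets.token_bytes(32)           # cryptographically-secure key
    ciphertext = _keystream_xor(key, text.encode())
    checksum = hashlib.sha256(text.encode()).digest()  # RipeMD160 stand-in
    salt = os.urandom(16)                   # PBKDF2 as password_hash stand-in
    hashed = salt + hashlib.pbkdf2_hmac("sha256", checksum, salt, 100_000)
    DB[uid] = (hashed, ciphertext)
    return uid, key

def retrieve(uid: str, key: bytes) -> str:
    """Retrieve flow: fetch the entry, decipher, re-hash, verify the hashes."""
    if uid not in DB:
        raise KeyError("no such paste")
    hashed, ciphertext = DB[uid]
    text = _keystream_xor(key, ciphertext).decode()
    salt, expected = hashed[:16], hashed[16:]
    candidate = hashlib.pbkdf2_hmac(
        "sha256", hashlib.sha256(text.encode()).digest(), salt, 100_000)
    if not hmac.compare_digest(candidate, expected):  # password_verify stand-in
        raise ValueError("integrity check failed")
    return text
```

The point of hashing the plaintext's hashsum, rather than the ciphertext's, is that a successful `retrieve` proves both that the key deciphered the text and that the stored blob was not tampered with.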

208
articles/20171108-an-open-thanks-letter-to-developers.md

@ -5,92 +5,186 @@ published: true
As many of you may have seen,
an [issue](https://github.com/jhund/filterrific/issues/147)
has recently be opened on the Github project [Filterrific](https://github.com/jhund/filterrific/)
("A Rails Engine plugin that makes it easy to filter, search, and sort your ActiveRecord lists")
to directly thank the owner of project and received a lot of praise from the Rails community.
The awesome thing here isn't much the act of thanking (althought very nice and thoughtful),
but rather the owner's ([@jhund](https://github.com/jhund)) answer, which clearly shows how much effect this simple "Thank you" had.
> @amingilani thank you for taking the time to reach out! I'm glad to hear that you are getting value out of Filterrific. And I appreciate your appreciation.
has recently be opened on the Github project
[Filterrific](https://github.com/jhund/filterrific/)
("A Rails Engine plugin that makes it easy to filter,
search, and sort your ActiveRecord lists")
to directly thank the owner of project and received a lot of
praise from the Rails community.
The awesome thing here isn't much the act of thanking
(althought very nice and thoughtful),
but rather the owner's ([@jhund](https://github.com/jhund)) answer, which
clearly shows how much effect this simple "Thank you" had.
> @amingilani thank you for taking the time to reach out! I'm glad to hear that
> you are getting value out of Filterrific. And I appreciate your appreciation.
>
> Your comment was the trigger I needed to work on long overdue updates to Filterrific. New release coming soon!
> Your comment was the trigger I needed to work on long overdue updates to
> Filterrific. New release coming soon!
>
> *[Link to comment](https://github.com/jhund/filterrific/issues/147#issuecomment-341867147)*
A bit after, this issue has been shared on [Hacker News](https://news.ycombinator.com/item?id=15623604) and received an incredible level of appreciation from the community.
A bit after, this issue has been shared on
[Hacker News](https://news.ycombinator.com/item?id=15623604) and received
an incredible level of appreciation from the community.
> It was more than a thank you. It also gave some details about how much time it saved them, etc. That's gold.
> It was more than a thank you. It also gave some details about how much time it
> saved them, etc. That's gold.
>
> Makers sometimes exist in a void. They create and wonder if it is accomplishing anything. They wonder what it gets used for.
> Makers sometimes exist in a void. They create and wonder if it is
> accomplishing anything. They wonder what it gets used for.
>
> A few words can speak volumes. It creates a dialogue. The maker is no longer howling into the void, no longer wondering if their felled tree makes any sound.
> A few words can speak volumes. It creates a dialogue. The maker is no longer
> howling into the void, no longer wondering if their
> felled tree makes any sound.
>
> That hi 5 is a thunderous clap that takes their one hand silently clapping to a warm and enthusiastic embrace of connection with another living, breathing being.
> That hi 5 is a thunderous clap that takes their one hand silently clapping to
> a warm and enthusiastic embrace of connection with
> another living, breathing being.
>
> *Author: [Mz](https://news.ycombinator.com/user?id=Mz)*
Coincidentially, I happened to open an issue on the JetBrains suite issue tracker, with the goal to directly thank the developer team for the latest version of the JetBrains Toolbox tool at the time, which I really enjoyed using, as I frequently swap IDEs because I need to work on multiple languages at work.
Coincidentially, I happened to open an issue on the JetBrains suite issue
tracker, with the goal to directly thank the developer team for the latest
version of the JetBrains Toolbox tool at the time, which I really enjoyed using,
as I frequently swap IDEs because I need to work on multiple languages at work.
img:jetbrainsbug[The issue's screenshot]
*[Link to issue](https://youtrack.jetbrains.com/issue/ALL-2128)*
The answer clearly shows some consideration and I really hope that this simple action at least made someone smile.
The answer clearly shows some consideration and I really hope that this simple
action at least made someone smile.
Going back in time a bit, we can also find an [older issue](https://github.com/rails/rails/issues/16731) (from 2014) on the Rails project that also received huge participation from the community.
Going back in time a bit, we can also find an
[older issue](https://github.com/rails/rails/issues/16731) (from 2014) on the
Rails project that also received huge participation from the community.
And some guys even took the time to thank Github in [a repository](https://github.com/thank-you-github/thank-you-github), letting every Github user create a PR to add their signature to this impressive thank letter (I'd recommend you to take 5 minutes to ask about adding yourself in the list).
And some guys even took the time to thank Github in
[a repository](https://github.com/thank-you-github/thank-you-github), letting
every Github user create a PR to add their signature to this impressive thank
letter (I'd recommend you to take 5 minutes to ask about adding yourself in
the list).
The goal of this article is not to ask for any praise, but rather to, in a broader way,
openly thank some companies, communities, maintainers for their work, openness, dedication and much more, all in one place.
The goal of this article is not to ask for any praise, but rather to, in a
broader way, openly thank some companies, communities, maintainers for their
work, openness, dedication and much more, all in one place.
### Now the list may be long and extended with time, so buckle up and be ready.
First, I want to thank [Leo Cavalcante](https://github.com/leocavalcante), the developper of the PHP library [Siler](https://github.com/leocavalcante/siler), which I really find awesome to use for small-scale and pet projects. Thank you for this library, for taking the time to document it, to write [the guide](https://siler.leocavalcante.com/) and more generally to just be really involved and listening.
I want to thank the [BigBinary team](https://www.bigbinary.com/team) for their many [blog artices](http://blog.bigbinary.com/), which I found very detailed, explained, clear and helpful. You wrote a lot of Ruby articles that really helped me understand the core concepts of Ruby and, when I contacted you by e-mail with some questions about your blog, were fast to answer and very nice to speak with.
I want to thank [Mikel Lindsaar](https://github.com/mikel) and every contributor that worked on the [ruby Mail gem](https://github.com/mikel/mail), which really helped me gain a lot of time and simplify my code. Thank you for this awesome library, for the support you are giving out, for the seriousness and dedication you put into this library.
I want to thank [Mike Perham](https://github.com/mperham) and every contributor that worked on [the sidekiq project](https://github.com/mperham/sidekiq), which helped me build a very reliable and "simple" work queue system. Thank you, Mike, for bearing with me as I was pretty much discovering work queues, Ruby and taking the time to explain.
I want to thank the [PhalconPHP development team](https://github.com/phalcon) for their awesome PHP framework, [PhalconPHP](https://github.com/phalcon/cphalcon), which I've been an avid user of, since I discovered it. Thank you for developing this simple, clear, lightweight, versatile and ultra-fast PHP framework, which I decided to use for most of my "big-scale" projects whose requirements didn't fit with the previously mentioned Siler library.
At the same time, I also want to mention the awesome [PhalconPHP community](https://forum.phalconphp.com/), which is a very attentive and involved community that really helped me. Thank you for taking the time to contribute to this community, thank you for having helped me all this time.
First, I want to thank [Leo Cavalcante](https://github.com/leocavalcante), the
developer of the PHP library [Siler](https://github.com/leocavalcante/siler),
which I really find awesome to use for small-scale and pet projects. Thank you
for this library, for taking the time to document it, to write
[the guide](https://siler.leocavalcante.com/) and, more generally, for being
really involved and attentive.
I want to thank the [BigBinary team](https://www.bigbinary.com/team) for their
many [blog articles](http://blog.bigbinary.com/), which I found very detailed,
clear and helpful. You wrote a lot of Ruby articles that really
helped me understand the core concepts of Ruby and, when I contacted you by
e-mail with some questions about your blog, you were quick to answer
and very nice to speak with.
I want to thank [Mikel Lindsaar](https://github.com/mikel) and every contributor
that worked on the [Ruby Mail gem](https://github.com/mikel/mail), which really
saved me a lot of time and simplified my code. Thank you for this awesome
library, for the support you are giving out, for the seriousness and dedication
you put into this library.
I want to thank [Mike Perham](https://github.com/mperham) and every contributor
that worked on [the sidekiq project](https://github.com/mperham/sidekiq), which
helped me build a very reliable and "simple" work queue system. Thank you, Mike,
for bearing with me while I was pretty much discovering work queues and Ruby,
and for taking the time to explain.
I want to thank the [PhalconPHP development team](https://github.com/phalcon)
for their awesome PHP framework,
[PhalconPHP](https://github.com/phalcon/cphalcon), which I've been an avid user
of since I discovered it. Thank you for developing this simple, clear,
lightweight, versatile and ultra-fast PHP framework, which I decided to use for
most of my "big-scale" projects whose requirements didn't fit with the
previously mentioned Siler library.
At the same time, I also want to mention the awesome
[PhalconPHP community](https://forum.phalconphp.com/), which is a very attentive
and involved community that really helped me. Thank you for taking the time to
contribute to this community, thank you for having helped me all this time.
I want to thank the [DevRant](https://devrant.com/) founders, dFox and trogus,
but also the entire community (honorable mentions to @Linuxxx, @Alice,
@bittersweet, @Cyanite), for all this sharing, all those laughs and moments.
Thank you for making, and keeping, this community as awesome as it is; as
someone who really dislikes social networks, I've truly found my place here and
I hope the adventure will go on for a long time.
I want to thank the [MithrilJS](https://github.com/MithrilJS/mithril.js)
community for this awesome WebUI framework, which really blew me away with its
lightness, simplicity, usability and feature range.
Thank you for this library.
I want to thank [Trent Hensler](https://github.com/drtshock) for his awesome
repository, [Potato](https://github.com/drtshock/Potato), which gave me a good
laugh. Thank you, because thanks to you, I can now say that I forked a potato!
I want to thank [Emanuil Rusev](https://github.com/erusev) for his awesome
markdown parser library, [Parsedown](https://github.com/erusev/parsedown),
which I chose for its simplicity, lightness and extensibility. Thank you for
developing this markdown parser that helped me build my blog (this website!).
I just had to change a few lines to really make it work perfectly.
I want to thank the [Protonmail team](https://protonmail.com/about) for their
awesome e-mail service, [Protonmail](https://protonmail.com/). Thank you for
participating in the development of a safer world and for providing such an
awesome web UI.
I want to thank the
[Gitlab team and contributors](https://about.gitlab.com/team/) for their awesome
Git hosting platform, Gitlab. Thank you for building such tools, but also for
your honesty and transparency, which really helped me trust you with my
projects, both private and public.
I want to thank [Bob Nystrom](http://journal.stuffwithstuff.com/) for his
awesome book, [Game Programming Patterns](http://gameprogrammingpatterns.com/),
an excellent read that's available for free online, but can also be purchased on
PDF, eBook and physical formats. Thank you for taking the time to write it, for
making it available to everyone. I decided to purchase a printed version because
I really found this book helpful.
I want to thank [Parimal Satyal](https://www.neustadt.fr/parimal-satyal/) for
his essay ["Against an Increasingly User-Hostile Web"](https://www.neustadt.fr/essays/against-a-user-hostile-web/),
which I found really interesting and well-written, but also for taking the time
to respond to my (long) e-mail and to answer all the questions I included
in it. Thank you for your essays, and for your patience.
I want to thank
[Sylvain Lareyre and more generally JobOpportunIT](https://www.jobopportunit.com/qui-sommes-nous),
who helped me find a job for my current school year. Thank you for taking the
time to help me find the right place, which I actually feel I found, and for
assisting me all the way.
I want to thank [Bigsool](https://archipad.com/en/the-team/), the company I am
currently working at.
Thank you for the time, the help and pretty much everything (I really don't know
how to put all of this into words...)!
I think this list is finished, as nothing more comes to my mind right now,
so I'll close with one final mention:
Thank you, you, for taking the time to read this article, for the work you are
doing, whatever it is.
Nowadays, we live in a world in which communicating has never been this easy,
yet we seem to become less and less sensitive to what's happening around us.
> I guess that could be a phenomenon arising from an overload of
> information, to which we would start to become desensitized.
We tend to stop bothering to support and promote each other, even though all it
could take is a "Thank you" to save someone's day.
So go give this gratitude to a member of your family, a co-worker, a friend,
anyone really, and help make this world a better place.
### Finally, thank you.

415
articles/20171121-fancyindex-better-directory-listing.md

@@ -7,11 +7,12 @@ Recently, I got my hands on a small server, with a big storage space.
I decided to use it as a lightweight archive file server, accessible over HTTPS.
So, let's try to do that today!
## Searching available tools
I already know about a PHP/Js web-based file browser called
[h5ai](https://larsjung.de/h5ai/).
The design is quite nice but having a JS frontend is a huge constraint
(remember, lightweight was a goal)
and having to install and configure an entire PHP backend is bothersome.
@@ -22,24 +23,28 @@ which is clearly too much for this need.
I chose NGINX as the web server, to easily handle HTTP/2, SSL etc.
So, let's check out the auto-indexing system of NGINX!
It's generally the preferred way to do a web read-only FS, and really
does its job, as it's the most minimalistic, simple and direct option available.
Still, the blank design and the harsh contrast hurt the eyes, and the listing is
a bit too small.
Let's see if NGINX has a way to customize that page!
The only thing I could find that would be "basic" was the
[Fancy index](https://www.nginx.com/resources/wiki/modules/fancy_index/)
extension, but it doesn't look maintained or up to date, and we don't even know
if it's compatible, and if so, for how long.
Sooo, that's a no-no.
During this search, I thought about the compatibility issues: we don't want to
tightly couple the web server and the directory listing rendering, so that, the
day we'd move away from NGINX, we'd have next to nothing to rework, no matter
the server.
For that reason, and following everything we found, let's roll our own!
## The requirements
@ -52,35 +57,24 @@ It uses the URL path and a configured base path to match directories.
If the directory cannot be found, it will return a custom 404 page.
The base path must exist and point to a valid folder. If not, at startup,
the server must crash with a clear error message showing that it's not valid.
```
Sounds good for a beginning.
Hmm... and what about the internal listing/file access logic?
Since this tool is made to run behind a reverse-proxy such as NGINX or Apache,
as it only fills the role of directory listing, the simplest configuration would
follow this logic:
- Is the pointed path a valid file under the host's document root? yes -> serve
the file, no -> continue
- Proxy_pass to the server.
Looks perfect! Let's add that to the spec.
```
This tool is a small and lightweight server that prints a directory's content.
It uses the URL path and a configured base path to match directories.
If the directory cannot be found, it will return a custom 404 page.
The base path must exist and point to a valid folder. If not, at startup, the server must crash with a clear error message showing that it's not valid.
Since this tool is made to run behind a reverse-proxy such as NGINX or Apache, as it only fills the role of directory listing, the simplest configuration should follow this logic:
- Is the pointed path a valid file under the host's document root? yes -> serve the file, no -> continue
- Proxy_pass to the server.
```
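The reverse-proxy side of that spec could look something like this under NGINX (a sketch, not a tested config; the `/srv/files` root and the `127.0.0.1:3001` upstream are assumed values):

```nginx
server {
    listen 80;
    root /srv/files;

    location / {
        # Valid file under the document root? Serve it directly;
        # otherwise, hand the request over to the directory lister.
        try_files $uri @lister;
    }

    location @lister {
        proxy_pass http://127.0.0.1:3001;
    }
}
```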
Now that we have a clearer view on what to do and how it should act, let's think
of what we actually want to use...
@@ -91,7 +85,7 @@ I eyed the Go language for some time, especially since:
Still, I never got the time to get the hang of it.
Well, a simple project is always the best way to start learning a language!
## Let's go!
@@ -100,7 +94,8 @@ Well, a simple project is always the best way to start learning a language!
By a quick search, we find [this tutorial][webtut],
directly from the GoLang wiki.
If we look at what's proposed in this tutorial, we can see that, well...
there's everything we would need to start learning!
> Covered in this tutorial:
>
@@ -117,17 +112,21 @@ Now... For each request, the logic would be the following:
- Grabbing the request URI
- Constructing real system path by coupling a defined base path and the URI
- If the resulting path doesn't point to a valid directory, return a 404
- If the resulting path points to a valid directory, lists its content and
returns a formatted list
The first thing we could do, to simplify our work but also to start learning
about Go, is directory listing.
For now, let's try to write a function that lists the content of the
working directory (`pwd`).
The [`ioutil` standard lib](https://golang.org/pkg/io/ioutil/) has a
`ReadDir(string) ([]os.FileInfo, error)` function that is described as
> ReadDir reads the directory named by dirname and returns a list of directory entries sorted by filename.
Looks perfect!
@@ -138,103 +137,114 @@ func handler() {

```go
func handler() {
}
```
Now that we have a way to cleanly list a directory's content (and return a 404
if `err != nil`), we just need to iterate through the resulting array and grab
the bits of information that we want.
That information is, separately, the folders and everything else: we want to
be able to display all folders first.
Let's make a small struct to handle our result set!
> Here, Files is used to describe every reachable entity that is not a folder,
> so symbolic links etc.
```go
type Res struct {
	Path    string
	Folders []string // For now, we'll want an array of folder names
	Files   []string // For now, we'll want an array of file names
}

func handler() {
	res := &Res{
		Path:    ".",
		Folders: []string{},
		Files:   []string{},
	}

	dirs, err := ioutil.ReadDir(res.Path)
	if err != nil {
		panic("Unable to read directory")
	}
}
```
That will hold everything together nicely.
We can then simply add a loop that iterates over each directory entry in
`dirs` and, if it's a directory, adds it to the Folders array, or else to the
Files array.
```go
type Res struct {
	Path    string
	Folders []string // For now, we'll want an array of folder names
	Files   []string // For now, we'll want an array of file names
}

func handler() {
	res := &Res{
		Path:    ".",
		Folders: []string{},
		Files:   []string{},
	}

	dirs, err := ioutil.ReadDir(res.Path)
	if err != nil {
		panic("Unable to read directory")
	}

	for _, f := range dirs {
		name := f.Name()
		if f.IsDir() {
			// If the directory is not named as "myDirectory/", add that final slash
			if !strings.HasSuffix(name, "/") {
				name += "/"
			}
			res.Folders = append(res.Folders, name)
		} else {
			res.Files = append(res.Files, name)
		}
	}
}
```
Simple, and it works!
Now, letting this rest a bit, let's try to play a bit with the HTTP server stuff
and see what we can use.
If we look at the given example in the [tutorial][webtut], we have this code to
start with:
```go
package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```
Okay, so I guess `handler` is the actual request handler, as it has both an
`http.ResponseWriter` struct and an `http.Request` struct (through a pointer).
By looking at the only instruction in the handler body, it looks like it's
writing to a stream named `w`, which is our `http.ResponseWriter` variable, so
I guess that's where I'd want to look for organizing my directory listing.
But something catches the eye: the value passed in the print formatter:
`r.URL.Path[1:]`.
After investigation
([Request](https://golang.org/pkg/net/http/#Request),
@@ -242,28 +252,35 @@ After investigation
it seems like the URL is decomposed into separate fields
which, together, compose the full URI.
That's neat, we'll keep that in memory, it will surely come in handy later!
Continuing our code reading, I see that this handler is effectively "mounted"
to the route "/" (`http.HandleFunc("/", handler)`).
Nothing weird here.
If we try that example, something can immediately be noticed: whatever we type
as a URI path ends up printed in the answer.
That means two things:
1. Every request that does not find a more precise or previously defined path is
handled by the "/" handler.
2. We have, just here, 90% of our request handling! (Remember, the only thing
we want is the path the user is trying to access).
I'll overlook the bits about the base path configuration and such.
Instead, we'll concentrate on the last thing we actually want: templates.
We want to be able to style our directory listing however we want, painlessly.
Simple answer: HTML templates.
Does our magic tutorial have that? Yes!
The only thing we actually need to modify in our code is to add the template
loading and change the Fprintf to the template engine's way of
sending back content.
If we take a look at the given example,
@@ -280,37 +297,40 @@ and the Go handler,
```go
func editHandler(w http.ResponseWriter, r *http.Request) {
	title := r.URL.Path[len("/edit/"):]
	p, err := loadPage(title)
	if err != nil {
		p = &Page{Title: title}
	}
	t, _ := template.ParseFiles("edit.html")
	t.Execute(w, p)
}
```
> *Note that `p` is a struct defined a bit before this part,
> in the web tutorial.*
```go
type Page struct {
	Title string
	Body  []byte
}
```
We can easily adapt our code to use an HTML template!
```go
func handler(w http.ResponseWriter, r *http.Request) {
	// res := &Res{}
	t, _ := template.ParseFiles("list.html")
	t.Execute(w, res)
}
```
I could've continued to read the guide further (and I'll actually do so for my
next tweakings with Go), but as far as I'm concerned, we have everything we
need right now!
So after a bit of prototyping, I can finally come to this source:
@@ -318,111 +338,116 @@ So after a bit of prototyping, I can finally come to this source:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
	"io/ioutil"
	"net/http"
	"os"
	"strings"
	"time"

	config "./config"
)

// Res format of a folder's content
type Res struct {
	BasePath      string
	Path          string
	ParentPath    string
	IsNotRoot     bool
	Folders       []string // For now, we'll want an array of folder names
	Files         []string // For now, we'll want an array of file names
	Motd          string
	IsMotdEnabled bool
}

func parentPath(str string) string {
	if strings.Compare(str, "/") == 0 {
		return str
	}
	if strings.HasSuffix(str, "/") {
		str = str[:len(str)-1]
	}
	if strings.Count(str, "/") == 1 {
		return "/"
	}
	lio := strings.LastIndex(str, "/")
	return str[:lio]
}

func handler(w http.ResponseWriter, r *http.Request) {
	res := &Res{
		BasePath:      config.RootFSPath,
		Path:          r.URL.Path,
		Folders:       []string{},
		Files:         []string{},
		Motd:          config.Motd,
		IsMotdEnabled: config.MotdEnabled,
	}
	res.ParentPath = parentPath(res.Path)
	res.IsNotRoot = strings.Compare(res.Path, "/") != 0

	// Uniforming the end '/'
	if !strings.HasSuffix(res.Path, "/") {
		res.Path += "/"
	}

	var path bytes.Buffer
	path.WriteString(res.BasePath)
	path.WriteString(res.Path)

	dirs, err := ioutil.ReadDir(path.String())
	if err != nil {
		t, _ := template.ParseFiles("templates/404.html")
		t.Execute(w, res)
		return
	}

	for _, f := range dirs {
		name := f.Name()
		if f.IsDir() {
			if !strings.HasSuffix(name, "/") {
				name += "/"
			}
			res.Folders = append(res.Folders, name)
		} else {
			res.Files = append(res.Files, name)
		}
	}

	t, _ := template.ParseFiles("templates/list.html")
	t.Execute(w, res)
}

func main() {
	http.HandleFunc("/", handler)

	ListenURL := os.Getenv("HOST")
	if len(ListenURL) == 0 {
		ListenURL = "127.0.0.1:3001"
	}

	StartDate := time.Now().String()

	fmt.Printf(
		"Server starting at %s.\nListening to incoming requests on \"%s\"\n",
		StartDate, ListenURL)
	http.ListenAndServe(ListenURL, nil)
}
```
You'll also notice two additional things:
- MOTD (which is basically a static text display that can be enabled and
customized outside of the template itself).
- A custom 404 page, too.
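As a quick sanity check on `parentPath`, here's how it behaves on a few inputs (the function is reproduced exactly as in the listing above; the sample paths are my own):

```go
package main

import (
	"fmt"
	"strings"
)

// parentPath, reproduced from the listing above.
func parentPath(str string) string {
	if strings.Compare(str, "/") == 0 {
		return str
	}
	if strings.HasSuffix(str, "/") {
		str = str[:len(str)-1]
	}
	if strings.Count(str, "/") == 1 {
		return "/"
	}
	lio := strings.LastIndex(str, "/")
	return str[:lio]
}

func main() {
	fmt.Println(parentPath("/"))            // "/"
	fmt.Println(parentPath("/music/"))      // "/"
	fmt.Println(parentPath("/music/rock/")) // "/music"
}
```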
This server works great, handles requests blazingly fast and is dead simple
to hack with!
I guess, as it's still beginner code, that a lot of things could be improved,
but as far as I'm concerned, the tool is efficient as-is!

37
articles/20180129-a-very-simple-and-fast-url-shortener.md

@@ -3,30 +3,45 @@ title: A very simple and fast URL shortener
published: true
---
Yesterday, I came home and, before going to sleep and half-asleep, I decided to
code something.
What? I had no idea at the moment I did so.
So I wanted something dead-simple and minimalistic.
It serves only one purpose: shortening URLs.
For that reason, I have two routes: a `POST` route at which a user can create a
new shortened URL, giving the target; and a `GET` route which, when passed the
shortened code, will redirect the user to the target URL.
Database? Why bother?
We have a very simple identity: the file name is the shortened code, the file
content is the target URL.
Which means that, when the user tries to retrieve a target (and be redirected),
since he gives us the shortened code, we can simply check for file existence in
the storage folder and either return a `301` response with the target URL (if it
could be found) or a `404` not found
(if we couldn't find any file with this code).
That said, I want to be able to easily count requests and errors.
I used a simple logging format composed of two files, respectively
`request-date(Y-m-d).log` for requests and `error-date(Y-m-d).log` for errors.
Every incoming request starts with `<--` and every error
is contained in one line.
That means that we can know the request count and the error count, by day, by
simply doing a `grep '<--' request-{date}.log | wc -l` for requests and
`wc -l error-{date}.log` for errors.
Finally, for the web UI, I decided to simply go with Skeleton,
as there's practically nothing.
The shortening request is made in AJAX, giving a clean result, but if the user
disables JS, the URL will still be shortened and the code returned (or the
error shown), so we're also okay here!

55
articles/20180312-work-discipline.md

@@ -3,25 +3,60 @@ title: Work discipline, or motivation is a bitch
published: true
---
I've gotten into the habit of seeing people complain about how they gradually
lose motivation when working on projects, or seeing articles talking about
"how to stay motivated" while working in development.
The thing is, even if my colleagues are pretty tolerant and understanding,
which gives me a bit more time in case I'm not feeling well, or if I have other
urgent matters, my work still requires me to produce some results, or at least
provide clarity on how the projects I'm working on are progressing.
Motivation is a bitch. That's the only thing to remember.
Motivation is nothing more than a mood, one that can come and go depending on
so many factors that you'd really have a hard time controlling it.
A lot of articles exist on "how to control it", giving advice such as
exercising, rewarding yourself and so on, and even if they really can
contribute to maintaining motivation, it's counter-productive in the long
run to rely on that wellness.
Discipline, instead, is not a mood, but a work habit.
In a way, it forces you to work, to track your work and to stay focused on the objectives you want to achieve, no matter what.
Painful and annoying at first, it can quickly help you go further, producing better results and even helping you on other projects that you could start right away with good organization.
### Some tips
1. Impose some organization procedures on yourself. Kanban and the various Agile methods are obviously the most complete and exhaustive ways of keeping track of projects, but if that's too overwhelming when you're starting out (which is perfectly understandable), something as simple as a list of steps and goals to achieve for the day can already help you. It gives you precise steps to hook onto and follow, and if you feel lost, or if you're stuck on a problem, you still have a list of other tasks you can start right away, keeping that first task for later. Still, don't define too many tasks, and don't switch to a new task without having finished some!
2. Don't be hard on yourself. This advice can also be found in most "motivation advice" lists, because it's not only a "motivation or discipline method", but rather an overlooked piece of common sense. Working one or two hours at most before taking a small break really lets you rest in between, coming back with a fresh start on the problem without having lost track of what you did before. Still, taking a 15-minute coffee break every 30 minutes is obviously too much and will easily distract you!
3. If you're working alone, or don't have a strict workplace, define a clear work schedule and stick to it. A clear separation between "at work" and "not at work" gives you a period of time in which you can genuinely hold yourself to *working* on this or that subject, and nothing else.
4. Impose some work methodologies on yourself, especially for pet projects and side projects. A "work rhythm" and guidelines that you must follow can help you define and organize goals and plans. Specifications, drafts, diagrams, etc. are tools you can use to put on paper the concrete definition of the objectives, changes and choices you'll make as your projects evolve.

articles/20180501-pix-artemix-is-up.md

---
title: Pix.artemix is up! About Pix.watch and the Android app Pix
published: true
---
> Update: pix.watch is now down, hence the removed links.
I'm a heavy user of IRC, and I also frequently communicate on some Discord servers that don't allow media uploading.
For that reason, I often need to host pictures online, and I generally prefer to use a tracker-free, ad-free, account-free tool.
For that purpose, I discovered a service a friend of mine made, called Pix (I'll refer to it as "Pix.watch" in this article).
The service is very simple, works wonderfully and fits all of my needs!
Still, when on mobile, going to the website is bothersome, as I have to load its entirety (usually over slow Internet speed) and, while it's kept decently lightweight, it's still quite annoying sometimes.
Moreover, the biggest drawback with using the website from my phone is that I can't directly upload an image to Pix from within the gallery or some other app that lets me share media.
For that reason, I decided to use my development skills to make something useful: an Android client for Pix.watch.
## Pix.watch
Pix.watch is a secure, efficient and very basic image hosting server that doesn't require anything from the client, while it adds a few utilities, like an integrated image resizer.
img:umatrix-on-pix[dependencies and proof that pix works without any js]
The code for the page is just as simple!
img:pix-website-source[pix's website source code]
Still, using it on mobile is a bit impractical, so I wanted to find another solution.
## PixDroid
This solution is PixDroid, an app I decided to build with lightness, speed, and privacy in mind.
It's my first Android project, so I discovered the entirety of the Android ecosystem.
As for every project, simplicity is the core, so I made a very basic layout in order to have something easy to work with and use.
The app basically has two features:
- Share
- List last shares
You can either share from the app's home screen or from anywhere offering a "Share" action on an image, like your Gallery app.
The upload details window, accessible from the upload success notification, lets you directly send the newly shared link to anyone you want, open it in your favourite browser or simply copy the direct link.
The last shares list gives you every share you made (and lets you clear it with a single button press); clicking any of those shares gives you the same options as the upload details window.
## But why a private distribution channel?
The main concern is about privacy.
Imgur and other "big" services have little to no transparency around how data is handled, what's done with it, and more (even if that may change in the near future thanks to the GDPR).
Also, they are usually posting uploaded images to public galleries or regularly removing "expired" images.
That's definitely not what we want here.
The second concern is about trust.
We can access, read, audit, test, etc., the source-code of Pix.watch without any barrier due to its openness.
IMHO, it's a huge advantage when looking for private distribution channels that need to be secure.
## Future plans

articles/20180507-fix-your-shit-apple.md

---
title: Fix your shit, Apple
published: true
---
Okay, so this is an angry rant against Apple.
I'm the sad owner of a shitty Macbook Pro 15" Late 2015, which, besides
all its problems, doesn't stop giving me electric shocks to its shitty
aluminium case.
It burns, leaves marks and after spending hours with the fuckmuppets
at Apple's customer support, the only thing they said to me was that
it was fucking normal.
How can you produce such pieces of shit, branded as professional tools, that electrocute their fucking users, and call it "normal"?
Not only does this small laptop burn quite a lot, but I just put my hand on a bigger iMac desktop computer that had been powered off for weeks, only connected to the power cord.
I only fucking put my *hand* on a computer and the burn was so violent it
left a fucking red mark on my hand.
And some people want to buy that shit or brand that as "tools for professionals".
Don't make me fucking laugh. Because that'll be ugly.

articles/20180806-google-amp.md

---
title: Google AMP, and the website obesity problem
published: true
---
If you didn't live in a cave for the past few years, you may have heard about the Google AMP project.
## But what is it?
The official [AMP project website](https://www.ampproject.org) advertises Google AMP as an "Open source initiative to make the web better for all".
> The AMP Project is an open-source initiative aiming to make the web better for all. The project enables the creation of websites and ads that are consistently fast, beautiful and high-performing across devices and distribution platforms.
But if you search a bit on some tech websites, it looks like everyone is freaking out about this project.
img:hnresearch[This screenshot is the Hacker News research page, and it shows a lot of negative content around AMP]
*A quick search on Hacker News: most posts clearly show anything from disinterest to disgust towards this project.*
## But... What *exactly* is Google AMP?
Basically, it's a set of restrictions which you must follow to be able to build an AMP-compliant page.
- Only 1 external CSS source, every other CSS definition must be done in-page.
- Only Google AMP as a source of javascript libraries (You can find the list of libraries [here](https://github.com/ampproject/amphtml/tree/master/src)).
- A custom set of new HTML tags, and a ban on a few other HTML tags, considered too heavy, like `<object>` or `<param>`.
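As a rough illustration of those restrictions (from memory, not an authoritative template), a minimal AMP page looks something like this: a single AMP runtime script, custom CSS inlined in one block, and `<amp-img>` instead of `<img>`:

```html
<!doctype html>
<html ⚡ lang="en">
<head>
  <meta charset="utf-8">
  <!-- The only allowed external script source: the AMP runtime itself -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- All custom CSS must live in a single inline <style amp-custom> block
       (the mandatory <style amp-boilerplate> snippet is omitted for brevity) -->
  <style amp-custom>body { font-family: sans-serif; }</style>
</head>
<body>
  <h1>Hello, AMP</h1>
  <!-- Plain <img> is banned; AMP's custom tag controls loading and layout -->
  <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```

A page failing any of these rules simply doesn't validate as AMP.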
Also, AMP provides a "cache" system, which is Google itself, for your pages: the end reader loads a page directly from Google and not from your server, so in some cases the loading speed may be better.
There is a bit more to know, but those are the basics!
Now that we know what AMP is, we can see a few advantages, like content load restrictions, which force developers not to include ads and such, so why exactly is everyone freaking out about it?
There are a few regularly cited major issues with AMP:
- Every page is loaded on a subdomain of Google (`https://www.google.com/amp/`), which means passive user counting no longer works.
- To access the original version, the user has to manually remove the `.amp` or `?amp=1` clause at the end of the AMP URL; there's no button or shortcut.
- Every external resource must be loaded from a Google server endpoint (`https://cdn.ampproject.org/`), which raises privacy concerns.
- Despite Google saying otherwise, most Google AMP-ready links are pushed to the top of the results, even if they aren't the best fit for the original search.
- The fact that Google has such great control over the Internet raises a lot of questions about how "free" Google AMP really is.
## So. Which problems is Google AMP trying to solve, exactly?
The main problems we encounter on the Internet are about load speed and page size.
- Advertisement 3rd-party contents rendering page browsing painful.
- Tracking systems and heavy resources slowing down page loading.
- Heavy media resources taking too long to load.
Those problems are so common nowadays that they were the source of a few talks and papers:
- Talk about "The website obesity crisis": http://idlewords.com/talks/website_obesity.htm
- Neustadt: "Against an increasingly user-hostile web": https://www.neustadt.fr/essays/against-a-user-hostile-web/
- Talk about "The website obesity crisis":
http://idlewords.com/talks/website_obesity.htm
- Neustadt: "Against an increasingly user-hostile web":
https://www.neustadt.fr/essays/against-a-user-hostile-web/
But some developers also took the time to produce humour out of it, and I think you already know those websites:
- [Motherfucking Website](http://motherfuckingwebsite.com/)
- [Better Motherfucking Website](http://bettermotherfuckingwebsite.com/)
## But why are those problems even existing?
The main culprits are companies and businesses wanting to milk the most out of their users.
Advertisement and third-party trackers are a very common source of load and can be found on most major websites.
Another heavy source of load is front-end developers wanting to use *the latest tools* to build a given website, even if it doesn't fit the website's goals.
We get it, you're proud of showing off your latest ReactJS/Redux/Axios setup, with a preloader, hot page swapping, etc., but unless you made a piece of web-based software (in which case, congrats on using an inadequate platform for running software), I don't think incoming users will care about this marvel of technology.
### First culprit: Companies and businesses
As stated before, it's not uncommon to see third-party resources for pretty much everything, ranging from ad services to trackers like Google Analytics.
Most of those resources are plain useless (like advertisement systems) and were only chosen to try to get "a bit more" out of each user's visit, without any consideration for that user.
Others can have some uses, but give nothing to the user. Simply put, they slow the user down for the sake of tracking.
As a demo, I want to line up two major news websites: Le Monde, a French news outlet, and The Washington Post, an English-language news outlet.
Those two websites share one main goal: displaying news articles to users.
### Le Monde
> This website was already used by Neustadt in the paper cited above, "Against an increasingly user-hostile web".
> This website was already used by Neustadt in the paper cited above, "Against
> an increasingly user-hostile web".
Even if this website is less scary than The Washington Post, it still packs quite some load and constantly re-requests the same resource if you don't move on the page.
This resource is lightweight (approximately 1kb), but is still a regularly loaded resource. If you forget this tab, you can imagine how much traffic will pass through.
The video below demonstrates loading a Le Monde article without any ad-blocker, restriction extension or cache.
video:le-monde
### The Washington Post
This website is even heavier than the one before, loading 5 to 6 ad blocks on each page, not counting the many scripts and images.
The video below demonstrates loading a Washington Post article without any ad-blocker, restriction extension or cache.
video:twp
### The aftermath
Now, it's clear that those websites are not any kind of fancy shop or gallery, but news websites.
Then I want to ask: what the fuck is any of those links giving to the user?
A news outlet reader wants to *read* the news, and not wait for 10 minutes to access a page.
You should take into consideration that both pages were loaded using a 200kB/s Internet speed, the average speed in France.
I'll let you imagine how much time it'd take on a mobile phone.