
For the past 4 months I've been working on moreof.me — a single place for all your connections, thoughts and interests. The idea behind Moreofme is that you can create your own personalized space to share the stories and insights that shape you. As the name suggests, this website reveals more of you. In this post I'm going to tell you how it was made, what technical challenges I dealt with, and what to expect in the future.
The Idea
At first glance, you might think we're building yet another social network.

At the first briefing for this project, the Moreofme founders told me we needed to make one thing obvious: moreof.me is different from social media. It's more of an extension of your socials.
One of the key features of traditional social media is connections between people. When the most popular Russian social network, VKontakte, started out, it was intended as a space connecting university students in Saint Petersburg. A lot has changed in 12 years, but that's still the core concept of social networks: connecting people inside the website: friends, stories, post feeds, connection suggestions.
Moreof.me is different: it's your space. The website currently has no people search and there are no direct connections between users, but we still encourage users to find other people's profiles and read about them. While traditional, content-oriented social networks let you write a little info about yourself (your name, pronouns, avatar, maybe a profile banner and sometimes one or two links), on moreof.me you can choose your interests, write about a bunch of topics in the "About me" section and, of course, add as many links to any other social media as you want, so visitors can follow you.
We even have Spotify, TMDB, OpenLibrary and Steam integrations so you can easily pick your favorite movies, TV series, games, books, music and podcasts to display in your profile! I had to write a parser that processes a 30 GB book dump and then index 38 million rows in PostgreSQL for that book search, but we'll get to that in a minute...
Content
But where's the bio? Postboards are your extended bio! Why limit yourself to just 200 characters when you can have unlimited posts organized into postboards?
- Do you like hiking? Create a "Hiking 🥾" postboard and write a few stories about your adventures
- Do you like programming? Create a "Job 🧑💻" postboard and tell everyone which JS framework you dislike the most
- Do you like travelling? Create a "✈️" postboard and share some facts about each place you've visited
We have a rich text editor with drafts and an archive; you can show or hide posts after they're published, and you can even rearrange the order they appear in your postboard. It's up to you how your page looks.

Can't think of anything to post? To get you started, there are 330 unique, interesting question prompts sorted by category! Why not answer them all?

The modern web demands frequently updated content. You might think Moreof.me is a webpage where you share stuff about yourself once and forget about it, but that's not true: we have a special type of post that encourages users to post regularly. Introducing Weeklies!
Weeklies are similar to regular postboard posts, but there are predefined questions for everyone that appear every week, and as new questions arrive, older ones disappear.
Saving Content
If you liked another person's content, save it to your private folders to read later. You can create folders for posts and move posts between them, and you can filter posts by text and sort them.
Moreof.me lets you save other people's profiles too! You can opt in to notifications for when they post and share new stuff.

Analytics
While just having a place to share more of you is already good enough, you might want to know more about your profile's visitors. You can track profile shares, profile visits, profile saves, post shares, post saves and link clicks, with detailed statistics for every link you posted, including CTR.

The Killer Feature
Finally, the last thing I want to mention is the coolest one. I saved it for the end of this introduction, so prepare for... Creator Tool!


The idea here is that you can take the story you shared on moreof.me and easily share it on your favorite social media.
Don't like the background? You can add up to 5 colors to the gradient and choose between 5 gradient variants.

But don't stop there: upload a fancy background!

But let's be real: for this post we want LUMEN industries in the background. Oh no wait, it's just the LumenCoffee 1936 bar in Yerevan, Armenia, not a secret LUMEN facility with psychological torture on floor -1... or is it?
And then you can share this video directly to your social media, such as Instagram stories. No need to even download it to your gallery, just share it with one tap!

The Struggle
And now I'm going to dive into how all of that was made and why it took us 4 months to implement everything. I was both the frontend and backend developer on this project, as well as every other kind of developer 😅
Luckily, my years of experience allowed me to build everything on my own.
I'm obviously not going to share any source code here or explain exactly how every part of the project was built (I signed an NDA, and I don't want to end up back at my previous job where I was harassed every day), but I do want to share some insights into the technical challenges and the new technologies I tried!
Stack
For the frontend we were initially going to use Next.js and React, but after maintaining my last project for 2 years, I'd honestly had too much of it. I was tired of React, and I kept hearing great things about Svelte; the moment I saw the benchmarks (and especially the SSR ones) I was sold. So I sat down and learned Svelte in one evening, instantly recognizing everything I already knew about web development and loving that there's no boilerplate like in React. Svelte is better in every way (except `rm -rf node_modules && bun install`). The moment I finished learning Svelte I knew I'd never go back to building projects in React, and that I'd be coding in Svelte until some universal, isomorphic, utopian all-in-one framework finally arrives around 2030 and replaces every JS framework in existence.

I studied the Figma file and estimated we could build the project in 3-4 weeks (from day one to shipping). Because of multiple changes along the way, critical decisions, and many features that were essential to implement before shipping, it took 13-14 weeks instead.
I decided to scrap MongoDB, which I'd been using for the past 3 years of web development, and learn PostgreSQL, because we were expecting a lot of data and future scaling. I'd heard MongoDB works well under a few million rows, but past 10 million it's nowhere near PostgreSQL. And as much as I like MongoDB's simplicity, philosophy and great features... I won't come back to it — I really liked working with PostgreSQL.
I also finally understood the importance of ORMs. Previously I was just inserting and parsing raw documents without any type safety or validation whatsoever. This time I decided to try the hyped-up Drizzle ORM everyone was talking about last year, and it really is great: a must-have for your next Postgres project. Drizzle Studio is also cool, and although it's missing some features, it covers 90% of cases.
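To give you a taste, here's a minimal sketch of what working with Drizzle looks like (table and column names are illustrative, not our actual schema):

```js
import pg from 'pg'
import { pgTable, serial, text, timestamp } from 'drizzle-orm/pg-core'
import { drizzle } from 'drizzle-orm/node-postgres'
import { eq } from 'drizzle-orm'

// schema declared in code: this is also what powers the type safety
const users = pgTable('users', {
  id: serial('id').primaryKey(),
  username: text('username').notNull().unique(),
  createdAt: timestamp('created_at').defaultNow(),
})

const db = drizzle(new pg.Pool({ connectionString: process.env.DATABASE_URL }))

// fully typed: TypeScript knows the exact shape of the returned rows
const user = await db.select().from(users).where(eq(users.username, 'hloth'))
```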
I wanted to try a serverless database and almost fell for the "10ms" claim on Neon's landing page.

We tried it, and as it turns out, it adds at least a few hundred milliseconds to every database request, offers no database configuration customization, and isn't cheap either. After we rented a dedicated production server, I just cloned PostgreSQL, compiled it with the features we needed, and tuned it with pgtune.
Auth
Before this project I considered 3rd-party auth a useless solution for lazy developers. I genuinely thought building auth was easy: you just slap together username and password fields, hash the password, check the database and create a session token. Shouldn't take more than a few hours, right? Well, I spent tens of hours rolling our own auth, thanks to security standards. It was also the first project where I worked hard and tried my best to make auth as secure as possible: the best auth in the world.
The Copenhagen Book by Pilcrow helped me learn how to implement auth correctly. Not only do you need to validate and double-check everything you accept from the user on every request, you also can't store anything user-supplied in the database as-is. You can't just call a cryptographic random function either: every token must have at least X bits of entropy (each kind of token defines its own minimum), and all operations must be strictly rate-limited, one-time and expirable. A lot of auth-related pages also need specific headers for browser security. To be honest, this was the first time I implemented CSRF protection in my life; I'd just never bothered until I read how easy CSRF is to abuse when a website has no protection.
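A minimal sketch of the token hygiene the book teaches (not our exact code): generate a high-entropy token, store only its hash, and give it an expiry. The 30-day lifetime here is an arbitrary example:

```js
import { randomBytes, createHash } from 'node:crypto'

function createSessionToken() {
  const token = randomBytes(32).toString('base64url') // 256 bits of entropy
  // store only the hash: a leaked DB then doesn't leak usable tokens
  const tokenHash = createHash('sha256').update(token).digest('hex')
  const expiresAt = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000)
  // persist { tokenHash, expiresAt }; send only `token` to the client
  return { token, tokenHash, expiresAt }
}
```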
A surprising thing I learned from this book is that basically every string with an "@" is a valid email address. It's a bit more complex than that, with a few validation rules, but if you're ever going to validate an email, just use z.string().email(). Do not use regex!
This was also the first time I worked with Apple OAuth. The worst OAuth of them all.

As if it were an Apple tradition to overcomplicate everything web-related, they have their own version and specification of OAuth that doesn't follow the established standards everyone else uses. I had to double-check that it really was Apple's inconsistency and not my mistake before opening an issue on the OAuth library we were using.
I spent a few hours comparing Medium.com's Apple sign-in OAuth redirect string with ours, replacing it bit by bit and searching all over the internet: Apple gives you no clues about what's wrong with your request; the API just returns a blank page with a 500 HTTP code. Finally, I managed to brute-force the correct parameters and Apple sign-in worked. Sike, it didn't! It only works in production, because localhost is not secure enough for Apple, while Google sign-in works with localhost perfectly.
The saddest part of this whole process is that I implemented and refactored a lot of the codebase so we could use the avatar URLs returned by Google OAuth directly, without users having to reupload them, but Apple doesn't return an avatar in its OAuth response, so we had to scrap that. On the bright side, Google heavily rate-limits its avatar CDN anyway, so whatever.
Instagram redirects
Since moreof.me is meant to be put in bio links, we need it to open correctly from every social network where users might post it. For some weird reason, Instagram decided to ship its own browser with separate storage inside the app and force all links to open in it. There is no way to easily avoid this, unless you're OnlyFans (yes, they hardcoded the onlyfans domain into the app's code, and it's the only website that opens externally in the user's browser).
But if you open moreof.me from Instagram, you'll notice that we still managed to redirect the user from the in-app browser to the external browser of the mobile OS. When I was researching this, I honestly had no hope it was possible without showing some kind of dialog like this:
Please open your external browser and paste this link to continue browsing Moreofme: https://moreof.me/hlothdev and delete Instagram (optional)
But I actually found quite a few solutions from fellow developers who struggled with the same problem. What's even more surprising is that they still work to this day! Two jailbreaks had to be implemented, separately for Android and iOS, but they work flawlessly. You can find them yourself, but if you'd like to support me, I can make it easy for you and put the links right here (just click the donate button to reveal the content)
Attached files & links
- Android Instagram in-app browser jailbreak
- iOS Instagram in-app browser jailbreak
But that's not the only problem we had with Instagram. As you already know, moreof.me lets users add their socials to their profile, but we don't just show these socials in the profile — they're also a way for visitors of your page to contact you. When you visit a profile and click the "comment" button on a post, you're presented with a dialog where you can choose how to contact the post's author. We wanted to send visitors straight to Instagram DMs when they pick the Instagram option in that dialog. Interestingly, Instagram does have a way of opening DMs directly with this URL:
```js
const username = 'billieeilish'
const url = `https://ig.me/m/${username}`
// https://ig.me/m/billieeilish
```
The bad thing is that it doesn't open the native Instagram app. You know how, when you open a t.me/ link, you're instantly redirected to the Telegram app installed on your device, with the chat already open? This works using deep links, a very old and established technology: an app can register itself in the user's OS to handle certain URLs instead of the browser. Most apps use a custom protocol. Here's a deep link to chat with me in Telegram: tg://resolve?domain=hlothdev. Try it! Click it and your Telegram app will open.
So there should be something similar for Instagram, right? Unfortunately, no. Instagram is smarter: they use what's called a "universal link". Universal as in "the OS will decide what to open, so just use the https:// protocol and it'll figure it out". The problem? It doesn't work from a browser. Instagram does have a deep link, but with the https protocol. Going back to Telegram for a moment: if you send someone the link https://ig.me/m/billieeilish and they tap on it, you'll see Telegram open the Instagram app and take them to DMs. But what happens if you open the same link in a browser? It doesn't matter how: paste it in the address bar, redirect with Location, open it with window.open. The browser first parses the URL as a web URL; https means "go to the website". It doesn't matter that the user has the Instagram app registered for that exact deep link, the browser will open the webpage. If only Instagram registered a custom protocol, this would have been so much easier.
How do we get around this? Well, we can't. But I still managed to do it, somehow, though only for Android. I spent a few hours and found an ancient, deprecated Android protocol that isn't even in the documentation anymore; it's only mentioned in about one Stack Overflow post. It has a custom scheme, meaning browsers interpret it as an app deep link, and it lets you pass any URL (even an https one) to the registered app's handler. Here's how it looks:
```js
const username = 'billieeilish'
const url = `intent://ig.me/m/${username}#Intent;scheme=https;package=com.instagram.android;end;`
// intent://ig.me/m/billieeilish#Intent;scheme=https;package=com.instagram.android;end;
```
And the craziest thing is that this 2013-era trick worked on my Android 14 from any browser! Unfortunately we had to scrap this idea, because we wanted consistency across all mobile platforms (i.e. to only implement what Safari supports). And even if we had kept it: a few years ago browsers decided not to let JavaScript know whether the user has a specific deep link scheme registered, so there's no clean way anymore to implement the fallback I used back in 2020: open the deep link, watch for errors for 0.1 s, and open the https webpage if the deep link didn't launch (sketched below).
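For the curious, that old fallback looked roughly like this. A from-memory sketch, not production code; the function name, the 100 ms threshold and the visibility check are all illustrative:

```js
// Try a custom-scheme deep link; if nothing handled it, fall back to the web URL
function openWithFallback(deepLink, webUrl, timeoutMs = 100) {
  const timer = setTimeout(() => {
    // If the app launched, the page is hidden/unloaded and this never runs
    if (!document.hidden) window.location.href = webUrl
  }, timeoutMs)
  // If we navigate away (the app opened), cancel the fallback
  window.addEventListener('pagehide', () => clearTimeout(timer), { once: true })
  window.location.href = deepLink
}

openWithFallback('tg://resolve?domain=hlothdev', 'https://t.me/hlothdev')
```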
Microservices and queues
A few years ago I was experimenting with my mobile multiplayer strategy board game, Remafia (still in development), and implemented a message broker between the game server and the public-facing API. Moreofme was the first project where I clearly separated all the backend microservices that don't touch the app's business logic: the auth API still lives in the frontend repository, but data fetching/parsing and rendering are two separate microservices that we can scale horizontally in the future.
I had some struggles with RabbitMQ... Did you know it silently enables a guest user with no password that has read-only access to everything? I only found out while randomly pentesting all parts of our application. And when it comes to creating users and roles, you can easily end up with 200+ lines of config just so 4 users have the correct permissions on the queues. It has its own hashing algorithm, which means you must use its CLI to hash the passwords you put in the config. It also silently enables a lot of deprecated plugins on first start and then complains that they're enabled; to disable them, you have to explicitly state in the config that you want them disabled.
There's no easy way of answering requests with RabbitMQ. Apparently "it's not suited for responses and you should use RPC instead". I tried the approach from their documentation, creating non-durable queues and answering there, but it created so many problems. Eventually I reused what I'd already built in my previous attempt at RabbitMQ: reply queues with a correlationId. Two channels, two flows: one for requests, another for replies. Simple and straightforward, and it doesn't require any weird configuration or permissions.
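Here's the gist of that pattern with amqplib. A minimal sketch, assuming two pre-agreed queue names ('requests' and 'replies' are placeholders):

```js
import { randomUUID } from 'node:crypto'
import amqp from 'amqplib'

const conn = await amqp.connect('amqp://localhost')
const ch = await conn.createChannel()
const pending = new Map() // correlationId -> resolve

// one flow for replies: match each reply to its original request
await ch.assertQueue('replies')
ch.consume('replies', (msg) => {
  const { correlationId } = msg.properties
  pending.get(correlationId)?.(JSON.parse(msg.content.toString()))
  pending.delete(correlationId)
}, { noAck: true })

// one flow for requests: tag each message so the reply finds its way back
function request(payload) {
  const correlationId = randomUUID()
  return new Promise((resolve) => {
    pending.set(correlationId, resolve)
    ch.sendToQueue('requests', Buffer.from(JSON.stringify(payload)), {
      correlationId,
      replyTo: 'replies',
    })
  })
}
```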
Emails
I'm a big fan of self-hosted email services, and you can't convince me that anything else is better for personal use. But we decided to go with an external solution because:
- We were planning to implement marketing emails in the future
- It takes about 10 hours to set up self-hosted email properly. Saving $10 a month isn't worth spending $500 worth of time configuring a mail server
Nodemailer for Node.js still works great to this day and has everything we needed, so I decided to go with it. Initially I wanted to implement XSS protection in email templates myself, along with variable substitution, and... then I remembered there's a library that does all of that and more, and better: Handlebars.js. It looks old, but it does its job perfectly and fast. VS Code even has syntax support for it.
I quickly built the template directory structure: base HTML, an email HTML container, and content templates (each with a plain-text version), and connected it all to the SMTP server of the external service we use for sending emails.
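Roughly how those pieces fit together. A sketch with placeholder paths, addresses and variables, not our actual templates:

```js
import { readFileSync } from 'node:fs'
import Handlebars from 'handlebars'
import nodemailer from 'nodemailer'

// the container template embeds content via {{{content}}} (triple braces,
// so inner HTML isn't escaped); regular {{variables}} are escaped by
// default, which is exactly the XSS protection I almost wrote myself
const container = Handlebars.compile(readFileSync('templates/container.html', 'utf8'))
const welcome = Handlebars.compile(readFileSync('templates/content/welcome.html', 'utf8'))

const transporter = nodemailer.createTransport({
  host: 'smtp.example.com', // the external sending service
  port: 465,
  secure: true,
  auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
})

await transporter.sendMail({
  from: 'Moreofme <noreply@example.com>',
  to: recipient.email,
  subject: 'Welcome!',
  html: container({ content: welcome({ username: recipient.username }) }),
  text: `Welcome, ${recipient.username}!`, // the plain-text twin
})
```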
But here comes iOS! You see, every mail client rendered the email perfectly well, even my beloved macOS mail client, which is almost the same app as the iOS one:


![iOS Mail preview showing "[X] <http://www.moreof.me/>"](https://blog.hloth.dev/files/246387b6-9201-415a-bd6b-adc746f0d5dc.png)
I had no clue where it came from. My guess is that [X] is Safari trying to render the base64 logo image as text, and <http://www.moreof.me/> comes from the title attribute of the <a> tag around that logo. Eventually I gave up on solving this and added a hidden tag at the very start of the email body that gets picked up for the preview in every single email client I tested. This tag holds the plain-text version of the email, so the results are even better than before. It's also the solution I saw other developers on the internet prefer 🤷♂️
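The trick, as it's usually done around the internet, looks something like this (the exact pile of inline styles varies):

```html
<!-- Hidden "preheader": clients show this as the inbox preview text -->
<div style="display: none; max-height: 0; max-width: 0; opacity: 0; overflow: hidden;">
  Plain-text version of the email goes here
</div>
```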
Then, as we started implementing optional emails (notifications), we wanted to add unsubscribe buttons. I already knew about the List-Unsubscribe header, and I was 100% sure it would work just like it always did. But when I tested it, it didn't appear in any of my clients.
It turns out that the email clients that invented this feature for a better user experience have since decided to effectively remove it. They hide the unsubscribe button unless the sending domain is trustworthy enough (source: https://stackoverflow.com/a/77012666/13689893), because scammers can detect active inboxes when they receive a List-Unsubscribe request.
I don't understand this logic: if I don't see the unsubscribe button in my email client, I'm going to click "unsubscribe" in the email body anyway. Now, as I open that URL in a browser, scammers not only know the inbox is still active, but also my IP address, my browser, my OS and my browser fingerprint, and someone else could fall for a phishing scam and log into their account just to unsubscribe. All of that could have been prevented by proxying List-Unsubscribe requests through the email host's servers...
I was also fascinated by how GitHub handles issues over email: you can reply to an email in the thread, and GitHub posts your reply's text from your account. At some point I noticed they're using the Reply-To header, which is just genius: you send the email from one address but ask the receiving client to send replies to another. So I used it for the "Request feature" page. We send the feature request as text from our main address to the website admins, but when they hit reply, the email goes to the user's registration address.
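In Nodemailer terms, that's a single extra field (addresses and variables here are placeholders):

```js
await transporter.sendMail({
  from: 'Moreofme <noreply@example.com>', // what we send from
  to: adminEmail,                          // where the feature request lands
  replyTo: requester.email,                // where the admin's reply goes
  subject: `Feature request from ${requester.username}`,
  text: requestText,
})
```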
Creator Tool
Creator Tool is packed with features and holds the most complex code structures in the app. I wanted this page to feel especially native, so I added a bunch of transitions and gestures. One of my favourites is Svelte's element transitions
And this insane animation transitioning the editor into the preview
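Neither animation really comes across in text, but to show how little code Svelte transitions need, here's a minimal illustrative snippet (not our actual markup):

```svelte
<script>
  import { fade } from 'svelte/transition'
  import { flip } from 'svelte/animate'

  let items = [{ id: 1, text: 'Editor' }, { id: 2, text: 'Preview' }]
</script>

{#each items as item (item.id)}
  <!-- fade in/out on add/remove, smoothly reflow on reorder -->
  <div transition:fade animate:flip>{item.text}</div>
{/each}
```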
Compressing images client-side before uploading is essential and, luckily, easy thanks to the amazing compressor.js. It takes about a second, and you won't even notice the loading, because it works on canvas. But is canvas really that fast? I learned the opposite when implementing video compression.
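Usage is about as simple as it gets. A sketch with illustrative options and a made-up upload endpoint:

```js
import Compressor from 'compressorjs'

function compressAndUpload(file) {
  new Compressor(file, {
    quality: 0.8,   // example values; tune for your use case
    maxWidth: 2048,
    success(blob) {
      // the compressed result is a Blob, ready to send as multipart form data
      const body = new FormData()
      body.append('image', blob, file.name)
      fetch('/api/upload', { method: 'POST', body })
    },
    error(err) {
      console.error(err.message)
    },
  })
}
```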
Compressing videos inside the browser in pure JavaScript, without ffmpeg!
I know what you're thinking: ffmpeg-wasm. As much as I like the fact that someone managed to port it to the web, its WASM binary is 32 MB and it's very slow. In my tests it took my Samsung S23 Ultra 4.5 minutes to process a 22-second video, while ffmpeg in the CLI takes 10-15 seconds. This clearly wasn't going to work.
Thanks to modern JavaScript and some good open-source developers, it's possible to produce MP4 and WebM videos inside the browser without WASM, using native JS. Browsers have supported rendering video since 2015; I'm talking about the Media Streams API. If you've ever used your camera in the browser, for example for meetings or video calls, that's exactly what those websites use. But it's not limited to external capture devices: we can also capture frames of a <video> element and data from a <canvas>. See where I'm going with this? The process is straightforward (sketched after the list):
- Seek to a specific timestamp in the virtual video
- Draw the frame to a canvas using canvas.drawImage
- Optionally crop, scale, apply effects etc. at this stage
- Record the canvas data to a video using canvas.captureStream()
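Put together, the realtime recording approach looks roughly like this (a simplified sketch; the fps value and element lookups are illustrative):

```js
const video = document.querySelector('video')
const canvas = document.querySelector('canvas')
const ctx = canvas.getContext('2d')

// captureStream turns whatever we draw into a live video track
const stream = canvas.captureStream(30)
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' })
const chunks = []
recorder.ondataavailable = (e) => chunks.push(e.data)
recorder.onstop = () => {
  const result = new Blob(chunks, { type: 'video/webm' })
  // preview or upload the resulting Blob
}

recorder.start()
video.play()
;(function draw() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
  if (video.ended) recorder.stop()
  else requestAnimationFrame(draw)
})()
```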
This method works well enough, but there are drawbacks. While captureStream lets you set a target FPS, it actually just scales the resulting timeline, because in reality it records in realtime, meaning the duration of the resulting video will always be the duration of the recording: for a 30-second video there's no way to process it in under 30 seconds. Additionally, any lag in the browser, or switching tabs, freezes the whole process and records the same frame over and over again.
Eventually I scrapped this implementation and found something much more interesting: the WebCodecs API. As soon as I saw the potential of this API, I thought "there's no way Safari supports this", and as it turns out, it kinda does!
Specifically, we're interested in the VideoEncoder API, which encodes image data into video on demand, so we're not limited to realtime and can process a video in a few seconds. But to produce an actual video file, we also need a muxer for the specific format, such as mp4-muxer or webm-muxer. The process looks similar to what we did with the Media Streams API, but this time we encode every frame without delays, and we can set the target FPS and bitrate.
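A condensed sketch of the WebCodecs + mp4-muxer pipeline (dimensions, bitrate, codec string and the drawFrameToCanvas helper are example values, not our real settings):

```js
import { Muxer, ArrayBufferTarget } from 'mp4-muxer'

const muxer = new Muxer({
  target: new ArrayBufferTarget(),
  video: { codec: 'avc', width: 1080, height: 1920 },
  fastStart: 'in-memory',
})

const encoder = new VideoEncoder({
  output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
  error: (e) => console.error(e),
})
encoder.configure({
  codec: 'avc1.42001f',
  width: 1080,
  height: 1920,
  bitrate: 5_000_000,
  framerate: 30,
})

// draw each frame to a canvas, then encode it: no realtime limit anymore
for (let i = 0; i < totalFrames; i++) {
  drawFrameToCanvas(i) // seek + drawImage + effects
  const frame = new VideoFrame(canvas, { timestamp: (i / 30) * 1_000_000 })
  encoder.encode(frame, { keyFrame: i % 150 === 0 })
  frame.close()
}

await encoder.flush()
muxer.finalize()
const mp4Bytes = muxer.target.buffer // ready to upload
```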
Finally, we upload the background to the server along with the post ID and some settings describing how to render the post's content. The server runs a JavaScript backend with bun.sh (effectively Node.js, and not much slower) and uses satori to render HTML to PNG. Satori is pretty fast and easy to use, but also primitive: it doesn't support inline elements and only works with Node.js due to weird dependencies. We might switch to a headless Chromium later. Finally, we use sharp or ffmpeg to process the resulting video and send it to S3.
This works well, but rendering on a CPU is very slow, so in the future we're going to rent a cloud GPU and pay $2/month for time on a $7000 GPU.
Also, while making the color picker in Creator Tool, I learned some color theory and the difference between HSL and HSV. I went with HSL at first, but then realized HSV fits this color palette much better


Spotify and its Next.js widget backend
We had an idea: you connect your Spotify to your Moreofme, and tracks you discover in other profiles can be added to your library with just one click. Initially we wanted to build our own player and request track previews from the Spotify API, but Spotify has deprecated previews and returns null for all of them. We couldn't find any other way of playing Spotify songs than their embed widget.
Did I mention I hate Next.js? Not because it's very slow compared to Svelte, but because it can be used on the backend. And somehow Vercel convinced Spotify to use it for SSR on the backend of their widget... FYI, Next.js can handle up to 80 concurrent requests per second with Node.js and is considered the slowest SSR framework among the popular JS ones.

The widget has a kind of API that lets us control its playback state (play/pause/seek) and switch songs, so we wanted to hide it and still build our own player. It turns out Spotify doesn't like that! They explicitly prohibit developers from messing with the widget's interface and from playing songs without displaying the Spotify logo and the artist's name.
The widget is very buggy: it takes 179 lines to load it into a Svelte container, plus some messy spaghetti code to track its events, because it has exactly one callback for everything and it behaves unintuitively. Since the widget already has a "save to my playlist" button, we just removed our own button that did the same thing.

And for search on Spotify we use their free server-to-server API. They want you to refresh the access token every hour, so we used a queued approach to never miss the moment a new access token needs to be fetched. Also, for some reason Spotify doesn't include the show's name when it returns podcast episodes, so we have to make one additional request for every podcast they return. Otherwise, their API is okay.
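The server-to-server part is the standard client-credentials flow. A sketch of the refresh loop, give or take (the 60-second safety margin and env var names are illustrative):

```js
let accessToken = null

async function refreshSpotifyToken() {
  const credentials = Buffer
    .from(`${process.env.SPOTIFY_CLIENT_ID}:${process.env.SPOTIFY_CLIENT_SECRET}`)
    .toString('base64')

  const res = await fetch('https://accounts.spotify.com/api/token', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${credentials}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: 'grant_type=client_credentials',
  })
  const data = await res.json()
  accessToken = data.access_token

  // schedule the next refresh slightly before the hour is up
  setTimeout(refreshSpotifyToken, (data.expires_in - 60) * 1000)
}

await refreshSpotifyToken()
```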
OpenLibrary and the fastest search among 38 million books
Honestly, this part of moreof.me feels so overcomplicated, but I'll try to keep it simple.
As you've already seen, profiles have a "Currently" section where you can pick your current activities. One of them is reading books. Initially we wanted to use Google Books, but they don't like programmatic access to their database and prohibit dump attempts. I wanted something simple, free, and ideally with a downloadable full database dump for local searches and fast queries.
OpenLibrary.org seemed like a good choice! It's community-maintained (and as we know from maps, wikis and software, the more open-minded people work on something open source, the better the final product), so it should have every book ever written, and most entries have covers and authors. Just enough basic information to display in that section.
One way would be to use their API, but, as I mentioned, I wanted local search. They also explicitly state that any attempt to dump their data through the API is prohibited.

A better way is to use the dumps they publish every few weeks... 2.9 GB for the full list of all books? Seems fine. 100 KB/s? OK, we might want to switch to webtorrent to download their dumps from archive.org 10 times faster... Unarchiving the dump... aaand it's 20+ gigabytes uncompressed. At least it's TSV, not JSON, so we can still parse it with JavaScript using streams.
And then, after a few hours of trial and error, my parser was working, collecting the names and covers of all the books. It's even faster on the production server: a few minutes to download the compressed dump, a few minutes to uncompress and parse it, and a few minutes to insert all 38 million rows into the database.
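The core of the parser is tiny thanks to Node's streams. A sketch (the filename is illustrative; the column layout of type, key, revision, last_modified and a JSON record comes from OpenLibrary's dump format):

```js
import { createReadStream } from 'node:fs'
import { createInterface } from 'node:readline'

const lines = createInterface({
  input: createReadStream('ol_dump_editions.txt'), // the uncompressed TSV dump
  crlfDelay: Infinity,
})

for await (const line of lines) {
  const [type, key, , , json] = line.split('\t')
  if (type !== '/type/edition') continue
  const record = JSON.parse(json)
  // pick out title, covers and authors, then batch-insert into PostgreSQL
}
```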
Well, apparently, scanning 38 million rows isn't fast, especially when we define "fast" as a 100-200 ms query, not 30 seconds. So to speed things up I had to learn full-text search in PostgreSQL. Since this was my first time dealing with PostgreSQL on a server, I had to learn how its text search algorithms work and how to compose indexed text search queries.
With requests at 100 ms and sorting by relevance, I also needed to sort books by popularity. Later we added priority for books with covers and authors, so the SQL query is quite big, but it runs amazingly fast. Honestly, I used to think PostgreSQL was some slow 1990s-era database, but now I can see why so many people choose it for production. It even has a JSON validator built in!
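I won't paste the real query, but its shape is roughly this (column names are illustrative; search_vector is a precomputed tsvector column with a GIN index on it):

```js
import { sql } from 'drizzle-orm'

// CREATE INDEX books_search_idx ON books USING gin (search_vector);
const results = await db.execute(sql`
  SELECT title, author_name, cover_id
  FROM books
  WHERE search_vector @@ websearch_to_tsquery('english', ${query})
  ORDER BY
    ts_rank(search_vector, websearch_to_tsquery('english', ${query})) DESC,
    (cover_id IS NOT NULL) DESC, -- prefer books with covers
    popularity DESC
  LIMIT 20
`)
```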
Reverse engineering Steam's secret APIs
When I was wondering where to get a list of all games, I immediately remembered working (if you can even call making one GET request "working") with Steam's famous games list API.
For those who've never used it: Steam has an absolutely free, no-auth, no-special-headers, no-arguments, 100%-uptime JSON list of ALL games/apps/DLCs published on their store. Here it is:
http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json
But with its simplicity comes a problem: there's no way to filter out videos, trailers, DLCs and software, and we only want games. Additionally, the response doesn't include any popularity score, so we were having problems with... let's just call them questionable Cyberpunk 2077-inspired games.

But there is no public Steam API to get games... or is there?
There's a website called steamdb.info which somehow has information about every single game, and then some. I wanted to use their API, but they don't have one:

So I found this website, created by SteamDB's author Pavel Djundik, aka xPaw:

It turns out Steam actually has secret APIs, they're just undocumented. IStoreService/GetAppList worked pretty well, but it still had nothing to sort the results by. Eventually we decided to search using Steam's own API. The endpoint I was interested in, IStoreQueryService/SearchSuggestions, just wouldn't work. From the auto-generated documentation it's unclear what data format it accepts. Everything I tried failed without any errors, so I was ready to give up, thinking the endpoint was broken. And then it worked!
Look at this monstrosity:
```
https://api.steampowered.com/IStoreQueryService/SearchSuggestions/v1/?input_json={"search_term":"cyberpunk","max_results":5,"context":{"language":"english","country_code":"US","steam_realm":"1"},"filters":{"type_filters":{"include_apps":"","include_packages":"","include_bundles":"","include_games":true,"include_demos":"","include_mods":"","include_dlc":"","include_software":"","include_video":"","include_hardware":"","include_series":"","include_music":""}},"data_request":{"include_basic_info":true}}
```
Finally, search was working great:

We also cache the API requests and mix manually added games from our local database, such as Minecraft and Roblox, which aren't published on Steam, into the search results.
Notifications with no websockets and no polling
Usually I'd use polling for notifications: send a simple GET request to the API several times per minute. Sometimes I'd consider websockets, but we didn't want to overload the server with open connections either. So why not combine the two? Let's use long polling!
The concept is actually very simple. If you're familiar with how HTTP requests, sockets and streams work, or if you've ever built file streaming, you know that the server can keep appending data to a response while it's still open, and the client can read it before the connection closes. This is how file transfers work. It looks like a websocket, except we can't send any data to the server over this one-way channel.
I had no idea this concept had a name. I'd implemented it myself when I wanted a progress bar for a server-side task in another project, but it worked really badly: sometimes the server would send two chunks at once and break everything.
While building this project, but before we got to notifications, I learned that this is actually a thing called server-sent events, and it's supported in every major browser. Creating an API endpoint that uses them was really easy, and implementing the fetching on the frontend was even easier.
I didn't want to clutter the backend repository with database connections and cookie/session token validation, so I ended up using JWT for the first time in my life (I finally found a good use case for it). Instead of managing DB and session tokens, the notifications backend just verifies the JWT using a shared secret defined in both frontend and backend, and gets all the details about the user from the token itself. Cool stuff!
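Put together, the endpoint is surprisingly small. A minimal sketch (jsonwebtoken, the query-param token and the port are assumptions, not our exact setup):

```js
import http from 'node:http'
import jwt from 'jsonwebtoken'

http.createServer((req, res) => {
  const token = new URL(req.url, 'http://localhost').searchParams.get('token')
  let user
  try {
    // no DB lookup: the shared secret is all this service needs to know
    user = jwt.verify(token, process.env.SHARED_SECRET)
  } catch {
    res.writeHead(401)
    res.end()
    return
  }

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  })

  // each event is one "data:" line followed by a blank line
  const send = (payload) => res.write(`data: ${JSON.stringify(payload)}\n\n`)
  send({ type: 'connected', userId: user.id })
  // ...subscribe to this user's notifications and call send() as they arrive
}).listen(3001)
```

On the frontend it's literally `new EventSource(url)` plus an onmessage handler.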
Fighting uBlock Origin while unblocking Google Analytics
I don't like Google and try not to spend my money on it or give it my personal data, but I'm not the one deciding which metrics to use in this project. In my own projects I don't use any metrics, because I don't care how many people visit my websites: if they're really popular, I'll start getting job offers in my inbox :)
So the task was to make all adblockers tolerate us leaking user data to Google's servers and not interfere with it. Initially I was considering the gtm-ga-proxy project, but it died a few years ago and was never updated for the new version of Google Analytics. Someone in the issues suggested looking at Google's own concept for making analytics work for an extra 30% of users, called "server-side tagging". Here is the basic idea of how I accomplished the task:
- You still need gtag.js, but with some modifications, so go grab it at https://www.googletagmanager.com/gtag/js?id=[your id from GA]
- We need to self-host it, rename it to something innocuous and proxy all its requests through our server
- We need to replace window.XMLHttpRequest and window.fetch in gtag.js, right before its minified code, to obfuscate the request URLs so adblockers can't match them by pathname (see the sketch after this list)
- Now deploy Google's server-side tagging container using Docker:

```
docker run -p 4321:8080 -e CONTAINER_CONFIG='' --restart=always gcr.io/cloud-tagging-10302018/gtm-cloud-image:stable
```

- Finally, deploy a simple HTTP proxy on the backend that decodes the request pathname and passes the request body and all headers to that Docker container. Make sure to configure nginx to include the real user's IP, the correct Origin header if you're hosting the proxy under a subdomain, and CORS credentials
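The shim we prepend to the self-hosted gtag.js works in this spirit (a simplified sketch; the /metrics path and plain base64 are illustrative, our actual encoding differs):

```js
// Reroute GA/GTM requests through our own proxy with an encoded path,
// so filter lists can't match the usual google-analytics.com patterns
const realFetch = window.fetch
window.fetch = (input, init) => {
  const url = typeof input === 'string' ? input : input.url
  if (/google-analytics\.com|googletagmanager\.com/.test(url)) {
    return realFetch(`/metrics/${btoa(url)}`, init)
  }
  return realFetch(input, init)
}
// ...and the same idea for window.XMLHttpRequest
```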
With base64-encoded requests to a self-hosted proxy and a renamed gtag.js, there's no way for an adblocker to recognize GA. I tested this with uBlock Origin, and even it couldn't block our version of GA.
Showing cookie consent screen to EU users only
The EU requires all websites that use cookies to show a consent screen. We wanted cookie preferences to be accessible to everyone, but to show that annoying banner only to users from Europe.
Actually, not all European users, but just users from Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain and Sweden, plus the United Kingdom, Türkiye, Brazil, South Africa and Japan. And of course, California in the United States.
My idea was to take the request's IP address and match it to a geolocation using some external database. Luckily, we found a free one: lite.ip2location.com, which also has a simple free API that we use to automatically update the database. They even publish the database schema for all popular DBs on their website!

Then we just use some magic to convert 123.12.32.1 into an integer with a country, region and city mapped to it. For every new request we call an API that looks up the country and decides whether to force the cookie preferences dialog. They also provide an IPv6 database, which I decided to add to the IP database too, just in case, though I still don't understand how anyone reads a bunch of colons and letters mashed together.
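The "magic" for IPv4 is just treating the four octets as a base-256 number, which is the key format ip2location's tables use:

```js
// 123.12.32.1 -> 123*256^3 + 12*256^2 + 32*256 + 1
function ipv4ToInt(ip) {
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0)
}

ipv4ToInt('123.12.32.1') // 2064392193
```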
Other insane bugs related to Safari
A cool feature I really liked in the design was a keyboard toolbar with custom formatting tools in the iOS style:

Since it's impossible to add buttons to the iOS keyboard toolbar from JavaScript, we decided to just render them as a fixed part of the webpage, with JS calculating the offset from the top of the page and moving the element.
In most browsers this worked fine with some complex calculations involving the VisualViewport API, but not in Safari. And I'm not even talking about the poor implementation of that API and the hacks I had to discover to deal with Safari's two scroll contexts when the keyboard is open. I'm talking about this:

Safari adds blank space below the webpage when the keyboard is open, and there's no way to remove it. You simply can't render anything in that blank area, because it's outside the root element of the document: it's outside <html>.
Eventually we settled on a static toolbar above the editor, because we wanted consistency across all browsers.
Safari knows better when it should autocomplete. A very basic courtesy would be to let the website's developer decide when they have a login form, but Safari knows better: if you add an <input> whose name attribute contains the word "search" or "username", it will ignore autocomplete="off" and just fill it in.
```html
<!-- Safari autocompletes this -->
<form>
  <input type="text" name="username" autocomplete="off" />
  <button type="submit">Submit</button>
</form>
```
I had to use "ctavalue" as the name and leave a warning for future developers...

I know iOS users are used to swiping from left to right to go back, and obviously this works in Safari too. But while making Creator Tool's customization menu we stumbled upon an unexpected issue: Safari was misreading a finger moving across the color picker as that gesture. The worst part is that Safari doesn't let you, as a developer, disable the "go back" gesture.
With a solution taken from here and some value tweaking, I was able to fix the issue.
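I won't reproduce the original snippet exactly, but the mitigation amounts to something like this: swallow touches that start near the screen edge before Safari can claim them (the element reference and the 30px threshold are illustrative):

```js
colorPicker.addEventListener('touchstart', (event) => {
  const x = event.touches[0].pageX
  // touches starting at the very edge are the ones Safari turns into "go back"
  if (x < 30 || x > window.innerWidth - 30) event.preventDefault()
}, { passive: false }) // passive: false is required for preventDefault to work
```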
Safari also caches backdrop-blur once it's rendered. I suppose that's good for performance, but they could've made it work like every other browser and only re-render it when needed.

It's easy to miss this bug, but it's also relatively easy to fix: just detect Safari's user agent and set display: none on the entire document element for 0 ms to make Safari think everything has updated.
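One way to read that recipe in code (a sketch; the forced reflow in the middle is what makes the 0 ms toggle actually apply):

```js
function forceSafariRepaint() {
  const root = document.documentElement
  root.style.display = 'none'
  void root.offsetHeight // reading layout flushes the style change
  root.style.display = ''
}
```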
While Safari implements VideoEncoder (and it surprisingly works much faster on my MacBook than in any other browser), they kinda forgot to implement MediaStreamTrackProcessor. The funny thing is that MDN says it's ONLY implemented in Safari, while in fact it's implemented in every browser except Safari. And except Firefox, where video encoding doesn't work at all.
Luckily, we can get rid of MediaStreamTrackProcessor altogether by creating each VideoFrame for encoding directly from the constructor:
```js
const frameInterval = 1 / fps // seconds per frame
const frame = new VideoFrame(canvas, {
  // VideoFrame timestamps and durations are in microseconds
  timestamp: currentFrame * frameInterval * 1_000_000,
  duration: frameInterval * 1_000_000,
})
```
But I also want to mention how impressed I was with Safari when I found out it supports the View Transition API. Thank you, Safari: if it weren't for the fact that this browser supports super-easy-to-use native page transitions, I'd have spent about 5 hours implementing them with JavaScript and Svelte, and another 5 hours fixing the bugs they'd cause.
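For reference, the API is almost comically small. A sketch with a fallback for browsers that don't have it:

```js
function transitionTo(updateDom) {
  if (!document.startViewTransition) {
    updateDom() // no API: just update without the animation
    return
  }
  // the browser snapshots the page, runs the update, then cross-fades
  document.startViewTransition(updateDom)
}
```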
The conclusion
You made it! That's the end of this article. It was really fun to write and to walk back through every obstacle we met while building moreof.me, and I genuinely hope this awesome project takes off, and that one night I'll wake up to 100 missed calls about how we urgently need to rent more powerful servers to handle millions of new users!! I'd love to keep working on this project to make it better and profitable. Also, I can finally put some worthy stuff in my portfolio and resume!
Go register an account at https://moreof.me and send your feedback through this page on the website or directly to [email protected]