Main Menu

Recent posts

#1
Development Tips / Whitepaper faggots
Last post by CultLeader - Today at 09:18:52 AM
I've worked in various environments over the years, bare metals and cloud. I'm not a huge fan of counting data and numbers. In fact, the entire whitepaper (muh scientific) industry is a fraud.

Basically, any time some scammer comes along, like AWS, they tout all the imaginary advantages of the cloud:
- Scalability
- Flexibility
- Cost efficiency

And it looks modern and shiny, so for most people logic automatically turns off. People start loving the cloud and they start hating on-premise hardware.

And any suggestion that running your own bare metals might actually be the superior solution in every way will inevitably meet resistance, like:
- Your experience is anecdotal
- Everyone's doing it (no one gets fired for buying IBM)
- There's no wide data available yet to suggest cloud is inferior

It is extra annoying when people try to hide behind science, like "this is not researched enough".

So, for some reason, you can go into the cloud based on feelings, emotions and illogical reasoning, but you need to collect data points and perform science to convince yourself to get out of it?

I knew AWS was a fraud by the time I got my first ever bill and compared it to the costs of normal hosting. I didn't need whitepapers. Just for fun I made the Eden Platform work with AWS, but I'd never suggest running that setup, because AWS will simply drain you on egress bandwidth costs, even if you run only a few servers there for quorum. Even if all the heavy lifting of monitoring is performed on bare metals, we still want metrics from the piece of trash nodes that run on AWS, so AWS will still need to send some data out, because we monitor AWS nodes in the Eden platform.

And I see the fraud slowly unfolding before everyone. By the time this historical fraud of biblical proportions has been committed, someone crunches the data (assuming it doesn't get censored away) and the whitepapers, after decades, finally start telling the truth - that yes, indeed, the cloud is utterly cost ineffective and doesn't make any sense - AWS will already have accumulated unbelievable profits and the next fad will start.

Imagine seeing some old pedophile giving candy to children, and being unable to do anything until the pedophile has already victimized a child. This is the cloud industry, which will kill so many startups with its costs alone, and only years later will the reality become mainstream.

Let's discuss the party lines of the cloud vendors and deconstruct them one by one:

Cloud is cost effective

Nothing could be further from the truth. In fact, if you're talking about AWS and you're talking about cost, you're an idiot already. Basically, I computed the cost of the infrastructure I have: beefy bare metals for which I pay 200 dollars a month in hosting, and my local datacenter for which I pay 100 dollars a month in electricity.

I calculated the number of cores and gigabytes of memory that I have versus the AWS infrastructure we had.

AWS came out at $56.17 per core per month and $13.95 per gigabyte of memory per month. My infrastructure is $2 per core and $0.27 per gigabyte of memory per month.

So, the cost ratios are:
- cores: $56.17 / $2 ≈ 28x more expensive in AWS
- memory: $13.95 / $0.27 ≈ 51x more expensive in AWS
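
For the skeptics, that whole "whitepaper" fits in a few lines of OCaml - nothing in it but the per-unit prices above:

let aws_core = 56.17 and my_core = 2.0
let aws_gb = 13.95 and my_gb = 0.27

let () =
  (* divide AWS per-unit prices by mine to get the ratios *)
  Printf.printf "cores: %.1fx, memory: %.1fx\n"
    (aws_core /. my_core) (aws_gb /. my_gb)
(* prints: cores: 28.1x, memory: 51.7x *)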

Now, the craziest part is... the AWS setup is cost optimized. It has a sophisticated system to run mainly spot instances for most workloads, a bunch of other cost-reduction work was performed as well, and with all this work the most optimistic ratio is still 28x for cores?..

And some idiot managers in meetings say "there's no economy of scale advantage yet to run our own hardware". That's what happens when clueless managers who have probably never replaced a hard disk in their lifetime get to dictate business decisions.

We can scale instantly with the cloud

Yes, you can. Now, let's use our brains, look at this one "anecdotal" cost difference ratio of 28x and think about something.

If we just rented bare metal servers, how much more hardware could we get for the same price if the cost ratio is 28x? 28x more cores. 51x more memory.

How about this scaling strategy: we just buy 10x more hardware than we need with bare metals, don't give a crap about scaling at all for a while, and at 28x / 10x we're still roughly 3x cheaper than AWS?

Or should we run some virtualized garbage, barely breathing services in tiny VMs, and just keep paying 28x for hardware after spending effort to optimize it to the best of our ability?

Automatic updates and security patches

Since I'm running NixOS I only do updates once every couple of years or so. Some people might say the software is too old; I'm like, whatever. If some breach is serious, like the OpenSSL Heartbleed, you'll hear about it. Most of what people consider "security" issues are imaginary and in their heads.

Are your servers only accessible through SSH keys you own?
Are your private connections secured through Wireguard VPN?
Are your ZFS datasets encrypted and all secrets kept in memory, in case someone tries to boot the server from an installer USB stick or CD?
Is swap disabled on your servers?

I can answer yes to all of these, so, for me, patching just because there's a new version - I don't care. I'm building products most of the time these days and the Eden platform lets me focus on that.

In fact, I think I could write another post on how the update procedure itself is a literal attack vector - everyone is being told to update, update, update ASAP. I'll let everyone else update with fresh code, test it for breaches in production, hear the stories about how they were hacked and about the latest data breaches, and then decide whether to update and to which version.

So, again, this is a non-issue, and if you run your virtual machines in the cloud (you 99% will) you'll be responsible for updating the OS anyway.

Faster time to market

I don't understand this argument from cloud proponents at all. Right now I already own a lot of hardware, and my servers are almost idle. As long as I'm not doing some business that requires training or inferring AI models (and the 99% of businesses out there that actually make money, instead of blindly splashing VC funds left and right, don't), I can build hundreds of different website ideas on the same servers, until I code a website or service used by enough customers that I need to expand to more servers. Since I've built the Eden platform I spend most of my time building new businesses. Sometimes I need to replace a disk, but my servers keep running, because ZFS makes a disk failure undetectable to applications.

Managed services

In Eden platform I have managed:
- HA postgres
- HA S3 (MinIO)
- HA queue system (NATS)

Everything is set up in a highly available manner, and a compile time error will tell me if, say, I only have 2 NATS instances where 3 are needed for JetStream quorum.
If I don't like anything about a managed service, I own it and can change it, make more things configurable and so on. I could even fork the underlying components if I wanted. Good luck doing the same with cloud services.

Better customer experience

This point really pisses me off. Having been in both trenches, I can confidently say that managing even thousands of bare metal servers is simpler than dealing with cloud bullshit.

I only remember something magical happening a few times, like a new batch of servers shutting down within 24h; we did a Linux update and the issue was gone. Other than that, it is much easier to debug once you actually own your hardware. I know the physical limits of my network ports and switches, so I just put monitoring on whether they're reached, and on server and hard drive temperatures, and if you're running NixOS nothing really unpredictable will happen. Once I worked out the initial issues in my infrastructure, adjusting ventilation for good server temperatures, I forgot I have these servers. They're just running. Sometimes there's an electricity loss, but I have monitoring for that, I know when to reload in-memory secrets, and my systems are highly available - any DC or server can go down and my projects still work.

Now let me talk about some of the biggest pain points I've been dealing with, where the root cause is... the goddamn cloud:

- Some component is alerting and running unreliably. Root cause? It's running on a spot instance to "save costs" and it often gets shut down because AWS takes our spot instance from us.
In my infrastructure this will never happen. My servers go down very rarely and jobs are rescheduled to run somewhere else. Even shitty, unreliable software will run a lot better if it is not shut down hundreds of times per year because it runs on spot instances. And even spot instances don't deliver the real cost savings we could get from bare metals anyway, so what was the point of running in the cloud?
- Magical connection disconnects for one service. Root cause? Some AWS bullshit limit was hit.
Of course, we don't own AWS's networking. We can't easily monitor it. Our network is doing who knows what. It amazes me that open source people will move heaven and earth to avoid running proprietary software on their servers - proprietary software components are becoming extinct - but for some reason it is okay to host in the cloud, where you now face a myriad of magical networking problems because you don't have access to the network? How is this a thing? In Hetzner your servers just use the VLAN and you configure it. That's it: as long as you're below the limits of the switch and the network card, which you can benchmark with iperf, you'll be good. No magic bullshit because this particular AWS instance type happens to have some specific networking quirk or whatever.
- Server becomes unresponsive. Root cause? The AWS instance reached its memory limits.
You know what happens on bare metals when this happens? The process just gets OOM-killed and I get an alert that the process was killed. Most developers are aware of this issue. Their server doesn't become unresponsive.
- Docker container doesn't start. Root cause? It was built on a newer server with a newer x86_64 instruction set and our AWS server was ancient.
I'll be fair, this could happen on old bare metals too. But the reason I bring this up is that Jeff Bezos knows every server is valuable. They'll never decommission old servers as long as there's any life in them. Think about it: if you saw an ancient server from 15 years ago you'd say "I don't want it, it's old". But if the same dusty ancient server ends up as one of your instances - it is still good for Bezos to give it to you as a VM! How in the world does that make sense?
- K8S pod doesn't start on a node. Root cause? Not enough elastic network interfaces on the node.
Again, this will never happen in the Eden platform. Our networking is extremely simple and doesn't need band-aid services running beside the ancient routing protocols that run the entire internet, written by old graybeards and designed well before AWS was even a thing. Every server in the Eden platform has its own IP and containers use host networking for maximum efficiency. The caveat is that you have to pick a unique port for every service. What happens if I pick a duplicate port? It's a compile time error in the Eden platform. I can even write OCaml to automatically pick free ports, like DNS names - see the sketch below. Much superior to having ENI exhaustion alerts and debugging issues in the middle of the night...
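
A minimal sketch of what such an allocator could look like in plain OCaml (the names here are hypothetical illustrations, not the actual Eden platform API):

(* every service claims a port; a duplicate claim aborts data
   definition, which surfaces as a compile time error *)
let allocated : (string, int) Hashtbl.t = Hashtbl.create 16

let claim_port ~service ~port =
  Hashtbl.iter
    (fun other p ->
      if p = port then
        failwith (Printf.sprintf "duplicate port %d: %s and %s" port other service))
    allocated;
  Hashtbl.add allocated service port

(* or let the allocator pick the next free port, so services are
   addressed by name, like DNS, and humans never juggle numbers *)
let next_free = ref 7000

let auto_port ~service =
  while Hashtbl.fold (fun _ p taken -> taken || p = !next_free) allocated false do
    incr next_free
  done;
  claim_port ~service ~port:!next_free;
  !next_free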

Basically, of all the worst magical crazy bullshit issues I've had, most were in cloud environments. If I'm doing NixOS on bare metals, I'm learning about Linux, which runs on most of the machines on the face of this earth. If I'm debugging AWS networking, I'm just learning about meaningless AWS garbage that might be a passing fad.

In conclusion

AWS is an amazing business model:
- You pay a shit ton more for their services
- You get to debug all the bullshit problems with their infrastructure and work around them on your own
- You're the one blamed in case things stop working.

AWS is like a pedophile that always gets away easily by blaming its victims for everything and is motivated to perform more and more crime without anyone to stop it. Amazing customer experience!

So, I guess at this point there's clearly not enough evidence to prove that the cloud is absolute trash, no whitepapers have been written, and this is pseudoscience. But of course, at the end of the cloud era there will be the people who knew which was better without whitepapers, and there will be the blind followers who soak up all the damage until the mainstream viewpoints have shifted.

Good luck with your AWS account, whitepaper faggots!

And the rest of you have a good day :)
#2
The pattern V2 / Eden Platform updates 2025-04-...
Last post by CultLeader - April 07, 2025, 03:19:20 PM
Hello bois, I added more updates to the Eden Platform https://github.com/cultleader777/EdenPlatform/commit/f2121ba0b8fdb541e6aacc62651acdd8724c0235

Now you can add custom secrets to the secrets.yml and also use those secrets in blackbox deployments.

Another thing I added is support for custom MX/TXT/CNAME records served by our DNS servers. I needed those because I don't have self hosted email yet, so for now I'll rely on third party providers. People usually need those for domain verification and so on.

I did quite a few bug fixes, but I already forgot what specifically.

Seems like I have over half a million views on some posts, although when I upgraded SMF on this forum (not because I wanted to - the hosting provider broke some PHP function with a PHP version upgrade and the forum went offline) the online counter broke down, and I'm not interested in spending time fixing it.

Getting that many post views, I'm glad that calling most leftist software engineers a bunch of faggots finally paid off. Feel free to register as the third user of this forum, bunch of lurkers, lol.

Have a good day bois.
#3
The pattern V2 / Eden Platform updates
Last post by CultLeader - February 05, 2025, 01:55:07 PM
Hello bois, I keep adding features to the Eden platform; things are breaky in the beginning.
https://github.com/cultleader777/EdenPlatform/commit/c2d048609fb697ff43b7ac2a0e7e4db9051eac04

Breaking changes:
1. tld was removed from region
2. in global settings you must pick the tld to use for internal DNS names
3. the tld table had its expose_admin field removed, because the global settings tld exposes admin; things are simpler this way

I added:
- custom environment variables or secrets for blackbox deployments. This is needed if, say, you want to host a database in the Eden platform and you need to expose a private environment variable to PostgreSQL from the app that uses the database; you can do so now
- support for multiple TLDs; you can host as many domains as you want. This was intended to be supported from the beginning, but of course there are issues before you actually test it; now it works
- ingresses for blackbox deployments; basically, you can expose a blackbox service like a WordPress blog or self hosted Jira under any subdomain and host it under the Eden platform
- you can disable DNSSEC for a tld if the provider you bought the domain from doesn't support DNSSEC

Some fields might seem ad hoc, like the basic_auth file field in the blackbox ingress, but that is because I'm deploying features I'm now actually using in my businesses. Inevitably, if someone wants to use the Eden platform they'll have to get their hands dirty, because I don't think I'll start writing thorough documentation anytime soon, although we already have 777 tests.

Peace bois.
#4
The pattern V2 / 90% cost cut from AWS with Ede...
Last post by CultLeader - January 19, 2025, 03:30:31 PM
Hello bois.

I started building out my products with EdenPlatform on top of AWS. I started with only 4 servers across two datacenters for quorum, modest 8GB machines, while having beefy bare metals at the local DC.

It started out at $1000 per month, which is already unbelievable, as any decent VPS provider would offer such hardware at a laughable ~$15 per box, which would be $60 total. But whatever; as long as I could get EdenPlatform working with multiple datacenters, including AWS, I figured it was a proof of concept that you can easily connect the expensive AWS cloud to other clouds or datacenters.

The cost was running up to an unbelievable $2000 per month for four servers. Some AWS fag would scream "visit cost explorer, yoo moost bee doin sumthin wraaang!".

Yes, I was doing something wrong - entrusting my businesses to AWS in the first place. I will not bother with cost explorer and complex AWS pricing rules - thank you.

So I added a Hetzner bare metal server implementation to the Eden platform; it took a week. https://github.com/cultleader777/EdenPlatform/commit/6da40c742c118454310d66d8402d20789e566597

Then, once I got it fully working inside the Eden platform, it took me a day to migrate all of my infrastructure from AWS to Hetzner machines, for which I'm paying a laughable 200 dollars per month for all the machines.

I migrated:
- Nomad servers
- Consul servers
- Vault servers
- Victoria metrics servers
- Alertmanager servers
- Minio servers
- Postgres servers
- Clickhouse keeper servers - the only migration that involved the manual action of reconfiguring peers

And I said goodbye to AWS's overpriced garbage VMs by running terraform destroy with a perfectly satisfied, smiling face.

Mind you, these machines have twice the cores and 8 times more RAM than the AWS pile-of-garbage small machines had. They perform much better and are always snappy, not barely chugging at a modest RAM limit like they were in AWS.

So, in my case, the price reduction for hardware was not just 90%; I got much more powerful and cheaper machines and I still have all the properties of the Eden Platform. The cost ratio is more like a hundred times more expensive in AWS.

And AWS faggots will start screaming "but your database is not scalable, your queues are not scalable, your storage is not scalable!"

First and foremost, if you use an RDS instance in AWS it is not scalable either. If the RDS instance breaks down under traffic, guess who needs to resize it? You. If a Kafka instance is no longer handling the load, who needs to resize it? You. If an ElastiCache instance is too small, who needs to resize it? You. Who needs to set up prometheus exporters for these services? You.

There are very few AWS services that scale out of the box, like S3 and SQS, but the cost of running any server in AWS is laughable. And if you use something like DynamoDB, where you're literally raped for every single request, then if your traffic bursts you'll never get a better price proposition than if you had run a bare metal Postgres instance to begin with.

And guess what? Since I have the OCaml programming language generating EdenPlatform code, I could easily write OCaml functions that burst up compute from any cloud at any time based on metrics. This is not just basic autoscaling like in kubernetes: I can write OCaml code that spins up entire datacenters, the terraform code will be generated, and the backend applications will be scaled onto the new compute machines. I have all the power in my hands to implement any autoscaling strategy I want with EdenPlatform, even querying moon phases at compile time.
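
A minimal sketch of what such a strategy could look like, under heavy assumptions - query_metric, mk_server and def_server are stand-ins for a metrics query and the generated constructors, not the actual EdenPlatform API:

(* hypothetical metric source; the real thing would query Victoria Metrics *)
let query_metric _name = 85.0

(* double the fleet when busy, halve it when idle, never below 3 *)
let desired_nodes ~avg_cpu_pct ~current =
  if avg_cpu_pct > 80.0 then current * 2
  else if avg_cpu_pct < 20.0 then max 3 (current / 2)
  else current

(* stand-ins for generated constructors that would end up as terraform *)
let mk_server ~hostname () = hostname
let def_server hostname = Printf.printf "defined server %s\n" hostname

let define_data () =
  let n = desired_nodes ~avg_cpu_pct:(query_metric "cpu_busy_avg_1h") ~current:6 in
  for i = 1 to n do
    mk_server ~hostname:(Printf.sprintf "worker-%02d" i) () |> def_server
  done

let () = define_data ()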

In the context of EdenPlatform, clouds like AWS or Google Cloud are legacy systems. Temporary stops. You already have managed, highly available Postgres with Patroni, you already have S3 (MinIO), you already have Kafka (self hosted NATS), you already have an analytics database (Clickhouse), you already have DNS (BIND), you already have redundant links across datacenters (Wireguard + BGP routing), and everything is set up in a highly available fashion: any server can go down and I'm still happily chugging along. Not to mention everything runs on rock solid NixOS, with reproducible state on all machines, where you can rest assured the operating system will run a long time without any problems.

In 10 years' time the cloud will be looked at as the biggest fraud in the history of computing. Maybe AI will be the first and the cloud second; both are unbelievable frauds. Startups that serve 10 requests per second spend 100k on AWS while they could serve the same for under a thousand dollars on normal hosting (if they also adjusted their mindsets to never run NodeJS garbage and to run Rust programs instead). Today's DevOps practices will look like a clown show 10 years from now, when most hosted services will be run in a self hosted manner, and a compiler like the Eden Platform will check 99% of mistakes for you and deployments will work the first time.

Dear AWS, I always hated you. You were overpriced, useless garbage from day one. And people who praise you are mindless imbeciles. Goodbye, and I hope I'll never see you again.

Good day bois
#5
The pattern V2 / Eden Platform - The First Alph...
Last post by CultLeader - November 10, 2024, 04:42:03 PM
Hello boys, the time is ripe to release the initial early version of the Eden Platform.

I don't have 7 clouds; I have two out of 7, AWS and Google Cloud, because most people use AWS anyway.

It's been a while, and after 72k lines of Rust implementation code I'm finally running in production and shipping features.

Things are still very early, things are breaky, and there's very little documentation. Although, if anyone wants to figure out how it works, there are plenty of tests for every feature.

So, let's list things that work:
- Backend apps
- Frontend apps (not well tested yet)
- Postgres queries/mutators/transactions
- Clickhouse queries/inserters/mutators
- Nats queues
- Tracing with grafana tempo
- Log collection with grafana loki
- Alerting with prometheus
- Hashicorp Consul
- Hashicorp Nomad
- Hashicorp Vault
- Automatic TLS certs with certbot
- Automatic wireguard VPN across datacenters
- Exposing backend app to any DNS record via a load balancer

So far, as I build apps, I need to touch the Eden platform less and less. The same happened with the EdenDB compiler: the last major feature I added to it was OCaml data modules, and I don't remember the last time I needed to touch it. The same is slowly but surely happening with the Eden platform - the more I work, the more I work on my applications and the less I need to touch the Eden platform.

I can already serve any html imaginable to users, so I can implement most of the websites out there. However, I think the most major feature currently missing is self hosted email. I also want to add FoundationDB next year, just to have one fast and reliable key value store for those very rare cases where you need one.

I don't plan to spend too much time on documentation. If you want to figure out the behavior of a certain table or feature, read the `wiki` directory and also read the tests on how all the features work; there are over 750 cargo tests in the initial release, as I usually try to thoroughly test every new feature.

So, from now on I'll keep building applications (which I'll never open source) on top of the Eden platform, and the Eden platform only needs to keep maturing at this point.

Have a good day bois
#6
The pattern V2 / Eden DB Improvements 3
Last post by CultLeader - April 27, 2024, 10:29:26 AM
Sup bois, I've added an extremely powerful feature to EdenDB: OCaml data modules. What this allows you to do is define data in OCaml using typesafe generated constructs.

I did this a while ago in this commit https://github.com/cultleader777/EdenDB/commit/9270c3e42ab28410e8ba3018f76b4eea0b586da4 - I just didn't get around to describing it until now.

Motivation? The Eden data language is fast to parse but will never be as powerful as a standard typesafe programming language.

For instance, if I want to define 10 similar nodes in a cloud, doing it with OCaml is trivial: run a loop and you're done (there's a sketch of this after the example below).

People write utter abominations like jsonnet, or they add hacks like the count variable and for_each in terraform.

In reality we just need a good programming language to define our data, and then we don't need to learn useless garbage like jsonnet, or how to program in terraform (which is a nightmare). The terraform output is just assembly that our compiler emits later; we don't deal with it directly.

Here's an example.

Say we have a main.edl file:


DATA MODULE OCAML "some-module"

TABLE test_table {
    id INT PRIMARY KEY,
    some_text TEXT DEFAULT 'hello',
}


And we have an OCaml file at some-module/bin/implementation.ml:

open! Context
open! Db_types

let define_data () =
  mk_test_table ~id:123 () |> def_test_table;
  (* override the 'hello' default of some_text; note the distinct id,
     since duplicate primary keys are a compile time error *)
  mk_test_table ~id:124 ~some_text:"overriden" () |> def_test_table;
  ()


So OCaml allows us to define table data in OCaml. We didn't write the mk_test_table function (it is generated). The define_data function is required by the generated code. The some_text column gets its default value if unspecified. If your OCaml implementation mismatches the schema, it's a compile time error.
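
And this is where the loop from the motivation pays off. The same define_data, using the same generated constructors, could just as well define 10 similar rows (the row contents are made up for illustration):

let define_data () =
  (* ten similar rows from a plain loop - the thing jsonnet and
     terraform's count/for_each exist to approximate *)
  for i = 1 to 10 do
    mk_test_table ~id:i ~some_text:(Printf.sprintf "row %d" i) () |> def_test_table
  done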

To someone used to the standard implementation of compilers, which do very boring stuff like linking executables from explicit outputs, it might not be obvious how this works. In fact, I don't recall any compiler that does anything similar to what is done here. If a codegen step is performed, it is usually performed in some Makefile task before compilation.

So, this is how it works:
1. The EdenDB compiler parses all .edl sources with includes
2. We first process the table schemas and their interconnections; any error there (non existing foreign key column etc.) is an EdenDB compiler error
3. Now we know all the schema that will be in EdenDB; we just haven't inserted any data yet and haven't checked foreign keys, constraints etc.
4. At this point we actually run the OCaml codegen for the OCaml data module defined at some-module.
   We generate: dune, dune-project, context.ml, context.mli, db_types.ml, implementation.mli, main.ml and a dummy implementation.ml if it doesn't exist yet.
   The user is responsible for modifying the implementation.ml file with his data; he can have other OCaml files with utility functions and everything.
   We assume that dune is used for OCaml.
5. Now we insert the EdenDB data that is defined in EDL
6. At this point, the EdenDB compiler runs dune exec for the data modules, reads the json dump of the defined data, and inserts the data into the EdenDB state according to the schemas
7. The rest of the EdenDB checks are performed, like checking duplicate primary keys and foreign key existence on all of the data, whether it is defined in EDL, Lua or OCaml
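
To make step 6 concrete, the generated main.ml could look something like this - a hypothetical shape, the actual generated code may differ:

(* generated main.ml: run the user's definitions, then hand the rows
   back to the EdenDB compiler as a json dump on stdout *)
let () =
  Implementation.define_data ();
  print_string (Context.dump_defined_rows_json ())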

I haven't seen any compiler do something like that, so I thought it is an interesting design. And I'm 99% sure, being 70k lines of code into the Eden platform, that this is how I will define my data at the end of the day.

So, the current Eden platform pipeline is:
1. Run the EdenDB compiler, which runs the OCaml data modules and defines the data of all of our infrastructure. Basic database correctness checks are performed here
2. The Eden platform analysis step, where we perform all the checks regarding our infrastructure, say, whether a defined node ip belongs to the DC subnet, that we have 3 or 5 consul servers per DC and so on
3. The codegen step, which compiles the rest of our project, including provisioning, postgresql migrations, nix configs, and a gazillion of other stuff. There are files the user can edit manually in an Eden platform project directory (mainly the business logic of EdenDB applications).

Have a good day bois!
#7
My Rants / ChatGPT - most overrated progr...
Last post by CultLeader - February 17, 2024, 11:11:23 AM
While working on the Eden platform (at this point I have AWS and Google Cloud talking together, and I did a bunch of internal refactorings, NixOS root on zfs etc.; anyway, 5 more clouds to go until release), I thought: could ChatGPT ever be productive enough to maintain infrastructure? Not that I haven't heard of it - I'm a user of ChatGPT like everyone else - but when I had actual problems in infra, even when I gave it as much context as I could, it was next to useless.

In linux, when running virtual machines, iptables rules by default apply to the VM guests (typically via the br_netfilter bridge-nf-call-iptables sysctl), which should be disabled if you value your sanity. I had been debugging this for a day, and I asked ChatGPT to guide me, with logs, the whole server topology and context. For crying out loud, I even pasted in the culprit - the iptables rules which were blocking the traffic - and it didn't suggest that the iptables rules were at fault.

When I think about it, programming is extremely precise. Having never used FRRouting before implementing routing for the Eden platform, it was quite a pain to get networking working across clouds. For google's faggity cloud I spent a lot of time debugging why in the world an L2 frame doesn't travel between hosts, only to find out google implements its own faggity andromeda network abomination which doesn't even support L2 traffic. Meaning, in google cloud you have to rely on their slow router garbage, which can never be adjusted in realtime via terraform, and they suggest you run your own bgp node to interact with their faggity router! All because they don't do L2 traffic. I cannot spend a bunch of time dealing with the faggotry of google cloud, so I simply encapsulated the traffic between eden platform datacenters (which could be any datacenter implementation) in the IP GRE protocol so it would fly as L3 traffic.

I will not treat any cloud specially and use their special faggotry nonsense just to make nodes across clouds talk. I have public ips in every cloud, so I should be able to implement my own routing, no? Google cloud faggots...

So, anyway, when doing this complex networking, usually a single configuration misfire and the traffic is gone. There's no leeway for error. I even had to redo the networking, abstract it to have attributes for datacenters, and write configuration simulation in tests, so that now that I have got to a working state I can freeze the configuration and know for a fact I didn't break anything.

Maybe I just like the pain, but I can't wait for the day when all of the issues I've been dealing with simply become compile time errors in high level Eden platform data, and I don't have to deal with such low level details and can focus on finally shipping the suite of applications I want to build on top of the Eden platform.

ChatGPT, even given all of the context - if it makes a mistake in a single place, things don't work. And every time I used it to generate even skeleton terraform configs, I always had to run and debug them. There was not a single time where I pasted code from ChatGPT and it just worked, except for simple queries like "write a lua function that sums 1 to 10". Apart from these toy examples, AI is next to useless.

So the only use for AI seems to be when you need to write a polite email, generate a picture, or rewrite some sentences - anywhere precision is not important. In software engineering precision is crucial.

Let me ask you one thing: would anyone use a library that leaks memory one time in a million? No, it would be dismissed as garbage. You have to rigorously test it to make it usable in the real world, in production.

Some indian faggot said with a smug face "there will be no programmers in 5 years". Sure, I guess there will appear a batch of people blindly copy pasting things from ChatGPT, and they'll eventually still need a programmer to fix and debug the mess they have made.

AI coding monkeys think from a very narrow, short perspective. For some reason, AI fags cannot imagine producing programs any other way than having a gazillion repositories with different programming languages, where you MUST parse and process all of that context before you can contribute. Someone is doing JavaScript with NodeJS? Parse and analyze that faggotry! Someone is doing Ruby and their app is full of bugs? Parse and analyze this faggotry too!

What I do with the Eden platform is radically different. I'm looking down, like I mentioned in another post. My data is defined in tables and columns. All columns are sequential in the column store provided by EdenDB, so analyzing the data is blazing fast and can be parallelized across all cores. All applications are in Rust (I'll add OCaml if NATS support is ever added, or I might expose a Rust C library for OCaml to talk with NATS). So, in the Eden platform you just add high level data; it is all compiled and analyzed, and every application interacts correctly from day one. There is no longer a need for NodeJS faggots or ChatGPT to tell you what's wrong or to write many tedious tests. Currently, when writing an app in the Eden platform, literally 900 lines of Rust get generated and I write the remaining 100, which interact in a typesafe manner with the rest of the ecosystem, making me literally a 10x developer. I don't fear interacting with databases, queues or other apps even without tests.

That is much more useful than some garbage generated by ChatGPT which may or may not work (it usually didn't work for me). I want things to work the first time, with compile time errors, knowing everything will work. I'm not interested in some probabilistic vomit I need to debug again and again and again.

Even if I let ChatGPT write documentation, I'd need to proofread it anyway - but hey, at least it sticks to word vomit!

Now, I still believe the Eden platform will drastically reduce the engineering power needed, but not by producing random vomit that doesn't work - it will be code generation and compile time errors before shipping to production. That will be the big firing of the Ruby and JavaScript monkeys, not ChatGPT.

Have a good day bois!
#8
Development Tips / Feature velocity
Last post by CultLeader - November 28, 2023, 07:21:40 AM
Today I want to talk about the pain that a lot of engineers cause themselves.

I will tell you right now: no company at a high level gives the tiniest crap about the implementation details of any component you create. Someone might say I'm saying the sky is blue. Apparently not, because the world is infested with yaml abominations and NodeJS.

Let's pick an example: there are two NodeJS services. There's a mismatch in how one service calls another, and it's an error. It turns into a Jira ticket.

The project manager asks what's wrong with that. The engineer says "well, one NodeJS service doesn't know the schema of the other service, we need to run tests". The poor project manager has to nod his head and say "ok ok".

This is a bullshit excuse. The system you came up with, oh fellow engineer, of hundreds of NodeJS microservices, is an abomination and you should feel bad. You're an idiot and it is your fault. When you think about it, if both services were in the same executable, in Rust, it would be just a compile time error and you'd have to fix it to ship. When the services are separated, the exact same error for some reason becomes okay!

Any time such an excuse comes to light - "we can't verify the kubernetes deployment yaml, hence it errors out now, because our helm chart repo is an abominable pile of yamls that don't work together" - it's the fault of idiot engineers.

Anything that gets in the way of the high level goal of adding features to make your customers happy is the fault of the engineers. You picked Ruby on Rails garbage and now suffer many production runtime errors because of it? It's your fault. Your system is a repository of crappy, incoherent yamls with no typesafe enforcement, so you resort to test environments to catch such errors? It is your fault. You can't deploy a new database in 5 minutes with HA, replication and backups, interacting with your app in a typesafe manner? It is your fault.

Pagan monkeys have this funny idea that every system should be open and you should be allowed to put anything into it, like dynamic type schema registration, dynamic execution hooks or whatever. "Great software is not built, it is grown!". Imbeciles. They're saying that because their mom was an open system - open to 100 different guys, that is - and look at them now.

What I do with the Eden platform is radically different. There are no incoherent yamls; there's a single source of truth database, which can be defined in the eden data language or lua (probably also with webassembly in the future if there's a need). I already implemented AWS terraform code generation, which uses an AWS transit gateway if you have more than one datacenter in AWS, but uses wireguard to connect to all other clouds. I'll implement 6 more clouds for the initial version and they'll all talk together via wireguard VPN, with high availability and OSPF routing since day one (you can't deploy half-assed, non production ready configurations in the Eden platform).

You see, I said from the very beginning that if something doesn't work in production in the Eden platform, it's my fault as an engineer. It is much more difficult to catch all errors at compile time than to just leave users swearing and sweating in production over runtime errors caused by incorrect configurations. Now I have no excuse like today's yaml fags, so I actually solve these problems by analyzing data in the Eden platform compiler, which tells me within a second if my data is wrong. It tells me instantly, with a compile time error, that I deployed all DB instances in a single datacenter when they should be scattered around the datacenters of a region to tolerate DC loss. It tells me instantly if a query an application uses is no longer correct because the schema changed. It tells me instantly, during compilation, if one frontend app's link can no longer reach another frontend app's page.

So all of these just become compile time errors. So guess what happens to feature velocity? Since all of the previous excuses become compile time errors, there's no reason why you couldn't implement a REST call to another service in 5 minutes - the compiler tells you instantly if the schema changed or is incorrect, because we took responsibility. The compiler will tell us instantly if a prometheus alert is using a non existing series. The compiler tells us instantly that we forgot to configure a virtual router IP in certain datacenters and clouds where it is needed.

What an old, experienced SRE had to do himself by knowing the system, the compiler now tells us to do, simply because it has all the information together.

So what do you do with the rest of your time once you're using the Eden platform (I hope to finally release the initial version next year)? You just develop features, and everything works together from day one. And you can do the work of 100 engineers alone. I'm very excited to release this project next year, as no one has ever done this and it will radically change the way things are done. Most yaml masturbating excuse engineers - you're likely to get fired soon.

Have a good day bois!
#9
Development Tips / Looking down
Last post by CultLeader - August 30, 2023, 05:53:50 PM
For my thoughts are not your thoughts, neither are your ways my ways, saith the LORD. For as the heavens are higher than the earth, so are my ways higher than your ways, and my thoughts than your thoughts. - Isaiah 55:8-9
And he said unto them, Ye are from beneath; I am from above: ye are of this world; I am not of this world. - John 8:23
I am Alpha and Omega, the beginning and the ending, saith the Lord, which is, and which was, and which is to come, the Almighty. - Revelation 1:8

Let's talk about the two directions software is developed in this world. Say you develop an open source library. You're in for a world of hurt. How many different people in different contexts will try to use your library? Some idiot in a golang book (yes, I've read a golang book just to hear them out; that reassured me even more that golang is trash, and one day I might write a dedicated post roasting golang) claims that if you want to make useful software it has to be generic. There is that evil word again.

The truth of the matter is, if you make stuff generic your things are inherently limited. You are looking up. Looking up means looking up to the sky: whether it will rain or not, you don't control that. The people that use your library are the ones from the higher context, in a specific organization, looking down into your library to see whether it is usable in their context or not. Long story short, the users of your library have all the context, so they have a much easier time deciding what to do. They don't have to use your library; for their specific case they could develop their own custom component.

As soon as you put yourself into the context of what you want to achieve, you start looking down. You have full control of your project. Well, I guess someone could say that, as I'm developing the Eden platform (I'm roughly 60% there, I think - developing an infrastructure tool that has cloud/provisioning/versioning/logging/monitoring/ci/cd/security/applications/load balancing/databases/queues etc. integrated together is quite a lot of work, believe me), I am imagining all the context.

In the Eden platform everything is assumed. The private ip scheme is assumed. Routing amongst subnets is assumed. I assume datacenters have certain ip prefixes, like 10.17.0.0/16 for one datacenter and 10.18.0.0/16 for another, hence the routing is simple. I assume a datacenter can have up to 65k hosts, all inside /24 subnets within the datacenter, like 10.17.1.0/24 for one subnet and 10.17.2.0/24 for another. I assume every datacenter has at least two wireguard gateways to connect to all other datacenters, which is checked at compile time and you must fix it to ship. I use OSPF in FRRouting to route across subnets and regions, and I don't need anything else.
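
This is the kind of check that fixed assumptions buy you. A toy OCaml sketch of validating that a node's ip belongs to its datacenter's /16 prefix (a hypothetical helper, not the actual Eden platform code):

(* with the 10.<dc>.<subnet>.<host> layout, membership is a prefix test *)
let ip_belongs_to_dc ~dc_prefix ~ip =
  String.length ip > String.length dc_prefix
  && String.sub ip 0 (String.length dc_prefix + 1) = dc_prefix ^ "."

let () =
  assert (ip_belongs_to_dc ~dc_prefix:"10.17" ~ip:"10.17.1.10");
  assert (not (ip_belongs_to_dc ~dc_prefix:"10.17" ~ip:"10.18.1.10"))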

So, developing the Eden platform forced me to think globally about how one organization can develop everything as a single unbeatable monolith which supports an infrastructure of around eight million hosts. I am thinking globally. And that required some refactorings on my part early on, to figure out how regions, datacenters and the interconnections between datacenters play together for all the infrastructure. I was forced to go through those decisions early - not in production, when you need to expand to multiple datacenters, nobody has a clue how to do that, and it takes years because of poor early decisions.

The inferior alternative to looking down is to look up - to be someone's bitch. For instance, I could have used something like dnscontrol for DNS and prayed they support such and such provider for all dns records. Well, that's not good enough if you want to make all the assumptions and have all the control. So guess what the Eden platform does? That's right, we run our own DNS servers; we use rock solid BIND that has stood the test of time, hence we can look down. We do our own routing. We do our own builds and rollouts with the rock solid and reproducible Nix build system.

I'm not there yet, but the plan is also to just generate all the terraform code needed for every datacenter. Say you specify in the datacenter `implementation` column that aws is the implementation; then the Eden platform will generate all the terraform for all the machines in their appropriate subnets, which obey the Eden platform ip scheme, and they'll be automatically connected via wireguard to the rest of the Eden platform datacenters, which might be on premise, google cloud, azure etc. And if more than one AWS datacenter exists, they'll be connected with the native AWS transit gateway instead of wireguard. We can do tricks like that just because we have all the information about our infrastructure and don't have to limit ourselves to wireguard across datacenters.

It's a lot more work than, say, some absolutely worthless project that I don't understand why anyone would use - FlywayDB. Typical Java crap that... executes schema migrations. And a lot of people were convinced to use this nonsense. The Eden platform rolls its own schema migrations, because it's very simple to do BEGIN; execute all migrations; COMMIT ourselves. The database already provides the abstractions that make this easy. However, if a key value store were used, it would be much more complex.
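
A minimal sketch of that roll-your-own approach, assuming a hypothetical exec function that runs SQL on a Postgres connection (the single BEGIN/COMMIT wrapper is sufficient because Postgres DDL is transactional):

(* run all pending (version, sql) migrations atomically and record
   them in a schema_migrations table *)
let run_pending_migrations exec (pending : (int * string) list) =
  exec "BEGIN";
  List.iter
    (fun (version, sql) ->
      exec sql;
      exec (Printf.sprintf
              "INSERT INTO schema_migrations(version) VALUES (%d)" version))
    pending;
  exec "COMMIT"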

So I do not want to release half assed trash like FlywayDB, which does one thing and leaves you to integrate it with everything else. For the Eden platform to be useful it must connect everything and make it work together from day one. The Eden platform should be all that you'll need, where everything is integrated and working together since day 1, just like Apple strives to do. I want to make assumptions about all the infrastructure, hence we must take control of all the components. When we take control of all the components, we are looking down and our job becomes easier; if we were looking up, a project such as the Eden platform would be practically impossible.

A man is taller than a woman and looks down on a woman. Say many men use the woman, like the library we talked about. What do we call a woman that many men looked down upon and used? We call her a harlot, and rightfully so.

In the perfect design we strive for, if you look up you can only look up in one direction: have only one master, to whom the component fully submits, uninfluenced by any other component. No woman can ever be shared between two men; it is an abomination.

Of course, the Eden platform uses third party components that are generic, used up and, because they are not aware of the specific context, suboptimal in certain places. This is just the beginning, to get things up and running.

Once the Eden platform is built and running in production, then we can allow ourselves custom, better optimized components in the contexts where the generic ones are limiting. But that is very far in the future.

Long story short, if you want to make your life easy, do not be like an open source fag that desperately tries to support every new itch of a platform under the sun. For instance, I'll likely release the Eden platform only for NixOS, because I consider other operating systems a waste of time. To make your life easy, start looking down, globally, at the global context of your app, so you can make assumptions and avoid so much trouble down the line, where most other engineers will be bogged down by meaningless, indecision inducing details.

Have a good day bois.
#10
The pattern V2 / Eden DB Improvements 2
Last post by CultLeader - June 07, 2023, 01:16:27 PM
Sup guys, it's been a while, you lurkers that read my posts but stay silent.

I'm working on the Eden platform now, and the basis of it is EdenDB, to which I make changes as I need them while working on the eden platform. I've made quite a few improvements since the last improvements post.

Ability to refer to other element's children through REF FOREIGN CHILD syntax

The need for this arose when I needed abstractions, like NATS or PostgreSQL instances, to specify on which volumes they reside.

Say this is a server table (simplified):
TABLE server {
  hostname TEXT PRIMARY KEY,
}

It has volumes as children (simplified):

TABLE server_volume {
  volume_name TEXT PRIMARY KEY CHILD OF server,
  mountpoint TEXT,
}


So say you have server-a and server-b with defined volumes:

DATA server(hostname) {
  server-a WITH server_volume {
    pgtest1, '/srv/volumes/pgtest1';
  };
  server-b WITH server_volume {
    pgtest1, '/srv/volumes/pgtest1';
  };
}


And we need to define a postgres instance of a single cluster (simplified from the real schema):

TABLE db_deployment {
    deployment_name TEXT PRIMARY KEY,
}

TABLE db_deployment_instance {
    deployment_id INT PRIMARY KEY CHILD OF db_deployment,
    db_server REF FOREIGN CHILD server_volume,

    CHECK { deployment_id > 0 },
}


So now we say that we want our foo logical database with two instances defined; one will be the master with patroni and the other a replica.

DATA STRUCT db_deployment [
  {
    deployment_name: foo WITH db_deployment_instance [
      {
        deployment_id: 1,
        db_server: server-a=>pgtest1,
      },
      {
        deployment_id: 2,
        db_server: server-b=>pgtest1,
      },
    ]
  }
]


Notice how we refer to the volumes of `server-a` and `server-b`, where we want the data to reside, using the `=>` operator, which means child element.
We could nest this arbitrarily deep.

So, by using the foreign child syntax we specify two things:
1. Which server runs the database (the parent of the volume)
2. Where on that server the data resides

And, of course, you cannot refer to non existing elements in EdenDB, so you cannot make a typo specifying a non existing server or volume (unlike in typical yaml hell).

Full gist is here https://gist.github.com/cultleader777/41210e58026a0bec29f7e014945e40b0

Refer to the child element with REF CHILD syntax

Another issue I had working on the eden platform is that I wanted to refer to a child element from the parent element.
Say a server has multiple network interfaces; how do we specify which interface exposes ssh for us to connect to and provision the server with?

Say we have this schema (simplified from the original):

TABLE server {
  hostname TEXT PRIMARY KEY,
  ssh_interface REF CHILD network_interface,
}

TABLE network {
    network_name TEXT PRIMARY KEY,
    cidr TEXT,
}

TABLE network_interface {
    if_name TEXT PRIMARY KEY CHILD OF server,
    if_network REF network,
    if_ip TEXT,
    if_subnet_mask_cidr INT,
}


So, a network interface is a child of the server, and we can refer to the child from the parent, so that we know through which network interface we ssh into every machine.

DATA STRUCT network [
  {
    network_name: lan,
    cidr: '10.16.0.0/12',
  }
]

DATA server(hostname, ssh_interface) {
  server-a, eth0 WITH network_interface {
    eth0, lan, 10.17.0.10, 24;
  };
}


Also, we can refer to a child of arbitrary depth with the `=>` syntax from the parent.

TABLE existant_parent {
    some_key TEXT PRIMARY KEY,
    spec_child REF CHILD existant_child_2,
}

TABLE existant_child {
    some_child_key TEXT PRIMARY KEY CHILD OF existant_parent,
}

TABLE existant_child_2 {
    some_child_key_2 TEXT PRIMARY KEY CHILD OF existant_child,
}

DATA existant_parent {
    outer_val, inner_val=>henlo WITH existant_child {
        inner_val WITH existant_child_2 {
            henlo
        }
    }
}


As usual, EdenDB fails to compile if the referenced child elements don't exist.

Full example here https://gist.github.com/cultleader777/d4f26449d2814a30d6b34e55c5d19c76

Detached defaults with DETACHED DEFAULT syntax

While working on the Eden platform I had the issue that if a default is defined in the database schema then it can't be changed by the user.
This is because the eden platform schema resides in one set of files and the user defines data in his own set of files.

Say, server has hostname and belongs to tld in the schema file

TABLE server {
  hostname TEXT PRIMARY KEY,
  tld REF tld DETACHED DEFAULT,
  fqdn TEXT GENERATED AS { hostname .. "." .. tld },
}

TABLE tld {
  domain TEXT PRIMARY KEY,
}


Now, it would not be nice to hardcode in the default which TLD a server belongs to; every user of the eden platform will have his own domain.
So the tld column is a DETACHED DEFAULT. It must be defined by the user, and can only ever be defined once. If it is not defined, or is defined multiple times, it is a compiler error.


DEFAULTS {
  // defines default for table 'server' and column 'tld'.
  // you cannot define defaults for non existing tables
  // and column must be marked as detached default
  server.tld epl-infra.net,
}

// now we can define data with detached default
DATA STRUCT server {
  hostname: server-a
}

DATA tld {
  epl-infra.net;
}


Full example here https://gist.github.com/cultleader777/3823ccef5c22b4b086c2468ab9e2e89c

And these are the main features I've needed to add to EdenDB so far while working on the eden platform compiler.

See you later bois!