
#31
Main Thesis / Part 5 - The World of Inequali...
Last post by CultLeader - June 03, 2021, 10:40:49 AM
Earlier I have mentioned how I cannot stand the attitude that "every programming language is equal, but some are better for some things and some for others". Same with databases, infra management tools and you name it. Today we'll debunk this leftist cuck nonsense.

Humans

Consider every human and his attributes: height, weight, how smelly they are, metabolism and so on. All are decided by genes. So, if we take just one dimension, height, does it make sense to categorize people by height? By centimeters? There are tons of people who would fall into the same centimeter of height, yet by millimeters they are still of different heights. So, height is a spectrum. People are not equal in height. If we picked two people randomly from this earth and I had to bet "are they the same height?", I'd always bet that they are different heights and be right 99% of the time.

Same with the other dimensions, like weight or how smelly they are: all people differ greatly along these dimensions. So even on single dimensions taken separately, people are very different.

Now let's take all these attributes together and say that every person is described by an array of scores, one per attribute, from 0.0 to 1.0.

| name  | height | weight | smelliness |
|-------+--------+--------+------------|
| alice |    0.6 |    0.4 |        0.8 |
| bob   |    0.8 |    0.6 |        0.5 |


What are the odds that we will find two equal rows in such a table with infinite attributes? Practically zero, for every person on this earth.

Okay, now consider that we need to evaluate whether a person is fit or unfit for a given job. Say we need to evaluate basketball players, and we pick the few dimensions we need.

| name  | height | weight | speed | accuracy | quick thinking |
|-------+--------+--------+-------+----------+----------------|
| bob   |    1.0 |    0.5 |   0.3 |      0.1 |            0.2 |
| roy   |    0.8 |    0.9 |   0.1 |      0.7 |            0.4 |
| dylan |    0.8 |    0.4 |   0.9 |      0.9 |            0.8 |


And the sum of these attributes in float space would be this person's suitability for performing well in the basketball arena. What are the odds of these sums being equal between candidates? Next to none.
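This suitability-as-a-sum idea is trivial to sketch in OCaml (the scores are the ones from the basketball table above):

```ocaml
(* Suitability of a candidate as the plain sum of his attribute
   scores: height, weight, speed, accuracy, quick thinking. *)
let suitability scores = Array.fold_left ( +. ) 0.0 scores

let bob   = [| 1.0; 0.5; 0.3; 0.1; 0.2 |]
let roy   = [| 0.8; 0.9; 0.1; 0.7; 0.4 |]
let dylan = [| 0.8; 0.4; 0.9; 0.9; 0.8 |]

let () =
  Printf.printf "bob=%.1f roy=%.1f dylan=%.1f\n"
    (suitability bob) (suitability roy) (suitability dylan)
```

This prints bob=2.1 roy=2.9 dylan=3.8: three candidates, three different sums, no ties.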

So, we clearly see that humans are exceedingly unlikely to be equal amongst themselves: in any single attribute, across all their attributes, or in the subset of attributes needed for a specific job.

Programming languages

Okay, let's apply the same logic to programming languages, with each attribute scored from 0.0 to 1.0, where 1.0 is the best and 0.0 is the worst (all the values are my opinions).

| name  | typesafety | conciseness | syntax sugar | performance | memory usage | metaprogramming capabilities | total language power as sum of all |
|-------+------------+-------------+--------------+-------------+--------------+------------------------------+------------------------------------|
| lua   |        0.0 |         0.6 |          0.1 |         0.4 |          0.8 |                          0.0 |                                1.9 |
| ruby  |        0.0 |         0.9 |          0.8 |         0.1 |          0.2 |                          0.5 |                                2.5 |
| lisp  |        0.0 |         0.6 |          0.2 |         0.3 |          0.5 |                          0.6 |                                2.2 |
| rust  |        1.0 |         0.3 |          0.4 |         1.0 |          1.0 |                          0.7 |                                4.4 |
| ocaml |        1.0 |         0.8 |          0.2 |         0.9 |          0.8 |                          0.3 |                                4.0 |
| go    |        0.6 |         0.2 |          0.0 |         0.8 |          0.8 |                          0.0 |                                2.4 |
| java  |        0.5 |         0.2 |          0.2 |         0.7 |          0.0 |                          0.0 |                                1.6 |


What are the odds of these ever being equal among different programming languages? Some language will have to come out on top with the biggest score.

As you can see, overall Rust somewhat overtook OCaml. But that assumes the weights are equal, that is, that I value all the properties equally in this sum. The same logic applies to weights of importance: they cannot be equal either! Let's see what I believe the weight of each attribute is, from 0.0 to 1.0.

| attribute name               | importance |
|------------------------------+------------|
| typesafety                   |        1.0 |
| conciseness                  |        0.6 |
| syntax sugar                 |        0.0 |
| performance                  |        0.5 |
| memory usage                 |        0.7 |
| metaprogramming capabilities |        0.0 |


As you can see, typesafety is paramount as I'm using the pattern in my projects. Conciseness should be decent, but a perfect score is not necessary; it shouldn't be as bad as Java, is all I'm asking. Syntax sugar is completely irrelevant, since I'm using the pattern and can generate as much sugar as I want. Performance should be decent, but is not of paramount importance. Memory usage should also be decent, as I want to get the most out of cheap boxes; I don't want big beefy machines just to run memory-hungry JVMs. Metaprogramming capabilities are also irrelevant, as I'm using the pattern.

So, how about we multiply the weights with languages and see the final scores that we get?

| name  | typesafety | conciseness | syntax sugar | performance | memory usage | metaprogramming capabilities | total language power as sum of all |
|-------+------------+-------------+--------------+-------------+--------------+------------------------------+------------------------------------|
| lua   | 0.0 * 1.0  | 0.6 * 0.6   | 0.1 * 0.0    | 0.4 * 0.5   | 0.8 * 0.7    | 0.0 * 0.0                    |                               1.12 |
| ruby  | 0.0 * 1.0  | 0.9 * 0.6   | 0.8 * 0.0    | 0.1 * 0.5   | 0.2 * 0.7    | 0.5 * 0.0                    |                               0.73 |
| lisp  | 0.0 * 1.0  | 0.6 * 0.6   | 0.2 * 0.0    | 0.3 * 0.5   | 0.5 * 0.7    | 0.6 * 0.0                    |                               0.86 |
| rust  | 1.0 * 1.0  | 0.3 * 0.6   | 0.4 * 0.0    | 1.0 * 0.5   | 1.0 * 0.7    | 0.7 * 0.0                    |                               2.38 |
| ocaml | 1.0 * 1.0  | 0.8 * 0.6   | 0.2 * 0.0    | 0.9 * 0.5   | 0.8 * 0.7    | 0.3 * 0.0                    |                               2.49 |
| go    | 0.6 * 1.0  | 0.2 * 0.6   | 0.0 * 0.0    | 0.8 * 0.5   | 0.8 * 0.7    | 0.0 * 0.0                    |                               1.68 |
| java  | 0.5 * 1.0  | 0.2 * 0.6   | 0.2 * 0.0    | 0.7 * 0.5   | 0.0 * 0.7    | 0.0 * 0.0                    |                               0.97 |
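The weighted version of the computation is a one-liner in OCaml; this sketch uses the score and weight tables above:

```ocaml
(* Weighted language score: each attribute score is multiplied by my
   importance weight for that attribute, then everything is summed.
   Attribute order: typesafety, conciseness, syntax sugar,
   performance, memory usage, metaprogramming. *)
let weights = [| 1.0; 0.6; 0.0; 0.5; 0.7; 0.0 |]

let weighted_score scores =
  Array.fold_left ( +. ) 0.0 (Array.map2 ( *. ) scores weights)

let languages = [
  "lua",   [| 0.0; 0.6; 0.1; 0.4; 0.8; 0.0 |];
  "ruby",  [| 0.0; 0.9; 0.8; 0.1; 0.2; 0.5 |];
  "lisp",  [| 0.0; 0.6; 0.2; 0.3; 0.5; 0.6 |];
  "rust",  [| 1.0; 0.3; 0.4; 1.0; 1.0; 0.7 |];
  "ocaml", [| 1.0; 0.8; 0.2; 0.9; 0.8; 0.3 |];
  "go",    [| 0.6; 0.2; 0.0; 0.8; 0.8; 0.0 |];
  "java",  [| 0.5; 0.2; 0.2; 0.7; 0.0; 0.0 |];
]

let () =
  languages
  |> List.map (fun (name, scores) -> (name, weighted_score scores))
  |> List.sort (fun (_, a) (_, b) -> compare b a)
  |> List.iter (fun (name, score) -> Printf.printf "%s: %.2f\n" name score)
```

Running it prints the ranking with ocaml on top at 2.49 and rust second at 2.38, matching the table.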


And, as you can see, once the attributes I care about most are weighted, OCaml comes out as a clear winner, with Rust the second choice. Ruby and Lisp become utterly irrelevant, since they have no typesafety and hence cannot be used to implement the pattern. No language is equal to another either.

The same can be done with databases, infrastructure management solutions, IDEs - you name it. None of the outcomes of such an analysis can be equal. Your job is to find the best one. And I don't mean "the best one for such and such usecase", I mean "the best one" ;)

Next time some leftist cuck says to you "I develop NodeJS and my code is as solid as what you do in OCaml because I spend double the development time writing muh precious unit tests" - refer such idiot to this forum post ;)
#32
Main Thesis / Part 4 - Specific vs Generic
Last post by CultLeader - June 02, 2021, 08:09:27 AM
Another cornerstone of logic needed for software development with the pattern, or even in life in general, is that specific solutions will always be superior to generic solutions.

For instance, Clickhouse will always be faster than Postgres for analytical workloads. All this database was optimized to do is utilize all cores and process tons of data fast. This fact has ramifications throughout all logical levels of Clickhouse design choices:

  • Data is stored in columnar format
  • All cores are utilized for queries (few users assumed)
  • Data is sorted and compressed to get max read throughput
  • There are no real transactions; an append-only workload is assumed and rows are hard to delete
  • Inserts go straight to disk without buffering, hence the user has to batch inserts for max throughput

This is radically different from Postgres, which aims to be an OLTP database from which you can serve a web application, with ACID transactions across multiple tables. It is made to handle many concurrent queries at once and uses MVCC, so each row has its lifetime start and lifetime end and readers don't block writers. It also has WAL logging to prevent losing data after changes. Data is stored in rows so that deletes and inserts are efficient.

There is no query you could execute in Clickhouse that you could not also execute on Postgres. And Postgres provides stronger guarantees against data corruption and ensures correctness. However, since Clickhouse solves a more specific problem, namely data warehousing, it can avoid implementing all the complexities that Postgres implements to provide its guarantees. Row deletion? Doesn't really matter in Clickhouse, hence you can always quickly append more data to the table. Transactions? Clickhouse is not really interested in those, as you can insert duplicates if you want and Clickhouse can merge and remove dupes in the background. Cores? We assume there are few users (typically analysts) querying the data, so we can give them all the cores the server has, which we can't do in Postgres, since Postgres assumes there are many requests hitting the database at once.

Postgres is a great database, and for the OLTP workloads it handles it is a perfect fit. But it will never be as good as Clickhouse for OLAP workloads, because Clickhouse specializes in those. Likewise, if you want to keep your sanity developing a web application, you would never use Clickhouse to store realtime user data - it is not fit for that either.

Both databases have their place under the sun. And I sound like a typical leftist cuck saying that, so, I'll say another thing.

There's Apache Druid that tries to do the same as Clickhouse and is an utter failure, just like typical Java shit under the sun. Yet some idiot writes a blogpost https://leventov.medium.com/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7 claiming that:

Quote: "For a wide range of applications, neither ClickHouse nor Druid or Pinot are obvious winners. First and foremost, I recommend to take into account, the source code of which system your are able to understand, fix bugs, add features, etc. The section "On Performance Comparisons and Choice of the System" discusses this more."

Clickhouse is not an obvious winner against Druid or Pinot? What kind of smokepipe is this idiot smoking? It's simpler, faster, doesn't eat infinite memory for simple tasks like typical Java shit, and is not complex to deploy. Obviously, doing more with fewer servers and simpler maintenance is cheaper, so Cloudflare went with Clickhouse for obvious reasons https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/ .

So, there are a few niches of components, like OLAP database or OLTP database, and I have one and only one choice for each: Clickhouse for OLAP and Postgres for OLTP. I do not drown in the pagan delusion that all the competitors in each category are best in their own right. I mean, is Clickhouse version 7 better than version 6? If you say yes, you admit one is better than another, and that's just within versions of the same component. What about different components? They can never be equal either, and one will be better than all the others. It can't be any other way. And ideally you want to find that component.

Software Development

How does this tie to software development?

I see people from an OOP background abusing interfaces. I did this myself as a greenhorn in my career: think of some abstraction, then think of its implementors. What inevitably happens is that the system becomes full of spaghetti code and confusion. 99% of the time, when you write a new implementation, the old interface is not enough and you must refactor it. Then you add a method to the interface, the change ripples through the rest of the implementors, and you usually end up writing dummy methods there. What a waste.

As I got older I started appreciating ifs, elses and matches far more than I appreciate interfaces for handling different flows. Under an if branch you can make a simple assumption: all the flow under that branch is consistent and is specific to doing one thing, with all its assumptions. Hence, there is no nasty intertwined logic between different object implementations, and the code is simpler.

I've been doing OCaml for n years now (the name stands for Objective Caml) and I have never implemented a real object. Just not needed. Structures and functions are to this day all I use in OCaml. I stopped using objects even before I found OCaml and I've never missed them since.

OOP pattern vs specific ifs

Say you have a task to query prices for different markets from different bitcoin exchanges. Well, the APIs are quite different on most of them.

Say exchange A lets you query prices directly with an API key, without creating a session. Exchange B's flow, on the other hand, first requires you to get a session key, and only then can you query prices.

The typical OOP way for exchange A would be:


interface PriceApi {
  double getPrice();
}

class ExchangeA implements PriceApi {
  // interface methods are implicitly public, so the
  // implementation must be declared public as well
  public double getPrice() {
    ...
    return thePrice;
  }
}

...

double queryPrice(PriceApi api) {
  return api.getPrice();
}


I added the queryPrice flow that ties it all together with the implementation.

Okay, simple API? For exchange A, that is. Now we add exchange B, which needs a session key:


interface PriceApi {
  void initSession();
  double getPrice();
}

class ExchangeA implements PriceApi {
  public void initSession() {}

  public double getPrice() {
    ...
    return thePrice;
  }
}

class ExchangeB implements PriceApi {
  public void initSession() {
    // do some magicka to initiate the session
    ...
  }

  public double getPrice() {
    ...
    return thePrice;
  }
}

...

double queryPrice(PriceApi api) {
  api.initSession();
  return api.getPrice();
}


Look, we had to accommodate the init session flow, so we had to change:

  • The interface
  • The exchange A implementor
  • The flow that ties them all together

We added one more exchange and we had to change three places!

What I would rather do now in OCaml is much simpler:


type exchange =
  | ExchangeA
  | ExchangeB

(* exchange_a_get_price, exchange_b_init_session and
   exchange_b_get_price are the per-exchange helper functions *)
let query_price =
  function
  | ExchangeA -> (
      exchange_a_get_price ()
    )
  | ExchangeB -> (
      exchange_b_init_session ();
      exchange_b_get_price ()
    )


Exchange flows are separate and matched. Also, if the exchanges had arguments, you could encode them in the variant. The ExchangeB branch does everything it needs separately, and there is no confusion or need to change other implementations. This is sanity. This is specific. Java interfaces are generic and grow like a blob of insanity with every added implementation. To write the perfect interface from scratch you would need foresight into all the use cases, which is practically impossible.

And I'm sure someone will say "well, that is a bad example, you could have solved this problem with X, Y, Z in Java", to which I reply: every Java codebase is a bad example. I've never seen a sane Java project. I've browsed hadoop, kafka, zookeeper and plenty of other repositories I forgot, and it's always the same:

  • Thousands of interfaces
  • Thousands of implementations
  • Indirect call stacks so deep, you have nightmares about developing with that code
  • 99% of interface implementation methods are just indirectly calling yet another implementation

People who develop Java are seriously sick in the head. These people need help. They cannot write maintainable code. This is how unhealthy cults start and is perfect job security. Instead of writing simple ifs and elses they by default go for interfaces. Utter insanity.

Real life examples

Consider a car, say, a lambo. Lambo is optimized throughout all levels to go fast and look good. Carbon fiber, low body, hard tyres, aerodynamics and all that stuff. Not much room for cargo space, engine in the back.

Or, consider Mercedes G class. Car that is made to drive through offroad. It has high body, soft suspension, usually soft tires, zero interest in aerodynamics. Entirely different profile of the car.

There are unlikely to be any parts compatible between the two cars; everything is different.

This is the generic car that typical Java developer would make:

  • Generic body
  • Generic wheels
  • Generic engine

Want this generic car to be aerodynamic? Tough, you cannot: the generic body doesn't support that, and bending its body requires adjusting other parts and the spaces defined for them.
Want this generic car to have a very powerful engine? Tough, you cannot: the generic car body only has space for a small, weak engine.
Want this generic car to have very large wheels to go offroad? Tough, you cannot: the body cannot fit those wheels.

See what I'm getting at? You cannot make a generic shitty car a perfect fit for any specific purpose. You have to design the car to be aligned with its purpose from the start, optimizing all the parts and fitting them appropriately. You cannot build a lambo and put a 1.5L petrol engine in it: such a lambo will not be fast and loses the entire point, and the buyer's interest. Every single part of the car has to say the same thing and show the same vision of the purpose of the car.

I wish most Java developers understood this and instead of producing tons of worthless crappy code with interfaces in name of reusability that nobody could maintain would be more useful to society flipping burgers at McDonald's instead.

Specificity also permeates the pattern: everything is specific, no generic interfaces, hard types everywhere, hard implementations, maximum checking of specific inconsistencies and correctness at compile time. Ideally nothing is left where users can make trivial mistakes, like parsing JSON by hand, sending raw SQL queries to the database, forgetting to configure a certain secret and so on.

Marriage

You have two choices as a man:

  • A used up slut
  • A virgin

Naturally testosterone dictates to a man that marrying a non virgin slut is disgusting. And a soy latte sipping leftist faglet is happy marrying up a slut that had tens of sexual partners because that's the only woman he'll ever get.

A womans entire desire and purpose and life ought to be to worship and please the one and only man. One woman should naturally be thinking about one specific man that should always be in her mind.

What is the case with a slut? She slept with tens if not hundred of chad guys. Will she ever think only of one man that swept her of her feet? No, she became damaged, generic, used up and not a wife material. Man that marries up such slut will certainly won't be the most attractive man she ever met and she'll be constantly thinking about the great sex she had before she married this chump. And infidelity with divorce is very likely. If someone doesn't know, statistical chance of divorce increases exponentially with brides sexual partner count.

So, marry a virgin so her heart would only be filled with specific things about you and don't marry a slut, which has her heart filled with all the other chads before you. Just like a used up, smurfed out Java projects, with many interfaces for all the use cases which will never be able to compete with dedicated solutions to a certain problem.
#33
Main Thesis / Part 3 - Masculine and Feminin...
Last post by CultLeader - June 02, 2021, 07:33:25 AM
This post will be the spiritual explanation behind the pattern post. You have to understand the pattern if you want to understand this post.

So, in a nutshell, the pattern is when you have a meta executable that checks as many things as you want: database queries, schema migrations, whether servers with components have enough resources, and so on. Then, once all of your assumptions are checked, you generate as much typesafe, tightly restricted code as you want. You can generate backend code that serves REST endpoints and typesafe frontend code that queries the backend - everything is consistent. You could also generate native app code, say Android or Swift, to interact with the backend - the sky is the limit.

This is a divine pattern and permeates all of creation. Imagine your codebase is a body: the meta executable is the brain that ensures everything is consistent and logical, and dictates and has absolute control over the shape of the body. Just like a rooster has control over many hens. The same way, in a traditional family the man is in control of the house and has to think about how to provide for and grow the family.

But I would have you know, that the head of every man is Christ; and the head of the woman is the man; and the head of Christ is God. - 1 Corinthians 11:3
Wives, submit yourselves unto your own husbands, as unto the Lord. - Ephesians 5:22

Examples from the bible are countless.

This is same as God of the bible bossed around children of Israel in Old Testament how Jesus also orders around his followers. Jesus is in the masculine plane and calls all the shots.

Consider your body: the eyes provide information to the brain, touch and smell also provide information to the brain, yet the brain is responsible for making decisions - one point, so that everything is consistent. What if a person's arms wouldn't listen to the brain? Imagine you hold a shield in one hand and a sword in the other; you can cut with one hand, but you cannot raise the shield to defend against the enemy because the other arm is disabled. Is such a person fit for battle? We'd call such a person disabled, i.e. not at full capability.

Believers throughout the entire bible got chastised and punished for disobeying God's word. God does not consider himself disabled; he wants his body to function properly, i.e. everyone ought to be doing his will. Any deviation from that has negative consequences.

This is the strict order we establish with the meta executable and the masculine plane. We generate code that must be typesafe and consistent. 95% of the bad stuff that would otherwise happen only at runtime in a dynamically typed language is now caught in our custom compile time and hence has to work in production.

The world and its insanity

By contrast, consider the average leftist organization. These people cannot conceive of such an idea, hence all their environments are inconsistent and very fragile. Consider all the errors that happen in production: misconfigured switches or routers. An invalid SQL query. Kafka topics that still exist but nobody knows if anyone uses them. Contracts broken by REST APIs. There is no order, all is chaos. Such companies hire thousands of people, hence must have deep pockets, get the best developers, and yet the end result is an utter failure of strife, confusion, misconduct and production failures over and over again.

For where envying and strife is, there is confusion and every evil work. - James 3:16

Average leftist pagan monkey cannot conceive the idea of a masculine plane that checks logically if our codebase makes sense, and only if it does, generates code that enforces the agreed upon boundaries. Also, of course, the side effect of generated code is that all the complexity and coherency of the system is ensured by a masculine meta executable plane, hence, people who write code that plugs into the enforcing code only need to fill in the blanks and use high abstractions provided by the framework. Hence, since these people are doing easier job, and don't need as much technical knowledge, can work only in feminine plane and scale the system with high level abstractions. Consider analysts, which, in my experience, are mostly women. But, people who build the platform (SRE's) are men in general as they must have knowledge of low level masculine plane.

Leftist idiot SRE's build and provide a database, and just give it to the feminine plane. They do not ensure that queries will work in production. This work goes into developers who work in feminine plane and they cannot ensure that, unless they write ton of unit tests, but nobody ever does anyway. If they do build the framework, about which masculine plane is unaware, there are tons of cases where developers do exact same thing but a little differently, and hence you get inconsistent duplication of effort that brings in more problems, confusion and maintenance issues.

How things ought to be done

There must be one ruler that dictates how application developers develop their apps, and they must all use the same abstractions. To avoid duplicated waste there must be:

  • One way to query/mutate the database
  • One way to store secrets
  • One way to interact with queues, with typesafety, of course
  • One way to log things
  • And so on and so on

Masculine plane has to provide all of these abstractions, make sure they all work together and are perfectly coherent. Masculine plane must ensure there is no duplication of effort and provide this for users of feminine plane to build their abstractions with ease and minimal effort. Only then we can consider it a perfect system.

To achieve this, you MUST represent your problems as data.

For instance, there must be a single list in the meta executable that has all the queues in the company, all with their typesafe signatures, Cap'n Proto generated schemas and such. If a feminine plane component wants to put a queue to use, for instance, push an element into it when it receives an HTTP request, the masculine plane must ensure that:

  • A typesafe REST endpoint is generated and the user only implements the typesafe function that receives the typesafe, already parsed payload body of the REST request; hence he doesn't parse anything by hand and can never make a mistake
  • A typesafe function is provided for the user implementing the function in the feminine plane that accepts a typesafe queue element, while masculine plane generated code takes care of serializing the payload, say with Cap'n Proto, and putting it into, say, a Kafka queue
  • All logging for the component must be declared upfront: specific, typesafe structs with their fields
  • All monitoring for interaction with the provided abstractions (HTTP endpoint, typesafe kafka queue) must be generated automatically; say, prometheus global variables must be generated and incremented upon usage of the abstractions
  • If all of the above is defined as data in the masculine plane, by higher level abstractions, a grafana dashboard of interactions with queues and REST endpoints can easily be generated from the data
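To make the "single list of queues as data" idea concrete, here is a minimal OCaml sketch. All type names, field names and queue names here are invented for illustration; a real meta executable would run far more checks and then generate the typesafe endpoint and queue code:

```ocaml
(* Hypothetical meta executable data: every queue in the company is
   declared exactly once, with a typed payload signature. *)
type field_type = FInt | FString | FBool

type queue_def = {
  queue_name : string;
  payload : (string * field_type) list;  (* field name and its type *)
}

let all_queues = [
  { queue_name = "user_signups";
    payload = [ "user_id", FInt; "email", FString ] };
  { queue_name = "payment_events";
    payload = [ "payment_id", FInt; "confirmed", FBool ] };
]

(* One of the consistency checks the meta executable runs before
   generating any code: queue names must be unique. *)
let queue_names_unique defs =
  let names = List.map (fun q -> q.queue_name) defs in
  List.length names = List.length (List.sort_uniq compare names)

let () = assert (queue_names_unique all_queues)
```

From data like this, the meta executable can generate the serialization code, the typed producer and consumer functions, the monitoring counters and the dashboards, since everything it needs to know sits in one list.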

No more need for the nonsense of skimming through millions of lines of code to find all the places the database is used through a gazillion different libraries and such.

Only one way to do everything, hence, duplication of effort should be impossible, hence, minimal waste in the system, and we could finally declare what no leftist cuck could ever dream declaring of - having a perfect system.

Paganism

This is radically different from the pagan software developers of today, who comprise 99% of developers. Back in the day there were specific gods for specific things: say Zeus, god of thunder; Aphrodite, goddess of love; Aristaeus, god of bees. Pagan stupidity limits them to thinking that if they wanted rain, they had to pray to Zeus; for love, pray to Aphrodite. This paganism permeates software developers today.

All of the mentioned responsibilities today are split among separate "gods" that maintain a bunch of crappy, incoherent yamls that never work the first time anyway. Say you want to go to kubernetes: go to that team, write some yaml. Or you need a database: go to that team. You need secrets: go to that team. This is paganism and it's a mental disorder. Pagan monkeys cannot fathom the idea that all of these services simply ought to be developed as a single perfectly coherent codebase whose components perfectly interact with each other. They cannot imagine one all powerful God, like the God of the Bible, who made everything and knows everything and is perfectly consistent and has no contradictions. When pagan monkeys imagine a god, like in Doom Eternal: in the final part, The Ancient Gods Part II, it turns out that god was banished and suspended, is not all powerful, and was killed by his creation. This is a pathetic pagan monkey god that is not all powerful and fell prey to his creation. Pathetic.

Say you need a database for a component; you need to declare it in the masculine plane as data:

  • Database name
  • Its tables with columns
  • What typesafe queries it will have
  • The component that will use the given database's typesafe queries
  • Write some filler code in the feminine plane
  • If the component is ever deleted and nobody uses the database, make sure to notify everyone and optionally have the ability to drop or backup the data

Done with your day. No need to ask anyone to deploy anything or schedule meetings to make everything work together.

Imagine connecting Kafka to Clickhouse in the masculine plane: you would declare that this typesafe queue is to be dumped into that table. The masculine plane could check and enforce:

  • That the signature matches
  • That the table is actually used and doesn't end up in forgotten, unused legacy territory like stale kafka queues
  • That the ingestion rate is monitored, and so on

The sky is the limit, and things get radically better once we establish masculine and feminine planes and everyone's responsibilities are clear. This is how you can develop things faster with a few people than thousands of monkeys at google or facebook fixing production issues daily due to lack of coherency between developers. Also, this is how you ensure you have the minimal set of components that accomplish everything. For instance, you don't need both MySQL and Postgres: since you implement these abstractions yourself, you will be motivated to use the minimal set of tools that solves everything, increasing the value of the entire system by having the fewest interactions between components. You wouldn't need to think about how to ship table changes to Kafka from both Postgres and MySQL if you only pick Postgres and implement only the Postgres->Kafka flow, never the MySQL->Kafka flow.

Soul

Also, what is interesting: since with the masculine plane you can generate completely independent, coherent modules, there could be no proof at all that the code was generated. A container could be running in kubernetes with minimal dependencies, in perfect harmony with the rest of the system, yet an outside observer inspecting the binary code would have no idea that the executable was mostly generated and has no shared code with most of the system. This is like having a soul, the thing that keeps us sane in these troubled, atrocious, imbecile leftist times and keeps us going in the LORD's direction. There is nothing on the outside that tells us we are connected to the LORD, yet we are, and are in perfect harmony with him and his mind.

For who hath known the mind of the Lord, that he may instruct him? But we have the mind of Christ. - 1 Corinthians 2:16

So this is the reasoning behind the pattern and the explanation of divinity, with its necessary masculinity and its interaction with the feminine plane.

Illustration of architecture comparing paganism vs feminine and masculine planes:



I know which one I prefer ;)
#34
Main Thesis / Part 2 - The Pattern
Last post by CultLeader - June 02, 2021, 06:57:20 AM
In this post I will reveal the most valuable programming pattern I know. It's extremely solid, versatile and opens limitless possibilities. I discovered it independently myself, because I knew that is the way the LORD had to create everything. There is no other way to do things (everything before this is the wrong way), and it will become clear why. It is so deep that its core goes to the foundation of both sexes, male and female, masculinity and femininity, and what and why that is at all.

Show me any programming language feature you cannot live without and it pales in comparison to this pattern. It is irrelevant and useless and has no meaning compared to this. In fact, I can do any language feature you ever knew with this pattern, while the host programming language only needs basic features.

You don't even need to practice it, it will be obvious - after reading this post there will be nothing about this pattern that I know and you don't.

If you only read a single post in this forum, it had better be this one.

Okay, hype speech is over, let's go.

Represent abstractions as data

What does that mean? It sounds like vague marketing bullshit, so let me explain it very simply. Instead of connecting to the database and sending the query from code, we define the query as data in a program that generates a typesafe function call. Here is an OCaml example of what I mean:

let res = execute_write ~params:[|string_of_int voting_id|] pg "
  UPDATE voting_session
  SET signatures_count = signatures_count + 1
  WHERE voting_id = $1
  AND NOT is_scheduled
  AND NOW() < petition_deadline
  " in


This code block executes an SQL query in Postgres; you pass the parameters and convert them to strings by hand.

Every single time you write such a block you risk making a mistake. Not nice. The Ruby monkey answer to this? "Just write tests!" Oh yes, if I were a Ruby monkey I'd see no issue with writing tests for every single trivial code block.

Here is the alternative:

let db_queries = [
...

  mk_db_mutator
    FtVoting
    "increment_voting_signatures_count"
    {|
    -- voting_id:15
    UPDATE voting_session
    SET signatures_count = signatures_count + 1
    WHERE voting_id = <voting_id:int>
    AND NOT is_scheduled
    AND NOW() < petition_deadline
    |};
...
]


I defined the exact same query as data in a list of other queries. The mk_db_mutator function just returns a struct describing this query.
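Roughly, the struct behind mk_db_mutator can be sketched like this (a simplified illustration with made-up field names, not my real meta code):

```ocaml
(* Sketch only: type and field names here are illustrative. *)
type feature = FtVoting | FtForum

type db_mutator = {
  feature : feature;
  label   : string;
  sql     : string; (* raw SQL with <name:type> placeholders *)
}

let mk_db_mutator feature label sql = { feature; label; sql }

let db_queries = [
  mk_db_mutator FtVoting "increment_voting_signatures_count"
    {|UPDATE voting_session
      SET signatures_count = signatures_count + 1
      WHERE voting_id = <voting_id:int>|};
]

(* Because queries are plain data, global checks are trivial,
   e.g. that no two queries share a label: *)
let labels_unique qs =
  let labels = List.map (fun q -> q.label) qs in
  List.length (List.sort_uniq compare labels) = List.length qs
```

The meta executable walks this list, runs every query against a test database, and only then emits the typesafe wrappers.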

This data resides in a meta executable that generates OCaml code for a typesafe function:

let dbm_increment_voting_signatures_count (pg: Postgresql.connection) ~(voting_id: int) =
  incr dbm_increment_voting_signatures_count_init_count;
  let timestamp = now () in
  send_db_query_record ~timestamp ~query_label:"dbm_increment_voting_signatures_count" ~duration:(-1.0) ~rows_affected:0 ~rows_returned:(-1) ~arguments_keys:["voting_id"] ~arguments_values:[string_of_int voting_id];
  let res = execute_write ~params:[|string_of_int voting_id|] pg "
    UPDATE voting_session
    SET signatures_count = signatures_count + 1
    WHERE voting_id = $1
    AND NOT is_scheduled
    AND NOW() < petition_deadline
    " in
  let duration = now () -. timestamp in
  let rows_affected = int_of_string_opt res.result#cmd_tuples |> Option.value ~default:(-1) in
  incr dbm_increment_voting_signatures_count_success_count;
  incr_float dbm_increment_voting_signatures_count_duration_sum duration;
  send_db_query_record ~timestamp ~query_label:"dbm_increment_voting_signatures_count" ~duration ~rows_affected ~rows_returned:0 ~arguments_keys:["voting_id"] ~arguments_values:[string_of_int voting_id];
  res


So, compilation of the codebase goes like this:
1. The meta executable runs, performs all checks, and if everything is okay emits code
2. The rest of the codebase compiles and uses the typesafe functions

Not a big deal? Okay, let's go through all the things that this one small query has checked for it when it is defined as data:

1. It has to work and mutate data in the table; it is run with example data against a test database
2. We check that it actually mutates a row
3. <voting_id:int> is substituted and turned into a typesafe function argument; I can place arguments anywhere I want
4. Code is generated to save EVERY SINGLE QUERY in Clickhouse WITH ITS ARGUMENTS in a NICE MAP. I have every query ever executed against the database, well compressed, with parameter names and values, in Clickhouse.
5. Prometheus global variables are generated, incremented and automatically exposed to prometheus
6. Every query is assigned to a feature enum, so if I logically delete a feature I have to go through all the code and see where it is used
7. If I change the query signature, I have to refactor the code where I use this function
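The placeholder substitution from point 3 can be sketched in a few lines (a simplified, hypothetical version of what the meta executable does):

```ocaml
(* Sketch: collect (name, type) pairs from <name:type> placeholders and
   rewrite them into positional $1, $2, ... parameters. Illustrative only. *)
let parse_placeholders sql =
  let buf = Buffer.create (String.length sql) in
  let params = ref [] in
  let n = String.length sql in
  let i = ref 0 in
  (* copy one plain character through *)
  let plain () = Buffer.add_char buf sql.[!i]; incr i in
  while !i < n do
    if sql.[!i] <> '<' then plain ()
    else
      match String.index_from_opt sql !i '>' with
      | None -> plain () (* a bare '<', e.g. NOW() < deadline *)
      | Some j ->
        let inner = String.sub sql (!i + 1) (j - !i - 1) in
        let looks_like_placeholder =
          String.contains inner ':'
          && not (String.contains inner ' ')
          && not (String.contains inner '\n')
        in
        if not looks_like_placeholder then plain ()
        else begin
          let k = String.index inner ':' in
          let name = String.sub inner 0 k in
          let ty = String.sub inner (k + 1) (String.length inner - k - 1) in
          params := (name, ty) :: !params;
          Buffer.add_string buf (Printf.sprintf "$%d" (List.length !params));
          i := j + 1
        end
  done;
  (Buffer.contents buf, List.rev !params)
```

The collected (name, type) pairs are exactly what the generator needs to emit the labeled arguments of the typesafe function.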

Imagine: I write a tiny bit of code, just defining the query as data, and all of this code is generated and all these things happen on autopilot. I have confidence that queries will work and things work 99% of the time. I barely write any tests either.

My mistake factor on producing invalid queries is now drastically reduced, and I can write very complex queries and know things will mostly work.

Imagine you're at work and someone says: we have to label and monitor all queries against the database - find every place we query it and make sure everything is tracked.

Well, you might be lucky and hook into a framework, but what if you have lots of code that just uses an SQL library with no concept of tracking? Do you track inside the database instead?

Here is what I did. SINCE I HAVE ALL QUERIES DEFINED AS DATA, I added a few lines of code to:
1. generate prometheus global variables
2. generate a function that dumps all these metrics to a prometheus string
3. add increments of those global variables to every generated query function

Wow, so simple anyone can do it. And you can do anything you'd want when you generate code from data. Sooooooooo simple!


Database tables

I also define them as data, not as raw SQL.

let db_tables = [
  ...

  mk_db_table
    11
    "chat_message_likes"
    [
      mk_field 1 "message_id" Int ~index:true;
      mk_field 2 "user_id" Int ~index:true;
      mk_field 3 "time_liked" Timestamp ~default:"CURRENT_TIMESTAMP" ~index:true;
    ]
    ~uniq_constraints:["message_id"; "user_id"]
    ~test_data:[
      ["112"; "10"; "NOW()"];
      ["113"; "12"; "NOW()"];
    ]
  ;

  ...
]


And the RAW SQL is just generated.

The first argument to mk_field (the 1) is the field number: a stable, unique identifier for the field that does not appear in SQL, so I can rename fields in SQL whenever I want.

So, I can rename a column, keep the same number, and a migration that renames the column is generated.
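Deriving that rename migration from schema data is almost embarrassingly simple. A sketch (illustrative names only; a real generator also handles added and dropped fields):

```ocaml
(* Sketch: stable field numbers never change, names can; a rename migration
   falls out of a simple diff between two schema versions. *)
type field = { num : int; name : string }

let rename_migrations ~table ~old_fields ~new_fields =
  List.filter_map
    (fun nf ->
      (* find the old field carrying the same stable number *)
      match List.find_opt (fun o -> o.num = nf.num) old_fields with
      | Some o when o.name <> nf.name ->
        Some (Printf.sprintf
                "ALTER TABLE %s RENAME COLUMN %s TO %s;" table o.name nf.name)
      | _ -> None)
    new_fields
```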

Guess what? If I change the schema in the data, my queries break and I have to fix them. Everything has to work together.

Which is nicer:

- Having raw SQL define schemas, maybe parsing the SQL AST to try to figure out what it does, and writing migrations by hand
- Defining the schema as data and generating all migrations with simple code generation?

I choose the second, and it paid dividends time and time and time and time again.

I have a single test that checks that a schema migrated from the previous version is identical to a freshly initialized one, so I can tolerate errors in a migration function - they are detected early and never reach production.

What I could also do, since I have the schema as data:
- Generate code for automatic monitoring of int overflows
- Declare that a table has a logical unique constraint, not enforce it for performance reasons, and generate code to monitor it
- Flag some tables to be mirrored to Clickhouse for better performance and generate the Clickhouse code that keeps refreshing them

Possibilities of this approach to solve problems are endless.

REST endpoints

Rest endpoints? I have them as data. Here's an example:

let rest_endpoints = [
  ...

  mk_rest_endpoint
    FtForum
    "/api/post_to_chat_room"
    LoggedInUser
    (CustomAction "post_to_chat");

  ...
]


As you can see, I even have an enum for whether the endpoint is accessible to a logged-in user or to an outsider.

What is a custom action, you ask? I have all custom actions defined as data:

let custom_actions = [
  ...

  mk_custom_action
    FtForum
    ~return_type:"post_to_chat_return_result"
    "post_to_chat"
    [
      "chat_room_id", Int;
      "contents", String;
    ];

  ...
]


They have type signatures and arguments. What type signature the rest endpoint has depends on the custom action. For instance, this rest endpoint receives json with a body of `chat_room_id` and `contents` - that comes from the custom action. It could instead be the database query mentioned earlier, and the rest endpoint would accept different arguments.

How do I call this rest endpoint? I'm glad you asked: you just call a typesafe OCaml function that, under the hood, invokes this custom action. In fact, we receive the defined return type:

type post_to_chat_return_result =
  | CAR_PostMsg_MessageTooLong
  | CAR_PostMsg_EmptyMessage
  | CAR_PostMsg_UserDoesntBelongToRoom
  | CAR_PostMsg_Ok
[@@deriving yojson]


No json parsing by hand, I just call the generated function in the frontend:

rest_post_post_to_chat { chat_room_id = chat_room.room_id; contents = contents } (fun res ->
    match res with
    | CAR_PostMsg_Ok -> (
        materialize_toast "Hoooray, posted!"
    )
    | CAR_PostMsg_MessageTooLong -> (
        materialize_toast "Message too long"
    )
    | CAR_PostMsg_EmptyMessage -> (
        materialize_toast "Message empty"
    )
    | CAR_PostMsg_UserDoesntBelongToRoom -> (
        materialize_toast "You do not belong to this chat room"
    )
)


If I add more enum outcomes in the backend, I MUST also fix the frontend to account for every new case.

I don't deal with raw json. I don't deal with null pointer exceptions. Everything is typesafe, usually works the first time, and there are no surprises. I don't understand why people would ever want to code a frontend in plain javascript - madness!

Let's look at all code that was generated to serve/call this rest endpoint:


  post "/api/post_to_chat_room" begin fun req ->
    wauth_succ "backend_rest_ca_post_to_chat" req (fun session ->
      App.string_of_body_exn req |> Lwt.map (fun json ->
        match Yojson.Safe.from_string json |> gen_type_7557852441389680360_of_yojson with
        | Ok args -> (
            let with_connection = with_connection ~ray_id:(get_ray_id_from_request req) in
            let api = mk_custom_action_auth_api "post_to_chat" with_connection session req.request in
            let result = ca_post_to_chat api (args.chat_room_id) (args.contents) in
            result |> post_to_chat_return_result_to_yojson |> Yojson.Safe.to_string |> respond_json_str_lwt
          )
        | Error err -> respond_json_str_lwt "PHEIL"
      )
    )
  end;


All of the tricky stuff - authenticating, parsing the right type with the right arguments, and returning the result as json - is generated.
The only function that is not generated is ca_post_to_chat, a perfectly typesafe function that just receives the appropriate labeled arguments:


let ca_post_to_chat ~(api: custom_action_auth_api) ~(chat_room_id: int) ~(contents: string) =
  if String.length contents > 1000 then (
    CAR_PostMsg_MessageTooLong
  ) else if String.is_empty contents then (
    CAR_PostMsg_EmptyMessage
  ) else (
    with_connection "post_message_to_chat" (fun c ->
        let res = dbq_does_user_belong_to_chat_room c
            ~user_id:api.session_data.user_id ~room_id:chat_room_id in
        if res.(0).user_belongs > 0 then (
          dbm_post_to_chat_room c
            ~user_id:api.session_data.user_id
            ~chat_room_id ~contents |> ensure_pg_ok;
          CAR_PostMsg_Ok
        ) else (
          CAR_PostMsg_UserDoesntBelongToRoom
        )
      )
  )


As you can see, I don't deal with anything HTTP related. I declare the rest endpoint, state the custom action with its arguments and its return type (which is perfectly reflected in the frontend - bye bye cryptic HTTP error codes), and I have to provide a typesafe function to fulfill that contract. If anything is wrong, the compiler errors out and I cannot ship to production.


This is generated frontend code, so I could call this rest endpoint just as any other function with result callback:

let rest_post_post_to_chat (args: gen_type_7557852441389680360) with_result =
  let url_var = Printf.sprintf "/api/post_to_chat_room" in
  let body = gen_type_7557852441389680360_to_yojson args |> Yojson.Safe.to_string in
  post_ajax_url_json_wcallback url_var "frontend_ca_post_to_chat" body (fun str_res ->
      Yojson.Safe.from_string str_res |> post_to_chat_return_result_of_yojson |>
      function
      | Ok result -> (
          with_result result
        )
      | Error err -> (
          print_endline "Cannot deserialize custom action"
        )
    )


I don't deal with knowing what is the right URL to call this - all generated from same source, and both perfectly match!

Plus, off the top of my head, I get these checks:
1. A custom action must exist to be attached to a REST endpoint
2. You cannot call a REST endpoint with non-existent arguments, because the same generated type (without nulls) lives in both frontend and backend, and the same serialization function parses them
3. There cannot be duplicate REST endpoints; even with thousands of them (and they grow very fast, because adding them is now trivial due to code generation) duplicate paths will be detected
4. Every REST endpoint is automatically logged with its label and monitored in prometheus
5. If I break a query in the backend and type signatures change, I must refactor the frontend too


Thoughts

What framework offers you that? Everything I see is a joke compared to what I do myself. And that is for one very simple reason:

DEFINE YOUR PROBLEMS AS DATA

If I had to write separate calls to rest endpoints and separate calls in backends everything would become a mess very quickly. If I had no typesafety, things would become a mess very quickly.

TYPESAFETY IS A MUST IF YOU WANT TO DO SUCH THINGS.

I rant about typesafety to everyone I meet. Most people can't understand it because they never tried to do something as powerful as this pattern, so they think it's okay to push non-working code; someone notices, and pushes some other garbage code to fix the first garbage code. But if you want to do things like this pattern, try doing it with NodeJS or Ruby and we'll see how long you last. You'll start drowning in an infinite amount of runtime errors very quickly. Hence, we have to use a statically typed language that will check our generated code's integrity - otherwise these feats become practically impossible.

Now I hope everyone understands why I don't take any dynamically typed scripting languages seriously, be it python, javascript or ruby. I don't need them. I do much more powerful things, with much greater correctness and typesafety. I barely spend time debugging code, because it usually just works. A Ruby developer has to write 2x the code - implementation and tests - because he has no choice. I write 90%+ implementation code on rock solid abstractions and very few tests, for pesky things like making sure a regex parses what I expect. I couldn't imagine writing trivial tests for every REST endpoint or database query - what a waste of time. Things like that work the first time in the vast majority of cases and I can focus on the application.

Sure, I wish OCaml had some features ruby has, like decent string interpolation, but I have not found it to be enough of an issue yet; if I do, I will just make a typesafe template abstraction in the meta executable.

Who forbids me from defining templates like this? This code does not exist, I'm just fantasizing:


let templates = [
  ...
  mk_template "some_template_ini" {|
    ip_address = <the_ip:string>

    open_port = <port:int>

    <iterate peers { peer_port: int, peer_ip: string }
    connection = <peer_ip>:<peer_port>
    <iterate end>
  |};
  ...
];


And then just generating typesafe function with all these arguments:

let template_string = tplt_some_template_ini ~the_ip:"127.0.0.1" ~port:8080 ~peers:[{ peer_port = 1234; peer_ip = "1.1.1.1" }] in
...


What forbids me from doing that? Hence, I don't care about programming language features.
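Nothing. Emitting the source of such a typesafe function from the template's placeholders is a few lines in the meta executable. A rough sketch (hypothetical, since the template feature above doesn't exist):

```ocaml
(* Sketch: turn a template's (name, type) placeholders into the source code
   of a typesafe OCaml function. Purely illustrative. *)
let gen_template_fn ~fn_name placeholders =
  let args =
    placeholders
    |> List.map (fun (n, t) -> Printf.sprintf "~(%s: %s)" n t)
    |> String.concat " "
  in
  Printf.sprintf "let tplt_%s %s = (* render template here *)" fn_name args
```

Feed it [("the_ip", "string"); ("port", "int")] and out comes the signature of tplt_some_template_ini, ready to be compiled with the rest of the codebase.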

I only need:
1. Decent typesafety (like OCaml and Rust provides, Java with random null pointers is way too weak)
2. Meta executable with my domain problem defined as data

Other benefits of generating code yourself in meta executable:
1. Your IDE will be grateful, it can parse trivial generated language code, instead of having to support very complex language features
2. You have a reference point of how generated code looks, so, you don't need to wonder, like in Lisp what macros have done and why stuff doesn't work
3. Ultimate flexibility to have cross context consistency, imagine, generating language bindings to your rest endpoints for huge amount of different languages
4. Unlike C/Rust/Lisp macros, which are weak and only see local context, this lets you evaluate the global context of everything working together from all sides (you can check, for instance, that a queue exists but is not used by anyone)
5. Since your problem is defined as immutable data, you can test your data very fast, mostly in memory consistency tests for your domain (avoiding pesky integration tests that require spinning up machines)

This is so simple anyone could do it. You don't need PhDs (people in universities are overrated anyway), only common sense. If I taught a greenhorn coder this, he would probably beat the living crap out of 99% of seasoned developers in productivity and correctness.

That's it. Not interested in Rust macros, Lisp macros, Rebol, Red, Scala, Go, V or any of 100 languages that will come out next year and forever. I can do everything on my own and generate anything I ever wanted. Practical applications of this pattern are endless and productivity gains are immeasurable. No wonder I consider everyone else around me an idiot now :/

I will also explain the logical reasoning and philosophy behind this. We will touch sexes, why even women are attracted to a certain behavior in a man, feminine and masculine planes and all creation.
#35
Main Thesis / Part 1 - Programming languages
Last post by CultLeader - June 02, 2021, 06:23:05 AM
I wanna speak about programming languages I use. Tried lots of crap over the years and I found a few gems that are my go to tools and I don't need anything else. I have access to abstractions way more powerful than any programming language gives you out of the box (and sadly, no one I know uses this trick, I had to come up with that myself).

These are the languages I use for everything I do, and I don't need anything else:

Languages I use

OCaml

The best language I've ever used, by its expressive power (which is limited out of the box, but can be made unlimited). The rock solid type safety and type inference make it indispensable. Single-threaded, C-like performance is also great, hence you can develop websites, or anything really, without wasting too much CPU.

This is a variable in OCaml:

let x = 7

This is a square function in OCaml:
let square arg = arg * arg

This is how to square a list of elements in OCaml (this is Core's labeled List.map; the stdlib spelling is List.map (fun i -> i * i) [1; 2; 3]):

List.map [1; 2; 3] ~f:(fun i -> i * i) yields [1; 4; 9]

As you can see, very expressive, much more expressive than C++. But most of all: TYPESAFE. You write script-like programs, but they have to compile to be shipped. All types in these examples are inferred; if they are not, you can specify them explicitly.

Usually, scripting languages are garbage: you find out whether they work only at runtime. Here, 90% of trivial errors are caught before you ship anything - needless to say, a huge time saver.

If a project doesn't need to squeeze every drop of performance from the CPU (90%+ of projects are like that), choosing OCaml is a no-brainer and allows writing tens of thousands of lines of code, mostly without tests, that usually just work. I can't imagine any other language that would even come close.

Rust

Sweet deliverance from the pain points of C++. No more segfaults. If something is performance critical, picking Rust is a no-brainer. The library ecosystem is already far healthier than C++'s. Rust will never be as terse as OCaml, being a low level language, but with good reason: we have to get down to a low level and be very explicit about telling the CPU what we want to do. A square function in Rust:

fn square(num: i64) -> i64 {
  num * num
}


As you can see, quite a bit more verbosity, and mandatory type annotations which we didn't have in OCaml - but hey, if you want performance, this is a sacrifice I'm willing to make.

That's it

These are all the languages I'd ever use in production and with which I develop everything, from compilers, to web services, to DSL, to metrics systems - anything.

I DO NOT USE ANYTHING ELSE.

Hence, I know only two languages, which allow me to do any abstraction I can think of, and I laugh when I see other languages introduce new features (because I already have something superior to any language feature, and it is not something that comes out of the box with OCaml or Rust)




Languages I'd never use

Now, let's roast the languages I don't care about (even if I know them) and why I would never ever use or need any of them

C++

Rust did everything C++ failed at 100x better. From macros to package managers. If you're starting a new C++ project in 2021 you're seriously screwed in the head.

Go

A pathetic excuse of a language. Neither as fast as Rust nor as expressive as OCaml. No parameterized types (pathetic). No full type inference (I don't roast Rust for that, because it is fast).

Pathetic error handling:

f, err := os.Open("filename.ext")
if err != nil {
    log.Fatal(err)
}


Are you serious? No Result<Success, Error> monad? Not even a Some(..) / None option? Checking for nil? Unbelievable.
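For contrast, a sketch of the same file open in OCaml, where the error is a value and the compiler forces you to handle both branches:

```ocaml
(* Sketch: file open returning a result instead of a nil check.
   Forget a branch of the match and the compiler complains. *)
let open_file name =
  match open_in name with
  | ic -> Ok ic
  | exception Sys_error msg -> Error msg

let describe name =
  match open_file name with
  | Ok ic -> close_in ic; "opened"
  | Error msg -> "failed: " ^ msg
```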

The only decent feature is portable executables.

If this is the best google and rob pike can come up with, I can't help but conclude they're a bunch of idiots.

Javascript

For the frontend, I use js_of_ocaml, which compiles to the javascript monstrosity. I do sometimes have to write an itsy little bit of javascript: if there is a javascript library I need, I write bindings to use it from typesafe OCaml, make sure they work, and then just write OCaml and enjoy typesafety. If I need to access rest endpoints, they are typesafe too - I don't parse jsons by hand.

I would never use javascript because it is not typesafe. I don't want to catch spelling errors by writing a unit test for every most trivial function in the world. I want to write code that works the first time 90% of the time and barely have any tests, except for exceptional circumstances. This goes for any dynamically typed language - run from them.

Python

Not typesafe, all the drawbacks of javascript. There are circumstances where I'd write up to 100 lines of throwaway python, as it is already installed on the machine, or python has some library I need - but again, I wrap those libraries in a typesafe way and enjoy them from OCaml without any drawbacks of dynamic typing.

Ruby

Would never use it, and I have no idea why anyone would. It does not come preinstalled and is not typesafe, so you write 2x the code - 1x implementation, 1x tests - and even then type safety errors creep into production. Performance is also horrible. The community generally consists of people who knew something horrible, like java, learned ruby, and then think that is all there is to the world. No OCaml developer I've ever met or talked with gives a tiny rat's behind about ruby, because from an OCaml standpoint it's pathetic.

Java/JVM

In general, avoid anything with the JVM. JVM people tend to produce very complex projects; avoid 99% of the stuff from the apache software foundation. For every pile of crap ASF puts out, eventually there comes a simpler, more easily maintained analog.

I will not post Java syntax (Scala is a decent attempt to save the sinking JVM ship) to spare everyone's eyes, but boy is it verbose. And boy is it error prone. And boy are there many null pointer exceptions in production. And if you want to save money on hosting or hardware: OCaml executables serving websites run at up to 32MB of usage (I'm generously overestimating), while if you deploy any JVM pile of crap, in the most conservative setups be prepared to sacrifice at least 128MB of RAM. Reality is much more grievous - a JVM process that uses 1GB of RAM is fairly modest.

If you choose anything JVM based to develop your website, forget about running a few cheap digital ocean boxes to serve it (no problem with OCaml) until it gets popular and the startup idea is justified. Be prepared to call a devops team and pay them gazillions to build your private cloud with kubernetes instead; then, once you've spent tens of thousands of dollars on servers and development, watch your startup fail and all of that cost go to waste.

Why would anyone start with JVM? Bizarre.

Lisp/Clojure

There are lots of lisp fanbois out there; been there myself. Then one day I realized I was spending 90% of my time debugging non-typesafe code. And Paul Graham states it is the most powerful language because of muh macros. Lisp macros are pathetic and pale in comparison to what I reveal here and what I do with OCaml/Rust. So, if we don't care about lisp macros, lisp becomes similar to other non-typesafe languages, which we have already dismissed as garbage.

Muh favorite some other language

Sure, post something else, but 99% chance is I don't care. It will never have a feature I cannot develop myself (sounds hard to believe, but when I explain it, it will be so simple you'll just think "why didn't I think of this myself?"). In general, I'm not limited by any pesky language features; all I need is basic type safety and I'm set to build anything I want on top of that.
#36
Main Thesis / Cult Rules
Last post by CultLeader - June 02, 2021, 06:05:47 AM
Sup bois. Your boi here, the cult leader of this forum.

As you can see, by a domain name, this forum is a hacking cult. And I don't mean it in a good way, like people here are cult-like passionate (even though I am passionate).

But I mean it in a way, I can't stand all the unproductive, low quality, doe-eyed leftist cuck software developers and their excuses today and hence I've created a cult of my own way, of my own productivity.

What this means: if I hear typical BS you'd read on Quora, like there's no such thing as a bad database, there's no such thing as a bad programming language, everything is appropriate in its own due time - I'll ban your ass.

I deal enough with idiot developers in a daily life, I will not deal with them here.

I'm a douchebag, I love myself and I embrace it - I tell it like it is without any regard to your feelings.

There will be one idea of how to do things - that idea is mine (sorta, inspired by the LORD, my greatest teacher). You don't like that? Tough, this forum is my hacking cult, so feel free to get out.

Now, of course, reasonable people can ask questions about why things are done a certain way, and everything is divinely simple, easily explained rationally and logically, but the moment I smell trolling - people will get banned.

Congratulations on reading my opinions, there are lots more to read and I hope you learn a lot. In fact, if you indulge in all the advice that I give in this cult, you'll be top 1% developer that thinks ahead, and will have amazing insights ahead of average, brainwashed, braindead, leftist cucks who will stay unproductive and miserable with their stupidity.