Author Topic: Easiest way to attack a design choice...  (Read 638 times)


Easiest way to attack a design choice...
« on: January 29, 2022, 02:24:41 PM »
…is by saying it is generic. And we'll cover a lot of examples that expose these hidden inefficiencies to the naked eye.

Let's start with the foundations: the CPU. x86 and ARM are generic compute machines, and generic compute machines will never be as optimal for special tasks as dedicated ASICs. Bitcoin mining first happened on CPUs. Then, since GPUs align more closely with the problem of massively parallel computation, people mined with GPUs. But GPUs are, again, generic compute devices, made mostly for rendering games, so the next step was creating ASICs. Well, there's a bus stop between GPU and ASIC that I skipped, called Field Programmable Gate Arrays (FPGAs), but I don't know if anyone tried programming FPGAs to mine Bitcoin. Anyway, the moral of the story is that CPUs and GPUs are generic devices that will always be beaten by specialised hardware. I just spent a paragraph saying the sky is blue, so what?

Think about a world before GPUs, FPGAs and ASICs. It wasn't that obvious then, was it? And rest assured, we will challenge a lot of de facto ideas and predict that they will disintegrate in the future.

Memory allocators.

Today a lot of people spend time developing memory allocators. Memory allocators are implementations of the generic interface of malloc and free. Think of all the questions you have to ask yourself when implementing malloc. Does the user want a big chunk of memory or a small one? How often is memory freed? Does memory travel between threads? The implementors cannot have the slightest idea, because such information is not passed to the function, hence they have to make generic approximations. That is, allocator code is full of ifs and elses, draining precious CPU cycles trying to fight this lack of information with dynamic analysis. Dynamism is cancer too, like I mentioned in another post.

There are the standard operating system allocators, then jemalloc, and now mimalloc seems to be competing for the title of the "fastest" allocator. In reality they're just better approximations of the needs of today's real-world software.

But, for instance, if a routine only uses small objects, we could implement a simpler small-object allocator and beat the generic one. Same with big objects. Same with objects that never need to be freed. Specialised allocators will always beat generic ones.
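As a minimal sketch of that idea, here's a toy bump (arena) allocator in Rust, suited to the "objects that never need to be freed individually" case. The `Arena` type and its methods are illustrative names I made up, not a real crate:

```rust
// A minimal bump (arena) allocator sketch: all objects share one
// preallocated buffer and are "freed" together in a single reset.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn with_capacity(cap: usize) -> Self {
        Arena { buf: vec![0; cap], used: 0 }
    }

    // Allocation is a pointer bump: no free lists, no size-class
    // logic, no branches trying to guess the caller's pattern.
    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        if self.used + size > self.buf.len() {
            return None; // out of arena space
        }
        let start = self.used;
        self.used += size;
        Some(&mut self.buf[start..start + size])
    }

    // "Freeing" is resetting one integer -- valid only because the
    // caller has promised all allocations die at the same time.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = Arena::with_capacity(1024);
    let chunk = arena.alloc(128).expect("fits in arena");
    chunk[0] = 42;
    assert_eq!(arena.used, 128);
    arena.reset();
    assert_eq!(arena.used, 0);
}
```

Because the allocator knows the lifetime pattern up front, every generic branch a malloc implementation needs simply disappears.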

Today in Rust you pick one allocator for everything and tada... I predict that future compilers will perform static analysis like a skilled programmer and pick specialised allocators per routine, instead of one generic allocator for everything like jemalloc, mimalloc and so on.

So, we just murdered today's untouchable holy cow of memory allocators, and this truth will become obvious to everyone within a few decades. What's next?

SQL databases. Let's pick on the generic solution of query planning. What happens today is that there's a dynamic table of metadata about the data in the DB. If someone makes a query, the DB has to parse that statement, validate it against the metadata, optimise the query with table stats, and so on.

Accepting a string is a generic solution. We must spend time parsing, analysing and handling errors at our precious runtime.

How about we move the information about DB tables and queries to compile time? After all, Ruby programmers write tests to be reasonably sure about DB interactions before deployment to production.

What if database tables weren't binary blobs in some generic filesystem format, but rather programming language constructs?
What if your DB was generated code: a single object where tables are hash maps and arrays, and your planned queries are simply exposed methods that you call and that return typesafe results?
What if, by defining the queries you need beforehand, you could eliminate all queries that don't hit indexes and never deploy them?
What if, after analysing the queries against the tables, you determined that certain tables are never queried together and can be serviced by separate cores without any generic locking mechanisms?
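A toy sketch of what such a "database as generated code" could look like in Rust. The `Db` and `User` types are hypothetical; a real implementation would generate this code from a schema:

```rust
use std::collections::HashMap;

// The "database" is plain code: a struct whose tables are in-memory
// collections and whose planned queries are typed methods.
// No SQL string is parsed at runtime.
#[derive(Clone)]
struct User {
    id: u64,
    name: String,
}

struct Db {
    // A table with its index baked in as a language construct.
    users_by_id: HashMap<u64, User>,
}

impl Db {
    // A "planned query" defined at compile time: it can only hit the
    // index, so an unindexed full scan is unrepresentable here.
    fn user_by_id(&self, id: u64) -> Option<&User> {
        self.users_by_id.get(&id)
    }
}

fn main() {
    let mut users_by_id = HashMap::new();
    users_by_id.insert(1, User { id: 1, name: "alice".into() });
    let db = Db { users_by_id };
    // Typesafe result: Option<&User>, not a row of untyped strings.
    assert_eq!(db.user_by_id(1).map(|u| u.name.as_str()), Some("alice"));
    assert!(db.user_by_id(2).is_none());
}
```

A misspelled column or a type mismatch is now a compile error instead of a runtime query-planner failure.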

I'd love to use such a DB instead, and I will probably implement one following this pattern...

Another sacred cow is dead. Let's pick on the Haskell purists!

Smug Haskell academics say "everything must be pure". Guess what? Making everything pure is a generic solution. Because of it, Haskell has to copy gazillions of objects and be inefficient, and it greatly complicates the development of software projects with this impractical puzzle language.

Think of Cardano, a soon-to-be-dead shitcoin that hasn't delivered anything of practical value since 2017, despite a four-plus-year head start over projects like Avalanche, Polkadot and Solana, which use practical languages like Go or Rust and already have huge ecosystems. Meanwhile ADA's Haskell crowd is just beginning to optimise its 8 TPS smart contract platform. Even if they do optimise and become sufficiently usable (they will never reach the Rust speeds of Polkadot and Solana), their competitors will already be huge and nobody will need yet another blockchain.

So much for imposing the generic solution of purity on your project and destroying your productivity and development efforts.

The more specific solution would be controlled side effects. You mark a function pure, and it can only use pure functions. I imagine a programming language will emerge, or maybe provable Ada (SPARK) already does this, where you can query and prove properties of the code: that it does not modify a given variable, that it is pure, and so on.
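Rust's `const fn` is an existing, if narrow, instance of that exact rule: a const fn may only call other const fns, so the compiler enforces the "pure can only use pure" restriction transitively. A small sketch:

```rust
// `const fn` as a restricted form of controlled purity: the compiler
// rejects side effects like I/O or heap mutation inside it, and a
// const fn may only call other const fns.
const fn square(x: i64) -> i64 {
    x * x
}

const fn sum_of_squares(a: i64, b: i64) -> i64 {
    // Allowed: `square` is itself const (pure in this restricted sense).
    square(a) + square(b)
    // Not allowed (compile error if uncommented):
    // println!("side effect");
}

fn main() {
    // The marked-pure function can even run at compile time.
    const N: i64 = sum_of_squares(3, 4);
    assert_eq!(N, 25);
}
```

Here purity is opt-in per function, rather than a generic constraint imposed on the whole program.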

Third cow done. Let's get serious: Operating systems.

Think of Linux and the crazy amount of generic stuff it supports. I can guarantee that there is a ton of performance left on the table because of generic Linux interfaces.

For instance, Terry Davis (RIP, you genius hacker soul!) wrote TempleOS, which specialised in games, and in his OS he could perform a million context switches versus Linux's thousands. That was because he coded a specific solution and circumvented lots of wasteful abstractions that are simply not needed in his context.

So far Linux seems to suffice for most applications with adequate performance, but I predict more and more unikernels will appear, functioning as faster databases, load balancers, search engines and so on. Unikernels will also eliminate many generic system attack vectors, such as unintended shell access.

We could go on forever and ever about things that are generic and costly because of it, but I don't have infinite time. Simply put: if you see inefficiency or encumbrance, ask yourself, is it because, for instance, XML, JSON or YAML is generic? If it is, how could it be made specific? Maybe these could be specific, typesafe language constructs? ;)
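For example, here's a sketch in Rust of replacing a generic YAML/JSON config file with a typesafe language construct. The `ServerConfig` fields are made up for illustration:

```rust
// A config as a language construct instead of a generic text format.
// No parser, no schema validation, no runtime error handling.
struct ServerConfig {
    host: &'static str,
    port: u16,
    max_connections: u32,
}

// A typo'd field name, or a string where a number belongs, is now a
// compile error rather than a runtime parse failure in production.
const CONFIG: ServerConfig = ServerConfig {
    host: "127.0.0.1",
    port: 8080,
    max_connections: 1024,
};

fn main() {
    assert_eq!(CONFIG.host, "127.0.0.1");
    assert_eq!(CONFIG.port, 8080);
}
```

The trade-off is that changing the config requires a recompile; the gain is that a whole class of "bad config" runtime failures becomes unrepresentable.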

And you'll see angles you have never considered before.

By the way, nature seems to be created with all specialised hardware, no generic Intel chips. Specific bones, eyes and nails for every animal kind. So that speaks volumes of the infinite wisdom of our Creator!

Have a nice day
« Last Edit: January 31, 2022, 01:35:01 PM by CultLeader »