Tuesday, January 27, 2015

No Interfaces Worthy Of The Name

Between self-sufficient concepts such as cars or Class Car on the one hand, and large scale models on the other, there are interfaces. Not interactions or relationships, but interfaces. The problem is that the software industry is extremely impoverished in those, and as a result, dealing with it is excruciating torture to me. Because interfaces are what I'm best at and what I love. MVC is a good example of interfaces which are debilitatingly painful to me because they're hopelessly broken and low-level.

Think about how many concepts there are for large scale structure. Patterns, architectures, frameworks, libraries: that's 4 categories already. Then think about how many concepts there are for small scale, self-sufficient structure; probably hundreds of those. And then think about how many concepts there are for interfaces that one would actually be willing to use (so command and instruction and function don't count).

Events? But events are broken and not first class, so they aren't real. Object-capabilities? Disgustingly low level and broken. Objects? Maybe, but those really count as small-scale structure, or not interfaces at all. So there's message passing, inheritance, polymorphism? That one doesn't count. Delegation. Cloning vs instantiating, subclassing. Oh yes, aspects vs crosscutting concerns, those are nice. Agents? Not really. Actors? Hmm, maybe, maybe not. Probably not. Meh, probably yes, but the problem is I just don't give a damn, since they're about distribution and concurrency.

So there are no first-class events, there are no first-class dependencies, and aspects aren't in any language I know. Transformational programming seemed in its infancy when I first heard about it, and I've never heard anyone mention it since. Namespaces suck rocks, so they're broken. Naked Objects? Oh yeah, there's some guy who implemented it as a library or framework in Java; that's good for him, honestly, but it doesn't count. Especially with the implementation being so kitsch and primitive rather than thorough and comprehensive. I mean, where's the IDE built on naked objects? Nowhere.
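
For what it's worth, here is a minimal sketch, in Python, of what I mean by events being first class: the event is a value you can hold, pass around and subscribe to, rather than an invisible side effect of some framework. The Event class and its names are invented for illustration, not taken from any real library.

    class Event:
        """A first-class event: an ordinary value that can be stored and passed around."""
        def __init__(self, name):
            self.name = name
            self._subscribers = []

        def subscribe(self, handler):
            self._subscribers.append(handler)

        def fire(self, payload=None):
            for handler in self._subscribers:
                handler(payload)

    # Because the event is an object, any code that holds it can subscribe or fire it.
    saved = Event("document-saved")
    saved.subscribe(lambda doc: print("reindexing", doc))
    saved.fire("report.txt")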

There are remote message sends and proxy objects, doesNotUnderstand:, NullObject; those are another 4 interface concepts. So that makes what? 10? An even dozen? Twenty? It doesn't matter how many there are, because here's the sick thing: they're enumerable. And they're not categories of things either, they're discrete instances of interfaces.
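
To make a couple of those concrete, here's a rough Python analogue (class names invented for illustration): a proxy that intercepts messages it doesn't implement, which is roughly what Smalltalk's doesNotUnderstand: hook gets used for, and a NullObject that swallows any message.

    class Proxy:
        """Intercepts any message it doesn't implement and forwards it to a target,
        roughly the doesNotUnderstand: trick, done with Python's __getattr__."""
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):
            print("proxy saw:", name)
            return getattr(self._target, name)

    class NullObject:
        """Understands every message and does nothing, so callers never check for None."""
        def __getattr__(self, name):
            return lambda *args, **kwargs: self

    log = NullObject()            # a do-nothing collaborator
    log.warn("disk is full")      # accepted silently, no crash

    numbers = Proxy([3, 1, 2])
    numbers.sort()                # intercepted, then forwarded to the real list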

The software world forms an uncanny-valley kind of terrain to me. There's large scale structure and then there's small scale structure, and there are no bridges between them.

I don't think I'm the only one who loathes debugging or reverse-engineering with a passion. But I do think I'm the only one who understands why. The tools are worthless because the concepts needed to even minimally support asking "where did this bug come from?" and "how do I use this?" don't exist in software.

Sunday, January 18, 2015

Conversation On Secure Multiplexing

I drew some insights into the execution stack from TUNES. More of them than from the whole exokernel thing.

The main and only insight from the exokernel was that secure multiplexing is independent of abstraction. You can have ONLY secure multiplexing, and still present something that looks exactly like the bare resource you're multiplexing. That insight fueled Xen and the other hypervisor-style virtualization things.

The only problem with it is that it's a lie. Secure multiplexing is an abstraction all by itself. If you push it, you run into the limitations of the abstraction, exposing the underlayer's existence, at which point the abstraction starts to fray and reveal its nature. For example, it becomes apparent that there ARE other OSes running on top of the hypervisor, because there's "missing time". And then it becomes obvious that hiding the guests from each other and not permitting any way for them to cooperate or interact is a choice of abstraction.

Joe B: fuck, I comprehend nothing

Okay, say you've got a CPU. Now, the traditional way to multiplex it (slice it and share it) is with a scheduler. The problem is that OS schedulers look nothing like CPUs; they're higher level. What people managing a cloud ideally want is to present CPUs, bare and naked, and tell everyone to fuck off because hey, there's your CPU, your problem.

Now, they don't want those CPUs to be REAL CPUs, because that's not scalable. But they also don't want them to interact, so that one asshole customer can't bring the whole business crawling to its knees. They want no-stick teflon quarantine isolation from each other. Better than quarantine: they want everyone stuck in their own reality with no way to guess that they're stuck in a virtual reality.

multiplexing = slicing and sharing
secure multiplexing = teflon no-stick compartmentalized quarantined isolated slicing and sharing
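
A toy sketch of the "slicing and sharing" half, just to pin the word down (the code and names are illustrative only): a round-robin scheduler hands out time slices to tasks that, from their own point of view, simply run. The "secure" half, keeping the slices from noticing each other, is what the rest of this conversation is about.

    def task(name, steps):
        """A 'program' that just does its work; it never asks to be scheduled."""
        for i in range(steps):
            print(name, "step", i)
            yield                     # slice boundary (cooperative here; real schedulers preempt)

    def round_robin(tasks):
        """Multiplexing = slicing and sharing one CPU among several tasks."""
        queue = list(tasks)
        while queue:
            current = queue.pop(0)
            try:
                next(current)         # give the task one time slice
                queue.append(current) # then put it back in line
            except StopIteration:
                pass                  # the task finished; stop scheduling it

    round_robin([task("A", 3), task("B", 2)])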

If you're a bank, you give out gold. But you want to give out virtual gold tokens that function just like actual gold, and you want to give out as many as people will buy without collapsing your business. You don't want to give out REAL gold, because most of it is just going to sit in people's homes unused rather than being consumed in jewelry and electronics. And if people are only going to trade the tokens, then the tokens only need to be pseudo-real enough for the purposes of trading. The virtual gold tokens need to look and feel real when they're being tested by a buyer, and at no other time. Which is money.

Any questions? Or is this too primitive?

Joe B: no, this is perfect

Well, the exokernel folk tried to pull the same stunt as gold => money, but with CPU+memory, or in general 'comp hardware'. The only problem is that nobody pretends that money ACTUALLY IS gold. Nobody tries to melt money down to make jewelry. Nobody tries to electroplate anything with it. So what these guys were doing is ... debasing.

They were debasing CPU+memory+hardware and saying "it's just as good as the real thing!!", and the problem with that is that inevitably they'd run into someone trying to treat it EXACTLY like the real thing (i.e., someone who bought into the propaganda), and then that someone tries to use the debased gold to electroplate something ... and feels cheated because it doesn't work.

So with an exokernel, if you have a really high load on the CPU, with many operating systems running, you end up with missing time. And the whole mockery of it being teflon and no-stick comes crashing down. Now, it's not a problem if the admins at the cloud providers keep a watch on resource utilization and add more physical computers in time ... but those admins can't pretend to themselves that it's JUST AS GOOD AS real physical computers.

And if you're going to have something that's intrinsically different from physical computers, then why not do away with some of its problems? So the exokernel folk's attitude that their project was somehow purer and better than everything else is just a lie.

What does the Unix scheduler provide as an execution abstraction? It provides processes. C processes, to be specific. GemStone provides Smalltalk processes, or even whole Smalltalk images. The C processes *ARE* images, they're just dumb as fuck images ...

So what is the exokernel lesson? The REAL lesson? At any time, at any point in the stack of abstractions, you can insert a circular loop from a node (layer) to itself, presenting a facsimile of that layer higher up. And if you understand that, then the whole exokernel project is revealed as limited in scope, because it provided ONE such circular loop among the one to two dozen layers of abstraction found in a typical operating system.

Joe B: what is this layer, and how does it loop on itself? is it the physical computer, which loops by resources being added to it?

It's any layer. You can take ANY layer and make it loop in on itself. The loop forms a layer.

Say you've got a hard disk. It presents blocks. So you can partition it, and now you have four hard disks which also present blocks. And if you're smart, you can make those partitions flexible.
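
As a sketch of that loop (Python, invented names): a disk presents blocks, and a partition takes in a block device and presents ... a block device, so it can sit anywhere a disk can, including under another partition.

    class Disk:
        """The underlying layer: something that presents numbered blocks."""
        def __init__(self, nblocks):
            self.blocks = [b"\x00" * 512 for _ in range(nblocks)]

        def read(self, n):
            return self.blocks[n]

        def write(self, n, data):
            self.blocks[n] = data

    class Partition:
        """The loop: takes in a block device, presents a block device."""
        def __init__(self, device, start, length):
            self.device, self.start, self.length = device, start, length

        def read(self, n):
            return self.device.read(self.start + n)

        def write(self, n, data):
            self.device.write(self.start + n, data)

    disk = Disk(100)
    p2 = Partition(disk, 50, 50)
    nested = Partition(p2, 10, 5)     # the loop composes: a partition of a partition
    nested.write(0, b"hello")
    print(disk.read(60))              # the same bytes, seen from the bottom layer

The framebuffer case below is the same shape in two dimensions: a window takes in a framebuffer and presents a smaller framebuffer.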

Say you've got a monitor with 1 framebuffer. Well, you can partition the monitor and present multiple framebuffers, and those are now called windows. Or you can have multiple monitors present as one framebuffer.

You generally need some OTHER resource mixed in with the first one in order to fake the first resource.

gold + paper = paper money

If you could completely supplant the underlying resource, you would do away with it and it would be called a change of technology.

TCP allows how many different sockets? And they all run over a single physical copper wire. The phone company uses multiplexing to provide virtual circuits instead of real circuits.
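
Here's a toy model of that demultiplexing (invented names, not the real socket API): many numbered circuits sharing one wire, which is essentially all that port numbers buy you.

    from collections import defaultdict

    class Wire:
        """One physical link carrying interleaved (port, data) frames."""
        def __init__(self):
            self.frames = []

        def send(self, port, data):
            self.frames.append((port, data))

    def demultiplex(wire):
        """Sort the shared wire's traffic back out into per-port virtual circuits."""
        circuits = defaultdict(list)
        for port, data in wire.frames:
            circuits[port].append(data)
        return dict(circuits)

    wire = Wire()
    wire.send(80, b"GET /")        # one conversation
    wire.send(22, b"ssh hello")    # a different one, same copper
    wire.send(80, b"Host: x")
    print(demultiplex(wire))       # {80: [b'GET /', b'Host: x'], 22: [b'ssh hello']}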

Richard: you got what I said about OSI, right? about how SOCKS is just a circular loop of a layer?
Joe B: oh yes. I got the words, not the concept. I'd have to learn the OSI model first.
Richard: SOCKS provides a sideband and extension to the layer below but it really does nothing else. Much like barebones secure multiplexing provides a sideband, although the exokernel tried to pretend the sideband didn't exist.

application layer (protocols used by applications, supposedly close to humans)
V
transport layer (virtual circuits)
V
network layer (packets)
V
link layer (0s and 1s to the next computer)
V
physical layer (physical connectors, physical cables, electrical voltages, radio frequencies)
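
Sketched as code (toy classes, invented for illustration): each layer takes in the layer below and provides something different to the layer above, which is what makes them layers rather than sidebands.

    class PhysicalLayer:
        """Moves raw bits; a shared list stands in for the cable."""
        def __init__(self):
            self.bits = []

    class LinkLayer:
        """Takes bits, provides frames to the next computer."""
        def __init__(self, physical):
            self.physical = physical

        def send_frame(self, frame):
            self.physical.bits.extend(frame)

    class NetworkLayer:
        """Takes frames, provides addressed packets."""
        def __init__(self, link):
            self.link = link

        def send_packet(self, address, payload):
            self.link.send_frame(bytes([address]) + payload)

    class TransportLayer:
        """Takes packets, provides a virtual circuit: data aimed at a port."""
        def __init__(self, network):
            self.network = network

        def send(self, address, port, data):
            self.network.send_packet(address, bytes([port]) + data)

    # The application layer sits on top and talks in its own objects (emails, pages, ...).
    stack = TransportLayer(NetworkLayer(LinkLayer(PhysicalLayer())))
    stack.send(address=7, port=80, data=b"GET /")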

Joe B: okay, that makes sense

In the fibersphere model, there are no packets, and the virtual circuits are pretty close to real circuits, so they're fused in with the link layer. Too bad we have no fibersphere, because it might have been resistant to wiretapping, since you'd need to own a substantial fraction of the world's computing resources to wiretap everybody. Not even to interpret or do analysis, JUST to wiretap.

So, the OSI model provided two additional layers beyond the above, and both of them were sidebands off of the application layer and the transport layer. SOCKS takes virtual circuits and provides ... virtual circuits, plus some proxying and crypto. The so-called presentation layer took in application stuff and provided ... different application stuff. MIME took text and provided images, both of them being application layer.

The fact that these two layers sit BESIDE the application and transport layers really confused the dumbasses who made OSI (which means moralists, since this was a standard). They thought: since SOCKS takes in virtual circuits, we'll just ignore that it provides virtual circuits, we'll focus on the other stuff it provides and call it a higher layer. And as for the presentation layer: since there's nothing closer to humans than applications, by definition, then by stupidity it follows that presentation must be below applications, and let's ignore the facts to the contrary.

Joe B: yeah, I stalled at trying to distinguish application from presentation

An email is an application object; the application layer provides for emails. Well, MIME took emails and provided images, and that's exactly how Gmail attachments work. Gmail just hides the MIME, which is what should have been done in the past but wasn't.
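
A concrete illustration with Python's standard email package (the MIME machinery itself, nothing Gmail-specific): the input is an application-layer object, a text email, and the output is still an application-layer object, now carrying an image.

    from email.message import EmailMessage

    # A plain text email: an application-layer object.
    msg = EmailMessage()
    msg["Subject"] = "holiday photos"
    msg.set_content("See attached.")

    # MIME takes that email and provides ... an email with an image inside it.
    fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16    # stand-in bytes, not a real picture
    msg.add_attachment(fake_png, maintype="image", subtype="png", filename="beach.png")

    print(msg.get_content_type())    # multipart/mixed: still the application layer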

Basically, those two layers are extensions of an existing layer rather than separate layers in themselves. Extensions which aren't accepted enough to be considered part of the same layer. Or weren't at the time OSI was made. Hence the session and presentation layers belong on the same level as transport and application ... just beside them.
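
And here's the SOCKS shape in the same style (invented classes, not a real SOCKS implementation): a circuit goes in, a circuit comes out, and the destination handshake is the sideband riding on it.

    class Circuit:
        """What the transport layer provides: a two-way pipe to a peer."""
        def __init__(self, peer):
            self.peer = peer

        def send(self, data):
            print("to", self.peer, ":", data)

    class SocksCircuit:
        """The loop: takes in a circuit, provides ... a circuit.
        The CONNECT handshake is the sideband; send() is unchanged."""
        def __init__(self, circuit_to_proxy, destination):
            self.circuit = circuit_to_proxy
            self.circuit.send(b"CONNECT " + destination.encode())   # the sideband

        def send(self, data):
            self.circuit.send(data)

    plain = Circuit("proxy.example.org")
    tunneled = SocksCircuit(plain, "mail.example.net:25")
    tunneled.send(b"HELO")      # same interface as any other circuit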

Joe B: so… a loop layer is one that can take in the same entities that it can provide?

It's basically a type of extension of the layer below: the extension is aware of that layer, and that layer isn't aware of the extension.

Joe B: hmmm

Joe B: is this design, or is this analysis? well it's both. it's awesome, lol.

It's the kind of high-level analysis that fuels systems design, and NOT normal design. It's part of the majestic overlayer that has until now been entirely missing. This is lesson 4?


  • definitions / thinking
  • manipulating datasets
  • injecting values