Mycorrhiza – the plant wide web

So, about an hour of extra free time today, I get to write. 🙂

If you’ve found your way to my blog using Google, odds are you already know a bit about what mycorrhizae are. You may have seen a wiki page or maybe even a commercial product. Because I am more than a little fascinated by the subject, I will try to provide you with some information about it that might otherwise be a little hard to glean from the Internet.

Mycorrhizae are symbiotic relationships between plants and fungi, largely unknown to us until their relatively recent discovery. As it turns out, most land plants take part in them, and the development of mycorrhizae probably played a key role in getting plants to first colonise dry land millions of years ago.

Fungi are, from the human perspective, a very unusual type of living organism. To us they are familiar as their multicellular fruiting bodies, aka mushrooms, but the main body of the fungus is usually the below-ground part, which is composed of a mesh of mycelium. This may look like nothing more than the rather unremarkable “roots” of the fungus, but fungi are very different from both plants and animals. Fungi feed by growing through their food and digesting it outside of their bodies, then absorbing the nutrients. Each strand of mycelium is only a single cell thick, so the mesh is very fine, and the cellular contents within this mycelium can flow very quickly from where they are available to where they are needed.

This makes fungi a lot better at certain tasks than both plants and animals. Notably, in the case of plants, they are able to extract minerals and water from soils that are otherwise either toxic or unavailable to the plant. The relationship between plant roots and fungi is very old, and both plants and mycorrhizal fungi have specialised organs that serve to interface between the two species. The plant uses these organs to get nutrients and water from the fungus, and the fungus feeds on hexoses, simple sugars produced by the plant specifically for the fungus.

What is so interesting about mycorrhizal fungi, however, is that the mesh of mycelium is not a branching tree-like structure like plant roots; it is a web. The same mycorrhizal fungus can be linked with multiple different plants, and the fungal mesh is perfectly capable of transporting chemicals between them. Research shows that tomato plants can share chemical signals through this network, and other research shows that plants of different species can feed one another through it.

This somewhat challenges the long-standing belief that plants compete for resources in their natural environment, where only the most efficient survive. If most land plants take part in a local mycorrhizal network, then in fact the forest or the grassland is socialism. Or at least that’s what the people selling you your mycorrhizal fungus spores are trying to convince you of: that your plant will automatically be better off if well interfaced with a local network.

I’ve spoken to several scientists who are actively researching the subject, and in every case they are rather frustrated with the assumptions made by the companies rushing to capitalise on the discovery of mycorrhizae. The truth is: it’s complicated. After the excitement of the initial discoveries wore off, scientists quickly found that there are as many different mycorrhizal fungi as there are plants, that they all behave in different ways, and that many of them compete in the natural environment for their precious resource: plants. Different plants will invest different amounts of resources in the network, and some fungi are selective about which plants they interface with. Bottom line, it will all require decades more research before we can say anything for sure.

What I like to think (which is to say there is some foundation for this in the current research into mycorrhizae, but I am not a scientist) is that in nature there are all of these different systems, and what nature ends up using is whatever works best in a particular scenario. If different fungi are competing for plants and therefore only the most successful survive, some of the time this means that perfectly socialist networks will be prevalent. And since we have observed examples of this being the case, I would say it’s safe to say this kind of system can work. The plants and the fungi are able to determine this on their own, without human interference, even in a system where abuse is quite possible and probably even advantageous in some cases. That is, there are both plants and fungi that take advantage of the network, because they can. But it seems that in the grand scheme of things, networks without such individuals work better and out-compete networks that do contain exploitation.

What I consider the truly mind-blowing moment, however, is that if you look at the most typical back yard you can see examples of all of this everywhere. Both woodland and grassland ecosystems harbour mycorrhizal networks in their soils. For trees, research shows that trees up to 14 metres away from each other are linked, whereas in fields of grass the network probably spans the entire field. If you have a small tree planted near a large tree, even if they are not the same species, odds are the larger tree is feeding the younger one.


Xiaomi Philips WiFi LED: bring your device closer to router

To stay consistent with my practice of giving you the solution near the top of the post: if you bought this bulb outside of mainland China, there is no fixing it. Don’t waste your time; take it back to the store you bought it from and ask for a refund. Buy a different smart lamp that is properly made and actually works.

I don’t always bash crapware on my blog, but when I do… Seriously, if there were another way for me to warn people away from this product, I would have used it. The manufacturer, along with the rest of the distribution chain, seems dead set on simply conning people into buying the remaining stock of this bulb, because I doubt there is anything else they can do with it.

Although in most cases I would simply have built a device like this myself from parts, I found that I no longer have the kind of time to invest in the project and the debugging, and that it would be better for me to simply buy a finished product designed for the purpose. So, looking up what other people usually buy, I checked my favourite online store and found the Xiaomi Philips smart bulb.

Compared to my working Eufy Lumos smart light, it’s clear that the Xiaomi bulb is designed to be a generic replacement for it. However, the Xiaomi bulb comes with a 40% discount and appears to be cheaply made out of plastic, whereas the Eufy bulb is metal. Both bulbs get pretty hot in normal operation, but as you might imagine, plastic getting very hot and metal getting very hot are quite different things.

The app provided by Xiaomi is designed to appear similar to the Eufy app, but this is where the trouble begins. Xiaomi accounts use an ID number rather than a username; not only is this impossible to memorise, the ID cannot be chosen and is simply sequential, one for each person who ever signed up for an account. The trouble is that the bulb’s firmware uses a 32-bit field for the account number, so anybody with an account ID above 4294967295 is unable to link their bulb to their account. That threshold is now in the distant past, with my account ID coming up as something like 9 million, and all the app says is “bring your device closer to your router”. (For the Eufy bulb I was able to simply log in with my existing Anker account and it worked flawlessly.)
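To illustrate the kind of limit I mean, here is a quick Python sketch. It is purely my own illustration (I obviously have no access to Xiaomi’s actual firmware, and the 32-bit field is my inference from the behaviour) of what happens when you try to force an ID into an unsigned 32-bit slot:

```python
import struct

def store_account_id(account_id: int) -> bytes:
    """Pack an account ID into an unsigned 32-bit field, the way I
    suspect the bulb's firmware stores it (an assumption on my part)."""
    return struct.pack("<I", account_id)

# The largest ID that fits in 32 bits:
print(store_account_id(4294967295).hex())  # ffffffff

# Anything above it simply cannot be represented:
try:
    store_account_id(4294967296)
    fits = True
except struct.error:
    fits = False
print(fits)  # False
```

Once the sequential account counter crosses that boundary, every newer customer is locked out, and no app-side fix can help.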

Xiaomi did release an upgraded firmware (or so they claim), but seeing as you need to link the bulb to your account to access it at all, there is no way to install it. While fixing this I did, obviously, find this post, which contains an account someone made a while ago that people share; the Xiaomi representatives suggest (in broken English) linking your bulb with it temporarily to upgrade the firmware. Here the next problem follows: the firmware they released cannot be downloaded, as the procedure always gets stuck at 40% and then times out. (The description, frustratingly, does not admit Xiaomi’s fault here but simply says it “makes it easier for people to link their bulbs”, as if consumer stupidity were to blame for their shortsightedness.) You also cannot trick the bulb by linking it to a mainland-China server for the upgrade (as suggested by other Xiaomi techs), because the bulb can only be linked to servers of the region where it was bought and otherwise does not show up in the app at all.

The trouble at this point is that the Xiaomi app only links to the Chinese forum (and I don’t speak Chinese). If you happen to find their English-language forum using Google, you will soon discover that they’ve made it read-only: the new-thread button does not work, and the existing threads require you to be in some special group to post, which cannot be acquired seeing as you cannot post. From what I was able to discern, Xiaomi ran into issues with GDPR and apparently left their non-mainland-China servers to some outside company, which isn’t maintaining them well enough to keep working copies of the firmware, and this is why the procedure doesn’t work. There has also been some mention of a lawsuit between Philips and Xiaomi, which I have no trouble believing, seeing as the bulb is little more than a scam at this point.

So anyway, I’m no novice computer user who wouldn’t be able to find his way around a tricky firmware update (I worked in IT for 15 years). I tried everything I could think of, and there’s just nothing you can do to get a bulb onto a firmware version supporting newer accounts. I’ve spent 5 hours diagnosing all kinds of alternatives, and all I can do is offer you heartfelt advice to stay as far away from this company as you possibly can.

For some reason my supplier wouldn’t let me post a review for their “bestseller”. At first I couldn’t even get a refund; they just shipped a replacement, even though I very clearly requested a refund. I got my refund after I, ahem, insisted, and the post office was kind enough to return the replacement to the store free of charge. However, I still feel people should be warned, and that is what this post is for. Hopefully someone can find it over on Google. Oh, and feel free to leave a comment here, because I don’t think you’ll be able to leave one anywhere else.


LARES: Automation framework


Another one of the failed projects from my past is LARES, an automation framework I designed along with two coworkers. Basically we had a team: an electronics guy, a mechanical engineer and me, the software developer. In an unfortunate example of what happens to a project when you gloss over the need for a manager and a marketing person, I wrote the software but nobody else on the team did their part. This happened in spite of the fact that we had a customer and working prototypes.

Past regrets aside, this does mean that I now have automation framework software I don’t have a use for. Considering it would be a greater failure to let it go to waste just because the project didn’t work out, I converted the software to open source (and the demo hardware has been used for various other things as well):

While I figure that most people would rather write their own framework than use an existing one, allow me to try to sell you mine. What is an automation framework? Well, the de facto solution for automation these days is made by that German S company whose name I will not mention for copyright reasons. Their all-capitals product became synonymous with automation, automating anything from industrial machinery to things like HVACs and, oh I don’t know, uranium centrifuges in Iran. The problem with this brand is that it is $$$. My automation framework attempts to deliver the same kind of service for less than 10% of the price.

This is accomplished by using off-the-shelf components which can easily be replaced if they fail. By this I mean PCs, routers, Ethernet switches and various purpose-made IoT devices such as Arduino boards or other types of Ethernet-connected A/D or relay boards.

The framework, running on the PC, is composed of a background service that runs the hardware drivers and whatever automation is required to bring the hardware to the state indicated in the internal database. The second part is the foreground, which runs while the user is viewing it; its role is to take user inputs and relay them into the internal database.
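The pattern is simple enough to sketch. The following Python toy is my own minimal illustration of the idea, not the real LARES code (the point names and the dict-as-database are stand-ins): the foreground only ever writes the desired state, and the background service independently pushes the hardware towards it.

```python
import threading
import time

# The "internal database": desired vs. actual state of one hardware point.
# (Names here are my own illustration, not the real LARES schema.)
db = {"valve_1": {"desired": "closed", "actual": "closed"}}

def background_service(stop: threading.Event) -> None:
    """The 'driver' loop: push hardware toward the state in the database."""
    while not stop.is_set():
        for point, state in db.items():
            if state["actual"] != state["desired"]:
                # A real driver would talk to an Ethernet relay board here.
                state["actual"] = state["desired"]
        time.sleep(0.01)

def foreground_set(point: str, value: str) -> None:
    """The web UI's only job: relay user input into the database."""
    db[point]["desired"] = value

stop = threading.Event()
worker = threading.Thread(target=background_service, args=(stop,))
worker.start()

foreground_set("valve_1", "open")   # the user taps "open" on the tablet
time.sleep(0.1)                     # give the driver loop a moment
stop.set()
worker.join()
print(db["valve_1"]["actual"])      # open
```

Decoupling the two through the database is what lets the foreground die (user closes the browser tab) while the automation keeps running.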


The whole thing is compartmentalised into boxes with engineering symbols on them. Each box represents both the background and foreground components of the corresponding hardware component, or an abstract concept, depending on what you’re trying to do. This simplifies the process of automation to a game with the blocks of that four-letter L company, which I am not going to name either. A few sensor blocks, a few valve blocks and a regulator block or two come together to automate the process of generating clarified water, or whatever process it is that you wanted to automate.

The interface is web-based, which means it can be viewed on anything that can be connected to the network. It can be shared over the Internet, to coordinate multiple factories over long distances or whatever you want. The preferred control device we envisioned for the framework is a cheap consumer-grade wireless tablet: it has a touchscreen so the user can just tap the screen, and it is portable, or could be mounted onto an arm with a charging cable. If you are working in an explosive environment, there are EX-certified tablets.

I should probably mention that yes, it actually works, and I have previously worked in companies where we built such things for commercial clients. This was intended to be a commercial product; it just never got off the ground.

I abandoned the project long ago, and as such the source code is not in a great state, being mostly in Slovenian and only partially translated. If someone on the Internet would actually like to use it, I’d be motivated to at least finish the translation.

If not, at least someone has a chance to find it now and know that it existed. Thank you.



Comfortable environments

Hello all,

I apologise for leaving this blog for so long in an ugly state.

One thing that I must have written about before is my thoughts on the concept of beauty. These have to do with a simple question that was bugging me for a long time, namely, what do I find interesting in this image:

Yes, the image is from a video game, but that does not matter. I usually have a subconscious process going that latches onto interesting ideas, and although I don’t know at first why something is interesting, the payoff is usually worth it when I find out.

Now, the backstory is that the image features the intersection of the natural and the technological. This concept has always interested me. Our modern city environment is obviously technological and functional, so why don’t we find it comfortable? Is there a trick to this?

Well it turns out, there is. What got me thinking is this Kurzgesagt video:

I created this leaf generator:

…which encodes a file hash as a leaf (or a bug if you prefer), which is a great deal easier to remember and compare than the numeric hash format. Something similar could probably be done with self-symmetrical (fractal) shapes.
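The core trick can be sketched in a few lines. This is a toy of my own, not my generator’s real scheme: the specific parameters (lobes, edge shape, size, hue) and the byte-to-parameter mapping are arbitrary choices for illustration.

```python
import hashlib

def leaf_params(data: bytes) -> dict:
    """Map a file's SHA-256 hash onto a few visual 'leaf' parameters.
    These particular parameters are my own toy choices, not the
    generator's real scheme."""
    h = hashlib.sha256(data).digest()
    return {
        "lobes":     3 + h[0] % 5,                                 # 3 to 7 lobes
        "edge":      ["smooth", "serrated", "wavy", "spiky"][h[1] % 4],
        "length_mm": 20 + h[2] % 80,                               # 20 to 99 mm
        "hue_deg":   h[3] * 360 // 256,                            # leaf colour
    }

# The same file always produces the same leaf...
print(leaf_params(b"some file contents") == leaf_params(b"some file contents"))  # True

# ...while a single changed byte almost always produces a visibly
# different one, which is the whole point of comparing leaves, not hex.
print(leaf_params(b"some file contents"))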

It was a step forward: it made something technological a lot easier to process for a human being. However, it didn’t really show a way to make technological environments comfortable and intuitive. For this, I would have to realise that not all the things we are born to process encode raw data. It’s not that we find data encoded in a specific way comfortable; we find certain patterns comfortable.

They are, as it turns out, patterns which tell us that an environment we’ve found ourselves in is suitable for life. Let’s take a very simplified example so that I can demonstrate. This here is a landscape:

Any environment we’d encounter and choose to live in would need to have water. In natural landscapes, the presence of water is denoted by mists. So let’s add mists:

The other component needed for life is sunlight, so let’s add it:

Beautiful, isn’t it? Throw in some self-symmetrical objects, and if you were in a survival situation and found a place like this, you’d choose to stay there because you found it beautiful. And this would enable you to survive, as the environment would likely be full of life that you could eat and live off of.

This, as it turns out, is what I found in that image. It has mists and a sunset. It’s kind of anticlimactic once you know.

Our brains cannot really wrap themselves around the concept of water mains and grow-lights; we need to see water and sunlight to find a place livable. And while technological systems might find it preferable to have straight lines and clear rules, we need to mix in some sun, water and symmetry, and then we’re good!


Pipes! Fatbergs! Chemistry!

Hey all,

One of my free-time interests is pipes. When I was a kid, maybe 5, my father was an engineer working on the country’s nuclear power plant. This involved a lot of printing, and so, whereas other 5-year-olds were colouring pictures of animals, I was colouring cooling-system diagrams. I blame this for my fascination with pipes. The odds of someone else sharing this interest are probably exactly zero, but you never know, right?

So, let’s start with something epic. How can sewage be epic you ask? Well, like this:

…this tank is capable of containing 13 gigalitres of sewage (13,000,000,000 l). It fills up when it’s raining (and when there is snowmelt), which is why the camera is wet. Here’s a construction-phase picture of the pipes that lead into it:

Feel sorry for the people who live in that town, though. That much sewage can’t smell nice, no matter how well executed the containment is.

The truth is, as counter-intuitive as it may sound, sewage and water don’t mix. From the microbiological perspective, water makes it difficult for oxygen to reach the organic compounds, so their degradation slows, or the sewage becomes infested with microbes that do not use oxygen, producing a foul smell. Thus, as much as I admire the Americans who built this fancy gigalitre storage tank, it’s fundamentally a bad idea. I do understand that they have no other way to really solve the issue, though. It’s not like people are going to give up flush toilets to help them.


One of the easier-to-find videos about the sewers is this piece pushed by the BBC:

What you have here is people who have no idea what they are talking about sounding authoritative, because that’s just the way the BBC rolls. That stuff is not fat, it’s calcium soap.

Some searching online will lead you to several research articles (there are actually only a few; unsurprisingly, not many scientists are interested in what goes on in the sewers). What happens is that when sewage travels too far along the sewage network, it begins to decompose en route to the treatment plant. The microbes growing in it start to produce sulphuric acid, which eats away at the concrete pipes the sewage is flowing through. This causes calcium and similar minerals to leach out of the concrete, and when these come into contact with fresh water and fats from buildings further down the network, they produce calcium soap, which unlike normal soap is not water-soluble and deposits in the pipes.

But, you may say, isn’t the BBC still correct in saying that this is because people are dumping oils into the sewers? Well, not really. Fat in the sewers is not just dumped oil; it’s also basically anything you use soap or dish soap on. Research shows that normal domestic outputs like the sinks and showers of a single skyscraper contribute enough fat to create a problem. The grease interceptors that have been promoted as a result of this fatberg problem (which are a good idea, by the way; the EU requires them to be installed in basically every parking lot) actually lengthen the amount of time that the fats spend in the water and increase the amount of problematic compounds in the sewage.

In other words, carrying fats is a basic and unavoidable function of sewers, and the BBC is fooling you into believing it’s your fault that the sewers are incorrectly designed. But oh, their glorious engineers from the 60s! Infallible!


Didn’t really want to turn this into a rant, but hey it’s my blog! Deal with it!

Just kidding. Hope you learned something. Anything you may be wondering about, feel free to drop a comment.


Imapsync on Ubuntu 18.04


Imapsync, as you may know, is a tool for copying / transferring / backing up email accounts between two IMAP servers.

I found it interesting that while there are some blogs out there claiming to contain instructions on how to set this up, none of them are actually correct. The original author of Imapsync has since removed all traces of any pre-compiled binaries in an effort to steer people towards his paid Imapsync service. The only way you can use Imapsync for free is to compile it yourself. You will need a Linux machine for this.

If you don’t want the hassle of all of this and just want to transfer or back up your mail, go spend the 60 €. The author deserves it. If not, follow the instructions below:

First, make sure you have the tools to be able to follow these instructions:

sudo apt install git make cpanminus

Now open a terminal and download the Imapsync package:

git clone

This creates a folder called “imapsync”, go into it and run the install script:

cd imapsync
sudo make install

This will run a long process, which will at the end tell you exactly what you need to do. In my case it looks like this:

Here is a cpanm command to install missing Perl modules:
cpanm App::cpanminus Authen::NTLM Crypt::OpenSSL::RSA Data::Uniqid Dist::CheckConflicts IO::Tee JSON::WebToken JSON::WebToken::Crypt::RSA Mail::IMAPClient Module::ScanDeps PAR::Packer Parse::RecDescent Readonly Regexp::Common Sys::MemInfo Test::Mock::Guard Test::MockObject Test::Pod Test::Requires Test::Deep Test::Warn Unicode::String

Run the command you are given (do not copy mine), with sudo.

Some of them will most probably fail to install. This is because they depend on system libraries that must be installed with apt. You will most likely need:

sudo apt install libssl-dev libpar-packer-perl

If anything else fails to install, google it. Perl is extremely widespread and instructions are very easy to come by.

To see what else is missing you can re-run this at any time:

sudo make install

Repeat these steps until this no longer yields any errors. At this point your installation is ready and you can start using it.

Before you use it, please be sure to at least glance at the very useful “FAQ” documentation! Most significantly, if you are copying from or to GMail, I highly recommend using the --gmail1 / --gmail2 switch as appropriate and as documented here. This will take care of all of GMail’s quirks, many of which you would otherwise need to handle yourself for a successful sync.
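For reference, a typical invocation ends up looking something like this. The host, user names and password files are placeholders of my own; substitute your own details:

```shell
# Copy everything from an account on a generic IMAP server into GMail.
# --gmail2 applies Imapsync's GMail presets to the destination side;
# --passfile1/--passfile2 read the passwords from files, keeping them
# out of your shell history.
imapsync \
  --host1 imap.example.com --user1 old@example.com --passfile1 ./pw1.txt \
  --gmail2 --user2 new@gmail.com --passfile2 ./pw2.txt
```

With --gmail2 you don’t need to specify --host2 yourself; Imapsync fills in the GMail server details.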


My experience with the other installation instructions is that they can result in a slightly broken install that causes Imapsync to use a lot of CPU (100%) and work at the rather minimal speed of about 0.01 messages per second. The other procedures also trash up the system with various libraries that get marked as manually installed and will not be automatically removed should you ever choose to uninstall any of this.

In any event, if you have a few gigabytes of emails it will take a few hours, so either run it on a server in the background or leave it alone to work. Imapsync will write a log even if you don’t tell it to; it’ll be placed in the “LOG_imapsync” folder. You can interrupt the process at any time, and if you run the command again later it will resume.


My blog posts usually come with explanations of the hows and whys. In this case I figured people would abhor having to read through that stuff first, so I’m putting it at the end.

IMAP is heavily server-centric. If your email client detects a connection to an IMAP server that does not contain the messages in your inbox, it will delete all your local copies immediately. This makes any client-side backups of IMAP mail rather unreliable. The only real way to back up your email is to copy it to another IMAP server.

Short of various questionable solutions such as setting up two email accounts and copying emails between them by hand (which is time-consuming and can fail completely), the only real option is to use something like Imapsync. It will preserve the unique message IDs and ensure that the copying process does not create duplicates. It will also synchronise other information, such as which emails have been read.

I think that’s all there is to it. I will update this blog if I remember anything else that’s important.

Good luck.


Artificial Intelligence


Artificial Intelligence has been one of those promising things that people have been talking about for some time.

I always somehow screw up the serious tone of my English tech blogs by dwelling too deep on some AI-related topic. The truth is, AI has been an interest of mine for a long time (those of you who are good with a search engine will be able to find my contributions from 20+ years ago) and I still have some ideas about it that I wish I had time to put into code.

There is a lot of mysticism online related to artificial intelligence. For a lot of people it’s little more than a science-fiction-level fascination, and you can immediately tell this is the case from their persistent and senseless recycling of Asimov’s laws, which originate from 1940s science fiction and are not applicable, and never will be applicable, to any real-world software program. I am not one of those people. I view AI from the perspective of a software developer, and there is no place in my understanding of AI for overly vague abstractions that are made up and have no translation into real-life machine code.

It is probably no secret that I work in IT. Few lines of work make it as obvious that there are routine tasks that machines are good at, and creative tasks that machines suck at and need a fallible organic operator to get done. Such tasks are a core element of the maintenance of any large-scale system in use today. The goal of creating AI is automating the latter, in a way that lets machines understand the problems and solve them creatively.

Today’s AI is not quite there yet. Mind you, today’s AI is far more than anything dreamed up in the 60s. Big companies like Google and Facebook figured out long ago that humans are not very good at understanding their role in social groups once the number of people exceeds about 100 members. Google’s AI is a hivemind superintelligence that connects people using automation and… figures out what news items and YouTube videos you’ll be interested in. I have no doubt that some of you who use their services have realised by now that it somehow seems to figure out what you really needed within a day or so.

But that AI is still not quite the understanding problem-solver I was looking for, for my IT jobs. That being said, I do believe we have the technology to make such an AI a reality. Presently, the problem is mainly that nobody has yet figured out how to make money from constructing a real AI. I don’t know if there is a business model to support the creation of a true AI, other than founding a company with the explicit intention of inflating enough buzzwords to end up being bought out by Google and earning a fortune that way. Jokes aside, I think if there were commercial interest in creating a true AI, we’d have one by now.

My reasoning is primarily that… I think I know how to make one. And since Randall manages to put things I’ve thought about into comics so well, I think hundreds of other people just like us must be thinking about the same solutions as well. This means there probably already is a critical mass of developers out there who could make an AI if they put their minds to it.

The most promising piece of code that I have seen thus far was a trivial chatterbot created by a friend several years ago, who had intended to make it capable of learning a language. He allowed the bot to create a database of words in which words were no longer mere words, but abstractions. If you put aside the idea that the database has to make sense to you as the developer, you could see that the bot demonstrated some kind of rudimentary “true understanding” of the meaning behind the words. I think this concept, expanded to cover observations and reenactments rather than mere words, could become an AI capable of understanding a problem, which could then potentially be wired to try to solve that problem based on observed solutions. And potentially it could be made capable of linking simpler solutions into more complex ones; in other words, it could be made to solve problems independently.
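To give a flavour of what I mean by words-as-abstractions, here is a tiny Python sketch. It is my own guess at the spirit of my friend’s bot, not its actual design: each word becomes a node whose “meaning” is nothing but its weighted links to the words it was seen with, and nothing in the database is meant to be human-readable.

```python
from collections import defaultdict

class TinyAbstractionBot:
    """Each word becomes a node whose 'meaning' is just its weighted
    links to the words it was seen with. This is my own guess at the
    spirit of the bot described above, not its actual design."""

    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))

    def learn(self, sentence: str) -> None:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            for other in words[:i] + words[i + 1:]:
                self.links[word][other] += 1  # strengthen the association

    def associate(self, word: str) -> str:
        """Return the word most strongly linked to `word` so far."""
        linked = self.links[word.lower()]
        return max(linked, key=linked.get) if linked else ""

bot = TinyAbstractionBot()
bot.learn("rain wets grass")
bot.learn("rain wets streets")
bot.learn("sun dries grass")
print(bot.associate("rain"))   # wets
print(bot.associate("grass"))  # rain
```

The interesting part is that nobody told the bot what “rain” is; the association web alone starts to behave a little like meaning.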

The fact that this does not seem impossible is… fascinating. I have been toying with the idea of simply sacrificing some of my free time to work on such an AI, on and off, for the last few years. I want to work on it, but… I’m not enough of a hardware and robotics guy to be capable of putting such an AI into a practical environment it could learn from as humans do (by connecting words to things we see and experience). I also lack certain skills in maths which would help me build things like efficient multi-dimensional searches, which are required for efficient processing of learned abstractions from multiple aggregated inputs.

What got me writing this post was the most recent idea I had: that I might, after all, be able to create an AI that can creatively solve networking problems. Networking, as it so happens, is pretty native to computers, and it would not be outside my skill set to set things up that way. I’d still face problems like how to build an environment in which problem solving would actually be an advantage over some more brute-force approach. But at the very least it’s something where solutions could be taught and then demonstrated. There is potential.

I’m still not really sure why I write these posts, given the likelihood that nobody will ever read them. Still, if other people are thinking along the same lines, there is a nonzero chance that someone will find some advantage or encouragement in reading my thoughts on the subject, I suppose.

Good luck. 🙂


Key lime pie Internet “mystery”


So the time has come around again when the Internet reminds me of the Key Lime Pie Internet Mystery. You’ve likely found it on Reddit.

It relates to a phenomenon of SPAM comments on the Internet, on random websites, seemingly about key lime pie (pictured), with sentences eventually devolving into pornographic proportions of nonsense. The thing is, while a spambot may be to blame, it’s difficult to explain why it would be advertising pie of all things, and why it would keep this up for over a decade.

I investigated this some time ago and found it to be an encryption scheme, probably used for some deep-web-style illegal exchange of messages. I believe a Stargate episode long ago implied that the CIA uses this sort of thing on occasion as well. Who knows what the reality is, but it’s true that things are often best hidden in plain sight, as demonstrated by the number of people and the amount of time wasted on this mystery, yielding little or nothing public.

What frustrates me is that despite some people knowing the nature of this stuff, it keeps being perpetuated as a “mystery”, because of course something instantly ceases to be interesting as soon as it has been explained. So people just don’t get to learn the truth. I think even my reply on Reddit got taken down. Well, I can try to fix that again on here, and hopefully someone finds this page in their research.


So, what is the key to understanding the nonsensical messages regarding key lime pie? It is an encryption technique called steganography.

You may have noticed that the comments regarding key lime pie do not look like code. They look like at least semi-sensible sentences. This is the key component of steganography; as the wiki says: “Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video”. In other words, the hidden messages are hidden within these sentences. They are encoded by means of… the order of words, the length of particular words, the punctuation and similar: things that a human reader might easily gloss over while reading a message.

After all, we all know what SPAM looks like. We know bots tend to include semi-sensible text in their messages to defeat automated anti-SPAM protection. These types of messages are normal… right? Nobody would suspect them to hold meaning. As said, however, the meaning is not in the text itself; it’s in the things we assume are random.
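To make the idea concrete, here is a toy version of hiding data in word features. This is purely an illustration of the principle, not the actual scheme used in the pie comments (which is unknown): I'm assuming a made-up convention where each word's length parity carries one bit, and the word pools are invented.

```python
# Toy steganography: hide one bit per word in the word's length parity
# (even length = bit 0, odd length = bit 1). The cover text still reads
# like spam filler; the payload lives in a property readers ignore.

EVEN = ["lime", "love", "that", "cant", "know"]   # 4 letters -> bit 0
ODD  = ["pie", "key", "the", "and", "why"]        # 3 letters -> bit 1

def hide(bits: str) -> str:
    # Pick a cover word with the right length parity for each bit.
    words = [EVEN[i % len(EVEN)] if b == "0" else ODD[i % len(ODD)]
             for i, b in enumerate(bits)]
    return " ".join(words)

def reveal(cover: str) -> str:
    # Recover the bits from the word lengths alone.
    return "".join("0" if len(w) % 2 == 0 else "1" for w in cover.split())

print(hide("101"))  # → pie love the
```

Anyone scanning `pie love the` sees only filler; only someone who knows to count letters recovers `101`.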

One of the downsides of steganography, like every other form of encryption, is that besides the message itself, it also requires an encryption key of some kind. Let’s take an easier example, the ancient Greek scytale:

The use of the encryption device isn’t difficult to figure out from just looking at it. You take a stick, wrap a strip of paper around it and then write your message horizontally across the strip. When you unwrap the strip, the letters will appear to be gibberish until you wrap them around the same kind of stick again. In this case, the strip of paper is the encrypted message and the width of the stick is the key. Without a stick of the same width, the message does not come together again.

Giving every commander the same kind of wooden stick might have worked well in ancient times, before the invention of the ruler and standardized measuring units. These days, though, it would take very little time to figure out and decode, so modern encryption schemes use a key that keeps changing.


Back to the key lime pie example: the SPAM comments posted on various internet sites are clearly the encrypted messages, but what is the key? Where would a group of people want to post a key that keeps changing over time, that is anonymously accessible to everyone, yet arouses no suspicion? The answer is Facebook.

Of course, because in this case the encryption scheme is steganography, the key is the original text, which is modulated to generate the encrypted messages. In layman’s terms: to encrypt a message, you could take an original text to serve as the key, split it into segments of different lengths, and make each segment represent a letter. Put the segments back together in the order of the characters in your message and you’ve successfully encrypted your hidden message.

Let’s try doing that together. Let’s take the latest Facebook post by our friend Jake Carson and paste it into a spreadsheet. Then let’s take each line and assign a character to it. I’m going with the English alphabet plus a space; I’ve skipped the lines that are only dots and trimmed off the remaining messages. We end up with a key table like this.

Now let’s encode our message with it. I’m going with “hello world”:

..And We Hate To Sound Like A Broken Record But Here Is A Key Lime Pie For Our Buddy “Maurice White”, Founder Of The Great Group “Earth, Wind and Fire”!..Rest In Peace Dude.…Can’t Get Enough Of That Key Lime Pie, Key Lime Pie, Key Lime Pie. Can’t Get Enough Of That Key Lime Pie Or I’ll Just Cry Until I Die, I Don’t Know Why I Just Love My Key Lime Pies!….are so wild about him and his Famous Cheese Burgers and Key Lime Pies, are so wild about him and his Famous Cheese Burgers and Key Lime Pies,His Drop Dead Gorgeous Wife “Miss Anita” together in they’re Historic Key..Miss Anita And ’Chef ‘Captain Kutchie Pelaez’s Key West-Kutcharitaville Key Lime Pie Factory And Cafe’, “Where Eating Is A Pleasure And Cooking Is An Art”….. Hell, “Chef Kutchie Pelaez” Has More Talent In His Toe-Nail Clippings Than All The Others Have In Their Entire Bodies!..Figure!!!!!!!….His Drop Dead Gorgeous Wife “Miss Anita” together in they’re Historic Key Your Time in They’re Little “Key West Island” near the Biltmore Estate are so wild about him and his Famous Cheese Burgers and Key Lime Pies, …Kobe Bryant May Be Retiring From Basket Ball But Captain Kutchie’s Is Still His Pie Of Choice!…

Does this read as something familiar? The reason some of these texts are in all capitals while others aren’t is that, if you check the Facebook post, the segments added in the later posts are not fully capitalized.

But now that you have the encrypted message, if you go back to the key table and find the individual segments, you can reconstruct the original message “hello world”.
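The whole encode/decode round trip fits in a few lines of Python. To be clear, the key table below is made up for illustration (the real key would be the lines of the Facebook post, and the real letter assignments are unknown):

```python
# Toy segment-table steganography: each character maps to one line
# ("segment") of the public key text. Encryption is concatenation;
# decryption is peeling known segments off the front of the cover text.

KEY_TABLE = {
    "h": "Key Lime Pie For Our Buddy ",
    "e": "Rest In Peace Dude ",
    "l": "Cant Get Enough Of That ",
    "o": "Famous Cheese Burgers ",
    " ": "His Drop Dead Gorgeous Wife ",
    "w": "Where Eating Is A Pleasure ",
    "r": "And Cooking Is An Art ",
    "d": "Still His Pie Of Choice ",
}

def encrypt(message: str) -> str:
    return "".join(KEY_TABLE[ch] for ch in message)

def decrypt(cover: str) -> str:
    # Invert the table, then match segments from the front of the text.
    inverse = {seg: ch for ch, seg in KEY_TABLE.items()}
    out = []
    while cover:
        for seg, ch in inverse.items():
            if cover.startswith(seg):
                out.append(ch)
                cover = cover[len(seg):]
                break
        else:
            raise ValueError("no segment matches - wrong key table?")
    return "".join(out)
```

`encrypt("hello world")` produces a wall of pie-praising text, and `decrypt` turns it back into `hello world` — provided you hold the same key table.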

But this is just an example; the actual scheme probably doesn’t map segments straight to the alphabet. More likely the messages are first encrypted with something else, and the result then determines which segments to use in the text. The original text is more than 200 individual lines, and the lines with just the dots likely mean something too. For me, though, the mystery is solved at this point. If you really want to know what the messages say, you’re going to have to fiddle with it some more from here. Just remember to use the key that was posted on Facebook at the time the message you are decrypting was posted. 🙂

As for why the text is about key lime pies in particular? Well, it needed to be something mundane that would not arouse suspicion or identify the author. Likely the programmer of the encryption tool just googled “key” and eventually arrived at an ad for key lime pie, which they copied. When they later realized they needed more lines for a more complex key, they padded it with nonsense from a porn site about a woman with the same name as Captain Kutchie’s wife, for the lulz, and likely authored a few lines of nonsense themselves to stitch the two together.

By the way, I’d be glad to answer any questions you might have; just post them in the comments.

Also, enjoy your pie. 🙂


Heterogeneous systems advantages


As you know (or maybe you don’t? Who knows how you got here), I made an operating system kernel from scratch a while ago. The date on that page is right: the thing was put together in 2008 and then not touched again after 2013. The reason is that I realized that in order to achieve frame rates higher than the 16-ish FPS the VideoBIOS enables, I would have to implement every graphics driver ever made, for every card that ever existed, which did not seem like a worthwhile use of my time. 😛

I was explaining this to a coworker not long ago, which made the subject fresh in my mind. I remembered the event-driven architecture I had planned for this kernel. The idea was to let the kernel allocate requested system resources directly (such as clock intervals, screen regions, I/O messages, and so forth) and allow an unbroken chain of events from the hardware directly into the individual apps, entirely avoiding polling of any kind, which is commonly used in modern operating systems and is inherently inefficient.
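The core of that event-driven idea can be sketched in a few lines. This is my own minimal illustration, not code from the actual kernel, and the names (`EventKernel`, `request_resource`) are invented for the sketch:

```python
# Minimal sketch of event-driven resource dispatch: apps register a
# handler for a resource once, and each hardware event is delivered
# straight to that handler, with no polling loop in between.

class EventKernel:
    def __init__(self):
        self.handlers = {}                  # resource name -> callback

    def request_resource(self, resource, callback):
        # An app claims a resource (clock interval, screen region,
        # I/O message stream, ...) and says what to do with its events.
        self.handlers[resource] = callback

    def hardware_event(self, resource, payload):
        # A hardware interrupt arrives; dispatch it in a single hop.
        handler = self.handlers.get(resource)
        if handler:
            handler(payload)

kernel = EventKernel()
log = []
kernel.request_resource("clock", lambda t: log.append(f"tick {t}"))
kernel.hardware_event("clock", 1)           # log now holds ["tick 1"]
```

The contrast with polling is that no app ever asks "is there input yet?" in a loop; the chain runs from hardware to handler only when something actually happens.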


This morning my mind was apparently still processing this idea, and I had a dream about multiprocessing. It came with visuals of hardware architecture and all; I think I saw an FM2 socket. It seems my subconscious is fairly well versed in technology. 😛

Jokes aside, I was always fascinated by heterogeneous multiprocessing: that is, processing using multiple CPUs, where the CPUs are different and do different things. It’s an idea AMD tackled quite a bit while trying to marry GPU architecture (which is mainly parallel) with CPU architecture (which is heavily serial), starting out with the Geode cores and eventually arriving at the modern Ryzen APUs. It was also an idea visited by the Cell line of processors, although I suspect that move was based more on cost saving than on performance hunting.

The natural assumption is that since the different cores are good at different things, in tandem they should handle a random workload better than a homogeneous CPU, which does one thing well and others worse. In reality this is not the case. Much of the performance of modern machines comes from carefully timing the delays between the individual pieces of hardware, and the difference between a slow and a fast program is mainly in how well the program catches the rhythm of the underlying hardware. A heterogeneous high-performance system is always going to be fairly uncommon, so no programs get optimized for it, yielding poor overall performance.

However, there are use cases, such as the management controller on a server. It is usually a not very powerful secondary CPU that runs an entirely separate system, watching for failures in the main system and acting to correct them. Its main advantage is that it is independent and therefore still able to act if the main system becomes inoperable.

The role of the main CPU on a motherboard is of course fairly clearly separate from the other microcontrollers on a typical PC motherboard, even in cases where the CPU sits on a standardized interconnect.

The idea suggested in my dream was that you could run one of the CPU cores (or a second CPU on the motherboard) with a program entirely separate from the main OS. In reality, this would work if that CPU had its own independent memory and no need to share other hardware with the other CPU. Which raises the question of the use case. 😛

As described before, while this would be pretty cool, it would obviously not offer a performance advantage: modern computers are designed to have all CPUs share the same OS and fill very similar roles, sometimes even sharing cache, which my idea is in great conflict with. There is also no advantage to having a supercharged management controller, as management tasks are just not that elaborate.

When a computer boots, it uses a single CPU; soon after, however, the second CPU (and third, etc.) is given something to do by pointing it to a memory location and letting it do its thing. From that point on, the programmer has to be aware that the second CPU will be executing simultaneously and independently of the main boot thread, and therefore all the problems of multi-threaded processing become relevant. You are basically booting multiple computers at the same time and sharing every running program between them.

The performance of the system then depends on how well the operating system orchestrates all the components and their workloads, so that on the one hand it acts reliably, and on the other, the timings are not off and every part has something to do without waiting its turn. This is why most supercomputers are designed for a specific type of task, despite having petaflops of processing power to work with.


As far as my idea goes, I think trying to figure out how to make it work with hardware is ultimately pointless (at least as long as I don’t have a chip fab at my disposal). When I first realized I would have to code drivers, my solution was to not use PC hardware, but rather something like a tablet, where all units of a model share the same hardware. I think my event-driven concept has value, but if I implemented it today, I would probably build on something like Linux instead of coding the whole thing myself. It wouldn’t be as efficient, but it would work, and it would be a finite effort.

I think the advantage of such a system lies not so much in raw performance as in usability. What my hardware-shared, event-driven system offered over modern operating systems is that one could stop thinking in terms of what a single computer has attached, and instead think of the resources of an entire local network collectively (since events could always be transmitted over a network). Use a system with processing power for its processing resources, and use a portable system with a broad user interface (a tablet) for its user-interface resources. Migrate resources between compatible devices seamlessly.

When I first thought of tablets for my kernel in 2008, mobile devices were not yet in widespread use. In the ten years since, that has changed. Perhaps the above paragraph will also successfully predict what in technology will change over the next ten years. Will the cloud become local? We will see.


First entry

Hello everyone,

So sometime in 2016 I decided to migrate my tech blog somewhere self-hosted, but I never got around to it. Typically I would code my site myself, but this time around I don’t feel like spending all the time and tinkering needed to set it up. I just want this over with, haha.

So, welcome. This blog will contain techy stuff worth posting about in English. I also have a tech blog in Slovenian, which is educational in nature, but sometimes I just get ideas or suchlike that I want to put online somewhere so people have a chance to see them, should a random Google search lead them this way. I’m sure the AI knows whether you really wanted to know my idea or not. 😛 Aye, I realize nobody will ever see or read this stuff, but… it’s worth a shot.

I might also occasionally spam this blog with various other thoughts that I sometimes get inspired with.