Turkey buys Delphi licenses for an estimated one million students

January 20, 2020 Leave a comment

The Ministry of Education in Turkey recently announced that it will be offering Delphi free of charge to students through a license deal with Embarcadero. An estimated one million students will thus learn object-oriented programming through this initiative.

Getting Object Pascal back into universities and education is very important. Not just for Delphi as a product and Embarcadero as a company, but to ensure that the next generation of software developers is given a firm grasp of fundamental programming concepts: the building blocks that all software rests on. That part of the curriculum took heavy damage from the adoption of Java and C# in the early 2000s.

Object Pascal as a language (including FreePascal, Oxygene and various alternative compilers) has been fluctuating between #11 and #14 on the TIOBE index for a few years. TIOBE is an index that tracks the use and popularity of programming languages around the world, and helps companies get an indication of where to invest. And despite what people have been led to believe, Object Pascal has seen stable growth for many years, and is far more widespread than sceptics would have you believe.

As an ardent Object Pascal developer I find this excellent news! Not only will it help the next generation of students learn proper programming from the ground up, but it will also help retire some of the unfounded myths surrounding the language (and Delphi in particular) that are sadly still in circulation. Most of these rumors stem from the turbulent Borland years, when Microsoft aggressively poached key Borland engineers, coupled with rampant piracy of outdated and antiquated versions of Delphi online.

Thank you to Hür Akdülger for informing the Delphi Developer community about this. Truly a monumental sign of growth. Congratulations Embarcadero and the Turkish students!

Source [in Turkish]:


Interbase for 2020 and beyond

January 19, 2020 Leave a comment

I just published an article about InterBase on the Embarcadero community pages! InterBase is a much-loved database that has seen some radical improvements. Check out my top 5 reasons to use InterBase with your Delphi, C++Builder or Sencha applications here: https://community.idera.com/developer-tools/b/blog/posts/five-reasons-to-use-interbase-in-2020-and-beyond


Check out my top five InterBase features for 2020 and beyond

In memory of Rudy Velthuis

January 4, 2020 1 comment

Our fellow Delphi and C++ developer Rudy Velthuis (1960–2019) has sadly passed away.

I think we have all had a dialog with him at some point in our developer careers. I remember him helping me get to grips with bitmap scanline coding under Delphi 5 on the old Borland newsgroups.

Besides being a superb developer, Rudy also had a great sense of humor, and a charm and wit that made him fun to talk with and easy to learn from. He would go out of his way to help others and share his insight and wisdom whenever he could.

Sadly this tragic news took some time to reach our community. Rudy passed away peacefully in his sleep last May. When the news broke this morning on Facebook, the otherwise active group fell utterly silent.

Our deepest condolences to his family and loved ones. He will be fondly remembered by everyone.

Rest in peace.

Nodebuilder, QTX and the release of my brand new social platform

January 2, 2020 9 comments

First, let me wish everyone a wonderful new year! With Xmas and the silly season firmly behind us, and my batteries recharged, I feel my coding fingers itch to get started again.

2019 was a very busy year for me. Exhausting, even. I have juggled a full-time job, kids and family, as well as our community project, Quartex Media Desktop. And since that project has grown considerably, I had to stop and work on the tooling. Which is what NodeBuilder is all about.

I have also released my own social media platform (see further down). This was initially scheduled for Q4 2020, but Facebook pissed me off something fierce, so I set it up in December instead.



Those of you who read my blog probably remember the message system I made for the Quartex Desktop, called Ragnarok. This is a system for message dispatching and RPC, except that the handling is decoupled from the transport medium. In other words, it doesn't care how you deliver a message (WebSocket, UDP, REST), but rather takes care of serialization, binary data and security.

All the back-end services that make up the desktop system are constructed around Ragnarok. Each service exposes a set of methods that the core can call, much like a normal RPC / SOAP service would. Each method is represented by a request and response object, which Ragnarok serializes into a JSON message envelope.

In our current model I use WebSocket, which gives a full-duplex, long-term connection (avoiding the overhead of connecting and performing a handshake for each call). But there is nothing in the way of implementing a REST transport layer. UDP is already supported; it is used by the zero-config system, whereby the services automatically find each other and register as long as they are connected to the same router or switch. For the public service I think REST makes more sense, since it will better utilize the software clustering that node.js offers.


NodeBuilder is a relatively simple service designer, but highly effective for our needs


Now, for small services that expose just a handful of methods (like our chat service), writing the message classes manually is not really a problem. But the moment you have 20 or 30 methods, and need to implement up to 60 message classes by hand, this approach quickly becomes unmanageable and prone to errors. So I simply had to stop before Xmas and work on our service designer. That way we can generate the boilerplate code in seconds rather than days and weeks.
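The generation step itself is conceptually simple. Here is a small JavaScript sketch of the idea; the class names and the base classes TQTXRequestMessage / TQTXResponseMessage are made up for illustration, and the real NodeBuilder emits far richer boilerplate:

```javascript
// Hypothetical sketch: derive a request/response class pair per method.
// NodeBuilder's real output is much richer; this only shows the principle.
function generateMessageClasses(serviceName, methods) {
  const lines = [];
  for (const m of methods) {
    const base = serviceName + m[0].toUpperCase() + m.slice(1);
    lines.push(`class ${base}Request extends TQTXRequestMessage {}`);
    lines.push(`class ${base}Response extends TQTXResponseMessage {}`);
  }
  return lines.join('\n');
}

// 30 methods in, 60 class stubs out -- in milliseconds instead of days.
```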

While I don't have time to evolve this software beyond a simple service designer (well, I kinda did already), I have no problem seeing this system as the beginning of a wonderful, universal service authoring system. One that includes coding, libraries and the full scope of the QTX runtime library.

In fact, most of the needed parts are in the codebase already, but not everything has been activated. I don’t have time to build both a native development system AND the development system for the desktop.


NodeBuilder already has a fully functional form designer and code editor, but they are dormant for now due to time restrictions. Quartex Media Desktop comes first.

But right now, we have bigger fish to fry.

Quartex Media Desktop

We have made tremendous progress on our universal desktop environment, to the point where the baseline services are very close to completion. A month should be enough to finish this unless something unforeseen comes up.


Quartex Media Desktop provides an ecosystem for advanced web applications

You have to factor in that this project has only had weekends and the odd after-work hour allocated to it. So even though we have been developing this for 12 months, the actual number of days is roughly half of that.

So all things considered, I think we have produced a massive amount of code in a short time. Even simple 2D games usually take two years of daily development, and that includes a team of at least five people! I'm a single developer working in my spare time.

So what exactly is left?

The last thing we did before Xmas was to throw out the last remnants of Smart Mobile Studio code. The back-end services are now completely implemented in our own QTX runtime library, which has been written from scratch. There is not a line of code from Smart Mobile Studio in QTX, which means we no longer have to care what that system does or where it goes.

To sum up:

  • Push all file handling code out of the core
  • Implement file handling as its own service

Those two steps might seem simple enough, but you have to remember that the older code was based on the Linux path system, and was read-only.

So when pushing that code out of the core, we also have to add all the functionality that was never implemented in our prototype.


Each class actually represents a separate “mini” program, and there are still many more methods to go before we can put this service into production.

Since JavaScript does not support threads, each method needs to be implemented as a separate program. So when a method is called, the file/task manager literally spawns a new process just for that task, and the result is swiftly returned to the caller in an async manner.

So what is ultimately simple, becomes more elaborate if you want to do it right. This is the price we pay for universality and a cluster enabled service-stack.

This is also why I have put the service development on pause until we have finished the NodeBuilder tooling. I did this because I know from experience that the moment the baseline is ready, both myself and users of the system are going to go "oh, we need this, and that, and those". Being able to quickly design and auto-generate all the boilerplate code will save us months of work. So I would rather spend a couple of weeks on NodeBuilder than waste months manually writing all that boilerplate down the line.

What about the QTX runtime-library?

Writing an RTL from scratch was not something I could have anticipated before we started this project. But thankfully the worst part of this job is already finished.

The RTL is divided into three parts:

  • Non-visual code: the classes and methods that make QTX an RTL
  • Visual code: custom controls + standard controls (buttons, lists etc.)
  • Visual designer

The non-visual aspect of the system is finished and working beautifully. It's a lot faster than the code I wrote for Smart Mobile Studio (roughly twice as fast on average). I also implemented a full visual designer, both as a Delphi visual component and as a QTX visual component.


Quartex Media Desktop makes running on several machines [cluster] easy and seamless

So fundamental visual classes like TCustomControl are already there. What I haven't had time to finish are the standard controls, like TButton, TListBox, TEdit and those types of visual components. They will be added after the release of QTX, at which point we throw out the absolute last remnants of Smart Mobile Studio from the client (HTML5) software too.

Why is the QTX Runtime-Library important again?

When the desktop is out the door, the true work begins! The desktop has several roles to play, but its most important task is to provide an ecosystem capable of hosting web-based applications, offering features and methods traditionally only found in Windows, Linux or OS X. It truly is a complete cloud system that can scale from a single affordable SBC (single-board computer) to a high-end cluster of powerful servers.

Clustering and writing distributed applications has always been difficult, but Quartex Media Desktop makes it simple. It is no more difficult for a user to work on a clustered system than it is to work on a normal, single OS. The difficult part has already been taken care of, and as long as people follow the rules, there will be no issues beyond ordinary maintenance.

And the first commercial application to come out of Quartex Components is CloudForge, the development system for the platform. It has the same role as Visual Studio for Windows, or Xcode for Apple OS X.


The Quartex Media Desktop Cluster cube. A $400 super computer

I have prepared three compilers for the system already. First there is C/C++, courtesy of Clang, so C developers will be able to jump in and get productive immediately. The second compiler is FreePascal, or more precisely pas2js, which allows you to compile ordinary FreePascal code (which is highly Delphi-compatible) to both JavaScript and WebAssembly.

And last but not least, there is my fork of DWScript, which is the same compiler that Smart Mobile Studio uses. Except that my fork is based on the absolute latest version, and I have modified it heavily to better match special features in QTX. So right out of the door CloudForge will have C/C++, two Object Pascal compilers, and vanilla JavaScript and TypeScript. TypeScript also has its own WebAssembly compiler, so hard-core development directly in a browser or HTML5 viewport is where we are headed.

Once the IDE is finished I can finally, finally continue on the LDEF bytecode runtime, which will be used in my BlitzBasic port and ultimately replace Clang, FreePascal and DWScript. As a bonus it will emit native code for a variety of systems, including x86, ARM, 68k [including the 68080] and PPC.

This might sound incredibly ambitious, if not impossible. But what I'm ultimately doing here is moving existing code that I already have into a new paradigm.

The beauty of Object Pascal is the sheer size and volume of available components and code. Some refactoring must be done due to the async nature of JS, but when needed we fall back on WebAssembly via FreePascal (WASM executes linearly, just like ordinary native code does).

A brand new social platform

During December, Facebook royally pissed me off. I cannot underline enough how much I loathe A.I censorship, and the mistakes that A.I makes, where you are utterly powerless to complain or be heard by a human being. In my case I posted a gif from their own mobile application, of a female bodybuilder doing push-ups while in a hand-stand. In other words, a completely harmless gif with strength as the punchline. The A.I was not able to distinguish between a leotard and bare skin, and just like that I was muted for over a week. No human being would make such a ruling. As an admin of a fairly large set of groups, I have also seen many cases where bans result from disgruntled members acting out of revenge, reporting technical posts about coding as porn or offensive. Again, you are helpless, because there is nobody you can talk to about resolving the issue. And this time I had enough.

It was always planned that we would launch our own social media platform, an alternative to Facebook aimed at adult geeks rather than kids (Facebook operates with an age limit of 13 years). So instead of waiting, I rushed out and set up a brand new social network. One where those banal restrictions Facebook has conditioned us with do not apply.

Just to underline, this is not some simple little web forum. This is more or less a carbon copy of Facebook the way it used to be 8–9 years ago. So instead of having a single group on Facebook, we can now have as many groups as we like, on a platform that looks more or less identical to Facebook, but under our control and human rules.


Amigadisrupt.com is a brand new social media platform for geeks

You can visit the site right now at https://www.amigadisrupt.com. Obviously the content on the platform is presently dominated by retro computing, but groups like Delphi Developer and FPC Developer have already been set up and are in use. If you are expecting thousands of active users, that will take time. We are now closing in on 250 active users, which is pretty good for such a short period of time. I don't want a platform anywhere near as big as FB. The goal is to reach 10k users and have a stable community of coders, retro geeks, builders and creative individuals.

AD (Amiga Disrupt) will be a standard application that comes with Quartex Media Desktop. This is the beauty of web technology, in that it can unify different resources under one roof. And we will have our cake and eat it come hell or high water.

Disclaimer: Amiga Disrupt has a lower age limit of 18 years. This is a platform meant for adults, which means there will be profanity, jokes that would get you banned on Facebook, and content that is not meant for kids. This is hacker-land, and political correctness is considered toilet paper. So if you need the kind of social toffery that FB and Twitter deal in, you will be kicked by one of the admins.

After you sign up, your feed will be completely empty. Here is how to get it started. And feel free to add me to your friends list!

NVidia Jetson Nano: Part 2

November 27, 2019 Leave a comment

Last week I posted a review of the NVidia Jetson Nano IoT board, where I gave the board a score of 6 out of 10. This score was based on the fact that the CPU in particular was a joke compared to other, far better and more affordable boards on the market.


Ubuntu running on the NVidia Jetson Nano

Since my primary segment for IoT boards involves web technology, WebAssembly and Asm.js (HTML5) in particular, the CPU did not deliver the performance I expected from an NVidia board. But as stated in my article, this board was never designed to be a multi-purpose maker board. This is a board for training and implementing A.I models (machine learning), which is why it ships with an astonishing 128 CUDA cores in its GPU.

However, two days after I posted said review, NVidia issued several updates. The graphics drivers were updated, followed by a custom Chrome browser build. Naturally I was eager to see if this would affect my initial experience, and to say it did would be an understatement.

Driver update

After this update the Jetson was able to render the HTML5-based desktop system I use for testing, and it passed more or less all my test cases with flying colors. In fact, I was able to run two tests simultaneously: it renders Quake 3 (which uses WebGL) in full-screen at 45 fps, with Tyrian (which uses the GPU's 2D functions) running at a whopping 35 fps (!).

Obviously that payload is significant; all four CPU cores were taxed at 70% when running two such demanding programs inside the same viewport. Personally I'm not much of a gamer, but I find that testing a SoC by its ability to process HTML5 and JS, especially code that taps into the underlying GPU for both 3D and 2D, gives a good picture of the SoC's capabilities.

I still think the CPU was a poor choice by NVidia. The production cost of using an A72 CPU instead is practically nothing: the A72 would add roughly 50 cents (USD 0.50) to the board's retail value, but in return it would give us a much higher clock speed and a beefier CPU cache. The A72 is in practical terms twice as fast as the A57.

A better score

But, fair is fair, and I am changing the review score to 7 out of 10, which is the best score I have ever given to any IoT device.

The only board that has a chance of topping that is the ODroid N2, when (if ever) Hardkernel bothers to implement X drivers for the Mali GPU.

NVidia Jetson Nano

November 21, 2019 Leave a comment

The sheer volume of technology you can pick up for $100 in 2019 is simply amazing. Back in 2012, when the IoT revolution began, people were amazed that the tiny Raspberry PI SoC was able to run a modern browser and play Quake. Fast forward to 2019, and that power has been multiplied a hundredfold.

Of all the single-board computers available today, the NVidia Jetson Nano exists in a whole different sphere from the rest. It ships with 4 GB of RAM, a mid-range 64-bit quad-core processor, and a whopping 128 GPU cores (!).


It does cost a little more than the average board, but not much. The latest Raspberry PI v4 sells for $79, just $20 less than the Jetson. But the VideoCore GPU that powers the Raspberry PI v4 was not designed for the tasks the Jetson can take on. Now don't get me wrong, the Raspberry PI v4 has roughly the same SoC (the CPU on the PI is one revision higher), and the VideoCore is a formidable graphics processor. But if you want to work with artificial intelligence, you need proper CUDA support; something the VideoCore can't deliver.

The specs

Let's have a look at the specs before we continue. To give you some idea of the power here: these are roughly the same specs as the Nintendo Switch. In fact, if we go up one model to the NVidia TX1, that is exactly what the Nintendo Switch uses. So if you are expecting something to replace your Intel i5 or i7 desktop computer, you might want to look elsewhere. But as an IoT device for artificial intelligence work, it has a lot to offer:

  • 128-core Maxwell GPU
  • Quad-core Arm A57 processor @ 1.43 GHz
  • System Memory – 4GB 64-bit LPDDR4 @ 25.6 GB/s
  • Storage – microSD card slot (devkit) or 16GB eMMC flash (production)
  • Video Encode – 4K @ 30 | 4x 1080p @ 30 | 9x 720p @ 30 (H.264/H.265)
  • Video Decode – 4K @ 60 | 2x 4K @ 30 | 8x 1080p @ 30 | 18x 720p @ 30 (H.264/H.265)
  • Video Output – HDMI 2.0 and eDP 1.4 (video only)
  • Gigabit Ethernet (RJ45) + 4-pin PoE header
  • USB – 4x USB 3.0 ports, 1x USB 2.0 Micro-B port for power or device mode
  • M.2 Key E socket (PCIe x1, USB 2.0, UART, I2S, and I2C)
  • 40-pin expansion header with GPIO, I2C, I2S, SPI, UART signals
  • 8-pin button header with system power, reset, and force recovery related signals
  • Misc – Power LED, 4-pin fan header
  • Power Supply – 5V/4A via power barrel or 5V/2A via micro USB port

ARM for the future

Most of the single-board computers (SBCs) on the market today sell somewhere around the $70 mark. As a consequence, their processing power and graphical capacity are severely limited compared to traditional desktop computers. IoT boards are closely linked to the mobile phone industry, typically a couple of years behind the latest SoCs; which means you can expect to see ARM IoT boards that floor an Intel i5 sometime in 2022 (perhaps sooner).

Top-of-the-line mobile phones, like the Samsung Galaxy Note 10, presently ship with the Snapdragon 855 SoC. The Snapdragon 855 delivers the same processing power as an Intel i5 CPU (actually slightly faster). It is amazing to see how fast ARM is catching up with Intel and AMD. Powerful technology is expensive, which is why IoT is brilliant; by using the same SoCs that the mobile industry has perfected, board makers are able to tap into the price drop that comes with mass production. IoT is quite frankly piggybacking on the ever-growing demand for faster mobile devices; which is a great thing for us consumers.

Intel and AMD are not resting on their laurels either, and Intel in particular has lowered their prices considerably to meet the onslaught of cheap ARM boards hitting the market. But despite their best efforts, there is no doubt where the market is heading; ARM is the future for mobile, desktop and server alike.

The Nvidia Jetson Nano

Out of all the boards available on the market for less than US $100 (and that is a long list), one board deserves special attention: the NVidia Jetson Nano developer board. It was released back in March of this year. I wanted to get this board as it hit the stores, but since my ODroid XU4 and N2 have performed so well, there simply was no need for it; until recently.

First of all, the Jetson SBC is not a cheap $35 board. The GPU on this board is above and beyond anything available on competing single-board computers. Most IoT boards are fitted with a relatively modest GPU (graphics processing unit), typically with 1, 2, 4 or 8 cores. The NVidia Jetson Nano, though, ships with 128 GPU cores (!), making it by far the best option for creating, training and deploying artificial intelligence models.


The Jetson boards were designed for one thing, A.I. It’s not really a general purpose SBC

The IoT boards to date have all suffered from the same trade-offs, where the makers must strike a balance between CPU, GPU and RAM. This doesn't always work out the way the architects planned. Single-board computers like the Asus Tinkerboard or NanoPI Fire 3 are examples of this. The Tinkerboard is probably the fastest 32-bit ARM processor I have ever experienced, but the way the different parts were integrated, coupled with poorly planned power distribution (as in electrical power), resulted in terrible stability problems. The NanoPI Fire 3 is also a paper tiger that, with the best of intentions, is useless for anything but Android.

The Jetson Nano is more expensive, but NVidia has managed to balance each part in a way that works well; at least for artificial intelligence work. I must underline that the CPU was a big disappointment for me personally. I misread the specs before I ordered and expected it to be fitted with a quad-core A72 CPU @ 2 GHz. Instead it turned out to use the A57 running @ 1.43 GHz. This might sound unimportant, but the A72 is almost twice as fast (while costing only 50 cents more). Why NVidia decided on that model of CPU remains a complete mystery.

In my view, this sends a pretty clear message that NVidia has no interest in competing with other mainstream boards, but instead aims solely at the artificial intelligence market.

Artificial intelligence

If working with A.I is something you find interesting, then you are in luck; this is the primary purpose of the Jetson Nano. Training an A.I model on a Raspberry PI v4 or ODroid N2 is also possible, but it would be a terrible and frustrating experience, because those boards are not designed for it. What those 128 CUDA cores can chew through in two hours would take a Raspberry PI a week to process.

If we look away from the GPU, single-board computers like the ODroid N2 are better in every way. The power distribution on the SoC is better, the boot options are better, the Bluetooth and IR options are better, the network and disk controllers are more powerful, the RAM runs at higher clock speeds, and last but not least, the big.LITTLE CPU architecture ODroid devices are known for simply floors the Jetson. The Jetson literally has nothing to compete with if we ignore its GPU.

So if you have zero interest in A.I, CUDA or TensorFlow, then I urge you to look elsewhere. To put the CPU into perspective: the model used on the Jetson is only slightly faster than the Raspberry PI 3b's. The extra memory helps when browsing complex HTML5 websites, but not enough to justify the price. By comparison, the ODroid XU4 retails at $40 and will slice through those websites like they were nothing. So if you are looking for a multi-purpose board, the Jetson Nano is not it.

Upstream Ubuntu

Most IoT boards ship with a trimmed-down Linux distribution especially adapted for that board. This is true for more or less all the IoT boards I have tested over the last 18 months. The reason companies ship these slim and optimized setups is simply that running a full Linux distribution, like we have on x86, demands too much of the CPU (read: there would be little left for your code). These are micro computers, after all.


The Jetson Ubuntu distro comes pre-loaded with drivers and various A.I libraries

The difference between SBCs like the PI and ODroid (or whatnot) and the Jetson Nano is that the Jetson comes with a full, upstream Ubuntu-based distribution. In other words, the same distribution you would install on your desktop PC is somehow expected to perform miracles on a device only marginally faster than a Raspberry PI 3b.
I must say I am somewhat puzzled by NVidia's choice of Linux for this board, but against all odds the board does perform. The desktop is responsive and smooth despite the payload involved, most likely because the renderer and layout engine tap into the GPU cores. But optimal? Hardly.

Other distributions?

One thing that must be stressed about the Jetson Nano is that NVidia is not shipping a system where everything is wide open. Just as the Raspberry PI's VC (VideoCore) GPUs are proprietary and require closed-source drivers, so does the Jetson. And it's not just a question of drivers in this case, but of a customized kernel.

The board is called the "NVidia Jetson Nano developer kit", and NVidia sells it exclusively for A.I work under their standards. For this reason, there are no alternative distributions or flavours. You are expected to work with Ubuntu, end of story.

Having said that, the way to get around this is simply to work backwards, and strip the official disk image of services and packages you don't need. A full upstream Ubuntu install means you have two browsers, an email and news client, music players, movie players, photo software, a full office suite, and more Python libraries than you will ever need. With a little bit of cleaning, you can trim the distro down and make it suitable for other tasks.
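As a hedged starting point, something like the sketch below works; the package names are guesses on my part, so check what `dpkg -l` actually reports on your image before purging anything:

```shell
# Candidate packages to strip from the stock Jetson Ubuntu image.
# These names are examples only -- verify with `dpkg -l` first.
REMOVE="thunderbird rhythmbox shotwell libreoffice-common"

# Dry run: print the commands instead of running them.
for pkg in $REMOVE; do
  echo "sudo apt-get purge -y $pkg"
done
echo "sudo apt-get autoremove -y"
```

Once the printed list looks right, drop the echo (or pipe the output through `sh`) to actually free the space.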

Practical uses

No single-board computer is identical to another. There are different CPUs, GPUs, IO controllers and RAM, all of which help define what a board is good at.
The Jetson Nano is a powerful board, but only within the sphere it was designed for. The moment you move away from artificial intelligence, the board is instantly surpassed by cheaper models.


51 fps running Quake [JavaScript] is very good. But the general HTML5 experience is not optimal, due to the CPU not being the most powerful

As such, the only practical use this board has is running A.I software and models. Unless NVidia decides to do a special build of Chrome where those cores are used to speed up HTML5 rendering, I can't even recommend this for casual browsing. It is an excellent SoC for gaming, though. It's probably the only IoT board that delivers 35–50 fps when emulating PlayStation and Sega Saturn games (which are very demanding). I also got 54 fps running Quake 3 compiled to JavaScript, which is pretty cool.

But if gaming is your sole interest, $100 will buy you a much better x86 board to play with. In one of my personal projects (Quartex Media Desktop), which is a cluster "supercomputer" consisting of many IoT boards connected through a switch, the Jetson was supposed to act as the master board. In a cluster design you typically have X number of slaves which run headless (no desktop or graphical output at all), and a master unit that deals with the display / desktop. The slave boards don't need much in terms of GPU since they won't run graphical tasks (so the GPU is often disabled), which puts enormous pressure on the master to represent everything graphically.

In my case the front-end is written in JavaScript and uses Chrome to render a very complex display. Finding an SBC with enough power to render modern and complex HTML5 has not been easy. I was shocked that the Jetson Nano had poorer performance than devices costing half its asking price.

It could be throttling based on heat, because the board gets really hot. It has a massive heatsink, much like the ODroid N2, but I have a sneaking suspicion that active cooling is going to be needed to get the best out of this puppy. At worst I got 17 fps when running Quake 3 [JavaScript], but I think there was an issue with a driver; after an update was released, the frame rate went back to 50+ fps. Extremely annoying, because you feel you can't trust the SoC.

Final verdict

The NVidia Jetson Nano is an impressive IoT board, and when it comes to GPU power and artificial intelligence, those 128 CUDA cores are a massive advantage. But computing is not just about fancy A.I models, and this is ultimately where the Jetson comes up short. When something as simple as browsing modern websites can unexpectedly bring the SoC to a grinding halt, I cannot recommend the system for anything but what it was designed to do: artificial intelligence.

There are also some strange design decisions at the hardware level. On most boards the USB sockets have separate data lines and power routes, but on the Jetson the four USB 3.0 sockets share a single data line. In other words, whatever you connect to the device will affect data throughput.

NVidia also made the same mistake Asus did, namely powering the USB ports and CPU from the same route. It's not quite as bad as on the Tinkerboard, and NVidia made sure the board has two power sockets; should undervolting become an issue, you can just plug in the secondary 5V barrel connector (disabled by default).

The lack of eMMC support or a SATA interface is also curious. Hosting a full Ubuntu install on a humble SD-card, with an active swapfile, is a recipe for disaster.

So is it worth the $99 asking price?

If you work with artificial intelligence or GPU based computing, then yes, the board will help you prototype your products. But if you expect to do anything ordinary, like running demanding HTML5 apps or hosting CPU intensive system services, I find myself struggling to say yes.

It’s a cool board, and it also makes one hell of a games machine, but the hardware is already outdated. If you want a general purpose single board computer, you are much better off buying the Raspberry PI 4 or the ODroid N2; both are cheaper and deliver much better processing power. Obviously the Jetson’s GPU is hard to match.

I will give the board a score of 6 out of 10, purely because it is brilliant for A.I. and compute tasks. This might change as more distros become available; a full Ubuntu install is one hell of a load on the system, after all.

Quartex “Cloud Ripper” hardware

November 10, 2019 Leave a comment

For close to a year now I have been busy on a very exciting project, namely my own cloud system. While I have written about this project quite a bit these past months, mostly focusing on the software aspect, not much has been said about the hardware.


Quartex “Cloud Ripper” running neatly on my home-office desk

So let’s have a look at Cloud Ripper, the official hardware setup for Quartex Media Desktop.

Tiny footprint, maximum power

Despite its complexity, the Quartex Media Desktop architecture is surprisingly lightweight. The services that make up the baseline system (read: essential services) barely consume 40 megabytes of RAM per instance (!). And while there is a lot of activity going on between these services, most of that activity is message-dispatching, and sending messages costs practically nothing in CPU and network terms. This will naturally change the moment you run your cloud as a public service, or set up the system in an office environment for a team. The more users, the more signals are shipped between the processes; but with the exception of reading and writing large files, messages are delivered practically instantaneously and hardly use any CPU time.
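To make the message-dispatching claim concrete, here is a minimal sketch of the pattern in JavaScript. This is not the actual Quartex code; the registry, function names and message format are all made up for illustration.

```javascript
// Minimal message-dispatch sketch (hypothetical; not the actual Quartex code).
// Dispatching a message is just a hashtable lookup plus a function call,
// which is why an idle cluster barely consumes any CPU time.

const handlers = new Map();

// A service registers interest in a message by name.
function register(name, fn) {
  if (!handlers.has(name)) handlers.set(name, []);
  handlers.get(name).push(fn);
}

// Dispatch a message to every registered handler; returns the delivery count.
function dispatch(name, payload) {
  const list = handlers.get(name) || [];
  for (const fn of list) fn(payload);
  return list.length;
}

// Example: a file-service announcing that a write has completed.
register('file.write.done', (msg) => console.log('written:', msg.path));
const delivered = dispatch('file.write.done', { path: '/tmp/demo.txt' });
console.log('delivered to', delivered, 'handler(s)');
```

In the real system the payload would travel over a websocket between the boards, but the per-message cost on each node is still essentially this lookup-and-call.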


Quartex Media Desktop is based on a clustered micro-service architecture

One of the reasons I compile my code to JavaScript (Quartex Media Desktop is written from the ground up in Object Pascal, which is compiled to JavaScript) has to do with the speed and universality of Node.js services. As you might know, Node.js is powered by the Google V8 runtime, which means the code is first converted to bytecode and further compiled into highly optimized machine-code by V8’s optimizing JIT. When coded right, such JavaScript based services execute just as fast as those implemented in a native language. There simply are no perks to be gained from using a native language for this type of work. There are however plenty of perks from using Node.js as a service-host:

  • Node.js delivers the exact same behavior no matter what hardware or operating-system you are booting up from. In our case we use a minimal Linux setup with just enough infrastructure to run our services. But you can use any OS that supports Node.js. I actually have it installed on my Android based Smart-TV (!)
  • We can literally copy our services between different machines and operating systems without recompiling a line of code. So we don’t need to maintain several versions of the same software for different systems.
  • We can generate scripts “on the fly”, physically ship the code over the network, and execute it on any of the machines in our cluster. While possible to do with native code, it’s not very practical and would raise some major security concerns.
  • Node.js supports WebAssembly, so you can use the Elements Compiler from RemObjects to write service modules that execute blazingly fast yet remain platform and chipset independent.

The Cloud-Ripper cube

The principal design goal when I started the project was that it should be a distributed system. This means that instead of having one large service that does everything (read: a typical “native” monolithic design), we operate with a micro-service cluster design: services that run on separate SBCs (single board computers). The idea is to spread the payload over multiple micro-computers that combined become more than the sum of their parts.


Cloud Ripper – Based on the Pico 5H case and fitted with 5 x ODroid XU4 SBC’s

So instead of buying a single, dedicated x86 PC to host Quartex Media Desktop, you can buy cheap, off-the-shelf, easily available single-board computers and daisy chain them together. Instead of spending $800 (just to pin a number) on x86 hardware, you can pick up $400 worth of cheap ARM boards and get better network throughput and identical processing power (*). In fact, since Node.js is universal, you can mix and match between x86, ARM, MIPS and PPC as you see fit. Got an older PPC Mac-Mini collecting dust? Install Linux on it and get a few extra years out of these old gems.

(*) A single XU4 is hopelessly underpowered compared to an Intel i5 or i7 based PC. But in a cluster design there are more factors than just raw computational power. Each board has 8 CPU cores, bringing the total number of cores to 40. You also get 5 ARM Mali-T628 MP6 GPUs running at 533MHz. Only one of these will be used to render the HTML5 display, leaving 4 GPUs available for video processing, machine learning or compute tasks. Obviously these GPUs won’t hold a candle to even a mid-range graphics card, but the fact that we can use these chips for audio, video and computation tasks makes the system incredibly versatile.

Another design goal was to implement a UDP based Zero-Configuration mechanism. This means that the services will find and register with the core (read: master service) automatically, providing the machines are all connected to the same router or switch.


Put together your own supercomputer for less than $500

The first “official” hardware setup is a cluster based on 5 cheap ARM boards, namely the ODroid XU4. The entire setup fits inside a Pico Cube, a special case designed to house this particular model of single board computer. Pico offers several different designs, ranging from 3 boards to a 20 board super-cluster. You are not limited to ODroid XU4 boards if you prefer something else. I picked the XU4 boards because they represent the lowest possible specs you can run the Quartex Media Desktop on. While the services themselves require very little, the master board (the board that runs the QTXCore.js service) is also in charge of rendering the HTML5 display. And having tested a plethora of boards, the ODroid XU4 was the only model that could render the desktop properly (at that low a price range).

Note: If you are thinking about using a Raspberry PI 3B (or older) as the master SBC, you can pretty much forget it. The media desktop is a piece of very complex HTML5, and anything below an ODroid XU4 will only give you a terrible experience (!). You can use smaller boards as slaves, meaning that they can host one of the services, but the master should preferably be an ODroid XU4 or better. The ODroid N2 [with 4GB RAM] is a much better candidate than a Raspberry PI v4. A Jetson Nano is an even better option due to its extremely powerful GPU.

Booting into the desktop

One of the things that confuse people when they read about the desktop project is how it’s possible to boot into the desktop itself and use Quartex Media Desktop as a ChromeOS alternative.

How can a “cloud platform” be used as a desktop alternative? Don’t you need access to the internet at all times? If it’s a server based system, how then can we boot into it? Don’t we need a second PC with a browser to show the desktop?


Accessing the desktop like a “web-page” from a normal Linux setup

To make a long story short: the “master” in our cluster architecture (read: the single-board computer defined as the boss) is setup to boot into a Chrome browser display under “kiosk mode”. When you start Chrome in kiosk mode, this removes all traces of the ordinary browser experience. There will be no toolbars, no URL field, no keyboard shortcuts, no right-click popup menus etc. It simply starts in full-screen and whatever HTML5 you load, has complete control over the display.

What I have done is to set up a minimal Linux boot sequence. It contains just enough Linux to run Chrome; it has all the drivers etc. for the device, but instead of starting the ordinary Linux desktop (X or Wayland), we start Chrome in kiosk mode.
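To make that concrete, a minimal kiosk launch can be a single line in whatever autostart mechanism the image uses. The Openbox autostart path and the port 8090 below are assumptions for illustration; use whatever port your QTXCore.js service actually listens on.

```
# /etc/xdg/openbox/autostart (hypothetical minimal setup)
chromium-browser --kiosk --noerrdialogs --disable-session-crashed-bubble http://127.0.0.1:8090/
```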


Booting into the same desktop through Chrome in Kiosk Mode. In this mode, no Linux desktop is required. The Linux boot sequence is altered to jump straight into Chrome

Chrome is started to load from 127.0.0.1 (this is a special address that always means “this machine”), which is where our QTXCore.js service resides with its own HTTP/S and Websocket servers. The client (HTML5 part) is loaded in under a second from the core, and the experience is more or less identical to starting your ChromeBook or NAS box. Most modern NAS (network attached storage) devices are much more than file-servers today. NAS boxes like those from Asustor Inc have HDMI out, ship with a remote control, and are designed to act as media centers. So you connect the NAS directly to your TV, and can watch movies and listen to music without any manual conversion etc.

In short, you can set up Quartex Media Desktop to do the exact same thing as ChromeOS, booting straight into the web based desktop environment; the same desktop environment that is available over the network. So you are not limited to visiting your Cloud-Ripper machine via a browser from another computer; nor are you limited to just using a dedicated machine. You can set up the system as you see fit.

Why should I assemble a Cloud-Ripper?

Getting a Cloud-Ripper is not forced on anyone. You can put together whatever spare hardware you have (or just run it locally under Windows). Since the services are extremely lightweight, any x86 PC will do. If you invest in an ODroid N2 board ($80 range), you can install all the services on that if you like. So if you have no interest in clustering or building your own supercomputer, then any PC, laptop or IoT single-board computer will do, provided it yields equal or greater power than the XU4 (!)

What you will experience with a dedicated cluster, whether or not you put the boards in a nice cube, is that you get excellent performance for very little money. It is quite amazing what $200 can buy you in 2019. And when you daisy chain 5 ODroid XU4 boards together on a switch, those 5 cheap boards will deliver the same serving power as an x86 setup costing twice as much.


The NVidia Jetson Nano SBC, one of the fastest boards available at under $100

Pico offers 3 different packages. The most expensive option is the pre-assembled cube. This is for some reason priced at $750, which is completely absurd. If you can operate a screwdriver, you can assemble the cube yourself in less than an hour. So the starter-kit case, which costs $259, is more than enough.

Next, you can buy the XU4 boards directly from Hardkernel for $40 a piece, which will set you back $200. If you order the Pico 5H case as a kit, that brings the sub-total up to $459. But that price-tag includes everything you need except SD-cards. The kit contains the power-supply, the electrical wiring, a fast gigabit ethernet switch [built into the cube], active cooling, network cables and power cables. You don’t need more than 8GB SD-cards, which cost practically nothing these days.

Note: The Quartex Media Desktop “file-service” should have a dedicated disk. I bought a 256GB SSD with a USB 3.0 interface, but you can just use a vanilla USB stick to store user-account data + user files.

As a bonus, such a setup is easy to recycle should you want to do something else later. Perhaps you want to learn more about Kubernetes? What about a docker-swarm? A Freepascal build-server perhaps? Why not install FreeNAS, Plex, and a good backup solution? You can set this up as you can afford. If 5 x ODroid XU4 is too much, then get 3 of them instead + the Pico 3H case.

So should Quartex Media Desktop not be for you, or you want to do something else entirely — then having 5 ODroid XU4 boards around the house is not a bad thing.

Oh, and if you want some serious firepower, then order the Pico 5H kit for the NVidia Jetson Nano boards. Graphically those boards are beyond any other SoC on the market (in their price range). But as a consequence, the Jetson Nano starts at $99, so for a full kit you will end up paying $500 for the boards alone. But man, those are the proverbial Ferrari of IoT.

Hydra, what’s the big deal anyway?

October 29, 2019 7 comments

RemObjects Hydra is a product I have used for years in concert with Delphi, and like most developers that come into contact with RemObjects products, once the full scope of the components hits you, you never want to go back to not using Hydra in your applications.

Note: It’s easy to dismiss Hydra as a “Delphi product”, but Hydra for .Net and Java does the exact same thing, namely let you mix and match modules from different languages in your programs. So if you are a C# developer looking for ways to incorporate Java, Delphi, Elements or Freepascal components in your application, then keep reading.

But let’s start with what Hydra can do for Delphi developers.

What is Hydra anyways?

Hydra is a component package for Delphi, Freepascal, .Net and Java that takes plugins to a whole new level. Now bear with me for a second, because these plugins are in a completely different league from anything you have used in the past.

In short, Hydra allows you to wrap code and components from other languages, and use them from Delphi or Lazarus. There are thousands of really amazing components for the .Net and Java platforms, and Hydra allows you to compile those into modules (or “plugins” if you prefer that); modules that can then be used in your applications as if they were native components.


Hydra, here using a C# component in a Delphi application

But it doesn’t stop there; you can also mix VCL and FMX modules in the same application. This is extremely powerful, since it offers a clear path to modernizing your codebase gradually rather than doing a time consuming and costly re-write.

So if you want to move your aging VCL codebase to Firemonkey, but the cost of having to re-write all your forms and business logic for FMX would break your budget, that’s where Hydra gives you a second option: namely that you can continue to use your VCL code from FMX and refactor the application at your own tempo and with minimal financial impact.

The best of all worlds

Not long ago RemObjects added support for Lazarus (Freepascal) to the mix, which once again opens a whole new ecosystem that Delphi, C# and Java developers can benefit from. Delphi has a lot of really cool components, but Lazarus has components that are not always available for Delphi. There are some really good developers in the Freepascal community, and you will find hundreds of components and classes (if not thousands) that are open-source. For example, Lazarus has a branch of SynEdit that is much more evolved and polished than the fork available for Delphi, and with Hydra you can compile that into a module / plugin and use it in your Delphi applications.

This is also true for Java and C# developers. Some of the components available for native languages might not have similar functionality in the .Net world, and by using Hydra you can tap into the wealth that native languages have to offer.

As a Delphi or Freepascal developer, perhaps you have seen some of the fancy grids C# and Java coders enjoy? Developer Express has some of the coolest components available for any platform, but their focus is more on .Net these days than Delphi. They do maintain the control packages they have, but compared to the amount of development done for C#, their Delphi offerings are abysmal. So with Hydra you can tap into the .Net side of things and use the latest components and libraries in your Delphi applications.

Financial savings

One of the coolest features of Hydra is that you can use it across Delphi versions. This has helped me offset the price-tag of updating to the latest Delphi.

It’s easy to forget that whenever you update Delphi, you also need to update the components you have bought. This was one of the reasons I was reluctant to upgrade my Delphi license until Embarcadero released Delphi 10.2: I had thousands of dollars invested in components, and updating all my licenses would have cost a small fortune.

So to get around this, I put the components into a Hydra module and compiled that using my older Delphi. And then I simply used those modules from my new Delphi installation. This way I was able to cut costs by thousands of dollars and still enjoy the latest Delphi.


Using Firemonkey controls under VCL is easy with Hydra

A couple of years back I also took the time to wrap a ton of older components that work fine but are no longer maintained or sold. I used an older version of Delphi to get these components into a Hydra module, and I can now use those with Delphi 10.3 (!). In my case there was a component-set for working closely with Active Directory that I have used in a customer’s project (much faster than having to go the route via SQL). The company that made these doesn’t exist anymore, and I have no source-code for the components.

The only way I could have used these without Hydra would be to compile them into a .dll file and painstakingly export every single method (or use COM+ to cross the 32-bit / 64-bit barrier), which would have taken me a week since we are talking about a large body of code. With Hydra I was able to wrap the whole thing in less than an hour.

I’m not advocating that people stop updating their components. But I am very thankful for the opportunity to delay having to update my entire component stack just to enjoy a modern version of Delphi.

Hydra gives me that opportunity, which means I can upgrade when my wallet allows it.

Building better applications

There is also another side to Hydra, namely that it allows you to design applications in a modular way. If you have the luxury of starting a brand new project and using Hydra from day one, you can isolate each part of your application as a module, avoiding the trap of monolithic applications.


Hydra for .Net allows you to use Delphi, Java and FPC modules under C#

This way of working has a great impact on how you maintain your software, and consequently how you issue hotfixes and updates. If you have isolated each key part of your application as separate modules, you don’t need to ship a full build every time.

This also safeguards you from having all your eggs in one basket. If you have isolated each form (for example) as a separate module, there is nothing stopping you from rewriting some of these forms in another language, or crossing the VCL and FMX barrier. You have to admit that being able to use the latest components from Developer Express is pretty cool. There is not a shadow of a doubt that Developer Express makes the best damn components around for any platform. There are many grids for Delphi, but they can’t hold a candle to the latest and greatest from Developer Express.

Why can’t I just use packages?

If you are thinking “hey, this sounds exactly like packages, why should I buy Hydra when packages do the exact same thing?“, the answer is: that’s not how packages work in Delphi.

Delphi packages are cool, but they are also severely limited. One of the reasons you have to update your components whenever you buy a newer version of Delphi, is because packages are not backwards compatible.


Delphi packages are great, but severely limited

A Delphi package must be compiled with the same RTL as the host (your program), and version information and RTTI must match. This is because packages use the same RTL and more importantly, the same memory manager.

Hydra modules are not packages. They are clean and lean library files (*.dll files) that include whatever RTL you compiled them with. In other words, you can safely load a Hydra module compiled with Delphi 7 into a Delphi 10.3 application without having to re-compile.

Once you start to work with Hydra, you gradually build up modules of functionality that you can recycle in the future. In many ways Hydra is a whole new take on components and RAD. This is how Delphi packages and libraries should have been.

Without saying anything bad about Delphi, because Delphi is a system that I love very much: having to update your entire component stack just to use the latest Delphi is sadly one of the factors that has led developers to abandon the platform. If you have USD 10,000 in dependencies, having to pay that as well as buying Delphi can be difficult to justify; especially when comparing with other languages and ecosystems.

For me, Hydra has been a tremendous boon for Delphi. It has allowed me to keep current with Delphi and all its many new features, without losing the money I have already invested in component packages.

If you are looking for something to bring your product to the next level, then I urge you to spend a few hours with Hydra. The documentation is exceptional, the features and benefits are outstanding, and you will wonder how you ever managed to work without it.

External resources

Disclaimer: I am not a salesman by any stretch of the imagination. I realize that promoting a product made by the company you work for might come across as a sales pitch; but that’s just it: I started to work for RemObjects for a reason. And that reason is that I have used their products since they came on the market. I have worked with these components long before I started working at RemObjects.

ARM Linux Services with Oxygene and Elements

October 14, 2019 Leave a comment

Linux is one of those systems that just appeals to me out of the box. I work with Windows on a daily basis, but at this point there is really nothing in the way of me jumping ship altogether. I mean, whenever I need something that is Windows specific, I can just fire up a virtual-machine and get the job done there.

The only thing that is stopping me from going “all in” with Linux (and believe me, I have tried) is that finding proper documentation for Linux, written with Windows converts in mind, is actually a challenge in itself. Most tutorials are either meant for non-developers, like how to install a program via Synaptic and so on; which is brilliant if you have no experience with Linux whatsoever. But finding articles that aim to help a Windows developer get up to speed on Linux, that’s the tricky bit.

Top-Left bash window shows the output of my Elements compiled micro-service

One of the features I wanted to learn about was how to run a program as a service on Linux. Under Windows this is quite easy. You have the service manager that gives you a good overview of registered services. And programmatically, a service is ultimately just a normal WinAPI program that supports the service-API messages. Writing services in either Object-Pascal or C# is pretty straight-forward. I also do a lot of service work via Quartex Pascal (my own toolchain) that compiles to JavaScript. Node.js is actually a very capable service host once you understand the infrastructure.

Writing Daemons with Oxygene and Elements

Since the Elements compiler generates code for ARM Linux, learning how to get a service registered and started on boot is something that I think many developers will be interested in. It was one of the first questions I had when I started looking at Linux, and it took a while to find a clean cut answer.

In this little article I will show you how I went about this, but please keep in mind that Linux never has “one way” of doing something. Part of the strength that Linux delivers is that you can configure and customize the system completely, from kernel to desktop. You literally have different service sub-systems to pick from, as well as different window-managers, desktop systems (e.g. Wayland or X) and even keyring implementations. This is what makes Linux so confusing when coming from a mono-culture like Microsoft Windows.

As for hardware, I’m using an ODroid N2, which is a very powerful ARM based SBC (single board computer). You can use more or less any ARM device with Elements, providing the Linux distribution is based on Debian. So a Raspberry PI v4 with Ubuntu or Lubuntu will work fine. I’m using the ODroid N2 “full disk image” with Ubuntu Mate, so nothing out of the ordinary.

To make something clear off the bat: a Linux service (called a daemon, from the ancient Greek word for “helper” or “informer”) is just an ordinary shell application. You don’t have to do anything in particular in terms of code. Once your service is registered, you can start and stop it with the systemctl shell command like any other Linux service.

Note: There are also fork() mechanisms (cloning processes), but those are out of scope for this little post.

Service manifest

Before we can get your binary registered as a service, we need to write a service manifest file. This is just a normal text-file in INI format that defines how you want your service to run. Here is an example of such a file (the User value is a placeholder):

[Unit]
Description=Elements Service
After=network.target

[Service]
User=username
ExecStart=/usr/bin /usr/bin/ElementsService.exe

[Install]
WantedBy=multi-user.target

Before you save this file, remember to replace the username (user=) and the name of your executable.

Note: The ExecStart property can be defined in 3 ways:

  • Direct path to the executable
  • Current working directory + path to executable (like above)
  • Current working directory + path to executable + parameters

You can read more about each property here: systemd service info


For Debian based distributions (Ubuntu branches) the most common service-host (or process manager) is called systemd. I am not even going to pretend to know the differences between systemd and the older init system; there are fierce debates in the Linux community around these two alternatives. But unless you are a Linux C developer that likes to roll your own kernel on weekends, it’s not really relevant for our goals in this post. Our task here is to write useful services and make them run side-by-side with other services.

With the service-manifest file done, we need to copy the manifest to a place where systemd can find it. So start by saving the manifest file as “elements.service” here:

/etc/systemd/system/elements.service

As you probably guessed from the ExecStart property, your service executable goes in:

/usr/bin/

If all went well you can now start your service from the command-line:

systemctl start elements

And you can stop the service with:

systemctl stop elements

Resident services

Starting and stopping a service is all good and well, but that doesn’t mean it will automatically start when you reboot your Linux box. In order to make the service resident (persisted, so Linux remembers to fire it up on boot), you have to enable the service:

systemctl enable elements

If you want to stop the service from starting on boot, just disable it:

systemctl disable elements

Now there is a ton of things you can tweak and change in the service-manifest file. For example, do you want Linux to restart your service if it crashes? How many times should Linux attempt to bring the service back up? Should it only bring it back up if the exit-code is zero?

If you want Linux to always restart a service if it stops (regardless of reason), you set the following flag in the service-manifest:

Restart=always

If you want Linux to only restart the service if it fails, meaning that the exit-code of the application is <> 0, then you use this value instead:

Restart=on-failure

You can also set the service to only start after some other service, for example if your service has networking as a criteria (which is what we set in the service-manifest above), or a database engine.

There is a ton of different settings you can apply to the service-manifest, but listing them all here would fill a small book. Better to just check the documentation and experiment a bit. So check the link and pick the ones that make sense for your particular service.


You should be very careful with how you define restart options. If something goes wrong and your service crashes on start, Linux will keep restarting it en masse. Automatic restart creates a loop, and it can be wise to make sure it doesn’t get stuck. I would set restart to “on-failure” exclusively, so that your service has a chance to exit gracefully.
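Since Restart=on-failure keys off the exit-code, it helps to be deliberate about exit-codes inside the service itself. Below is a hedged Node.js sketch (the names are mine, not from any real service) of a daemon that exits with 0 on a clean SIGTERM stop, so systemd leaves it stopped, and 1 on a fatal error, so systemd brings it back up:

```javascript
// Hypothetical daemon skeleton illustrating exit-codes under Restart=on-failure.

let server = null; // placeholder for whatever resource the service holds

// Map the stop reason to an exit code: 0 = clean stop (no restart under
// on-failure), anything else = failure (systemd restarts the service).
function shutdown(reason) {
  const code = (reason === 'SIGTERM' || reason === 'SIGINT') ? 0 : 1;
  if (server && server.close) server.close();
  return code;
}

// systemd sends SIGTERM on `systemctl stop`; exit cleanly so we stay stopped.
process.on('SIGTERM', () => process.exit(shutdown('SIGTERM')));

// A crash should exit non-zero so systemd knows to bring us back up.
process.on('uncaughtException', (err) => {
  console.error('fatal:', err);
  process.exit(shutdown('error'));
});

console.log('daemon running, pid', process.pid);
```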

Happy coding! And a special thanks to Benjamin Morel for his helpful posts.



.NetRocks, you made my day!

October 11, 2019 4 comments

A popular website for .Net developers is called dot-net-rocks. This is an interesting site that has been going for a while now; well worth the visit if you do work with the .Net framework via RemObjects Elements, VS or Mono.

Now it turns out that the guys over at dot-net-rocks just did an episode on their podcast where they open by labeling me as a “raving lunatic” (I clearly have my moments), which I find absolutely hilarious, but not for the same reasons as them.

Long story short: They are doing a podcast on how to migrate legacy Delphi applications to C#, and in that context they somehow tracked down an article I posted way back in 2016, which was meant as a satire piece. Now don’t get me wrong, there are serious points in the article, like how the .Net framework was modeled on the Delphi VCL, and how the concepts around the CLR and JIT were researched at Borland; but the tone of the whole thing, the “larger than life” claims etc., was meant to demonstrate just how some .Net developers behave when faced with alternative eco-systems. Having managed 16+ usergroups for Delphi, C# and JavaScript (a total of six languages) on Facebook for close to 15 years, as well as having worked for Embarcadero, the maker of Delphi, I speak from experience.

It might be news to these guys that large companies around Europe are still using Delphi, modern Delphi, and that Object Pascal as a language scores well on the TIOBE index of popular programming languages. And no amount of echo-chamber mentality is going to change that fact. Heck, as late as 2018, The Walt Disney Company wanted to replace C# with Delphi, because it turns out that bytecodes and embedded tech are not the best combination (CPU spikes when the GC kicks in, no real-time interrupt handling possible, GPIO delays, the list goes on).

I mean, the post I made back in 2016 is such obvious, low-hanging fruit for a show their size to pound on. You have this massive show taking on a single, albeit ranting (and probably a bit of a lunatic if I don’t get my coffee), coder’s post, underlining in the process how little they know about the Object Pascal community at large. They just demonstrated my point in bold, italic and underline 😀

Look before you shoot

DotNetRocks is either oblivious that Delphi still has millions of users around the world, or that Pascal is in fact available for .Net (which is a bit worrying, since .Net is supposed to be their game). The alternative is that the facts I listed hit a little too close to home. I’ll leave it up to the reader to decide. Microsoft has lost at least 10 universities around Europe to Delphi in 2018 that I know of, two of them Norwegian, where I was personally involved in the license sales. While only speculation, I do find the timing of their podcast, and the focus on me in particular, to be “curious”.

And for the record, the most obvious solution when faced with “that legacy Delphi project” is to just go and buy a modern version of Delphi. DotNetRocks delivered a perfect example of the very arrogance my 2016 post was designed to convey; namely that “brogrammers” often act like Delphi 7 was the last Delphi. They also resorted to lies to sell their points: I never said that Anders was dogged for creating Delphi. Quite the opposite. I simply underlined that by ridiculing Delphi with one hand, and praising its author with the other, you are indirectly (and paradoxically) invalidating half his career. Anders is an awesome developer, but why exclude how he evolved his skills? Of course Anders’ products will have his architectural signature on them.

Not once did they mention Embarcadero or the fact that Delphi has been aggressively developed since Borland kicked the bucket. Probably hoping that undermining the messenger will somehow invalidate the message.


Porting Delphi to C# manually? OK… why not install Elements and just compile it into an assembly? You don't even have to leave Visual Studio

Also, what an odd podcast for professional developers to run with. I mean, who the hell converts a Delphi project to C# manually? It's like listening to a graphics artist who doesn't know that Photoshop and Illustrator are the de-facto tools to use. How is that even possible? A website dedicated to .Net, yet with no insight into the languages that run on the CLR? Wow.

If you want to port something from Delphi to .Net, you don't sit down and manually convert stuff. You use proper tools, like Elements from RemObjects. This gives you Object Pascal for .Net (so a lot of code will compile just fine with only minor changes). Elements also ships with source-conversion tools, so once you have your code running under Oxygene (the name of the Elements Object Pascal dialect), you can either just use the assemblies, or convert the Pascal code to C# through a tool called the Oxidizer.


The most obvious solution is to just upgrade to a Delphi version from this century

The other solution is to use Hydra, also a RemObjects product. They can then compile the Delphi code into a library (including visual parts like forms and frames), and simply use that as any other assembly from within C#. This allows you to gradually phase out older parts without breaking the product. You can also use C# assemblies from Delphi with Hydra.

So by all means, call me what you like. You have only proved my point so far. You clearly have zero insight into the predominant Object Pascal ecosystems, you clearly don't know the tools developers use to interop between archetypical and contextual languages – and instead of fact-checking some of the points I made, dry humor notwithstanding, you just reacted like brogrammers do.

Well, it's been weeks since I laughed this hard 😀 You really need to check before you pick someone to verbally abuse on the first date, because you might just bite yourself in the arse, hehe.



Quartex Media Desktop, new compiler and general progress

September 11, 2019 Leave a comment

It's been a few weeks since my last update on the project. The reason I don't blog that often about Quartex Media Desktop (QTXMD) is that the official user group has grown to 2000+ members. So it's easier for me to post developer updates directly to that audience rather than writing articles about it.


Quartex Media Desktop ~ a complete environment that runs on every device

If you haven’t bothered digging into the project, let me try to sum it up for you quickly.

Quick recap on Quartex Media Desktop

To understand what makes this project special, first consider the relationship between Microsoft Windows and a desktop program. The operating system, be it Windows, Linux or OS X, provides an infrastructure that makes complex applications possible. The operating system offers functions and services that programs can rely on.

The most obvious being:

  • A filesystem and the ability to save and load data
  • A windowing toolkit so programs can be displayed and have a UI
  • A message system so programs can communicate with the OS
  • A service stack that takes care of background tasks
  • Authorization and identity management (security)

I have just described what the Quartex Media Desktop is all about. The goal is simple:

to provide for JavaScript what Windows and OS X provide for ordinary programs.

Just stop and think about this. Every "web application" you have ever seen has lacked these fundamental features. Sure, there are libraries that give you a windowing environment for JavaScript, like Embarcadero Sencha; but I'm talking about something a bit more elaborate. Creating windows and buttons is easy, but what about ownership? A runtime environment has to keep track of the resources a program allocates, and make sure that security applies at every step.
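To make the ownership point concrete, here is a minimal JavaScript sketch of the idea: a registry that records which program owns which resource, so the environment can release everything when a program terminates. The names are mine for illustration, not from the actual QTX codebase:

```javascript
// Hypothetical sketch: a desktop runtime tracking resource ownership per program.
class ResourceRegistry {
  constructor() {
    this.owners = new Map(); // program id -> Set of resource handles
  }

  // Record that a program now owns a handle (a window, a file, a socket...).
  allocate(pid, handle) {
    if (!this.owners.has(pid)) this.owners.set(pid, new Set());
    this.owners.get(pid).add(handle);
  }

  // When a program exits, the environment disposes everything it owned.
  releaseAll(pid, dispose) {
    const handles = this.owners.get(pid) || new Set();
    handles.forEach(dispose);
    this.owners.delete(pid);
    return handles.size;
  }
}

const registry = new ResourceRegistry();
registry.allocate('app1', 'window:1');
registry.allocate('app1', 'file:/tmp/x');
const freed = registry.releaseAll('app1', () => {});
console.log(freed); // 2
```

A real environment would of course also enforce security checks on every `allocate` call, but even this tiny registry shows the difference between "a library that draws windows" and "a runtime that owns them".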

Target audience and purpose

Take a second and think about how many services you use that have a web interface. In your house you probably have a router, and all routers can be administered via the browser. Sadly, most routers operate with a crude design that leaves much to be desired.


Router interfaces for web are typically very limited and plain looking. Imagine what NetGear could do with Quartex Media Desktop instead

If you like to watch movies you probably have a Plex or Kodi system running somewhere in your house; perhaps you access it directly via your TV, or via a modern media system like the PlayStation 4 or Xbox One. Both Plex and Kodi have web-based interfaces.

Netflix is now omnipresent and has practically become an institution in its own right. Netflix is often installed as an app – but the app is just a thin wrapper around a web interface. That way they don't have to code apps for every possible device and OS out there.

If you commute by train in Scandinavia, chances are you buy tickets at a kiosk booth. Most of these booths run embedded software and the interface is again web-based. That way they can update the whole interface without manually installing new software on each device.


Plex is a much loved system. It is based on a mix of web and native technologies

These are just examples of web based interfaces you might know and use; devices that leverage web technology. As a developer, wouldn’t it be cool if there was a system that could be forked, adapted and provide advanced functionality out of the box?

Just imagine a cheap Jensen router with a Quartex Media Desktop interface! It could provide a proper UI with applications that run in a windowing environment. They could disable ordinary desktop functionality and run their single application in kiosk mode, taking full advantage of the underlying functionality without loss of security.

And the same is true for you. If you have a great idea for a web-based application, you can fork the system, adjust it to suit your needs – and deploy a cutting-edge cloud system in days rather than months!

New compiler?

Up until recently I used Smart Mobile Studio. But since I have left that company, the matter became somewhat pressing. I mean, QTXMD is an open-source system and can't really rely on third-party intellectual property. Eventually I fired up Delphi, forked the latest DWScript, and used that to roll a new command-line compiler.


Web technology has reached a level of performance that rivals native applications. You can pretty much retire Photoshop in favour of web based applications these days

But with a new compiler I also need a new RTL. Thankfully I have been coding away on the new RTL for over a year, but there is still a lot of work to do. I essentially have to implement the same functionality from scratch.

There will be more info on the new compiler / codegen when it's production-ready.


If I was to list all the work I have done since my last post, this article would be a small book. But to sum up the good stuff:

  • Authentication has been moved into its own service
  • The core (the main server) now delegates login messages to said service
  • We no longer rely on the Smart Pascal filesystem drivers, but use the raw node.js functions instead (faster)
  • The desktop now uses the Smart theme engine. This means that we can style the desktop however we like. The OS4 theme that was hardcoded will be moved into its own proper theme-file. This means the user can select between OS4, iOS, Android and Ubuntu styling. Creating your own theme-files is also possible. The Smart theme-engine will be replaced by a more elaborate system in QTX later
  • Ragnarok (the message API) messages now support routing. If a routing structure is provided, the core will relay the message to the process in question (provided security allows said routing for the user)
  • The desktop now checks for .info files when listing a directory. If a file is accompanied by an .info file, the icon is extracted and shown for that file
  • Most of the service layer now relies on the QTX RTL files. We still have some dependencies on the Smart Pascal RTL, but we are making good progress on QTX. Eventually the whole system will have no dependencies outside QTX – and can thus be compiled without any financial obligations.
  • QTX has its own node.js classes, including server and client base-classes
  • Http(s) client and server classes are added to QTX
  • Websocket and WebSocket-Secure are added to QTX
  • TQTXHybridServer unifies HTTP and WebSocket, meaning that this server type can handle both ordinary HTTP requests and WebSocket connections on the same network socket. This is highly efficient for websocket-based services
  • UDP classes for node.js are implemented, both client and server
  • Zero-config classes are now added. These are used by the core for service discovery, meaning that child services hosted on another machine will automatically locate the core without knowing its IP. This is very important for machine clustering (optional; you can define a fixed IP in the core preferences file)
  • Fixed a bug where the scrollbars would corrupt widget states
  • Added API functions for setting the scrollbars from hosted applications (so applications can tell the desktop that they need scrollbars, and set the values)
  • .. and much, much more

I will keep you all posted about the progress. The core (the fundamental system) is set for release in December, so time is of the essence! I'm allocating more or less all my free time to this, and it will be ready to rock around Xmas.

When the core is out, I can focus solely on the applications. Everything from Notepad to Calculator needs to be there, and more importantly, the developer tools. The CloudForge IDE for developers is set for 2020. With that in place you can write applications for iOS, Android, Windows, OS X and Linux directly from Quartex Media Desktop. Nothing to install; you just need a modern browser and a QTX account.

The system is brilliant for small teams and companies. They can set up their own instance, communicate directly via the server (text chat and video chat are scheduled), and work on their products in concert.

Why move to Windows 10?

September 6, 2019 1 comment

When it comes to Windows editions, Windows 7 is probably the most successful operating-system Microsoft has ever released. When it hit stores back in October of 2009, it replaced Windows Vista (Longhorn) which, truth be told, caused more problems than it solved. The issues surrounding Vista were catastrophic for many reasons, but they were especially severe for developers. I remember buying a brand new laptop with Vista pre-installed, but in less than a week I rolled back to Windows XP.


In retrospect, Vista was perhaps not as bad as its reputation would have it. I honestly feel it's a very misunderstood edition of Windows, one that brought features common to the NT family into the mainstream. But back then people were still unfamiliar with what exactly that meant; things like "roaming profiles" were alien to users and developers with no background in networking. In my case Vista came at a juncture where I had two product releases on my hands. Time was of the essence, and spending days refactoring my code for the changes could not have come at a worse moment.

Be that as it may, the rejection of Vista forced Microsoft to replace it with something better. Vista was supposed to have a 10-year life-cycle, but Microsoft put it out of its misery in 3 years.

Windows 7 retirement plan

Windows 7 has been a wonderful system to work with. I can honestly say that, with the exception of Windows 10, it's been the best operating system I have ever used. And I include OS X and Ubuntu in that equation. But as great as it was, Windows 7 is now 10 years old; an eternity in the software business. The needs of consumers and developers are radically different today, and with Windows 10 available as a free upgrade, it's time to let the system go.

Microsoft actually ended mainstream support back in January of 2015 (!), but due to its popularity and massive adoption, they decided to extend support a few more years. This means that Windows 7, although practically retro in computing terms, still receives driver updates and security patches. But that is about to change sooner than you think.

Come next January (read: right after Xmas), Windows 7 has an appointment with the gallows; something that will affect laptops, servers and desktop systems alike. This means there will be no more security patches, no more feature updates and no new virus definitions for Windows Defender. In other words, January 14, 2020 is the day Microsoft takes Windows 7 off life-support.

This retirement also affects tablets, so if you have a Windows 7 based Surface, the time has come to jump ship and get Windows 10 installed. The same is true for Windows 7 Enterprise – it’s already obsolete by half a decade.

Some have stated that the embedded version of Windows 7, used primarily in custom-made products like ATMs, POS terminals and kiosk-type products, somehow avoids this retirement; but that's just it – retirement truly means retirement. January 14, 2020 really is the day Microsoft puts Windows 7 in the ground; be it laptop, server, desktop or Surface.

The king is dead, long live the king

You might be wondering: since Windows 7 is still so popular, why would Microsoft seek to replace it? Well, there are many reasons. First of all, Windows 7 is based on the old NT kernel, which by today's standards is a dinosaur compared to competing operating systems. NT was constructed around a security scheme that has served humanity well, but it's poorly equipped to deal with modern threats. Windows 7 also has a considerably larger memory footprint than Windows 10 – not to mention that Windows 10 has been optimized from scratch for better performance on all supported devices. So it's never really been a question of why, but rather when and at what cost.


Windows 10 comes in many shapes and sizes

You also have to factor in that Windows 10 introduces a host of new features that are unique to that OS. Support for touch interfaces (both display and navigation devices) is one of them, but developers will be more affected by the new application model (UWP) and UI framework. Truth be told, UWP was first introduced in Windows 8 as part of Microsoft's plan to streamline all versions of their OS (tablet, mobile, desktop and server). The promise of UWP is that, if you follow the guidelines and stick to the APIs, the same application can run on all variations of the OS; regardless of CPU, even (more about that below).

Since this was introduced, Microsoft has sadly dropped out of the smartphone OS business. Their mobile edition of Windows never gained the recognition it deserved, and they retired it in favor of Android. Personally I loved their phones; they somehow managed to take the best features from both Apple iOS and Android, and combine them intuitively and elegantly. Not to mention that they cost 40% of what an iPhone or Samsung Galaxy sold for.

Windows 10 is also the first OS from Microsoft that treats Xbox as a first-class citizen, so developing titles for Xbox has become easier. DirectX now aims at delivering a console-level experience on laptop and desktop computers; it's pretty much been refactored from scratch, with aggressive and radical optimization (read: hand-written assembly) to get every last drop of performance out of the hardware.

Unlike previous editions of DirectX, Microsoft has toned down the amount of insulation between your code and the actual hardware. DirectX was always padded left and right with interfaces and abstractions, making raw access to GPU resources impossible (or at least impractical). Thankfully Microsoft has realized that they took this too far, and trimmed the insulation layers accordingly; meaning that developers can now access resources on par with AMD Mantle, Apple Metal and Vulkan (factoid: Vulkan is a replacement for OpenGL. OpenGL originated with Silicon Graphics machines, graphics workstations that were hugely popular back in the 90s and early 2000s).

WinRT, ARM and the beyond

While developers who focus on business applications couldn't care less about DirectX and multimedia, the underlying changes to the Windows 10 core are of such a magnitude that all avenues of development will be affected. Some of the UI changes are profoundly linked to the work that makes Windows 10 unique – and Microsoft has made it perfectly clear that all future endeavors are built on the Windows 10 baseline.


Windows is moving to ARM, and Windows 10 technology is the foundation

Besides purely technical changes, access to the Microsoft Store is one of the features that has a more immediate, financial effect on software development. Marco Cantu actually blogged about this back in 2016, showing how you can use WDB (Windows Desktop Bridge, a.k.a. "Project Centennial") to publish Firemonkey applications to the Microsoft Store. For any modern developer who makes a living from selling software, having their products available through official channels is pretty essential. And that excludes Windows 7 by default.

And last but not least, there is WinRT, short for Windows Runtime, a sandboxed version of Windows that allows applications to be deployed to both x86 and ARM. WinRT involves x86 emulation on ARM SoCs (system-on-a-chip), meaning that you will be able to run applications compiled for x86 on Microsoft's upcoming Windows for ARM release. Performance-wise, emulation will obviously not deliver the same level of performance as native ARM code. The emulation layer is meant as an intermediate solution, giving developers time to evolve compilers that can target ARM directly.

I probably don't have to outline the business opportunities Windows on ARM represents.

Market adoption

If the features and promise of Windows 10 are not enough to convince you to update immediately, consider the following: there are more than 1 billion Windows users in the world. Windows 7 presently holds 37% of the global market (with Windows 10 at 43%), which means that hundreds of millions of computers will be affected by the now imminent retirement plan.


ARM is still a hardware platform companies can afford to postpone, but with both Apple and Microsoft being open about their move to ARM in the near future, the risk for developers being left behind is very real. And having to deal with the cost of refactoring your entire portfolio over something as trivial as an update, well – I’m sure you see my point.

There really is zero strategic advantage in sticking with the lowest common denominator, which in this case is the stock WinAPI that has defined Windows since the nineties. Especially not when upgrading to Windows 10 is free of charge.


From a personal point of view, I cannot imagine being a developer in 2019 and relying on a retired operating system. I must admit that I do own virtual machines where Windows 7 is used, but those are not instances where I do software development; I use them primarily for stress testing software running in other VMware instances, which conceptually is not a problem.

Microsoft is still offering a free upgrade plan for Windows 7 users. In other words there is no financial loss in updating your development machines, be they physical or virtual.

I look forward to Microsoft’s next phase, where virtual reality and augmented reality technology is implemented more closely for all supported hardware platforms. As for changes that affect desktop business applications, have a look at the following links:


Using multiple languages in the same project

August 21, 2019 1 comment

Most compilers can only handle a single syntax for any project, but the Elements compiler from RemObjects deals with 5 (five!) different languages – even within the same project. That's pretty awesome, and opens up for some considerable savings.

I mean, it's not always easy to find developers for a single language, but when you can approach your codebase from C#, Java, Go, Swift and Oxygene (Object Pascal) at the same time (inside the same project, even!), you suddenly have some options. Especially since you can pick exotic targets like WebAssembly. Or what about compiling Java to .Net bytecode? Or using the VCL from C#? It's pretty awesome stuff!

Check out Marc Hoffman's article on the Elements compiler toolchain and how you can mix and match between languages, picking the best from each – while still compiling to a single binary of LLVM-optimized code:




RemObjects Elements + ODroid N2 = true

August 7, 2019 Leave a comment

Since the release of the Raspberry Pi back in 2012, the IoT and embedded market has exploded. The price of the Pi SBC (single-board computer) enabled ordinary people without any engineering background to create their own software and hardware projects; and with that the IoT revolution was born.

Almost immediately after the Pi became a success, other vendors wanted a piece of the pie (pun intended), and an avalanche of alternative mini computers started surfacing in vast quantities. Yet very few of these so-called "Pi killers" actually stood a chance. The power of the Raspberry Pi is not just the price; it's the ecosystem around the product. All those shops selling electronic parts that you can use in your IoT projects, for example.


The ODroid N2, one of the fastest SBCs in its class

The ODroid family of single-board computers stands out as unique in this respect. Where other boards have come and gone, the ODroid boards have remained stable, popular and excellent alternatives to the Raspberry Pi. Hardkernel, the maker of ODroid boards and their many peripherals, is not looking for a "quick buck" like others were. Instead they have slowly and steadily perfected their hardware and software, and seeded a great community.

ODroid is very popular at RemObjects, and when we added 64-bit ARM Linux support a couple of weeks back, it was the ODroid N2 board we used for testing. It has been a smooth ride all the way.


As I am typing this, a collection of ODroid XU4s is humming away inside a small desktop cluster I have built. This cluster is made up of 5 ODroid XU4 boards, with an additional ODroid N2 acting as the head (the board that controls the rest via the network).


My ODroid Cluster in all its glory

Prior to picking ODroid for my own projects, I took the time to test the most popular boards on the market. I think I went through eight or ten models, but none of the others were even close to the quality of ODroid. It's very easy to confuse aggressive marketing with quality. You can have the coolest hardware in the world, but if it lacks proper drivers and a solid Linux distribution, it's for all intents and purposes a waste of time.

Since IoT is something that I find exciting on a personal level, being able to target 64-bit ARM Linux has topped my wish-list for quite some time. So when our compiler wizard Carlo Kok wanted to implement support for 64-bit ARM Linux, I was thrilled!

We used the ODroid N2 throughout the testing phase, and the whole process was very smooth. It took Carlo roughly 3 days to add support for 64-bit ARM Linux and it hit our main channel within a week.

I must stress that while the ODroid N2 is one of our verified SBCs, the code is not explicitly about ODroid. You can target any 64-bit ARM SBC, provided you use a Debian-based Linux (Ubuntu, Mint etc). I tested the same code on the NanoPi board and it ran on the first try.

Why is this important?

The whole point of the Elements compiler toolchain is not just to provide alternative compilers; it's also to ensure that the languages we support become first-class citizens, side by side with other archetypical languages. For example, if all you know is C# or Java, writing kernel drivers has been off limits. If you are operating with traditional Java or .Net, you have to use a native bridge (like the service host under Windows). Your only other option was to code that particular piece in traditional C.


With Elements you can pick whatever language you know and target everything

With Elements that is no longer the case, because our compilers generate LLVM-optimized machine code; code that in terms of speed, access and power stands side by side with C/C++. You can even import C/C++ header files and work directly with the existing infrastructure. There is no middleware, no service host, no bytecode and no compromise.

Obviously you can compile to bytecode too if you like (or WebAssembly), but there are caveats to watch out for when using bytecode on SBCs. The garbage collector can make or break your product, because when it kicks in, it causes CPU spikes. This is where Elements steps up and delivers true native compilation. For all supported targets.

More boards to come

This is just my personal blog, so for the full overview of the boards I am testing there will be a proper article on our official RemObjects blog. Naturally I can't test every single board on the market, but I have around 10 different models, which covers the common boards used by IoT and embedded projects.

But for now at least, you can check off the ODroid N2 (64-bit) and NanoPi Fire 2 (32-bit).

Check out RemObjects Remoting SDK

July 22, 2019 3 comments

RemObjects Remoting SDK is one of those component packages that have become more than the sum of their parts. Just like Project JEDI has almost become standard equipment, Remoting SDK is a system that all Delphi and FreePascal developers should have in their toolbox.

In this article I'm going to present the SDK in broad strokes, from the viewpoint of someone who hasn't used the SDK before. There is still a large number of Delphi developers who don't know it even exists – hopefully this post will shed some light on why the system is worth every penny and what it can do for you.

I should also add that this is a personal blog. This is not an official RemObjects presentation, but a piece written by me based on my subjective experience and notions. We have a lot of running dialog at Delphi Developer on Facebook, so if I come across as overly harsh on a subject, that is my personal view as a Delphi developer.

Stop re-inventing the wheel

Delphi has always been a great tool for writing system services. It has accumulated a vast ecosystem of non-visual components over the years, both commercial and non-commercial, and this allows developers to quickly aggregate and expose complex behavior — everything from graphics processing to databases, file processing to networking.

The challenge for Delphi is that writing large composite systems, where you have more than a single service doing work in concert, is not factored into the RTL or project types. Delphi provides a bare-bones project type for system services, and that's it. Depending on how you look at it, that's either a blessing or a curse. You essentially start at C level.

So fundamental things like IPC (inter-process communication) are something you have to deal with yourself. If you want multi-tenancy, that is likewise not supported out of the box. And all of this is before we venture into protocol standards, message formats and async vs. synchronous execution.

The idea behind Remoting SDK is to get away from this style of low-level hacking. Without sounding negative, it provides the missing pieces that Delphi lacks, including the stuff that C# developers enjoy under .Net (and then some). So if you are a Delphi developer who looks over at C# with a smidge of envy, then you are going to love Remoting SDK.

Say goodbye to boilerplate mistakes

Writing distributed servers and services is boring work. For each function you expose, you have to define the parameters and data types in a portable way, then you have to implement the code that represents the exposed function, and finally the interface itself that can be consumed by clients. The latter must be defined in a way that works with other languages too, not just Delphi. So while server tech in its essential form is quite simple, it's the infrastructure that sets the stage for how quickly you can apply improvements and adapt to change.

For example, let's say you have implemented a wonderful new service. It exposes 60 awesome functions that your customers can consume in their own work. The amount of boilerplate code for 60 distributed functions, especially if you operate with composite data types, is horrendous. It is a nightmare to manage and opens the door to sloppy, unnecessary mistakes.
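To see why generated boilerplate matters, consider a toy code generator (entirely hypothetical, not RemObjects' actual rodl tooling) that turns a declarative service description into uniform client stubs, so those 60 functions never have to be hand-written and can never drift out of sync:

```javascript
// Hypothetical sketch: generating call stubs from a service description,
// the way an IDL-driven tool spares you from writing boilerplate by hand.
const serviceDef = {
  name: 'CustomerService',
  methods: {
    getCustomer: ['id'],
    renameCustomer: ['id', 'newName'],
  },
};

// Build a client whose methods all serialize a request envelope the same way.
function buildClient(def, transport) {
  const client = {};
  for (const [method, params] of Object.entries(def.methods)) {
    client[method] = (...args) =>
      transport({
        service: def.name,
        method,
        params: Object.fromEntries(params.map((p, i) => [p, args[i]])),
      });
  }
  return client;
}

// A fake transport that just returns the envelope it would send on the wire.
const client = buildClient(serviceDef, (envelope) => envelope);
const sent = client.renameCustomer(42, 'ACME');
console.log(sent.method, sent.params.newName); // renameCustomer ACME
```

Adding a 61st method here is one line in `serviceDef`; the stub, parameter names and envelope shape come for free. That is the essence of what a service designer buys you, just at a much larger scale and across languages.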


After you install Remoting SDK, the service designer becomes a part of the IDE

This is where Remoting SDK truly shines. When you install the software, it integrates its editors and wizards closely with the Delphi IDE. It adds a ton of new project types, components and whatnot – but the most important feature is without a doubt the service designer.


Start the service-designer in any server or service project and you can edit the methods, data types and interfaces your system expose to the world

As the name implies, the service designer allows you to visually define your services. Adding a new function is a simple click; the same goes for datatypes and structures (record types). These datatypes are exposed too, and can be consumed from any modern language. So a service you make in Delphi can be used from C#, C/C++, Java, Oxygene, Swift (and vice versa).

Auto generated code

A service designer is all good and well, I hear you say, but what about that boilerplate code? Well, Remoting SDK takes care of that too (kinda the point). Whenever you edit your services, the designer auto-generates a new interface unit for you. This contains the classes and definitions that describe your service. It will also generate an implementation unit with empty functions; you just need to fill in the blanks.

The designer is also smart enough not to remove code. If you go in and change something, it won't just delete the older implementation procedure; only the params and names will be changed if you have already written some code.


Having changed a service, hitting F9 re-generates the interface code automatically. Your only job is to fill in the code for each method in the implementation units. The SDK takes care of everything else for you

The service information, including the type information, is stored in a special file format called "rodl". This format is very close to the WSDL format, but it holds more information. It's important to underline that you can import the service directly from your servers (optional, naturally) as WSDL. So if you want to consume a Remoting SDK service using Delphi's ordinary RIO components, that is not a problem. Visual Studio likewise imports and consumes the services – so Remoting SDK behaves identically regardless of platform or language used.

Remoting SDK is not just for Delphi, just to be clear. If you are presently using both Delphi and C# (which is a common situation), you can buy a license for both C# and Delphi, and use whatever language you feel is best for a particular task or service. You can even get Remoting SDK for JavaScript and call your service-stack directly from your website if you like. So there are a lot of options for leveraging the technology.

Transport is not content

OK, so Remoting SDK makes it easy to define distributed services and servers. But what about communication? Are we boxed into RemObjects' way of doing things?

The remoting framework comes with a ton of components, divided into 3 primary groups:

  • Servers
  • Channels (clients)
  • Messages

The reason for this distinction is simple: the ability to transport data is never the same as the ability to describe data. For example, a message is always connected to a standard. Its job is ultimately to serialize (represent) and de-serialize data according to a format. The server's job is to receive a request and send a response. So these concepts are neatly decoupled for maximum agility.

As of writing the SDK offers the following message formats:

  • Binary
  • Post
  • SOAP
  • JSON

If you are exposing a service that will be consumed from JavaScript, throwing in a TROJSONMessage component is the way to go. If you expect messages to be posted from your website using ordinary web forms, then TROPostMessage is a perfect match. If you want XML then TROSOAPMessage rocks, and if you want fast, binary messages – well then there is TROBinaryMessage.

What you must understand is that you don't have to pick just one! You can drop all four of these message formats and hook them up to your server or channel. The SDK is smart enough to recognize the format and use the correct component for serialization. So creating a distributed service that can be consumed from all major platforms is a matter of dropping components and setting a property.
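In code, the wiring amounts to creating the server and attaching one message component per format. Treat the snippet below as a rough sketch only: the component names are the ones mentioned above, but the unit names and the dispatcher collection are assumptions from memory and may differ between SDK versions, so check the Remoting SDK documentation before copying anything.

```pascal
// Rough sketch, not verified against a specific SDK version: a server with
// both binary and JSON messages attached, so clients of either format can
// connect. The Dispatchers collection shown here is an assumption.
var
  LServer: TROIpHTTPServer;
begin
  LServer := TROIpHTTPServer.Create(nil);
  LServer.Port := 8099;

  // One dispatcher per message format; the SDK inspects the incoming
  // payload and routes it to the matching serializer.
  LServer.Dispatchers.Add.Message := TROBinaryMessage.Create(nil);
  LServer.Dispatchers.Add.Message := TROJSONMessage.Create(nil);

  LServer.Active := true;
end;
```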


If you double-click on a server or channel, you can link message components with a simple click. No messy code snippets in sight.

Multi-tenancy out of the box

With the release of Rad-Server as a part of Delphi, people have started to ask what exactly multi-tenancy is and why it matters. I have to be honest and say that yes, it does matter if you are creating a service stack where you want to isolate the logic for each customer in compartments – but the idea that this is somehow new or unique is not the case. Remoting SDK has given users multi-tenancy support for 15+ years, which is also why I haven't been too enthusiastic about Rad-Server.

Now don't get me wrong, I don't have an axe to grind with Rad-Server. The only reason I mention it is because people have asked how I feel about it. The tech itself is absolutely welcome, but it's the licensing and throwing Interbase in there that rubs me the wrong way. If it could run on SQLite3 and was free with Enterprise, I would have felt differently about it.


There are various models for multi-tenancy, but they revolve around the same principles

To get back on topic: multi-tenancy means that you can dynamically load services and expose them on demand. You can look at it as a form of plugin functionality. The idea in Rad-Server is that you can isolate a customer’s service in a separate package – and then load the package into your server whenever you need it.


Some of the components that ship with the system

The reason I dislike Rad-Server in this respect is that it forces you to compile with packages. So if you want to write a Rad-Server system, you have to compile your entire project as package-based and ship a ton of .bpl files with your system. Packages are not wrong or bad per se, but they open your system up on a fundamental level. There is nothing stopping a customer from rolling his own spoof package and potentially bypassing your security.

There is also an issue with unloading a package: right now the package remains in memory, which means hot-swapping packages without killing the server won't work.

Rad-Server is also hardcoded to use Interbase, which suddenly brings in licensing issues that rub people the wrong way. Considering the price of Delphi in 2019, Rad-Server stands out as a bit of an oddity. Hardcoding a database into it, with the licensing issues that brings, rendered the whole system moot for me. Why should I pay more to get less? Especially when I have been using multi-tenancy with RemObjects for some 15 years.

With Remoting SDK you have something called DLL servers, which do the exact same thing – but using ordinary DLL files (not packages!). You don't have to compile your system with packages, and it takes just one line of code to make your main dispatcher aware of the loaded service.
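To illustrate why plain DLLs are the safer container, here are the general Win32 mechanics – this is not RemObjects' actual API, just the underlying principle. A DLL only exposes the entry points it explicitly exports, whereas a package surrenders its classes and RTTI wholesale. The export name CreateService and the file name below are hypothetical.

```pascal
uses
  Winapi.Windows, System.SysUtils;

type
  // Hypothetical factory signature a plugin DLL might export
  TCreateServiceProc = function: IInterface; stdcall;

var
  LModule:  HMODULE;
  LFactory: TCreateServiceProc;
begin
  LModule := LoadLibrary('userservice.dll'); // hypothetical file name
  if LModule = 0 then
    RaiseLastOSError;

  // Only explicitly exported symbols are reachable from the outside –
  // unlike a package, which exposes its classes wholesale.
  @LFactory := GetProcAddress(LModule, 'CreateService');
  if not Assigned(LFactory) then
    raise Exception.Create('Missing CreateService export');
end;
```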

This actually works so well that I use Remoting SDK as my primary "plugin" system. Even when I write ordinary desktop applications that have nothing to do with servers or services, I always try to compartmentalize features that could be replaced in the future.

For example, I’m a huge fan of ElevateDB, which is a native Delphi database engine that compiles directly into your executable. By isolating that inside a DLL as a service, my application is now engine agnostic – and I get a break from buying a truck load of components every time Delphi is updated.

Saving money

The thing about DLL services is that they can save you a lot of money. I'm actually using an ElevateDB license that was for Delphi 2007. I compiled the engine into a DLL service using D2007 – and then I consume that DLL from my more modern Delphi editions. I have no problem supporting or paying for components, that is right and fair, but having to buy new licenses for every single component each time Delphi is updated? This is unheard of in other languages, and I would rather ditch the platform altogether than fork out $10k every time I update.


A DLL server can be used for many things if you are creative about it

While we are on the subject – Hydra is another great money saver. It allows you to use .net and Java libraries (both visual and non-visual) with Delphi. With Hydra you can design something in .net, compile it into a DLL file, and then use that from Delphi.

But – you can also compile things from an older Delphi and use them in newer versions of Delphi. I'm not forking out for a Developer Express update just to use what I have already paid for in the latest Delphi. I have one license; I compile the forms and components into a Hydra module – and then use it from newer Delphi editions.


Hydra, which is a separate product, allows you to stuff visual components and forms inside a vanilla DLL. It allows cross-language use, so you can finally use Java and .net components inside your Delphi application.

Bonjour support

Another feature I love is the zero-configuration support. This is one of those things you often forget about, but that suddenly becomes important once you deploy a service stack at cluster level.

Remoting SDK comes with support for Apple Bonjour; to use that functionality you install the Bonjour library from Apple. Once it is installed on your host machines, your RemObjects services can find each other.

ZeroConfig is not that hard to code manually. You can roll your own using UDP or vanilla messages. But getting service discovery right can be fiddly. It's one thing to broadcast a UDP message saying "here I am"; it's something else entirely to allow service discovery at cluster level.
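The "here I am" half really is the easy part. A minimal announce using Indy's UDP client (which ships with Delphi) can look like the sketch below; the port number and payload format are arbitrary choices of mine, and this does none of the naming, conflict resolution or cluster-level bookkeeping that makes real discovery hard.

```pascal
uses
  System.SysUtils, IdUDPClient;

procedure AnnouncePresence(const ServiceName: string; ServicePort: word);
var
  LClient: TIdUDPClient;
begin
  LClient := TIdUDPClient.Create(nil);
  try
    // Broadcast a simple "name|port" payload on an arbitrary port
    LClient.BroadcastEnabled := true;
    LClient.Broadcast(ServiceName + '|' + IntToStr(ServicePort), 5454);
  finally
    LClient.Free;
  end;
end;
```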

If Bonjour is not your cup of tea, the SDK provides a second option: RemObjects' own zero-config hub. You can dig into the documentation to find out more about this.

What about that IPC stuff you mentioned?

I mentioned IPC (inter-process communication) at the beginning, which is a must-have if you are making a service stack where each member is expected to talk to the others. In a large server system the services might not even live on the same physical hardware, so you want to account for that.

With the SDK this is just another service. It takes 10 minutes to create a DLL server with the functionality to send and receive messages – and then you just load and plug that into all your services. Done. Finished.

Interestingly, Remoting SDK supports named pipes, so if you are running on a Windows network it's even easier. Personally I prefer a vanilla TCP/IP-based server and channel; that way I can make use of my Linux blades too.

Building on the system

There is nothing stopping you from expanding the system that RemObjects has established. You are not forced to use only their server types, message types and class framework. You can mix and match as you see fit – and also inherit your own variations if you need something special.

For example, WebSocket is an emerging standard that has become wildly popular. Remoting SDK does not support it out of the box – partly because the protocol is practically identical to the RemObjects super-server, and partly because there must be room for third-party vendors.

Andre Mussche took the time to implement a WebSocket server for Remoting SDK a few years back, demonstrating in the process just how easy it is to build on the existing infrastructure. If you are already using Remoting SDK or want WebSocket support, head over to his GitHub repository and grab the code there: https://github.com/andremussche/DelphiWebsockets

I could probably write a whole book covering this framework. For the past 15 years, RemObjects Remoting SDK has been the first product I install after Delphi. It has become standard for me and remains an integral part of my toolkit. Other packages have come and gone, but this one remains.

Hopefully this post has tickled your interest in the product. Whether you are maintaining a legacy service stack or thinking about re-implementing your existing system in something future-proof, this framework will make your life much, much easier. And it won't break the bank either.

You can visit the product page here: https://www.remotingsdk.com/ro/default.aspx

And you can check out the documentation here: https://docs.remotingsdk.com/

Augmented reality, I don’t think so

July 20, 2019 Leave a comment

The world used to be a big place, but having worked around Europe for a few years – and lately in the US – it appears much smaller to me. But the fact of the matter is that different nationalities have different tastes and interests.


The world is not as big as it used to be

In the US right now there is a strong interest in virtual reality. The interest was so strong that Sony jumped on the VR bandwagon early, offering a full VR kit for the PlayStation 4. This has been available for two years already (or is it three?). Here in Scandinavia, though, VR is not that hot. People buy it, but not in the same volume we see in the US. It is expected to pick up this fall when the dark period begins; Norway, Sweden, Denmark and Finland are largely without sunlight for much of the winter – and during that time people turn to indoor hobbies. But right now, VR is not really a thing up here.

In parallel with VR, Microsoft picked up the gauntlet that Google threw away earlier, namely that of augmented reality. You probably remember the Google Glass craze that hit California a decade ago, right? Well, Microsoft has continued to research and develop the technology, which is now available for pre-order.

The problem? Microsoft's HoloLens 2 is a $3500 gadget that is presently aimed at business only, with emphasis on industrial design and medical applications. I don't know about you, but forking out $3500 for what is ultimately just a curiosity is way out of my budget. There are, frankly, much more important things to spend $3500 on.

Asia, the mother of implementation

While America and Europe are hotbeds of invention, it is ultimately Asia that implements, refines and eventually perfects technology. Some might disagree, and there are always exceptions to the rule (3D printers and VR systems are very much American-made), but what I'm talking about are "traits and inclinations" in the market – patterns of commerce, if you like.

What usually happens when something new is made is that it costs a fortune. To push prices down, companies move production to Asia, where materials are swapped out for cheaper alternatives. Some redesigning takes place – and before you know it, a product that cost $3500 when made in the US or the EU can be picked up for $799 on Amazon. After some time, production volume reaches its zenith, and the device that once cost an arm and a leg can be bought for $299 or $199.

But there are exceptions! When a technology is wildly popular, like augmented reality and VR, Asia is quick to produce its own take on it early – trying to get a piece of the proverbial pie. This is often a good thing, especially for consumers. It can also be good for the US- and EU-based companies, because mistakes and design flaws they haven't noticed yet are taken care of by proxy.

With that in mind, I ventured into the Asian markets to see what I could find.

Banggood and Alibaba

Having searched through an avalanche of cheap VR glasses for mobile phones, I finally found something that could be worth looking into. The advert was extremely thin, citing only "augmented reality" in English – with everything else in Chinese (I presume; I must admit I would not spot the difference between Chinese, Japanese and Korean if my life depended on it. Tibetan I can spot due to my Buddhist training, but that's about it).

When I ran the string of characters through Google, it returned this:

“VISION-800 3D Glasses Video Android 4.4
MTK6582 1G/2G 5MP AC WIFI BT4.0 2060P MIC”

Looking at the glasses, they have pretty much everything you would expect from an augmented-reality setup. There is a camera up front, lenses, audio jacks on both sides, a few buttons and switches – and the magic words "powered by Android Wearable". The price was $249, so it wouldn't break the bank either.


The Vision 800 in all their glory

I should also mention that websites like Banggood and Alibaba have pretty good return policies. These websites are actually front-ends for hundreds (if not thousands) of smaller, local outlets. The outlets get a place to sell their goods to the west, and Alibaba and Banggood take a small cut of each sale.

To manage this and make it viable, they have a rating system in place. So if one of the outlets scams you, you can vote them down. Three complaints are enough to get an outlet kicked from either site, which I presume is a dire financial loss (considering the volume these websites push on a daily basis). So there is some degree of consumer safety compared to ordering directly. I would never order anything directly from a tech corner shop in mainland China, because should they rip you off, there is nothing you can do about it.

Augmented? I don’t think so

When the glasses finally arrived, I was surprised at how light they were. You might expect them to be top-heavy and tip forward on the ridge of your nose – but since the weight is mostly in the lenses, not the circuitry, they balance quite well.

But augmented reality? I'm sorry, but these glasses are not even in the ballpark.

The device is running a fork of Android – but not the fork adapted for wearables! The glasses also come with a stock cordless mouse, and you are in fact greeted by a plain desktop. The cordless mouse does work really well, but I thankfully had the sense to order a $5 air-mouse (read: remote control for Android devices), or I would have gone insane trying to exit applications properly.

What you can do is download some augmented-reality apps from Google Play. These will tap into the camera on the glasses, and you can play a few games. But that's really it. I noticed that the outlet had changed the title and text for these glasses a few days before they arrived, so the whole deal is a little bit fishy. Looking at the instruction leaflet, these glasses have been sold as "movie glasses". I would even go so far as to say they have nothing to do with augmented reality.

Media glasses

Augmented reality aside, there are interesting uses for glasses like this. If the field of view is good enough, they could make for a novel screen on the road. If you plug in a hybrid USB dongle that gives you both keyboard and mouse, there is nothing in the way of using the glasses as a monitor / travel PC. You have the same apps you enjoy on your phone, and a modern browser that gives you access to cloud services.

The glasses also have an SD-card slot, which is super handy, plus 2 GB of onboard storage. So if you are taking a flight from Europe to Australia and want to tune out the noise and watch movies, these glasses might just be the ticket.


The audio works well

I must admit it was fun to install Netflix on these and kick back. But this is also when I started to have some issues.

The first issue is that there is no focal lens involved. You are literally looking at two tiny screens, and since I depend on regular glasses, watching without them is a complete blur. I had to use contact lenses (which I hate) to get any use out of these. But if your eyesight is fine, you will no doubt have a better experience.

For someone like me, 100% dependent on regular glasses, it actually makes more sense to buy a cheap, second-hand Samsung Galaxy Edge – which was designed to work as a proper VR display – and permanently fix it in a cheap Samsung VR casing. Even the most rudimentary VR casing offers focal lenses, so you can adjust the focus to compensate for poor eyesight.

The second issue has to do with display resolution. If you have 20/20 eyesight, a high resolution is wonderful. But in my case I would actually see better if the resolution were slightly lower. Sadly the device seems fixed at what I can only presume is 1600×1024 (or something like that), and there are no options for changing resolution, offsetting the display or skewing the picture. Again, these are factors that only become important if you have poor eyesight.


The way they solved audio is actually quite good. On each arm of the glasses there is an audio jack out, and the kit comes with two small earplugs. Again, if you are on a long flight and just want to snuggle up in your own world, this works really well.

If you have ear-pods like I do, you can use them via the standard BT interface. But I noticed a slight lag when using these; no doubt the CPU had problems handling audio while playing a full-HD movie. The lag was not there when I used the normal jack, so the culprit is probably the BT device cache.


I'm not a huge gamer myself; I mostly play games like Destiny on the PlayStation. On the odd occasion that I jump into other games, it's mostly retro stuff. And I have a house pretty much packed with Amiga and Silicon Graphics machines, and more arcade hardware than God ever intended a person to own.

Having said that, the device is capable of both native Android games and emulation. I had no problem installing UAE (the venerable Amiga emulator), and it's more than powerful enough to emulate an A1200 with AGA (Advanced Graphics Architecture).

I didn't test the casting option, even though the device can display-cast to your living-room TV. Somehow it seems backwards to use these as a casting source when you already have a supercomputer in your pocket. Any modern phone, be it a Samsung or an Apple device, will outperform whatever is powering these glasses – so if gaming is your thing, look elsewhere.

Final words

These glasses have potential. Or perhaps better phrased: the technology holds a lot of promise. If they had opted for focal lenses and a wider field of vision, they would have been a fantastic experience. I have no problem imagining this type of tech replacing monitors in the near future – at least for movie experiences.

I must admit it's really tricky to hammer down a verdict. On one hand they are really fun: you can install Netflix, browse the web, and watch movies if you copy them over to an SD card (the glasses come with a 16 GB card). You have mouse control and BT, and I have no problem seeing myself on a flight to Hong Kong enjoying a movie marathon wearing these.

But are they worth $250? I would have to say no. A fair price would be something in the $70 region. If they corrected the lenses, I would have no problem buying a pair at $99. And if they expanded the field of vision to cover the width of the glasses, I would absolutely pick them up at $150. But $250 for this? Sadly, I can't say they are worth the money.

I was also surprised to find Pornhub as a pre-defined shortcut in the browser (I mean, who does that?). It made me laugh at first, thinking it was a cheeky joke – but as a parent who could have bought these for a child, it is utterly unacceptable. It's not the first time I have found smut on a device from Asia. But yeah, a bit creepy.

So I would have to give them a 3 out of 6 verdict. If you have money to burn and a long flight ahead, then by all means – they will give you a comfy way of watching movies during the flight. But the technology is, for lack of a better word, premature.

As for augmented reality: forget it. You are better off stuffing your phone inside a $100 Samsung VR casing. The official Samsung casing for the Galaxy Edge probably costs next to nothing by now, and for $250 you should have no problem sourcing a used Galaxy Edge phone too – which will be 100 times better than this.

I started this post citing the inherent differences between nationalities in what they enjoy, but I must admit that through these, I can see why VR holds such potential. I can't see myself strapping on a full suit, helmet and gloves just to play a game or do some work. But glasses like these (though not these in particular) are absolutely in the vicinity of "practical". It's just a damn shame they didn't do a full-width LCD with focal lenses; then I would have promoted them.

Right now they are a fun curiosity, good for watching the odd movie if your eyesight is perfect – but for the rest of us, they are just not worth the dough.


30% discount on all RemObjects products!

July 8, 2019 Leave a comment

This is brilliant. RemObjects is giving a whopping 30% discount on all products!

This means you can now pick up RemObjects Remoting Framework, Data Abstract, Hydra or the Elements compiler toolchain – with a massive 30% saving!

These are battle-hardened, enterprise level solutions that have been polished over years and they are in constant development. Each solution integrates seamlessly into Embarcadero Delphi and provides a smooth path to delivering quality products in days rather than weeks.

But you better hurry because it’s only valid for one week (!)

Use the coupon code: “DelphiDeveloper”


Use the Delphi Developer coupon to get 30% discount – click here


A Delphi propertybag

July 7, 2019 14 comments

A long, long time ago, way back in the previous century, I often had to adjust a Visual Basic project my company maintained. Going from Object Pascal to VB was more than a little debilitating; Visual Basic was not a compiled language like Delphi, and it lacked more or less every feature you needed to produce good software.


I could probably make a VB clone using Delphi pretty easily. But I think the world has experienced enough suffering, no need to add more evil to the universe

Having said that, I have always been a huge fan of Basic (it was my first language, after all; it's what schools taught in the 70s and 80s). I think it was a terrible mistake for Microsoft to retire Basic as a language, because it's a great way to teach kids the fundamentals of programming.

Visual Basic is still there, available for the .Net framework, but to call it Basic is an insult to the likes of GFA Basic, Amos Basic and Blitz Basic – the mighty compilers of the past. If you enjoyed Basic before Microsoft pushed out the monstrosity that is Visual Basic, then perhaps swing by GitHub and pick up a copy of BlitzBasic? BlitzBasic is a completely different beast. It compiles to machine code, allows inline assembly, and has been wildly popular with game developers over the years.

A property bag

The only feature I found somewhat useful in Visual Basic was an object called a propertybag. It's just a fancy name for a dictionary, but it had a couple of redeeming qualities beyond lookup ability, like being able to load name-value pairs from a string, recognizing datatypes, and exposing type-aware read/write methods. Nothing fancy, but handy when dealing with database connection strings, shell parameters and the like.

So you could feed it strings like this:

first=12;second=hello there;third=3.14

And the class would parse out the names and values, stuff them in a dictionary, and you could easily extract the data you needed. Nothing fancy, but handy on rare occasions.

A Delphi version

I'm mostly porting code from Delphi to Oxygene these days, but here is my Delphi implementation of the propertybag object. Please note that I haven't bothered to mimic the propertybag available in .Net; the Delphi version below is based on the Visual Basic 6 version, with some dependency injection thrown in for good measure.

unit fslib.params;

interface

uses
  System.SysUtils,
  System.Classes,
  System.NetEncoding,
  System.Generics.Collections;

type

  (* Exceptions *)
  EPropertybag           = class(exception);
  EPropertybagReadError  = class(EPropertybag);
  EPropertybagWriteError = class(EPropertybag);
  EPropertybagParseError = class(EPropertybag);

  (* Datatypes *)
  TPropertyBagDictionary = TDictionary<string, string>;

  IPropertyElement = interface
    function  GetAsInt: integer;
    procedure SetAsInt(const Value: integer);

    function  GetAsString: string;
    procedure SetAsString(const Value: string);

    function  GetAsBool: boolean;
    procedure SetAsBool(const Value: boolean);

    function  GetAsFloat: double;
    procedure SetAsFloat(const Value: double);

    function  GetEmpty: boolean;

    property Empty: boolean read GetEmpty;
    property AsFloat: double read GetAsFloat write SetAsFloat;
    property AsBoolean: boolean read GetAsBool write SetAsBool;
    property AsInteger: integer read GetAsInt write SetAsInt;
    property AsString: string read GetAsString write SetAsString;
  end;

  TPropertyBag = class(TInterfacedObject)
  strict private
    FLUT:       TPropertyBagDictionary;
  strict protected
    procedure   Parse(NameValuePairs: string);
  public
    function    Read(Name: string): IPropertyElement;
    function    Write(Name: string; Value: string): IPropertyElement;

    procedure   SaveToStream(const Stream: TStream);
    procedure   LoadFromStream(const Stream: TStream);
    function    ToString: string; override;
    procedure   Clear; virtual;

    constructor Create(NameValuePairs: string); virtual;
    destructor  Destroy; override;
  end;

implementation

const
  cnt_err_sourceparameters_parse =
  'Failed to parse input, invalid or damaged text error [%s]';

  cnt_err_sourceparameters_write_id =
  'Write failed, invalid or empty identifier error';

  cnt_err_sourceparameters_read_id =
  'Read failed, invalid or empty identifier error';

type
  TPropertyElement = class(TInterfacedObject, IPropertyElement)
  strict private
    FName:      string;
    FData:      string;
    FStorage:   TPropertyBagDictionary;
  strict protected
    function    GetEmpty: boolean; inline;

    function    GetAsInt: integer; inline;
    procedure   SetAsInt(const Value: integer); inline;

    function    GetAsString: string; inline;
    procedure   SetAsString(const Value: string); inline;

    function    GetAsBool: boolean; inline;
    procedure   SetAsBool(const Value: boolean); inline;

    function    GetAsFloat: double; inline;
    procedure   SetAsFloat(const Value: double); inline;
  public
    property    AsFloat: double read GetAsFloat write SetAsFloat;
    property    AsBoolean: boolean read GetAsBool write SetAsBool;
    property    AsInteger: integer read GetAsInt write SetAsInt;
    property    AsString: string read GetAsString write SetAsString;
    property    Empty: boolean read GetEmpty;

    constructor Create(const Storage: TPropertyBagDictionary; Name: string; Data: string); overload; virtual;
    constructor Create(Data: string); overload; virtual;
  end;

// TPropertyElement

constructor TPropertyElement.Create(Data: string);
begin
  inherited Create;
  FData := Data.Trim();
end;

constructor TPropertyElement.Create(const Storage: TPropertyBagDictionary;
  Name: string; Data: string);
begin
  inherited Create;
  FStorage := Storage;
  FName := Name.Trim().ToLower();
  FData := Data.Trim();
end;

function TPropertyElement.GetEmpty: boolean;
begin
  result := FData.Length < 1;
end;

function TPropertyElement.GetAsString: string;
begin
  result := FData;
end;

procedure TPropertyElement.SetAsString(const Value: string);
begin
  if Value <> FData then
  begin
    FData := Value;
    if FName.Length > 0 then
      if FStorage <> nil then
        FStorage.AddOrSetValue(FName, Value);
  end;
end;

function TPropertyElement.GetAsBool: boolean;
begin
  TryStrToBool(FData, result);
end;

procedure TPropertyElement.SetAsBool(const Value: boolean);
begin
  FData := BoolToStr(Value, true);

  if FName.Length > 0 then
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
end;

function TPropertyElement.GetAsFloat: double;
begin
  TryStrToFloat(FData, result);
end;

procedure TPropertyElement.SetAsFloat(const Value: double);
begin
  FData := FloatToStr(Value);
  if FName.Length > 0 then
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
end;

function TPropertyElement.GetAsInt: integer;
begin
  TryStrToInt(FData, Result);
end;

procedure TPropertyElement.SetAsInt(const Value: integer);
begin
  FData := IntToStr(Value);
  if FName.Length > 0 then
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
end;

// TPropertyBag

constructor TPropertyBag.Create(NameValuePairs: string);
begin
  inherited Create;
  FLUT := TPropertyBagDictionary.Create();

  NameValuePairs := NameValuePairs.Trim();
  if NameValuePairs.Length > 0 then
    Parse(NameValuePairs);
end;

destructor TPropertyBag.Destroy;
begin
  FLUT.Free;
  inherited;
end;

procedure TPropertyBag.Clear;
begin
  FLUT.Clear;
end;

procedure TPropertyBag.Parse(NameValuePairs: string);
var
  LList:      TStringList;
  x:          integer;
  LId:        string;
  LValue:     string;
  LOriginal:  string;
  LPos:       integer;
begin
  // Reset content
  Clear();

  // Make a copy of the original text
  LOriginal := NameValuePairs;

  // Trim and prepare
  NameValuePairs := NameValuePairs.Trim();

  // Anything to work with?
  if NameValuePairs.Length > 0 then
  begin
    // Check if the data is URL-encoded, and if so decode it first
    LPos := pos('%', NameValuePairs);
    if (LPos >= low(NameValuePairs))
    and (LPos <= high(NameValuePairs)) then
      NameValuePairs := TNetEncoding.URL.Decode(NameValuePairs);

    if NameValuePairs.Length > 0 then
    begin
      (* Populate our lookup table *)
      LList := TStringList.Create;
      try
        LList.Delimiter := ';';
        LList.StrictDelimiter := true;
        LList.DelimitedText := NameValuePairs;

        if LList.Count = 0 then
          raise EPropertybagParseError.CreateFmt(cnt_err_sourceparameters_parse, [LOriginal]);

        try
          for x := 0 to LList.Count-1 do
          begin
            LId := LList.Names[x].Trim().ToLower();
            if (LId.Length > 0) then
            begin
              LValue := LList.ValueFromIndex[x].Trim();
              Write(LId, LValue);
            end;
          end;
        except
          on e: exception do
            raise EPropertybagParseError.CreateFmt(cnt_err_sourceparameters_parse, [LOriginal]);
        end;
      finally
        LList.Free;
      end;
    end;
  end;
end;

function TPropertyBag.ToString: string;
var
  LItem: TPair<string, string>;
begin
  setlength(result, 0);
  for LItem in FLut do
  begin
    if LItem.Key.Trim().Length > 0 then
      result := result + Format('%s=%s;', [LItem.Key, LItem.Value]);
  end;
end;

procedure TPropertyBag.SaveToStream(const Stream: TStream);
var
  LData: TStringStream;
begin
  LData := TStringStream.Create(ToString(), TEncoding.UTF8);
  try
    LData.SaveToStream(Stream);
  finally
    LData.Free;
  end;
end;

procedure TPropertyBag.LoadFromStream(const Stream: TStream);
var
  LData: TStringStream;
begin
  LData := TStringStream.Create('', TEncoding.UTF8);
  try
    LData.LoadFromStream(Stream);
    Parse(LData.DataString);
  finally
    LData.Free;
  end;
end;

function TPropertyBag.Write(Name: string; Value: string): IPropertyElement;
begin
  Name := Name.Trim().ToLower();
  if Name.Length > 0 then
  begin
    if not FLUT.ContainsKey(Name) then
      FLut.Add(Name, Value);

    result := TPropertyElement.Create(FLut, Name, Value) as IPropertyElement;
  end else
    raise EPropertybagWriteError.Create(cnt_err_sourceparameters_write_id);
end;

function TPropertyBag.Read(Name: string): IPropertyElement;
var
  LData:  String;
begin
  Name := Name.Trim().ToLower();
  if Name.Length > 0 then
  begin
    if FLut.TryGetValue(Name, LData) then
      result := TPropertyElement.Create(LData) as IPropertyElement
    else
      raise EPropertybagReadError.Create(cnt_err_sourceparameters_read_id);
  end else
    raise EPropertybagReadError.Create(cnt_err_sourceparameters_read_id);
end;

end.
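To round things off, here is a quick usage sketch against the class above, feeding it the same kind of string shown earlier (note that float formatting depends on your locale settings, so I haven't annotated that output):

```pascal
uses
  fslib.params;

var
  LBag: TPropertyBag;
begin
  LBag := TPropertyBag.Create('first=12;second=hello there;third=3.14');
  try
    WriteLn(LBag.Read('first').AsInteger);   // 12
    WriteLn(LBag.Read('second').AsString);   // hello there
    WriteLn(LBag.Read('third').AsFloat);

    // Writes are type-aware too, and update the underlying dictionary
    LBag.Write('fourth', '').AsBoolean := true;
    WriteLn(LBag.ToString());
  finally
    LBag.Free;
  end;
end.
```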


BTree for Delphi

July 7, 2019 4 comments


A few weeks back I posted an article on the RemObjects blog regarding universal code, and how, with a little bit of care, you can write code that easily compiles with Oxygene, Delphi and Freepascal – with emphasis on Oxygene.

The example I used was a BTree class that I originally ported from Delphi to Smart Pascal, and then finally to Oxygene to run under WebAssembly.

Long story short, I was asked if I could port the code back to Delphi in its more or less universal form. Naturally there are small differences here and there, but nothing that distinctly separates the Delphi version from the Oxygene or Smart Pascal versions.

Why this version?

If you google BTree and Delphi you will find loads of implementations. They all operate more or less identically, using records and pointers for optimal speed. I decided to base my version on classes for convenience, but it shouldn’t be difficult to rework it to use records if you absolutely need that.

What I like about this BTree implementation is that it’s very functional. It’s easy to traverse the nodes using the ForEach() method, and you can add items using a number as an identifier – but it also supports string identifiers.

I also changed the typical data reference. The data each node represents is usually a pointer; I changed this to a variant to make it more flexible.

Well, here is the Delphi version as promised. Happy to help.

unit btree;

interface

uses
  System.SysUtils,
  System.Classes,
  System.Variants;

type

  // BTree leaf object
  TQTXBTreeNode = class(TObject)
  public
    Identifier: integer;
    Data:       variant;
    Left:       TQTXBTreeNode;
    Right:      TQTXBTreeNode;
  end;

  TQTXBTreeProcessCB = reference to procedure (const Node: TQTXBTreeNode; var Cancel: boolean);

  EBTreeError = class(Exception);

  TQTXBTree = class(TObject)
  private
    FRoot:    TQTXBTreeNode;
    FCurrent: TQTXBTreeNode;
  protected
    function  GetEmpty: boolean;  virtual;
    function  GetPackedNodes: TList;
  public
    property  Root: TQTXBTreeNode read FRoot;
    property  Empty: boolean read GetEmpty;

    function  Add(const Ident: integer; const Data: variant): TQTXBTreeNode; overload; virtual;
    function  Add(const Ident: string; const Data: variant): TQTXBTreeNode; overload; virtual;

    function  Contains(const Ident: integer): boolean; overload; virtual;
    function  Contains(const Ident: string): boolean; overload; virtual;

    function  Remove(const Ident: integer): boolean; overload; virtual;
    function  Remove(const Ident: string): boolean; overload; virtual;

    function  Read(const Ident: integer): variant; overload; virtual;
    function  Read(const Ident: string): variant; overload; virtual;

    procedure Write(const Ident: string; const NewData: variant); overload; virtual;
    procedure Write(const Ident: integer; const NewData: variant); overload; virtual;

    procedure Clear; overload; virtual;
    procedure Clear(const Process: TQTXBTreeProcessCB); overload; virtual;

    function  ToDataArray: TList;
    function  Count: integer;

    procedure ForEach(const Process: TQTXBTreeProcessCB);

    destructor Destroy; override;
  end;

implementation


// TQTXBTree

destructor TQTXBTree.Destroy;
begin
  if FRoot <> nil then
    Clear();
  inherited;
end;

procedure TQTXBTree.Clear;
var
  lTemp:  TList;
  x:  integer;
begin
  if FRoot <> nil then
  begin
    // pack all nodes to a linear list
    lTemp := GetPackedNodes();
    try
      // release each node
      for x := 0 to ltemp.Count-1 do
        TQTXBTreeNode(lTemp[x]).Free;
    finally
      // dispose of list
      lTemp.Free;
    end;

    // reset pointers
    FCurrent := nil;
    FRoot := nil;
  end;
end;

procedure TQTXBTree.Clear(const Process: TQTXBTreeProcessCB);
begin
  // give the caller a chance to process each node before the tree is cleared
  ForEach(Process);
  Clear();
end;

function TQTXBTree.GetPackedNodes: TList;
var
  LData:  Tlist;
begin
  LData := TList.Create();
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
  begin
    LData.Add(Node);
    Cancel := false;
  end);
  result := LData;
end;

function TQTXBTree.GetEmpty: boolean;
begin
  result := FRoot = nil;
end;

function TQTXBTree.Count: integer;
var
  LCount: integer;
begin
  LCount := 0;
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
  begin
    inc(LCount);
    Cancel := false;
  end);
  result := LCount;
end;

function TQTXBTree.ToDataArray: TList;
var
  Data: TList;
begin
  Data := TList.Create();

  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
  begin
    // the list holds the node references; read Node.Data from each entry
    Data.Add(Node);
    Cancel := false;
  end);
  result := data;
end;

function TQTXBTree.Add(const Ident: string; const Data: variant): TQTXBTreeNode;
begin
  result := Add( Ident.GetHashCode(), Data);
end;

function TQTXBTree.Add(const Ident: integer; const Data: variant): TQTXBTreeNode;
var
  lNode:  TQTXBtreeNode;
begin
  LNode := TQTXBTreeNode.Create();
  LNode.Identifier := Ident;
  LNode.Data := data;

  if FRoot = nil then
    FRoot := LNode;

  FCurrent := FRoot;

  while true do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      if (FCurrent.left = nil) then
      begin
        FCurrent.left := LNode;
        break;
      end else
        FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      if (FCurrent.right = nil) then
      begin
        FCurrent.right := LNode;
        break;
      end else
        FCurrent := FCurrent.right;
    end else
      break;
  end;

  result := LNode;
end;

function TQTXBTree.Read(const Ident: string): variant;
begin
  result := Read( Ident.GetHashCode() );
end;

function TQTXBTree.Read(const Ident: integer): variant;
begin
  result := unassigned;
  FCurrent := FRoot;
  while FCurrent <> nil do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      result := FCurrent.Data;
      break;
    end;
  end;
end;

procedure TQTXBTree.Write(const Ident: string; const NewData: variant);
begin
  Write( Ident.GetHashCode(), NewData);
end;

procedure TQTXBTree.Write(const Ident: integer; const NewData: variant);
begin
  FCurrent := FRoot;
  while (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      FCurrent.Data := NewData;
      break;
    end;
  end;
end;

function  TQTXBTree.Contains(const Ident: string): boolean;
begin
  result := Contains( Ident.GetHashCode() );
end;

function TQTXBTree.Contains(const Ident: integer): boolean;
begin
  result := false;
  if FRoot <> nil then
  begin
    FCurrent := FRoot;

    while ( (not Result) and (FCurrent <> nil) ) do
    begin
      if (Ident < FCurrent.Identifier) then
        FCurrent := FCurrent.Left
      else if (Ident > FCurrent.Identifier) then
        FCurrent := FCurrent.Right
      else
        Result := true;
    end;
  end;
end;

function TQTXBTree.Remove(const Ident: string): boolean;
begin
  result := Remove( Ident.GetHashCode() );
end;

function TQTXBTree.Remove(const Ident: integer): boolean;
var
  LFound: boolean;
  LParent: TQTXBTreeNode;
  LReplacement: TQTXBTreeNode;
  LReplacementParent: TQTXBTreeNode;
  LChildCount: integer;
begin
  FCurrent := FRoot;
  LFound := false;
  LParent := nil;
  LReplacement := nil;
  LReplacementParent := nil;

  while (not LFound) and (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.right;
    end else
      LFound := true;

    if LFound then
    begin
      LChildCount := 0;

      if (FCurrent.left <> nil) then
        inc(LChildCount);

      if (FCurrent.right <> nil) then
        inc(LChildCount);

      if FCurrent = FRoot then
      begin
        case (LChildCOunt) of
        0:  begin
              FRoot := nil;
            end;
        1:  begin
              if FCurrent.right = nil then
                FRoot := FCurrent.left
              else
                FRoot := FCurrent.Right;
            end;
        2:  begin
              // find the right-most leaf of the left subtree
              LReplacement := FRoot.left;
              while (LReplacement.right <> nil) do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;

              if (LReplacementParent <> nil) then
              begin
                LReplacementParent.right := LReplacement.Left;
                LReplacement.right := FRoot.Right;
                LReplacement.left := FRoot.left;
              end else
                LReplacement.right := FRoot.right;

              FRoot := LReplacement;
            end;
        end;
      end else
      begin
        case LChildCount of
        0:  if (FCurrent.Identifier < LParent.Identifier) then
              Lparent.left  := nil
            else
              LParent.right := nil;
        1:  if (FCurrent.Identifier < LParent.Identifier) then
            begin
              if (FCurrent.Left = NIL) then
                LParent.left := FCurrent.Right
              else
                LParent.Left := FCurrent.Left;
            end else
            begin
              if (FCurrent.Left = NIL) then
                LParent.right := FCurrent.Right
              else
                LParent.right := FCurrent.Left;
            end;
        2:  begin
              // find the right-most leaf of the left subtree
              LReplacement := FCurrent.left;
              LReplacementParent := FCurrent;

              while LReplacement.right <> nil do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;

              // detach the replacement from its old position; when it is the
              // immediate left child there is nothing to detach
              if LReplacementParent <> FCurrent then
              begin
                LReplacementParent.right := LReplacement.left;
                LReplacement.left := FCurrent.left;
              end;
              LReplacement.right := FCurrent.right;

              if (FCurrent.Identifier < LParent.Identifier) then
                LParent.left := LReplacement
              else
                LParent.right := LReplacement;
            end;
        end;
      end;

      // release the removed node so it does not leak
      FCurrent.Free;
      FCurrent := nil;
    end;
  end;

  result := LFound;
end;

procedure TQTXBTree.ForEach(const Process: TQTXBTreeProcessCB);

  // returns true if the traversal was cancelled
  function ProcessNode(const Node: TQTXBTreeNode): boolean;
  begin
    result := false;
    if Node <> nil then
    begin
      if Node.left <> nil then
      begin
        result := ProcessNode(Node.left);
        if result then
          exit;
      end;

      Process(Node, result);
      if result then
        exit;

      if (Node.right <> nil) then
      begin
        result := ProcessNode(Node.right);
        if result then
          exit;
      end;
    end;
  end;

begin
  ProcessNode(FRoot);
end;

end.


Calling node.js from Delphi

July 6, 2019 Leave a comment

We got a good question about how to start a node.js program from Delphi on our Facebook group today (third one in a week?). When you have been coding for years you often forget that things like this might not be immediately obvious. Hopefully I can shed some light on the options in this post.

Node or chrome?

Just to be clear: node.js has nothing to do with Chrome or Chromium Embedded. Chrome is a web browser, a completely visual environment and ecosystem.

Node.js is the complete opposite. It is a purely shell-based environment, meaning that it’s designed to run services and servers, with emphasis on the latter.

The only thing node.js and chrome have in common, is that they both use the V8 JavaScript runtime engine to load, JIT compile and execute scripts at high speed. Beyond that, they are utterly alien to each other.

Can node.js be embedded into a Delphi program?

Technically there is nothing stopping a C/C++ developer from compiling the node.js core system as C++ Builder compatible .obj files; files that can then be linked into a Delphi application through references. But this also requires a bit of scaffolding, like adding support for malloc_, free_ and a few other procedures – so that your .obj files use the same memory manager as your Delphi code. But until someone does just that and publishes it, I’m afraid you are stuck with two options:

  • Use a library called Toby, that keeps node.js in a single DLL file. This is the most practical way if you insist on hosting your own version of node.js
  • Add node.js as a prerequisite and give users the option to locate the node.exe in your application’s preferences. This is the way I would go, because you really don’t want to force users to stick with your potentially outdated or buggy build.

So yes, you can use Toby and just add the Toby DLL file to your program folder, but I have to strongly advise against that. There is no point setting yourself up to maintain a whole separate programming language, just because you want JavaScript support.

“How many in your company can write high quality WebAssembly modules?”

If all you want to do is support JavaScript in your application, then I would much rather install Besen into Delphi. Besen is a JavaScript runtime engine written in Freepascal. It is fully compatible with Delphi, and follows the ECMA standard to the letter. So it is extremely compatible, fast and easy to use.

Like all Delphi components Besen is compiled into your application, so you have no dependencies to worry about.

Starting a node.js script

The easiest way to start a node.js script, is to simply shell-execute out of your Delphi application. This can be done as easily as:

ShellExecute(Handle, 'open', PChar('node.exe'), pchar('script.js'), nil, SW_SHOW);

This is more than enough if you just want to start a service, server or do some work that doesn’t require that you capture the result.

If you need to capture the result – the data that your node.js program emits on stdout – there is a nice component in the Jedi Component Library, and there are plenty of examples online showing how to do that.
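The node side of such a setup can be as simple as a script that does its work and prints a JSON result on stdout. A minimal, hypothetical example (the script name and the doWork() function are my own invention):

```javascript
// script.js - hypothetical worker started from Delphi via shell-execute.
// It does some work and emits the result as JSON on stdout, where the
// Delphi side can capture and parse it.
function doWork() {
  var result = { status: "ok", sum: 0 };
  for (var i = 1; i <= 100; i++) {
    result.sum += i;
  }
  return JSON.stringify(result);
}

console.log(doWork());
```

The Delphi side then simply reads the captured stdout text and parses the JSON.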

If you need even further communication, you need to look for a shell-execute that supports pipes. All node.js programs have something called a message-channel in the JavaScript world. In reality though, this is just a named pipe that is automatically created when your script starts (with the same moniker as the PID [process identifier]).

If you opt for the latter you have a direct, full duplex message channel directly into your node.js application. You just have to agree with yourself on a protocol, so that your Delphi code understands what node.js is saying, and vice versa.
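Whatever transport you pick, the protocol part boils down to framing. A minimal sketch of a newline-delimited JSON protocol is shown below – the names encodeMessage and MessageParser are my own, not part of node.js:

```javascript
// Newline-delimited JSON framing: one message per line.
// encodeMessage() is used by the sender, MessageParser by the receiver.
function encodeMessage(obj) {
  return JSON.stringify(obj) + "\n";
}

function MessageParser(onMessage) {
  var buffer = "";
  // feed() accepts raw chunks as they arrive from the pipe; complete
  // lines are parsed and handed to the onMessage callback.
  this.feed = function (chunk) {
    buffer += chunk;
    var pos;
    while ((pos = buffer.indexOf("\n")) >= 0) {
      var line = buffer.slice(0, pos);
      buffer = buffer.slice(pos + 1);
      if (line.length > 0) {
        onMessage(JSON.parse(line));
      }
    }
  };
}
```

The Delphi side would implement the same framing against the pipe handle, so both ends can send and receive whole messages regardless of how the bytes are chunked in transit.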


If you don’t want to get your hands dirty with named pipes and rolling your own protocol, you can just use UDP to let your Delphi application communicate with your node.js process. UDP is practically without cost since it’s fundamental to all networking stacks, and in your case you will be shipping messages purely between processes on localhost. Meaning: packets are never sent on the network, but rather delegated between processes on the same machine.

In that case, I suggest you pass in the port you want your UDP server to listen on, so that your node.js service acts as the server. A simple command-line statement like:

node.exe myservice.js 8090

Inside node.js you can set up a UDP server with very little fuss:

function setupServer(port) {
  var dgram = require("dgram");
  var socket = dgram.createSocket("udp4");

  socket.on('listening', function() {
    console.log("UDP server listening on port " + port);
  });

  socket.on('message', parseMessage);
  socket.bind(port);
  return socket;
}

function parseMessage(message, rinfo) {
  try {
    var messageObject = JSON.parse(message);
    var eventType = messageObject.eventType;
    // dispatch on eventType here
  } catch(e) {
    console.log("malformed message: " + e.message);
  }
}

// the port is passed on the command line: node.exe myservice.js 8090
setupServer(parseInt(process.argv[2], 10) || 8090);

Note: the code above assumes a JSON text message.

You can then use any Delphi UDP client to communicate with your node.js server. Indy works well, and Synapse is a good library with less overhead – there are many options here.

Do I have to learn Javascript to use node.js?

If you download DWScript you can hook up the JS codegen library (see the library folder in the DWScript repository), and use that to compile DWScript (object pascal) to kick-ass JavaScript. This is the same compiler that was used in Smart Mobile Studio.

“Adding WebAssembly to your resume is going to be a hell of a lot more valuable in the years to come than C# or Java”

Another alternative is to use Freepascal; they have a pas2js project where you can compile ordinary object pascal to JavaScript. Naturally there are a few things to keep in mind, both for DWScript and Freepascal – like avoiding pointers. But clean object pascal compiles just fine.

If JavaScript is not your cup of tea, or you simply don’t have time to learn the delicate nuances between the DOM (document object model, used by browsers) and the 100% package oriented approach deployed by node.js – then you can skip straight to WebAssembly.

RemObjects Software has a kick-ass WebAssembly compiler, perfect if you don’t have the energy or time to learn JavaScript. As of writing, this is the fastest and most powerful toolchain available. And I have tested them all.

WebAssembly, no Javascript needed

You might remember Oxygene? It used to be shipped with Delphi as a way to target the Microsoft CLR (common language runtime) and the .net framework.

Since then Oxygene and the RemObjects toolchain has evolved dramatically and is now capable of a lot more than CLR support.

  • You can compile to raw, llvm optimized machine code for 8 platforms
  • You can compile to CLR/.Net
  • You can compile to Java bytecodes
  • You can compile to WebAssembly!

WebAssembly is not Javascript, it’s important to underline that. WebAssembly was created especially for developers using traditional languages, so that traditional compilers can emit web friendly, binary code. Unlike Javascript, WebAssembly is a purely binary format. Just like Delphi generates machine-code that is linked into a final executable, WebAssembly is likewise compiled, linked and emitted in binary form.

If that sounds like a sales pitch, it’s not. It’s a matter of practicality.

  • WebAssembly is completely barren out of the box. The runtime environment, be it V8 for the browser or V8 for node.js, gives you nothing to start with. You don’t even have WriteLn() to emit text.
  • Google expects compiler makers to provide their own RTL functions, from the fundamental to the advanced. The only thing V8 gives you, is a barebone way of referencing objects and functions on the other side, meaning the JS and DOM world. And that’s it.

So the reason I’m talking a lot about Oxygene and RemObjects Elements (Elements is the name of the compiler toolchain RemObjects offers), is that it ships with an RTL. So you are not forced to start at the actual, literal assembly level.


If you don’t want to study JavaScript, Oxygene and Elements from RemObjects is the solution

RemObjects also delivers a DelphiVCL compatibility framework. This is a clone of the Delphi VCL / Freepascal LCL. Since WebAssembly is still brand new, work is being done on this framework on a daily basis, with updates being issued all the time.

Note: The Delphi VCL framework is not just for WebAssembly. It represents a unified framework that can work anywhere. So if you switch from WebAssembly to say Android, you get the same result.

The most important part of the above, is actually not the visual stuff. I mean, having HTML5 visual controls is cool – but chances are you want to use a library like Sencha, SwiftUI or jQueryUI to compose your forms right? Which means you just want to interface with the widgets in the DOM to set and get values.

jQuery UI Bootstrap

You probably want to use a fancy UI library, like jQuery UI. This works perfectly with Elements, because you can reference the controls from your WebAssembly module. You don’t have to create TButton, TListbox etc. manually.

The more interesting stuff is actually the non-visual code you get access to. Hundreds of familiar classes from the VCL, painstakingly re-created, and usable from any of the 5 languages Elements supports.

You can check it out here: https://github.com/remobjects/DelphiRTL

Skipping JavaScript all together

I don’t believe in single languages. Not any more. There was a time when all you needed was Delphi and a diploma and you were set to conquer the world. But those days are long gone, and a programmer needs to be flexible and have a well-stocked toolbox.

At least try the alternatives before you settle on a phone

Knowing where you want to be is half the journey

The world really doesn’t need yet another C# developer. There are millions of C# developers in India alone. C# is just “so what?”. Which is also why C# jobs pay less than Delphi or node.js system service jobs.

What you want, is to learn the things others avoid. If JavaScript looks alien and you feel uneasy about the whole thing – that means you are growing as a developer. All new things are learned by venturing outside your comfort zone.

How many in your company can write high quality WebAssembly modules?

How many within one hour driving distance from your office or home are experts at WebAssembly? How many are capable of writing industrial scale, production ready system services for node.js that can scale from a single instance to 1000 instances in a large, clustered cloud environment?

Any idiot can pick up node.js and knock out a service, but with your background from Delphi or C++ Builder you have a massive advantage. All those places that can throw an exception, that JS devs usually ignore? As a Delphi or Oxygene developer you know better. And when you re-apply that experience under a different language, suddenly you can do stuff others can’t. Which makes your skills valuable.


The Quartex Media Desktop has made even experienced node / web developers gasp. They are not used to writing custom controls and large-scale systems, which is my advantage

So would you learn JavaScript or just skip to WebAssembly? Honestly? Learn a bit of both. You don’t have to be an expert in JavaScript to complement WebAssembly. Just get a cheap book, like “Node.js for beginners” and “JavaScript the good parts” ($20 apiece) and that should be more than enough to cover the JS side of things.

Adding WebAssembly to your resume and having the material to prove you know your stuff, is going to be a hell of a lot more valuable in the years to come than C#, Java or Python. THAT I can guarantee you.

And, we have a wicked cool group on Facebook you can join too: Click here to visit RemObjects Developer.