Quartex “Cloud Ripper” hardware

November 10, 2019

For close to a year now I have been busy with a very exciting project, namely my own cloud system. While I have written about this project quite a bit these past months, mostly focusing on the software aspect, not much has been said about the hardware.


Quartex “Cloud Ripper” running neatly on my home-office desk

So let’s have a look at Cloud Ripper, the official hardware setup for Quartex Media Desktop.

Tiny footprint, maximum power

Despite its complexity, the Quartex Media Desktop architecture is surprisingly lightweight. The services that make up the baseline system (read: essential services) barely consume 40 megabytes of RAM per instance (!). And while there is a lot of activity going on between these services, most of that activity is message dispatching, and sending messages costs practically nothing in CPU and network terms. This will naturally change the moment you run your cloud as a public service, or set up the system in an office environment for a team. The more users, the more signals are shipped between the processes; but with the exception of reading and writing large files, messages are delivered practically instantaneously and hardly use any CPU time.
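
To make the message-dispatching point concrete, here is a minimal sketch of two services exchanging small JSON envelopes over a loopback TCP socket (both ends live in one script to keep the example short). This is purely an illustration; the envelope fields, port number and service names are invented for the example and are not the actual Quartex protocol.

// message-demo.ts - sketch of cheap JSON message dispatching between services.
// Everything here (fields, port, service names) is invented for illustration.
import * as net from 'node:net';

interface Envelope {
  target: string;     // which service should handle the message
  method: string;     // what we want that service to do
  payload?: unknown;  // optional parameters
}

// A tiny "service": reads newline-delimited JSON envelopes and answers them.
const server = net.createServer(socket => {
  let buffer = '';
  socket.on('data', chunk => {
    buffer += chunk.toString('utf8');
    let idx = buffer.indexOf('\n');
    while (idx >= 0) {
      const msg = JSON.parse(buffer.slice(0, idx)) as Envelope;
      buffer = buffer.slice(idx + 1);
      // Dispatching is just a lookup and a reply; no heavy CPU work involved.
      socket.write(JSON.stringify({ ok: true, echo: msg.method }) + '\n');
      idx = buffer.indexOf('\n');
    }
  });
});

server.listen(9500, '127.0.0.1', () => {
  // A tiny "client" service sending one envelope to the service above.
  const client = net.connect(9500, '127.0.0.1', () => {
    const msg: Envelope = { target: 'filesystem', method: 'list', payload: { path: '/' } };
    client.write(JSON.stringify(msg) + '\n');
  });
  client.on('data', data => {
    console.log('reply:', data.toString('utf8').trim());
    client.end();
    server.close();
  });
});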


Quartex Media Desktop is based on a clustered micro-service architecture

One of the reasons I compile my code to JavaScript (Quartex Media Desktop is written from the ground up in Object Pascal, which is compiled to JavaScript) has to do with the speed and universality of Node.js services. As you might know, Node.js is powered by the Google V8 engine, which first converts the code to bytecode and then JIT-compiles the hot paths into highly optimized machine code. When coded right, such JavaScript-based services execute just as fast as those implemented in a native language. There simply are no perks to be gained from using a native language for this type of work. There are, however, plenty of perks to using Node.js as a service host:

  • Node.js delivers the exact same behavior no matter what hardware or operating system you are booting up from. In our case we use a minimal Linux setup with just enough infrastructure to run our services, but you can use any OS that supports Node.js. I actually have it installed on my Android-based smart TV (!)
  • We can literally copy our services between different machines and operating systems without recompiling a line of code. So we don’t need to maintain several versions of the same software for different systems.
  • We can generate scripts “on the fly”, physically ship the code over the network, and execute it on any of the machines in our cluster (see the sketch below this list). While possible to do with native code, it’s not very practical and would raise some major security concerns.
  • Since Node.js supports WebAssembly, you can use the Elements compiler from RemObjects to write service modules that execute blazingly fast yet remain platform and chipset independent.
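
Here is a rough sketch of the “ship code and execute it” idea using Node’s built-in vm module. The shipped source is just a hard-coded string here (the network transport is omitted), and note that vm on its own is an isolation convenience, not a hardened security boundary; a real system still has to vet what it executes.

// ship-code-demo.ts - sketch of executing a script that arrived over the network.
// The source string and sandbox contents are invented for this example.
import * as vm from 'node:vm';

// Imagine this string arrived as the payload of a message from another node.
const shippedSource = `
  result = input.map(x => x * 2);
`;

// Run it in an isolated context so it only sees what we explicitly expose.
const sandbox: Record<string, unknown> = { input: [1, 2, 3], result: null };
vm.createContext(sandbox);
vm.runInContext(shippedSource, sandbox, { timeout: 1000 });

console.log(sandbox.result); // [ 2, 4, 6 ]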

The Cloud-Ripper cube

The principal design goal when I started the project was that it should be a distributed system. This means that instead of having one large service that does everything (read: a typical “native” monolithic design), we operate with a microservice cluster design, where the services run on separate SBCs (single-board computers). The idea is to spread the payload over multiple micro-computers that combined become more than the sum of their parts.


Cloud Ripper – Based on the Pico 5H case and fitted with 5 x ODroid XU4 SBCs

So instead of buying a single, dedicated x86 PC to host Quartex Media Desktop, you can buy cheap, off-the-shelf, easily available single-board computers and daisy-chain them together. Rather than spending $800 (just to pin a number) on x86 hardware, you can pick up $400 worth of cheap ARM boards and get better network throughput and identical processing power (*). In fact, since Node.js is universal you can mix and match between x86, ARM, MIPS and PPC as you see fit. Got an older PPC Mac Mini collecting dust? Install Linux on it and get a few extra years out of these old gems.

(*) A single XU4 is hopelessly underpowered compared to an Intel i5 or i7 based PC. But in a cluster design there are more factors than just raw computational power. Each board has 8 CPU cores, bringing the total number of cores to 40. You also get 5 ARM Mali-T628 MP6 GPUs running at 533MHz. Only one of these will be used to render the HTML5 display, leaving 4 GPUs available for video processing, machine learning or compute tasks. Obviously these GPUs won’t hold a candle to even a mid-range graphics card, but the fact that we can use these chips for audio, video and computation tasks makes the system incredibly versatile.

Another design goal was to implement a UDP-based zero-configuration mechanism. This means that the services will find and register with the core (read: master service) automatically, provided the machines are all connected to the same router or switch.
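
The idea behind that kind of discovery is simple enough to sketch with Node’s built-in dgram module: a child service broadcasts a small announcement on the LAN, and the master listens for it and registers the sender. The port number and announcement format below are invented for the example; the real Quartex services use their own protocol.

// discover-demo.ts - sketch of UDP-based zero-configuration service discovery.
import * as dgram from 'node:dgram';

const DISCOVERY_PORT = 9600; // arbitrary port picked for the example

// Master: listen for announcements and register whoever sent them.
const master = dgram.createSocket({ type: 'udp4', reuseAddr: true });
master.on('message', (msg, rinfo) => {
  const info = JSON.parse(msg.toString('utf8'));
  console.log(`service "${info.name}" announced itself from ${rinfo.address}:${info.port}`);
});
master.bind(DISCOVERY_PORT);

// Child service: broadcast an announcement so the master can find us.
const child = dgram.createSocket('udp4');
child.on('listening', () => {
  child.setBroadcast(true); // required before sending to the broadcast address
  const hello = Buffer.from(JSON.stringify({ name: 'filesystem', port: 9700 }));
  child.send(hello, DISCOVERY_PORT, '255.255.255.255');
});
child.bind();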


Put together your own supercomputer for less than $500

The first “official” hardware setup is a cluster based on 5 cheap ARM boards, namely the ODroid XU4. The entire setup fits inside a Pico Cube, which is a special case designed to house this particular model of single-board computer. Pico offers several different designs, ranging from 3 boards to a 20-board super-cluster. You are not limited to ODroid XU4 boards if you prefer something else. I picked the XU4 boards because they represent the lowest possible specs you can run the Quartex Media Desktop on. While the services themselves require very little, the master board (the board that runs the QTXCore.js service) is also in charge of rendering the HTML5 display. And having tested a plethora of boards, the ODroid XU4 was the only model that could render the desktop properly at such a low price.

Note: If you are thinking about using a Raspberry PI 3B (or older) as the master SBC, you can pretty much forget it. The media desktop is a piece of very complex HTML5, and anything below an ODroid XU4 will only give you a terrible experience (!). You can use smaller boards as slaves, meaning that they can host one of the services, but the master should preferably be an ODroid XU4 or better. The ODroid N2 [with 4 GB RAM] is a much better candidate than a Raspberry PI v4. A Jetson Nano is an even better option due to its extremely powerful GPU.

Booting into the desktop

One of the things that confuses people when they read about the desktop project is how it’s possible to boot into the desktop itself and use Quartex Media Desktop as a ChromeOS alternative.

How can a “cloud platform” be used as a desktop alternative? Don’t you need access to the internet at all times? If it’s a server based system, how then can we boot into it? Don’t we need a second PC with a browser to show the desktop?


Accessing the desktop like a “web-page” from a normal Linux setup

To make a long story short: the “master” in our cluster architecture (read: the single-board computer defined as the boss) is set up to boot into a Chrome browser display in “kiosk mode”. Starting Chrome in kiosk mode removes all traces of the ordinary browser experience. There are no toolbars, no URL field, no keyboard shortcuts, no right-click popup menus and so on. Chrome simply starts in full-screen, and whatever HTML5 you load has complete control over the display.

What I have done is to set up a minimal Linux boot sequence. It contains just enough Linux to run Chrome, so it has all the drivers etc. for the device; but instead of starting the ordinary Linux desktop (X or Wayland), we start Chrome in kiosk mode.


Booting into the same desktop through Chrome in Kiosk Mode. In this mode, no Linux desktop is required. The Linux boot sequence is altered to jump straight into Chrome

Chrome is started to load from 127.0.0.1 (a special address that always means “this machine”), which is where our QTXCore.js service resides with its own HTTP/S and WebSocket servers. The client (the HTML5 part) is loaded in under a second from the core, and the experience is more or less identical to starting your Chromebook or NAS box. Most modern NAS (network-attached storage) devices are much more than a file server today. NAS boxes like those from Asustor Inc have HDMI out, ship with a remote control, and are designed to act as media centers. So you connect the NAS directly to your TV, and can watch movies and listen to music without any manual conversion.
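
Serving ordinary HTTP and WebSocket traffic from the same port, the way QTXCore.js does, is straightforward in Node.js because WebSocket connections arrive as HTTP upgrade requests. The sketch below uses the popular ws npm package purely for illustration (the real service has its own implementation), and the port number is arbitrary.

// hybrid-server-demo.ts - sketch of HTTP and WebSocket on one network socket.
import * as http from 'node:http';
import { WebSocketServer } from 'ws';

const server = http.createServer((_req, res) => {
  // Ordinary HTTP requests: this is where the HTML5 client would be served from.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>desktop client goes here</h1>');
});

// WebSocket connections arrive on the same port as HTTP "upgrade" requests.
const wss = new WebSocketServer({ server });
wss.on('connection', socket => {
  socket.on('message', data => {
    // Message dispatching would happen here; we just echo for the example.
    socket.send(`echo: ${data}`);
  });
});

server.listen(8090, '127.0.0.1', () => {
  console.log('hybrid server listening on http://127.0.0.1:8090');
});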

In short, you can set up Quartex Media Desktop to do the exact same thing ChromeOS does, booting straight into the web-based desktop environment; the same desktop environment that is available over the network. So you are not limited to visiting your Cloud-Ripper machine via a browser from another computer, nor are you limited to using a dedicated machine. You can set up the system as you see fit.

Why should I assemble a Cloud-Ripper?

Getting a Cloud-Ripper is not forced on anyone. You can put together whatever spare hardware you have (or just run it locally under Windows). Since the services are extremely lightweight, any x86 PC will do. If you invest in an ODroid N2 board (in the $80 range), you can install all the services on that if you like. So if you have no interest in clustering or building your own supercomputer, then any PC, laptop or IoT single-board computer will do, provided it yields equal or more power than the XU4 (!)

What you will experience with a dedicated cluster, whether or not you put the boards in a nice cube, is excellent performance for very little money. It is quite amazing what $200 can buy you in 2019. And when you daisy-chain 5 ODroid XU4 boards together on a switch, those 5 cheap boards will deliver the same serving power as an x86 setup costing twice as much.


The NVidia Jetson Nano SBC, one of the fastest boards available at under $100

Pico offers 3 different packages. The most expensive option is the pre-assembled cube, which is for some reason priced at $750; completely absurd. If you can operate a screwdriver, you can assemble the cube yourself in less than an hour. So the starter-kit case, which costs $259, is more than enough.

Next, you can buy the XU4 boards directly from Hardkernel for $40 apiece, which will set you back $200. If you order the Pico 5H case as a kit, that brings the sub-total up to $459. But that price-tag includes everything you need except SD cards: the kit contains the power supply, the electrical wiring, a fast gigabit ethernet switch [built into the cube], active cooling, network cables and power cables. You don’t need more than 8 GB SD cards, which cost practically nothing these days.

Note: The Quartex Media Desktop “file-service” should have a dedicated disk. I bought a 256 GB SSD with a USB 3.0 interface, but you can just use a vanilla USB stick to store user-account data and user files.

As a bonus, such a setup is easy to recycle should you want to do something else later. Perhaps you want to learn more about Kubernetes? What about a Docker swarm? A FreePascal build server perhaps? Why not install FreeNAS, Plex and a good backup solution? You can scale this to whatever you can afford. If 5 x ODroid XU4 is too much, then get 3 of them instead plus the Pico 3H case.

So should Quartex Media Desktop not be for you, or should you want to do something else entirely, having 5 ODroid XU4 boards around the house is not a bad thing.

Oh, and if you want some serious firepower, then order the Pico 5H kit for the NVidia Jetson Nano boards. Graphically those boards are beyond any other SoC on the market in their price range. But as a consequence the Jetson Nano starts at $99, so for a full kit you will end up paying around $500 for the boards alone. But man, those are the proverbial Ferrari of IoT.

Hydra, what’s the big deal anyway?

October 29, 2019

RemObjects Hydra is a product I have used for years in concert with Delphi, and like most developers who come into contact with RemObjects products, once the full scope of the components hits you, you never want to go back to not using Hydra in your applications.

Note: It’s easy to dismiss Hydra as a “Delphi product”, but Hydra for .Net and Java does the exact same thing, namely letting you mix and match modules from different languages in your programs. So if you are a C# developer looking for ways to incorporate Java, Delphi, Elements or Freepascal components in your application, keep reading.

But let’s start with what Hydra can do for Delphi developers.

What is Hydra anyways?

Hydra is a component package for Delphi, Freepascal, .Net and Java that takes plugins to a whole new level. Now bear with me for a second, because these plugins are in a completely different league from anything you have used in the past.

In short, Hydra allows you to wrap code and components from other languages and use them from Delphi or Lazarus. There are thousands of really amazing components for the .Net and Java platforms, and Hydra allows you to compile those into modules (or “plugins” if you prefer); modules that can then be used in your applications as if they were native components.


Hydra, here using a C# component in a Delphi application

But it doesn’t stop there; you can also mix VCL and FMX modules in the same application. This is extremely powerful, since it offers a clear path to modernizing your codebase gradually rather than doing a time-consuming and costly rewrite.

So if you want to move your aging VCL codebase to FireMonkey, but the cost of rewriting all your forms and business logic for FMX would break your budget, Hydra gives you a second option: you can continue to use your VCL code from FMX and refactor the application at your own pace, with minimal financial impact.

The best of all worlds

Not long ago RemObjects added support for Lazarus (Freepascal) to the mix, which once again opens a whole new ecosystem that Delphi, C# and Java developers can benefit from. Delphi has a lot of really cool components, but Lazarus has components that are not always available for Delphi. There are some really good developers in the Freepascal community, and you will find hundreds of components and classes (if not thousands) that are open source. For example, Lazarus has a branch of SynEdit that is much more evolved and polished than the fork available for Delphi, and with Hydra you can compile that into a module / plugin and use it in your Delphi applications.

This is also true for Java and C# developers. Some of the components available for native languages might not have similar functionality in the .Net world, and by using Hydra you can tap into the wealth that native languages have to offer.

As a Delphi or Freepascal developer, perhaps you have seen some of the fancy grids C# and Java coders enjoy? Developer Express has some of the coolest components available for any platform, but their focus is more on .Net these days than on Delphi. They do maintain the control packages they have, but compared to the amount of development done for C#, their Delphi offerings are abysmal. So with Hydra you can tap into the .Net side of things and use the latest components and libraries in your Delphi applications.

Financial savings

One of the coolest features of Hydra is that you can use it across Delphi versions. This has helped me offset the cost of updating to the latest Delphi.

It’s easy to forget that whenever you update Delphi, you also need to update the components you have bought. This was one of the reasons I was reluctant to upgrade my Delphi license until Embarcadero released Delphi 10.2: I had thousands of dollars invested in components, and updating all my licenses would cost a small fortune.

So to get around this, I put the components into Hydra modules and compiled them using my older Delphi. Then I simply used those modules from my new Delphi installation. This way I was able to cut costs by thousands of dollars and still enjoy the latest Delphi.


Using Firemonkey controls under VCL is easy with Hydra

A couple of years back I also took the time to wrap a ton of older components that work fine but are no longer maintained or sold. I used an older version of Delphi to get these components into a Hydra module, and I can now use those with Delphi 10.3 (!). In my case there was a component set for working closely with Active Directory that I have used in a customer’s project (much faster than going the route via SQL). The company that made these no longer exists, and I have no source code for the components.

The only way I could have used these without Hydra would be to compile them into a .dll file and painstakingly export every single method (or use COM+ to cross the 32-bit / 64-bit barrier), which would have taken me a week since we are talking about a large body of quality code. With Hydra I was able to wrap the whole thing in less than an hour.

I’m not advocating that people stop updating their components. But I am very thankful for the opportunity to delay having to update my entire component stack just to enjoy a modern version of Delphi.

Hydra gives me that opportunity, which means I can upgrade when my wallet allows it.

Building better applications

There is also another side to Hydra, namely that it allows you to design applications in a modular way. If you have the luxury of starting a brand new project and using Hydra from day one, you can isolate each part of your application as a module, avoiding the trap of monolithic applications.


Hydra for .Net allows you to use Delphi, Java and FPC modules under C#

This way of working has a great impact on how you maintain your software, and consequently on how you issue hotfixes and updates. If you have isolated each key part of your application as separate modules, you don’t need to ship a full build every time.

This also safeguards you from having all your eggs in one basket. If you have isolated each form (for example) as a separate module, there is nothing stopping you from rewriting some of these forms in another language, or from crossing the VCL and FMX barrier. You have to admit that being able to use the latest components from Developer Express is pretty cool. There is not a shadow of a doubt that Developer Express makes the best damn components around for any platform. There are many grids for Delphi, but they can’t hold a candle to the latest and greatest from Developer Express.

Why can’t I just use packages?

If you are thinking “hey, this sounds exactly like packages, why should I buy Hydra when packages do the exact same thing?”, then think again, because that’s not how packages work in Delphi.

Delphi packages are cool, but they are also severely limited. One of the reasons you have to update your components whenever you buy a newer version of Delphi is that packages are not backwards compatible.


Delphi packages are great, but severely limited

A Delphi package must be compiled with the same RTL as the host (your program), and version information and RTTI must match. This is because packages use the same RTL and, more importantly, the same memory manager.

Hydra modules are not packages. They are clean and lean library files (*.dll files) that include whatever RTL you compiled them with. In other words, you can safely load a Hydra module compiled with Delphi 7 into a Delphi 10.3 application without having to recompile.

Once you start to work with Hydra, you gradually build up modules of functionality that you can recycle in the future. In many ways Hydra is a whole new take on components and RAD. This is how Delphi packages and libraries should have been.

I don’t mean to say anything bad about Delphi, because Delphi is a system that I love very much; but having to update your entire component stack just to use the latest Delphi is sadly one of the factors that have led developers to abandon the platform. If you have USD 10,000 in dependencies, having to pay for those as well as buying Delphi can be difficult to justify, especially when compared with other languages and ecosystems.

For me, Hydra has been a tremendous boon for Delphi. It has allowed me to keep current with Delphi and all its many new features, without losing the money I have already invested in component packages.

If you are looking for something to bring your product to the next level, then I urge you to spend a few hours with Hydra. The documentation is exceptional, the features and benefits are outstanding, and you will wonder how you ever managed to work without them.

External resources

Disclaimer: I am not a salesman by any stretch of the imagination. I realize that promoting a product made by the company you work for might come across as a sales pitch; but that’s just it: I started to work for RemObjects for a reason, and that reason is that I have used their products since they came on the market. I worked with these components long before I started working at RemObjects.

ARM Linux Services with Oxygene and Elements

October 14, 2019

Linux is one of those systems that just appeals to me out of the box. I work with Windows on a daily basis, but at this point there is really nothing in the way of me jumping ship altogether. Whenever I need something that is Windows-specific, I can just fire up a virtual machine and get the job done there.

The only thing that is stopping me from going “all in” with Linux (and believe me, I have tried) is that finding proper documentation for Linux written with Windows converts in mind is actually a challenge in itself. Most tutorials are meant for non-developers, like how to install a program via Synaptic and so on; which is brilliant if you have no experience with Linux whatsoever. But finding articles that aim to help a Windows developer get up to speed on Linux, that’s the tricky bit.


Top-Left bash window shows the output of my Elements compiled micro-service

One of the features I wanted to learn about was how to run a program as a service on Linux. Under Windows this is quite easy: you have the service manager that gives you a good overview of registered services, and programmatically a service is ultimately just a normal WinAPI program that supports the service API messages. Writing services in either Object Pascal or C# is pretty straightforward. I also do a lot of service work via Quartex Pascal (my own toolchain) that compiles to JavaScript; Node.js is actually a very capable service host once you understand the infrastructure.

Writing Daemons with Oxygene and Elements

Since the Elements compiler generates code for ARM Linux, learning how to get a service registered and started on boot is something that I think many developers will be interested in. It was one of the first questions I had when I started looking at Linux, and it took a while to find a clean-cut answer.

In this little article I will show you how I went about this, but please keep in mind that Linux never has “one way” of doing something. Part of the strength of Linux is that you can configure and customize the system completely, from kernel to desktop. You literally have different service sub-systems to pick from, as well as different window managers, desktop systems (e.g. Wayland or X) and even keyring implementations. This is what makes Linux so confusing when coming from a monoculture like Microsoft Windows.

As for hardware, I’m using an ODroid N2, which is a very powerful ARM-based SBC (single-board computer). You can use more or less any ARM device with Elements, provided the Linux distribution is based on Debian. So a Raspberry PI v4 with Ubuntu or Lubuntu will work fine. I’m using the ODroid N2 “full disk image” with Ubuntu Mate, so nothing out of the ordinary.

To make something clear off the bat: a Linux service (called a daemon, from the ancient Greek word for “helper” and “informer”) is just an ordinary shell application. You don’t have to do anything in particular in terms of code. Once your service is registered, you can start and stop it with the systemctl shell command like any other Linux service.

Note: There are also fork() mechanisms (cloning processes), but those are out of scope for this little post.

Service manifest

Before we can get your binary registered as a service, we need to write a service manifest file. This is just a normal text file in INI format that defines how you want your service to run. Here is an example of such a file:

[Unit]
Description=Elements Service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=qtx
ExecStart=/usr/bin /usr/bin/ElementsService.exe

[Install]
WantedBy=multi-user.target

Before you save this file, remember to replace the username (user=) and the name of your executable.

Note: The ExecStart property can be defined in 3 ways:

  • Direct path to the executable
  • Current working directory + path to executable (like above)
  • Current working directory + path to executable + parameters

You can read more about each property here: systemd service info

Systemd

For Debian-based distributions (Ubuntu branches) the most common service host (or process manager) is called systemd. I am not even going to pretend to know the differences between systemd and the older init. There are fierce debates in the Linux community around these two alternatives, but unless you are a Linux C developer who likes to roll your own kernel on weekends, it’s not really relevant for our goals in this post. Our task here is to write useful services and make them run side by side with other services.

With the service manifest file done, we need to copy it into a place where systemd can find it. So start by saving the manifest file as “elements.service” here:

/etc/systemd/system/elements.service

As you probably guessed from the ExecStart property, your service executable goes in:

/usr/bin/ElementsService.exe

If all went well you can now start your service from the command-line:

systemctl start elements

And you can stop the service with:

systemctl stop elements

Resident services

Starting and stopping a service is all well and good, but that doesn’t mean it will automatically start when you reboot your Linux box. In order to make the service resident (persisted, so Linux remembers to fire it up on boot), you have to enable the service:

systemctl enable elements

If you want to stop the service from starting on boot, just disable it:

systemctl disable elements

Now there are a ton of things you can tweak and change in the service manifest file. For example, do you want Linux to restart your service if it crashes? How many times should Linux attempt to bring the service back up? Should it only bring it back up if the exit code is zero?

If you want Linux to always restart a service if it stops (regardless of reason), you set the following flag in the service-manifest:

Restart=always

If you want Linux to only restart if the service fails, meaning that the exit-code of the application is <> 0, then you use this value instead:

Restart=on-failure

You can also set the service to start only after some other service is up; for example if your service depends on networking (which is what we set in the service manifest above with After=network.target), or on a database engine.
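
As a sketch of the database case, you can list several units in the After line of the manifest. The unit name mysql.service below is just a placeholder; use whatever database service your system actually runs:

[Unit]
Description=Elements Service
After=network.target mysql.service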

There are a ton of different settings you can apply to the service manifest, but listing them all here would make for a small book. Better to just check the documentation and experiment a bit. So check the link and pick the ones that make sense for your particular service.

Reflections

You should be very careful with how you define restart options. If something goes wrong and your service crashes on start, Linux will keep restarting it over and over. Automatic restart creates a loop, and it can be wise to make sure it doesn’t get stuck. I would set restart to “on-failure” exclusively, so that a service which exits gracefully is left alone.

Happy coding! And a special thanks to Benjamin Morel for his helpful posts.

 

 

.NetRocks, you made my day!

October 11, 2019

A popular website for .Net developers is called dot-net-rocks. This is an interesting site that has been going for a while now; well worth a visit if you work with the .Net framework via RemObjects Elements, VS or Mono.

Now it turns out that the guys over at dot-net-rocks just did an episode on their podcast where they open by labeling me a “raving lunatic” (I clearly have my moments), which I find absolutely hilarious, but not for the same reasons they do.

Long story short: they are doing a podcast on how to migrate legacy Delphi applications to C#, and in that context they somehow tracked down an article I posted way back in 2016, which was meant as a satire piece. Now don’t get me wrong, there are serious points in the article, like how the .Net framework was modeled on the Delphi VCL, and how the concepts around the CLR and JIT were researched at Borland; but the tone of the whole thing, the “larger than life” claims and so on, was meant to demonstrate just how some .Net developers behave when faced with alternative ecosystems. Having managed some 16+ user groups for Delphi, C# and JavaScript (a total of six languages) on Facebook for close to 15 years, as well as having worked for Embarcadero, the maker of Delphi, I speak from experience.

It might be news to these guys that large companies around Europe are still using Delphi, modern Delphi, and that Object Pascal as a language scores well on the TIOBE index of popular programming languages. And no amount of echo-chamber mentality is going to change that fact. Heck, as late as 2018 The Walt Disney Company wanted to replace C# with Delphi, because it turns out that bytecodes and embedded tech are not the best combination (CPU spikes when the GC kicks in, no real-time interrupt handling possible, GPIO delays, the list goes on).

I mean, the post I made back in 2016 is such obvious, low-hanging fruit for a show their size to pound on. You have this massive show taking on a single, albeit ranting (and probably a bit of a lunatic if I don’t get my coffee), coder’s post, underlining in the process how little they know about the Object Pascal community at large. They just demonstrated my point in bold, italic and underline 😀

Look before you shoot

DotNetRocks is either oblivious to the fact that Delphi still has millions of users around the world, or to the fact that Pascal is in fact available for .Net (which is a bit worrying, since .Net is supposed to be their game). The alternative is that the facts I listed hit a little too close to home. I’ll leave it up to the reader to decide. Microsoft has lost at least 10 universities around Europe to Delphi in 2018 that I know of, two of them Norwegian, where I was personally involved in the license sales. While only speculation, I do find the timing of their podcast and the focus on me in particular to be “curious”.

And for the record, the most obvious solution when faced with “that legacy Delphi project” is to just go and buy a modern version of Delphi. DotNetRocks delivered a perfect example of the very arrogance my 2016 post was designed to convey; namely that “brogrammers” often act like Delphi 7 was the last Delphi. They also resorted to lies to sell their points: I never said that Anders was dogged for creating Delphi. Quite the opposite. I simply underlined that by ridiculing Delphi with one hand and praising its author with the other, you are indirectly (and paradoxically) invalidating half his career. Anders is an awesome developer, but why exclude how he evolved his skills? Of course Anders’ products will have his architectural signature on them.

Not once did they mention Embarcadero or the fact that Delphi has been aggressively developed since Borland kicked the bucket; probably hoping that undermining the messenger will somehow invalidate the message.


Porting Delphi to C# manually? Ok.. why not install Elements and just compile it into an assembly? You don’t even have to leave Visual Studio

Also, such an odd podcast for professional developers to run with. I mean, who the hell converts a Delphi project to C# manually? It’s like listening to a graphics artist who doesn’t know that Photoshop and Illustrator are the de facto tools to use. How is that even possible? A website dedicated to .Net, yet with no insight into the languages that run on the CLR? Wow.

If you want to port something from Delphi to .Net, you don’t sit down and manually convert stuff. You use proper tools like Elements from RemObjects; this gives you Object Pascal for .Net (so a lot of code will compile just fine with only minor changes). Elements also ships with source-conversion tools, so once you have the code running under Oxygene (the name of the Object Pascal dialect), you either just use the assemblies, or convert the Pascal code to C# through a tool called the Oxidizer.


The most obvious solution is to just upgrade to a Delphi version from this century

The other solution is to use Hydra, also a RemObjects product. You can then compile the Delphi code into a library (including visual parts like forms and frames), and simply use that like any other assembly from within C#. This allows you to gradually phase out older parts without breaking the product. You can also use C# assemblies from Delphi with Hydra.

So by all means, call me what you like. You have only proved my point so far. You clearly have zero insight into the predominant Object Pascal ecosystems, you clearly don’t know the tools developers use to interop between archetypal and contextual languages, and instead of fact-checking some of the points I made, dry humor notwithstanding, you just reacted like brogrammers do.

Well, it’s been weeks since I laughed this hard 😀 You really need to check before you pick someone to verbally abuse on the first date, because you might just bite yourself in the arse, he he.

Cheers

 

Quartex Media Desktop, new compiler and general progress

September 11, 2019

It’s been a few weeks since my last update on the project. The reason I don’t blog that often about Quartex Media Desktop (QTXMD) is that the official user group has grown to 2000+ members, so it’s easier for me to post developer updates directly to that audience rather than writing articles about it.


Quartex Media Desktop ~ a complete environment that runs on every device

If you haven’t bothered digging into the project, let me try to sum it up for you quickly.

Quick recap on Quartex Media Desktop

To understand what makes this project special, first consider the relationship between Microsoft Windows and a desktop program. The operating system, be it Windows, Linux or OS X, provides an infrastructure that makes complex applications possible. The operating system offers functions and services that programs can rely on.

The most obvious being:

  • A filesystem and the ability to save and load data
  • A windowing toolkit so programs can be displayed and have a UI
  • A message system so programs can communicate with the OS
  • A service stack that takes care of background tasks
  • Authorization and identity management (security)

I have just described what the Quartex Media Desktop is all about. The goal is simple:

to provide for JavaScript what Windows and OS X provide for ordinary programs.

Just stop and think about this. Every “web application” you have ever seen has lacked these fundamental features. Sure, there are libraries that give you a windowing environment for JavaScript, like Embarcadero Sencha; but I’m talking about something a bit more elaborate. Creating windows and buttons is easy, but what about ownership? A runtime environment has to keep track of the resources a program allocates, and make sure that security applies at every step.
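
To illustrate what that kind of bookkeeping means in practice, here is a tiny sketch of a resource registry: every resource is registered to the application that allocated it, ownership is checked before access, and everything an application owns is reclaimed when it terminates. All names here are invented for the example; this shows the concept, not the Quartex implementation.

// ownership-demo.ts - sketch of per-application resource ownership tracking.
type Handle = number;

class ResourceRegistry {
  private nextHandle: Handle = 1;
  private owners = new Map<Handle, string>();

  // Register a new resource (window, file, socket, ...) to an application.
  allocate(appId: string, kind: string): Handle {
    const handle = this.nextHandle++;
    this.owners.set(handle, appId);
    console.log(`${appId} allocated ${kind} #${handle}`);
    return handle;
  }

  // Security check: only the owner may touch a resource.
  verify(appId: string, handle: Handle): boolean {
    return this.owners.get(handle) === appId;
  }

  // When an application exits, everything it owned is reclaimed.
  releaseAll(appId: string): void {
    for (const [handle, owner] of this.owners) {
      if (owner === appId) this.owners.delete(handle);
    }
  }
}

const registry = new ResourceRegistry();
const win = registry.allocate('app:notepad', 'window');
console.log(registry.verify('app:calculator', win)); // false - not the owner
registry.releaseAll('app:notepad');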

Target audience and purpose

Take a second and think about how many services you use that have a web interface. In your house you probably have a router, and all routers can be administered via the browser. Sadly, most routers operate with a crude design that leaves much to be desired.


Router interfaces for web are typically very limited and plain looking. Imagine what NetGear could do with Quartex Media Desktop instead

If you like to watch movies you probably have a Plex or Kodi system running somewhere in your house; perhaps you access that directly via your TV, or via a modern media system like the PlayStation 4 or Xbox One. Both Plex and Kodi have web-based interfaces.

Netflix is now omnipresent and has practically become an institution in its own right. Netflix is often installed as an app, but the app is just a thin wrapper around a web interface. That way they don’t have to code apps for every possible device and OS out there.

If you commute by train in Scandinavia, chances are you buy tickets at a kiosk booth. Most of these booths run embedded software and the interface is again web-based. That way they can update the whole interface without manually installing new software on each device.


Plex is a much loved system. It is based on a mix of web and native technologies

These are just examples of web-based interfaces you might know and use; devices that leverage web technology. As a developer, wouldn’t it be cool if there were a system that could be forked and adapted, and that provided advanced functionality out of the box?

Just imagine a cheap Jensen router with a Quartex Media Desktop interface! It could provide a proper UI with applications that run in a windowing environment. The vendor could disable ordinary desktop functionality and run their single application in kiosk mode, taking full advantage of the underlying functionality without loss of security.

And the same is true for you. If you have a great idea for a web-based application, you can fork the system, adjust it to suit your needs, and deploy a cutting-edge cloud system in days rather than months!

New compiler?

Up until recently I used Smart Mobile Studio. But since I left that company, the matter became somewhat pressing; QTXMD is an open-source system and can’t really rely on third-party intellectual property. Eventually I fired up Delphi, forked the latest DWScript, and used that to roll a new command-line compiler.


Web technology has reached a level of performance that rivals native applications. You can pretty much retire Photoshop in favour of web based applications these days

But with a new compiler I also need a new RTL. Thankfully I have been coding away on the new RTL for over a year, but there is still a lot of work to do. I essentially have to implement the same functionality from scratch.

There will be more info on the new compiler / codegen when it’s production-ready.

Progress

If I were to list all the work I have done since my last post, this article would be a small book. But to sum up the good stuff:

  • Authentication has been moved into its own service
  • The core (the main server) now delegates login messages to said service
  • We no longer rely on the Smart Pascal filesystem drivers, but use the raw Node.js functions instead (faster)
  • The desktop now uses the Smart theme engine. This means that we can style the desktop however we like. The OS4 theme that was hardcoded will be moved into its own proper theme-file, which means the user can select between OS4, iOS, Android and Ubuntu styling. Creating your own theme-files is also possible. The Smart theme engine will be replaced by a more elaborate system in QTX later
  • Ragnarok (the message API) messages now support routing. If a routing structure is provided, the core will relay the message to the process in question (provided security allows said routing for the user)
  • The desktop now checks for .info files when listing a directory. If a file is accompanied by an .info file, the icon is extracted and shown for that file
  • Most of the service layer now relies on the QTX RTL files. We still have some dependencies on the Smart Pascal RTL, but we are making good progress on QTX. Eventually the whole system will have no dependencies outside QTX, and can thus be compiled without any financial obligations.
  • QTX has its own Node.js classes, including server and client base-classes
  • Http(s) client and server classes have been added to QTX
  • WebSocket and WebSocket-Secure have been added to QTX
  • TQTXHybridServer unifies HTTP and WebSocket, meaning that this server type can handle ordinary HTTP requests as well as WebSocket connections on the same network socket. This is highly efficient for WebSocket-based services
  • UDP classes for Node.js are implemented, both client and server
  • Zero-Config classes are now added. These are used by the core for service discovery, meaning that child services hosted on another machine will automatically locate the core without knowing its IP. This is very important for machine clustering (and optional: you can define an explicit IP in the core preferences file)
  • Fixed a bug where the scrollbars would corrupt widget states
  • Added API functions for setting the scrollbars from hosted applications (so applications can tell the desktop that they need scrollbars, and set the values)
  • .. and much, much more

I will keep you all posted about the progress. The core (the fundamental system) is set for release in December, so time is of the essence! I’m allocating more or less all my free time to this, and it will be ready to rock around Xmas.

When the core is out, I can focus solely on the applications. Everything from Notepad to Calculator needs to be there, and more importantly, the developer tools. The CloudForge IDE for developers is set for 2020. With that in place you can write applications for iOS, Android, Windows, OS X and Linux directly from Quartex Media Desktop. Nothing to install; you just need a modern browser and a QTX account.

The system is brilliant for small teams and companies. They can set up their own instance, communicate directly via the server (text chat and video chat are scheduled) and work on their products in concert.

Why move to Windows 10?

September 6, 2019

When it comes to Windows editions, Windows 7 is probably the most successful operating system Microsoft has ever released. When it hit stores back in October of 2009, it replaced Windows Vista (Longhorn), which, truth be told, caused more problems than it solved. The issues surrounding Vista were catastrophic for many reasons, but they were especially severe for developers. I remember buying a brand new laptop with Vista pre-installed, but in less than a week I rolled back to Windows XP.


In retrospect, Vista was perhaps not as bad as its reputation would have it. I honestly feel it’s a very misunderstood edition of Windows, one that brought features common to the NT family into the mainstream. But back then people were still unfamiliar with what exactly that meant; things like “roaming profiles” were alien to users and developers with no background in networking. In my case Vista came at a juncture where I had two product releases on my hands. Time was of the essence, and spending days refactoring my code for the changes could not have come at a worse moment.

Be that as it may, the rejection of Vista forced Microsoft to replace it with something better. Vista was supposed to have a 10-year life cycle, but Microsoft put it out of its misery in 3 years.

Windows 7 retirement plan

Windows 7 has been a wonderful system to work with. I can honestly say that with the exception of Windows 10, it’s been the best operating system I have ever used, and I include OS X and Ubuntu in that equation. But as great as it was, Windows 7 is now 10 years old; an eternity in the software business. The needs of consumers and developers are radically different today, and with Windows 10 available as a free upgrade, it’s time to let the system go.

Microsoft actually ended mainstream support back in January of 2015 (!), but due to its popularity and massive adoption, they decided to extend support a few more years. This means that Windows 7, although practically retro in computing terms, still receives driver updates and security patches. But that is about to change sooner than you think.

Come next January (read: over Xmas), Windows 7 has an appointment with the gallows; something that will affect laptops, servers and desktop systems alike. This means there will be no more security patches, no more feature updates and no new virus definitions for Windows Defender. In other words, January 14, 2020 is the day Microsoft takes Windows 7 off life support.

This retirement also affects tablets, so if you have a Windows 7 based Surface, the time has come to jump ship and get Windows 10 installed. The same is true for Windows 7 Enterprise; it’s already obsolete by half a decade.

Some have stated that the embedded version of Windows 7, used primarily in custom-made products like ATMs, POS terminals and kiosk-type products, somehow avoids this retirement; but that’s just it: retirement truly means retirement. January 14, 2020 really is the day Microsoft puts Windows 7 in the ground, be it on laptop, server, desktop or Surface.

The king is dead, long live the king

You might be wondering: since Windows 7 is still so popular, why would Microsoft seek to replace it? Well, there are many reasons. First of all, Windows 7 is based on the old NT kernel, which by today’s standards is a dinosaur compared to competing operating systems. NT was constructed around a security scheme that has served humanity well, but it’s poorly equipped to deal with modern threats. Windows 7 also has a considerably larger memory footprint than Windows 10, not to mention that Windows 10 has been optimized from scratch for better performance on all supported devices. So it’s never really been a question of why, but rather of when and at what cost.


Windows 10 comes in many shapes and sizes

You also have to factor in that Windows 10 introduces a host of new features that are unique to that OS. Support for touch interfaces (both display and navigation devices) is one of them, but developers will be more affected by the new application model (UWP) and UI framework. Truth be told, UWP was first introduced in Windows 8 as a part of Microsoft’s plan to streamline all versions of their OS (tablet, mobile, desktop and server). The promise of UWP is that, if you follow the guidelines and stick to the APIs, the same application can run on all variations of the same OS; regardless of CPU even (more about that below).

Since this was introduced, Microsoft has sadly dropped out of the smartphone OS business. Their Windows for mobile never gained the recognition it deserved, and they retired it in favor of Android. Personally I loved their phones; they somehow managed to take the best features from both Apple iOS and Android, and combine them intuitively and elegantly. Not to mention that they cost 40% of what an iPhone or Samsung Galaxy sold for.

Windows 10 is also the first OS from Microsoft that treats Xbox as a first-class citizen, so developing titles for Xbox has become easier. DirectX now aims at delivering a console-level experience on laptop and desktop computers; it has pretty much been refactored from scratch, with aggressive and radical optimization (read: hand-written assembly) to get every last drop of performance out of the hardware.

Unlike previous editions of DirectX, Microsoft has toned down the amount of insulation between your code and the actual hardware. DirectX was always padded left and right with interfaces and abstractions, making raw access to GPU resources impossible (or at least impractical). Thankfully Microsoft has realized that they took this too far, and trimmed the insulation layers accordingly; meaning that developers can now access resources on par with AMD Mantle, Apple Metal and Vulkan (factoid: Vulkan is a replacement for OpenGL, which originated with Silicon Graphics machines, graphics workstations that were hugely popular back in the 90s and early 2000s).

WinRT, ARM and the beyond

While developers who focus on business applications couldn’t care less about DirectX and multimedia, the underlying changes to the Windows 10 core are of such a magnitude that all avenues of development will be affected. Some of the UI changes are profoundly linked to the work that makes Windows 10 unique, and Microsoft has made it perfectly clear that all future endeavors are built on the Windows 10 baseline.


Windows is moving to ARM, and Windows 10 technology is the foundation

Besides purely technical changes, access to the Microsoft Store is one of the features that have a more immediate, financial effect on software development. Marco Cantu actually blogged about this back in 2016, regarding how you can use WDB (Windows Desktop Bridge, a.k.a. “project Centennial”) to publish FireMonkey applications to the Microsoft Store. For any modern developer who makes a living from selling software, having their products available through official channels is pretty essential. And that excludes Windows 7 by default.

And last but not least, there is WinRT, short for Windows Runtime, a sandboxed version of Windows that allows applications to be deployed to both x86 and ARM. WinRT involves x86 emulation on ARM SoCs (systems on a chip), meaning that you will be able to run applications compiled for x86 on Microsoft’s upcoming Windows for ARM release. But emulation will obviously not deliver the same level of performance as native ARM code. The emulation layer is meant as an intermediate solution, allowing developers time to evolve compilers that can target ARM directly.

I probably don’t have to outline the business opportunities Windows on ARM represent.

Market adoption

If the features and promise of Windows 10 are not enough to convince you to update immediately, consider the following: there are more than 1 billion Windows users in the world. Windows 7 presently holds 37% of the global market (with Windows 10 at 43%), which means that hundreds of millions of computers will be affected by the now imminent retirement plan.


ARM is still a hardware platform companies can afford to postpone, but with both Apple and Microsoft being open about their move to ARM in the near future, the risk of developers being left behind is very real. And having to deal with the cost of refactoring your entire portfolio over something as trivial as an update; well, I’m sure you see my point.

There really is zero strategic advantage in sticking with the lowest common denominator, which in this case is the stock WinAPI that has defined Windows since the nineties. Especially not when upgrading to Windows 10 is free of charge.

Reflections

From a personal point of view, I cannot imagine being a developer in 2019 and relying on an operating system that is retired. I must admit that I do own virtual machines where Windows 7 is used, but those are not instances where I do software development; I use them primarily for stress testing software running in other VMware instances, which conceptually is not a problem.

Microsoft is still offering a free upgrade plan for Windows 7 users. In other words there is no financial loss in updating your development machines, be they physical or virtual.

I look forward to Microsoft’s next phase, where virtual reality and augmented reality technology is implemented more closely for all supported hardware platforms. As for changes that affect desktop business applications, have a look at the following links:

 

Using multiple languages in the same project

August 21, 2019

Most compilers can only handle a single syntax for any project, but the Elements compiler from RemObjects deals with 5 (five!) different languages, even within the same project. That’s pretty awesome and opens up some considerable savings.

I mean, it’s not always easy to find developers for a single language, but when you can approach your codebase from C#, Java, Go, Swift and Oxygene (Object Pascal) at the same time (inside the same project even!), you suddenly have some options. Especially since you can pick exotic targets like WebAssembly. Or what about compiling Java to .Net bytecode? Or using the VCL from C#? It’s pretty awesome stuff!

Check out Marc Hoffman’s article on the Elements compiler toolchain and how you can mix and match between languages, picking the best from each, while still compiling to a single binary of LLVM-optimized code.


 

RemObjects Elements + ODroid N2 = true

August 7, 2019

Since the release of the Raspberry PI back in 2012, the IoT and embedded market has exploded. The price of the PI SBC (single-board computer) enabled ordinary people without any engineering background to create their own software and hardware projects, and with that the IoT revolution was born.

Almost immediately after the PI became a success, other vendors wanted a piece of the pie (pun intended), and an avalanche of alternative mini computers started surfacing in vast quantities. Yet very few of these so-called “PI killers” actually stood a chance. The power of the Raspberry PI is not just its price, it’s the ecosystem around the product; all those shops selling electronic parts that you can use in your IoT projects, for example.


The ODroid N2, one of the fastest SBCs in its class

The ODroid family of single-board computers stands out as unique in this respect. Where other boards have come and gone, the ODroid boards have remained stable, popular and excellent alternatives to the Raspberry PI. Hardkernel, the maker of ODroid boards and their many peripherals, is not looking for a “quick buck” like others are. Instead they have slowly and steadily perfected their hardware and software, and seeded a great community.

ODroid is very popular at RemObjects, and when we added 64-bit ARM Linux support a couple of weeks back, it was the ODroid N2 board we used for testing. It has been a smooth ride all the way.

ODroid

As I am typing this, a collection of ODroid XU4s is humming away inside a small desktop cluster I have built. This cluster is made up of 5 x ODroid XU4 boards, with an additional ODroid N2 acting as the head (the board that controls the rest via the network).


My ODroid Cluster in all its glory

Prior to picking ODroid for my own projects, I took the time to test the most popular boards on the market. I think I went through eight or ten models, but none of the others were even close to the quality of ODroid. It’s very easy to confuse aggressive marketing with quality. You can have the coolest hardware in the world, but if it lacks proper drivers and a solid Linux distribution, it’s for all intents and purposes a waste of time.

Since IoT is something I find exciting on a personal level, being able to target 64-bit ARM Linux has topped my wish list for quite some time. So when our compiler wizard Carlo Kok wanted to implement support for 64-bit ARM Linux, I was thrilled!

We used the ODroid N2 throughout the testing phase, and the whole process was very smooth. It took Carlo roughly 3 days to add support for 64-bit ARM Linux and it hit our main channel within a week.

I must stress that while the ODroid N2 is one of our verified SBCs, the code is not explicitly about ODroid. You can target any 64-bit ARM SBC provided you use a Debian-based Linux (Ubuntu, Mint, etc.). I tested the same code on the NanoPI board and it ran on the first try.

Why is this important?

The whole point of the Elements compiler toolchain is not just to provide alternative compilers; it’s also to ensure that the languages we support become first-class citizens, side by side with other archetypal languages. For example, if all you know is C# or Java, writing kernel drivers has been off limits. If you are operating with traditional Java or .Net, you have to use a native bridge (like the service host under Windows); your only other option was to code that particular piece in traditional C.

Water-Weather-tvOS@2x

With Elements you can pick whatever language you know and target everything

With Elements that is no longer the case, because our compilers generate llvm-optimized machine code; code that in terms of speed, access and power stands side by side with C/C++. You can even import C/C++ header files and work directly with the existing infrastructure. There is no middleware, no service host, no bytecodes and no compromise.

Obviously you can compile to bytecodes too if you like (or WebAssembly), but there are caveats to watch out for when using bytecodes on SBCs. The garbage collector can make or break your product, because when it kicks in, it causes CPU spikes. This is where Elements steps up and delivers true native compilation, for all supported targets.

More boards to come

This is just my personal blog, so for the full overview of boards I am testing there will be a proper article on our official RemObjects blog-space. Naturally I can’t test every single board on the market, but I have around 10 different models which cover the common boards used by IOT and embedded projects.

But for now at least, you can check off the ODroid N2 (64-bit) and NanoPI-Fire 2 (32-bit).

Check out RemObjects Remoting SDK

July 22, 2019 3 comments

RemObjects Remoting SDK is one of those component packages that have become more than the sum of their parts. Just like Project JEDI has almost become standard equipment, Remoting SDK is a system that all Delphi and Freepascal developers should have in their toolbox.

ro_logo
In this article I’m going to present the SDK in broad strokes, from the viewpoint of someone who hasn’t used the SDK before. There is still a large number of Delphi developers who don’t know it even exists – hopefully this post will shed some light on why the system is worth every penny and what it can do for you.

I should also add that this is a personal blog. This is not an official RemObjects presentation, but a piece written by me based on my subjective experience and notions. We have a lot of running dialog at Delphi Developer on Facebook, so if I come across as overly harsh on a subject, that is my personal view as a Delphi developer.

Stop re-inventing the wheel

Delphi has always been a great tool for writing system services. It has accumulated a vast ecosystem of non-visual components over the years, both commercial and non-commercial, and this allows developers to quickly aggregate and expose complex behavior — everything from graphics processing to databases, file processing to networking.

The challenge for Delphi is that writing large composite systems, where you have more than a single service doing work in concert, is not factored into the RTL or project type. Delphi provides a bare-bone project type for system services, and that’s it. Depending on how you look at it, it’s either a blessing or a curse. You essentially start on C level.

So fundamental things like IPC (inter-process communication) are something you have to deal with yourself. If you want multi-tenancy, that is likewise not supported out of the box. And all of this is before we venture into protocol standards, message formats and async vs synchronous execution.

The idea behind Remoting SDK is to get away from this style of low-level hacking. Without sounding negative, it provides the missing pieces that Delphi lacks, including the stuff that C# developers enjoy under .net (and then some). So if you are a Delphi developer who looks over at C# with a smudge of envy, you are going to love Remoting SDK.

Say goodbye to boilerplate mistakes

Writing distributed servers and services is boring work. For each function you expose, you have to define the parameters and data-types in a portable way, then you have to implement the code that represents the exposed function, and finally the interface itself that can be consumed by clients. The latter must be defined in a way that works with other languages too, not just Delphi. So while server tech in its essential form is quite simple, it’s the infrastructure that sets the stage for how quickly you can apply improvements and adapt to change.

For example, let’s say you have implemented a wonderful new service. It exposes 60 awesome functions that your customers can consume in their own work. The amount of boilerplate code for 60 distributed functions, especially if you operate with composite data types, is horrendous. It is a nightmare to manage and opens the door to sloppy, unnecessary mistakes.

ide_int

After you install Remoting SDK, the service designer becomes a part of the IDE

This is where Remoting SDK truly shines. When you install the software, it integrates its editors and wizards closely with the Delphi IDE. It adds a ton of new project types, components and whatnot – but the most important feature is without a doubt the service designer.

bonjour

Start the service-designer in any server or service project and you can edit the methods, data types and interfaces your system exposes to the world

As the name implies, the service designer allows you to visually define your services. Adding a new function is a simple click, and the same goes for datatypes and structures (record types). These datatypes are exposed too and can be consumed from any modern language. So a service you make in Delphi can be used from C#, C/C++, Java, Oxygene, Swift (and vice versa).

Auto generated code

A service designer is all well and good I hear you say, but what about that boilerplate code? Well, Remoting SDK takes care of that too (kinda the point). Whenever you edit your services, the designer will auto-generate a new interface unit for you. This contains the classes and definitions that describe your service. It will also generate an implementation unit, with empty functions; you just need to fill in the blanks.

The designer is also smart enough not to remove code. So if you go in and change something, it won’t just delete the older implementation procedure. Only the parameters and names will be changed if you have already written some code.
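To give you a feel for the division of labor, here is a rough, hand-written sketch of the kind of split you end up with: one unit describing the service contract, and one with the method bodies you fill in. The unit name, class names and methods below are purely illustrative and not actual designer output – the real generated code follows RemObjects’ own naming and plumbing.

unit MyService_Sketch;

// Illustration only: shows the contract/implementation split the designer
// gives you. Names and base classes here are hypothetical.

interface

uses
  System.SysUtils;

type
  // the "contract" - normally emitted into its own interface unit
  IMyService = interface
    ['{6F2A3C1E-0000-4000-8000-000000000001}']
    function Login(const Username, Password: string): boolean;
    function GetCustomerName(const CustomerId: integer): string;
  end;

  // the implementation skeleton - generated with empty bodies for you to fill in
  TMyService = class(TInterfacedObject, IMyService)
  public
    function Login(const Username, Password: string): boolean;
    function GetCustomerName(const CustomerId: integer): string;
  end;

implementation

function TMyService.Login(const Username, Password: string): boolean;
begin
  // your code here
  Result := (Username <> '') and (Password <> '');
end;

function TMyService.GetCustomerName(const CustomerId: integer): string;
begin
  // your code here
  Result := Format('Customer #%d', [CustomerId]);
end;

end.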

bonjour_source

Having changed a service, hitting F9 re-generates the interface code automatically. Your only job is to fill in the code for each method in the implementation units. The SDK takes care of everything else for you

The service information, including the type information, is stored in a special file format called “rodl”. This format is very close to Microsoft’s WSDL format, but it holds more information. It’s important to underline that you can import the service directly from your servers (optional naturally) as WSDL. So if you want to consume a Remoting SDK service using Delphi’s ordinary RIO components, that is not a problem. Visual Studio likewise imports and consumes services – so Remoting SDK behaves identically regardless of platform or language used.

Remoting SDK is not just for Delphi, just to be clear on that. If you are presently using both Delphi and C# (which is a common situation), you can buy a license for both C# and Delphi and use whatever language you feel is best for a particular task or service. You can even get Remoting SDK for Javascript and call your service-stack directly from your website if you like. So there are a lot of options for leveraging the technology.

Transport is not content

OK so Remoting SDK makes it easy to define distributed services and servers. But what about communication? Are we boxed into RemObjects way of doing things?

The remoting framework comes with a ton of components, divided into 3 primary groups:

  • Servers
  • Channels (clients)
  • Messages

The reason for this distinction is simple: the ability to transport data is never the same as the ability to describe data. For example, a message is always connected to a standard. Its job is ultimately to serialize (represent) and de-serialize data according to a format. The server’s job is to receive a request and send a response. So these concepts are neatly decoupled for maximum agility.

As of writing the SDK offers the following message formats:

  • Binary
  • Post
  • SOAP
  • JSON

If you are exposing a service that will be consumed from JavaScript, throwing in a TROJSONMessage component is the way to go. If you expect messages to be posted from your website using ordinary web forms, then TROPostMessage is a perfect match. If you want XML then TROSOAPMessage rocks, and if you want fast, binary messages – well then there is TROBinaryMessage.

What you must understand is that you don’t have to pick just one! You can drop all 4 of these message formats and hook them up to your server or channel. The SDK is smart enough to recognize the format and use the correct component for serialization. So creating a distributed service that can be consumed from all major platforms is a matter of dropping components and setting a property.

channels

If you double-click on a server or channel, you can link message components with a simple click. No messy code snippets in sight.

Multi-tenancy out of the box

With the release of Rad-Server as a part of Delphi, people have started to ask what exactly multi-tenancy is and why it matters. I have to be honest and say that yes, it does matter if you are creating a service stack where you want to isolate the logic for each customer in compartments – but the idea that this is somehow new or unique is simply not true. Remoting SDK has given users multi-tenancy support for 15+ years, which is also why I haven’t been too enthusiastic about Rad-Server.

Now don’t get me wrong, I don’t have an axe to grind with Rad-Server. The only reason I mention it is because people have asked how I feel about it. The tech itself is absolutely welcome, but it’s the licensing and throwing Interbase in there that rubs me the wrong way. If it could run on SQLite3 and were free with Enterprise, I would have felt differently about it.

mt-models

There are various models for multi-tenancy, but they revolve around the same principles

To get back on topic: multi-tenancy means that you can dynamically load services and expose them on demand. You can look at it as a form of plugin functionality. The idea in Rad-Server is that you can isolate a customer’s service in a separate package – and then load the package into your server whenever you need it.

ro_comps

Some of the components that ship with the system

The reason I dislike Rad-Server in this respect is that it forces you to compile with packages. So if you want to write a Rad-Server system, you have to compile your entire project as package-based, and ship a ton of package files with your system. Packages are not wrong or bad per se, but they open your system up on a fundamental level. There is nothing stopping a customer from rolling his own spoof package and potentially bypassing your security.

There is also an issue with unloading a package, where right now the package remains in memory. This means that hot-swapping packages without killing the server won’t work.

Rad-Server is also hardcoded to use Interbase, which suddenly brings in licensing issues that rub people the wrong way. Considering the price of Delphi in 2019, Rad-Server stands out as a bit of an oddity. And hardcoding a database into it, with the licensing issues that brings, just rendered the whole system moot for me. Why should I pay more to get less? Especially when I have been using multi-tenancy with RemObjects for some 15 years?

With Remoting SDK you have something called DLL servers, which do the exact same thing – but using ordinary DLL files (not packages!). You don’t have to compile your system with packages, and it takes just one line of code to make your main dispatcher aware of the loaded service.

This actually works so well that I use Remoting SDK as my primary “plugin” system. Even when I write ordinary desktop applications that have nothing to do with servers or services – I always try to compartmentalize features that could be replaced in the future.

For example, I’m a huge fan of ElevateDB, which is a native Delphi database engine that compiles directly into your executable. By isolating that inside a DLL as a service, my application is now engine-agnostic – and I get a break from buying a truckload of components every time Delphi is updated.

Saving money

The thing about DLL services is that you can save a lot of money. I’m actually using an ElevateDB license that was for Delphi 2007. I compiled the engine using D2007 into a DLL service — and then I consume that DLL from my more modern Delphi editions. I have no problem supporting or paying for components, that is right and fair, but having to buy new licenses for every single component each time Delphi is updated? This is unheard of in other languages, and I would rather ditch the platform altogether than fork out $10k every time I update.

dll_project

A DLL server can be used for many things if you are creative about it

While we are on the subject – Hydra is another great money saver. It allows you to use .net and Java libraries (both visual and non-visual) with Delphi. With Hydra you can design something in .net, compile it into a DLL file, and then use that from Delphi.

But — you can also compile things from Delphi and use them in newer versions of Delphi. I’m not forking out for a Developer Express update just to use what I have already paid for in the latest Delphi. I have one license, I compile the forms and components into a Hydra Module — and then use it from newer Delphi editions.

hydra

Hydra, which is a separate product, allows you to stuff visual components and forms inside a vanilla DLL. It allows cross-language use, so you can finally use Java and .net components inside your Delphi application

Bonjour support

Another feature I love is the zero configuration support. This is one of those things that you often forget, but that suddenly becomes important once you deploy a service stack on cluster level.

apple_bonjour_medium-e1485166557218

Remoting SDK comes with support for Apple Bonjour, so if you want to use that functionality, you have to install the Bonjour library from Apple. Once installed on your host machines, your RemObjects services can find each other.

ZeroConfig is not that hard to code manually. You can roll your own using UDP or vanilla messages. But getting service discovery right can be fiddly. One thing is broadcasting a UDP message saying “here I am”; it’s something else entirely to allow service discovery on cluster level.
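Just to illustrate the “here I am” half of a roll-your-own setup, here is a minimal sketch using Indy (which ships with Delphi). The port number and JSON payload are made up, and the hard part – keeping track of who answered, on cluster level – is deliberately left out.

uses
  IdUDPClient;

procedure AnnouncePresence;
var
  LClient: TIdUDPClient;
begin
  LClient := TIdUDPClient.Create(nil);
  try
    // tell anyone listening on the LAN that this node exists
    LClient.BroadcastEnabled := True;
    LClient.Broadcast('{"service":"myservice","port":9999}', 9999);
  finally
    LClient.Free;
  end;
end;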

If Bonjour is not your cup of tea, the SDK provides a second option, which is RemObjects own zero-config hub. You can dig into the documentation to find out more about this.

What about that IPC stuff you mentioned?

I mentioned IPC (inter-process communication) at the beginning here, which is a must-have if you are making a service stack where each member is expected to talk to the others. In a large server system the services might not live on the same physical hardware either, so you want to take that into account.

With the SDK this is just another service. It takes 10 minutes to create a DLL server with the functionality to send and receive messages – and then you just load and plug that into all your services. Done. Finished.

Interestingly, Remoting SDK supports named pipes. So if you are running on a Windows network, it’s even easier. Personally I prefer to use a vanilla TCP/IP based server and channel; that way I can make use of my Linux blades too.

Building on the system

There is nothing stopping you from expanding the system that RemObjects has established. You are not forced to only use their server types, message types and class framework. You can mix and match as you see fit – and also inherit out your own variations if you need something special.

firm_foundation-720x340

For example, WebSocket is an emerging standard that has become wildly popular. Remoting SDK does not support it out of the box, partly because the standard is practically identical to the RemObjects super-server, and partly because there must be room for third-party vendors.

Andre Mussche took the time to implement a WebSocket server for Remoting SDK a few years back, demonstrating in the process just how easy it is to build on the existing infrastructure. If you are already using Remoting SDK or want WebSocket support, head over to his GitHub repository and grab the code there: https://github.com/andremussche/DelphiWebsockets

I could probably write a whole book covering this framework. For the past 15 years, RemObjects Remoting SDK has been the first product I install after Delphi. It has become standard for me and remains an integral part of my toolkit. Other packages have come and gone, but this one remains.

Hopefully this post has tickled your interest in the product. Whether you are maintaining a legacy service stack or thinking about re-implementing your existing system in something future-proof, this framework will make your life much, much easier. And it won’t break the bank either.

You can visit the product page here: https://www.remotingsdk.com/ro/default.aspx

And you can check out the documentation here: https://docs.remotingsdk.com/

Augmented reality, I don’t think so

July 20, 2019 Leave a comment

The world used to be a big place, but having worked around Europe for a few years – and lately in the US – it appears much smaller to me. But the fact of the matter is that different nationalities have different tastes and interests.

1_B3h4Q-19cjz1jOqq3ZP6Mw

The world is not as big as it used to be

In the US right now there is a strong interest in virtual reality. The interest was so strong that Sony jumped on the VR bandwagon early, offering a full VR kit for the Playstation 4. This has been available for two years already (or is it three?). Here in Scandinavia though, VR is not that hot. People buy it, but not in the same volume we see in the US. It is expected to pick up this fall when the dark period begins; Norway, Sweden, Denmark and Finland get very little daylight during the winter months – and in that period people spend more time on indoor hobbies. But right now, VR is not really a thing up here.

In parallel with VR, Microsoft picked up the gauntlet that Google threw away earlier, namely that of augmented reality. You probably remember the Google Glass craze that hit California a few years back, right? Well, Microsoft has continued to research and develop the technology, which is now available for pre-order.

The problem? The Microsoft HoloLens 2 is a $3500 gadget that is presently aimed at business only, with emphasis on industrial design and medical applications. I don’t know about you, but forking out $3500 for what is ultimately just a curiosity is way out of my budget. There are much more important things to spend $3500 on, to be frank.

Asia, the mother of implementation

While America and Europe are hotbeds of invention, it is ultimately Asia that implements, refines and eventually perfects technology. Some might disagree with that – there are always exceptions to the rule (3D printers and VR systems are very much American made) – but what I’m talking about are “traits and inclinations” in the market. Patterns of commerce, if you like.

What usually happens when something new is made is that it costs a fortune. To push prices down, companies move production to Asia, where materials etc. are swapped out for cheaper alternatives. Some re-designing takes place — and before you know it, a product that cost $3500 when made in the US or the EU can be picked up for $799 on Amazon. After some time, production volume reaches its zenith, and the device that once cost an arm and a leg can now be bought for $299 or $199.

But, there are exceptions! When there is a technology that is wildly popular, like augmented reality and VR, Asia is quick to produce their own take on the technology early – trying to get a piece of the proverbial pie. This is often a good thing, especially for consumers. It can also be a good thing for the US and EU based companies – because mistakes and design-flaws they haven’t noticed yet are taken care of by proxy.

With that in mind, I ventured into the Asian markets to see what I could find.

Banggood and Alibaba

Having searched through an avalanche of cheap VR glasses for mobile phones, I finally found something that could be worth looking into. The advert was extremely thin, citing only augmented reality in English – with everything else in Chinese (I presume; I must admit I would not spot the difference between Chinese, Japanese and Korean if my life depended on it. Tibetan I can spot due to my Buddhist training, but that’s about it).

When I ran the string of characters through google, it returned this:

“VISION-800 3D Glasses Video Android 4.4
MTK6582 1G/2G 5MP AC WIFI BT4.0 2060P MIC”

Looking at the glasses, they have pretty much everything you would expect from an augmented reality setup. There is a camera up front, lenses, audio jacks on both sides, a few buttons and switches – and the magic words “powered by Android Wearable”. The price was $249, so it wouldn’t break the bank either.

glasses

The Vision 800 in all their glory

I should also mention that websites like Banggood and Alibaba have pretty good return policies too. These websites are actually front-ends for hundreds (if not thousands) of smaller, local outlets. The outlets get a place to sell their goods to the west, and Alibaba and Banggood take a small cut of each sale.

To manage this and make it viable, they have a rating system in place. So if one of the outlets scams you – you can vote them down. Three complaints are enough to get the outlet kicked from either site, which I presume is a dire financial loss (considering the volume these websites push on a daily basis). So there is some degree of consumer safety compared to ordering directly. I would never order anything directly from a tech cornershop in mainland China, because should they rip you off – there is nothing you can do about it.

Augmented? I don’t think so

When the glasses finally arrived, I was surprised at how light they were. You might expect them to be top-heavy and tip forward on the ridge of your nose – but since the weight is actually the lenses, not the circuitry, they balance quite well.

But augmented reality? I’m sorry, but these glasses are not even in the ballpark.

The device is running a fork of Android – but not the fork adapted for wearables! The glasses also come with a stock mouse (cordless), and you are in fact greeted by a plain desktop. The cordless mouse does work really well though, but I thankfully had the sense to order a $5 air-mouse (read: remote control for Android devices), or I would have gone insane trying to exit applications properly.

What you can do is download some augmented reality apps from Google Play. These will tap into the camera on the glasses, and you can play a few games. But that’s really it. I noticed that the outlet had changed the title and text for these glasses a few days before they arrived here, so the whole deal is a little bit fishy. Looking at the instruction leaflet, these glasses have been sold as “movie glasses”. I would even go so far as to say they have nothing to do with augmented reality.

Media glasses

Augmented reality aside, there are interesting uses for glasses like this. If the field of view is good enough, they could make for a novel screen “on the road”. I mean, if you plug in a hybrid USB dongle that gives you both keyboard and mouse, there is nothing in the way of using the glasses as a monitor / travel PC. You have the same apps that you enjoy on your phone; a modern browser that gives you access to cloud services etc.

The glasses also have an SD card slot which is super handy – and 2Gb onboard storage. So if you are taking a flight from Europe to Australia and want to tune out noise and watch movies – these glasses might just be the ticket.

glasses2

The audio works well

I must admit it was fun to install Netflix on these and kick back. But this is also when I started to have some issues.

The first issue is that there is no focal lens involved. You are literally looking at two tiny screens. And if you wear regular glasses like I do, watching without them is a complete blur. I had to use contact lenses (which I hate) to get any use out of these. But if your eyesight is fine, you will no doubt have a better experience.

For me, being 100% dependent on my regular glasses, it actually makes more sense to buy a cheap, second-hand Samsung Galaxy Edge, which was designed to be used as a proper VR display, and permanently fix it in a cheap Samsung VR casing. Even the most rudimentary VR casing offers focal lenses, so you can adjust the focus to compensate for poor eyesight.

The second issue has to do with display resolution. If you have 20/20 eyesight, then a high resolution is wonderful. But in my case I would actually see better if the resolution was slightly lower. Sadly the device seems fixed to what I can only presume is 1600×1024 (or something like that), and there are no options for changing the resolution, offsetting the display or skewing the picture. Again, these are factors that only become important if you have poor eyesight.

Audio

The way they solved audio is actually quite good. On each arm of the glasses you have an audio-jack out, and the kit comes with two small earbuds. And again – if you are on a long flight and just want to snuggle up in your own world – this works really well.

If you have ear-pods like I do, you can use them via the standard BT interface. But I noticed that there was a slight lag when using these; no doubt the CPU had problems handling audio when playing a full HD movie. The lag was not there when I used the normal jack – so the culprit is probably the BT device cache.

Gaming?

I’m not a huge gamer myself, I mostly play games like Destiny on the Playstation. On the odd occasion that I jump into other games, it’s mostly retro stuff. And I have a house pretty much packed with Amiga, Silicon Graphics, and more arcade hardware than god ever intended a person to own.

Having said that, the device is capable of both native Android games – and emulation. I had no problem installing UAE (Unix Amiga Emulator), and it’s more than powerful enough to emulate an A1200 with AGA (advanced graphics architecture).

I didn’t test the casting option – because the device can display-cast to your living room TV. But somehow it seems backwards using these as a casting source – when you already have a supercomputer in your pocket. Any modern phone, be it a Samsung or Apple device, will outperform whatever is powering these glasses – so if gaming is your thing, look elsewhere.

Final words

glasses3

These glasses have potential. Or perhaps better phrased – the technology holds a lot of promise. If they had opted for focal lenses and a wider field of vision, they would have been a fantastic experience. I have no problem imagining this type of tech replacing monitors in the near future – at least for movie experiences.

I must admit it’s really tricky to hammer down a verdict. On one hand they are really fun: you can install Netflix, browse the web and watch movies if you copy them over to an SD card (the glasses come with a 16GB SD card). You have mouse control and BT, and I have no problem seeing myself on a flight to Hong Kong enjoying a movie marathon wearing these.

But are they worth $250? I would have to say no. A fair price would be something in the $70 region. If they corrected the lenses I would have no problem buying a pair at $99. And if they expanded the field of vision to cover the width of the glasses – I would absolutely pick them up at $150. But $250 for this? Sadly I can’t say they are worth the money.

I was also surprised to find pornhub as a pre-defined shortcut in the browser (I mean, who does that?). It made me laugh at first, thinking it was a cheeky joke – but as a parent who could have bought these for a child, it is utterly unacceptable. It’s not the first time I have found smut on a device from Asia. But yeah, a bit creepy.

So, I would have to give them a 3 out of 6 verdict. If you have money to burn and a long flight ahead, then by all means – they will give you a comfy way of watching movies during the flight. But the technology is (for lack of a better word) premature.

As for augmented reality – forget it. You are better off stuffing your phone inside a $100 Samsung VR casing. The official Samsung Galaxy Edge casing probably costs next to nothing by now. And for $250 you should have no problem sourcing a used Galaxy Edge phone too. Which will be 100 times better than this.

I started this post citing the inherent differences between nationalities in what they enjoy, but I must admit that through these, I can see why VR holds such potential. I can’t see myself strapping on a full suit, helmet and gloves just to play a game or do some work. But glasses like these (just not these particular ones) are absolutely in the vicinity of “practical”. Just a damn shame they didn’t use a full-width LCD with focal lenses; then I would have promoted them.

Right now: a fun curiosity, good for watching the odd movie if your eyesight is perfect – but for the rest of us, it’s just not worth the dough.

 

30% discount on all RemObjects products!

July 8, 2019 Leave a comment

This is brilliant. RemObjects is giving a whopping 30% discount on all products!

This means you can now pick up RemObjects Remoting Framework, Data Abstract, Hydra or the Elements compiler toolchain – with a massive 30% saving!

These are battle-hardened, enterprise level solutions that have been polished over years and they are in constant development. Each solution integrates seamlessly into Embarcadero Delphi and provides a smooth path to delivering quality products in days rather than weeks.

But you better hurry because it’s only valid for one week (!)

Use the coupon code: “DelphiDeveloper”

66825092_10156336639680906_8015817715019153408_o

Use the Delphi Developer coupon to get 30% discount – click here

 

A Delphi propertybag

July 7, 2019 14 comments

A long, long time ago, way back in the previous century, I often had to adjust a Visual Basic project my company maintained. Going from object-pascal to VB was more than a little debilitating; Visual Basic was not a compiled language like Delphi is, and it lacked more or less every feature you needed to produce good software.

source

I could probably make a VB clone using Delphi pretty easily. But I think the world has experienced enough suffering, no need to add more evil to the universe

Having said that, I have always been a huge fan of Basic (it was my first language after all, it’s what schools taught in the 70s and 80s). I think it was a terrible mistake for Microsoft to retire Basic as a language, because it’s a great way to teach kids the fundamentals of programming.

Visual Basic is still there though, available for the .Net framework, but to call it Basic is an insult to the likes of GFA Basic, Amos Basic and Blitz Basic – the mighty compilers of the past. If you enjoyed Basic before Microsoft pushed out the monstrosity that is Visual Basic, then perhaps swing by GitHub and pick up a copy of BlitzBasic? BlitzBasic is a completely different beast. It compiles to machine code, allows inline assembly, and has been wildly popular with game developers over the years.

A property bag

The only feature that I found somewhat useful in Visual Basic was an object called a propertybag. It’s just a fancy name for a dictionary, but it had a couple of redeeming factors beyond lookup ability, like being able to load name-value pairs from a string, recognizing datatypes and exposing type-aware read/write methods. Nothing fancy, but handy when dealing with database connection strings, shell parameters and the like.

So you could feed it strings like this:

first=12;second=hello there;third=3.14

And the class would parse out the names and values, stuff them in a dictionary, and you could easily extract the data you needed. Nothing fancy, but handy on rare occasions.

A Delphi version

I’m mostly porting code from Delphi to Oxygene these days, but here is my Delphi implementation of the propertybag object. Please note that I haven’t bothered to implement the propertybag available in .Net. The Delphi version below is based on the Visual Basic 6 version, with some dependency injection thrown in for good measure.

unit fslib.params;

interface

{.$DEFINE SUPPORT_URI_ENCODING}

uses
  System.SysUtils,
  System.Classes,
  Generics.Collections;

type

  (* Exceptions *)
  EPropertybag           = class(exception);
  EPropertybagReadError  = class(EPropertybag);
  EPropertybagWriteError = class(EPropertybag);
  EPropertybagParseError = class(EPropertybag);

  (* Datatypes *)
  TPropertyBagDictionary = TDictionary<string, string>;

  IPropertyElement = interface
    ['{C6C937DF-50FA-4984-BA6F-EBB0B367D3F3}']
    function  GetAsInt: integer;
    procedure SetAsInt(const Value: integer);

    function  GetAsString: string;
    procedure SetAsString(const Value: string);

    function  GetAsBool: boolean;
    procedure SetAsBool(const Value: boolean);

    function  GetAsFloat: double;
    procedure SetAsFloat(const Value: double);

    function  GetEmpty: boolean;

    property Empty: boolean read GetEmpty;
    property AsFloat: double read GetAsFloat write SetAsFloat;
    property AsBoolean: boolean read GetAsBool write SetAsBool;
    property AsInteger: integer read GetAsInt write SetAsInt;
    property AsString: string read GetAsString write SetAsString;
  end;

  TPropertyBag = Class(TInterfacedObject)
  strict private
    FLUT:       TPropertyBagDictionary;
  strict protected
    procedure   Parse(NameValuePairs: string);
  public
    function    Read(Name: string): IPropertyElement;
    function    Write(Name: string; Value: string): IPropertyElement;

    procedure   SaveToStream(const Stream: TStream);
    procedure   LoadFromStream(const Stream: TStream);
    function    ToString: string; override;
    procedure   Clear; virtual;

    constructor Create(NameValuePairs: string); virtual;
    destructor  Destroy; override;
  end;

implementation

{$IFDEF SUPPORT_URI_ENCODING}
uses
  system.NetEncoding;
{$ENDIF}

const
  cnt_err_sourceparameters_parse =
  'Failed to parse input, invalid or damaged text error [%s]';

  cnt_err_sourceparameters_write_id =
  'Write failed, invalid or empty identifier error';

  cnt_err_sourceparameters_read_id =
  'Read failed, invalid or empty identifier error';

type

  TPropertyElement = class(TInterfacedObject, IPropertyElement)
  strict private
    FName:      string;
    FData:      string;
    FStorage:   TPropertyBagDictionary;
  strict protected
    function    GetEmpty: boolean; inline;

    function    GetAsInt: integer; inline;
    procedure   SetAsInt(const Value: integer); inline;

    function    GetAsString: string; inline;
    procedure   SetAsString(const Value: string); inline;

    function    GetAsBool: boolean; inline;
    procedure   SetAsBool(const Value: boolean); inline;

    function    GetAsFloat: double; inline;
    procedure   SetAsFloat(const Value: double); inline;

  public
    property    AsFloat: double read GetAsFloat write SetAsFloat;
    property    AsBoolean: boolean read GetAsBool write SetAsBool;
    property    AsInteger: integer read GetAsInt write SetAsInt;
    property    AsString: string read GetAsString write SetAsString;
    property    Empty: boolean read GetEmpty;

    constructor Create(const Storage: TPropertyBagDictionary; Name: string; Data: string); overload; virtual;
    constructor Create(Data: string); overload; virtual;
  end;

//#############################################################################
// TPropertyElement
//#############################################################################

constructor TPropertyElement.Create(Data: string);
begin
  inherited Create;
  FData := Data.Trim();
end;

constructor TPropertyElement.Create(const Storage: TPropertyBagDictionary;
  Name: string; Data: string);
begin
  inherited Create;
  FStorage := Storage;
  FName := Name.Trim().ToLower();
  FData := Data.Trim();
end;

function TPropertyElement.GetEmpty: boolean;
begin
  result := FData.Length < 1;
end;

function TPropertyElement.GetAsString: string;
begin
  result := FData;
end;

procedure TPropertyElement.SetAsString(const Value: string);
begin
  if Value <> FData then
  begin
    FData := Value;
    if FName.Length > 0 then
    begin
      if FStorage <> nil then
        FStorage.AddOrSetValue(FName, Value);
    end;
  end;
end;

function TPropertyElement.GetAsBool: boolean;
begin
  TryStrToBool(FData, result);
end;

procedure TPropertyElement.SetAsBool(const Value: boolean);
begin
  FData := BoolToStr(Value, true);

  if FName.Length > 0 then
  begin
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
  end;
end;

function TPropertyElement.GetAsFloat: double;
begin
  TryStrToFloat(FData, result);
end;

procedure TPropertyElement.SetAsFloat(const Value: double);
begin
  FData := FloatToStr(Value);
  if FName.Length > 0 then
  begin
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
  end;
end;

function TPropertyElement.GetAsInt: integer;
begin
  TryStrToInt(FData, Result);
end;

procedure TPropertyElement.SetAsInt(const Value: integer);
begin
  FData := IntToStr(Value);
  if FName.Length > 0 then
  begin
    if FStorage <> nil then
      FStorage.AddOrSetValue(FName, FData);
  end;
end;

//#############################################################################
// TPropertyBag
//#############################################################################

constructor TPropertyBag.Create(NameValuePairs: string);

begin
  inherited Create;
  FLUT := TDictionary<string, string>.Create();

  NameValuePairs := NameValuePairs.Trim();
  if NameValuePairs.Length > 0 then
    Parse(NameValuePairs);
end;

destructor TPropertyBag.Destroy;
begin
  FLut.Free;
  inherited;
end;

procedure TPropertyBag.Clear;
begin
  FLut.Clear;
end;

procedure TPropertyBag.Parse(NameValuePairs: string);
var
  LList:      TStringList;
  x:          integer;
  LId:        string;
  LValue:     string;
  LOriginal:  string;
  {$IFDEF SUPPORT_URI_ENCODING}
  LPos:       integer;
  {$ENDIF}
begin
  // Reset content
  FLUT.Clear();

  // Make a copy of the original text
  LOriginal := NameValuePairs;

  // Trim and prepare
  NameValuePairs := NameValuePairs.Trim();

  // Anything to work with?
  if NameValuePairs.Length > 0 then
  begin
    {$IFDEF SUPPORT_URI_ENCODING}
    // Check if the data is URL-encoded
    LPos := pos('%', NameValuePairs);
    if (LPos >= low(NameValuePairs))
    and (LPos <= high(NameValuePairs)) then
      NameValuePairs := TNetEncoding.URL.Decode(NameValuePairs);
    {$ENDIF}

    if NameValuePairs.Length > 0 then
    Begin
      (* Populate our lookup table *)
      LList := TStringList.Create;
      try
        LList.Delimiter := ';';
        LList.StrictDelimiter := true;
        LList.DelimitedText := NameValuePairs;

        if LList.Count = 0 then
          raise EPropertybagParseError.CreateFmt(cnt_err_sourceparameters_parse, [LOriginal]);

        try
          for x := 0 to LList.Count-1 do
          begin
            LId := LList.Names[x].Trim().ToLower();
            if (LId.Length > 0) then
            begin
              LValue := LList.ValueFromIndex[x].Trim();
              Write(LId, LValue);
            end;
          end;
        except
          on e: exception do
          raise EPropertybagParseError.CreateFmt(cnt_err_sourceparameters_parse, [LOriginal]);
        end;
      finally
        LList.Free;
      end;
    end;
  end;
end;

function TPropertyBag.ToString: string;
var
  LItem: TPair<string, string>;
begin
  setlength(result, 0);
  for LItem in FLut do
  begin
    if LItem.Key.Trim().Length > 0 then
    begin
      result := result + Format('%s=%s;', [LItem.Key, LItem.Value]);
    end;
  end;
end;

procedure TPropertyBag.SaveToStream(const Stream: TStream);
var
  LData: TStringStream;
begin
  LData := TStringStream.Create(ToString(), TEncoding.UTF8);
  try
    LData.SaveToStream(Stream);
  finally
    LData.Free;
  end;
end;

procedure TPropertyBag.LoadFromStream(const Stream: TStream);
var
  LData: TStringStream;
begin
  LData := TStringStream.Create('', TEncoding.UTF8);
  try
    LData.LoadFromStream(Stream);
    Parse(LData.DataString);
  finally
    LData.Free;
  end;
end;

function TPropertyBag.Write(Name: string; Value: string): IPropertyElement;
begin
  Name := Name.Trim().ToLower();
  if Name.Length > 0 then
  begin
    if not FLUT.ContainsKey(Name) then
      FLut.Add(Name, Value);

    result := TPropertyElement.Create(FLut, Name, Value) as IPropertyElement;
  end else
  raise EPropertybagWriteError.Create(cnt_err_sourceparameters_write_id);
end;

function TPropertyBag.Read(Name: string): IPropertyElement;
var
  LData:  String;
begin
  Name := Name.Trim().ToLower();
  if Name.Length > 0  then
  begin
    if FLut.TryGetValue(Name, LData) then
      result := TPropertyElement.Create(LData) as IPropertyElement
    else
      raise EPropertybagReadError.Create(cnt_err_sourceparameters_read_id);
  end else
  raise EPropertybagReadError.Create(cnt_err_sourceparameters_read_id);
end;


end.
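For completeness, here is a minimal usage sketch. The name-value string matches the example earlier in the post; the surrounding procedure is just scaffolding for illustration.

uses
  fslib.params;

procedure PropertyBagDemo;
var
  LBag:    TPropertyBag;
  LFirst:  integer;
  LSecond: string;
begin
  LBag := TPropertyBag.Create('first=12;second=hello there;third=3.14');
  try
    // type-aware reads
    LFirst  := LBag.Read('first').AsInteger;   // 12
    LSecond := LBag.Read('second').AsString;   // 'hello there'
    WriteLn(LFirst, ' ', LSecond);

    // add a new value
    LBag.Write('enabled', 'true');

    // serialize everything back to a single string (pair order is not guaranteed)
    WriteLn(LBag.ToString());
  finally
    LBag.Free;
  end;
end;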

BTree for Delphi

July 7, 2019 4 comments
lookup

Click here to read

A few weeks back I posted an article on the RemObjects blog regarding universal code, and how, with a little bit of care, you can write code that compiles easily with Oxygene, Delphi and Freepascal – with emphasis on Oxygene.

The example I used was a BTree class that I originally ported from Delphi to Smart Pascal, and then finally to Oxygene to run under WebAssembly.

Long story short, I was asked if I could port the code back to Delphi in its more or less universal form. Naturally there are small differences here and there, but nothing special that distinctly separates the Delphi version from Oxygene or Smart Pascal.

Why this version?

If you google BTree and Delphi you will find loads of implementations. They all operate more or less identically, using records and pointers for optimal speed. I decided to base my version on classes for convenience, but it shouldn’t be difficult to revert that to use records if you absolutely need it.

What I like about this BTree implementation is that it’s very functional. It’s easy to traverse the nodes using the ForEach() method; you can add items using a number as an identifier, but it also supports string identifiers.

I also changed the typical data reference. The data each node represents is usually a pointer. I changed this to a variant to make it more flexible.

Well, here is the Delphi version as promised. Happy to help.

unit btree;

interface

uses
  System.Generics.Collections,
  System.Sysutils,
  System.Classes;

type

  // BTree leaf object
  TQTXBTreeNode = class(TObject)
  public
    Identifier: integer;
    Data:       variant;
    Left:       TQTXBTreeNode;
    Right:      TQTXBTreeNode;
  end;

  [Weak]
  TQTXBTreeProcessCB = reference to procedure (const Node: TQTXBTreeNode; var Cancel: boolean);

  EBTreeError = class(Exception);

  TQTXBTree = class(TObject)
  private
    FRoot:    TQTXBTreeNode;
    FCurrent: TQTXBTreeNode;
  protected
    function  GetEmpty: boolean;  virtual;
    function  GetPackedNodes: TList<TQTXBTreeNode>;

  public
    property  Root: TQTXBTreeNode read FRoot;
    property  Empty: boolean read GetEmpty;

    function  Add(const Ident: integer; const Data: variant): TQTXBTreeNode; overload; virtual;
    function  Add(const Ident: string; const Data: variant): TQTXBTreeNode; overload; virtual;

    function  Contains(const Ident: integer): boolean; overload; virtual;
    function  Contains(const Ident: string): boolean; overload; virtual;

    function  Remove(const Ident: integer): boolean; overload; virtual;
    function  Remove(const Ident: string): boolean; overload; virtual;

    function  Read(const Ident: integer): variant; overload; virtual;
    function  Read(const Ident: string): variant; overload; virtual;

    procedure Write(const Ident: string; const NewData: variant); overload; virtual;
    procedure Write(const Ident: integer; const NewData: variant); overload; virtual;

    procedure Clear; overload; virtual;
    procedure Clear(const Process: TQTXBTreeProcessCB); overload; virtual;

    function  ToDataArray: TList<variant>;
    function  Count: integer;

    procedure ForEach(const Process: TQTXBTreeProcessCB);

    destructor Destroy; override;
  end;

implementation

//#############################################################################
// TQTXBTree
//#############################################################################

destructor TQTXBTree.Destroy;
begin
  if FRoot <> nil then
    Clear();
  inherited;
end;

procedure TQTXBTree.Clear;
var
  lTemp: TList<TQTXBTreeNode>;
  x:  integer;
begin
  if FRoot <> nil then
  begin
    // pack all nodes to a linear list
    lTemp := GetPackedNodes();

    try
      // release each node
      for x := 0 to ltemp.Count-1 do
      begin
        lTemp[x].Free;
      end;
    finally
      // dispose of list
      lTemp.Free;

      // reset pointers
      FCurrent := nil;
      FRoot := nil;
    end;
  end;
end;

procedure TQTXBTree.Clear(const Process: TQTXBTreeProcessCB);
begin
  ForEach(Process);
  Clear();
end;

function TQTXBTree.GetPackedNodes: TList;
var
  LData: TList<TQTXBTreeNode>;
begin
  LData := TList<TQTXBTreeNode>.Create();
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
  begin
    LData.Add(Node);
    Cancel  := false;
  end);
  result := LData;
end;

function TQTXBTree.GetEmpty: boolean;
begin
  result := FRoot = nil;
end;

function TQTXBTree.Count: integer;
var
  LCount: integer;
begin
  LCount := 0;
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      inc(LCount);
      Cancel  := false;
    end);
  result := LCount;
end;

function TQTXBTree.ToDataArray: TList;
var
  Data: TList<variant>;
begin
  Data := TList<variant>.Create();

  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      Data.add(Node.data);
      Cancel := false;
    end);
  result := data;
end;

function TQTXBTree.Add(const Ident: string; const Data: variant): TQTXBTreeNode;
begin
  result := Add( Ident.GetHashCode(), Data);
end;

function TQTXBTree.Add(const Ident: integer; const Data: variant): TQTXBTreeNode;
var
  lNode:  TQTXBtreeNode;
begin
  LNode := TQTXBTreeNode.Create();
  LNode.Identifier := Ident;
  LNode.Data := data;

  if FRoot = nil then
    FRoot := LNode;

  FCurrent := FRoot;

  while true do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      if (FCurrent.left = nil) then
      begin
        FCurrent.left := LNode;
        break;
      end else
      FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      if (FCurrent.right = nil) then
      begin
        FCurrent.right := LNode;
        break;
      end else
      FCurrent := FCurrent.right;
    end else
    break;
  end;
  result := LNode;
end;

function TQTXBTree.Read(const Ident: string): variant;
begin
  result := Read( Ident.GetHashCode() );
end;

function TQTXBTree.Read(const Ident: integer): variant;
begin
  FCurrent := FRoot;
  while FCurrent <> nil do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      result := FCurrent.Data;
      break;
    end
  end;
end;

procedure TQTXBTree.Write(const Ident: string; const NewData: variant);
begin
  Write( Ident.GetHashCode(), NewData);
end;

procedure TQTXBTree.Write(const Ident: integer; const NewData: variant);
begin
  FCurrent := FRoot;
  while (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      FCurrent.Data := NewData;
      break;
    end
  end;
end;

function  TQTXBTree.Contains(const Ident: string): boolean;
begin
  result := Contains( Ident.GetHashCode() );
end;

function TQTXBTree.Contains(const Ident: integer): boolean;
begin
  result := false;
  if FRoot <> nil then
  begin
    FCurrent := FRoot;

    while ( (not Result) and (FCurrent <> nil) ) do
    begin
      if (Ident < FCurrent.Identifier) then
        FCurrent := FCurrent.Left
      else
      if (Ident > FCurrent.Identifier) then
        FCurrent := FCurrent.Right
      else
      begin
        Result := true;
        break;
      end
    end;
  end;
end;

function TQTXBTree.Remove(const Ident: string): boolean;
begin
  result := Remove( Ident.GetHashCode() );
end;

function TQTXBTree.Remove(const Ident: integer): boolean;
var
  LFound: boolean;
  LParent: TQTXBTreeNode;
  LReplacement,
  LReplacementParent: TQTXBTreeNode;
  LChildCount: integer;
begin
  FCurrent := FRoot;
  LFound := false;
  LParent := nil;
  LReplacement := nil;
  LReplacementParent := nil;

  while (not LFound) and (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.right;
    end else
    LFound := true;

    if LFound then
    begin
      LChildCount := 0;

      if (FCurrent.left <> nil) then
        inc(LChildCount);

      if (FCurrent.right <> nil) then
        inc(LChildCount);

      if FCurrent = FRoot then
      begin
        case (LChildCount) of
        0:  begin
              FRoot := nil;
            end;
        1:  begin
              if FCurrent.right = nil then
                FRoot := FCurrent.left
              else
                FRoot := FCurrent.Right;
            end;
        2:  begin
              LReplacement := FRoot.left;
              while (LReplacement.right <> nil) do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;

            if (LReplacementParent <> nil) then
            begin
              LReplacementParent.right := LReplacement.Left;
              LReplacement.right := FRoot.Right;
              LReplacement.left := FRoot.left;
            end else
            LReplacement.right := FRoot.right;
          end;
        end;

        FRoot := LReplacement;
      end else
      begin
        case LChildCount of
        0:  if (FCurrent.Identifier < LParent.Identifier) then
            Lparent.left  := nil else
            LParent.right := nil;
        1:  if (FCurrent.Identifier < LParent.Identifier) then
            begin
              if (FCurrent.Left = NIL) then
              LParent.left := FCurrent.Right else
              LParent.Left := FCurrent.Left;
            end else
            begin
              if (FCurrent.Left = NIL) then
              LParent.right := FCurrent.Right else
              LParent.right := FCurrent.Left;
            end;
        2:  begin
              LReplacement := FCurrent.left;
              LReplacementParent := FCurrent;

              while LReplacement.right <> nil do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;
              LReplacementParent.right := LReplacement.left;

              LReplacement.right := FCurrent.right;
              LReplacement.left := FCurrent.left;

              if (FCurrent.Identifier < LParent.Identifier) then
                LParent.left := LReplacement
              else
                LParent.right := LReplacement;
            end;
          end;
        end;
      end;
  end;

  result := LFound;
end;

procedure TQTXBTree.ForEach(const Process: TQTXBTreeProcessCB);

  function ProcessNode(const Node: TQTXBTreeNode): boolean;
  begin
    result := false;
    if Node <> nil then
    begin
      if Node.left <> nil then
      begin
        result := ProcessNode(Node.left);
        if result then
          exit;
      end;

      Process(Node, result);
      if result then
        exit;

      if (Node.right <> nil) then
      begin
        result := ProcessNode(Node.right);
        if result then
          exit;
      end;
    end;
  end;

begin
  ProcessNode(FRoot);
end;

end.
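And a quick usage sketch, just to show the shape of the API. The identifiers and values are made up; the interesting part is mixing numeric and string identifiers and walking the tree with ForEach():

uses
  btree;

procedure BTreeDemo;
var
  LTree:  TQTXBTree;
  LValue: variant;
begin
  LTree := TQTXBTree.Create();
  try
    // numeric identifiers
    LTree.Add(100, 'first item');
    LTree.Add(50, 'second item');

    // string identifiers (hashed internally via GetHashCode)
    LTree.Add('username', 'quartex');

    if LTree.Contains(50) then
      LValue := LTree.Read(50);   // 'second item'

    // visit every node; set Cancel to true to stop early
    LTree.ForEach(
      procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
      begin
        WriteLn(Node.Identifier);
        Cancel := false;
      end);
  finally
    LTree.Free;
  end;
end;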

Calling node.js from Delphi

July 6, 2019 Leave a comment

We got a good question about how to start a node.js program from Delphi on our Facebook group today (third one in a week?). When you have been coding for years you often forget that things like this might not be immediately obvious. Hopefully I can shed some light on the options in this post.

Node or chrome?

node

Just to be clear: node.js has nothing to do with Chrome or Chromium Embedded. Chrome is a web browser, a completely visual environment and ecosystem.

Node.js is the complete opposite. It is purely a shell based environment, meaning that it’s designed to run services and servers, with emphasis on the latter.

The only thing node.js and chrome have in common, is that they both use the V8 JavaScript runtime engine to load, JIT compile and execute scripts at high speed. Beyond that, they are utterly alien to each other.

Can node.js be embedded into a Delphi program?

Technically there is nothing stopping a C/C++ developer from compiling the node.js core system as C++ Builder compatible .obj files; files that can then be linked into a Delphi application through references. But this also requires a bit of scaffolding, like adding support for malloc_, free_ and a few other procedures – so that your .obj files use the same memory manager as your Delphi code. But until someone does just that and publishes it, I’m afraid you are stuck with two options:

  • Use a library called Toby, that keeps node.js in a single DLL file. This is the most practical way if you insist on hosting your own version of node.js
  • Add node.js as a prerequisite and give users the option to locate the node.exe in your application’s preferences. This is the way I would go, because you really don’t want to force users to stick with your potentially outdated or buggy build.

So yes, you can use Toby and just add the Toby DLL file to your program folder, but I have to strongly advise against that. There is no point setting yourself up for maintaining a whole separate programming language just because you want JavaScript support.

“How many in your company can write high quality WebAssembly modules?”

If all you want to do is support JavaScript in your application, then I would much rather install Besen into Delphi. Besen is a JavaScript runtime engine written in Freepascal. It is fully compatible with Delphi, and follows the ECMA standard to the letter. So it is extremely compatible, fast and easy to use.

Like all Delphi components Besen is compiled into your application, so you have no dependencies to worry about.

Starting a node.js script

The easiest way to start a node.js script is to simply shell-execute out of your Delphi application. This can be done as easily as:

ShellExecute(Handle, 'open', PChar('node.exe'), pchar('script.js'), nil, SW_SHOW);

This is more than enough if you just want to start a service, server or do some work that doesn’t require that you capture the result.

If you need to capture the result – the data your node.js program emits on stdout – there is a nice component in the Jedi Component Library. There are also plenty of examples online on how to do that.

If you need even further communication, you need to look for a shell-execute that supports pipes. All node.js programs have something called a message-channel in the JavaScript world. In reality though, this is just a named pipe that is automatically created when your script starts (with the same moniker as the PID [process identifier]).

If you opt for the latter, you have a direct, full-duplex message channel straight into your node.js application. You just have to agree with yourself on a protocol so that your Delphi code understands what node.js is saying, and vice versa.

UDP or TCP

If you don’t want to get your hands dirty with named pipes and rolling your own protocol, you can just use UDP to let your Delphi application communicate with your node.js process. UDP is practically without cost since it’s fundamental to all networking stacks, and in your case you will be shipping messages purely between processes on localhost. Meaning: packets are never sent on the network, but rather delegated between processes on the same machine.

In that case, I suggest you pass in the port you want your UDP server to listen on, so that your node.js service acts as the server. A simple command-line statement like:

node.exe myservice.js 8090

Inside node.js you can set up a UDP server with very little fuss:


function setupServer(port) {
  var os = require("os");
  var dgram = require("dgram");
  var socket = dgram.createSocket("udp4");

  var MULTICAST_HOST = "224.0.0.236";
  var BROADCAST_HOST = "255.255.255.255";
  var ALL_PORT = 60540;
  var MULTICAST_TTL = 1; // Local network

  socket.bind(port);
  socket.on('listening', function() {
    socket.setMulticastLoopback(true);
    socket.setMulticastTTL(MULTICAST_TTL);
    socket.addMembership(MULTICAST_HOST);
    socket.setBroadcast(true);
  });
  socket.on('message', parseMessage);
}

function parseMessage(message, rinfo) {
  try {
    var messageObject = JSON.parse(message);
    var eventType = messageObject.eventType;
    // react to eventType here
  } catch(e) {
    // ignore messages that are not valid JSON
  }
}

Note: the code above assumes a JSON text message.

You can then use any Delphi UDP client to communicate with your node.js server: Indy works fine, and Synapse is a good library with less overhead. There are many options here.
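
As a rough sketch of the Delphi side, assuming the node.js service above is listening on UDP port 8090 on localhost and using Indy's TIdUDPClient, sending a JSON message could look something like this (the procedure name and JSON layout are just illustrative):

uses
  IdUDPClient;

// Sends a single JSON-formatted event to the node.js UDP service.
// The port (8090) and the "eventType" field match the node.js example above.
procedure SendEventToNode(const EventType: string);
var
  lClient: TIdUDPClient;
begin
  lClient := TIdUDPClient.Create(nil);
  try
    lClient.Host := '127.0.0.1'; // localhost only, packets never hit the wire
    lClient.Port := 8090;
    lClient.Send('{"eventType": "' + EventType + '"}');
  finally
    lClient.Free;
  end;
end;

The node.js side then picks the datagram up in parseMessage() and can dispatch on the eventType value.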

Do I have to learn Javascript to use node.js?

If you download DWScript you can hook up the JS codegen library (see the library folder in the DWScript repository), and use that to compile DWScript (object pascal) to kick-ass JavaScript. This is the same compiler that was used in Smart Mobile Studio.

“Adding WebAssembly to your resume is going to be a hell of a lot more valuable in the years to come than C# or Java”

Another alternative is Freepascal: its pas2js project lets you compile ordinary object pascal to JavaScript. Naturally there are a few things to keep in mind, both for DWScript and Freepascal, like avoiding pointers. But clean object pascal compiles just fine, as the small sketch below shows.
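
To give a feel for what "clean" object pascal means in this context, here is a tiny sketch of the kind of code both DWScript and pas2js handle without modification: plain classes, no pointers, no inline assembler. TGreeter is purely an illustrative class, and the console output call is left out since it differs between the two dialects.

type
  TGreeter = class
  private
    FName: string;
  public
    constructor Create(const AName: string);
    function Greeting: string;
  end;

constructor TGreeter.Create(const AName: string);
begin
  inherited Create;
  FName := AName;
end;

function TGreeter.Greeting: string;
begin
  result := 'Hello from ' + FName;
end;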

If JavaScript is not your cup of tea, or you simply don't have time to learn the delicate nuances between the DOM (document object model, used by browsers) and the 100% package oriented approach deployed by node.js, then you can skip straight to WebAssembly.

RemObjects Software has a kick-ass WebAssembly compiler, perfect if you don't have the energy or time to learn JavaScript. As of writing, this is the fastest and most powerful toolchain available. And I have tested them all.

WebAssembly, no Javascript needed

RO-Single-Gear-512

You might remember Oxygene? It used to be shipped with Delphi as a way to target Microsoft CLR (common language runtime) and the .net framework.

Since then Oxygene and the RemObjects toolchain has evolved dramatically and is now capable of a lot more than CLR support.

  • You can compile to raw, llvm optimized machine code for 8 platforms
  • You can compile to CLR/.Net
  • You can compile to Java bytecodes
  • You can compile to WebAssembly!

WebAssembly is not Javascript, it’s important to underline that. WebAssembly was created especially for developers using traditional languages, so that traditional compilers can emit web friendly, binary code. Unlike Javascript, WebAssembly is a purely binary format. Just like Delphi generates machine-code that is linked into a final executable, WebAssembly is likewise compiled, linked and emitted in binary form.

If that sounds like a sales pitch, it’s not. It’s a matter of practicality.

  • WebAssembly is completely barren out of the box. The runtime environment, be it V8 in the browser or V8 under node.js, gives you nothing. You don't even have WriteLn() to emit text.
  • Google expects compiler makers to provide their own RTL functions, from the fundamental to the advanced. The only thing V8 gives you is a barebones way of referencing objects and functions on the other side, meaning the JS and DOM world. And that's it.

So the reason I'm talking a lot about Oxygene and RemObjects Elements (Elements is the name of the compiler toolchain RemObjects offers) is because it ships with an RTL. So you are not forced to start at actual, literal assembly level.

studio

If you don’t want to study JavaScript, Oxygene and Elements from RemObjects is the solution

RemObjects also delivers a DelphiVCL compatibility framework. This is a clone of the Delphi VCL / Freepascal LCL. Since WebAssembly is still brand new, work is being done on this framework on a daily basis, with updates being issued all the time.

Note: The Delphi VCL framework is not just for WebAssembly. It represents a unified framework that can work anywhere. So if you switch from WebAssembly to say Android, you get the same result.

The most important part of the above, is actually not the visual stuff. I mean, having HTML5 visual controls is cool – but chances are you want to use a library like Sencha, SwiftUI or jQueryUI to compose your forms right? Which means you just want to interface with the widgets in the DOM to set and get values.

jQuery UI Bootstrap

You probably want to use a fancy UI library, like jQuery UI. This works perfectly with Elements because you can reference the controls from your WebAssembly module. You don't have to create TButton, TListbox etc. manually.

The more interesting stuff is actually the non-visual code you get access to. Hundreds of familiar classes from the VCL, painstakingly re-created, and usable from any of the 5 languages Elements supports.

You can check it out here: https://github.com/remobjects/DelphiRTL

Skipping JavaScript all together

I don't believe in single languages any more. There was a time when all you needed was Delphi and a diploma and you were set to conquer the world. But those days are long gone, and a programmer needs to be flexible and have a well-stocked toolbox.

At least try the alternatives before you settle on a phone

Knowing where you want to be is half the journey

The world really doesn't need yet another C# developer. There are millions of C# developers in India alone. C# is just "so what?". Which is also why C# jobs pay less than Delphi or node.js system service jobs.

What you want, is to learn the things others avoid. If JavaScript looks alien and you feel uneasy about the whole thing – that means you are growing as a developer. All new things are learned by venturing outside your comfort zone.

How many in your company can write high quality WebAssembly modules?

How many within one hour driving distance from your office or home are experts at WebAssembly? How many are capable of writing industrial scale, production ready system services for node.js that can scale from a single instance to 1000 instances in a large, clustered cloud environment?

Any idiot can pick up node.js and knock out a service, but with your background from Delphi or C++Builder you have a massive advantage. All those places that can throw an exception, the ones JS devs usually ignore? As a Delphi or Oxygene developer you know better. And when you re-apply that experience under a different language, suddenly you can do stuff others can't. Which makes your skills valuable.

qtx

The Quartex Media Desktop has made even experienced node / web developers gasp. They are not used to writing custom controls and large-scale systems, which is my advantage.

So would you learn JavaScript or just skip to WebAssembly? Honestly? Learn a bit of both. You don't have to be an expert in JavaScript to complement WebAssembly. Just get a cheap book, like "Node.js for beginners" and "JavaScript the good parts" ($20 apiece), and that should be more than enough to cover the JS side of things.

Adding WebAssembly to your resume and having the material to prove you know your stuff, is going to be a hell of a lot more valuable in the years to come than C#, Java or Python. THAT I can guarantee you.

And, we have a wicked cool group on Facebook you can join too: Click here to visit RemObjects Developer.

 

Enumerating network adapters in DWScript/Smart under Node.js

July 5, 2019 Leave a comment

This is something I never had the time to implement under Smart Pascal, but it should be easy enough to patch. If you are using DWScript with the QTX Framework this is already in place. But for Smart users, here is a quick recipe.

First, we need access to the node.js OS module:

//#############################################################################
// Quartex RTL for DWScript
// Written by Jon L. Aasenden, all rights reserved
// This code is released under modified LGPL (see license.txt)
//#############################################################################

unit NodeJS.os;

interface

uses
  NodeJS.Core;

type

  TCpusResultObjectTimes = class external
    property user: Integer;
    property nice: Integer;
    property sys: Integer;
    property idle: Integer;
    property irq: Integer;
  end;

  TCpusResult = class external
    property model: String;
    property speed: Integer;
    property times: TcpusResultObjectTimes;
  end;

  JNetworkInterfaceInfo = class external
    property address:  string;
    property netmask:  string;
    property family:   string;
    property mac:      string;
    property scopeid:  integer;
    property internal: boolean;
    property cidr:     string;
  end;

  Jos_Exports = class external
  public
    function tmpDir: String;
    function hostname: String;
    function &type: String;
    function platform: String;
    function arch: String;
    function release: String;
    function uptime: Integer;
    function loadavg: array of Integer;
    function totalmem: Integer;
    function freemem: Integer;
    function cpus: array of TCpusResult;
    function networkInterfaces: variant;
    property EOL: String;
  end;

function NodeJSOsAPI: Jos_Exports;

implementation

function NodeJSOsAPI: Jos_Exports;
begin
  result := Jos_Exports(RequireModule("os") );
end;

end.

With that in place, we can start enumerating through the adapters. Remember that a PC can have several adapters attached, from a dedicated card to X number of USB wifi sticks.

Here is a little routine that goes through the adapters, and returns the first IPv4 LAN address it finds. This is very useful when writing servers, since you need the IP + port to setup a binding. And yes, you can just call HostName(), but the point here is to know how to run through the adapter array.

function GetMyV4LanIP: string;
begin
  var OSAPI := NodeJSOsAPI();
  var NetAdapters := OSAPI.networkInterfaces();

  for var Adapter in NetAdapters do
  begin
    // Skip loopback device
    if Adapter.Contains('Loopback') then
      continue;

    for var netIntf in NetAdapters[Adapter] do
    begin
      var address := JNetworkInterfaceInfo( NetAdapters[Adapter][netIntf] );
      if not address.internal then
      begin
        // force copy of string
        var lFam: string := string(address.family) + " ";

        // make sure its ipv4
        if lFam.ToLower().Trim() = 'ipv4' then
        begin
          result := address.address + " ";
          result := result.trim();
          break;
        end;
      end;
    end;
  end;

  if result.length < 1 then
    result := '127.0.0.1';
end;
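
To put it to use, a typical pattern is to resolve the address first and then hand it to whatever server you are binding. BindMyServer() below is purely a hypothetical placeholder; the point is simply where GetMyV4LanIP fits in:

procedure SetupServerBinding;
begin
  // Find the first external IPv4 address (falls back to 127.0.0.1)
  var lIP := GetMyV4LanIP();

  // BindMyServer() is a hypothetical placeholder for whatever server
  // class or framework you are actually using
  BindMyServer(lIP, 8090);
end;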

Getting into Node.js from Delphi

July 1, 2019 Leave a comment

Delphi is one of the best development toolchains for Windows. I have been an avid fan of Delphi since it was first released, and before that – Turbo Pascal too. Delphi has a healthy following – and despite popular belief, Delphi scores quite well on the Tiobe Index.

As cool and efficient as Delphi might be, there are situations where native code won't work, or at the very least will be less efficient than the alternatives. Delphi has a broad wingspan, from low-level assembler all the way to classes and generics. But JavaScript and emerging web technology is based on a completely different philosophy, one where native code is regarded as negative since it binds you to hardware.

Getting to grips with the whole JavaScript phenomenon, be it for mobile, embedded or back-end services, can be daunting if all you know is native code. But thankfully there are alternatives that can help you become productive quickly, something I will brush over in this post.

JavaScript without JavaScript

Before we dig into the tools of the trade, I want to cover alternative ways of enjoying the power of node.js and Javascript. Namely by using compilers that can convert code from a traditional language – and emit fully working JavaScript. There are a lot more options than you think:

qtx

Quartex Media Desktop is a complete environment written purely in JavaScript. The server, the cluster and the front-end are all pure JavaScript. A good example of what can be done.

  • Swift compiles for JavaScript, and Apple is doing some amazing things with the new and sexy SwiftUI toolkit. If you know your way around Swift, you can compile for JavaScript
  • Go can likewise be compiled to JS:
    • RemObjects Elements supports the Go language. Elements can target native (llvm), .Net, Java and WebAssembly.
    • Go2Js
    • GopherJs
    • TARDISgo
  • C/C++ can be compiled to asm.js courtesy of EmScripten. It uses clang to first compile your code to llvm bitcode, and then it converts that into asm.js. You have probably seen games like Quake run in the browser? That was asm.js, a kind of precursor to WebAssembly.
  • NS Basic compiles for JavaScript, this is a Visual Basic 6 style environment with its own IDE even

For those coming straight from Delphi, there are a couple of options to pick from:

  • Freepascal (pas2js project)
  • DWScript compiles code to JavaScript, this is the same compiler that we used in Smart Pascal earlier
  • Oxygene, the next generation object-pascal from RemObjects compiles to WebAssembly. This is by far the best option of them all.
studio

I strongly urge you to have a look at Elements, here running in Visual Studio

JavaScript, Asm.js or WebAssembly?

Asm.js is by far the most misunderstood technology in the JavaScript ecosystem, so let me just cover that before we move on:

A few years back JavaScript gained support for memory buffers and typed arrays. This might not sound very exciting, but in terms of speed – the difference is tremendous. The default variable type in JavaScript is what Delphi developers know as Variant. It assumes the datatype of the values you assign to it. Needless to say, there is a lot of overhead when working with variants – so JavaScript suddenly getting proper typed arrays was a huge deal.

It was then discovered that JavaScript could manipulate these arrays and buffers at high speed, providing it only used a subset of the language. A subset that the JavaScript runtime could JIT compile more easily (turn into machine-code).

So what the EmScripten team did was to implement a bytecode based virtual-machine in Javascript, and then they compile C/C++ to bytecodes. I know, it’s a huge project, but the results speak for themselves — before WebAssembly, this was as fast as it got with JavaScript.

WebAssembly

WebAssembly is different from both vanilla JavaScript and Asm.js. First of all, it’s executed at high speed by the browser itself. Not like asm.js where these bytecodes were executed by JavaScript code.

water

Water is a fast, slick and platform independent IDE for Elements. The same IDE for OS X is called Fire. You can use RemObjects Elements from either Visual Studio or Water

Secondly, WebAssembly is completely JIT compiled by the browser or node.js when loading. It's not like Asm.js where some parts are compiled, others are interpreted. WebAssembly runs at full speed and has nothing to do with traditional JavaScript. It's actually a completely separate engine.

Out of all the options on the table, WebAssembly is the technology with the best performance.

Kits and strategies

The first thing you need to be clear about, is what you want to work with. The needs and requirements of a game developer will be very different from a system service developer.

Here are a couple of kits to think about:

  • Mobile developer
    • Implement your mobile applications using Oxygene, compiling for WebAssembly (Elements)
    • RemObjects Remoting SDK for client / server communication
    • Use Freepascal for vanilla JavaScript scaffolding when needed
  • Service developer
    • Implement libraries in Oxygene to benefit from the speed of WebAssembly
    • Use RemObjects Data Abstract to make data-access uniform and fast
    • Use Freepascal for boilerplate node.js logic
  • Desktop developer
    • For platform independent desktop applications, WebAssembly is the way to go. You will need some scaffolding (plain JavaScript) to communicate with the application host – but 99.9% of your code will be better off under WebAssembly.
    • Use Cordova / Phonegap to “bundle” your WebAssembly, HTML5 files and CSS styling into a single, final executable.

The most important part to think about when getting into JavaScript, is to look closely at the benefits and limitation of each technology.

WebAssembly is fast, wicked fast, and lets you write code like you are used to from Delphi. Things like pointers are supported in Elements, which means ordinary code that uses pointers will port over with ease. You are also not bound hand and foot to a particular framework.

For example, EmScripten for C/C++ has almost nothing in terms of UI functionality. The visual part is a custom build of SDL (simple directmedia layer), which fakes the graphics onto an ordinary HTML5 canvas. This makes EmScripten a good candidate for porting games written in C/C++ to the web, but it's less than optimal for writing serious applications.

Setting up the common tools

So far we have looked at a couple of alternatives for getting into the wonderful world of JavaScript in lieu of other languages. But what if you just want to get started with the typical tools JS developers use?

vscode

Visual Studio Code is a pretty amazing code-editor

The first “must have” is Visual Studio Code. This is actually a great example of what you can achieve with JavaScript, because the entire editor and program is written in JavaScript. But I want to stress that this editor is THE editor to get. The way you work with files in JS is very different from Delphi, C# and Java. JavaScript projects are often more fragmented, with less code in each file – organized by name.

typescript

TypeScript was invented by Anders Hejlsberg, who also made Delphi and C#

The next "must have" is without a doubt TypeScript. Personally I'm not too fond of TypeScript, but if ordinary JavaScript makes your head hurt and you want classes and ordinary inheritance, then TypeScript is a step up.

assemblyscript

Next on the list is AssemblyScript. This is a post-processor for TypeScript that converts your code into WebAssembly. It lacks much of the charm and elegance of Oxygene, but I suspect that has to do with old habits. When you have been reading object-pascal for 20 years, you feel more at home there.

node

You will also need to install node.js, which is the runtime engine for running JavaScript as services. Node.js is heavily optimized for writing server software, but it's actually a brilliant way to write services that are multi-platform, because Node.js delivers the same behavior regardless of the underlying operating system.

phonegap

And finally, since you definitely want to convert your JavaScript and/or WebAssembly into a stand-alone executable: you will need Adobe Phonegap.

Visual Studio

No matter if you want to enter JavaScript via Elements or something else, Visual Studio will save you a lot of time, especially if you plan on targeting Azure or Amazon services. Downloading and installing the community edition is a good idea, and you can use that while exploring your options.

dotnet-visual-studio

When it comes to writing system services, you also want to check out NPM, the node.js package manager. The JavaScript ecosystem is heavily package oriented – and npm gives you some 800,000 packages to play with, free of charge.

Just to be clear, npm is a shell command you use to install or remove packages. NPM is also an online repository of said packages, where you can search and find what you need. Most packages are hosted on GitHub, but when you install a package locally into your application folder, npm figures out dependencies etc. automatically for you.

Books, glorious books

41QSvp9fTcL._SX331_BO1,204,203,200_

Last but not least, get some good books. Seriously, it will save you so much time and frustration. Amazon has tons of great books, be it vanilla JavaScript, TypeScript or Node.js: pick some good ones and take the time to consume the material.

And again, I strongly urge you to have a look at Elements when it comes to WebAssembly. WebAssembly is a harsh and barren canvas, and being able to use the Elements RTL is a huge boost.

But regardless of which path you pick, you will always benefit from learning vanilla JavaScript.

 

Two new groups in the Developer family

July 1, 2019 2 comments

Delphi Developer is a group on Facebook that has been going strong for 12+ years. It was one of the first groups on Facebook, created the same week that Facebook allowed groups. With that group well established, it's time to expand and clean up the feed.

RO-Single-Gear-512

Last month I introduced a new group, RemObjects Developer, which is a group for developers that use RemObjects components, like the Remoting SDK, Data Abstract and/or Hydra – but more particularly, developers using Oxygene, C#, Swift, Java or Go via Elements (the RemObjects compiler toolchain).

Two new groups

To further simplify syndication and clean up the feeds (which so far have been a potpourri of many topics, dialects and products), an additional two groups are now in place:

Obviously there will be some overlap. Since FPC and Delphi have much in common and are for the most part compatible, some news will be shared between those groups. But all in all this is to clean up the newsfeed, which has so far been a mix and match of everything.

org

Simple overview of the groups

Node.js Developer is not meant to be purely about vanilla JavaScript. Node.js is ultimately a JavaScript runtime-engine. Which means you can use it to run or host WebAssembly libraries (as produced by Oxygene), or generate code via DWScript or Freepascal. You can think of it as a service-host if you like.

So if you are writing WebAssembly applications using Elements, then the node.js group will no doubt be interesting too. Same goes for DWScript users, Smart Pascal users and Freepascal users – providing web tech is what they like.

What is this Quartex Components?

It's easier to manage multiple groups if you attach them to a parent page. So if you wonder why all the groups say "by Quartex Components", that is just a top-level page that helps me deal with syndication. For some reason Facebook's API only works for pages, not groups. So it's impossible to auto-import news (for example) without a page.

The name, “Quartex Components” is ultimately the name of my personal company. I used to produce security components for Delphi, but decided to open-source those for the community.

So Quartex Components is just an organizational element.

Porting TextCraft to Oxygene

June 30, 2019 Leave a comment

TextCraft is a simple yet powerful text parser, designed for general purpose parsing jobs. I originally implemented it for Delphi, it’s the base-parser for the LDEF bytecode assembler amongst other things. It was ported to Smart Pascal, then Freepascal – and now finally Oxygene.

ldef

The LDEF Assembler is a part of the Quartex Media Desktop

The LDEF assembler and bytecode engine is currently implemented in Smart and compiles for Javascript. It’s a complete assembler and VM allowing coders to approach Asm.js from an established instruction-set. In short: you feed it source-code, it spits out bytecodes that you can execute super fast in either the browser or elsewhere. As long as there is a VM implementation available.

The JavaScript version works really well, especially on node.js. In essence, I don't need to re-compile the toolchain when moving between arm, x86, windows, linux or osx. Think of it as a type of Java bytecodes or CLR bytecodes.

Getting the code to run under Oxygene, means that I can move the whole engine into WebAssembly. The parser, assembler and linker (et-al) can thus run as WebAssembly, and I can use that from my JavaScript front-end code. Best of both worlds – the flamboyant creativity of JavaScript, and the raw speed of WebAssembly.

The port

Before I can move over the top-level parser + assembler etc, the generic parser code has to work. I was reluctant to start because I imagined the porting would take at least a day, but luckily it took me less than an hour. There are a few superficial differences between Smart, Delphi, Freepascal and Oxygene; for example the Copy() function for strings is not a standalone function in Oxygene, instead you use String.SubString(). Functions like High() and Low() on strings likewise have to be refactored.
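
To make that concrete, here is the kind of mechanical change involved (the variable names are just illustrative). Keep in mind that Delphi's Copy() is 1-based while SubString() in the Elements RTL is typically zero-based, so the start index may need a small adjustment when you port:

// Delphi / Freepascal / Smart Pascal:
LFragment := Copy(FData, LStart, LLen);

// Oxygene (Elements RTL):
LFragment := FData.Substring(LStart, LLen);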

But all in all the conversion was straightforward, and TextCraft is now a part of the QTX library for Oxygene. I'll be uploading a commit to Git with the whole shebang soon.

Well, hope the WordPress parser doesn't screw this up too bad.

namespace qtxlib;

//##################################################################
// TextCraft 1.2
//  Written by Jon L. Aasenden
//
//  This is a port of TC 1.2 from Freepascal. TextCraft is initially
//  a Delphi parser framework. The original repository can be found
//  on BitBucket at:
//
//  https://bitbucket.org/hexmonks/main
//
//##################################################################

{$DEFINE USE_INCLUSIVE}
{$define USE_BMARK}

interface

uses
  qtxlib, System, rtl,
  RemObjects.Elements.RTL.Delphi,
  RemObjects.Elements.RTL.Delphi.VCL;

type

  // forward declarations
  TTextBuffer         = class;
  TParserContext      = class;
  TCustomParser       = class;
  TParserModelObject  = class;

    // Exceptions
  ETextBuffer   = class(Exception);
  EModelObject  = class(Exception);

  // Callback functions
  TTextValidCB = function (Item: Char): Boolean;

  // Bookmark datatype
  TTextBufferBookmark = class
  public
    property bbOffset: Integer;
    property bbCol:    Integer;
    property bbRow:    Integer;
    function Equals(const ThisMark: TTextBufferBookmark): Boolean;
  end;

  {.$DEFINE USE_BMARK}

  TTextBuffer = class(TErrorObject)
  private
    FData:      String;
    FOffset:    Integer;
    FLength:    Integer;
    FCol:       Integer;
    FRow:       Integer;
    {$IFDEF USE_BMARK}
    FBookmarks: List<TTextBufferBookmark>;
    {$ENDIF}
    procedure   SetCacheData(NewText: String);
  public
    property    Column: Integer read FCol;
    property    Row: Integer read FRow;
    property    Count: Integer read FLength;
    property    Offset: Integer read FOffset;
    property    CacheData: String read FData write SetCacheData;

    // These functions map directly to the "Current"
    // character where the offset is placed, and is used to
    // write code that makes more sense to human eyes
    function    CrLf: Boolean;
    function    Space: Boolean;
    function    Tab: Boolean;
    function    SemiColon: Boolean;
    function    Colon: Boolean;
    function    ConditionEnter: Boolean;
    function    ConditionLeave: Boolean;
    function    BracketEnter: Boolean;
    function    BracketLeave: Boolean;
    function    Ptr: Boolean;
    function    Punctum: Boolean;
    function    Question: Boolean;
    function    Less: Boolean;
    function    More: Boolean;
    function    Equal: Boolean;
    function    Pipe: Boolean;
    function    Numeric: Boolean;

    function    Empty: Boolean;
    function    BOF: Boolean;
    function    EOF: Boolean;
    function    Current: Char;

    function    First: Boolean;
    function    Last: Boolean;

    // Same as "Next", but does not automatically
    // consume CR+LF, used when parsing textfragments
    function    NextNoCrLf: Boolean;

    // Normal Next function, will automatically consume
    // CRLF when it encounters it
    function    Next: Boolean;

    function    Back: Boolean;

    function    Bookmark: TTextBufferBookmark;
    procedure   Restore(const Mark: TTextBufferBookmark);
    {$IFDEF USE_BMARK}
    procedure   Drop;
    {$ENDIF}

    procedure   ConsumeJunk;
    procedure   ConsumeCRLF;

    function    Compare(const CompareText: String;
                const CaseSensitive: Boolean): Boolean;

    function    Read(var Fragment: Char): Boolean; overload;
    function    Read: Char; overload;
    function    ReadTo(const CB: TTextValidCB; var TextRead: String): Boolean; overload;
    function    ReadTo(const Resignators: TSysCharSet; var TextRead: String): Boolean; overload;
    function    ReadTo(MatchText: String): Boolean; overload;
    function    ReadTo(MatchText: String; var TextRead: String): Boolean; overload;

    function    ReadToEOL: Boolean;   overload;
    function    ReadToEOL(var TextRead: String): Boolean;   overload;

    function    Peek: Char; overload;
    function    Peek(CharCount: Integer; var TextRead: String): Boolean; overload;

    function    NextNonControlChar(const CompareWith: Char): Boolean;
    function    NextNonControlText(const CompareWith: String): Boolean;

    function    ReadWord(var TextRead: String): Boolean;

    function    ReadQuotedString: String;
    function    ReadCommaList(var cList: List<String>): Boolean;

    function    NextLine: Boolean;

    procedure   Inject(const TextToInject: String);

    function    GetCurrentLocation: TTextBufferBookmark;

    function    Trail: String;

    procedure   Clear;
    procedure   LoadBufferText(const NewBuffer: String);

    constructor Create(const BufferText: String); overload; virtual;

    finalizer;
    begin
      {$IFDEF USE_BMARK}
      FBookmarks.Clear();
      disposeAndNil(FBookmarks);
      {$endif}
      Clear();
    end;
  end;

  TParserContext = class(TErrorObject)
  private
    FBuffer:    TTextBuffer;
    FStack:     Stack<TParserModelObject>;
  public
    property    Buffer: TTextBuffer read FBuffer;
    property    Model: TParserModelObject;

    procedure   Push(const ModelObj: TParserModelObject);
    function    Pop: TParserModelObject;
    function    Peek: TParserModelObject;
    procedure   ClearStack;

    constructor Create(const SourceCode: String); reintroduce; virtual;

    finalizer;
    begin
      FStack.Clear();
      FBuffer.Clear();
      disposeAndNil(FStack);
      disposeAndNil(FBuffer);
    end;
  end;

  TCustomParser = class(TErrorObject)
  private
    FContext:   TParserContext;
  protected
    procedure   SetContext(const NewContext: TParserContext);
  public
    property    Context: TParserContext read FContext;
    function    Parse: Boolean; virtual;
    constructor Create(const ParseContext: TParserContext); reintroduce; virtual;
  end;

  TParserModelObject = class(TObject)
  private
    FParent:    TParserModelObject;
    FChildren:  List<TParserModelObject>;
  protected
    function    GetParent: TParserModelObject; virtual;
    function    ChildGetCount: Integer; virtual;
    function    ChildGetItem(const Index: Integer): TParserModelObject; virtual;
    function    ChildAdd(const Instance: TParserModelObject): TParserModelObject; virtual;
  public
    property    Parent: TParserModelObject read GetParent;
    property    Context: TParserContext;
    procedure   Clear; virtual;
    constructor Create(const AParent: TParserModelObject); virtual;

    finalizer;
    begin
      Clear();
      FChildren := nil;
    end;

  end;

implementation

//#####################################################################
// Error messages
//#####################################################################

const
  CNT_ERR_BUFFER_EMPTY  = 'Buffer is empty error';
  CNT_ERR_OFFSET_BOF    = 'Offset at BOF error';
  CNT_ERR_OFFSET_EOF    = 'Offset at EOF error';
  CNT_ERR_COMMENT_NOTCLOSED = 'Comment not closed error';
  CNT_ERR_OFFSET_EXPECTED_EOF = 'Expected EOF error';
  CNT_ERR_LENGTH_INVALID = 'Invalid length error';

//#####################################################################
// TTextBufferBookmark
//#####################################################################

function TTextBufferBookmark.Equals(const ThisMark: TTextBufferBookmark): boolean;
begin
  result := ( (ThisMark <> nil) and (ThisMark <> self) )
        and (self.bbOffset = ThisMark.bbOffset)
        and (self.bbCol = ThisMark.bbCol)
        and (self.bbRow = ThisMark.bbRow);
end;

//#####################################################################
// TTextBuffer
//#####################################################################

constructor TTextBuffer.Create(const BufferText: string);
begin
  inherited Create();
  if length(BufferText) > 0 then
    LoadBufferText(BufferText)
  else
    Clear();
end;

procedure TTextBuffer.Clear;
begin
  FData := '';
  FOffset := -1;
  FLength := 0;
  FCol := -1;
  FRow := -1;
  {$IFDEF USE_BMARK}
  FBookmarks.Clear();
  {$ENDIF}
end;

procedure TTextBuffer.SetCacheData(NewText: string);
begin
  LoadBufferText(NewText);
end;

function TTextBuffer.Trail: string;
begin
  if not Empty then
  begin
    if not EOF then
      result := FData.Substring(FOffset, length(FData) );
      //result := Copy( FData, FOffset, length(FData) );
  end;
end;

procedure TTextBuffer.LoadBufferText(const NewBuffer: string);
begin
  // Flush existing buffer
  Clear();

  // Load in buffertext, init offset and values
  var TempLen := NewBuffer.Length;
  if TempLen > 0 then
  begin
    FData := NewBuffer;
    FOffset := 0; // start at BOF
    FCol := 0;
    FRow := 0;
    FLength := TempLen;
  end;
end;

function TTextBuffer.GetCurrentLocation: TTextBufferBookmark;
begin
  if Failed then
    ClearLastError();
  if not Empty then
  begin
    result := TTextBufferBookmark.Create;
    result.bbOffset := FOffset;
    result.bbCol := FCol;
    result.bbRow := FRow;
  end else
  raise ETextBuffer.Create
  ('Failed to return position, buffer is empty error');
end;

function TTextBuffer.Bookmark: TTextBufferBookmark;
begin
  if Failed then
    ClearLastError();
  if not Empty then
  begin
    result := TTextBufferBookmark.Create;
    result.bbOffset := FOffset;
    result.bbCol := FCol;
    result.bbRow := FRow;
    {$IFDEF USE_BMARK}
    FBookmarks.add(result);
    {$ENDIF}
  end else
  raise ETextBuffer.Create
  ('Failed to bookmark location, buffer is empty error');
end;

procedure TTextBuffer.Restore(const Mark: TTextBufferBookmark);
begin
  if Failed then
    ClearLastError();
  if not Empty then
  begin
    if Mark <> nil then
    begin
      FOffset := Mark.bbOffset;
      FCol := Mark.bbCol;
      FRow := Mark.bbRow;
      Mark.Free;

      {$IFDEF USE_BMARK}
      var idx := FBookmarks.Count;
      if idx > 0 then
      begin
        dec(idx);
        FOffset := FBookmarks[idx].bbOffset;
        FCol := FBookmarks[idx].bbCol;
        FRow := FBookmarks[idx].bbRow;
        FBookmarks.Remove(idx);
        //FBookmarks.SetLength(idx)
        //FBookmarks.Delete(idx,1);
      end else
      raise ETextBuffer.Create('Failed to restore bookmark, none exist');
      {$ENDIF}
    end else
    raise ETextBuffer.Create('Failed to restore bookmark, object was nil error');
  end else
  raise ETextBuffer.Create
  ('Failed to restore bookmark, buffer is empty error');
end;

{$IFDEF USE_BMARK}
procedure TTextBuffer.Drop;
begin
  if Failed then
    ClearLastError();
  if not Empty then
  begin
    if FBookmarks.Count > 0 then
      FBookmarks.Remove(FBookmarks.Count-1)
    else
      raise ETextBuffer.Create('Failed to drop bookmark, none exist');
  end else
  raise ETextBuffer.Create
  ('Failed to drop bookmark, buffer is empty error');
end;
{$ENDIF}

function TTextBuffer.Read(var Fragment: char): boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    result := FOffset <= length(FData);
    if result then
    begin
      // return character
      Fragment := FData[FOffset];

      // update offset
      inc(FOffset)
    end else
    begin
      // return invalid char
      Fragment := #0;

      // Set error reason
      SetLastError('Offset at BOF error');
    end;
  end else
  begin
    result := false;
    Fragment := #0;
    SetLastError('Buffer is empty error');
  end;
end;

function TTextBuffer.Read: char;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    result := Current;
    Next();
  end else
  result := #0;
end;

function TTextBuffer.ReadToEOL: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty() then
  begin
    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    // Keep start
    var LStart := FOffset;

    // Enum until match of EOF
    {$IFDEF USE_INCLUSIVE}
    repeat
      if (FData[FOffset] = #13)
      and (FData[FOffset + 1] = #10) then
      begin
        result := true;
        break;
      end else
      begin
        inc(FOffset);
        inc(FCol);
      end;
    until EOF();
    {$ELSE}
    While FOffset < High(FData) do
    begin
      if (FData[FOffset] = #13)
      and (FData[FOffset + 1] = #10) then
      begin
        result := true;
        break;
      end else
      begin
        inc(FOffset);
        inc(FCol);
      end;
    end;
    {$ENDIF}

    // Last line in textfile might not have
    // a CR+LF, so we have to check for termination
    if not result then
    begin
      if EOF then
      begin
        if LStart < FOffset then
          result := true;
      end;
    end;
  end;
end;

function TTextBuffer.Less: boolean;
begin
  result := (not Empty)
        and ( (FOffset >= Low(FData)) and (FOffset <= high(FData)) )
        and (FData[FOffset] = '<');
end;

function TTextBuffer.More: boolean;
begin
  result := (not Empty)
        and ( (FOffset >= Low(FData)) and (FOffset <= high(FData)) )
        and (FData[FOffset] = '>');
end;

function TTextBuffer.Equal: boolean;
begin
  result := (not Empty)
        and ( (FOffset >= Low(FData)) and (FOffset <= high(FData)) )
        and (FData[FOffset] = '=');
end;

function TTextBuffer.ReadToEOL(var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  TextRead := '';

  if not Empty() then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    // Keep start
    var LStart := FOffset;

    // Enum until match or EOF
    {$IFDEF USE_INCLUSIVE}
    repeat
      if (FData[FOffset] = #13)
      and (FData[FOffset + 1] = #10) then
      begin
        if FOffset > LStart then
        begin
          // Any text to return? Or did we start
          // directly on a CR+LF and have no text to give?
          var LLen := FOffset - LStart;
          TextRead := FData.Substring(LStart, LLen);
          //TextRead := Copy(FData, LStart, LLen);
        end;

        // Either way, we exit because CR+LF has been found
        result := true;
        break;
      end;

      inc(FOffset);
      inc(FCol);
    until EOF();
    {$ELSE}
    While FOffset < high(FData) do
    begin
      if (FData[FOffset] = #13)
      and (FData[FOffset + 1] = #10) then
      begin
        if FOffset > LStart then
        begin
          // Any text to return? Or did we start
          // directly on a CR+LF and have no text to give?
          var LLen := FOffset - LStart;
          TextRead := copy(FData, LStart, LLen);
        end;

        // Either way, we exit because CR+LF has been found
        result := true;
        break;
      end;

      inc(FOffset);
      inc(FCol);
    end;
    {$ENDIF}

    // Last line in textfile might not have
    // a CR+LF, so we have to check for EOF and treat
    // that as a terminator.
    if not result then
    begin
      if FOffset >= high(FData) then
      begin
        if LStart < FOffset then
        begin
          var LLen := FOffset - LStart;
          if LLen > 0 then
          begin
            TextRead := FData.Substring(LStart, LLen);
            //TextRead := Copy(FData, LStart, LLen);
            result := true;
          end;
          exit;
        end;
      end;
    end;

  end;
end;

function TTextBuffer.ReadTo(const CB: TTextValidCB; var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  TextRead := '';

  if not Empty then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    if not assigned(CB) then
    begin
      SetLastError('Invalid callback handler');
      exit;
    end;

    {$IFDEF USE_INCLUSIVE}
    repeat
      if not CB(Current) then
        break
      else
        TextRead := TextRead + Current;

      if not Next() then
        break;
    until EOF();
    {$ELSE}
    while not EOF do
    begin
      if not CB(Current) then
        break
      else
        TextRead := TextRead + Current;

      if not Next() then
        break;
    end;
    {$ENDIF}
    result := TextRead.Length > 0;

  end else
  begin
    result := false;
    SetLastError(CNT_ERR_BUFFER_EMPTY);
  end;
end;

function TTextBuffer.ReadTo(const Resignators: TSysCharSet; var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  TextRead := '';
  if not Empty then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    {$IFDEF USE_INCLUSIVE}
    repeat
      if not Resignators.Contains(Current) then
        TextRead := TextRead + Current
      else
        break;

      if not Next() then
        break;
    until EOF();
    {$ELSE}
    while not EOF do
    begin
      if not (Current in Resignators) then
        TextRead := TextRead + Current
      else
        break;

      if not Next() then
        break;
    end;
    {$ENDIF}

    result := TextRead.Length > 0;
  end else
  begin
    result := false;
    SetLastError(CNT_ERR_BUFFER_EMPTY);
  end;
end;

function TTextBuffer.ReadTo(MatchText: string): boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty() then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    var MatchLen := length(MatchText);
    if MatchLen > 0 then
    begin
      MatchText := MatchText.ToLower();

      repeat
        var TempCache := '';
        if Peek(MatchLen, TempCache) then
        begin
          TempCache := TempCache.ToLower();
          result := SameText(TempCache, MatchText);
          if result then
            break;
        end;

        if not Next then
          break;
      until EOF;
    end;

  end else
  begin
    result := false;
    SetLastError(CNT_ERR_BUFFER_EMPTY);
  end;
end;

function TTextBuffer.ReadTo(MatchText: string; var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  result := false;
  TextRead := '';

  if not Empty() then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    if MatchText.Length > 0 then
    begin
      MatchText := MatchText.ToLower();

      repeat
        var TempCache := '';
        if Peek(MatchText.Length, TempCache) then
        begin
          TempCache := TempCache.ToLower();
          result := SameText(TempCache, MatchText);
          if result then
            break
          else
            TextRead := TextRead + Current;
        end else
          TextRead := TextRead + Current;

        if not Next() then
          break;
      until EOF;
    end;

  end else
  begin
    result := false;
    SetLastError(CNT_ERR_BUFFER_EMPTY);
  end;
end;

procedure TTextBuffer.Inject(const TextToInject: string);
begin
  if length(FData) > 0 then
  begin
    var lSeg1 := FData.Substring(1, FOffset);
    var lSeg2 := FData.Substring(FOffset + 1, length(FData));
    //var LSeg1 := Copy(FData, 1, FOffset);
    //var LSeg2 := Copy(FData, FOffset+1,  FData.Length);
    FData := lSeg1 + TextToInject + lSeg2;
  end else
    FData := TextToInject;
end;

function TTextBuffer.Compare(const CompareText: string;
    const CaseSensitive: boolean): boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty() then
  begin
    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    var LenToRead := CompareText.Length;
    if LenToRead > 0 then
    begin
      // Peek will set an error message if it
      // fails, so we dont need to set anything here
      var ReadData := '';
      if Peek(LenToRead, ReadData) then
      begin
        case CaseSensitive of
        false: result := ReadData.ToLower() = CompareText.ToLower();
        true:  result := ReadData = CompareText;
        end;
      end;
    end else
    SetLastError(CNT_ERR_LENGTH_INVALID);

  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

procedure TTextBuffer.ConsumeJunk;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    repeat
      case Current of
      ' ':
        begin
        end;
      '"':
        begin
          break;
        end;
      #8, #09:
        begin
        end;
      '/':
        begin
          (* Skip C style remark *)
          if Compare('/*', false) then
          begin
            if ReadTo('*/') then
            begin
              inc(FOffset, 2);
              Continue;
            end else
            SetLastError(CNT_ERR_COMMENT_NOTCLOSED);
          end else
          begin
            (* Skip Pascal style remark *)
            if Compare('//', false) then
            begin
              if ReadToEOL() then
              begin
                continue;
              end else
              SetLastError(CNT_ERR_OFFSET_EXPECTED_EOF);
            end;
          end;
        end;
      '(':
        begin
          (* Skip pascal style remark *)
          if Compare('(*', false)
            and not Compare('(*)', false) then
          begin
            if ReadTo('*)') then
            begin
              inc(FOffset, 2);
              continue;
            end else
            SetLastError(CNT_ERR_COMMENT_NOTCLOSED);
          end else
          break;
        end;
      #13:
        begin
          if FData[FOffset + 1] = #10 then
            inc(FOffset, 2)
          else
            inc(FOffset, 1);
          //if Peek = #10 then
          //  ConsumeCRLF;
          continue;
        end;
      #10:
        begin
          inc(FOffset);
          continue;
        end;
      else
        break;
      end;

      if not Next() then
        break;
    until EOF;

  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

procedure TTextBuffer.ConsumeCRLF;
begin
  if not Empty then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    if  (FData[FOffset] = #13) then
    begin
      if FData[FOffset + 1] = #10 then
        inc(FOffset, 2)
      else
        inc(FOffset);

      inc(FRow);
      FCol := 0;
    end;

  end;
end;

function TTextBuffer.Empty: boolean;
begin
  result := FLength < 1;
end;

// This method will look ahead, skipping space, tab and crlf (also known
// as control characters), and when a non control character is found it will
// perform a string compare. This method uses a bookmark and will restore
// the offset to the same position as when it was entered.
//
// Notes: The method "NextNonControlChar" is a similar method that
// performs a char-only compare.
function TTextBuffer.NextNonControlText(const CompareWith: string): boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    var Mark := Bookmark();
    try
      // Iterate ahead
      repeat
        if not (Current in [' ', #13, #10, #09]) then
          break;

        Next();
      until EOF();

      // Compare unless we hit the end of the line
      if not EOF then
        result := Compare(CompareWith, false);
    finally
      Restore(Mark);
    end;

  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

// This method will look ahead, skipping space, tab and crlf (also known
// as control characters), and when a non control character is found it will
// perform a string compare. This method uses a bookmark and will restore
// the offset to the same position as when it was entered.

function TTextBuffer.NextNonControlChar(const CompareWith: char): boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    var Mark := Bookmark();
    try
      repeat
        if not (Current in [' ', #13, #10, #09]) then
          break;
        Next();
      until EOF();

      //if not EOF then
      result := Current.ToLower() = CompareWith.ToLower();
      //result := LowerCase(Current) = LowerCase(CompareWith);

    finally
      Restore(Mark);
    end;

  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.Peek: char;
begin
  if Failed then
    ClearLastError();
  if not Empty then
  begin
    if (FOffset < high(FData)) then
      result := FData[FOffset + 1]
    else
    begin
      result := #0;
      SetLastError(CNT_ERR_OFFSET_EOF);
    end;
  end else
  begin
    result := #0;
    SetLastError(CNT_ERR_BUFFER_EMPTY);
  end;
end;

function TTextBuffer.Peek(CharCount: integer; var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  TextRead := '';

  if not Empty then
  begin
    if not EOF then
    begin
      var Mark := Bookmark();
      try
        while CharCount > 0 do
        begin
          TextRead := TextRead + Current;
          if not Next() then
            break;
          dec(CharCount);
        end;
      finally
        Restore(Mark);
      end;

      result := TextRead.Length > 0;

    end else
    SetLastError(CNT_ERR_OFFSET_EOF);
  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.First: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    FOffset := Low(FData);
    result := true;
  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.Last: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    FOffset := high(FData);
    result := true;
  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.NextNoCrLf: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    // Check that we are not EOF
    result := FOffset <= high(FData);
    if result then
    begin
      // Update offset into buffer
      inc(FOffset);

      // update column, but not if its in a lineshift
      if not (FData[FOffset] in [#13, #10]) then
        inc(FCol);

    end else
    SetLastError(CNT_ERR_OFFSET_EOF);
  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.Next: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty() then
  begin

    if BOF() then
    begin
      if not First() then
        exit;
    end;

    if EOF() then
    begin
      SetLastError(CNT_ERR_OFFSET_EOF);
      exit;
    end;

    // Update offset into buffer
    inc(FOffset);

    // update column
    inc(FCol);

    // This is the same as ConsumeCRLF
    // But this does not generate any errors since we PEEK
    // ahead into the buffer to make sure the combination
    // is correct before we adjust the ROW + offset
    if FOffset < high(FData) then
    begin
      if (FData[FOffset] = #13)
      and (FData[FOffset + 1] = #10) then
      begin
        inc(FOffset, 2);
        inc(FRow);
        FCol := 0;
      end;
    end;

    result := true;

  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.Back: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    result := (FOffset > Low(FData));
    if result then
      dec(FOffset)
    else
      SetLastError(CNT_ERR_OFFSET_BOF);
  end else
  SetLastError(CNT_ERR_BUFFER_EMPTY);
end;

function TTextBuffer.Current: char;
begin
  if Failed then
    ClearLastError();

  // Check that buffer is not empty
  if not Empty then
  begin
    // Check that we are on char 1 or more
    if FOffset >= Low(FData) then
    begin
      // Check that we are before or on the last char
      if (FOffset <= high(FData)) then
        result := FData[FOffset]
      else
      begin
        SetLastError(CNT_ERR_OFFSET_EOF);
        result := #0;
      end;
    end else
    begin
      SetLastError(CNT_ERR_OFFSET_BOF);
      result := #0;
    end;
  end else
  begin
    SetLastError(CNT_ERR_BUFFER_EMPTY);
    result := #0;
  end;
end;

function TTextBuffer.BOF: boolean;
begin
  if not Empty then
    result := FOffset < Low(FData);
end;

function TTextBuffer.EOF: boolean;
begin
  if not Empty then
    result := FOffset > high(FData);
end;

function TTextBuffer.NextLine: boolean;
begin
  if Failed then
    ClearLastError();

  if not Empty then
  begin
    // Make sure we offset to a valid character
    // in the buffer.
    ConsumeJunk();

    if not EOF then
    begin
      var ThisRow := self.FRow;
      while Row = ThisRow do
      begin
        Next();
        if EOF then
        break;
      end;

      result := (Row <> ThisRow) and (not EOF);
    end;
  end;
end;

function TTextBuffer.ReadWord(var TextRead: string): boolean;
begin
  if Failed then
    ClearLastError();

  TextRead := '';

  if not Empty then
  begin
    // Make sure we offset to a valid character
    // in the buffer.
    ConsumeJunk();

    // Not at the end of the file?
    if not EOF then
    begin
      repeat
        var el := Current;

        if (el in
        [ 'A'..'Z',
          'a'..'z',
          '0'..'9',
          '_', '-' ]) then
          TextRead := TextRead + el
        else
          break;

        if not NextNoCrLf() then
          break;

      until EOF;

      result := TextRead.Length > 0;

    end else
    SetLastError('Failed to read word, unexpected EOF');
  end else
  SetLastError('Failed to read word, buffer is empty error');
end;

function TTextBuffer.ReadCommaList(var cList: List<String>): boolean;
var
  LTemp: String;
  LValue: String;
begin
  if cList = nil then
    cList := new List<String>
  else
    cList.Clear();

  if not Empty then
  begin
    ConsumeJunk();

    While not EOF do
    begin
      case Current of
      #09:
        begin
          // tab, just skip
        end;
      #13, #10:
        begin
          // CR+LF, consume and continue;
          ConsumeCRLF();
        end;
      #0:
        begin
          // Unexpected EOL
          break;
        end;

      ';':
        begin
          //Perfectly sound ending
          result := true;
          break;
        end;
      '"':
        begin
          LValue := ReadQuotedString;
          if LValue.Length > 0 then
          begin
            cList.add(LValue);
            LValue := '';
          end;
        end;
      ',':
        begin
          LTemp := LTemp.Trim();
          if LTemp.Length>0 then
          begin
            cList.add(LTemp);
            LTemp := '';
          end;
        end;
      else
        begin
          LTemp := LTemp + Current;
        end;
      end;

      if not Next() then
        break;
    end;

    if LTemp.Length > 0 then
      cList.add(LTemp);

    result := cList.Count > 0;

  end;
end;

function TTextBuffer.ReadQuotedString: string;
begin
  if not Empty then
  begin
    if not EOF then
    begin

      // Make sure we are on the " entry quote
      if Current <> '"' then
      begin
        SetLastError('Failed to read quoted string, expected index on " character error');
        exit;
      end;

      // Skip the entry char
      if not NextNoCrLf() then
      begin
        SetLastError('Failed to skip initial " character error');
        exit;
      end;

      while not EOF do
      begin
        // Read char from buffer
        var TempChar := Current;

        // Closing of string? Exit
        if TempChar = '"' then
        begin
          if not NextNoCrLf then
            SetLastError('failed to skip final " character in string error');
          break;
        end;

        result := result + TempChar;

        if not NextNoCrLf() then
          break;
      end;

    end;
  end;
end;

//##########################################################################
// TParserModelObject
//##########################################################################

constructor TParserModelObject.Create(const AParent:TParserModelObject);
begin
  inherited Create;
  FParent := AParent;
  FChildren := new List<TParserModelObject>;
end;

function TParserModelObject.GetParent:TParserModelObject;
begin
  result := FParent;
end;

procedure TParserModelObject.Clear;
begin
  FChildren.Clear();
end;

function TParserModelObject.ChildGetCount: integer;
begin
  result := FChildren.Count;
end;

function TParserModelObject.ChildGetItem(const Index: integer): TParserModelObject;
begin
  result := TParserModelObject(FChildren[Index]);
end;

function TParserModelObject.ChildAdd(const Instance: TParserModelObject): TParserModelObject;
begin
  if FChildren.IndexOf(Instance) < 0 then
    FChildren.add(Instance);
  result := Instance;
end;

//###########################################################################
// TParserContext
//###########################################################################

constructor TParserContext.Create(const SourceCode: string);
begin
  inherited Create;
  FBuffer := TTextBuffer.Create(SourceCode);
  FStack := new Stack<TParserModelObject>;
end;

procedure TParserContext.Push(const ModelObj: TParserModelObject);
begin
  if Failed then
    ClearLastError();

  try
    FStack.Push(ModelObj);
  except
    on e: Exception do
    SetLastError('Internal error:' + e.Message);
  end;
end;

function TParserContext.Pop: TParserModelObject;
begin
  if Failed then
    ClearLastError();
  try
    result := FStack.Pop();
  except
    on e: Exception do
    SetLastError('Internal error:' + e.Message);
  end;
end;

function TParserContext.Peek: TParserModelObject;
begin
  if Failed then
    ClearLastError();
  try
    result := FStack.Peek();
  except
    on e: Exception do
    SetLastError('Internal error:' + e.Message);
  end;
end;

procedure TParserContext.ClearStack;
begin
  if Failed then
    ClearLastError();
  try
    FStack.Clear();
  except
    on e: Exception do
    SetLastError('Internal error:' + e.Message);
  end;
end;

//###########################################################################
// TCustomParser
//###########################################################################

constructor TCustomParser.Create(const ParseContext: TParserContext);
begin
  inherited Create;
  FContext := ParseContext;
end;

function TCustomParser.Parse: boolean;
begin
  result := false;
  SetLastErrorF('No parser implemented for class %s',[ClassName]);
end;

procedure TCustomParser.SetContext(const NewContext: TParserContext);
begin
  FContext := NewContext;
end;

end.

Generic protect for FPC/Lazarus

June 30, 2019 Leave a comment

Freepascal is not frequently mentioned on my blog. I have written about it from time to time, not always in a positive light though. Just to be clear, FPC (the compiler) is fantastic; it was one particular fork of Lazarus I had issues with, involving a license violation.

On the whole, Freepascal and Lazarus are capable of great things. There are a few quirks here and there (if not oddities) that prevent mass adoption (the excessive use of include-files to “fake” partial classes being one), but as object-pascal compilers go, Freepascal is a battle-hardened, production-ready system.

Linux in particular is where I have used Freepascal the most. In 2015 Hydro Oil wanted to move their back-end from Windows to Linux, and I spent a few months converting Windows-only services into Linux daemons.

Today I find myself converting parts of the toolkit I came up with to Oxygene, but that’s a post for another day.

Generic protect

If you work a lot with multithreaded code, the unit I'm posting here might come in handy. Long story short: sharing composite objects between threads and the main process always means extra scaffolding. You have to make sure you don't access the list (or its elements) at the same time as another thread, for example. To ensure this you can either use a critical-section, or you can deliver the data with a synchronized call. This is more or less universal for all languages, no matter if you are using Oxygene, C/C++, C# or Delphi.

When this unit came into being, I was writing quite elaborate classes with a lot of lists. These classes could not share an ancestor, or I could have gotten away with just one locking mechanism. Instead I had to implement the same boilerplate code over and over again.
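For illustration, below is roughly the kind of scaffolding I am talking about. The TSharedItems class is just a made-up example for this post (assuming Classes and SyncObjs in the uses clause), not code from the library: every shared list drags its own critical-section around, and every access has to be wrapped in Enter/Leave by hand.

type
  TSharedItems = class
  strict private
    FLock:  TCriticalSection;
    FItems: TStringList;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Add(const Value: string);
  end;

constructor TSharedItems.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FItems := TStringList.Create;
end;

destructor TSharedItems.Destroy;
begin
  FLock.Enter;
  FItems.Free;
  FLock.Free;
  inherited;
end;

procedure TSharedItems.Add(const Value: string);
begin
  // Every single operation needs this Enter/Leave dance
  FLock.Enter;
  try
    FItems.Add(Value);
  finally
    FLock.Leave;
  end;
end;

Multiply that by every list, and every method that touches one, and you can see why a generic container quickly pays for itself.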

The unit below makes insulating (or protecting) classes easier. It essentially envelops whatever class-instance you feed it, and returns a proxy object. Whenever you want to access your instance, you have to lock it first (and unlock it when you are done), or use a synchronizer (see below).

Works in both Freepascal and Delphi

The unit works for both Delphi and Freepascal, but there is one little difference. For some reason Freepascal does not support anonymous procedures, so we compensate and use inline-procedures instead. While not a huge deal, I really hope the FPC team adds anonymous procedures; they make life a lot easier for generics-based code. Async programming without anonymous procedures is highly impractical too.

So if you are in Delphi you can write:

var
 lValue: TProtectedValue<integer>;
 lValue.Synchronize( procedure (var Value: integer)
 begin
   Value := Value * 12;
 end);

But under Freepascal you must resort to:

var
 lValue: TProtectedValue<integer>;

procedure _UpdateValue(var Data: integer);
begin
 Data := Data * 12;
end;

begin
  lValue.Synchronize(@_UpdateValue);
end;

On small examples like these, the benefit of this style of coding might be lost; but if you suddenly have 40-50 lists that need to be shared between 100-200 active threads, it will be a time saver!
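As a quick taste of the object container (the full unit follows further down), here is a minimal sketch where a TStringList is wrapped by TProtectedObject, locked while we work on it, and unlocked when we are done:

var
  lList:  TProtectedObject<TStringList>;
  lItems: TStringList;
begin
  lList := TProtectedObject<TStringList>.Create();
  try
    lItems := lList.Lock();   // enter the critical section, get the instance
    try
      lItems.Add('only one thread can be in here at a time');
    finally
      lList.Unlock();         // leave the critical section
    end;
  finally
    lList.Free;
  end;
end;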

You can also use it on intrinsic datatypes:

lazarus
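The screenshot above does not translate well to text, so here is a small illustration of my own (not the code from the screenshot) showing TProtectedValue guarding a plain integer:

var
  lCounter: TProtectedValue<integer>;
begin
  lCounter := TProtectedValue<integer>.Create(0);
  try
    lCounter.Value := lCounter.Value + 1;  // each read/write enters and leaves the lock
    writeln(lCounter.Value);
  finally
    lCounter.Free;
  end;
end;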

OK, here we go:

unit safeobjects;

// 	SafeObjects
//	==========================================================================
//	Written by Jon-Lennart Aasenden
//	Copyright Quartex Components LTD, all rights reserved
//
//	This unit is a part of the QTX Patreon Library
//
//	NOTES ABOUT FREEPASCAL:
//	=======================
//	Freepascal does not allow anonymous procedures, which means we must
//	resort to inline procedures instead:
//
// 	Where we in Delphi could write the following for an atomic,
//	thread safe alteration:
//
// var
//  LValue: TProtectedValue<integer>;
//
//	LValue.Synchronize( procedure (var Value: integer)
//	begin
//		Value := Value * 12;
//	end);
//
//	Freepascal demands that we use an inline procedure instead, which
//  is more or less the same code, just organized slightly differently.
//
// var
//  LValue: TProtectedValue<integer>;
//
//  procedure _UpdateValue(var Data: integer);
//  begin
//  	Data := Data * 12;
//  end;
//
// begin
//	LValue.Synchronize(@_UpdateValue);
// end;
//
//
//
//

{$IFDEF FPC}
{$mode DELPHI}
{$H+}
{$ENDIF}

interface

uses
  {$IFDEF FPC}
  SysUtils,
  Classes,
  SyncObjs,
  Generics.Collections;
	{$ELSE}
  System.SysUtils,
  System.Classes,
  System.SyncObjs,
  System.Generics.Collections;
  {$ENDIF}

type

  {$DEFINE INHERIT_FROM_CRITICALSECTION}

  TProtectedValueAccessRights = set of (lvRead, lvWrite);

  EProtectedValue = class(exception);
  EProtectedObject = class(exception);

  (* Thread safe intrinsic datatype container.
     When sharing values between processes, use this class
     to make read/write access safe and protected. *)

  {$IFDEF INHERIT_FROM_CRITICALSECTION}
  TProtectedValue<T> = class(TCriticalSection)
  {$ELSE}
  TProtectedValue<T> = class(TObject)
  {$ENDIF}
  strict private
    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    FLock: TCriticalSection;
    {$ENDIF}
    FData: T;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function GetValue: T;virtual;
    procedure SetValue(Value: T);virtual;
    function GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
  		{$IFDEF FPC}
      TProtectedValueEntry = procedure (var Data: T);
  		{$ELSE}
      TProtectedValueEntry = reference to procedure (var Data: T);
      {$ENDIF}
  public
    constructor Create(Value: T); overload; virtual;
    constructor Create(Value: T; const Access: TProtectedValueAccessRights); overload; virtual;
    constructor Create(const Access: TProtectedValueAccessRights); overload; virtual;
    destructor Destroy;override;

    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    procedure Enter;
    procedure Leave;
    {$ENDIF}
    procedure Synchronize(const Entry: TProtectedValueEntry);

    property AccessRights: TProtectedValueAccessRights read GetAccessRights;
    property Value: T read GetValue write SetValue;
  end;

  (* Thread safe object container.
     NOTE #1: This object container **CREATES** the instance and maintains it!
              Use Edit() to execute a protected block of code with access
              to the object.

     Note #2: SetValue() does not overwrite the object reference, but
              attempts to perform TPersistent.Assign(). If the instance
              does not inherit from TPersistent an exception is thrown. *)
  TProtectedObject<T: class, constructor> = class(TObject)
  strict private
    FData:      T;
    FLock:      TCriticalSection;
    FOptions:   TProtectedValueAccessRights;
  strict protected
    function    GetValue: T;virtual;
    procedure   SetValue(Value: T);virtual;
    function    GetAccessRights: TProtectedValueAccessRights;
    procedure   SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
			{$IFDEF FPC}
      TProtectedObjectEntry = procedure (const Data: T);
	    {$ELSE}
      TProtectedObjectEntry = reference to procedure (const Data: T);
      {$ENDIF}
  public
    property    Value: T read GetValue write SetValue;
    property    AccessRights: TProtectedValueAccessRights read GetAccessRights;

    function    Lock: T;
    procedure   Unlock;
    procedure   Synchronize(const Entry: TProtectedObjectEntry);

    Constructor Create(const AOptions: TProtectedValueAccessRights = [lvRead,lvWrite]); virtual;
    Destructor  Destroy; override;
  end;

  (* TProtectedObjectList:
     This is a thread-safe object list implementation.
     It works more or less like TThreadList, except it deals with objects *)
  TProtectedObjectList = class(TInterfacedPersistent)
  strict private
    FObjects: TObjectList;
    FLock: TCriticalSection;
  strict protected
    function GetEmpty: boolean;virtual;
    function GetCount: integer;virtual;

    (* QueryObject Proxy: TInterfacedPersistent allows us to
       act as a proxy for QueryInterface/GetInterface. Override
       and provide another child instance here to expose
       interfaces from that instead *)
  protected
    function GetOwner: TPersistent;override;

  public
    type
      {$IFDEF FPC}
      TProtectedObjectListProc = procedure (Item: TObject; var Cancel: boolean);
      {$ELSE}
      TProtectedObjectListProc = reference to procedure (Item: TObject; var Cancel: boolean);
      {$ENDIF}
  public
    constructor Create(OwnsObjects: Boolean = true); virtual;
    destructor  Destroy; override;

    function    Contains(Instance: TObject): boolean; virtual;
    function    Enter: TObjectList; virtual;
    Procedure   Leave; virtual;
    Procedure   Clear; virtual;

    procedure   ForEach(const CB: TProtectedObjectListProc); virtual;

    Property    Count: integer read GetCount;
    Property    Empty: boolean read GetEmpty;
  end;

implementation

//############################################################################
//  TProtectedObjectList
//############################################################################

constructor TProtectedObjectList.Create(OwnsObjects: Boolean = True);
begin
  inherited Create;
  FObjects := TObjectList.Create(OwnsObjects);
  FLock := TCriticalSection.Create;
end;

destructor TProtectedObjectList.Destroy;
begin
  FLock.Enter;
  FObjects.Free;
  FLock.Free;
  inherited;
end;

procedure TProtectedObjectList.Clear;
begin
  FLock.Enter;
  try
    FObjects.Clear;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetOwner: TPersistent;
begin
  result := NIL;
end;

procedure TProtectedObjectList.ForEach(const CB: TProtectedObjectListProc);
var
  LItem:  TObject;
  LCancel:  Boolean;
begin
	LCancel := false;
  if assigned(CB) then
  begin
    FLock.Enter;
    try
    	{$HINTS OFF}
      for LItem in FObjects do
      begin
        LCancel := false;
        CB(LItem, LCancel);
        if LCancel then
        	break;
      end;
      {$HINTS ON}
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.Contains(Instance: TObject): boolean;
begin
  result := false;
  if assigned(Instance) then
  begin
    FLock.Enter;
    try
      result := FObjects.Contains(Instance);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.GetCount: integer;
begin
  FLock.Enter;
  try
    result :=FObjects.Count;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetEmpty: Boolean;
begin
  FLock.Enter;
  try
    result := FObjects.Count<1;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.Enter: TObjectList;
begin
  FLock.Enter;
  result := FObjects;
end;

procedure TProtectedObjectList.Leave;
begin
  FLock.Leave;
end;

//############################################################################
//  TProtectedObject
//############################################################################

constructor TProtectedObject<T>.Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FLock.Enter();
  try
  	FOptions := AOptions;
  	FData := T.Create;
  finally
    FLock.Leave();
  end;
end;

destructor TProtectedObject<T>.Destroy;
begin
	FData.free;
  FLock.Free;
  inherited;
end;

function TProtectedObject<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  FLock.Enter;
  try
    result := FOptions;
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  FLock.Enter;
  try
    FOptions := Rights;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObject<T>.Lock: T;
begin
  FLock.Enter;
  result := FData;
end;

procedure TProtectedObject<T>.Unlock;
begin
  FLock.Leave;
end;

procedure TProtectedObject<T>.Synchronize(const Entry: TProtectedObjectEntry);
begin
  if assigned(Entry) then
  begin
    FLock.Enter;
    try
      Entry(FData);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObject<T>.GetValue: T;
begin
  FLock.Enter;
  try
    if (lvRead in FOptions) then
    	result := FData
  	else
    	raise EProtectedObject.CreateFmt('%s:Read not allowed error',[classname]);
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetValue(Value: T);
begin
  FLock.Enter;
  try
    if (lvWrite in FOptions) then
    begin
      if (TObject(FData) is TPersistent)
      or (TObject(FData).InheritsFrom(TPersistent)) then
      	TPersistent(FData).Assign(TPersistent(Value))
    	else
      	raise EProtectedObject.CreateFmt
        	('Locked object assign failed, %s does not inherit from %s',
        	[TObject(FData).ClassName,'TPersistent']);

    end else
    raise EProtectedObject.CreateFmt('%s:Write not allowed error',[classname]);
  finally
    FLock.Leave;
  end;
end;

//############################################################################
//  TProtectedValue
//############################################################################

Constructor TProtectedValue<T>.Create(const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
end;

constructor TProtectedValue<T>.Create(Value: T);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := [lvRead, lvWrite];
  FData := Value;
end;

constructor TProtectedValue<T>.Create(Value: T; const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
  FData := Value;
end;

Destructor TProtectedValue<T>.Destroy;
begin
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock.Free;
  {$ENDIF}
  inherited;
end;

function TProtectedValue<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  Enter();
  try
    result := FOptions;
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  Enter();
  try
    FOptions := Rights;
  finally
    Leave();
  end;
end;

{$IFNDEF INHERIT_FROM_CRITICALSECTION}
procedure TProtectedValue<T>.Enter;
begin
  FLock.Enter;
end;

procedure TProtectedValue<T>.Leave;
begin
  FLock.Leave;
end;
{$ENDIF}

procedure TProtectedValue<T>.Synchronize(const Entry: TProtectedValueEntry);
begin
  if assigned(Entry) then
  Begin
    Enter();
    try
      Entry(FData);
    finally
      Leave();
    end;
  end;
end;

function TProtectedValue<T>.GetValue: T;
begin
  Enter();
  try
    if (lvRead in FOptions) then
    	result := FData
    else
    	raise EProtectedValue.CreateFmt('%s: Read not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetValue(Value: T);
begin
  Enter();
  try
    if (lvWrite in FOptions) then
    	FData:=Value
    else
    	raise EProtectedValue.CreateFmt('%s: Write not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

end.

Quartex Desktop: a brief look at the API

June 29, 2019 Leave a comment

The Quartex Media Desktop (codename Amibian.js) has gotten a lot of cool attention lately. But telling people why it's so awesome is not always easy. Not everyone is a software developer, and even then – very few Oxygene, Lazarus or Delphi developers have my level of background in HTML5/JS. Not that I have some hidden talent others lack, but rather that I have spent years working on this particular hybrid technology. And summing it all up is a tall order.

qtx

The Quartex Media Desktop has come a long way

Once in a while I post a few words about why the desktop matters, and why the system is going to be very important for developers and users alike. It's growing at a rapid pace, with more and more of the underlying mechanics surfacing. I mean, me spending a month on god-knows-what under the hood doesn't mean a thing to users who just want a cool desktop. Some frankly don't care how it works at all.

Well, in this post I will talk about the Desktop API and how it works. This is more practical information – and it's the information that will help you when you start coding applications meant to integrate closely with the system.

The visual desktop

The desktop, despite being a pretty front-end, serves no real purpose, right? Well, you could not be more wrong, because there are layers of code beneath the pretty exterior that are unique to the world of JavaScript. But before we dig into that, let's have a look at how the desktop is organized.

desktop_layout

The desktop organization is very simple, but highly effective

System Menu

The Quartex Media Desktop (nicknamed “Amibian.js”) follows a long tradition where a small part of the display is always occupied by a system-menu. The menu, once learned, is a powerful tool – one that will help you navigate the system faster.

Menu app-region

The system menu is also capable of hosting smaller, helper applications. The main menu reserves a small region for such apps, simply called the menu app region. This region can stretch depending on its content, but such mini-apps are expected to use as little space as possible, with a hard limit of 300 pixels each.

Amibian.js ships with two standard menu apps; these are integral to the system and cannot be deleted, only disabled.

  • Time and date
  • Account name and IP address

Icon Dock

The Icon dock should be no stranger. Ubuntu Linux has a similar dock (albeit on the left side of the display), and in Windows you can create as many docking regions as you see fit. So a good docking bar is worth having.

The purpose is to have your favorite applications readily available when you login to your system.

There is not that much to write about the icon-dock. You can edit the list of items there and change other options in the preferences. The dock can align to the right, the left, and even to the bottom of the screen.

The first button on the dock will always be a quick-link to the preferences display. Instead of isolating preferences outside the desktop as a separate process, I have made it intrinsic: clicking on the Preferences button will slide the desktop out of view, and the preferences screen into view.

prefsview

The preferences view is still under construction, but it's always the first item on the dock

Hosted Software

After this quick tour of the superficial, visual layer of the desktop, you could be forgiven for thinking this is all there is to it. Perhaps you imagine that “starting a program” is just loading stuff into frames and making it look like windows?

Actually, it's a lot more elaborate than that!

The purpose of the Quartex Media Desktop is to provide developers with common ground. The market is filled with these juiced-up, blinged-to-the-hilt, superficial and outright fraudulent “web desktops”. Any idiot can sit down and make a website that looks like a desktop, which is also why these desktops can't do much beyond their initial programming.

You also have companies like CodeStamp that use native languages like C/C++ to create a custom server which deals with the grunt-work. Something I find amusing, but mostly sad. They have spent a fortune re-inventing technology that was made available 20 years ago, and that has been in use ever since.

The problem with these companies is that they are dinosaurs. I could have finished Quartex Media Desktop in a few months if I used Delphi or C++ builder. What CodeStamp have missed, is that their so-called revolutionary idea has been active and running for close to 20 years in the Delphi community. We are falling over each other in options for web desktops. I can have a fully fledged, theme based desktop up and running in less than a work day — with kick ass, llvm optimized, bug free code compiled for Windows, Linux and OS X.

The challenge, which is where the true value lies, is to get rid of native code: to write not just the client (desktop) in JavaScript, but above all to write the entire back-end in JavaScript! Only then do we have a truly portable and truly scalable platform to build on.

Amibian.js is designed to deal with 4 types of executables:

  • Local web applications
  • Remote web applications
  • LDEF bytecode binaries
  • Server-side shell

Let’s look at the first two since these fall into the category of “hosted applications”.

A hosted application is a normal web app that can run anywhere. It can be a simple website if you like. And like I mentioned above, external resources are always executed within the safe confines of an iFrame.

Amibian.js allows hosted applications to call system functions that the desktop exposes. But in order for that to happen, the application must first complete a security process. Once the application is recognized and known (a process known as hand-shaking), the hosted application can integrate tightly with the desktop – so tightly that it becomes indistinguishable from a local application.

But more importantly: communication between the desktop and a hosted application is exclusively through messages. The hosted application cannot call potentially dangerous code, neither directly nor indirectly. The methods it can call are held in check by the security policy for that program, which is under your control. So a bit of thought has gone into this work.

The desktop API

Behind the sweet exterior of our desktop there are practically thousands of functions. And we must not forget the back-end servers (Quartex Media Desktop is a distributed, clustered system).

Some of the functions a hosted program can call might actually exist on the server. So the desktop will accept the call, but relay it to the back-end. When the call finishes, the response is likewise routed back to the application that initiated it.

desktop_comm

For example, if a hosted application wants to display a “load-file requester”, it would call a function named ShowRequesterFile(). This is a proxy method in the public framework that constructs a message for you, and then sends that message to the desktop (browsers use pipes internally).

opendialog

A hosted application calling the ShowRequesterFile() API method. The desktop will go into modal mode and show the requester, just like you would expect from a native application

The desktop receives the message and executes the code designated for it. This involves setting the screen into modal mode and showing the “open file” dialog. When the user selects a file and the dialog closes, the result is shipped back to the application. The hosted application itself is never in direct contact with the filesystem. That is an important distinction.
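To make that flow a bit more concrete, here is a rough sketch of what such a proxy method can look like. Apart from ShowRequesterFile() itself, every name here (TDesktopMessage, PostToDesktop and so on) is made up for illustration and is not the actual Quartex framework API:

type
  TFileSelectedProc = procedure (const FileName: string);

procedure ShowRequesterFile(const OnSelected: TFileSelectedProc);
var
  lMessage: TDesktopMessage;   // hypothetical message envelope
begin
  // 1. Wrap the request in a message envelope
  lMessage := TDesktopMessage.Create('requester.file.open');

  // 2. Post it to the desktop process; the framework handles the
  //    actual transport between the hosted application and the desktop
  PostToDesktop(lMessage, procedure (const Response: TDesktopMessage)
  begin
    // 3. The desktop (or the back-end) routes the result back here,
    //    and we simply hand it to the caller
    if assigned(OnSelected) then
      OnSelected(Response.Data);
  end);
end;

The important part is the shape of the flow: the application never touches the dialog or the filesystem itself; it only sends a request and receives a response.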

Also, like mentioned earlier – some of the functions exposed by the public framework are not a part of the desktop at all. The code to enumerate files and folders is not a part of the HTML5 code (obviously). So the desktop relays such calls to the back-end server(s), and further relays the response when that arrives.

System services

In my next article on the Quartex Media Desktop, we will have a peek at the system services and some of the functions they expose.