
Archive for the ‘Object Pascal’ Category

Amibian.js under the hood

December 5, 2018

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!


But, with any new technology or invention there are two common traps that people can fall into: The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” means machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustments, especially for traditional programmers that haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine code using state-of-the-art techniques (LLVM). So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining the distinction here – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating system that renders to HTML5. A system written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.


Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC, and copy all the files to an ARM computer – you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup” please remember what I said about JavaScript: the JavaScript in 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” – into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It runs flawlessly at the browser’s 60 fps cap (the engine itself is capable of more) and makes full use of standard browser technologies (WebGL).

A video of Amibian.js in action is available on YouTube.

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • An HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).


Enjoying other cloud applications is easy with Amibian.js. Here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I’m trying to avoid a bare-metal OS, otherwise I would have written the system using a native programming language like C or Object Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs that are written for Amibian.js can call on these for a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.
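To make this concrete, here is a minimal sketch of what such a service daemon could look like in plain node.js. It uses the popular ws package, and the “fs-read” message shape is something I made up for the example; the real Amibian.js services are written in Smart Pascal and speak the Ragnarok protocol, so treat this purely as an illustration of the request/response pattern:

```javascript
// Minimal sketch of a JSON message-based service daemon (illustration only).
// Uses the "ws" npm package (v8+); the "fs-read" message shape is hypothetical,
// not the real Ragnarok protocol.
const { WebSocketServer } = require("ws");
const fs = require("fs").promises;

const wss = new WebSocketServer({ port: 1970 }); // arbitrary port

wss.on("connection", (socket) => {
  socket.on("message", async (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "fs-read") {
      try {
        const data = await fs.readFile(msg.path, "utf8");
        socket.send(JSON.stringify({ id: msg.id, ok: true, data }));
      } catch (err) {
        socket.send(JSON.stringify({ id: msg.id, ok: false, error: String(err) }));
      }
    }
  });
});
```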

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (ODroid XU4, Raspberry PIs and Tinkerboard are brilliant candidates).
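The multi-core part of this is something node.js gives you more or less for free through its built-in cluster module. The sketch below is not the actual Amibian.js scaling code, just an illustration of the principle of forking one worker per CPU core:

```javascript
// Fork one worker per CPU core using Node's built-in cluster module.
// General principle only - not the actual Amibian.js scaling code.
const cluster = require("cluster");
const os = require("os");

if (cluster.isPrimary) {             // Node 16+; use cluster.isMaster on older versions
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();                  // each worker re-runs this same file
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  // Worker process: start the actual service here (HTTP, WebSocket, ...)
  console.log(`worker ${process.pid} ready`);
}
```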

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer, many of my customers work with embedded devices and kiosk systems. There are companies that produce routers and set-top boxes, NAS boxes of various complexity, ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered; I’m confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some idea of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router – you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it’s connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need but on a smaller scale, especially the larger hotels where the lobby has information booths, and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on the size they can need anything from a single node to a hundred.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past with the machine that defined multimedia, namely the Commodore Amiga. So the relation is more than just visual; Amibian.js uses the same system architecture – because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all I’m not selling anything. It’s not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done Amibian.js will be free for everyone (LGPL). And I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly, is because I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).


Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have classical OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community – components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (C is 3 years older than Pascal).
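For those curious about what that lowering looks like: underneath, JavaScript only offers the prototype chain, and classes with virtual methods have to be mapped onto it. The snippet below is not what Smart Mobile Studio actually emits (its VMT handling is richer than this); it simply shows the raw prototype mechanics a Pascal-to-JavaScript compiler has to target:

```javascript
// Hand-rolled inheritance and "virtual" dispatch on the prototype chain.
// Not Smart Mobile Studio output - just the underlying JavaScript mechanism.
function TControl(name) {
  this.name = name;
}
TControl.prototype.paint = function () {                 // base "virtual" method
  console.log("painting control " + this.name);
};

function TButton(name, caption) {
  TControl.call(this, name);                             // inherited constructor
  this.caption = caption;
}
TButton.prototype = Object.create(TControl.prototype);   // inherit
TButton.prototype.constructor = TButton;
TButton.prototype.paint = function () {                   // "override"
  console.log("painting button " + this.caption);
};

const ctrl = new TButton("btnOK", "OK");
ctrl.paint();                            // resolves to TButton.prototype.paint
console.log(ctrl instanceof TControl);   // true - the chain is intact
```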

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be set up and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS. It’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn’t care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. On a simpler system you would just embed an external website into an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
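The browser primitive behind such a message-port is the standard MessageChannel / postMessage API. The sketch below shows only that raw mechanism; the actual handshake and the JSON payloads in Amibian.js are defined by the Ragnarok specification, and the “fs-open” message here is purely hypothetical:

```javascript
// Desktop side (main page): hand one end of a MessageChannel to a hosted app.
const channel = new MessageChannel();
const appFrame = document.getElementById("hosted-app");   // the app's iframe
appFrame.contentWindow.postMessage("amibian-handshake", "*", [channel.port2]);

channel.port1.onmessage = (event) => {
  // Here the desktop would validate the JSON against the Ragnarok spec.
  console.log("request from hosted app:", event.data);
};

// Hosted application side (inside the iframe): receive the port and use it
// for every "kernel" level call from then on.
window.addEventListener("message", (event) => {
  if (event.data !== "amibian-handshake") return;
  const port = event.ports[0];
  // Hypothetical message shape - the real protocol is defined by Ragnarok.
  port.postMessage(JSON.stringify({ cmd: "fs-open", path: "home:readme.txt" }));
});
```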

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.


Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world’s first cloud-based development platform. A full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, and “no native code allowed”. This is why the entire back-end and service layer is targeting node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is: Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud, on cheap SBC’s. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.


The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It’s a system that you could learn how to use fully with just a couple of days of exploring, and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as many of the original ROM Kernel functions as possible. Naturally I can’t port all of them, because they are not all relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have setup a dedicated machine with Amibian.js then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence which deals with your account, your preferences and adaptations – that boots when you login to the system.

When you log in to your Amibian.js account – no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account – several things happen (the first steps are sketched in code after the list):

  1. The client (web-page if you like) connects to the server using WebSocket
  2. Login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script will set up configurations, create symbolic links (assigns), and mount external devices (Dropbox, Google Drive, FTP locations and so on)
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.
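Here is a rough sketch of the first steps, seen from the browser’s point of view and using the standard WebSocket API. The message names (“login”, “fs-read” and so on) are invented for the example; the real exchange follows the Ragnarok protocol:

```javascript
// Client-side sketch of steps 1-3 (illustration only; message names invented).
const server = new WebSocket("wss://127.0.0.1:8090");

server.onopen = () => {
  // Step 2: ask the server to validate the login
  server.send(JSON.stringify({ cmd: "login", user: "jon", token: "secret" }));
};

server.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.cmd === "login-ok") {
    // Step 3: pull preference files through the mapped filesystem
    server.send(JSON.stringify({ cmd: "fs-read", path: "home:prefs/desktop.json" }));
  } else if (msg.cmd === "fs-read-ok") {
    applyDesktopPreferences(JSON.parse(msg.data)); // hypothetical helper
  }
};
```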

As you can see, Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).
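As a sketch of what such a driver contract could look like (the names below are invented for illustration; the real Amibian.js driver standard may differ), every storage back-end would expose the same small set of file operations to the desktop:

```javascript
// Hypothetical driver contract - invented names, not the real Amibian.js standard.
// An in-memory driver, just to show the shape every storage back-end would share.
class MemoryDriver {
  constructor() {
    this.files = new Map();
  }
  async list(path)        { return [...this.files.keys()].filter((k) => k.startsWith(path)); }
  async read(path)        { return this.files.get(path); }
  async write(path, data) { this.files.set(path, data); }
  async remove(path)      { this.files.delete(path); }
}

// A Dropbox, FTP or homebrew driver would implement the same four calls,
// and the desktop would mount it under a device name, for example:
//   filesystem.mount("ram:", new MemoryDriver());   // hypothetical call
```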

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, that will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a NEO GEO port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEMU. This allows you to run x86 instances directly in the browser too. You can boot up Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point), but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side, and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine – you can place them on separate machines and divide the workload between them.


Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning that they don’t have an HDMI port – and they are designed to be logged into and set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC for a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture the machine that deals with the desktop clients doesn’t have to do all the grunt work. It will accept tasks from the user and hosted applications, and then delegate the tasks between the 4 other machines.
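The delegation itself does not have to be fancy. A naive sketch of the idea, with invented addresses and message names rather than the actual Amibian.js scheduler, could look like this:

```javascript
// Naive sketch of the master delegating tasks to dedicated service nodes.
// Addresses, ports and message shapes are invented for the example.
const WebSocket = require("ws");

const nodes = {
  files: new WebSocket("ws://10.0.0.11:1970"),   // SBC running the file service
  auth:  new WebSocket("ws://10.0.0.12:1970"),   // SBC running authentication
  media: new WebSocket("ws://10.0.0.13:1970"),
  apps:  new WebSocket("ws://10.0.0.14:1970"),
};

// Fire-and-forget for brevity; a real dispatcher would wait for the "open"
// event and track replies by message id.
function delegate(service, task) {
  nodes[service].send(JSON.stringify(task));
}

// delegate("files", { cmd: "fs-read", path: "home:readme.txt" });
```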

Note: The number of SBC’s is not fixed. Depending on your use you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster shipping at around $250.

But as mentioned, you don’t have to buy this to use Amibian.js. You can install it on a single spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBC’s in the initial design all have GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes on our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40 core cluster I use has more computing power than northern Europe had in the early 80s; that’s something to think about. And the pricetag is under $300 (!). I don’t know about you, but I always wanted a proper mainframe: a distributed computing platform that you can log in to and that can perform large tasks while you do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant to emulate an Amiga directly in the browser – a more interesting design is to decouple the emulation from the output. In other words, run the emulation at full speed server-side, and just stream the display and sounds back to the Amibian.js display. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi threading) and fully utilize the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and produce applications for it. It’s always a bit of work to reach that point and create critical mass.


Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

 

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of systems. From a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can’t wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Mirroring groups on the MeWe network

November 18, 2018

Following my Administrator woes on Facebook post I have had a look at alternative places to run a forum. I realize that Facebook has become pretty ingrained in society around the world, so I know not everyone will be interested in a new venue. But honestly, MeWe is very simple to use and has a UI experience very close to the Facebook app.


This picture was flagged as “hateful” on Facebook, which has rendered my account frozen for the next 30 days. While I agree to the strict rules that FB advocates, they really must deploy more human beings if they intend to have success in this endeavour. And that means really investigating what is flagged, reading threads in all languages etc. Because the risk of flagging the wrong guy is just too high. Admins get flagged all the time for kicking out bullies, and the use of reporting tools as a revenge strategy *must* carry a penalty.

MeWe is thankfully not like G+, which (in my personal opinion) was counter-intuitive and downright intrusive. We all remember the G+ auto-upload feature, where some 3 million users had their family photos, vacation photos and .. ehrm, “explicitly personal” photos uploaded without consent.

Well, the MeWe app is very simple, and registration is as easy as it should be. You make a user name, a password, and type in your email; then you verify your email and that’s it!

Besides, my main use for Facebook or MeWe is to run the groups – I spend very little of my time socializing anyway. With the amount of groups and media I push on a daily basis, it’s quite frankly their loss.


The MeWe group functionality is very good, and almost identical to Facebook

The alternative to MeWe is to setup a proper web forum instead. I have bought 6 domains that are now collecting dust so yes, I will look into that – but the whole purpose of a social platform is that you don’t have to do maintenance beyond daily management – so MeWe saves us some time.

So head over to MeWe and register! Here are the two main groups I manage these days. The main groups are on Facebook, but I have now registered the same groups on MeWe.

MeWe doesn’t cost anything and takes less than 5 minutes to join. Just like G+ and Facebook, MeWe can be installed as an app on your phone (both iOS and Android). So as far as alternatives go, it’s a good one. One more app won’t do much harm, I imagine.

Note: I will naturally keep my Facebook account for the sake of the groups, but having experienced this 4 times in 9 years, my tolerance of Mr. Suckerberg is quickly reaching its limits. If I have blurted something out I have no problems standing for that and taking the penalty, but posting a picture of software development? In a group dedicated to software development? That takes some impressive mental acrobatics to accept.

Admin woes on Delphi Developer

November 17, 2018

For well over 10 years I have been running different interest groups on Facebook. While Delphi Developer is without a doubt the one that receives most attention from myself and my fellow moderators, I also run the Quartex Components group and lately, Amiga Disrupt. The latter dedicated to my favorite hobby, namely retro computing.

I have to say, it’s getting harder to operate these groups under the current Facebook regime. I applaud them for implementing a moral codex that is both fair and good, but that also means that their code must be able to distinguish between random acts of hate and bullying, and moderator operations.

A couple of days ago I posted an update picture from Amibian.js. This is a picture of my VMware development platform, with Pascal code, node.js and the HTML5 desktop running. You would have to be completely ignorant of technology not to recognize the picture as having to do with software development.


This picture was flagged as hateful, and was enough to get an admin’s account frozen for 30 days

Sadly Facebook contains all sorts of people, and for some reason even grown men will get into strange, ideological debates about what constitutes retro-computing. In this case the user was a die-hard original-Amiga fan who, on seeing my post about Amibian.js, went on a spectacular rant, listing in alphabetical and chronological order the depths of depravity that people have stooped to in implementing 68k as JavaScript.

Well, I get 2-3 of these comments a week and the rules for the group are crystal clear: if you post comments like that, or comments that are racist, hateful or otherwise regarded as provocative by the general group standard – you are given a single warning and then you are out.

So I gave him a warning that such comments are not welcome; he immediately came back with an even worse response – and that was the end of that.

But before I managed to kick the user, he reported a picture of Amibian as hateful. Again, we are talking about a screen-dump from VMware with Pascal code. No hate, no poor choice of images – nothing that would violate ordinary Facebook standards.

The result? Facebook has now frozen my account for 30 days (!)

Well, I’m not even going to bother being upset, because this is not the first time. When people seem to willfully seek out conflict, only to use FB’s reporting tools as weapons of revenge – well, there is not much I can do.

Anyways, Gunnar, Glenn, Peter and Dennis have got you covered – and I’ll see you in a month. I think it’s time I contact FB in Oslo and establish separate management profiles.

Delphi Developer Demo Competition votes

November 3, 2018

A month ago we set up a demo competition on Delphi Developer. It’s been a few years since we did this, and demo competitions are always fun no matter what, so it was high time we set this up!


This year’s prizes are awesome!

Initially we had a requirement of at least 10 contestants for the competition to go through, but I will make an exception this time. The prizes are great and worth a good effort. I was a bit surprised by the low number of contestants, since more than 60 developers signed our poll about the event. So I was hoping for at least 20, to be honest.

I think the timing was a bit off, we are closer to the end of the year and most developers are working under deadlines. So next year I think I’ll move the date to June or July.

Be that as it may – a demo competition is a tradition by now, so we proceed to the voting process!

The contestants

The contestants this year are:

  • Christian Hackbart
  • Mogens Lundholm
  • Steven Chesser
  • Jens Borrisholt
  • Paul Nicholls

Note: Dennis is a moderator on Delphi Developer, as such he cannot partake in the voting process.

The code

Each contestant has submitted a project to the following repositories (in the same order as the names above), so make sure you check out each one and inspect them carefully before casting your vote.

Voting

We use the poll function built into Facebook, so just visit us at Delphi Developer to cast your vote! You can only vote once, and there is a 1 week deadline on this (so voting closes on the 10th of this month).

Leaving The Smart Company

October 30, 2018

Effective immediately (30.10.2018) I am leaving The Smart Company AS and I have re-distributed my shares.

It’s almost unreal to think that it’s close to nine years since I started this project. Smart Mobile Studio continues to be a technology I am passionate about, and I must admit this is a tough call. It has taken me months to arrive at this decision, but I sadly see no other alternative given the circumstances.

In retrospect, we probably released the technology too early. I see more and more Delphi and C++ builder developers waking up to JavaScript and what web technology can do in the right hands. In other words, they are now where I was nine years ago.

When it comes to reasons, there is not really much to say. There have been a few internal issues that were unfortunate, but for me this boils down to time, money and vision. Not really anything juicy to share; I’m simply not interested in being a partner under the terms the board currently operates with – not because I don’t believe in the product, but because I find the modus operandi counterproductive.

Having said that, I am thankful for the journey and everything I have learned, and wish the team all the best for the future.

Smart Mobile Studio lives on

Even though I’m leaving the company and have re-distributed my stock, the product will continue without me. I still use and will continue to use Smart Mobile Studio in my work. But I no longer represent the company, nor will I be involved in further development. So my role as head of research and development is over.


My new compiler core and web IDE is written in Smart Pascal

There is a time and place for all things, and while it breaks my heart to hand Smart Mobile Studio over to a future without me; my time right now is better spent at Embarcadero – working to promote and deliver the language I love above all else; namely Delphi.

Besides Embarcadero I do consulting and occasional training sessions. I have also taken on responsibilities connected with my Patreon project. So I have more than enough to keep me occupied, both at Embarcadero and personally.

But this is not a clean cut. There is no animosity involved. I will continue to use Smart Mobile Studio to build cool stuff. I will publish articles on things I make and continue to evolve the QTX Framework (which has been dormant for two years now).

Sincerely

Jon L. Aasenden

Delphi Developer Competition

September 28, 2018

The Delphi Developer group on Facebook has been around for a few years, and in that time we have held two very interesting demo competitions. The last competition we held was for Smart Pascal (Smart Mobile Studio) only, but we are extending it to include the dialects supported by our group; meaning Delphi, Smart Pascal, Freepascal and Remobjects Oxygene!

Embarcadero shipped over some extra goodies for us, so the competition this year is indeed a magical one. The top 3 contestants all get the official Embarcadero T-Shirt. We also throw in 10 Sencha ball-pens for each of the top 3 contestants; this is in addition to the actual prizes listed below (!)

The #1 winner not only gets the 100€ FPGA devkit (see prizes below), he or she walks off with a high-quality, stainless steel Embarcadero branded coffee mug that holds half a litre of breakfast! (I seriously wanted to keep this for myself).


The prizes in all their glory!

Submission rules are:

  • Source submission (GPL, LGPL) + binary
  • No dependencies on commercial libraries or components
  • Submissions must be available through GIT or BitBucket
  • Submission must include everything it needs to be compiled

Submission categories are:

  • Graphical demo (demo-scene style)
  • Games and multimedia
  • General purpose (utility programs)

Use the following Google form to register:

The purpose of the submissions is to show off both the language and your skills. Back in 2013 we got a ton of really cool demo-scene stuff, demonstrating timeless techniques; everything from bouncing meta-balls, gouraud shaded vectors, sinus scroll-texts and webgl landscape flight. We also had a fantastic fractal explorer program, bitmap rotozoom generator – and two great games! Which both made it onto AppStore and Google Play!

First prize


The winner walks off with some exciting stuff!

The first prize this year is something really, really special. The winner walks off with a spiffing Altera Cyclone IV FPGA starter board. This is a spectacular FPGA kit that allows you to upload a wide range of ready-to-rock FPGA cores, as well as your own logic designs.

But to make it more accessible we added a retro daughter board, this gives you VGA, audio, keyboard, mouse, MicroSD, serial and two old school joystick ports. The daughterboard is needed if you plan on using some of the retro-cores out there. I personally love the Amiga core (shock, I know) but you can run anything from a humble Spectrum to Sega Megadrive, SNES, Atari ST/E, Neo-Geo and many others.

While the daughter-board makes this wonderful for retro-computing and gaming, FPGA is first and foremost a tool for engineering. It ships with a USB-Blaster which allows you to connect it directly to your PC, where it will be recognized as a device. FPGA modeling applications will pick this up and you can test out designs “live”, or just place a core on the SD-card and edit the boot config.

The kit sells for roughly 100€ with a case, but getting both the motherboard and the retro daughter-board is difficult. These things are sold separately, and the daughter board is produced in small numbers by dedicated hackers. So winning a kit that is pre-assembled, soldered and ready to go is quite a prize!

If you are even remotely interested in FPGA programming, this should give you goosebumps!

Second prize


The most powerful SBC I have ever used

The silver medal is the powerful Asus Tinkerboard; this is probably the most powerful SBC you can get below 100€. It delivers 10 times the firepower a Raspberry PI 3b can muster – and is superbly suited for Android development, Smart Mobile Studio kiosk systems and much, much more.

Of all the boards I have tested and own, this is the one with enough CPU grunt (even the mighty ODroid XU4 can’t touch it) to rival a low-end x86 laptop. You have to fork out for a SnapDragon IV to beat the Tinkerboard.

I have two of these around the house myself, one as a game console running Emulation Station (emulates PSX 1, 2 and 3 games), and another under my TV with Kodi and a 2 terabyte movie collection.

Third prize

Last but not least, the bronze medal is a Raspberry PI 3b. The PI should be no stranger to programmers today; it more or less defines the IOT revolution and has, by far, the biggest collection of software of all SBCs (single board computers) available today.


The device that represents the IOT phenomenon

The PI is a wonderful starter board for Delphi developers who want to play with hardware under Android. It’s also a fantastic board for Smart and FPC development.

I use a PI to test node.js services written in Smart Mobile Studio.

Dates

We start the clock on the 1st of October, and submissions must be delivered by the 31st. So you have a full month to code something cool!

Remember comments

While not always possible, try to write clean code. Part of the point here is to use these demos as an educational source.

We won’t reject non-commented code, but please try to avoid 20k lines of spaghetti.

Hints and tips

Delphi has brilliant support for DirectX and OpenGL, so taking advantage of hardware acceleration should not be a problem. FMX is largely powered by the GPU and has 3d rendering and modeling as an integral feature – so Delphi developers have a slight advantage there.


Tilesets are graphics-blocks that can be used to create large game levels with a map-editor

If you want to use DIB’s under vanilla WinAPI there is always Graphics32, a wonderful and exceptionally detailed library for fast graphics.

Music: Most demo-scene code uses mod music (though today people play MP3s as well), and there are good wrappers for player libraries like Bass. It’s always a nice touch to add a spot of music (and there are literally millions of mod tracks freely available). So give your demo some flair by adding a kick-ass mod track, or impress us by writing a score yourself.

In the world of demo coding anything goes! Bring out that teenage spirit and go wild, create wonderful graphical effects, vector objects, scrolling texts, games or whatever tickles your fancy. If you need inspiration, check out the demo scene videos on YouTube (if that is what you would like to submit of course). A kick-ass database application, X server renderer, paint program or a compiler — it’s all good!

Make people go WOW that is cool!

Tile graphics, which are often used in games and demos, can be found almost anywhere. If you google “tileset” or “game tiles” you should get more than you need. Brilliant for parallax scrolling. Why not give Super Mario a run for its money? Show the next generation how to code a platform game! Check out the Tiled map-editor; it has a JSON export filter for you Smart Pascal coders.


Tiled is a powerful map editor. There is also Mappy, which I believe has a Delphi player.
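If you go the Tiled route, the JSON it exports is easy to consume from JavaScript (and by extension from Smart Pascal). The sketch below draws a single tile layer onto a canvas; the field names follow the Tiled JSON format as I remember it, so double-check them against the Tiled documentation:

```javascript
// Draw one tile layer from a Tiled JSON export onto a canvas (sketch only;
// verify the field names against the current Tiled JSON documentation).
async function drawTileLayer(mapUrl, tilesetImg, canvas) {
  const map = await (await fetch(mapUrl)).json();
  const layer = map.layers.find((l) => l.type === "tilelayer");
  const ctx = canvas.getContext("2d");
  const tw = map.tilewidth, th = map.tileheight;
  const columns = tilesetImg.width / tw;        // tiles per row in the tileset image

  for (let i = 0; i < layer.data.length; i++) {
    const gid = layer.data[i];
    if (gid === 0) continue;                    // 0 means "no tile" in Tiled
    const index = gid - 1;                      // GIDs are 1-based
    const sx = (index % columns) * tw;
    const sy = Math.floor(index / columns) * th;
    const dx = (i % layer.width) * tw;
    const dy = Math.floor(i / layer.width) * th;
    ctx.drawImage(tilesetImg, sx, sy, tw, th, dx, dy, tw, th);
  }
}
```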

OK guys, the game is afoot! May the best coder win!

Smart Mobile Studio presentation in Oslo

September 28, 2018

Yesterday evening I traveled to Oslo and held a presentation on Smart Mobile Studio. The response was very positive and I hope that everyone who attended left with some new ideas regarding JavaScript, the direction the world of software is heading – and how Smart Mobile Studio can be of service to Delphi.

Smart Pascal is especially exciting in concert with Rad-Server, where it opens the doors to Node based, platform independent services and sub clustering. With relatively little effort Rad-Server can absorb the wealth that node has to offer through Smart – but on your terms, and under Delphi’s control. The best of both worlds.

You get the stability and structure that makes Delphi so productive, and then infuse that with the flamboyance, flair and async brilliance that JavaScript represents.

More important than technology is the community! It’s been a few years since I took part in the Oslo Delphi Club’s meetups, so it was great to chat with Halvard Vassbotten, Trond Grøntoft, Alf Christoffersen, Torgeir Amundsen and Robin Bakker face to face again. I also had the pleasure of meeting some new Delphi developers.


Presentation at ABG Sundal Collier’s offices in Oslo

Thankfully the number of attendees was a moderate 14, considering this was my first presentation ever. Last time I visited was when our late Paweł Głowacki presented FMX, and the turnout was in the ballpark of a hundred. So it was an easy-going, laid-back atmosphere throughout the evening.

Conflict of interest?

Some might wonder why a person working for Embarcadero will present Smart Mobile Studio, which some still regard as competition. Smart is not in competition with Delphi and never will be. It is written by Delphi developers for Delphi developers as a means to bridge two worlds. It’s a project of loyalty and passion. We continue because we love what it enables us to do.

The talks on Smart that I am holding now, including the November talk in London, were booked before I started at Embarcadero (so it’s not a case of me promoting Smart in lieu of Embarcadero). I also made it perfectly clear when I accepted the job that my work on Smart will continue in my spare time. And Embarcadero is fine with that. So I am free to spend my after-work hours and weekend time as I see fit.


The Smart Desktop, codename Amibian.js, is a solid foundation for building large-scale web front-ends. Importing Sencha’s JS API’s can be done via our TypeScript wizard

So, after my presentation in London in November, Smart Mobile Studio presentations (at least hosted by me) can only take place during weekends. Which is fair, and the way it should be.

Recording the English version

Since the presentation last evening was in Norwegian, there was little point in recording it. Norway has a healthy share of Delphi developers, but a programming language available internationally must be presented in English.

A couple of months back, before I started working for Embarcadero, I promised to do a video presentation that would be available on Delphi Developer and YouTube. I very much like to keep that promise. So I will re-do the presentation in English as soon as possible. I would have done it today after work, but buying tech from the US has changed quite dramatically in just a couple of years.

In short: I haven’t received the remaining equipment I ordered for professional video recording and audio podcasting (which is a part of my Patreon offering as well). As such there will be no live video-feed /slash/ webinar, and questions will be limited to either the comment section on Delphi Developer or, perhaps more appropriately, the Smart Mobile Studio forums.

I’m hoping to get the HD camera, mic table-arm and various bits-and-bobs I ordered from the US sometime next week. I have no idea why FedEx has become so difficult lately, but the package is apparently at LaGuardia, and I have to send receipts documenting that these items are paid for before they ship them abroad (so the package manifest listing me as the customer, my address, phone number and receipt from the seller is somehow not enough). This is a first for me.

Interestingly they also stopped a package from Embarcadero with giveaways for my upcoming Delphi presentation in Sweden – at which point I had to send them a copy of my work contract to prove that I indeed work for an American company.

But a promise is a promise, so come rain or shine it will be done. Worst case scenario we can put Samsung’s claims to the test and hook up a mic + photo lens and see if their commercials have any merit.

Linux: political correctness vs Gnu-Linux hacker spirit

September 26, 2018

Unless you have been living under a rock, the turmoil and crisis within the Linux community these past weeks cannot have escaped you. And while not directly connected to Delphi or the Delphi Developer group on Facebook, the effects of a potential collapse within the core Linux development team will absolutely affect how Delphi developers go about their business. In the worst possible scenario, should the core team and its immediate extended developers decide to walk away, their code walks with them – rendering much of the work countless companies have invested in the platform unreliable at best, or in need of a rewrite at worst (there is a legal blind-spot in GPL revision 1 and 2, allowing developers to rescind their code).

Large parts of the kernel would have to be re-invented, a huge chunk of the sub-strata and bedrock that distributions like Ubuntu, Mint, Kali and others rests on – could risk being removed, or rescinded as the term may be, from the core repositories. And perhaps worst of all, the hundreds of patches and new features yet to be released might never see the light of day.

To underline just how dire the situation has been the past couple of weeks, Richard Stallman, Eric S. Raymond, Linus Torvalds and others are threatening, openly and legally, to pull all their code (September 20th, Linux Kernel Mailing List) if the bullying by a handful of activist groups doesn’t stop. Linus is still in limbo, having accepted the code of conduct these activists demand implemented, but has yet to return to work.


Linus Torvalds is famous for many things, but his personality is not one of them

But the interesting part of the Linux debacle is not the ifs and buts, but rather the methods used by these groups to get their way. How can you enforce a "code of conduct" using methods that are themselves in violation of that code of conduct? It really is a case of "do as I say, not as I do"; and it has escalated into a gutter fight masquerading as social warfare, where slander, stigma, false accusations and personal attacks of the worst possible type are the weapons. All of which is now having a real and tangible impact on business and technology.

Morally bankrupt actions are not activism

These activists, if they deserve that title, even went as far as digging into the sexual life of one of the kernel developers. And when they found out that he was into BDSM (a form of sexual role-play), they publicly stigmatized the coder as a rape sympathizer (!). Not because it's true, but because the verbal association alone makes it easier for bullies like Coraline to justify the social execution of a man in public.

What makes my jaw drop in all this is the complete lack of compassion these so-called activists demonstrate. They seem blind to the price of such a stigma for the innocent, not to mention the insult to people who have suffered sexual abuse in their lives. For each false accusation of rape that is made, the difficulty for actual abuse victims to seek justice increases exponentially. It is a heartless, unforgivable act.

Personally, I can't say I understand the many sexual preferences people have these days; I find myself googling what the different abbreviations mean. The movie 50 Shades of Grey revolved around this stuff. But one thing is clear: as long as there are consenting adults involved, it is none of our business. If there is evidence of a crime, then it should be brought before the courts. And no matter what we might feel about the subject at hand, it can never justify stigmatizing a person for the rest of his life. Not only is this a violation of the very code of conduct these groups want implemented – it's also illegal in most of the civilized world. And damn immoral and out-of-line if you ask me.

The goal cannot justify the means

The irony in all of this is that the accusation came from Coraline herself. A transgender woman born in the wrong body; a furious feminist now busy fighting to put an end to bullying of transgender minorities in the workplace (which she claims is the reason she got fired from Github). Yet she has no problem being the worst kind of bully herself on a global scale. I question if Coraline is even morally fit to represent a code of conduct. I mean, to even use slander such as "rape sympathizer" in connection with getting a code of conduct implemented? Digging into someone's personal life and then using their sexual preference as leverage? It is utterly outrageous!

It is unacceptable and has no place in civilized society. Nor does a code of conduct, beyond ordinary expectations of decency and tolerance, have any place in a rebel motivated R&D movement like Linux.

Linux is not Windows or OS X. It was born out of the free software movement (Stallman with GNU) and the Scandinavian demo and hacker scene of the 80's and 90's (the Linux kernel that GNU rests on). This is hacker territory, and what people might feel about this in 2018 is utterly irrelevant. These are people that start the day with 4chan, for Pete's sake! The primary motivation of Stallman and Linus is to undermine, destroy and bury Microsoft and Apple in particular. And they have made no secret of this agenda.

Expecting Linux or its makers to be politically correct is infantile and naive, because Linux is at its heart a rebellion, "a protest of technical excellence and access to technology undermining commercial tyranny and corporate slavery". That is not my personal opinion, that comes straight out of Richard Stallman's book Free as in Freedom; his papers read more like a religious manifesto, a philosophical foundation for a technological utopia, seeded and influenced by the hippie spirit of the 1960s. Which is where Stallman's heart comes from.

You cannot but admire Stallman for sticking to his principles for 50+ years. And thinking he is going to just roll over because activists in this particular decade have a beef with how hackers address each other or comment their code, well — I don't think these activists understand the hacker community at all. If they did they would back off and stop poking dragons.

Linux vs the sensitivity movement?

Yesterday I posted a video article that explained some of this in simple, easy terms on Delphi Developer. I picked the video that summed up the absurdities involved (as outlined above) as quickly as possible, rather than some 80 minute talk on YouTube. We have a long tradition of posting interesting IT news, things that are indirectly connected with Delphi, C++ builder or programming in general. We also post articles that have no direct connection at all – except being headlines within the world of IT. This helps people stay aware of interesting developments, trends and potential investments.


The head of the “moral codex” doesn’t strike me as unbiased and without an axe to grind

As far as politics is concerned I have no interest whatsoever. Nor would I post political material in the group, because our focus is technology: Delphi, Object Pascal and software development in general. The exception being if a bill or law is passed in the US or EU that affects how we build applications or handle data.

Well, this post was no different.

What was different was that some individuals are so acclimatized to political debate that they interpret everything as a political statement. So criticism of the methods used is made synonymous with criticism of a cause. This can happen to the best of us; human beings are passionate animals, and I think we can all agree that politics has taken up an unusual amount of space lately. I can't ever remember politics causing so much bitterness, divide and hate as it does today. Nor can I remember sound reason being pushed aside in favour of immediate emotional trends. And it really scares me.

Anyways, I wrote that "I stand by my god given rights to write obscene comments in my code", which is a reference to one of the topics Linus is being flamed for, namely his use of the F word in his own code. My argument is that the kernel is ultimately Torvalds' work, and it's something he gives away for free. I don't have any need for obscenity in my code, but I sure as hell reserve the right to use it in my personal projects. How two external groups (in this case a very aggressive feminist group combined with LGBTQIA) should have any say in how Linus formats his code (or you for that matter) or the comments he writes – it makes no sense. It's free, take it or leave it. And if you join a team and feel offended by how things are done, you either ignore it or leave.

It might not be appropriate of Linus to use obscenity in his comments, but do you really want people to tell you what you can or cannot write in your own code? Lord knows there are pascal units online that have language unfit for publishing, but nobody is forcing you to use them. I can't stand Java, but I don't join their forums and sit there like a 12-year-old bitching about how terrible Java is. It's just an infantile, absurd mentality.

So that is what my reference was to, and I took for granted that people would pick up on that, since Linus is infamous for his spectacular rants in the kernel (and verbally in interviews). Some of his commits have more rants than code, which I find hilarious. There is a collection of them online and people read them for kicks because he is, for all intents and purposes, the Gordon Ramsay of programming.

And I also made a reference to "tree hugging millennial moralists". Not exactly hard-core statements in these trying times. We live in a decade where vegan customers are looking to sue restaurants for serving meat. Maybe I'm old-fashioned, but for me that is like something out of Monty Python or Mad Magazine. I respect vegans, but I will not be dictated to by them.

I mean, the group people call millennials is after all recognized as a distinct generation due to a pattern of unreasonable demands on society (and in extreme cases, reality itself). In some parts of the world this is a real problem, because you have a whole generation that expects to bag an upper-management salary on a paper route. When this is not met you face a tantrum and aggressiveness that should not exist beyond a certain age. Having a meltdown like a six-year-old when you are twenty-six is, well, not something I'm even remotely interested in dealing with.

And I speak from experience here; I had the misfortune of working with one extreme case for a couple of years. He had a meltdown roughly once a month and verbally abused everyone in the office, including his boss. I still can't believe the boss put up with it for so long; a lesser man would have physically educated him on the spot.

The sensitivity movement

But (and this is important) like always, a stereotype is never absolute. The majority within the millennial age group are nothing like these extreme cases. In fact we have two administrators in Delphi Developer that both fall under the millennial age group – yet they are the exact opposite of the stereotype. They are extremely hard-working, demonstrate good morals and good behavior, they give of themselves to the community and are people I am proud to call my friends.

The people I refer to as the sensitivity movement consist of men and women that hold, in my view, unreasonable demands on life. We live in times where, for some reason, and don't ask me why, minorities have gotten away with terrible things (slander, straw-man tactics, blame shifting, perversion of facts, verbal abuse, planting dangerous rumours and false accusations; things that can ruin a person for life) to impose their needs as opposed to the greater good and the majority. And no, this has nothing to do with politics; it has to do with an expectation of normal decency and minding your own business. As a teenager I had my share of rebellion (some would say three shares), but I never blamed society; instead I wanted to understand why society was the way it is, which led me to studying history, comparative religion and philosophy.

The minorities of 2018 have no interest in understanding why; they mistake preference for offence, confuse kindness with weakness – and are painfully unable to discern knowledge from wisdom. The difference between fear and respect might be subtle, but on reflection a person should make important discoveries about their own nature. Yet this seems utterly lost on men and women in their 20s today.

And just to make things crystal clear: the minorities I am referring to here as the so-called sensitivity movement are not the underprivileged or individuals suffering a disadvantage. They are in fact highly privileged individuals – enjoying the very freedom of expression they so eagerly want taken away from people they don't like. That is a very dangerous path.

Linux, the bedrock of the anti-establishment movement

The Linux community has a history of being difficult. Personally I find them both helpful and kind, but the core motivation behind Linux as a phenomenon cannot be swept under the carpet or ignored: these are rebels, rogues, people who refuse to bend the knee.

Linux itself is an act of defiance, and it exists due to two key individuals who both are extremely passionate by nature, namely Richard Stallman and Linus Torvalds.

Attacking these two from all sides is shameful. I find no other words for it. Especially since it's not a matter of principles or sound moral values, but rather a matter of pride and selfish ideals.

Name calling will not be tolerated

The reason I wrote this post was not to involve everyone in the dire situation of Linux, or to bring an external problem into our community and make it our problem. It was simply news of some importance.

I wrote this blogpost because a member somehow nicknamed me "maga right-wing" something. And I'm not even sure how to respond to something like that.

First of all I have no clue what maga even is; I think it's that cap slogan Trump uses? Make America Great Again or something like that? Secondly, I live in Norway and know very little of the intricacies of domestic American politics. I have voted left for some 20 years, with the exception of the last Norwegian election, when I voted center. How my respect for Stallman and Linus, and for how the hacker community operates (I grew up in the hacker community), somehow connects me to some political agenda on another continent is quite frankly beyond me.

But this is exactly the thing I wrote about above – the method being deployed by these groups. A person reads something he or she doesn't like, connects that to a pre-defined personality type, this is then supposed to justify wild accusations – and he or she then proceeds directly to treating someone accordingly. THAT behavior IS offensive to me, because there should be a dialog quite early in that chain of events. We have dialog to avoid causing harm – not as a means to cause further damage.

Is it the end of Linux as we know it?

No. Linus has been a loud mouth for ages, and he actually has people who purge his code of swear words (which is kinda funny) – but he has accepted the code of conduct and taken some time off.

The threat Stallman and the core team have made, however, is very real, meaning that the inner circle of Linux developers can flick the kill switch if they want to. But I think the negative press Coraline and those forcing their agenda onto the Linux Foundation are getting will make them regret it. And of course, articles like the one the New Yorker published didn't help the situation.

Having said that, these developers are not normal people. Normal is a cut of average behavior, and neither Stallman, Linus nor the hacker community fall under the term "normal" in the absolute sense of the word. Not a single individual that has done something of technological importance falls into that group. Nor do they have any desire to be normal, which is a death sentence in the hacker community. The lowest, most worthless status you can hold as a hacker is normal.

These are people who build operating systems for fun. They are passion driven, artistic and highly emotional. And as such they could, should more gutter tricks be deployed, decide to burn the house down before they hand it over.

So it’s an interesting case well worth keeping an eye on. Preferably one that doesn’t add or subtract from what is there.

Help&Doc, documentation made easy

September 13, 2018 Leave a comment

I have been flamed so much lately for not writing proper docs for Smart Mobile Studio that I figured it was time to get this sorted. Now, in my defence, I'm not the only one on the Smart Pascal team; sure, I make the most noise, but Smart is definitely not a solo operation.

So the irony of getting flamed for lack of docs, having perpetually lobbied for docs at every meeting since 2014 – well, that, my friend, is mother nature at her finest. If you stick your neck out, she will make it her personal mission to mess you up.

So off I went in search of a good documentation system ..

The mission

My dilemma is simple: I need to find a tool that makes writing documentation simple. It has to be reliable, deal with cross chapter links, handle segments of code without ruining the formatting of the entire page – and printing must be rock solid.


Writing documentation in Open Office feels very much like this

If you are pondering why I even mention printing in this digital age, it's because I prefer physical media. Writing a solid book, be it a mix between technical reference and user's guide, can't be compared to a blog post. You need to let the material breathe for a couple of days between sessions to spot mistakes. I usually print things out, let them rest, then go over them with an old-fashioned marker.

Besides, my previous documentation suite couldn't do PDF printing. I'm sure it could, just not around me. Whenever I picked the Microsoft PDF printer as the output, it promptly committed suicide. Not even an exception, nothing, just "poff" and it terminated. The first time this happened I lost half a day's work. The third time I uninstalled it, never to look back.

Another thing I would like to see is that the program deals with graphics more efficiently than Google Docs, and at the very least more intuitively than Open Office (Oo for short). Now before you argue with me over Oo, let me just say that I'm all for Open Office; it has a lot of great features. But in their heroic pursuit of cloning Microsoft to death, they also cloned possibly the worst layout mechanisms ever invented, namely the layout engine of Microsoft Word 2001.

Let's just say that scaling and perspective are not the best in Open Office. Like Microsoft Word back in the day, it faithfully favours page-breaks over perspective-based scaling. It will even flip the orientation if you don't explicitly tell it not to.

Help & Doc

As far as I know, there are only two documentation suites on the market related to Delphi and coding – at least when it comes to producing technical manuals and help files while being written in Delphi.

First you have the older and perhaps more established Help & Manual. This is followed by the younger but equally capable Help & Doc. I ended up with the latter.


Help & Doc’s main window, clean and pleasing to the eye

Both suites have more in common than similar names (which is really confusing); they offer pretty much the exact same functionality. Except Help & Doc is considerably cheaper and has a couple of features that developers favour. At least I do, and I imagine the needs of other developers will be similar.

Being older, Help & Manual has built up more infrastructure, something which can be helpful in larger organizations. But their content-management strategy is (at least to me) something of a paradox. You need more than .NET documentation and shared editing to justify the higher price – and having to install a CMS to enjoy shared editing? It might make sense if you are a publisher, a ghostwriter or if you have a large department with 5+ people doing nothing but documentation; but competing against Google Documents in 2018? Sorry, I don't see that going anywhere.

For me, Help & Doc makes more sense because it remains true to its basic role: to help you create documentation for your products. And it does that very, very well.


Help & Doc has a built-in server for testing web documentation with a minimum of fuss

I also like that Help & Doc is crystal clear about its origins. Help & Manual has anonymized its marketing to tap into .NET and Java; they are not alone, quite a few companies try to hide the fact that their flagship product is written in Object Pascal. So you get a very different vibe from these two websites and their products.

The basics

Much like the competition, Help & Doc offers a complete WYSIWYG text editor with support for computed fields. So you can insert fields that hold variable data, like time, date (and various pieces of a full datetime), project title, author name [and so on]. I hope to see script support at some point here, so that a script could provide data during PDF/Web generation.

The editor is responsive and well written, supports tables, margins and formatting like you expect from a modern editor. Not really sure how much I need to write about a text editor, most Delphi and C++ developers have high standards and I suspect they have used RichView, which is a well-known, high quality component.

One thing I found very pleasing is that fonts are not hidden away but easily accessible; various text styles hold a prominent place under the Write tab on top of the window. This is very helpful because you don’t have to apply a style to see how it will look, you can get an idea directly from the preview.


Very nice, clear and one click away

Being able to insert conditional sections is something I found very easy. It’s no doubt part of other offerings too, but I have never really bothered to involve myself. But with so many potential targets, mobile phones, iPads, desktops, Kindle – suddenly this kind of functionality becomes a thing.


Adding conditional sections is easy

For example, if you have documentation for a component, one that targets Delphi, .NET and COM (yes, people still use COM, believe it or not), you don't need 3 different copies of the same documentation – with only small variations between them. Using the conditional operations you can isolate the differences.

With Apple OSX, iOS and Android added as compiler targets (for Delphi), there is a real need to separate Apple-only instructions on how to use a library [for example] and then only include those in the Apple output. Windows and Linux can have their own, unique sections — and you don't need to maintain 3 nearly identical documentation projects.

When you combine that with script support, Help & Doc is flexing some powerful muscles. I’m very much impressed and don’t regret getting this over the more expensive Help and Manual. Perhaps it would be different if I was writing casual books for a publisher, or if I made .NET components (oh the humanity!) and desperately needed to please Visual Studio. But for a hard-core Delphi and object pascal developer, Help & Doc has everything I need – and then some!

Wait, what? Script support?

Scripting docs

One of the really cool things about Help & Doc is that it supports pascal scripting. You can do some pretty amazing things with a script, and being able to iterate through the documentation in classical parent / child relationships is very helpful.


The central role of Object Pascal is not exactly hidden in Help & Doc

If you are wondering why a script engine would even be remotely interesting for an author, consider the following: you maintain 10-12 large documentation projects, and at each revision there will be plenty of small and large changes. Things like class-names getting a new name. If you have mentioned a class 300 times in each manual, changing a single name is going to be time-consuming.

This is where scripting is really cool, because you can write code that iterates through the documentation, chapter by chapter, section by section, paragraph by paragraph – and automatically replaces all of them in a second.
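
Purely as an illustration of the idea, a rename script could look roughly like the snippet below. The identifiers used here (TTopic, Project.RootTopic, ChildCount and so on) are hypothetical stand-ins, not the documented Help & Doc scripting API; only the traversal-and-replace pattern is the point:

// Hypothetical sketch - TTopic, Project.RootTopic etc. are placeholder
// names, not the actual Help & Doc API. The pattern is what matters:
// visit every topic recursively and replace the old class name.
procedure RenameClassInTopic(Topic: TTopic; const OldName, NewName: string);
var
  i: integer;
begin
  Topic.Text := StringReplace(Topic.Text, OldName, NewName, [rfReplaceAll]);
  for i := 0 to Topic.ChildCount - 1 do
    RenameClassInTopic(Topic.Children[i], OldName, NewName);
end;

begin
  RenameClassInTopic(Project.RootTopic, 'TOldClassName', 'TNewClassName');
end.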


Metablaster was a desktop search engine I made in 1999. I used scripts to target each search engine

I haven’t spent a huge amount of time with the scripting API Help & Doc offers yet (more busy writing), but I imagine that a plugin framework is a natural step in its evolution. I made a desktop search engine once, back between 1999 and 2005 (just after the bronze age) where we bolted Pascal Script into the system, then implemented each search engine parser as a script. This was very flexible and we could adapt to changes faster than our competitors.

While I can only speculate, and hope the makers of Help & Doc read this, my wish would be an API that exposes a fair subset of Delphi (streams, files, string parsing et al.) to scripts, and then defines classes for import scripts, export scripts and document-processing scripts. That way developers could write their own import code to support a custom format (medical documentation springs to mind as an example), and likewise their own export code.

This is a part of the software I will explore more in the weeks to come!

Verdict – is it worth it?

As of writing you get Help & Doc Professional at 249 €, and you can pick up the Standard edition for 99 €. Not exactly an earth-shattering price for the mountain of work involved in creating such an elaborate system. If you factor in how much time it saves you: yes, why on earth would you even think twice!


Using Help & Doc is very easy, here we are creating a new doc with a few chapters

I have yet to find a function that their competition offers that would change my mind. As a developer who is part of a small team, or even as a solo developer – documentation has to be there. I can list 10.000 reasons why Smart never got the documentation it deserves, but at least now I can scratch one of them off my list. Writing 500 A4 pages in markdown would have me throwing myself into the fjords at -35 degrees Celsius.

And being the rogue that I am, should I find intolerable bugs you will be sure to hear about them — but I have nothing to complain about here.

It's one of the most pleasant pieces of software I have used in a long time.

Human beings and licenses

Before I end this article, I also want to mention that Help & Doc has a licensing system that surprised me. If you buy 2 licenses, for example, you get to link each of them to a computer, so you have very good control over your ownership. Should you run out of licenses, well, then you either have to relocate an existing license or get a new one. You are not locked out and they don't frag you with compliance threats.


Doesn’t get much easier than this

I use VMWare a lot and sometimes forget that I’m running a clone on top of a clone, and believe me I have gotten some impressive emails in the past. I think the worst was Xamarin Mono which actually deactivated my entire environment until I called them and explained I was doing live debugging between two VMWare instances.

So very cool to see that you can re-allocate an existing license to whatever device you want without problems.

To sum up: worth every penny!

HexLicense, Patreon and all that

September 6, 2018 Comments off

Apparently, using a modern service like Patreon to maintain components has become a point of annoyance and confusion. I realize that I formulated the initial HexLicense post in a somewhat vague and confusing way; in retrospect I will admit that, and also take critique for not spending a little more time on preparations.

Having said that, I also corrected the mistake quickly and clarified the situation. I feel some of the comments have been excessively critical of something that, ultimately, is a service to the community. But I'll roll with the punches, so let's just put this issue to bed.

From the top please

I have several products and frameworks that naturally take time to maintain and evolve. And having to maintain websites, pay for tax and invoicing services, pay for hosting (and so on), well, it consumes a lot of hours. Hours that I can no longer afford to spend (my work at Embarcadero must come first, I have a family to support). So Patreon is a great way to optimize a very busy schedule.

Today developers solve a lot of the business strain by using Patreon. They make their products open source, but give those that support and help fund the development special perks, such as early access, special builds and a more direct line of control over where the different projects and sub-projects are heading.

The public repository that everyone has access to is maintained by pushing the code at intervals, meaning that the public "free stuff" (LGPL v3 license) will be some months behind the early access that patrons enjoy. This is common, and the same approach both large and small teams take in 2018. Quite radical compared to what we "old-timers" are used to, but that's how things work now. I just go with the flow and try to do the most amount of good on the journey.

Benefits of Patreon

The benefits are many, but first and foremost it has to do with time. Developers don't have to maintain 3-4 websites, pay for invoicing services on said products, pay hosting fees and rent support forums — instead focus is on getting things done. So instead of an hour here and there, you can (based on the level of support) allocate X continuous hours within a week or weekend.


Patreon solves two things: time and cost

Everyone wins. Those that support and help fund the projects enjoy early access and special builds. The community at large wins because the public repository is likewise maintained, albeit somewhat behind the cutting-edge code patrons enjoy. And the developer wins because he or she doesn't have to run around like a mad chicken maintaining X number of websites – wasting more time doing maintenance than building cool new features.

 

And above all, pricing goes down. By spreading the cost over a larger base of interest, people get access to code that used to cost $200 for $35. The more people that helps out, the more the cost can be reduced per tier.

To make it crystal clear what the status of my frameworks and component packages are, here is a carbon copy from HexLicense.com

For immediate release

Effective immediately HexLicense is open-source, released under the GNU Lesser General Public License v3. You can read the details of that license by clicking here.

Patreon model

In order to consolidate the various projects I maintain, I have established a Patreon account. This means that people can help fund further development on HexLicense, LDEF, Amibian and various Delphi libraries as a whole. This greatly simplifies things for everyone.

I will be able to allocate time based on a broader picture; I also don't need to pay for invoicing services, web hosting and more. This allows me to continue to evolve the components and code, but without so many separate product identities to maintain.

Patreon supporters will receive updates before anyone else and have direct access to the latest code at all times. The public bitbucket repository will be updated at intervals, but will consequently be behind the Patreon updates.

Further security

One of the core goals on Patreon is the evolution of a bytecode compiler. This should be of special interest to HexLicense users. Being able to compile modules that hackers will be unable to debug gives you a huge advantage. The engine is designed so that the instruction-set can be randomized for a particular build. Making it unique for your application.


The LDEF assembler prototype running under Smart Mobile Studio

Well, I want to thank everyone involved. It has been a great journey to produce so many components, libraries and solutions over the years – but now it’s time for me to cut down on the number of projects and focus on core technology.

HexLicense with the updated license files will be uploaded to BitBucket shortly.

Sincerely,

Jon Lennart Aasenden

 

 

Support my work on Patreon, get awesome stuff

September 2, 2018 3 comments

For well over a decade now I have tried my best to be of service to the Delphi community. I run six pascal forums on Facebook, I teach Delphi for free in my spare time and I help people solve problems, find jobs and get inspired.

“to utterly re-write the traditional development toolchain and create
a desktop environment and development studio that is unbound
by chipset, cpu and platform”

I am about to embark on the biggest journey I have ever undertaken, namely to deliver a set of technological platforms that, combined, will give both users and developers unprecedented advantages.


Support my work by becoming a patron

The challenge with new and awesome technology is that it can be difficult to convey. The full implications of something revolutionary need a little bit of gestation, maturity and overview before the "OMG" factor hits home. But thankfully the Delphi and Smart Pascal community is amongst the most learned, creative and innovative I have ever seen. Not to mention the Amiga retro scene that has also supported me – a group made up of hardware wizards, FPGA programmers and hackers that eat assembly code for breakfast.

I won't dazzle you with empty promises or quick fixes. Every part of what I present here is rooted in code I have running in my lab. I hope that the doors Smart Mobile Studio has opened, the work I have done on the RTL and the products I have made have at least earned me your patience; and that you will read this and see if it's worthy of your support.

Context

When we released Smart Mobile Studio 3.0 we made a live web desktop demo to showcase some of the potential the technology has to offer. What was not mentioned was that this was in fact not a mockup or slap-dash demo intended to impress you with Quake III or the Bassoon music tracker. It has deeper roots and is a re-formation of the Quartex Desktop API that has been an essential part of Smart Mobile Studio since the beginning.

The desktop, codename Amibian.js, is actually a platform that is a part of a larger, loftier goal. One that was outlined to investors as early as 2013. Sadly I was unable to secure funds for it, despite the fact that two companies are using the prototype for kiosk and embedded systems already (city kiosk terminals in Spain running on ODroid XU4 ARM boards, and also an educational platform for schools in New Zealand).

The goal, to cut it short, is quite simply: to utterly re-write the traditional development toolchain and create a desktop environment and development studio that is unbound by chipset, CPU and platform. In other words, to re-implement and build a "visual studio" environment that lives completely in the cloud, that can be accessed by any modern browser, on any operating system, anywhere in the world.

I’m not talking about Notepad or Ace here, I am talking about a complete IDE with form designer, database designer, cloud endpoints, multi language support and above all – the ability to compile and deploy both virtual and native applications through established build services. All of it JavaScript, all of it running on Node.js, Electron or HTML5.

You won't be drag & dropping components, you will be dropping entire ecosystems.

Smart Mobile Studio, new tools for a new age

When I started some eight years ago, this would have been impossible. There were no compilers that could take a complex language like Object Pascal or C++ and successfully express it as JavaScript. JavaScript on its own, at least compared to C++ or Delphi, is quite poor. Things we take for granted like classes, linear inheritance, virtual and abstract methods (which require a VMT), interfaces (and more) simply do not exist. There have been some advances lately of course, but JavaScript is, and will always be, a prototype-based runtime system.

For eight years the Smart Mobile Studio team have worked to create the ecosystem needed to make large-scale application development for JSVM (Javascript virtual machine, the browser, Phonegap, NodeJS and more) a reality. We have forged the compiler, the support code and an RTL spanning thousands of classes from scratch.

It is now possible to write JS-based applications that rival native applications both in scope and complexity. This has without a doubt been one of the hardest tasks I have ever been involved in.

With Smart Mobile Studio in place and the foundation stone set – we can finally get to work on the real product. Namely a cloud forge unlike any other.

The Amibian desktop environment

The desktop platform that forms the basis of my work Рwas nicknamed Amibian due to its visual inspiration from Amiga OS 4.1, a modern but somewhat obscure operating system for PPC computers. But while there are cunning visual similarities, Amibian.js is a very different beast under the hood.

First of all, Amibian.js is written from scratch to be cloud oriented. The Ragnarok message server at the heart of the system is capable of handling hundreds of users, each dispatching high data volumes simultaneously. It is a server system that is designed from scratch to be clustered, scalable and distributed.


The Ragnarok message protocol performs brilliantly, here testing IO messages live

You can run it together with the client, forming an OS much like ChromeOS, on something as small as a Tinkerboard ($70 embedded board) or scale it to a 100 node Amazon cluster. If node.js can be installed, Amibian can run. CPU or chipset is quite frankly irrelevant.

This is the foundation that the next generation IDE and compiler toolchain will be built on. A toolchain that doesn’t care if you prefer Linux, Windows, OSX or Android.

If you have a HTML5 compliant browser, you can create full-scale applications with the same level of depth as Delphi, and target 8 operating systems and more than 50 embedded devices.

What does that mean for Delphi users

Like Smart Mobile Studio, Amibian is not meant to compete with Delphi. It is designed to complement and extend Delphi – allowing Delphi developers to reach avenues where native code might be impractical or less cost-effective.

The new compiler is based around the LDEF virtual machine specification that I drafted in spring 2018. It is written in Smart Pascal and runs on every system that node.js supports (which as of writing is substantial). LDEF is a bytecode specification designed to make native code generation easy. Unlike .NET or Java, LDEF is a register-based virtual machine. It is a cross-section of how ARM, x86 and MC68000 CPUs work in real life. It has stacks, registers, condition flags, data control, program control, absolute and relative addressing; and of course instructions that all CPUs support.
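
To give a feel for what "register-based" means in practice, here is a tiny, generic dispatcher written in Object Pascal. This is not the LDEF instruction set (LDEF defines its own opcodes, registers and encoding in the specification); it only illustrates the principle of registers, a condition flag and a fetch/decode/execute loop:

// Generic sketch of a register-based dispatcher - NOT the actual LDEF
// instruction set. Registers, a zero flag and a program counter drive
// a simple fetch/decode/execute loop.
type
  TOpCode = (opLoadConst, opAdd, opJmpIfNotZero, opHalt);

  TInstruction = record
    Op: TOpCode;
    Target, Source: byte;   // register indexes
    Value: integer;         // immediate value or absolute jump target
  end;

procedure RunProgram(const Code: array of TInstruction);
var
  Regs: array[0..15] of integer;
  ZeroFlag: boolean;
  pc: integer;
begin
  FillChar(Regs, SizeOf(Regs), 0);
  ZeroFlag := false;
  pc := 0;
  while pc <= High(Code) do
  begin
    case Code[pc].Op of
      opLoadConst:
        Regs[Code[pc].Target] := Code[pc].Value;
      opAdd:
        begin
          inc(Regs[Code[pc].Target], Regs[Code[pc].Source]);
          ZeroFlag := Regs[Code[pc].Target] = 0;
        end;
      opJmpIfNotZero:
        if not ZeroFlag then
        begin
          pc := Code[pc].Value;  // jump to absolute instruction index
          continue;
        end;
      opHalt:
        break;
    end;
    inc(pc);
  end;
end;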


The LDEF assembler is implemented completely in Smart Pascal. The picture shows the testbed with a visual coding editor. The assembler is meant to run under node.js server-side but can also be hosted on a website or post compiled into a native executable

When executing this bytecode under JavaScript, the runtime uses the subset of JavaScript called “Asm.js” out of the box. AsmJS is more mature than WebAssembly and less restrictive (modules are not sandboxed from the DOM). So to make it short: the code runs close to native courtesy of JIT optimization.

LDEF is modular, meaning that the parser, compiler, assembler and codegen (the part that converts bytecodes into something else) are separate modules. Writing a WebAssembly codegen, x86 codegen or ARM codegen can be done separately without breaking the existing tooling.


Having assembled the code (see picture above) the list command dumps the bytecodes to the console in readable fashion. It is then disassembled using the “dasm” command.

The LDEF prototype has been completely written in Smart Pascal, but a port is underway for Delphi and C++ builder. This gives Delphi developers the benefit of using bytecode libraries in their code. If you install Delphi server-side, you can use Amibian as a pure web front end for Delphi (!)

Create applications anywhere, on anything

Since everything is JavaScript you are no longer bound to chipset or CPU. You can set up Amibian on Amazon or Azure, an office server or an affordable, off-the-shelf SBC (single board computer). You can daisy-chain 10 older PCs into a cluster and get 5 more years out of the hardware; the compiler is made in JS, so it doesn't care if the real CPU is outdated. It cares about bytes and endian-ness, that's it.


Early implementation of the desktop, here running native 68k (Amiga) code directly. Both x86 and PPC runtimes are now possible – the days of cloud are here

You can be on holiday in Spain armed only with an iPad and a BlueTooth keyboard, and should inspiration strike, you can login and write your application without even installing an app on your iPad. You just need a modern browser to start writing applications.

Patreon Tiers

Depending on your level of support, you get access to different parts of my work. As of writing I have 4 frameworks that are being maintained and that I want to continue to maintain for those that support me:

  • $5: High five! Support the work as a nice gesture
  • $10: Access to and support for developing my tweening library for VCL
  • $25: License management for VCL and FMX, full source code access to HexLicense and support for porting Ironwood to Delphi + a new REST-based registration server
  • $35: Rage libraries: get full access to the ByteRage database framework and the Pixelrage graphics library, and support their evolution. The timeline includes SQL and condition parsing, which will not be covered by the currently running tutorial. Want a clean Delphi alternative to SQLite? Well, let me make it for you.
  • $45: LDEF assembler and virtual machine. Get full source code access to the Smart Pascal assembler (runs on node.js) and the Delphi port as soon as it rolls off the assembly line (pun intended). Enjoy proper documentation for instructions and the bytecode format, and enjoy both the native and web assembler applications! As a bonus, this level gives you access to video tutorials and recordings dealing with LDEF, HexLicense, tweening and everything else.
  • $50: Amibian and Ragnarok: Amibian.js client, server and development toolchain.
    This is the motherload and you get to enjoy all of it before anyone else.

    • Full access to beta builds, updates, new features – all of it before anyone else!
    • Explore the Ragnarok client / server message API
    • Follow my video tutorials and let me help you dig into Smart Pascal and node.js
    • Ask questions and get a deeper understanding of both Smart Mobile Studio, Amibian.js and LDEF.
    • Have a front seat reserved as we unleash the power of Delphi, Smart Pascal and JavaScript on the world.
  • $100: Amibian Embedded Setup: For the true Amibian.js supporters! You get all the perks of previous tiers, but with the added bonus of pre-made Amibian.js disk images for the ODroid XU4 and the Asus Tinkerboard once LDEF and the IDE have been implemented. These disk images start the Ragnarok server as a daemon (Linux service) during the boot sequence. The system then continues booting into a full-screen webview that renders the Amibian.js desktop. There is no Linux desktop involved.
    This is by far the most cost effective setup for Kiosk and Embedded work with either a touch display or keyboard access.

    As an extra perk this version of Amibian.js contains an optimized version of uae.js (Amiga emulation) and is capable of executing ADF disks and harddisk images directly in their own window.

    With the service layer now fully developed, combined with truly platform-independent compiler technology – we have in fact created an interesting alternative to ChromeOS. One with a minimal footprint that is cost-effective and easy to expand. A system that you have full control over and can change, rebrand, modify and enjoy!

    Congratulations! You have helped bring Amibian.js and a whole new development paradigm into this world!

If this whets your appetite, then head over to my Patreon site and show your support! I start shipping code to those that support me next week, so get on board and let's make it happen!

Final words

Patreon is not the same as a Kickstarter or a formal investment; I think this is important to underline. I hope however that you find my work interesting and that you would like to see this realized.

LDEF is not just a fancy bytecode runtime, it is also a framework that other developers can use to make new languages. The whole point of this is to blow the old limitations away and to really push technology to the maximum.

Being able to write system services that work the same on all operating systems, and then deploy entire ecosystems – this used to be science fiction. Now it's not.

I want to thank those that have become patrons – it really means so much! If enough support my work I can allocate more time for implementing the tools the community needs and be of greater service to everyone.

Thank you for your time

Jon Lennart Aasenden

Getting organized: register a Delphi user group or club!

August 28, 2018 Leave a comment

It's been a hectic week at Delphi Developer, but a highly productive one! I am very happy that so many developers have responded and helped with the organizational work, because Delphi and C++ builder developers must get organized. If you want to see lasting, positive results, this has to happen. There are vast quantities of individuals, groups and companies that use Delphi and C++ builder around the world. Yet we all sit in our own bubble, thinking we are alone. It's time to change that.

“we have decades of experience and technical expertise. And that is worth protecting”

In 2016 I was contacted by a Norwegian HR company (read: head hunters) and offered a Delphi position at a local business. It turned out the business had struggled to find Delphi programmers for over six months. When I told them about the Oslo Delphi Club and showed them the 7500 members we have in Delphi Developer on Facebook, they were gobsmacked. The human resource company was equally oblivious to the sheer number of developers just in Norway, let alone internationally.

Part of what I do today as an Embarcadero SC, is to front human-resource companies with clear information as to where they can look for competent Delphi developers. But in order to deliver that effectively, we first have to establish a map.

Put your local club or interest group on the map!

Last Friday (24.08.2018) I published an open document on Delphi Developer. This is a document open and available to everyone, with the sole purpose of making it easier for developers to find clubs and interest groups in their region (jobs are often found through acquaintances, so connecting to a local group is important). It will also simplify how we as a community can approach human resource companies. Our document is growing, but we still need more! So please take five minutes to add your local user group.


The Delphi and C++ builder community is large, but we need representation with HR

Delphi and C++ builder are seeing stable and healthy growth. It has taken a lot of hard work and effort to get where we are today, both by Embarcadero and developers that use RAD Studio as their business backbone.

My hope is that everyone who reads this can allocate a few minutes, just five minutes, to add to our document. So if you know of a Delphi or C++ builder user group, perhaps a club or organization, then please check the document (Note: the document is pinned as an announcement on top of the Facebook group feed, but members can also reach it directly by clicking here) and add the club if it's not already there.

Note: Please make sure that the information is correct. Call the club or group if possible. Remember, this document is for everyone. We want to maintain the document and keep it available 24/7.

Building bridges

The work members are doing for the community is quite important. It determines where we can go next. In fact, I will contact each and every club to establish communication and co-operation. There is much to debate, such as capacity for tutoring, courseware, primary contact for new users and more. If need be I will personally travel so we can meet face to face. I am deadly serious about this, because there is no other way to build critical mass. Our group alone has thousands of members who have invested a lot of money in software, components, formal training and education; we have decades of experience and technical expertise. And that is worth protecting.

Getting organized to safeguard our education, our language of preference, our jobs and ultimately to nurture our future is a worthy cause. I hope I have everyone’s blessing in this — but I can’t do everything alone. It is impossible for me to know if there are 3 Delphi clubs in Venezuela, 4 in Canada and 15 in India. We need to get them pinned on a map and formulate a strategy for lasting, positive results.


The past is experience, the future is opportunities

I want to thank each and every one that has added to the document. Thank you so much, this will help our community more than you think. It might seem as a small step, but that first step is the most important of them all. All great things start as an idea, but when you apply force and determination – it becomes reality.

I am extremely lucky because this work is now a part of my job. My work includes a bit of everything: studies, authoring, coding, consulting and presentations. But the part I love the most is to connect people.

Real life results

If you think the document in question is a waste of time, think again!

Last week we had 3 rather frustrated members that desperately needed a job. After calming the situation down, I made some calls and was able to find remote work for all of them.

It is a wonderful feeling when you can help someone. It is also what community is all about. The more organized we get, the better it will be for everyone. LinkedIn is great, but networking without an infrastructure that responds bears no fruit. And that is where Delphi Developer comes in. We are very much alive and kicking.

So with less than a week of organization behind us, we found and delivered jobs as a direct consequence of the Delphi Developer Facebook Group.

 

Building a Delphi Database engine, part two

August 16, 2018 Leave a comment

In the first episode of this tutorial we looked at some of the fundamental ideas behind database storage. We solved the problem of storing arbitrary length data by dividing the database file into conceptual parts; we discovered how we could daisy-chain these parts together to form a sequence; and we looked at how this elegantly solves reclaiming lost space and recycling that for new records. Last but not least we had a peek at the bit-buffer that helps us keep track of what blocks are taken, so we can easily grow and shrink the database on demand.

In this article we will be getting our hands dirty and put our theories into practice. The goal today is to examine the class that deals with these blocks or parts. While we could maybe get better performance by putting everything into a monolithic class, the topics are best kept separate while learning the ropes. So let's favour OOP and class encapsulation for this one.

The DbLib framework

Prior to writing this tutorial I had to write the code you would need. It would be a terrible mistake to run a tutorial with just theories to show for it. Thankfully I have been coding for a number of years now, so I had most of the code in my archives. To make life easier for you I have unified the code into a little framework.

This doesn’t mean that you have to use the framework. The units I provide are there to give you something tangible to play with. I have left ample room for optimization and things that can be done differently on purpose.

I have set up a bitbucket git repository for this tutorial, so your first business of the day is to download or fork our repository:

https://bitbucket.org/cipher_diaz/dbproject/src/master/

The database file class

The first installment of this tutorial ended with a few words on the file header. This is the only static or fixed data segment in our file. And it must remain fixed because we need a safe space where we can store offsets to our most important sequences, like the root sequence.

The root sequence is simply put the data that describes the database, also known as metadata. So all those items I listed at the start of the previous article, things like table-definitions, index definitions, the actual binary table data (et al), well we have to keep track of these somewhere right?

Well, that’s where the file header comes in. The header is the keeper of all secrets and is imperative for the whole engine.

The record list

Up until this point we have covered blocks, sequences, the file header, the bit-buffer that keeps track of available and reserved blocks — but what about the actual records?

When someone performs a high-level insert operation, the binary data that makes up the record is written as a sequence; that should be crystal clear by now. But having a ton of sequences stored in a file is pretty useless without a directory or list that remembers them. If we have 10.000 records (sequences) in a file – then we must also keep track of 10.000 offsets, right? Otherwise, how can we know where record number 10, 1500 or 9000 starts?

Conceptually, metadata is not actual data. Metadata is a description of data, like a table definition or index definition. The list that holds all the record offsets is real data; As such I don’t want to store it together with the metadata but keep it separate. The bit buffer that keeps track of block availability in the file is likewise “real” data, so I would like to keep that in a separate sequence too.

When we sit down and define our file-header record, which is a structure that is always at the beginning of the file (or stream), we end up with something like this (a Pascal sketch of the record follows the list):

  • Unique file signature: longword
  • Version minor, major, revision: longword (byte, byte, word)
  • Database name: 256 bytes [holds utf8 encoded text]
  • Encryption cipher: integer
  • Compression id: integer
  • root-sequence: longword
  • record-list-sequence: longword
  • bit-buffer-sequence: longword
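
Expressed in Delphi terms, the list above could be declared roughly as the packed record below. This is only a layout sketch based on the fields listed; the actual header declaration in the DbLib framework (and the simplified TDbHeader used in the test code later in this article) differs in naming and detail:

type
  TDbLibFileHeader = packed record         // sketch of the layout above, not the framework's declaration
    dhSignature:   longword;               // unique file signature
    dhMajor:       byte;                   // version, major
    dhMinor:       byte;                   // version, minor
    dhRevision:    word;                   // version, revision
    dhName:        array[0..255] of byte;  // database name, utf8 encoded text
    dhCipher:      integer;                // encryption cipher id
    dhCompression: integer;                // compression id
    dhRootSequence:       longword;        // part # of the root (metadata) sequence
    dhRecordListSequence: longword;        // part # of the record-list sequence
    dhBitBufferSequence:  longword;        // part # of the bit-buffer sequence
  end;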

If you are wondering about the encryption and compression fields, don’t overthink it. It’s just a place to store something that identifies whatever encryption or compression we have used. If time allows we will have a look at zlib and RC4, but even if we don’t it’s good to define these fields for future expansion.

The version longword is actually more important than you think. If the design of your database and header changes dramatically between versions, you want to check the version number to make absolutely sure you can even handle the file. I have placed this as the second field in the record, 4 bytes into the header, so that it can be read early. The moment you have more than one version of your engine, you might want to write a routine that just reads the first 8 bytes of the file and checks compatibility.
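
A minimal sketch of such a probe, assuming the header layout above (signature in the first 4 bytes, version packed into the next 4) and using a plain TFileStream from System.Classes:

// Sketch only: read the first 8 bytes (signature + version) without
// touching the rest of the file. $CAFEBABE matches the signature used
// in the test code later in this article.
function ReadDbFileStamp(const FileName: string;
  out Signature, Version: longword): boolean;
var
  fs: TFileStream;
begin
  Result := false;
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    if fs.Size < 8 then
      exit;
    fs.ReadBuffer(Signature, SizeOf(Signature));
    fs.ReadBuffer(Version, SizeOf(Version));
    Result := (Signature = $CAFEBABE);
  finally
    fs.Free;
  end;
end;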

What are those buffers?


The buffer classes are a Delphi implementation of Node.js buffers, including insert and remove functionality

Having forked the framework, you suddenly have quite a few interesting units. But you can also feel a bit lost if you don’t know what the buffer classes do, so I want to start with those first.

The buffer classes are alternatives to streams. Streams are excellent, but they can be quite slow if you are doing intense read-write operations. More importantly, streams lack two fundamental features for DB work, namely insert and remove. For example, let's say you have a 100 megabyte file – and then you want to remove 1 megabyte from the middle of this file. It's not a complex operation, but you still need to copy the trailing data backwards as quickly as possible before scaling the stream size. The same is true if you want to inject data into a large file. It's not a huge operation, but it has to be 100% accurate and move data as fast as possible.
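
To make the cost concrete, this is roughly what "remove a chunk from the middle" looks like when all you have is a plain TStream: copy the trailing data backwards over the hole in chunks, then shrink the stream. The TDbBuffer classes wrap this kind of work behind their own insert/remove calls; the helper below is only an illustration (it assumes Offset + Count lies within the stream):

// Illustrative helper, not part of the DbLib framework: removes Count
// bytes at Offset by copying the tail backwards and truncating.
procedure StreamRemove(Stream: TStream; Offset, Count: Int64);
var
  Buffer: array[0..65535] of byte;
  ReadPos, WritePos: Int64;
  Bytes: integer;
begin
  ReadPos := Offset + Count;
  WritePos := Offset;
  while ReadPos < Stream.Size do
  begin
    Stream.Position := ReadPos;
    Bytes := Stream.Read(Buffer, SizeOf(Buffer));
    if Bytes <= 0 then
      break;
    Stream.Position := WritePos;
    Stream.Write(Buffer, Bytes);
    inc(ReadPos, Bytes);
    inc(WritePos, Bytes);
  end;
  Stream.Size := Stream.Size - Count; // cut off the now-duplicated tail
end;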

I could have just inherited from TStream, but I wanted to write classes that were faster, that had more options and that were easier to expand in the future. The result of those experiments were the TBuffer classes.

So mentally, just look at TDbBuffer, TDbBufferMemory and TDbBufferFile as streams on steroids. If you need to move data between a stream and a buffer, just create a TDbLibStreamAdapter instance and you can access the buffer like a normal TStream descendant.

Making a file of blocks

With enough theory behind us, let’s dig into the codebase and look at the class that deals with a file as blocks, or parts. Open up the unit dblib.partaccess.pas and you will find the following class:

  TDbLibPartAccess = class(TObject)
  private
    FBuffer:    THexBuffer;
    FheadSize:  integer;
    FPartSize:  integer;
  protected
    function GetPartCount: integer; inline;
  public
    property Buffer: THexBuffer read FBuffer;
    property ReservedHeaderSize: integer read FheadSize;
    property PartSize: integer read FPartSize;
    property PartCount: integer read GetPartCount;
    procedure ReadPart(const PartIndex: Integer; var aData); overload;
    procedure ReadPart(const PartIndex: Integer; const Data: THexBuffer); overload;
    procedure WritePart(const PartIndex: Integer; const Data; const DataLength: Integer); overload;
    procedure WritePart(Const PartIndex: Integer; const Data: THexBuffer); overload;

    procedure AppendPart(const Data; DataLength: Integer); overload;
    procedure AppendPart(const Data: THexBuffer); overload;

    function CalcPartsForData(const DataSize: Int64): integer; inline;
    function CalcOffsetForPart(const PartIndex: Integer): Int64; inline;

    constructor Create(const DataBuffer: THexBuffer;
      const ReservedHeaderSize: Integer; const PartSize: Integer); virtual;
  End;

As you can see this class is pretty straightforward. You pass a buffer (either memory or file) via the constructor, together with the size of the file-header. This helps the class avoid writing to the first section of the file by mistake. Whenever the method CalcOffsetForPart() is called, it will add the size of the header to the result, shielding the header from being over-written.
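In other words, the offset calculation boils down to something like this (a sketch of the idea, not necessarily the exact implementation in the unit):

function TDbLibPartAccess.CalcOffsetForPart(const PartIndex: Integer): Int64;
begin
  // skip the reserved header, then jump PartIndex parts into the file
  Result := FheadSize + (Int64(PartIndex) * FPartSize);
end;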

The other methods should be self-explanatory; you have various overloads for writing a sequence part (block), appending them to the database file, reading them – and all these methods are offset based; meaning you give it the part-number and it calculates where that part is physically located inside the file.

One important method is the CalcPartsForData() function. This is used before splitting a piece of data into a sequence. Let’s say you have 1 megabyte of data you want to store inside the database file; you first call this and it calculates how many blocks you need.

Once you know how many blocks you need, the next step is to check the bit-buffer (that we introduced last time) to see whether the file has that many free blocks. If the file is full, you either have to grow the file to fit the new data – or issue an error message.
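A sketch of that flow, using the names from the test project further down. FBlockMap is an illustrative field holding the bit-buffer from the previous article, and the growth policy here (grow by exactly the missing number of parts) is just one possible choice:

procedure TfrmMain.EnsureRoomFor(const PayloadSize: Int64);
var
  LPartsNeeded: integer;
  LFree: integer;
  x: integer;
begin
  // how many blocks does the payload need?
  LPartsNeeded := FDbAccess.CalcPartsForData(PayloadSize);

  // count free blocks via the bit-buffer (bit = 0 means available)
  LFree := 0;
  for x := 0 to FDbAccess.PartCount - 1 do
    if not FBlockMap[x] then
      inc(LFree);

  // grow the file to fit the new data - or raise an error instead
  if LFree < LPartsNeeded then
    FDbFile.Size := FDbFile.Size +
      Int64(LPartsNeeded - LFree) * SizeOf(TDbPartData);
end;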

See? It’s not that complex once you have something to build on!

Proof reading test, making sure what we write is what we read

With the scaffolding in place, let’s write a small test to make absolutely sure that the buffer and class logistics check out ok. We are just going to do this on a normal form (this is the main project in the bitbucket project folder), so you don’t have to type this code. Just fork the code from the URL mentioned at the top of this article and run it.

Our test is simple:

  • Define our header and part records, doesn’t have to be accurate at this point
  • Create a database file buffer (in memory) with size for header + 100 parts
  • Create a TDbLibPartAccess instance, feeding in the sizes mentioned above
  • Create a write buffer the same size as part/block record
  • Fill that buffer with some content we can easily check
  • Write the writebuffer content to all the parts in the file
  • Create a read buffer
  • Read back each part and compare content with the write buffer

If any data is written the wrong way or overlapping, what we read back will not match our write buffer. This is a very simple test to just make sure that we have IO fidelity.

OK, let’s write some code!

unit mainform;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils,
  System.Variants, System.Classes, Vcl.Graphics,
  Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls,
  dblib.common,
  dblib.buffer,
  dblib.partaccess,
  dblib.buffer.memory,
  dblib.buffer.disk,
  dblib.encoder,
  dblib.bitbuffer;

const
  CNT_PAGESIZE = (1024 * 10);

type

  TDbVersion = packed record
    bvMajor:  byte;
    bvMinor:  byte;
    bvRevision: word;
  end;

  TDbHeader = packed record
    dhSignature:  longword;     // Signature: $CAFEBABE
    dhVersion:    TDbVersion;   // Engine version info
    dhName:       shortstring;  // Name of database
    dhMetadata:   longword;     // Part# for metadata
  end;

  TDbPartData = packed record
    ddSignature:  longword;
    ddRoot:       longword;
    ddNext:       longword;
    ddBytes:      integer;
    ddData: packed array [0..CNT_PAGESIZE-1] of byte;
  end;

  TfrmMain = class(TForm)
    btnWriteReadTest: TButton;
    memoOut: TMemo;
    procedure btnWriteReadTestClick(Sender: TObject);
  private
    { Private declarations }
    FDbFile:    TDbLibBufferMemory;
    FDbAccess: TDbLibPartAccess;
  public
    { Public declarations }
    constructor Create(AOwner: TComponent); override;
    destructor  Destroy; override;
  end;

var
  frmMain: TfrmMain;

implementation

{$R *.dfm}

{ TfrmMain }

constructor TfrmMain.Create(AOwner: TComponent);
begin
  inherited;
  // Create our database file, in memory
  FDbFile := TDbLibBufferMemory.Create(nil);

  // Reserve size for our header and 100 free blocks
  FDBFile.Size := SizeOf(TDbHeader) + ( SizeOf(TDbPartData) * 100 );

  // Create our file-part access class, which access the file
  // as a "block" file. We pass in the size of the header + part
  FDbAccess := TDbLibPartAccess.Create(FDbFile, SizeOf(TDbHeader), SizeOf(TDbPartData));
end;

destructor TfrmMain.Destroy;
begin
  FDbAccess.Free;
  FDbFile.Free;
  inherited;
end;

procedure TfrmMain.btnWriteReadTestClick(Sender: TObject);
var
  LWriteBuffer:  TDbLibBufferMemory;
  LReadBuffer: TDbLibBufferMemory;
  LMask: ansistring;
  x:  integer;
begin
  memoOut.Lines.Clear();

  LMask := 'YES!';

  // create a temporary buffer
  LWriteBuffer := TDbLibBufferMemory.Create(nil);
  try
    // make it the same size as our file-part
    LWriteBuffer.Size := SizeOf(TDbPartData);

    // fill the buffer with our test-pattern
    LWriteBuffer.Fill(0, SizeOf(TDbPartData), LMask[1], length(LMask));

    // Fill the dbfile by writing each part, using our
    // temporary buffer. This fills the file with our
    // little mask above
    for x := 0 to FDbAccess.PartCount-1 do
    begin
      FDbAccess.WritePart(x, LWriteBuffer);
    end;

    LReadBuffer := TDbLibBufferMemory.Create(nil);
    try
      for x := 0 to FDBAccess.PartCount-1 do
      begin
        FDbAccess.ReadPart(x, LReadBuffer);

        if LReadBuffer.ToString <> LWriteBuffer.ToString then
          memoOut.Lines.Add('Proof read part #' + x.ToString() + ' = failed')
        else
          memoOut.Lines.Add('Proof read part #' + x.ToString() + ' = success');
      end;
    finally
      LReadBuffer.Free;
    end;

  finally
    LWriteBuffer.Free;
  end;
end;

end.

The form has a button and a memo on it. When you click the button, each part is written, read back and verified – and the memo logs a “success” line for every part that checks out.


Voila, we have IO fidelity!

Finally things are starting to become more interesting! We still have a way to go before we can start pumping records into this thing, but at least we have tangible code to play with.

In our next installment we will implement the sequence class, which takes the TDbLibPartAccess class and augments it with functionality to read and write sequences. We will also include the bit-buffer from our first article and watch as the silhouette of our database engine comes into view.

Again, this is not built for speed but for education.

Until next time.

Building a Delphi Database engine, part one

August 10, 2018 Leave a comment

Databases come in all shapes and sizes. From blistering fast in-memory datasets intended to hold megabytes, to massive distributed systems designed to push terabytes around like lego. Is it even worth looking into a database engine these days?

Firebird, the tardis of database engines!

There are tons of both commercial and free DB engines for Delphi

Let me start by saying: NO. If you need a native database written in object pascal, there are several awesome engines available. Engines that have been in production for ages, that have been tried and tested by time, with excellent support, speed and track records. My personal favorite is ElevateDB, which is the successor to DBISAM, an engine I used in pretty much all my projects before 64 bit became the norm. ElevateDB handles both 32 and 64 bit and is by far my database of choice.

The purpose of this article is not to create an alternative or anything along those lines, quite the opposite. It’s purely to demonstrate some of the ideas behind a database – and just how far we have come from the old “file of record” that is synonymous with the first databases. Like C/C++ Delphi has been around for a while so you can still find support for these older features, which I think is great because there are still places where such a “file of record” can be extremely handy. But again, that is not what we are going to do here.

The reason I mentioned “file of record” is because, even though we don’t use that any more – it does summarize the most fundamental idea of a database. What exactly is a database anyways? Is it really this black box?

A database file (to deal with that first) is supposed to contain the following:

  • One or more table definitions
  • One or more tables (as in data)
  • Indexes and lookup tables (word indexes)
  • Stored procedures
  • Views and triggers

So the question becomes, how exactly do we stuff all these different things into a single file? Does it have a filesystem? Are each of these things defined in a record of sorts? And what about this mysterious “page size” thingy, what is that about?

A file of blocks

All databases face the same problems, but each solves them differently. Most major databases are in fact not a single file anymore. Some even place the above in completely separate files. For me personally I don’t see any gain in having a single file with everything stuffed into it – but for the sake of argument we will be looking at that in this article. It has some educational value.

The way databases are organized is directly linked to the problem above, namely that we need to store different types of data – of arbitrary length – into a single file. In other words you can’t use a fixed record because you cannot possibly know how many fields a table will have, nor can you predict the number of tables or procedures. So we have to create a system that can take any length of data and “somehow” be able to place that inside the file.

So, what if we divide the file into blocks, each capable of storing a piece of data? And if a single block is not enough, the data is simply spread out over multiple blocks?

Indeed. And that is also where the page-size value comes in. The page-size defines the capacity of each block, and thereby how many blocks a given piece of data will need. Pretty cool, right?

But conceptually dividing a file into blocks doesn’t solve all our problems. How exactly will we know which blocks represent a record or a form definition? I mean, if some piece of data is spread over 10 or 20 blocks, how do we know that these represent a single piece of data? How do we navigate from block #1 in a sequence to block #2?

Well, each block in the file can house raw data, but we have to remember that the whole “block” idea is just conceptual – a way that we approach the file in our code. When we write blocks to the file, we have to do so in a particular way; a block is not just a raw slice of a stream or an array of bytes.

We need a block header to recognize that indeed, this is a known block; we need the block number of the next block that holds more data – and it’s probably wise to sign each block with the block-number of the first piece.

So from a pseudo-code point of view, we end up with something like:

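Something along these lines – the record and constant names below are illustrative, but the fields match the descriptions that follow:

type
  TDbLibBlock = packed record
    fbHeader: longword;   // magic number identifying a valid block ($CAFEBABE)
    fbFirst:  longword;   // block number of the first block in the sequence
    fbNext:   longword;   // block number of the next block in the sequence
    fbUsed:   integer;    // number of bytes actually used in fbData
    fbData:   packed array [0..CNT_PAGESIZE-1] of byte; // payload, "pagesize" bytes
  end;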

Since our blocks will be hosted inside a stream (so no “file of record” like I said), we have to use the “packed” keyword. Delphi, like any other language, tries to align record fields to a boundary, so a record that only needs a few bytes can end up padded to an eight-byte boundary (you can control the boundary via the compiler options). That would throw off the offsets we calculate for our blocks, so we mark both the record and its data array as “packed”, which tells the compiler to drop alignment.
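To see the effect, compare a packed and an unpacked version of the same record – the exact numbers depend on your alignment settings, but the idea holds:

type
  TUnpackedSample = record
    a: byte;
    b: longword;
  end;

  TPackedSample = packed record
    a: byte;
    b: longword;
  end;

// With default alignment SizeOf(TUnpackedSample) is typically 8,
// while SizeOf(TPackedSample) is exactly 5 - which is what we need
// when calculating block offsets by hand.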

Let’s look at what each field in the record (descriptor) means:

  • fbHeader, a unique number that we check for before reading a block. If this number is missing or doesn’t match, something is wrong and the data is not initialized (or it’s not a db file). Let’s use $CAFEBABE since it’s both unique and fun at the same time!
  • fbFirst, the block number of the first block in a sequence (piece of coherent data)
  • fbNext, the next block in the sequence after the current
  • fbUsed: the number of bytes used in the fbData array. The last block in a sequence might only use half of what it can store – so we make sure we know how much to extract from fbData in each segment
  • fbData: a packed byte array housing “pagesize” number of bytes

A sequence of blocks

The block system solves the problem of storing variable length data by dividing the data into suitable chunks. As shown above we store the data with just enough information to find the next block, and next block, until we have read back the whole sequence.

So the block-sequence is pretty much a file. It’s actually very much a file, because this is incidentally how hard disks and floppy disks organize files. Disks have a more complex layout of course, but the idea is the same. A “track-disk-device” stores blocks of data organized in tracks and sectors (sequences). Not a 1:1 comparison, but neat nonetheless.

But OK, we now have some idea of how to store larger pieces of data as a chunk of blocks. But why not just store the data directly? Why not just append each record to a file and to hell with block chunks – wouldn’t that be faster?


Well yes, but how would you recycle the space? Let’s say you have a database with 10,000 records, each of a different size, and you want to delete record number 5500. If you just append stuff, how would you recycle that space? There is no way of predicting the size of the next sequence, so you could end up with large segments of empty space that can never be recycled.

By conceptually dividing the file into predictable blocks, and then storing data in chunks where each block “knows” its next of kin – and holds a reference to its root (the fbFirst field), we can suddenly solve the problem of recycling!

OK, let’s sum up what we have so far:

  • We have solved storing arbitrary length data by dividing the data into conceptual blocks. These blocks don’t have to be next to each other.
  • We have solved reading X number of blocks to re-create the initial data
  • We call the collection of blocks that makes up a piece of data a “sequence”
  • The immediate benefit of block-by-block storage is that space can be recycled. Blocks don’t have to be next to each other; a record can be made up of blocks scattered all over the place and still remain coherent.

Not bad! But we are not home free yet; there is another challenge looming, namely: how can we know if a block is available or occupied?

Keeping track of blocks

This is actually a pretty cool point in the buildup of an engine, because the way we read, write and figure out which blocks can be recycled very much impacts the speed of high-level functions like inserts and navigation. This is where I would introduce memory mapped files before moving on, but like I mentioned earlier – we will skip memory mapping because the article would quickly morph into a small essay. I don’t want memory mapping and MMU theory to overshadow the fundamental principles that I’m trying to pass on.

We have now reached the point where we ask the question “how do we quickly keep track of available free blocks?”, which is an excellent question with more than a single answer. Some database vendors use a separate file where each byte represents a block. My first experiment was to use a text file, thinking that functions like Pos() would help me locate blocks faster. It was a nice idea but we need more grunt than that.

What I landed on after some experimentation, which is a good compromise between size and speed, was to use a bit buffer to keep track of things. So a single bit is either taken (1) or available (0). You can quickly search for available bits, because if a byte has any value other than $FF (255) you know there is a free bit in it. It’s also very modest with regard to size: you can keep track of 10,000 blocks with only 1,250 bytes.
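The core of that search is small enough to show right away. This is just an illustrative helper – the real class below wraps the same idea in its FindIdleBit() method:

function FindFreeBit(Data: PByte; const ByteCount: NativeInt;
  out BitIndex: NativeUInt): boolean;
var
  x, y: NativeInt;
begin
  Result := false;
  for x := 0 to ByteCount - 1 do
  begin
    if Data^ <> $FF then              // at least one free bit in this byte
    begin
      for y := 0 to 7 do
        if (Data^ and (1 shl y)) = 0 then
        begin
          BitIndex := (NativeUInt(x) * 8) + NativeUInt(y);
          exit(true);
        end;
    end;
    inc(Data);
  end;
end;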

The code for the bit-buffer was written to be easy to use. In a high-end engine you would not waste CPU time by isolating small calculations in separate methods, but inline as much as possible. But for educational purposes my simple implementation will be more than adequate.

Note: I will be setting up a github folder for this project, so for our next article you can just fork that. WordPress has a tendency to mess up Delphi code, so if the code looks weird don’t worry, it will all be neatly organized into a project shortly.

The file header

Before we look at the bit code to keep track of blocks, you might be thinking “what good is it to keep track of blocks if we have nowhere to store that information?“. You are quite right, and this is where the file header comes in.

The file header is the only fixed part of a database file. Like I mentioned earlier there are engines that stuff everything into a single file, but in most cases where performance is the highest priority – you want to use several files and avoid mixing apples and oranges. I would store the block-map (bit buffer) in a separate file – because that maps easily into memory under normal use. I would also store the table definitions, indexes and more as separate files. If nothing else, it makes repairing and compacting a database sane. But I promised to do a single-file model (me and my big fingers), so we will be storing the meta-data inside the database file; let’s do just that.

The file-header is just a segment of the database-file (the start of the file) that contains some vital information. When we calculate the stream offset to each block (for either reading or writing), we simply add the size of the header to that. We don’t want to accidentally overwrite that part of the file.

Depending on how we evolve the reading and writing of data sequences, the header doesn’t have to contain that much data. You probably want to keep track of the page-size, followed by the start block for the table definitions. So when you open a database you would immediately start by reading the block-sequence containing all the definitions the file contains. How we organize the data internally is irrelevant to the block-file and IO scaffolding. Its job is simple: read or write blocks, calculate offsets, avoid killing the header, pay attention to identifiers.
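As a sketch, the fixed header and the “open” step could look something like this – the field names and exact layout are up to you:

type
  TDbFileHeader = packed record
    fhSignature: longword;   // magic number, e.g. $CAFEBABE
    fhVersion:   longword;   // engine version
    fhPageSize:  integer;    // capacity of each block
    fhDefsStart: longword;   // first block of the table-definition sequence
  end;

procedure ReadHeader(const Stream: TStream; out Header: TDbFileHeader);
begin
  // the header always lives at the very start of the file
  Stream.Position := 0;
  Stream.ReadBuffer(Header, SizeOf(Header));
  // ..then read the block sequence starting at Header.fhDefsStart
end;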

Some coders store the db schemas etc. at the end of the file, so when you close the DB the information is appended to the filestream – and the offset is stored in the header. This is less messy, but it also opens the door to corruption. If the DB is not properly closed you risk the DB information never being streamed out – which is yet another nail in the coffin for single-file databases (at least from my personal view; there are many ways to Rome and database theory can drive you nuts at times).

So I end this first article of our series with that. Hopefully I have given you enough ideas to form a mental image of how the underlying structure of a database file is organized. There are always exceptions to the rule and a wealth of different database models exists. So please keep in mind that this article has just scratched the surface on a slab of concrete.

unit qtx.utils.bits;

interface

uses
  System.SysUtils, System.Classes;

type

  (* Exception classes *)
  EQTXBitBuffer = Class(Exception);

  TBitOffsetArray = packed array of NativeUInt;

  (* About TQTXBitBuffer:
    This class allows you to manage a large number of bits,
    much like TBits in VCL and LCL.
    However, it is not limited by the shortcomings of the initial TBits.

    - The bitbuffer can be saved
    - The bitbuffer can be loaded
    - The class exposes a linear memory model
    - The class exposes methods (class functions) that allow you to
    perform operations on pre-allocated memory (memory you manage in
    your application).

    Uses of TQTXBitbuffer:
    Bit-buffers are typically used to represent something else,
    like records in a database-file. A bit-map is often used in Db engines
    to represent which pages are used (bit set to 1), and pages that can
    be re-cycled or compacted away later. *)

  TQTXBitBuffer = Class(TObject)
  Private
    FData: PByte;
    FDataLng: NativeInt;
    FDataLen: NativeInt;
    FBitsMax: NativeUInt;
    FReadyByte: NativeUInt;
    FAddr: PByte;
    BitOfs: 0 .. 255;
    FByte: byte;
    function GetByte(const Index: NativeInt): byte;
    procedure SetByte(const Index: NativeInt; const Value: byte);
    function GetBit(const Index: NativeUInt): boolean;
    procedure SetBit(const Index: NativeUInt; const Value: boolean);
  Public
    property Data: PByte read FData;
    property Size: NativeInt read FDataLen;
    property Count: NativeUInt read FBitsMax;
    property Bytes[const Index: NativeInt]: byte Read GetByte write SetByte;
    property bits[const Index: NativeUInt]: boolean Read GetBit
      write SetBit; default;

    procedure Allocate(MaxBits: NativeUInt);
    procedure Release;
    function Empty: boolean;
    procedure Zero;

    function ToString(const Boundary: integer = 16): string; reintroduce;

    class function BitsOf(Const aBytes: NativeInt): NativeUInt;
    class function BytesOf(aBits: NativeUInt): NativeUInt;

    class function BitsSetInByte(const Value: byte): NativeInt; inline;
    class Function BitGet(Const Index: NativeInt; const Buffer): boolean;
    class procedure BitSet(Const Index: NativeInt; var Buffer;
      const Value: boolean);

    procedure SaveToStream(const stream: TStream); virtual;
    procedure LoadFromStream(const stream: TStream); virtual;

    procedure SetBitRange(First, Last: NativeUInt; const Bitvalue: boolean);
    procedure SetBits(const Value: TBitOffsetArray; const Bitvalue: boolean);
    function FindIdleBit(var Value: NativeUInt;
      const FromStart: boolean = false): boolean;

    destructor Destroy; Override;
  End;

implementation

resourcestring
  ERR_BitBuffer_InvalidBitIndex = 'Invalid bit index, expected 0..%d not %d';

  ERR_BitBuffer_InvalidByteIndex = 'Invalid byte index, expected 0..%d not %d';

  ERR_BitBuffer_BitBufferEmpty = 'Bitbuffer is empty error';

  ERR_ERR_BitBuffer_INVALIDOFFSET =
    'Invalid bit offset, expected 0..%d, not %d';

var
  CNT_BitBuffer_ByteTable:  array [0..255] of NativeInt =
  (0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
  1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
  1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
  1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
  2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
  3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
  3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
  4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8);

function QTXToNearest(const Value, Factor: NativeUInt): NativeUInt;
var
  LTemp: NativeUInt;
Begin
  Result := Value;
  LTemp := Value mod Factor;
  If (LTemp > 0) then
    inc(Result, Factor - LTemp);
end;

// ##########################################################################
// TQTXBitBuffer
// ##########################################################################

Destructor TQTXBitBuffer.Destroy;
Begin
  If not Empty then
    Release;
  inherited;
end;

function TQTXBitBuffer.ToString(const Boundary: integer = 16): string;
const
  CNT_SYM: array [boolean] of string = ('0', '1');
var
  x: NativeUInt;
  LCount: NativeUInt;
begin
  LCount := Count;
  if LCount > 0 then
  begin
    LCount := QTXToNearest(LCount, Boundary);
    x := 0;
    while x < LCount do
    begin
      if x = 0) then
  Begin
    LAddr := @Buffer;
    inc(LAddr, Index shr 3);

    LByte := LAddr^;
    BitOfs := Index mod 8;
    LCurrent := (LByte and (1 shl (BitOfs mod 8))) <> 0;

    case Value of
      true:
        begin
          (* set bit if not already set *)
          If not LCurrent then
            LByte := (LByte or (1 shl (BitOfs mod 8)));
          LAddr^ := LByte;
        end;
      false:
        begin
          (* clear bit if already set *)
          If LCurrent then
            LByte := (LByte and not(1 shl (BitOfs mod 8)));
          LAddr^ := LByte;
        end;
    end;

  end
  else
    Raise EQTXBitBuffer.CreateFmt(ERR_ERR_BitBuffer_INVALIDOFFSET,
      [maxint - 1, index]);
end;

procedure TQTXBitBuffer.SaveToStream(const stream: TStream);
var
  LWriter: TWriter;
begin
  LWriter := TWriter.Create(stream, 1024);
  try
    LWriter.WriteInteger(FDataLen);
    LWriter.Write(FData^, FDataLen);
  finally
    LWriter.FlushBuffer;
    LWriter.Free;
  end;
end;

Procedure TQTXBitBuffer.LoadFromStream(const stream: TStream);
var
  LReader: TReader;
  LLen: NativeInt;
Begin
  Release;
  LReader := TReader.Create(stream, 1024);
  try
    LLen := LReader.ReadInteger;
    if LLen > 0 then
    begin
      Allocate(BitsOf(LLen));
      LReader.Read(FData^, LLen);
    end;
  finally
    LReader.Free;
  end;
end;

Function TQTXBitBuffer.Empty: boolean;
Begin
  Result := FData = NIL;
end;

Function TQTXBitBuffer.GetByte(const Index: NativeInt): byte;
Begin
  If FData <> NIL then
  Begin
    If (index >= 0) and (Index = 0) and (Index  Secondary then
        Result := (Primary - Secondary)
      else
        Result := (Secondary - Primary);

      if Exclusive then
      begin
        If (Primary < 1) or (Secondary < 1) then
          inc(Result);
      end;

      If (Result < 0) then
        Result := abs(Result);
    end
    else
      Result := 0;
  end;

Begin
  If (FData <> nil) then
  Begin
    If (First  0 do
        Begin
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          SetBit(x, Bitvalue);
          inc(x);
          dec(LLongs);
        end;

        (* process singles *)
        LSingles := NativeInt(LCount mod 8);
        while (LSingles > 0) do
        Begin
          SetBit(x, Bitvalue);
          inc(x);
          dec(LSingles);
        end;

      end
      else
      begin
        if (First = Last) then
          SetBit(First, true)
        else
          Raise EQTXBitBuffer.CreateFmt(ERR_BitBuffer_InvalidBitIndex,
            [FBitsMax, Last]);
      end;
    end
    else
      Raise EQTXBitBuffer.CreateFmt(ERR_BitBuffer_InvalidBitIndex,
        [FBitsMax, First]);
  end
  else
    Raise EQTXBitBuffer.Create(ERR_BitBuffer_BitBufferEmpty);
end;

Procedure TQTXBitBuffer.SetBits(Const Value: TBitOffsetArray;
  Const Bitvalue: boolean);
var
  x: NativeInt;
  FCount: NativeInt;
Begin
  If FData <> NIL then
  Begin
    FCount := length(Value);
    If FCount > 0 then
    Begin
      for x := low(Value) to High(Value) do
        SetBit(Value[x], Bitvalue);
    end;
  end
  else
    Raise EQTXBitBuffer.Create(ERR_BitBuffer_BitBufferEmpty);
end;

Function TQTXBitBuffer.FindIdleBit(var Value: NativeUInt;
  Const FromStart: boolean = false): boolean;
var
  FOffset: NativeUInt;
  FBit: NativeUInt;
  FAddr: PByte;
  x: NativeInt;
Begin
  Result := FData <> NIL;
  if Result then
  Begin
    (* Initialize *)
    FAddr := FData;
    FOffset := 0;

    If FromStart then
      FReadyByte := 0;

    If FReadyByte < 1 then
    Begin
      (* find byte with idle bit *)
      While FOffset < NativeUInt(FDataLen) do
      Begin
        If BitsSetInByte(FAddr^) = 8 then
        Begin
          inc(FOffset);
          inc(FAddr);
        end
        else
          break;
      end;
    end
    else
      inc(FOffset, FReadyByte);

    (* Last byte exhausted? *)
    Result := FOffset  7 then
            FReadyByte := 0
          else
            FReadyByte := FOffset;

          break;
        end;
        inc(FBit);
      end;
    end;

  end;
end;

Function TQTXBitBuffer.GetBit(Const Index: NativeUInt): boolean;
begin
  If FData <> NIL then
  Begin
    If index  7 then
              FReadyByte := 0;
          end;

        end;
      end
      else
      Begin
        (* clear bit if not already clear *)
        If (FByte and (1 shl (BitOfs mod 8))) <> 0 then
        Begin
          FByte := (FByte and not(1 shl (BitOfs mod 8)));
          PByte(FDataLng + NativeInt(index shr 3))^ := FByte;

          (* remember this byte pos *)
          FReadyByte := Index shr 3;
        end;
      end;

    end
    else
      Raise EQTXBitBuffer.CreateFmt(ERR_BitBuffer_InvalidBitIndex,
        [Count - 1, index]);
  end
  else
    Raise EQTXBitBuffer.Create(ERR_BitBuffer_BitBufferEmpty);
end;

Procedure TQTXBitBuffer.Allocate(MaxBits: NativeUInt);
Begin
  (* release buffer if not empty *)
  If FData <> NIL then
    Release;

  If MaxBits > 0 then
  Begin
    (* Allocate new buffer *)
    try
      FReadyByte := 0;
      FDataLen := BytesOf(MaxBits);
      FData := AllocMem(FDataLen);
      FDataLng := NativeUInt(FData);
      FBitsMax := BitsOf(FDataLen);
    except
      on e: Exception do
      Begin
        FData := NIL;
        FDataLen := 0;
        FBitsMax := 0;
        FDataLng := 0;
        Raise;
      end;
    end;

  end;
end;

Procedure TQTXBitBuffer.Release;
Begin
  If FData <> NIL then
  Begin
    try
      FreeMem(FData);
    finally
      FReadyByte := 0;
      FData := NIL;
      FDataLen := 0;
      FBitsMax := 0;
      FDataLng := 0;
    end;
  end;
end;

Procedure TQTXBitBuffer.Zero;
Begin
  If FData <> NIL then
    Fillchar(FData^, FDataLen, byte(0))
  else
    raise EQTXBitBuffer.Create(ERR_BitBuffer_BitBufferEmpty);
end;

end.

My role at Embarcadero

August 8, 2018 4 comments

I have gotten quite a few requests regarding what exactly I’m doing at Embarcadero. I have elaborated quite a bit on Delphi Developer. But I fully understand that not everyone is on Facebook, and I don’t mind elaborating a bit more if that helps. So here is a quick “drive-by” post on that.


Setting sail for America

Sadly the facts of life are that I can’t talk about everything openly; that would violate the responsibility I have accepted in our NDA (non-disclosure agreement), as well as the personal trust between myself and the people involved. Hopefully everyone can sympathize with the situation.

My title is SC, Software Consultant, which is a branch under sales and support. I talk with companies about their needs, help them find competent employees, deliver ad-hoc solutions on site in my region and act as a “go-to” guy that CTOs can call on when they need something. And of course part of my role is to hold presentations, advocate Delphi and evangelize.

I am really happy about this because for the past 8 years I have been up to my nose in brain grinding, low-level compiler and rtl development; and while that is intellectually rewarding, it indirectly means everything else is on hold. With the release of Smart Mobile Studio 3.0 the product has reached a level of maturity where fixes and updates will be more structured. Focus is now on specific modules and specific components – which sadly doesn’t warrant a full-time job. So it’s been an incredible eight years at The Smart Company, and Smart is not going away (just to underline that) – but right now Delphi comes first. So my work on the RTL and the new compiler framework is partitioned accordingly.

Being able to advocate, represent and work with Delphi and C++ builder is a dream job come true. I have been fronting Delphi, helped companies and connected people within the community for 15 years anyways; and the companies and people I talk with are the same that I talked to last month. Not to mention new faces and people who have just discovered Delphi, or come back to Delphi after years elsewhere.

So being offered to do what I already love doing as a full-time job, I don’t see how I could have turned that down. As a teenager we used to talk about what company we wanted to work for. I remember a buddy of mine was absolutely fanatical about IBM, and he even went on to work for “big blue” after college. Others wanted to work at Microsoft, Oracle, Sun — but for me it was always Borland. And I have stuck with Delphi through thick and thin. Delphi has never failed me. Not once.

I set out to get object-pascal back on the map eight years ago. I have actively lobbied, blogged, started usergroups (our Facebook group now houses 7500+ active Delphi developers), petitioned educational institutions, held presentations and done everything short of tattooing Delphi on my skin to make that a reality. Taking object-pascal out of education has been a catastrophe for software development as a whole.

Well, I hope this sheds some light on the role and what I do. I’m not a “professional blogger” like some have speculated. I do try to keep things interesting, but there is very little professional about my personal blog (which would be a paradox). But obviously my writing and presentations will have to adapt; meaning longer articles, on-topic writing style and good reference material.

I will be speaking in Oslo quite soon, then Sweden, before I pop off to London in November. Very much looking forward to that. The London presentation and Oslo presentation will be hybrid talks, looking at Delphi and also how Smart Mobile Studio can help Delphi developers broaden the impact and ease web development for existing Delphi solutions. The talk in Sweden will be pure Delphi and C++ builder.

Get in touch with Jason Chapman or Adam Brett at the UK Delphi usergroup for more info

New article series on Delphi and C++ builder

August 7, 2018 4 comments

An army of Delphi developers

It’s been a while since I’ve done some hardcore Delphi articles, and since that is now my job I am happy that I can finally allocate a good chunk of time for that work. Don’t worry, there will be plenty of Smart Pascal content too – but I think it’s time to clean up the blog situation a bit. This blog is personal and thus contains a pot-pourri of topics, from programming to 3d printing, embedded hardware to retro-gaming. It’s a fun blog, I enjoy being able to write about things I’m passionate about, but having one blog for each topic makes more sense.

So in the near future I think it’s good that I publish Smart Mobile Studio content (except random stuff and drive-by posts) to http://www.smartmobilestudio.com, and Delphi to Embarcadero’s blog server. If nothing else it will be easier for the readers to deal with. If you only want to read about my Delphi escapades then embedded and retro stuff is not always interesting.

Deep dive into Delphi and C++ builder

So what can be cool to write about? I spent the better part of last weekend pondering this. Delphi articles have a little blind spot between beginner and advanced that I would like to focus on. There are plenty of “learn Delphi” articles out there, and there are likewise a lot of very advanced topics. So hopefully my first series will hit where it should, and be interesting for those in between.

We need a light database

Let’s peek under the hood!

Right, so the last time I read about database coding – and I mean “making your own database engine” – was at least 10 years ago. The Delphi community has always been blessed with a large group of insightful and productive people, people who share their knowledge and help others. But everyone is working on something, and finding the time to deep dive into subjects like this is not always easy. So hopefully my series on this will at least inspire people to experiment, try new things and fall in love with Delphi like I did.

The second article series that I am working on right now, is getting to grips with C++ builder. This is actually a very fun experiment since it serves more than a single function; I mean, just how hard is it for a Delphi developer to learn C++ ? What can Embarcadero do to help developers feel comfortable on both platforms? What are the benefits for a Delphi developer to learn C/C++?

 


C++ builder Community Edition rocks!

And yes I have had more than one episode where the new concepts drove me up the wall. It would be the world’s shortest article-series if Delphi Developer didn’t have my back and I didn’t buy books. Say what you will about modern programming, but sometimes you just need to sit down, turn off the computer, and read. Old school but effective.

Reflections

Embarcadero is very different from what I expected. Before I worked here (which is still a bit surrealistic) I envisioned a stereotypical American company, located in some tall office building; utterly remote from its users and the needs of the punters in the field. This past week has forced me to reflect more than I would have liked, and my armour of strong opinions (if not arrogance) has a very visible dent; because the company that has welcomed me with open arms is everything but that imaginary stereotype.


Et in Borland ego sum

The core of Embarcadero turned out to be a team of dedicated developers that are literally bending over backwards to help as many developers as possible. I left yesterday’s meeting with a taste of shame in my mouth, because in my blog I have given at least two of the people who now welcomed me a less-than-fortunate overhaul in the past. Yet they turned out to be human beings with the exact same interests, passions and goals as myself.

Building large-scale development tools is really hard work. Seriously. As a developer you forget things like marketing, the sales apparatus, the level of support a developer will need, documentation, tutorials. The amount of requests – conflicting requests, that is – from users is overwhelming. You have users who focus on mobile and don’t care about legacy VCL support, then you have people who very much need VCL legacy support and don’t care at all about mobile platforms; it’s a huge list of groups, topics and goals that is constantly shifting and needs prioritization.

But all in all the Delphi community and Embarcadero are in good shape. They have worked through a lot of old baggage that simply had to be transitioned, and the result is the change we see now: community editions and better dialog with the users. Compare that to the situation we had five years ago, or eight years ago for that matter. The changes have been many and the road long – but with a purpose: Delphi is growing at a healthy rate again.

What will you need and what will we do?

The goal of the Delphi articles is to implement the underlying mechanics of a database. I’m not talking about a “file of record” here or something like that, but a page- and sequence-based filestream and its support apparatus for managing blocks and available resources. This forms the basis of all databases, large or small. So we will be coding the nitty-gritty that has to be in place before you venture into expression parsing.

If time allows I will implement support for filters, but naturally a full SQL parser would be over the top. The techniques demonstrated should be more than enough for a budding young developer to take the ball and run with it. The filter function is somewhat close to a “select” statement – and the essence of expression parsing will be in the filter code.

Note: I will skip memory mapping techniques, for one reason only: it can get in the way of understanding the core principles. Once you have the principles under wraps – memory mapping is the natural next step and evolution of the thoughts involved, so it will fall into place in due time.

You won’t need anything special, just Delphi. Most of the code will be classical object pascal, but the parser will throw in some generics and operators, so this is a good time to download the community edition or upgrade to a compiler from this century.

The C/C++ articles will likewise have zero dependencies except the community edition of C++ builder. I went out and bought two books: C++ Primer (fifth edition) and The C++ Programming Language by Bjarne Stroustrup himself – books that should be available on prescription, because I fell asleep reading them.

My frontal lobe is already reduced to jello at the sight of these books, but let’s jump in with both feet and see what we make of it from a Delphi developer’s point of view. I can’t imagine it can be more of a mess than raw WebAssembly, but C/C++ has a wingspan that rivals even Delphi so it’s wise not to underestimate the curriculum.

OK, let’s get cracking! I will see you all shortly and post the first Delphi article.

Graphics essentials in Smart Mobile Studio 3

August 5, 2018 Leave a comment

JavaScript and the DOM have a few quirks that can be a bit tricky for Delphi developers to instinctively understand. And while our RTL covers more or less everything, I would be an idiot if I said we haven’t missed a spot here and there. A codebase as large as Smart is like a living canvas; and with each revision we cover more and more of our blind-spots.

Where did TW3Image.SaveToStream vanish?

We used to have a SaveToStream method in TW3Image that took the raw DIB data (raw RGBA pixel data) and emitted that to a stream. That method was never really meant to save a picture in a compliant format, but to make it easy for game developers to cache images in a buffer and quickly draw the pixel-data to a canvas (or push it to localstorage, good if you are making a paint program). This should have been made more clear in the RTL unit, but sadly it escaped me. I apologize for that.

But in this blog-post we are going to make a proper Save() function, one that saves to a proper format like PNG or JPG. It should be an interesting read for everyone.

Resources are global in scope

Before we dig in, a few words about how the browser treats resources. This is essential because the browser is a resource-oriented system. Just think about it: HTML loads everything it needs separately, things like pictures, sounds, music, CSS styles — all these resources are loaded as the browser finds them in the code – and each has a distinct URI (uniform resource identifier) to represent it.

So no matter where in your code you are (even a different form), if you have the URI for a resource – it can be accessed. It’s important not to mix terminology here, because a URI is not the same as a URL. A URI is a unique identifier; a URL (uniform resource locator) defines “where” the browser can find something (it can also contain the actual data).

If you look at the C/C++ specs, the URL class inherits from URI. Which makes sense.

Once a resource is loaded and has been assigned a URI, it can be accessed from anywhere in your code. It is global in scope, and things like forms or parent controls in the RTL mean nothing to the underlying DOM.

Making new resources

When you are creating new resources, like generating a picture via the canvas, that resource doesn’t have a URI. Thankfully, generating and assigning a URI so it can be accessed is very simple — and once we have that URI the user can download it via normal mechanisms.

But the really cool part is that this system isn’t just for images. It’s also for raw data! You can actually assign a URI to a buffer and make that available for download. The browser won’t care about the content.

If you open the RTL unit SmartCL.System.pas and scroll down to line 107 (or thereabouts), you will find the following classes defined:


  (* Helper class for streams, adds data encapsulation *)
  TAllocationHelper = class helper for TAllocation
    function  GetObjectURL: string;
    procedure RevokeObjectURL(const ObjectUrl: string);
  end;

  TW3URLObject = static class
  public
    class function  GetObjectURL(const Text, Encoding, ContentType, Charset: string): string; overload;
    class function  GetObjectURL(const Text: string): string; overload;
    class function  GetObjectURL(const Stream: TStream): string; overload;
    class function  GetObjectURL(const Data: TAllocation): string; overload;
    class procedure RevokeObjectURL(const ObjectUrl: string);

    // This cause a download in the browser of an object-url
    class procedure Download(const ObjectURL: string; Filename: string); overload;
    class procedure Download(const ObjectURL: string; Filename: string;
          const OnStarted: TProcedureRefS); overload;
  end;

The first class, TAllocationHelper, is just a helper for a class called TAllocation. TAllocation is the base-class for objects that allocate raw memory, and can be found in the unit System.Memory.Allocation.pas.
TAllocation is really central and more familiar classes like TMemoryStream expose this as a property. The idea here being that if you have a memory stream with something, making the data downloadable is a snap.

Hopefully you have gotten to know the central buffer class, TBinaryData, which is defined in System.Memory.Buffer. This is just as important as TMemoryStream and will make your life a lot easier when talking to JS libraries that expect an untyped buffer handle (for example) or a blob (more on that later).

The next class, TW3URLObject, is the one that is of most interest here. You have probably guessed that TAllocationHelper makes it a snap to generate URIs for any class that inherits from or exposes a TAllocation instance (read: really handy for TMemoryStream). But TW3URLObject is the class you want.

The class contains 3 methods with various overloading:

  • GetObjectURL
  • RevokeObjectURL
  • Download

I think these are self explanatory, but in short they deliver the following:

  • GetObjectURL creates an URI for a resource
  • RevokeObjectURL removes a previously made URI from a resource
  • Download triggers the “SaveAs” dialog so users can, well, save the data to their local disk

The good news for graphics is that the canvas object contains a neat method that does this automatically, namely the ToDataUrl() function, which is a wrapper for the raw JS canvas method with the same name. Not only will it encode your picture in a normal picture format (defaults to png but supports all known web formats), it will also return the entire image as a URI encoded string.

This saves us the work of having to manually call GetObjectURL() and then invoke the save dialog.
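For data that isn’t a picture you would go the manual route. A small sketch using the overloads declared above (LStream is assumed to be a TMemoryStream you have already filled, and the procedure name is illustrative):

procedure SaveStreamAs(const LStream: TMemoryStream; const FileName: string);
var
  LUrl: string;
begin
  // assign a URI to the raw data and trigger the browser's save mechanism
  LUrl := TW3URLObject.GetObjectURL(LStream);
  TW3URLObject.Download(LUrl, FileName);

  // when the resource is no longer needed:
  // TW3URLObject.RevokeObjectURL(LUrl);
end;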

Making some offscreen graphics

TW3Image is not meant for drawing; it’s like Delphi’s TImage and is a graphics container. So before we put a TW3Image on our form we are going to create the actual graphics to display. And we do this by creating an off-screen graphics context, assigning a canvas to it, drawing the graphics, and then encoding the data via ToDataUrl().

To make things easier, let’s use the Delphi-compatible TBitmap and TCanvas classes. These can be found in SmartCL.Legacy. They are as compatible as I could make them.

  • Browsers only support 32 bit graphics, so only pf32bit is allowed
  • I haven’t implemented checkered, diagonal or other patterns – so bsSolid and bsClear are the only brush modes for canvas (and pen style as well).
  • Brush doesn’t have a picture property (yet), but this will be added later at some point. I have to replace the built-in linedraw() method with the Bresenham algorithm for that to happen (and all the other primitives).
  • When drawing lines you have to call Stroke() to render. The canvas buffers up all the drawing operations and removes overlapping pixels to speed up the final drawing process — this is demanded by the browser sadly.

Right, with that behind us, let’s create an off-screen bitmap, fill the background red and assign it to a TW3Image control.

To replicate this example please use the following recipe:

  1. Start a new “visual components project”
  2. Add the following units to the uses clause:
    1. System.Colors
    2. System.Types.Graphics
    3. SmartCL.Legacy
  3. Add a TW3Button to the form
  4. add a TW3Image to the form
  5. Save your project
  6. Double-Click on the button. This creates a code entry point for the default event, which for a button is OnClick.

Let’s populate the entry point with the following:

procedure TForm1.W3Button1Click(Sender: TObject);
var
  LBitmap:  TBitmap;
  LRect:    TRect;
begin
  LBitmap := TBitmap.Create;
  try
    LBitmap.Allocate(640, 480);
    LRect := TRect.Create(0, 0, LBitmap.width-1, LBitmap.Height-1);
    LBitmap.Canvas.Brush.Color := clRed;
    LBitmap.Canvas.FillRect(LRect);

    w3image1.LoadFromUrl( LBitmap.Canvas.ToDataURL('image/png') );

  finally
    LBitmap.free;
  end;
end;

The code above creates a bitmap, which is an off-screen (not visible) graphics context. We then set a background color to use (red) and fill the bitmap with that color. When this is done we load the picture-data directly into our TW3Image control so we can see it.

Triggering a download

With the code for creating graphics done, we now move on to the save mechanism. We want to download the picture when the user clicks the button.


Offscreen graphics is quite fun once you know how it works

Since the image already has a URI, which it gets when you call the ToDataURL() method, we don’t need to mess around with blob buffers and generating the URI manually. So forcing a download could not be simpler:

procedure TForm1.W3Button1Click(Sender: TObject);
var
  LBitmap:  TBitmap;
  LRect:    TRect;
begin
  LBitmap := TBitmap.Create;
  try
    LBitmap.Allocate(640, 480);
    LRect := TRect.Create(0, 0, LBitmap.width-1, LBitmap.Height-1);
    LBitmap.Canvas.Brush.Color := clRed;
    LBitmap.Canvas.FillRect(LRect);

    var LEncodedData:= LBitmap.Canvas.ToDataURL('image/png');
    w3image1.LoadFromUrl(LEncodedData);

    TW3URLObject.Download( LEncodedData, 'picture.png');

  finally
    LBitmap.free;
  end;
end;

Note: The built-in browser in Smart doesn’t allow save dialogs, so when you run this example remember to click the “open in browser” button on the execute window. Then click the button and voila — the image is downloaded directly.

Well, I hope this has helped! I will do a couple more posts on graphics shortly because there really is a ton of cool features here. We picked heavily from various libraries when we implemented TW3Canvas and TCanvas, so if you like making games or displaying data – then you are in for a treat!

Hurry and get the Delphi Expert book for free on Packt!

August 3, 2018 Leave a comment

Get the book for free now!

Packt has a time limited offer where you can download the book Delphi Expert by our late Delphi guru, Pawel Glowacki. Pawel was and continues to be a well-known figure in the Delphi community. He held presentations, wrote books and helped promote Delphi and C++ builder in all corners of the world. He is sorely missed.

In my previous post I mentioned that starting with Delphi is faster if you get a good book on the subject; and Pawel’s book Delphi Expert fits perfectly within that curriculum.

If you have been wondering when to start, then consider this a sign. Download the community edition of Delphi and fetch Pawel’s book – then get cracking!

Cheers

/Jon

Delphi community edition, learn real coding

August 2, 2018 8 comments

Update: I updated the text to better point out writing in past-tense at one point. I apologize for not catching the formulation quicker, but I have edited the text to better reflect this now.


With the release of the community edition of Delphi and C++ builder, Embarcadero is finally making Delphi accessible to anyone who wants to enjoy the rich flavour of object-pascal that Delphi represents. In my 30+ years of coding I have yet to find a language or development toolkit as creative as object pascal, with Delphi being the flagship compiler and toolkit. Java and C# might appeal to some, but for developers solving real problems out there, the stability of Delphi is hard to match (more about that later).

Besides, object-pascal is fun, highly creative and easy to learn! Just imagine the wealth of knowledge a language that has stood the test of time has to offer!

Finally a straight up license

The license Embarcadero has landed on is easy to understand and straight to the point: the community edition is free for open source projects, and you can use it for commercial products until your sales reach a certain sum – and then you are expected to buy it. I wish I had had this back in the day; I bought my first Delphi with money from my student loan.


Unreal engine operates with a similar license

This license is incidentally the same as that used by market-leading game and multimedia companies. Both Crytek (CryEngine) and Epic Games (Unreal Engine) operate with the same concept. Instead of charging you a sum up-front, you can create your product and pay when your earnings justify it. Unreal Engine has a fixed percentage, if memory serves me correctly. So the Embarcadero license is more than fair.

What the community edition of Delphi and C++ builder means in practical terms, is that you get to learn, build and bring your idea to market without that initial investment. When your boat floats and you make money, then you pay for the toolbox that helped you be successful.

If you are a startup company with investors and limited funds, you get to adjust your license fee to your runway after the fact rather than before. So if your product tanks and you never make the expected sum; well that’s one bill less to worry about.

The myth of free Microsoft products

One of the things I often hear when talking to developers is the “Visual Studio myth”. The notion that Visual Studio is free and there are no strings attached. And this is a myth, just to be clear. Microsoft has ridiculous amounts of money, so they could afford to lend you Visual Studio for five years (which was how they operated until very recently). If you checked the license for Visual Studio that’s what it said: you get to use it for five years, then you better pony up the cash. And by that time you have no doubt advanced to Enterprise level, which means that check will be signed in blood.

So this illusion that Visual Studio is free, is just that. Young developers are just as likely to use a pirated copy as violating a community agreement – so even today with the subscription model and ordinary trial they don’t notice the devil lurking in the details.

But for entrepreneurs that are starting from scratch, that need to set up a budget for their product that a board or single investor can trust, well, it’s hard work because development is never an exact science. The coding part is, but it’s the human factor that is challenged, not the technology. It’s the spirit and the individual’s ability to see solutions where others see only walls that is tested; especially when you are making something truly unique. Something that doesn’t exist yet.

And once you are in that basket and have your entire product, perhaps even your career, riding on the investors being happy (and they are rarely developers, I might add) – the temptation of going “all in” is very real and very tangible. Let’s use MSSQL since we already have a VS license. Let’s use IIS instead and get rid of Apache. Let’s use SharePoint since we get a nice discount. Complete dependency doesn’t take long.

Now it’s no secret that my brief affair with C# makes me biased, and I am biased. Proudly biased. Bias on tap even. This is an object pascal blog where all things object pascal are loved and valued. So read my articles while imagining me with a cheeky smile in the corner of my mouth, and a slight sparkle in my eyes.

But in all fairness, the new Delphi community license is up-front, no hidden fees, honest and direct. If it wasn’t… well, I have a history of shooting myself in both feet by being painfully honest. I can’t find anything wrong with the license, and believe me I have tried. This is Christmas and my birthday all rolled into one! Embarcadero has put their ear to the ground and listened to their customers.

With this in place Embarcadero is cementing a foundation of growth for our community, the languages they deliver and our future.

Rock solid

Delphi has always been known for producing rock solid, reliable database solutions. Delphi is awesome because it covers the whole spectrum of coding, from low-level procedural DLLs to system services, industrial-scale servers, desktop and mobile applications – the list goes on. There are 3 million active Delphi developers around the world. Not to mention the millions more relying on older versions of Delphi or alternative compilers to power their businesses.


Visually bind database fields to containers – it’s details like this that save time

If you mentally jump into a time machine and travel back to the 1980s, then slowly walk along the timeline and look at the changes in computing. Look at all the challenges before Delphi and how in 1995 Delphi took the world by storm. Into Delphi, Anders Hejlsberg and his team invested all their knowledge and everything they had learned from previous compilers and run-time libraries. This investment never stopped. There have been many architects involved over the years, each adding their contribution.

The amount of skill, insight, technique and dedication is breathtaking.

C# might be the cool kid on the block right now, but it's painfully unsuitable for a wide range of tasks; tasks that require a programming language with more depth. There is also something to be said for the test of time. Delphi and C++Builder have decades of evolution behind them, and many of their core principles were inherited from Turbo Pascal, which dominated the 1980s and early 1990s.

And let me back that up with an example:

I used to do some work for a Norwegian company that delivers POS terminals to most of northern Europe. When I got there they had a C# department and a Delphi department. Obviously I thought they wanted me to work on the Delphi codebase, but to my surprise they threw me into C#.

While I was there I noticed that Delphi was used on the hardware: the actual terminals themselves and the data transmissions. A POS terminal is a potentially fragile but important instrument for any store; it has to operate 24/7, and a single mistake can be a financial disaster. I doubt more needs to be said here.

terminal

A POS terminal consists of many parts, here showing the card reader. Instability in the terminal can lead to loss of data, corrupted backups and network problems

The irony in all this was that two years earlier they had tried to replace Delphi on the terminals with C#. They invested millions into rewriting the whole thing from scratch, but the rollout of this monstrosity was a total fiasco.

The bro-grammers forgot that some things are there for a reason. They neglected the subtle nuances of how each language works and how code behaves under extreme conditions; conditions where RAM, storage space and CPU power are severely limited. On cheap, low-powered embedded boards, even the slightest fluctuation in CPU activity can tank the whole system.

C# and Java were unfit because the GC (garbage collector) would kick in at random intervals to clean up the heap, and this caused CPU spikes. The spikes were enough to freeze the terminal for a brief second, disturbing network activity, disk operations and database stability. It was the first time since the early 90s that I actually saw the "Disk C: has a read-write error" dialog. I had to bite my lip not to laugh out loud. I tried so hard, honestly.
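To make the contrast concrete: in Object Pascal you decide exactly when memory is released, so nothing wakes up at a random moment to sweep the heap. A minimal sketch, with a hypothetical TTransactionRecord class standing in for the real terminal code:

procedure LogTransaction(const Amount: Currency);
var
  Rec: TTransactionRecord; // hypothetical class, for illustration only
begin
  Rec := TTransactionRecord.Create;
  try
    Rec.Amount := Amount;
    Rec.Save;              // hypothetical method that writes the receipt
  finally
    Rec.Free;              // memory is released right here, deterministically;
                           // no collector pause seconds or minutes later
  end;
end;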

The glorified update was haunted by broken transmissions, unresponsive UIs and ruined backups (the device backs up its receipt database both locally and remotely many times a day). After a couple of weeks they rolled back the whole thing. Customers demanded their old system back: the system written in Delphi (and if you think the C# "native image compiler" from Microsoft made things better, think again).

So Delphi and object pascal still power a large amount of financial transactions in northern Europe. You will find Delphi used by governments, security companies, oil companies, POS brokers, ATMs, missile guidance systems; anywhere a high level of reliability is essential.

Getting started

Jumping into a new programming language, or learning your first one, can be daunting. Thankfully Delphi has been around almost as long as C/C++ (3 years younger), so there is plenty of knowledge online, most of it free (always google something before asking on forums; make that a habit).

tomes

Study the classics that teach you how and why things work

But to really save you time I urge you to buy a couple of books on Delphi. Now before you run off to Amazon or google around, there are two types of books you want.

You want a book that teaches you Delphi in general; a modern book that shows you OOP, generics and all the features that were added to Delphi after the XE version naming. So make sure you buy a book that covers Delphi from (at the very least) XE6 and upwards. Delphi "Berlin" or "Tokyo" is perfect.

The next book has to do with technique. What makes Delphi so incredibly powerful is its awesome depth. You can write libraries in hand-optimized assembly code if you want, or you can write object-oriented, generics-driven, high-level mobile apps. Between those two extremes is a wealth of topics, including system services, your own servers, every database engine known to mankind and much, much more.
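As a small taste of the high-level end, here is a minimal sketch using the generic TList<T> from the RTL's Generics.Collections unit:

uses
  System.Generics.Collections;

procedure SortSomeNumbers;
var
  Numbers: TList<Integer>;
begin
  Numbers := TList<Integer>.Create;
  try
    Numbers.AddRange([13, 2, 7, 5]);
    Numbers.Sort;            // uses the default comparer for Integer
    WriteLn(Numbers.First);  // prints 2
  finally
    Numbers.Free;
  end;
end;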

But you want a good book that teaches you techniques; the techniques that underpin all the cool high-level features people take for granted. The most cherished book you will ever own for Delphi is The Tomes of Delphi: Algorithms and Data Structures (catchy title, and a book you can come back to over many years).

Now go download the Community Edition and enjoy! Welcome to the coolest language in the world!

Happy coding!