Archive for January, 2017

Smart Pascal: Support for next generation Amiga A1222

January 30, 2017

If you are between 35 and 45 years old, chances are you remember the Commodore Amiga. This was the major computer to have back in the 80’s and 90’s. And if you know your history you also know that it was way ahead of all the other systems out there. Microsoft prides itself on inventing multi-tasking, but truth is that Amiga OS had a full multitasking, window-oriented desktop way back in 1985.

Classic AmigaOS, skinnable and very flexible even today

Before you think this post is about the old computers, let me just say that it’s not; I realize that most of you will immediately think retro when you hear the word Amiga – but fact is that Amiga is doing somewhat of a comeback these days.

A brand new operating system

AmigaOS was awesome for its time and really was a system from the future. In an age where the average PC was a monochrome green DOS experience, and Macs were ridiculous black and white boxes – the Amiga was like something out of a spaceship. PCs didn’t even have a mouse, let alone any form of multimedia – and here was a powerhouse of a machine that had a full windowing desktop, bright and crisp colors, and co-processors dedicated to graphics, sound and dispatching — people had never seen anything like it.

Sadly, due to some borderline insane management decisions, Commodore went bankrupt back in 1994. You would think that the Amiga just went away after that, but it’s still going to this day. Two decades after Commodore went out of business, people still write software for the platform, and some of the coolest hardware you will ever see is released for it every year. It really is an astounding computer and community. In 2016 alone, at least five new games were released for the platform (!)

Needless to say, the old operating system that culminated in OS 3.9 is hopelessly trapped in the past. Which is why the people who bought the rights to the system have spent more than a few years bringing it up to speed (right now it’s at version 4.1). And the result speaks for itself. It really is one of the best looking desktop systems in the world.

Sexy, fast and modern: Amiga OS 4.1

Brand new hardware

Say what you want about Amiga hardware, but the quality was outstanding. Outstanding hardware from the early 90’s, however, can’t hope to compete with modern computers. We are used to GPU chips that spin millions of pixels around the screen per second without breaking a sweat; 24-bit sound, monstrous 3D graphics processors, multi-threading, gigabytes of ram and terabytes of storage space (phew!).

With Commodore long dead and no company officially in charge of hardware, a man called Trevor Dickinson decided that he had had enough. Back in the day everyone was waiting for Commodore to release the Amiga 5000 (the Amiga 4000 was the Commodore flagship back in the 90s; sadly, that was the end of it), including Trevor. This dream of the next Amiga is something everyone that loved the platform shares even to this day. Well, Trevor decided to do something about the situation. He picked up the gauntlet, sharpened his sword, and his journey towards a new Amiga began.

The Amiga X1000 is an earlier PPC based model

Without going into too much nitty-gritty here, Trevor and his colleagues set out to create a PPC based motherboard capable of running Amiga OS 4. And yes, the operating system is presently PPC based. This is due to the fact that in the 90’s both the Amiga and Apple Macs used the MC68k line of processors. These were retired and replaced with PPC chips, which made perfect sense at the time: since Apple used them in their computers, the Amiga could enjoy cheaper hardware and piggyback on Apple’s success.

Most people today think that PPC is an obsolete chipset, but fact is that they are still in production – largely because they are popular for embedded systems. The notorious heating problems are no longer as pressing as they used to be, due to changes in production and materials – and as a consequence the new Amiga uses PPC even now.

I have two monster PPC based Macs in my basement, so I was a bit hesitant when I heard that OS 4.x was using PPC. I mean, the top-of-the-line PowerMac is more cooling metal than content. But thankfully that line of PPC chips is a thing of the past.

Two exciting models

There have been a ton of Commodore related attempts to get the Amiga back on the market; every five years or so someone buys the Commodore name and tries to revive the glory days of the Amiga (or so it seems). All of them have failed, largely due to internal fighting between the many owners that have rights to various bits and pieces. If there is a reason that the Amiga never managed to get back into the market – this is it. The rights to various parts of the system were auctioned off to the highest bidder, with the operating system going one way, the chips another, and the name and intellectual property yet another. It has been a complete cluster fuck since that horrible day back in 1994.

Trevor on the other hand has solved this the only way a reasonable and sane human being can: by simply leaving the old hardware, the Commodore name and the old machines’ peculiarities where they belong – in the past. So what if you can’t slap Commodore on the physical machine? The new Amiga was not designed to be a 30-year-old computer. It was designed from scratch to run Amiga OS 4 and to be a system based on the same principles as the original; and that is (at least to me) what really counts. I enjoyed the old games and demo scene, but the desktop and programming aspects of the Amiga were always where my heart was.

Trevor is an avid collector and has pretty much every Commodore model you can think of

Besides, kids that grew up in the aftermath of Commodore going under don’t have a clue what the Amiga was anyhow. Nor do they have any relationship to Commodore as a brand. And considering 20 years have passed – clinging to these old names is ultimately a complete waste of time and money. So Trevor being fed up and deciding to build a whole new machine for the updated and modernized Amiga operating system makes perfect sense to me.

Unlike previous attempts by various individuals and groups, Trevor’s work is more tangible. In fact, you can go out right now and order the high-end PPC based monster that is the Amiga x5000. This is the most expensive model and it’s going to set you back a whopping €1800. This may seem like a steep price but bear in mind the amount of work and cost of parts – so it’s actually not that bad.

The ultra sexy X5000 is a high-end Amiga

As a modern developer my computer needs have grown exponentially every year. I no longer install my development tools on my actual PC, instead I use VMWare and have all my development platforms neatly organized as virtual machines. The result is that I can no longer buy the cheap or middle-range offerings. My latest PC cost me around €2500. So all things considered €1800 is in the same range as an Amiga 4000 went for back in the day. This is a work horse that professionals use, it’s not a low-end gaming model.

The A1222

The A1222 “Tabor” motherboard is a cheaper, more affordable entry-level machine. This is the one I’m getting to start with

Thankfully there is a cheaper model in the pipeline. I have already pre-ordered the A1222, which retails at around €400 (give or take VAT and shipping). This has a smaller CPU, less ram (you can of course stuff more in yourself) and you have to get your own mini-ITX cabinet for it. As the name hints, this is the equivalent of a bog standard A1200, which was my all-time favorite. My old gem of a machine had a 40 megabyte hard disk, 4 megabytes of ram and an external CD-ROM. These specs are ridiculous today; even my phone has more .. well, everything really; but 20 years ago that setup was the bomb.

When people talk retro like I do now, always remember that the Amiga operating system was able to deliver a close to Windows 95 experience. That should tell you something about how fast, well written and efficient the old Amiga was.

This was a machine that delivered an impressive and modern desktop experience even by today’s standards in 512 Kb (yes you read that right, kilobytes) of working ram. Imagine what an OS based on the same principles can do with 4 gigabytes of ram, 32 bit graphics, protected and paged memory, 24 bit audio and all the perks of modern computing.

Where emulation can’t go

Those that follow my blog know that I love Amiga emulation and retro computing. So whenever I have time I whip out my Raspberry PI 3b based Amiga and enjoy my Amiga games, desktop and even do a spot of programming. We even managed to get Freepascal 3.x to compile on it, which means I can write applications with all the cool features of modern object-oriented programming at my disposal – on an emulated Amiga 4000. The mind boggles if you think about this too long.

Compiling anything, even older stuff that is a joke by today’s standards, is painful on the Raspberry PI. Here showing my retro-fitted A500 PI with sexy led keyboard. It will soon get a makeover with a UP board :)

Here showing my retro-fitted A500 PI with sexy led keyboard, overclocked to the hilt running at 4 times the speed of an original Amiga 4000 (for $35!)

Sadly you can forget running Amiga OS 4 under emulation on the PI. Sure, it works fine on Windows or Linux – but emulation is only as good as the code emulating it. And while the PPC emulation layer is a monumental achievement and I applaud the original author, it’s not nearly as optimized as the older 68k layer. From what I know it’s still running ad-hoc with no JIT involved (or at least no caching). This means that you need a powerful PC to really enjoy Amiga OS 4. Emulation is a bit like bootleg movies: a poor bootleg will ruin the movie completely. You can get away with a mid-range x86 PC I suppose, but you can forget about ARM or x86 embedded boards.

Perhaps you remember that I tested various embedded boards a while back? Part of my motivation in doing that (and buying HW for €300 on a whim) was to find a reasonably priced x86 or ARM single board computer that could emulate OS 4 without problems. The most expensive and capable of the boards I tested was the UP x86 board that retails at around $150 (I got the biggest model they had on offer). And yes, it did manage to run Amiga OS 4, just not in a way that made it usable or enjoyable. On my kick-ass Intel i7 based PC it emulates just fine, but again – it becomes ridiculous not to buy a real Amiga when emulation will cost me the same or more. I mean, why not go for the real deal if it’s affordable?

So if a €400 Amiga is what it takes to run OS 4 properly, then I have no problem supporting Trevor’s work and get the real deal.

Freepascal and Smart

While Trevor doesn’t know me personally, I am a huge fan of his endeavour. And one of the things I want to start porting to the shiny new OS 4.x platform is of course – Smart Pascal. If you have ever heard of Turbo Pascal and Delphi (with emphasis on the latter) then you get some idea of what Smart is. Except that we compile for node.js and the browser rather than machine-code.

So before I can port that over I will have to get Chromium running on OS 4, and then link that up with Freepascal. I do enjoy C++ but it’s not even close to the productivity of object pascal, so I prefer to work in that. And since Freepascal 3.x has thankfully been ported already, that is not a problem. So with a bit of work I think Delphi developers will be in for a treat, and people new to programming will love learning object pascal.

Porting the full might of Smart Mobile Studio to the Amiga is going to take an effort, but I think it will be worth it

But naturally, not everyone is used to building from the ground up. It will take some work and probably weeks of adaptation before the full might of Smart Mobile Studio runs on the Amiga. But when it does – it will have a huge impact. Because then you can use the Amiga to create cloud servers, mobile applications for ALL platforms and much, much more!

So if being a part of using an operating system from the grass-root and up sounds exciting, why not take a peek at the A1222 or Amiga x5000?

Head over to http://www.a-eon.com/ and have a gander

Smart Pascal: Update specifics

January 20, 2017

With what is probably the biggest update Smart Mobile Studio has seen to date only a couple of weeks away, what should the Smart Pascal programmer know about the new RTL?

I have written quite a few articles on the changes, so you may want to backtrack and look at them first – but in this post I will try to summarize two of the most important changes.

Unit files, a background

The first version of Smart Mobile Studio (SMS) was designed purely for webkit based, HTML5 applications. As the name implies the initial release was designed purely for mobile application development, with the initial target firmly fixed on iPhone and iPad. As a result, getting rid of webkit exclusive code has been very time-consuming to say the least.

At the same time we added support for embedded devices, like the Espruino micro-controller, and in the course of that process it became clear that the number of devices we can support is huge. As of writing we have code for more than 40 embedded devices lined up, ready to be added to the RTL (after the new RTL is in circulation).

Add to this the fact that node.js has become one of the most important cloud technologies in the world, and the old “DOM only”, webkit oriented codebase was hopelessly narrow-minded and incapable of doing what we wanted. Something had to give, otherwise this would be impossible to maintain – not to mention document in any reasonably logical form.

A better unit system

I had to come up with a system that preserved universal code – yet exposed special features depending on the target you compile for.

So when you compile for HTML5 you get all the cool stuff the document object model (DOM) has to offer, but also full access to the universal units. The same goes for node.js: it has all the standard stuff that is universal – but exposes a ton of JS objects that exist nowhere else.

What I ended up with was that each namespace should have the same units, but adapted for the platform it represents. So you will have units like System.Time.pas, which represents universal functionality regarding timing and synchronization; but you are supposed to use the one prefixed by your target namespace, for instance SmartCL.Time or SmartNJ.Time. In cases where the platform doesn’t expose anything beyond the universal code, you simply fall back on the System.*.pas unit.

It may look complex, but it’s super easy once you get it

The rules are super simple:

  • Coding for the browser? Use the units prefixed with SmartCL.*
  • Coding for node.js? Use the units prefixed with SmartNJ.*
  • Coding for embedded? Use the units prefixed with SmartIOT.*

Remember that you should always include the system version of the unit as well. So if you include SmartCL.Time.pas, add System.Time.pas to the uses list as well. It makes it easier to navigate should you want to CTRL + Click on a symbol.

Better synchronization engine

JavaScript is async by nature. Getting JavaScript to behave like traditional object pascal is not only hard, it is futile. Besides, you don’t want to be treated like a child and shielded from the reality of JavaScript – because that’s where all the new cool stuff is at.

In Delphi and Lazarus we are used to our code being blocking. So if you create child objects in your constructor, these objects are available there and then. Well, that’s not how JavaScript works. You can create an object, the code continues, but in reality the object is not finished yet. It may in fact be assembling in the background (!)

To make writing custom controls sane and human, we had to change a few things. So in your constructor (which is InitializeObject in our RTL, not the actual class constructor) you are expected to create child objects – but you should never use them there.

Instead there is a method called ObjectReady() which naturally fires when all your child elements have been constructed properly and the control has been safely injected into the document object model. This is where you set properties and start to manipulate your child objects.

So finally the setup process is stable, fast and working as you expect. As long as you follow these simple rules, your controls will appear and function exactly like you expect them to:

  • Never override Create and Destroy, use InitializeObject and FinalizeObject
  • Override ObjectReady() to set initial values

Creation flags

We have also added creation flags to visual controls. This is a notion familiar from both Delphi’s VCL and Lazarus’ LCL component structures. It allows you to define a few flags that determine behavior.

For instance, if your control is sized manually and will never really need to adapt – then it’s a waste of time to have Resize() fire when the size is altered. Your code will run much faster if the RTL can just ignore these changes. The flags you can set are:

  TW3CreationFlags = set of
    (
    cfIgnoreReadyState,     // Ignore waiting for readystate
    cfSupportAdjustment,    // Control requires boxing adjustment (default!)
    cfReportChildAddition,  // Call ChildAdded() on element insertion
    cfReportChildRemoval,   // Call ChildRemoved() on element removal
    cfReportMovement,       // Report movements? Managed call to Moved()
    cfReportResize,         // Report resize? Managed call to Resize()
    cfAllowSelection,       // Allow for text selection
    cfKeyCapture            // Assign tabindex, causing the control to issue key and focus events
    );

To alter the default behavior you override the method CreationFlags(), which is a class function:

class function TW3TagObj.CreationFlags: TW3CreationFlags;
begin
  result := [
    // cfAllowSelection,
    cfSupportAdjustment,
    cfReportChildAddition,
    cfReportChildRemoval,
    cfReportMovement,
    cfReportResize
    ];
end;

Notice the cfAllowSelection flag? This determines if mouse selection of content is allowed. If you are making a text display, or an editor, or anything where the user should be allowed to select content – then you want this flag set.

The cfKeyCapture flag is also one many people have asked about. It determines if keyboard input should be allowed for the element. By default this is set to OFF. But now you don’t have to manually mess around with element attributes. Just add that flag and your control will fire the OnKeyPress event whenever it has focus and the user presses a key.

Tweening and fallback effects

In the last update a bug managed to sneak into our SmartCL.Effects.pas library, which means a lot of people haven’t been able to fully use all the effects. This bug has now been fixed, but we have also added a highly efficient tweening library. The concept is that effects will be implemented both as hardware accelerated, GPU powered effects – but also with CPU based tweening alternatives.

For mobile devices it is highly advised to use the GPU versions (the normal SmartCL.Effects.pas versions).

When you add the unit SmartCL.Effects.pas to your forms, what happens is that ALL TW3MovableObject based controls are suddenly extended with around 20 fx-prefixed methods. This is because we use partial classes (which means classes can be extended).

This is super powerful, and with a single line of code you can make elements fly over the screen, shrink, zoom out or scale in size. You even get a callback when the effect has finished – and you can daisy-chain calls to execute a whole list of effects in sequence.

FMyButton.fxSizeTo(10, 10, 200, 200, 0.8, procedure ()
  begin
   writeln("Effect is done!");
  end);

And you can execute effects in sequence; they will wait their turn and run one by one:

FMyButton.fxMoveTo(10,10, 0.8)
  .fxSizeTo(100,100, 0.8)
  .fxFadeTo(0.3, 0.8);
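
The daisy-chaining works because each fx call queues its effect and returns the control itself. Here is a minimal plain-JavaScript sketch of that queuing pattern – the FxQueue name and API are mine for illustration, not the actual RTL code:

```javascript
// Minimal sketch of the daisy-chain pattern: each call queues an
// effect, and the queue runs them strictly one after the other.
// FxQueue is an illustrative name, not part of the Smart RTL.
class FxQueue {
  constructor() {
    this.pending = [];
    this.busy = false;
  }
  add(effect) {                // effect: (done) => void
    this.pending.push(effect);
    if (!this.busy) this.next();
    return this;               // returning "this" enables chaining
  }
  next() {
    const effect = this.pending.shift();
    if (!effect) {
      this.busy = false;       // queue drained
      return;
    }
    this.busy = true;
    effect(() => this.next()); // run, continue when "done" fires
  }
}

// Chained calls execute in order, just like fxMoveTo().fxSizeTo()...
const order = [];
new FxQueue()
  .add(done => { order.push("move"); done(); })
  .add(done => { order.push("size"); done(); })
  .add(done => { order.push("fade"); done(); });
console.log(order); // ["move", "size", "fade"]
```

Each effect receives a "done" callback, so time-based tweens simply call it when the animation finishes and the next queued effect takes over.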

Stay tuned for more info!

Smart Pascal: Download streams

January 20, 2017

Real, binary streams have been a part of the Smart Pascal RTL for quite some time now. As a Delphi developer you probably take that for granted, but truth be told – no other JavaScript framework even comes close to our implementation. So this is unique to Smart Pascal, believe it or not.

The same can be said about the ability to allocate, move and work with memory buffers. Sure you can write similar code by hand in pure JavaScript, but the amount of code you have to write will quickly remind you why object orientation is so important.

Binary data counts

So you got streams, what of it? I hear you say. But you are missing the point here. If there is one thing JavaScript sucks at, it’s dealing with binary data. It has no real concept of bytes versus 32-bit or 64-bit integers. There is no such thing as a pointer in JavaScript. So while we have enjoyed pointers, memory allocations, being able to manipulate memory directly and use streams to abstract from our binary data for decades in Delphi — all of this is brand new under JavaScript.
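
For the curious, the raw machinery JavaScript does offer – and which our stream and buffer classes wrap – is typed arrays and DataView. A small plain-JavaScript example (not Smart RTL code) of treating the same bytes as different datatypes:

```javascript
// Plain JavaScript: the raw machinery that stream classes abstract away.
// An ArrayBuffer is just untyped bytes; views give those bytes meaning.
const buffer = new ArrayBuffer(8);
const bytes = new Uint8Array(buffer);

// Write the 32-bit value 0x01020304 byte by byte, little-endian
bytes[0] = 0x04; bytes[1] = 0x03; bytes[2] = 0x02; bytes[3] = 0x01;

// DataView re-reads the same bytes as other datatypes, with explicit
// endianness - the closest JavaScript gets to a pointer cast
const view = new DataView(buffer);
const asInt32 = view.getUint32(0, true); // true = little-endian

console.log(asInt32 === 0x01020304); // true
```

Doing this by hand for every record field is exactly the kind of boilerplate a TStream with typed Read/Write methods saves you from.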

And binary data counts. The moment you want to write code that does something on any substantial level – the capacity for dealing with binary data in a uniform way is imperative. If you are into HTML5 game coding you will sooner or later come in contact with map editors (or level editors) that work best in binary format. If you plan on making a sound app that runs in the cloud, again, being able to read binary files (just like we do in Delphi) is really, really important. Just stop and think for a few seconds how poor Delphi and C++ Builder would be without streams.

Making data available

One developer asked me an important question earlier: how do you get data out? And he meant that quite literally. What if my Smart app is the producer of binary data? What if I use SMS to create the fancy map editor, or the waveform generator code or whatever – what then?

Indeed that is a good question, but thankfully an easy one.

Most JavaScript objects, or data objects in general, inside a browser can be exported. What this means is that the browser can tag a specific object with an ID, and then make the data available as a normal link.

For instance, if you have a block of memory like a Uint8Array and you want that data exported, you would call URL.createObjectURL() and it will create a URL you can use to get that data. Let’s have a look at the code you need first:

function BinaryStreamToURLObject(Stream: TStream): string;
var
  mBlob: THandle;
begin
  if Stream <> nil then
  begin
    Stream.Position := 0;
    var mTemp := TDatatype.BytesToTypedArray(Stream.Read(Stream.Size));
    asm
      var encdec = window.URL || window.webkitURL;
      @mBlob = new Blob([@mTemp], { type: "application/octet-stream" });
      @result = encdec.createObjectURL(@mBlob);
      console.log(@result);
    end;
  end;
end;

procedure ForceDownloadOf(FileName: string; Stream: TStream);
var
  LARef:  TControlHandle;
begin
  if Stream <> nil then
  begin
    if Stream.Size > 0 then
    begin
      // Create node
      asm
        @LARef = document.createElement('a');
      end;

      // Setup values
      LARef.style := "display: none";
      LARef.href := BinaryStreamToURLObject(Stream);
      LARef.download := Filename;

      // Add to DOM
      asm
        document.body.appendChild(@LARef);
      end;

      // Wait for the obj to appear in the DOM
      LARef.readyExecute( procedure ()
        begin
          // Invoke click on link
          LARef.click();
        end);

    end;
  end;
end;

Note #1: Notice how I use a TControlHandle in the example above. Why? Because this handle has a helper class that gives us some perks, like readyExecute(), which fires when the element is safely in the DOM and is ready to be used.

Note #2: Since the built-in browser in Smart Mobile Studio doesn’t have download functionality, nothing will happen when you run this inside the IDE. So click on “Open in browser” and run your app there to see it.

The first function takes a stream and converts it into a blob object. The second function creates an anchor object, and then calls the click() method on that anchor. Essentially kick-starting the download. It is the exact same as you clicking on a download link, except we do it purely through code.

Let’s go through the steps

  • Grab all the data from the stream
  • Convert from TByteArray to a typed browser array
  • Fetch the browser’s URL object
  • Call createObjectURL, passing the data
  • Return the internal URL for the data, which is now kept safe
  • Create an anchor link object
  • Make sure the anchor is invisible
  • Set the URL to our blob above
  • Add the anchor to the DOM
  • Call the Click() method on the anchor
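
In plain JavaScript, those steps look roughly like this (a hand-written sketch for comparison, not actual compiler output; the makeBlobURL and forceDownload names are mine):

```javascript
// Hand-written JS sketch of the blob-export steps above.
// makeBlobURL and forceDownload are illustrative names, not RTL code.
function makeBlobURL(bytes) {
  // Wrap the raw bytes in a Blob and let the browser manage them
  const blob = new Blob([bytes], { type: "application/octet-stream" });
  // The returned "blob:..." URL is only valid while the blob is managed
  return URL.createObjectURL(blob);
}

// Browser-only part: a hidden anchor whose click() starts the download
function forceDownload(fileName, bytes) {
  const anchor = document.createElement("a");
  anchor.style.display = "none";
  anchor.href = makeBlobURL(bytes);
  anchor.download = fileName;
  document.body.appendChild(anchor);
  anchor.click();
}
```

When the data is no longer needed you can hand the URL to URL.revokeObjectURL() to release the managed blob.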

Voila! Not too hard, was it? 🙂

So now you can just have a button, and in the onClick event you just call ForceDownloadOf() and Bob’s your uncle 🙂

Here is the internal link I got after saving a TW3Dataset to a stream and pushing it through the steps above: blob:http%3A//192.168.38.102%3A8090/9a351a97-5f6d-4a43-b23b-c81b77972e21

This link is relative to the content, so it will only work for as long as your Smart app is in memory (actually, only while the blob is managed, you can remove the blob as well).

Smart Pascal: Arduino and 39 other boards

January 17, 2017

Did you know that you can use Smart Mobile Studio to write your Arduino controller code? People think they need to dive into C, native Delphi or (shrug) Java to work with controller boards or embedded SoCs, but fact is – Smart Pascal is easier to set up, cheaper and a hell of a lot more productive than C.

Arduino getting you down? Why bother, whip out Smart Pascal and do some mild wrapping and kill 40 birds with one stone

What may come as a surprise to you though is that Smart allows you to write code for no less than 40 micro-controllers and embedded devices! Yes, you read that right: 40 different boards from the same codebase.

Johnny number five

Node.js is the new favorite automation platform in the world of hardware, and boards both large and small either support JavaScript directly, or at the very least try to be JS friendly. And since Smart Mobile Studio supports Node.js very well (and it’s about to get even better support), that should give you some ideas about the advantages here.

The magic library is called Johnny-Five (JN5) and it gives you a more or less unified API for working with GPIO and a ton of sensors. And it’s completely free!

As of writing I haven’t gotten around to writing the wrapper code for JN5, but the library is so clean and well-defined that porting it over should not take more than a day. You can also take a shortcut and use our TypeScript import tool.

Check out the API here: http://johnny-five.io/api/

Here is a list of the sensors JN5 supports:

  • Altimeter
  • Animation
  • Barometer
  • Button
  • Compass
  • ESC
  • ESCs
  • Expander
  • Fn
  • GPS
  • Gyro
  • Hygrometer
  • IMU
  • IR.Reflect.Array
  • Joystick
  • Keypad
  • LCD
  • Led
  • Led.Digits
  • Led.Matrix
  • Led.RGB
  • Leds
  • Light
  • Motion
  • Motor
  • Motors
  • Multi
  • Piezo
  • Pin
  • Proximity
  • Relay
  • Relays
  • Sensor
  • Servo
  • Servos
  • ShiftRegister
  • Stepper
  • Switch
  • Thermometer

And enjoy one of the many tutorials here: http://johnny-five.io/articles/

Smart Pascal: Node.js by example

January 16, 2017

The update for the Smart Mobile Studio RTL is nearing completion. We still have a little to go, but all in all we have cleaned up the VJL considerably and given it a much better organization. We now have 3 namespaces (read: groups of units), namely: System, SmartCL and SmartNJ – the latter representing the Node.js aspect of our codebase. We also have a fourth namespace in the making, namely Embedded. Currently our embedded support is limited to Espruino – but I have begun working on the Arduino and Arduino Mega codebase. When these are complete they will all be in the Embedded namespace.

OK, node.js – so where do I begin? Perhaps it’s best to explain a few fundamental keys to how the new RTL is organized, so you understand just how much Smart gives you over vanilla JavaScript.

I mentioned the system namespace above, but what exactly does that mean? The system namespace contains universal code. Code that runs on all JavaScript platforms regardless of a DOM being present or not. So you get to enjoy stuff like:

  • Traditional TStream, TMemoryStream and TFileStream
  • Codec classes
  • Memory allocation and blue pointers
  • Encryption classes
  • Delayed execution (TW3Dispatch, system.time.pas)
  • Event objects
  • Filesystem
  • Datatype conversion (system.type.convert)
  • .. and the whole system namespace

Notice the TFileStream there? You will find that the system namespace contains an abstract implementation of a class called TFileSystem. And this is where things get funky. When you create a SmartCL application (that is, a visual HTML5 application), the implemented filesystem is in SmartCL.Filesystem. When you create a node.js application, the implemented filesystem can be found in SmartNJ.Filesystem. See what I’m getting at here? The idea is that regardless of the runtime environment, you will always be able to use the same code no matter what engine you run on.

Ok, hopefully that gave you some context to work with, now let’s move on and create a fancy server for node.js !

Sexy server time

We are going to create a simple yet powerful UDP client / server application. I picked this because it’s sort of the odd-ball in our collection of server objects. UDP is connection-less and is ridiculously easy to use. At the same time there is no verification and assurance of delivery (that’s the flipside). But UDP is excellent for “live” logging, or for inter-service communication between micro-services on the same network or router.

Right, let’s just jump straight in:

var Server: TNJUDPServer;
Server := TNJUDPServer.Create;

Server.OnMessage := procedure (Sender: TObject; Data: variant;
  RequestInfo: TNJUDPRequestInfo)
begin
  writeln("Received data: " + string(data));
end;

Server.OnError := procedure (Sender: TObject;
  ErrorObj: TJsErrorObject)
begin
  writelnF("Error[%s]: %s", [variant(ErrorObj.stack).toString(), ErrorObj.message]);
end;

Server.OnClose := procedure (Sender: TObject; ErrorObj: TJsErrorObject)
begin
  writeln("Server closed");
end;

Server.OnAfterServerStarted := procedure (Sender: TObject)
begin
  Server.Send("Your first Smart server is now running");
end;

Server.Port := 1881;
Server.Address := '127.0.0.1';
Server.MulticastLoopback := true;
Server.Broadcast := true;
Server.Exclusive := false;
Server.Start();

Congratulations! You have just created your first node.js powered UDP server! It doesn’t get much easier than this, does it? Try to guess how much JS code you would have to write to get all the perks of the system namespace + the node.js namespace.

Some guy commented on an article I wrote the other day, asking me to give him a single reason why Smart Mobile Studio is better than just writing JavaScript. The answer is simple: productivity. Some of the code you get in a simple class in Smart represents hundreds or thousands of lines of JavaScript code.

What do you think is more productive? Maintaining 2 megabytes of raw JavaScript code, a binary kebab of spaghetti code that can go belly up if you misplace as much as a comma? Or 2k of clean, easy to read, easy to understand object oriented pascal code?

I rest my case.

Right, with the server done already, let’s have a peek at the client:

var LClient: TNJUDPClient;
LClient := TNJUDPClient.Create;
LClient.Port := 1881;
LClient.Address := 'localhost';
LClient.Bindings.Add('127.0.0.1', 1881);
LClient.OnAfterStarted := procedure (Sender: TObject)
begin
  LClient.Send('Client is now active', 1881, 'localhost',
  procedure (ErrorObj: TJsErrorObject)
  begin
    writeln('An error occurred: ' + ErrorObj.Stack.ToString());
  end);
end;
LClient.Active := true;

And that my friend is pretty much it!

Node.js really is awesome once you get to approach it on our terms. When you come to the JavaScript virtual machine through the OOP layer that Smart Pascal gives you, it’s a whole different reality.

Write a cluster ready, platform independent client/server application in less time than it takes to boot up Visual studio


What is the benefit here, you might ask? All of this can be done in Delphi and Lazarus without much problem. The benefit is this: the generated code is platform independent; it will run on any platform as long as node.js is installed. It will behave identically regardless of operating system. And you can cluster, clone and replicate instances to your heart’s desire.

Node.js is also easy to host, cheap and accessible. Native hosting is expensive and requires much more work. When something goes wrong inside a 200 megabyte service hosted on Amazon – I have been there – you have to track down the bug, re-build the executable while your boss is going mental, spend 10 minutes just uploading the damn thing, then uninstall the service, reboot the whole instance, re-install the new .exe file and hope you managed to catch that bug.

Compare that to fixing the bug, uploading the new JS file and then informing PM2 that you need to hot-swap the service code. It’s a whole new paradigm.

Websocket server

With UDP behind us, let’s look at something a lot more complex. And when I write complex, I don’t mean for you. Websocket is the fastest growing protocol standard these days. It’s basically an extension of the ordinary HTTP protocol, designed for long-term connections (read: not stateless, single-shot operations like HTTP), full duplex and async. So the server can talk to the client and vice versa without having to wait its turn. Just fire off as many messages as you like, whenever you like – and they will arrive in the same order on the other side.

var
  LServer: TNJWebSocketServer;

LServer := TNJWebSocketServer.Create;
LServer.ClientTracking := true;
LServer.Port := 1881;

// Setup our own protocol commands
LServer.On('command1', procedure (const Socket: JWsSocket; const Data: variant)
  begin
    writeln('Command #1 executed, received data:');
    writeln(Data);
  end);

LServer.On('command2', procedure (const Socket: JWsSocket; const Data: variant)
  begin
    writeln('Command #2 executed');
    // broadcast a "hello" response with a byte array to *ALL*
    // connected clients. Making chat applications is very easy here
    Socket.Emit('Hello', [12, 13, 14, 14]);
  end);

LServer.Active := true;

If you look closely you will notice the use of the On keyword. This is what we use to define our server-side protocol. It’s essentially trigger words that you associate with a piece of code. When the server receives a message it will read the message-name, look up the associated code and execute it. But let’s look at the background for this so you don’t end up downloading the wrong package.
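The lookup itself is a simple pattern. As a purely illustrative sketch in plain JavaScript (createDispatcher is a made-up name, not the package’s actual internals), it boils down to a table of handlers keyed by message name:

```javascript
// Illustrative sketch of message-name dispatch: read the name out of an
// incoming message, look up the associated handler and execute it.
function createDispatcher() {
  const handlers = {};
  return {
    on: function (name, fn) { handlers[name] = fn; },
    dispatch: function (raw) {
      const msg = JSON.parse(raw); // e.g. '{"name":"command1","data":[1,2]}'
      const fn = handlers[msg.name];
      if (!fn) throw new Error('No handler for: ' + msg.name);
      return fn(msg.data);
    }
  };
}
```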

Normally you install the node.js websocket package first, and then you install something called socket.io on top of that. The On keyword we used above is actually not a part of the default websocket standard (which is more low level). This functionality is provided by a separate package called socket.io.

But since most people install the websocket package just to use socket.io, it makes sense to merge these two packages into a single, stand-alone distro that gives you everything in one go. And this package is cleverly named websocket-io (they’re just knockout names, aren’t they?).

I usually install node packages in my project output folder


If you want to read more about websocket-io, head over to the npm website and have a peek at: https://www.npmjs.com/package/nodejs-websocket

Client: Needless to say, the client class is just as simple, so I don’t really see a point in repeating it. In fact, all servers and clients inherit from the same base-class.

What should be stressed is that websocket is easy to use from Delphi as well. So if you have a native client and want to talk to your node.js server, I strongly suggest you steer clear of REST (which is one of the most wasteful protocols ever invented) and use websocket instead.

Note: since websocket is an extension to HTTP, it is perfectly safe to use in a commercial environment. REST requires a lot more CPU and generates much more data on the network than websocket. The most time-consuming and intensive aspect of an HTTP call is the connection phase, while websocket is based on async, full-duplex communication, with message caching and more handled automatically – so there is no reason why you should choose REST over websocket.

Websocket allows you to broadcast messages, or to filter messages based on criteria. Messaging and data-flow protocols are very easy to implement


When you add the fact that websocket is also available from the browser, you can offer your services to the JavaScript community without any extra development. So you get to cater for both native and JS clients. Being able to connect and interact with your custom servers directly from the browser opens up new possibilities. Want to display live information? With a full-duplex connection the server can now inform you whenever something has changed, instead of your clients polling at intervals.

A Delphi extension to websocket (for Indy) was implemented and released by Andre Mussche a while back. The code is super easy to work with and bridges the world of Smart Pascal and Delphi quite nicely:
https://github.com/andremussche/DelphiWebsockets

If you need information about how to use websocket regardless of language, just google the topic. There are thousands of resources out there.

Last but not least

I hope you have found this super simple introduction to node.js server coding inspirational. Node.js is a huge topic and there are roughly 350,000 different packages available online for it. Writing wrapper classes for node packages is not hard either. You do have to know your way around object pascal and JavaScript, but if you look at my units and how we have solved it, you should pick it up very fast.

In the future I hope to generate a 1:1 import tool that will download, convert and install packages for you automatically.

Cheers guys!

Understanding Smart Pascal

January 15, 2017 9 comments

One of the problems you get when working pro bono on a project is a constant lack of time. You have a fixed number of hours you can spare, and every day you have to make decisions about where to invest those hours. The result is that Smart Mobile Studio has a wealth of technical resources and depth, but lacks the documentation you would expect such a product to have. This has been, and continues to be, a problem.

Documentation really is a chicken and egg thing. It doesn’t start out that way, but once the product is launched, you get trapped in this boolean dynamic: “Few people buy it because it lacks documentation; you can’t afford to write documentation because few people buy it”. Considering the size of our codebase I don’t blame people for being a bit overwhelmed.

Despite our shortcomings Smart Mobile Studio is growing. It has a slow but steady growth as opposed to explosive growth. But all products need periods of explosive growth to build up resources so that future evolution of the product can be financed. So this lack of solid documentation acts almost like a filter: only those used to coding in Delphi or Lazarus at a certain level, writing their own classes and components, will feel comfortable using it.

It has become a kind of elite toolkit, used only by the most advanced coders.

The benefit of true OOP for JSVM is unquestionable


Trying to explain

The other day I talked to a man who simply could not wrap his head around Smart Pascal at all. Compile for JavaScript? But.. how.. How do you get classes? He looked at me with a face of disbelief. I told him that we emit a VMT (virtual method table) in JavaScript itself. That way, you get real classes, real interfaces and real inheritance. But it was like talking to a wall.

In his defence, he understood conceptually what a VMT was, no doubt from having read about it in the context of Delphi; but how it really works, and that the principle is fundamental to object orientation at large, was alien to him.

var TObject = {
  $ClassName: "TObject",
  $Parent: null,
  ClassName: function (s) { return s.$ClassName },
  ClassType: function (s) { return s },
  ClassParent: function (s) { return s.$Parent },
  $Init: function () {},
  Create: function (s) { return s },
  Destroy: function (s) { for (var prop in s) if (s.hasOwnProperty(prop)) delete s[prop] },
  Destroy$: function (s) { return s.ClassType.Destroy(s) },
  Free: function (s) { if (s !== null) s.ClassType.Destroy(s) }
}

Above: In object orientation the methods are only compiled once while the instance is cloned. This is why methods in OOP languages are compiled with a secret first parameter that is the instance. Inheritance never duplicates the code inherited from ancestors.
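To make the sharing concrete, here is a hypothetical descendant in the same style as the record above (TAnimal and NewInstance are illustrations, not generated RTL code). Two instances resolve to the exact same function objects; only the field data is per-instance:

```javascript
// Sketch: methods live on the class record, instances only carry field data.
var TAnimal = {
  $ClassName: 'TAnimal',
  $Parent: null, // would point to TObject in the real RTL
  ClassName: function (s) { return s.$ClassName },
  Speak: function (s) { return s.FName + ' says hello' }
};

function NewInstance(cls, name) {
  var s = Object.create(cls); // link the instance to its class record
  s.FName = name;             // per-instance field data
  return s;
}

var a = NewInstance(TAnimal, 'Rex');
var b = NewInstance(TAnimal, 'Fido');
```

However many instances you create, Speak is compiled exactly once.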

In retrospect I have concluded that it had more to do with “saving face” than with this guy not understanding. He had just spent months writing a project in JavaScript that he could have completed in less than a day using Smart Pascal – so naturally, he would look the fool if he admitted he had just wasted a ton of company money. The easiest way to dismiss any ignorance on his part was to push our tech into obscurity.

But what really baked my noodle was his lack of vision. He had failed completely to understand what cloud is, where the market is going, and what that will mean for his skill-set, his job prospects and the future of software development.

It’s not his fault. If anything it’s my fault for not writing about it earlier. In my own defense I took it for granted that everyone understood this and knew what was coming. But that is unfair because the ability to get a good overview of the situation depends greatly on where you are.

JavaScript, the most important language in the world

It may be hard for many people to admit this, but it is nonetheless true. JavaScript has become the single most important language on the planet. Roughly half of all software written right now, whether for the server or the browser, is written in JavaScript.

I knew this would happen as early as 2008, all the signs pointed to it. In 2010 Delphi was in a really bad place and I had a choice: drop Delphi and throw 20 years of hard-earned skills out the window and seek refuge in C++ or C#; or adapt object pascal to the new paradigm and try to save as much of our collective knowledge and skills as I could.

Even before I started on Smart I knew that something like node.js would appear. It was inevitable. Not because I am so clever, but because emerging new technology follows a pattern. Once it reaches critical mass – universal adoption and adaptation will happen. It follows logical steps of evolution that apply to all things, regardless of what the product or solution may be.

What is going to happen, regardless of what you feel

Ask yourself: what are the implications of program code being virtual? What are the logical steps when your code is 100% abstracted from the hardware and the underlying native operating system? What are the implications when script code enjoys near-native speed (modern JavaScript virtual machines JIT-compile to machine code, to the point where JavaScript now runs close to on par with native code), yet can be clustered, replicated, moved and even paused?

Let me say it like this: the next generation of rapid application development won’t deliver executable files or single-platform installers. You will deliver entire eco-systems: products that can be scaled, moved between hosts, replicated – products that run and exist in the cloud purely as virtual instances.

Norwegian-developed FriendOS is just one of the purely cloud based operating systems in development right now. It will have a massive impact on the world

Where Delphi developers today drag and drop components on a form, future developers will drag and drop entire service stacks. You won’t drop just a single picture on a form, but connectors to international media resource managers; services that guarantee accessibility across continents, 24 hours a day, seven days a week.

You think the Chromebook is where it ends? It’s just the beginning.

Right now there are three cloud-based operating systems in development, all of them with support for the new, distributed software model. They allow you to write both the back-end and front-end of your program, which in the new model is regarded as a single entity or eco-system. Things like storage have been distributed for well over a decade now, and you can pipe data between Dropbox, Google Drive or any host that implements the REST storage interface.

Some of the most powerful companies in the world are involved in this. Now take a wild guess what language these systems want you to use.

I’m sorry, but the way we make programs today is about to go extinct.

Understanding the new software model

As a Delphi or Lazarus developer you are used to the notion of server-side applications and client-side applications. The distinction between the two has always been clear, but that is about to change. The old model is still going to be around, at least for the next decade or so, but only for legacy purposes.

To backtrack just a second: Smart introduced support for node.js applications earlier, but at a very low level. In the next update we introduce a large number of high-level classes that are going to change the way you look at node completely.

Two new project types will be introduced in the future, giving you a very important distinction. Namely:

  • Cloud service
  • System service

To understand these concepts, you first have to understand the software model that next generation cloud operating systems work with. Superficially it may look almost identical to the good old two-tier model, but in the new paradigm it is treated as a single, portable, scalable, cluster capable entity.

In the new paradigm this is now a single entity


The thing about clustering and scaling is what tends to confuse traditional developers, because scaling in a native language is hard work. First you have to write your program in such a way that it can be scaled (e.g. work as a member of a group, or cluster). Secondly you have to write a gate-keeper or head that delegates connections between the members of the cluster. If you don’t do this from the very beginning, it will be a costly affair to retrofit a project with the required mechanisms.

Node.js is just awesome because it can cluster your code without you having to program for it. How? Because JavaScript is virtual, so you can fire up 50, 100 or 10,000 instances of the same process, and the only thing you need to worry about is the gate-keeper process. You just park the cluster in an IP range that is only accessible by the gatekeeper, and that’s pretty much it.

When a software eco-system is installed on a cloud host, the entire architecture described by the package is created. So the back-end part of your code is moved to an instance dedicated to that, the front-end is installed where it belongs, the same with the database, and so on. Forget about visual controls and TComponent, because at this level your building blocks are whole services; and the code you write scales from low-level socket coding to piping terabytes of data between processes.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

PM2 is a node.js process manager that gives you clustering and pooled application behavior for free out of the box. You don’t even have to tailor your code for it

Services that physically move

While global accessibility is fine and good, speed is a factor. It goes without saying that having to fetch data from Asia when you are in the US is going to be less than optimal. But this is where cloud is getting smarter.

Your services will actually, physically, move to a host closer to where you are. So let’s say you take a business trip from the US to Hong Kong. The service will notice this, find a host closer to where you are, and replicate itself to that server.

This is not science fiction; it’s already implemented in Azure, and Google’s APIs are designed to accommodate this behavior. It’s pretty cool if you ask me.

Is node.js really that powerful?

Let me present you with a few examples. It’s important to understand that these examples don’t mean everyone has to operate on this scale. But we feel it’s important to show people just what you can achieve and what node is capable of.

Netflix is an online video streaming service that has become a household name in a very short time. Cloud can often be a vague term, but no other service demonstrates the potential of cloud technology as much as Netflix. In 2015 Netflix had more than 69 million paying customers in roughly 60 countries. It streams on average 100 million hours of media a day.

Netflix moved from a traditional, native software model to a 100% clustered Node.js powered model in 2014. The ability for Netflix to run nearly universally on all devices, from embedded computers to Smart TV’s is largely due to their JavaScript codebase.

PayPal is a long-standing online banking and payment service that was first established in 1998. In Q4 of 2016 PayPal had 192 million registered customers world-wide. The service’s annual mobile payment volume was 66 billion US dollars in 2016, more than triple that of the previous year. PayPal moved from a traditional, native server model to Node.js back in 2015, when their entire transaction service layer was re-written from scratch.

Uber is a world-wide taxi company that is purely cloud and web-based. Unlike traditional taxi companies, Uber owns no cars and doesn’t employ drivers; instead it’s a service anyone can partake in, either as a customer or a driver. In 2016 Uber operated in 551 cities across 60 countries. It delivers more than one million rides daily and has an estimated 10 million customers.

Uber’s server technology is based purely on Node.js and exists entirely as a cloud based service. Uber has several client applications for various mobile devices; the majority of these are HTML5 applications that use Cordova/Phonegap (same as Smart applications).

Understanding Smart

While the RTL and the full scope of the technology have been a bit of a “black box” for many people, hopefully the idea and the concepts around it have matured enough for people to open up to it. We were a bit early with it, and without the context that is showing up now, I do understand that it can be hard to grasp the full scope of it (not to mention the potential).

With the cloud and some of its potential now visible (and no, it’s not going away), a sense of urgency should start to set in. Native programming is not going away, but it will be marginalized to the point where it goes back to its origins: as a discipline and part of engineering.

Public software and services will move to the cloud and as such, developers will be forced to use tools and languages better suited for that line of work.

We firmly believe that object pascal is one of the best languages ever created. Smart Pascal has been adapted especially for this task, and the time-saving aspects and “edge” you get by writing object pascal over vanilla JavaScript are unquestionable. Inheritance alone is helpful, but the full onslaught of high-level features Smart brings takes it to the next level.

The benefits of writing object oriented, class-based code are readability, order and maintainability. The benefit of a large RTL is productivity: the most important aspect of all in the world of software development.


Hopefully the importance of our work will be easier to understand, and more apparent, now that cloud is becoming more visible and people are picking up the full implications of this.

The next and obvious step is moving Smart itself to the cloud, which we are planning for now. It will mean you can code and produce applications regardless of where you are. You can be in Spain, France or Oklahoma, USA – all you need is a browser and your object pascal skills, and you’re good to go.

With things like “one click” hosting and instance budgets for auto-scaling, the value for both developers and investors should be fairly obvious at this point.

Starting monday we will actively look for investors.

Sincerely

Jon Lennart Aasenden

LDef try/catch support

January 14, 2017 1 comment

Now this was a pickle: namely, supporting try/catch constructs in LDEF at assembly level. It may sound simple, but it all depends on how exactly the data-model stores individual instructions.

Since LDef has various blocks where code can be defined, the instruction buffers had to be abstracted. By blocks I naturally mean code sections. A procedure, for instance, is such a block. A procedure contains instructions and calls to other code blocks. But it can also contain sub-blocks. Consider the following:

/* block 1 */
public void SomeProc() {
  a = 12;
  b = 24;
  c = a + b;
  if (c >= 27) {
    /* Block 2 */
  } else {
    /* block 3 */
  }
}

The code above, ridiculous in its simplicity, demonstrates a fundamental principle that all compilers must support, namely executing different blocks based on some value. In this example block #2 will execute if “c” is greater than or equal to 27, or block #3 if it’s not.

This is pretty straightforward, right? Well, not quite. It all depends on how you store bytecodes in the data model. The first question you should ask is: how do we execute block #2 and not block #3? Remember that in assembly language (or bytecode) this is all one big chunk. Had this been machine code, the compiler would have to calculate the offset of block #3, as well as where block #3 ends. If the condition is false, a jump to block #3 must be performed (skipping over block #2). Well, you get the idea I think.
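To make the mechanics concrete, here is a hypothetical back-patching sketch in JavaScript (LDEF itself runs on node). The mnemonics are LDEF-style, but the emitter and the offset format are purely illustrative: emit placeholders first, then patch the jump distances once each block’s length is known:

```javascript
// Illustrative sketch: conditional blocks become branches whose forward
// offsets are back-patched once the size of each block is known.
function emitIfElse(code, thenBlock, elseBlock) {
  code.push('BNE ?');                 // if the condition fails, skip then-block
  const branchAt = code.length - 1;
  thenBlock.forEach((op) => code.push(op));
  code.push('JMP ?');                 // after the then-block, skip else-block
  const jumpAt = code.length - 1;
  // offsets here count instructions past the branch itself
  code[branchAt] = 'BNE +' + (code.length - branchAt - 1);
  elseBlock.forEach((op) => code.push(op));
  code[jumpAt] = 'JMP +' + (code.length - jumpAt - 1);
  return code;
}
```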

LDEF was first written in Smart Pascal and runs on node.js and in vanilla html5 applications


Since LDef is very low-level, I had to come up with something similar. But I also wanted a solution that made things easier. Doing in-place forward calculations etc. is not hard; boring perhaps, but not a showstopper by any means. But could I come up with a more flexible solution?

First stop was to fragment the instruction blocks. So instead of having a single list of instructions associated with a procedure or function, these can now have as many instruction lists associated with them as memory can hold. The idea is that they are all glued together into a final list when the model is emitted to disk. But the ability to organize and work with chunks of code like this is really a step up from bare-bones assembly.

type
  TLDefModelParamType =
    (
    ptRegister,   // Parameter is a register
    ptVariable,   // Parameter is a variable (index follows in bytecode)
    ptConst,      // Parameter is a constant (index follows in bytecode)
    ptValue,      // Parameter is a direct value, raw data follows in bytecode
    ptDC          // Parameter is the data-control register
    );

  TLDefModelParam = class
  strict private
    FType:  TLDefModelParamType;  // Param type
    FIndex: integer;              // index (register only!)
    FData:  string;               // data (const + variable only!)
  public
    property  ParamType: TLDefModelParamType read FType write FType;
    property  Index: integer read FIndex write FIndex;
    property  Data: string read FData write FData;
  end;
  TLDefModelParamList = TObjectList;

  TLDefModelInstruction = class(TLDefModelSymbol)
  strict private
    FInstr:     integer;          // Index of instruction in dictionary
    FParams:    TLDefModelParamList;   // Parsed parameters
  public
    property    Id: integer read FInstr write FInstr;
    property    Params: TLDefModelParamList read FParams;
    constructor Create(const AParent: TParserModelObject); override;
    destructor  Destroy; override;
  end;

  TLDefModelInstructionIfThen = class(TLDefModelInstruction)
  strict private
    FThen:      TLDefModelInstructionList;
  public
    property    ThenCode: TLDefModelInstructionList read FThen;
    constructor Create(const AParent: TParserModelObject); override;
    destructor  Destroy; override;
  end;

  TLDefModelInstructionIfThenElse = class(TLDefModelInstructionIfThen)
  strict private
    FElse:      TLDefModelInstructionList;
  public
    property    ElseCode: TLDefModelInstructionList read FElse;
    constructor Create(const AParent: TParserModelObject); override;
    destructor  Destroy; override;
  end;

  TLDefModelInstructionTryCatch = class(TLDefModelInstruction)
  strict private
    FTryCode:   TLDefModelInstructionList;
    FCatchCode: TLDefModelInstructionList;
  public
    property  TryCode: TLDefModelInstructionList read FTryCode;
    property  CatchCode: TLDefModelInstructionList read FCatchCode;
    constructor Create(const AParent: TParserModelObject); override;
    destructor  Destroy; override;
  end;

  TLDefModelInstructionList = class(TLDefModelSymbol)
  strict protected
    function  GetItem(index: integer): TLDefModelInstruction;
  public
    property  Count: integer read ChildGetCount;
    property  Item[index: integer]: TLDefModelInstruction read GetItem;
    function  Add: TLDefModelInstruction; overload;
    function  Add(const NewInstance: TLDefModelInstruction): TLDefModelInstruction; overload;

    function  AddIfThen: TLDefModelInstructionIfThen;
    function  AddIfThenElse: TLDefModelInstructionIfThenElse;
    function  AddTryExcept: TLDefModelInstructionTryCatch;
  end;

  TLDefModelByteCodeChunk = class(TLDefCollectionSymbol)
  strict protected
    function  GetSegment(index: integer): TLDefModelInstructionList; virtual;
  public
    property  Count: integer read ChildGetCount;
    property  Segment[index: integer]: TLDefModelInstructionList read GetSegment;

    function  Add: TLDefModelInstructionList;
  end;

By splitting up TLDefModelInstructionList into these parts, especially the if/then and if/then/else classes, working with conditional execution is no longer problematic. A list will always know its own size and length, so there is not really that much work involved in emitting the jump instructions and the test code.

Exceptions

Exceptions are an integral part of the virtual machine. How to deal with them, however, is something I have thought long and hard about. I finally ended up with a system that is easy to use: the ES register will be 0 (zero) if no exception has occurred, otherwise it will contain the exception identifier.

When an exception occurs, the type and message are pushed on the stack by the virtual machine. A catch block then has to read them out and deal with them. You can also re-throw the exception via “rethrow;” or just throw a new one via “throw ”

    try {
      /* calc longs */
      move r1, count;
      mod r1, 8;
      move r2, count;
      move _longs, r1;
    } catch {
      /* The ES register contains the exception state,
         but the message will be on the stack */
      pop r0; /* get type */
      pop r1; /* get message */
      swap r0, r1; /* Syntax for showmessage wants text in r0 */
      syscall -rtl_showmessage;
    }

Well, fun times ahead! Can’t wait to finish the emitters and get this puppy running 🙂

 

LDef Intermediate Language

January 13, 2017 Leave a comment

The LDEF bytecode engine is some time away from completion, but the IL source format that the assembler reads and turns into bytecode is getting there. At the moment there are only a few tidbits left to explore, like interfaces and generics, but those will be added.

It’s a real brain teaser, because some of the stuff that makes up a compiler is not really related to code. When you think about generics you often make the mistake of thinking it’s a code feature, like inheritance or virtual methods; something the code-emitter has to deal with, or the runtime engine has to accommodate. But generics is actually implemented higher up: it exists between the parser and the code-emitter.

Interfaces are another mirage, or technological illusion. When you work with classes and interfaces you get a sense that they are solid things. You write a class and create objects, and people imagine these objects as independent, material objects in a virtual space. I tend to think of instances as molecules.

But objects are ultimately an illusion. I’m not going to cover the intricate details of object orientation here, but OOP is actually about separating the data (fields) from the code acting on that data (members). So the only things that really exist in the memory of a computer when you create instances are buffers representing the fields of your class, combined with references and a list mapping the entry points of the members that make up that instance (the VMT). Compilers actually mimic object orientation by adding a secret parameter to all your methods, namely the “self” parameter. This “self” is a pointer to a record that contains all the information pertaining to that instance.
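In plain JavaScript, that transformation amounts to something like this (TPoint is a made-up example, not generated code):

```javascript
// Sketch of the hidden "self" parameter: what looks like a method call on an
// instance compiles into a plain function call with the instance in front.
var TPoint = {
  Create: function (self, x, y) { self.x = x; self.y = y; return self; },
  Length: function (self) { return Math.sqrt(self.x * self.x + self.y * self.y); }
};

// source:        p := TPoint.Create(3, 4);  d := p.Length;
// compiles to roughly:
var p = TPoint.Create({}, 3, 4);
var d = TPoint.Length(p); // the instance is just the first argument
```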


Which is really cool, because then you can create as many instances as you like and they all use the same code – which of course is what object orientation is all about. But there is no such thing as an independent instance floating around computer memory. That is an optical illusion of sorts.

LDEF virtual machine

Virtual machines get to have all the fun. I mean, writing a compiler that emits real machine code is in itself not that hard, but devising a scheme that makes object orientation work and keeps track of everything is. The x86 CPU architecture may be powerful, but it’s an absolute bitch to work with. It has few registers (compared to ARM and PPC), several instruction sets, and is known to be a somewhat unfriendly place. This is the reason compiler makers tend to stick to a golden mean of instructions. Compilers like C++ Builder and Delphi could actually generate faster and more efficient code if they knew exactly what CPU and architecture you used. But since that’s not how PCs work – there are some 20 different CPU models on the market at any given time, with huge differences between AMD and Intel – it makes sense to use a safe collection of instructions.

LDEF is a bytecode runtime engine, and one of the perks of bytecode is that it’s very much abstracted from the whole issue of real-life assembly. Most programmers think that if you make bytecodes, you don’t have to think about low-level stuff – which is cheating. That may be the case for other languages and engines, but not LDEF. In fact, the whole point of writing a new engine is that I wanted a bytecode format closer to machine code. This is very important because at some point myself or someone else will write a JIT compiler for it, and if the runtime is too high-level or abstract, that is going to be very hard.

LDEF testbed


The LDEF instruction-set represents a golden mean of the average processor instruction set. I have taken the instructions most processors have – typical stuff like add, subtract, divide, modulus, multiply (and so on). Testing is where I have given LDEF some advantages, for instance when testing a list of parameters for conditions. Instead of keeping that as two instructions (test, then branch on condition) I have isolated it in a single instruction.
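
To illustrate the point – and note that the exact operand layout of the fused instruction is my own guess here, so treat the syntax as a sketch rather than final LDEF:

```
/* traditional two-step: compare, then branch on the result */
cmp  r0, 0;
bne  @not_zero;

/* fused LDEF form: test and branch in a single instruction */
IFnb r0, 0, @not_zero;
```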

Let’s look at the instruction set so far:

  • move
  • blit – move memory block
  • valloc – allocate variable
  • vfree – free variable
  • malloc – allocate memory
  • mfree – release memory
  • add
  • sub
  • mul
  • muldiv
  • div
  • mod
  • moddiv
  • lsr – logical bit-shift right
  • lsl – logical bit-shift left
  • cmp – compare
  • tst – test
  • bne – branch not equal
  • beq – branch equal
  • ble – branch less
  • bgt – branch greater
  • jsr – jump sub routine
  • jmp – jump absolute
  • push – push to stack
  • pop – pop from stack
  • IFnb – test and branch if not equal
  • IFtb – test and branch if equal
  • throw – cause exception
  • syscall – invoke engine-specific Delphi code
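
A bytecode instruction set like the one above is typically executed by a dispatch loop. This is not the actual LDEF implementation – just a minimal Delphi sketch of the technique, with an invented instruction record and only a handful of the opcodes wired up:

```delphi
type
  TOpCode = (opMove, opAdd, opSub, opJmp);

  TInstruction = record
    Op: TOpCode;
    Target, Source: integer; // register indices (or jump address for opJmp)
  end;

procedure Execute(const Code: array of TInstruction; var Regs: array of integer);
var
  PC: integer; // program counter
begin
  PC := 0;
  while PC <= High(Code) do
  begin
    case Code[PC].Op of
      opMove: Regs[Code[PC].Target] := Regs[Code[PC].Source];
      opAdd:  inc(Regs[Code[PC].Target], Regs[Code[PC].Source]);
      opSub:  dec(Regs[Code[PC].Target], Regs[Code[PC].Source]);
      opJmp:  begin
                PC := Code[PC].Target; // absolute jump
                continue;
              end;
    end;
    inc(PC);
  end;
end;
```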

One of the benefits of a virtual machine is that you can do some magic with variables. In a real machine-code compiler, variable allocation and read/write operations can be very complex. But in a virtual machine you thankfully get to do something about that.

So LDEF allows you to move data seamlessly between variables and registers, which really is a massive time saver. The code standard also supports two levels of resources, global and local (the latter meaning class-level in this case). So there is ample room for high-level languages to store their data and implement classical features (like “resourcestring” in Delphi).

You also have support for constants; these are stored separately in the bytecode and loaded into a lookup-table associated with a class. Constants are different from resources in that they are designed to be quickly referenced. Constants have more infrastructure, in the form of lookup-tables, because they are meant to be used with the instructions. Resources are more like resource-strings in Delphi: they are stored in their own place in the bytecode and loaded on demand. Constants are not – they are loaded into memory and are instantly available.

Having said that, the bytecode compiler also supports in-place data, meaning that you can choose to write constant data where it is used. So instead of having to load a string constant (for example) before working with it, you can compile the string directly into the move instruction (or any other instruction). So this is actually perfectly valid:

move r0, "Hello world";

Other virtual machines, like the Java engine, force you to do this in two steps, which is slower:

lda r0, const_id; // reference const by id
ldc r1, (r0);     // get actual value into r1

You can also assign variables directly; you don't need to load the effective address first. So there is no extra cost involved in moving data between a register and a variable, variable and variable, or resource/const to a variable. This makes life much easier:

move V[$00F6], "String assignment";
move V[$01F0], 19875.32;
move V[$196], V[$197];

Registers

Another benefit is that LDEF has 32 registers to work with (you can actually change that, so if you need 64 that's no problem). How you use these is up to you, but it gives a lot of room for OOP schemes. For instance, if a language has a maximum limit of 6 parameters per method – you don't actually need to pass values on the stack at all (like Java does); you can map parameters directly to registers.

Delphi, C++Builder and other native solutions tend to use stack-schemes to store information. So a pointer to some list is stored as the fourth item on the stack, and some other entry as the third (and so on). Which is perfectly fine and very effective on a real CPU (you really have no choice on a real x86). In LDEF you can now use specific registers instead, which is a lot easier. What scheme you choose to use is naturally up to you – but at least the option is there.

Here are the registers LDEF presently supports:

  • R0 – R31: general purpose registers
  • SP: stack pointer
  • PC: program control register
  • BC: branch control register
  • ES: exception state register

Optimization

If you are wondering why a virtual machine would support so much similar stuff – constants and resources, in-place data or not – this all has to do with optimization.

For example, if your compiler decides to collect all assignment values as constants, the code size of your program might be smaller; it all depends on what your code looks like. You will naturally consolidate identical strings, integer values and so on. So even if you have a program that writes “Hello world!” 10,000 times to the console, only one “Hello world!” will be stored in the bytecode file. So constants give you smaller code, but at a cost of speed – since constants need a lookup whenever they are used.
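
The consolidation step described above is a classic interning scheme. Here is a minimal Delphi sketch of a hypothetical constant pool (TConstPool and its members are my own names, not part of LDEF) where identical values collapse to one stored entry referenced by id:

```delphi
uses
  System.Generics.Collections;

type
  TConstPool = class
  strict private
    FLookup: TDictionary<string, integer>;
    FValues: TList<string>;
  public
    constructor Create;
    destructor Destroy; override;
    // Returns the id for a constant, storing the value only once
    function Intern(const Value: string): integer;
  end;

constructor TConstPool.Create;
begin
  inherited Create;
  FLookup := TDictionary<string, integer>.Create;
  FValues := TList<string>.Create;
end;

destructor TConstPool.Destroy;
begin
  FValues.Free;
  FLookup.Free;
  inherited;
end;

function TConstPool.Intern(const Value: string): integer;
begin
  // Identical constants consolidate to the same id
  if not FLookup.TryGetValue(Value, result) then
  begin
    result := FValues.Add(Value);
    FLookup.Add(Value, result);
  end;
end;
```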

Now constants here are not “const” declarations. Constants on this level are actually the values you write in plain-text in your program, stuff like this:

FMyText := 'Welcome to my program';
FCount := 100;

Both “Welcome to my program” and “100” are constants. These are values that will never change, and compilers usually have to deal with them in one of two ways – both of which I have explained above (in place or not).

Source format

The LDEF intermediate format looks pretty much like C++. C is fast and easier to parse than other languages, so it made sense to pick that. But the similarity is paper thin, in that only the constructs of C++ are used. Methods contain only LDEF assembly language code, and variables are isolated in alloc sections. The virtual machine takes care of the OOP layer for you, so you don't have to worry about that.

Here is an example to give you a feel for it:

#include <stdio>;
#include <oop>;

struct CustomType
{
  uint32 ctMagic;
  uint32 ctSize;
  uint8  ctData[4096];
}

resources
{
  dc.s #welcome, "welcome to LDef";
}

class TBaseObject: object
{
  /* class-fields */
  alloc {
    uint8 temp;
    uint32 counter;
    CustomType handle;
  }

  /* Parser now handles register mapping */
  public void main(r0 as count, r1 as text) {
    enter { }
    leave { }

    alloc {
      /* method variables */
      uint32 _longs;
      uint32 _singles;
    }
    /* calc longs */
    move r1, count;
    mod r1, 8;
    move r2, count;
    move _longs, r1;

    /* calc singles */
    move r3, r1;
    mul  r3, 8;
    sub  r2, r3;
    move _singles, r2;
  }

  /* test multi push to stack */
  private void _cleanup() {
     push [r0, r1, r2];
  }
}

Keep in mind that the format is not 100% finished yet; there are still a few tidbits that need to be worked out. But the general system is firmly in place.

One of the cool things is that I get to add a few features I have been missing from my favorite language, Delphi (object pascal). If you look at the first method (main), you will notice that it has two extra sections: enter and leave.

These sections can contain code that executes directly before and after the body of the method. In pascal it would look something like this:

procedure TMyClass.SomeProc;
  before
    writeln('about to execute someproc');
  end;

  after
    writeln('Someproc finished');
  end;
begin
end;

The purpose of these, especially the before section, is to make sure the parameters are valid. So before is used to check that the parameters are within a legal range, and after is naturally used to ensure that the result (for functions) is valid.
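
Until a compiler supports this directly, the same effect can be approximated in today's Delphi with a guard at the top and a try/finally for the exit code – a rough sketch (the class, method and range here are invented for the example):

```delphi
uses
  System.SysUtils;

procedure TMyClass.SomeProc(Value: integer);
begin
  // "before": reject invalid parameters up front
  if (Value < 1) or (Value > 100) then
    raise EArgumentOutOfRangeException.CreateFmt(
      'SomeProc: value %d out of range', [Value]);
  try
    // ... method body ...
  finally
    // "after": post-conditions, logging etc.
    writeln('SomeProc finished');
  end;
end;
```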

It also opens up for some interesting logging and debugging aspects.

More to come

This is turning into a long rant, but hopefully you found it interesting. I'll keep you posted as the engine progresses. There is still some way to go, but we are getting there. Once LDEF is implemented the fun starts; that's when I'll code high-level parsers that target the engine. First stop is naturally an object pascal compiler 🙂

Parsing fun with anonymous methods

January 11, 2017 Leave a comment

Anonymous methods can save you days, weeks and months of coding. Bug hunting in a smaller codebase is always more effective


I must have written 100 parsers in my day. Making programming languages, script engines or just text-based file-formats is something I have enjoyed doing since the Amiga days (the 80's and 90's). For each parser I make, I discover new and interesting aspects. That's the beauty of programming; any programmer who proposes that he is fully taught, that he is the best or that there is nothing he cannot do, is a liar. Programming is an art that is never exhausted, and there will always be new techniques to learn and different aspects to explore.

Don't underestimate anonymous methods

Delphi has supported anonymous methods for many years now, but like other languages it can take a while before a feature impacts a community's codebase at large. Delphi is like a whale: it can take a while to turn, but when it turns it comes about full force. Thankfully we have now reached the stage where generics and anonymous procedures are almost everywhere. There is still tons of classical object pascal (Delphi 7 style) online – probably hundreds of thousands of examples and snippets people can look at. But the libraries, components and new code people post now make good use of the excellent features Delphi has to offer.

Take parsing. A common method that you find in almost all parsers is ReadTo(). It’s a function that keeps on reading, char by char, until some condition is met. Usually this condition is finding a specific character, or that the parser hits a character that it doesn’t expect.

For example like this:

type
  TCharSet = set of Char;

function ReadTo(breakers: TCharSet; var text: string): boolean;

This is more than enough to read method-names, variable names and ordinary plain-text elements. The TCharSet set acts as a filter, so when ReadTo() hits a character not in that set – the procedure exits from the read loop, returning false. There is naturally more to it than this, but it's just to give you an idea.
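
A standalone sketch of this set-based variant might look like the following – note that the buffer/index plumbing is invented for the example (the real parser keeps this state internally), and here the set lists the characters that are *allowed*, matching the filter behaviour described above:

```delphi
uses
  System.SysUtils;

type
  TCharSet = set of Char;

// Reads characters into Text while they belong to Valid.
// Returns true if at least one character was consumed.
function ReadTo(const Buffer: string; var Index: integer;
  const Valid: TCharSet; var Text: string): boolean;
begin
  Text := '';
  while (Index <= Length(Buffer)) and CharInSet(Buffer[Index], Valid) do
  begin
    Text := Text + Buffer[Index];
    inc(Index);
  end;
  result := Length(Text) > 0;
end;
```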

With anonymous methods, we can write more flexible parsing code. While sets work just fine, it is sometimes easier to do ad-hoc checking. In a complex language or file-format the amount of set-types can grow quite large in the old model. So in-place testing is very welcome, and this is where anonymous methods are super handy.

Consider this parsing function:

function TParserBuffer.ReadTo(const Validator: TParserValidator): boolean;
begin
  result := false;
  SetLength(Text, 0);
  if (not Empty) and assigned(Validator) then
  begin
    while not EOF do
    begin
      if not Validator(Current) then
        break;
      Text := Text + Current;
      if not next() then
        break;
    end;
    result := Text.Length > 0;
  end;
end;

With that in place we can now do ad-hoc reading and testing like this:

type
  TParserValidator = reference to function (Value: char): boolean;

LParser.ReadTo( function (Value: char): boolean
  begin
    result := CharInSet(Value, ['A'..'Z', 'a'..'z', '_', '0'..'9']);
  end);
LText := LParser.Text;

Advanced parsing always results in pretty much the same techniques being evolved. You can take a programmer in the US and another programmer in Germany, isolate them with no internet access and ask them to write a language parser, perhaps with a deadline of 6 months. When you audit the code you will discover that both programmers have “discovered” and “invented” more or less the same techniques. There are many ways to do parsing, but common to the topic in general are a few fundamental principles.

One of those principles is called “parser context”. The idea is that you isolate information about a parsing session in a separate object (to avoid one massive, complex and nearly undecipherable class). So the current parser position (offset into the text buffer), the column and row info – but more importantly the “data model” your parser has built up, which is referred to as an AST (abstract syntax tree) among other names – is instantly available via the context.

Let’s dig into it

I'm not going to write too much on this (since it's a huge topic), but OK, let's dig into the benefits of the context object.

An important part of the context object is a stack mechanism. As the parser moves through the source-code, it creates objects that are stored in the data-model – objects that have clear relationships. For example, a local variable will naturally be connected to a procedure or function. A method will likewise be connected to a class.

The context-stack helps you deal with recursive parsing. You will have a class parser, which in turn will invoke a method parser when it finds a method, a property parser when a property is found, a field parser, and so on.

To simplify recursive parsing, what people do is push the current model-object on the context stack, then call the parser-method in question. That parser will in turn do the same if it has sub-elements that should be parsed with itself as the parent. Using a stack saves a lot of time and allows you to write very clean, effective parsing methods.

Take a Delphi class, a simple class with some fields, a couple of properties and a few methods. The process for parsing that would look something like this:

  • ParseClass
    • Parse ancestor
  • Parse fields
    • Parse field declaration
    • Parse datatype
  • Parse method
    • Parse parameters
      • Parse datatype
      • Parse result datatype (function)
  • Parse properties
    • Parse getter
    • Parse setter

Instead of having to write code to look-up the parent for each “parse job”, or even worse – having to pass along the parent via parameters or setting properties (remember, we use a context to de-couple data from process), the stack allows us to define the target or parent for each child job:

// pass the "unit" on the stack
Context.Stack.Push(self.unitmodel);
try
  ParseClass();
finally
  context.stack.pop();
end;

And the same style of code just continues in all the sub-elements:

// get the unit model-object from the stack
LUnit := TUnitModel(Context.Stack.Peek);

// Add a class object to the model
LClass := LUnit.AddClassModel();

// push that onto the stack for children
Context.Stack.Push(LClass);
try
  //Naturally more work here, but through the
  //context object the parser now always knows
  //the parent where new data should be stored
  ParseFields();
  ParseMethods();
  ParseProperties();
finally
  Context.Stack.Pop();
end;

Anonymous procedures and functions now simplify this even further. Being able to pass methods as parameters means that recursive and complex jobs can be written more effectively. Where I previously had to implement, say, a “4 step” parsing job (where each parser calls the other for sub-elements) as classes, I can now do pretty much the same in 4 calls (!)

Context.Buffer.push();
try
  Context.Buffer.ReadTo( function(value:char):boolean
    begin
      context.buffer.push();
      try
        Context.Buffer.ReadTo( function(value:char):boolean
          begin
            if bla bla bla
      finally
        Context.Buffer.Pop();
      end;
    end);
finally
  Context.Buffer.pop();
end;

Notice the context.buffer.push()? Well, that's a feature for pushing the current position and offset of the parser on a stack. So now I can easily read ahead through quite deep structures, and return to the very position the declaration started at. This is very helpful when dealing with distinctions – for instance, when telling the difference between a method call locally or in another object:

myproc;
FObject.someproc;

In the above source, we would hit “.”, then do an anonymous reading session to pick up “someproc;”, return back without losing the current position – and evaluate what rules should be applied to the text ahead. Dealing with parameter brackets, be they empty or with data, also becomes a recursive and simple task.
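
The position-stack behind push()/pop() is simple to sketch. This cut-down TParserBuffer (the real one naturally tracks column/row info as well; only the rewind mechanism is shown) illustrates the idea:

```delphi
uses
  System.Generics.Collections;

type
  TParserBuffer = class
  strict private
    FIndex: integer;           // current read position
    FSaved: TStack<integer>;   // remembered positions
  public
    constructor Create;
    destructor Destroy; override;
    procedure Push;            // remember the current position
    procedure Pop;             // rewind to the last remembered position
    property Index: integer read FIndex write FIndex;
  end;

constructor TParserBuffer.Create;
begin
  inherited Create;
  FSaved := TStack<integer>.Create;
end;

destructor TParserBuffer.Destroy;
begin
  FSaved.Free;
  inherited;
end;

procedure TParserBuffer.Push;
begin
  FSaved.Push(FIndex);
end;

procedure TParserBuffer.Pop;
begin
  FIndex := FSaved.Pop;
end;
```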

Now I have just started to play with anonymous methods in my latest parser, so there are going to be a lot of exciting techniques to explore.

Cheers guys!

Delphi bytecode compiler

January 10, 2017 1 comment

This is a pet project I have been playing with on and off for a couple of years now. Since I'm very busy with Smart Mobile Studio these days I haven't really had time to write much about it in a while. Well, today I needed a break from JavaScript – and what is more fun than relaxing with a cup of cocoa and bytecodes?

The LDEF bytecode standard

Do you remember Quartex Pascal? It was an alternative pascal compiler I wrote a couple of years back. The plan was initially to get Smart Mobile Studio into that codebase, and essentially kill 3 birds with one stone. I already have parsers for various languages that I have written: Visual Basic .NET, Typescript, C# and C++. Adding support for Smart Pascal to that library of parsers would naturally give us a huge advantage.

In essence you would just pick what language you wanted to write your Smart apps in, then use the same classes and RTL across the board. You could also mix and match, perhaps write some in typescript (which is handy when you face a class that is difficult to “pascalify” via the import tool).

LDEF test environment


But parsing code and building an AST (abstract syntax tree – the hierarchy of objects the parser builds from your source-code; basically your program in object form) is only half the job. People often start coding languages and script engines without really thinking it through (I have done that myself a few times; it's a great way to learn the ropes, but you waste a lot of time), and after a while they give up, because it dawns on them that the hard part is still ahead: namely generating executable code, or at least running the code straight off the AST. There is a reason components like dwScript have taken years to polish and perfect (the same can be said of PAX script, which is probably one of the most powerful engines available to Delphi developers).

The idea behind Quartex Pascal was naturally that you could have more than one language, but they would all build the same AST and all of them emit the same bytecode. Not exactly a novel idea – this is how most scripting engines work, and what Java and .NET have been doing since day one.

But the hard part was still waiting for me: generating the bytecode. This may sound easy, but it's not just a matter of dumping out a longword with the index of a method. You really need to think it through, because if you make it too high-level, it will cripple your language. If you make it too low-level, the code will execute slowly and the amount of bytecode generated can become huge compared to normal assembly code.

And no, I’m not making it compatible with CIL or Java, this is a pure object pascal solution for object pascal developers (and C++ builder naturally).

LDEF source language

I decided to do something a bit novel. Rather than just creating some classes to help you generate bytecode, I decided to create a language around the assembly language. Since C++ is easy to parse and looks that way because of its close connection to assembler, I decided to use C++ as the basic dialect.

So instead of emitting bytecodes yourself, your program only needs to emit LDEF source-code. Then you can just call the assembler program (command line) and it will parse, compile and turn it into assemblies for you.

Here is an example snippet that compiles without problem:

struct CustomType
{
  uint32 ctMagic;
  uint32 ctSize;
  uint8  ctData[4096];
}

class TBaseObject: object
{
  /* class-fields */
  alloc {
    uint8 temp;
    uint32 counter;
  }

  /* Parser now handles register mapping */
  public void main(r0 as count, r1 as text) {
    alloc {
      /* method variables */
      uint32 _longs;
      uint32 _singles;
    }
    move r1, count;
    jsr @_cleanup;
    push [text];
  }

  /* test multi push to stack */
  private void _cleanup() {
     push [r0, r1, r2];
  }
}

The LDEF virtual machine takes care of things like instance management, VMT (virtual method table), inheritance and everything else. All you have to do is to generate the LDEF assembly code that goes into the methods – and voila, the assembler will parse, compile and emit the whole shabam as assemblies (the -L option in the command-line assembler controls how you want to link the assemblies, so you can emit a single file).

The runtime engine is written in vanilla object pascal. It uses generics and lookup tables to achieve pretty good performance, and there is ample room for a JIT engine in the future. What was important for me was to have a solution that is portable and requires little maintenance, with an instruction set that could easily be converted to actual machine-code, LLVM or another high-level language (like C++, which can piggyback on GCC regardless of platform).

LDEF instructions

The instruction set is a golden mean of the most common instructions that real processors have. Conceptually the VM is inspired by the Motorola 68020 chip, in combination with the good old Acorn RISC CPU. Some of the features are:

  • 32bit (4 byte) opcode size
  • 32 registers
  • 1024 byte cache (can be adjusted)
  • Stack
  • Built in variable management
  • Built in const and resource management
  • Move data seamlessly between registers and variables
  • Support for records (struct) and class-types
  • Support for source-maps (both from assembler to high-level and reverse)
  • Component based, languages and emitters can be changed

There are also a few neat language features that I have been missing from Delphi, like code criteria. Basically it allows you to define code that should be executed before the body of a procedure, and directly after. This allows you to check that parameter values are within range or valid before the procedure is allowed to execute.

Here is the pascal version:

function TMyClass.CalcValues(a,b,c: integer): integer;
begin
  before
    if (a<1) or (b<1) or (c<1) then
      Fail('Invalid values, %classname.%methodname never executed');
  end;

  result := a + b div c;

  after
    if result <23 then
      Fail('%classname.%methodname failed, input below lowest threshold error');
  end;
end;

I decided to add support for this in LDEF itself, including the C++ style intermediate language. Here it looks like this:

  /* Enter + Exit is directly supported */
  public void main(r0 as count, r1 as text) {
    enter {
    }

    leave {
    }

    alloc {
      /* method variables */
      uint32 _longs;
      uint32 _singles;
    }
    move r1, count;
    jsr @_cleanup;
    push [text];
  }

Why Borland/Embarcadero didn't add this when they gave the compiler a full overhaul and added support for generics is beyond me. C++ has had this for many, many years now. C# has supported it since version 4, and Java I am unsure about – but it's been there for many years, that's for sure.

Attributes and RTTI

Attributes will be added, but support for user-defined attributes will only appear later. Right now the only attributes I have plans for control how memory is allocated for variables, whether registers should be pushed to the stack automatically on entry, and whether the VMT should be flattened and the inheritance-chain reduced to a single table. More attributes will no doubt appear as I move forward, but right now I'll stick to the basics.

RTTI is fairly good and presently linked into every assembly. You can't really omit that much from a bytecode system. To reduce the use of pure variables I introduced register mapping, to make it easier for people to use the registers for as much as possible (much faster than variables):

  public void main(r0 as count, r1 as text) {
  }

You may have noticed the strange parameters on this method? That's because they are not parameters, but definitions that link registers to names (register mapping). That way you can write code that uses the original names of variables or parameters, and avoid allocating variables for everything. It has no effect except making it easier to write code.

LDEF, why should I care?

You should care because, with a bit of work, we should be able to compile fairly modern Delphi source-code to portable bytecodes. The runtime or engine that executes these bytecodes can be compiled using FreePascal, which means you can execute the program anywhere FPC is supported – so more or less every operating system on the planet.

You should care because you can now write programs for Ultibo, the pascal operating system for the Raspberry PI. It can only execute one real program (your embedded program), but since you can now run bytecodes, you will be able to run as many programs as you like. Just run each bytecode program in a thread, or play it safe and call next() on interval from a TTimer.
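
The TTimer approach amounts to a tiny cooperative scheduler. As a hedged sketch – TLDefProgram and its Next() method are hypothetical names standing in for whatever the engine ends up exposing – it could look like this:

```delphi
uses
  System.Generics.Collections;

type
  // Hypothetical: Next() executes a small batch of instructions
  // and returns false once the program has finished
  TLDefProgram = class
  public
    function Next: boolean; virtual; abstract;
  end;

  TScheduler = class
  strict private
    FPrograms: TObjectList<TLDefProgram>;
  public
    procedure Tick(Sender: TObject); // wire this to TTimer.OnTimer
  end;

procedure TScheduler.Tick(Sender: TObject);
var
  i: integer;
begin
  // round-robin: give every loaded bytecode program one time-slice
  for i := FPrograms.Count - 1 downto 0 do
    if not FPrograms[i].Next() then
      FPrograms.Delete(i); // finished programs are removed
end;
```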

You should care because once LDEF is finished, being able to transcode from object pascal to virtually any language will be a lot easier. JavaScript? C++? Python? Take your pick. The point here is standards, and a library that is written to last decades rather than years. So my code is very clean, with no silly pointer tricks just to get a few extra cycles. Stability, portability and maintenance are the values here.

You should care because in the future, components like HexLicense will implement its security code in LDEF using a randomized instruction-set, making it very, very hard for hackers to break your product.

Thread safe values, version 1.2

January 6, 2017 2 comments

A while back (a year perhaps?) I posted an article about protected values. In short: anyone who has ever worked with multi-threading should know that sharing values between your application and thread(s) is not always as straight forward as we like to believe.

The latest versions of Delphi have made threading a lot simpler, especially async coding. But even though Delphi has gotten better on the subject, there is no denying that archetypical languages like object pascal and C++ are intrinsically low-level. It's not like Java and C#, where they put training-wheels on everything and you can just ignore the laws of physics (because your application is the digital version of a padded cell).

Note: While we are on the subject, you may want to check out my delayed callback unit, which implements TW3Dispatch from Smart Mobile Studio.

Protected values version 1.2

So what are protected values? Well, in short it's a series of classes that automatically protect a piece of data (intrinsic datatypes like “string”, “integer” and “boolean”, but also class instances) and will lock and unlock when you read or change the value.

Multithreading can be a total bitch if you dont do it right


What is new in version 1.2 is that I have extended the model with an exclusive locking mechanism – an anonymous procedure which gives you exclusive access to the data for the duration of your callback. You also have read/write access properties and a few other improvements.

It is, like many of my units, simple yet powerful. Some may argue that they don’t need it, that it’s not atomic – but that was never really the point. For atomic values you can dig into WinAPI but I personally like to keep my code as platform independent as possible.

The purpose of this library is, simply put, to have a uniform way of creating thread safe variables and fields. Values that can be safely changed from a thread without any extra padding or care, and likewise – values that a thread can expose without read/write synchronizing, instance checking and thread callbacks.

Examples

Creating a protected container is very simple. The unit uses generics, so you can take your pick of datatype:

var
  LData: TProtectedValue<string>;
begin
  LData := TProtectedValue<string>.create;
  RegisterWithThreads(LData);
  LData.Value := 'This is a test';
end;

In the code above the object is locked while the assignment lasts, so it will never collide with another thread accessing it. If you want exclusive access for the duration of a process or task, use the synchronize call:

var
  LData: TProtectedValue<string>;
begin
  LData := TProtectedValue<string>.create;
  RegisterWithThreads(LData);
  LData.Synchronize( procedure (var Data: string)
    begin
      //Access is blocked for the duration of this proc
      for item in GlobalList do
        Data := Data + item.ToString();
    end);
end;

The next feature I added was better support for object lists. In the previous version you had to create the list-instance yourself and then wrap it in a container. Now the container is the objectlist, and we use generics to denote the type. The same locked access mechanisms are available there.

var
  LData: TProtectedObjectList<TMyItem>;
begin
  LData := TProtectedObjectList<TMyItem>.create;
  LData.ForEach( procedure (Item: TObject; var Cancel: boolean)
    begin
      // Set Cancel to True to abort the loop
      // ForEach() will iterate through the items stored in
      // the objectlist. Sadly you must typecast here due to scope
    end);
end;

And of course, the normal locking mechanisms for lists:

type
  TMyListType = TObjectList<TMyItem>;

var
  LData: TProtectedObjectList<TMyItem>;
  LList: TMyListType;
begin
  LData := TProtectedObjectList<TMyItem>.create;
  LList := TMyListType( LData.Lock );
  try
    //Process list here
  finally
    LData.Unlock();
  end;
end;

While all of this is cool, the core value of the library (at least for me) is that I can safely move values between the main process and threads without the typical insulation involved. I can use protected values as fields in my class and publish them as read-only properties, killing two birds with one stone (both protecting the value and conforming to the multiple-read, single-write law of memory).
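
As a sketch of that field-plus-property pattern (TWorker and its progress counter are invented for the example; TProtectedValue is the container from the unit below):

```delphi
uses
  System.Classes;

type
  TWorker = class(TThread)
  strict private
    FProgress: TProtectedValue<integer>;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    // the main thread polls Progress.Value; only the worker writes it
    property Progress: TProtectedValue<integer> read FProgress;
  end;

constructor TWorker.Create;
begin
  inherited Create(true);
  FProgress := TProtectedValue<integer>.Create(0);
end;

destructor TWorker.Destroy;
begin
  FProgress.Free;
  inherited;
end;

procedure TWorker.Execute;
var
  i: integer;
begin
  for i := 1 to 100 do
  begin
    FProgress.Value := i; // locked assignment, safe while the UI reads
    TThread.Sleep(10);
  end;
end;
```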

The unit is tried and tested. It is presently used in 3 commercial products, so you can put a little faith in it. But as always, if you don’t need it then leave it alone.

If you like it, enjoy 🙂

The code

unit hex.log.locktypes;

interface

uses
  System.SysUtils,
  System.SyncObjs,
  System.Classes,
  System.Generics.Collections;

type

  {$DEFINE USE_SYNCHRO}

  TProtectedValueAccessRights = set of (lvRead, lvWrite);

  EProtectedValue = class(exception);
  EProtectedObject = class(exception);

  (* Thread safe intrinsic datatype container.
     When sharing values between processes, use this class
     to make read/write access safe and protected. *)

  {$IFDEF USE_SYNCHRO}
  TProtectedValue<T> = class(TCriticalSection)
  {$ELSE}
  TProtectedValue<T> = class(TObject)
  {$ENDIF}
  strict private
    {$IFNDEF USE_SYNCHRO}
    FLock: TCriticalSection;
    {$ENDIF}
    FData: T;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function GetValue: T;virtual;
    procedure SetValue(Value: T);virtual;
    function GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      TProtectedValueEntry = reference to procedure (var Data: T);
  public
    constructor Create(Value: T); overload; virtual;
    constructor Create(Value: T; const Access: TProtectedValueAccessRights= [lvRead, lvWrite]); overload; virtual;
    constructor Create(const Access: TProtectedValueAccessRights = [lvRead, lvWrite]); overload; virtual;
    destructor Destroy;override;

    {$IFNDEF USE_SYNCHRO}
    procedure Acquire;
    procedure Release;
    {$ENDIF}
    procedure Synchronize(const Entry: TProtectedValueEntry);

    property AccessRights: TProtectedValueAccessRights read GetAccessRights;
    property Value: T read GetValue write SetValue;
  end;

  (* Thread safe object container.
     NOTE #1: This object container **CREATES** the instance and maintains it!
              Use Edit() to execute a protected block of code with access
              to the object.

     Note #2: SetValue() does not overwrite the object reference, but
              attempts to perform TPersistent.Assign(). If the instance
              does not inherit from TPersistent an exception is thrown. *)
  TProtectedObject<T: class, constructor> = class(TObject)
  strict private
    FData:      T;
    FLock:      TCriticalSection;
    FOptions:   TProtectedValueAccessRights;
  strict protected
    function    GetValue: T;virtual;
    procedure   SetValue(Value: T);virtual;
    function    GetAccessRights: TProtectedValueAccessRights;
    procedure   SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      TProtectedObjectEntry = reference to procedure (const Data: T);
  public
    Property    Value: T read GetValue write SetValue;
    Property    AccessRights: TProtectedValueAccessRights read GetAccessRights;

    Function    Lock: T;
    procedure   Unlock;
    procedure   Synchronize(const Entry: TProtectedObjectEntry);

    Constructor Create(const AOptions:TProtectedValueAccessRights = [lvRead,lvWrite]);virtual;
    Destructor  Destroy;override;
  end;

  (* TProtectedObjectList:
     This is a thread-safe object list implementation.
     It works more or less like TThreadList, except it deals with objects *)
  TProtectedObjectList = Class(TInterfacedPersistent)
  strict private
    FObjects:   TObjectList;
    FLock:      TCriticalSection;
  strict protected
    function GetEmpty: Boolean;virtual;
    function GetCount: Integer;virtual;

    (* QueryObject proxy: TInterfacedPersistent allows us to
       act as a proxy for QueryInterface/GetInterface. Override
       and provide another child instance here to expose
       interfaces from that instead *)
    function GetOwner: TPersistent;override;
  public
    type
      TProtectedObjectListProc = reference to procedure (item:TObject;var Cancel:Boolean);
  public
    constructor Create(OwnsObjects: Boolean = True);virtual;
    destructor  Destroy;Override;

    function    Contains(Instance: TObject): boolean;virtual;
    function    Lock: TObjectList;virtual;
    Procedure   UnLock;
    Procedure   Clear;

    procedure   ForEach(Callback: TProtectedObjectListProc);

    Property    Count:Integer read GetCount;
    Property    Empty:Boolean read GetEmpty;
  end;

implementation

//############################################################################
//  TProtectedObjectList
//############################################################################

constructor TProtectedObjectList.Create(OwnsObjects: Boolean = True);
begin
  inherited Create;
  FObjects := TObjectList.Create(OwnsObjects);
  FLock := TCriticalSection.Create;
end;

destructor TProtectedObjectList.Destroy;
begin
  FLock.Enter;
  FObjects.Free;
  FLock.Free;
  inherited;
end;

procedure TProtectedObjectList.Clear;
begin
  FLock.Enter;
  try
    FObjects.Clear;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetOwner: TPersistent;
begin
  result := NIL;
end;

procedure TProtectedObjectList.ForEach(Callback: TProtectedObjectListProc);
var
  mItem:  TObject;
  mCancel:  Boolean;
begin
  if assigned(Callback) then
  begin
    FLock.Enter;
    try
      mCancel := False;
      for mItem in FObjects do
      begin
        Callback(mItem, mCancel);
        if mCancel then
          break;
      end;
      end;
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.Contains(Instance: TObject): Boolean;
begin
  result := false;
  if assigned(Instance) then
  begin
    FLock.Enter;
    try
      result := FObjects.Contains(Instance);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.GetCount: Integer;
begin
  FLock.Enter;
  try
    result :=FObjects.Count;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetEmpty: Boolean;
begin
  FLock.Enter;
  try
    result := FObjects.Count<1;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.Lock: TObjectList;
begin
  FLock.Enter;
  result:=FObjects;
end;

procedure TProtectedObjectList.UnLock;
begin
  FLock.Leave;
end;

//############################################################################
//  TProtectedObject
//############################################################################

constructor TProtectedObject<T>.Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FOptions := AOptions;
  FData := T.Create;
end;

destructor TProtectedObject<T>.Destroy;
begin
  FData.Free;
  FLock.Free;
  inherited;
end;

function TProtectedObject<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  FLock.Enter;
  try
    result := FOptions;
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  FLock.Enter;
  try
    FOptions := Rights;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObject<T>.Lock: T;
begin
  FLock.Enter;
  result := FData;
end;

procedure TProtectedObject<T>.Unlock;
begin
  FLock.Leave;
end;

procedure TProtectedObject<T>.Synchronize(const Entry: TProtectedObjectEntry);
begin
  if assigned(Entry) then
  begin
    FLock.Enter;
    try
      Entry(FData);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObject<T>.GetValue: T;
begin
  FLock.Enter;
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedObject.CreateFmt('%s: Read not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetValue(Value: T);
begin
  FLock.Enter;
  try
    if (lvWrite in FOptions) then
    begin
      if TObject(FData) is TPersistent then
        TPersistent(FData).Assign(TPersistent(Value))
      else
        raise EProtectedObject.CreateFmt
          ('Locked object assign failed, %s does not inherit from %s',
          [TObject(FData).ClassName, 'TPersistent']);
    end else
      raise EProtectedObject.CreateFmt('%s: Write not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

//############################################################################
//  TProtectedValue
//############################################################################

constructor TProtectedValue<T>.Create(Value: T);
begin
  Create(Value, [lvRead, lvWrite]);
end;

constructor TProtectedValue<T>.Create(Value: T; const Access: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  Create(Access);
  Synchronize(procedure (var Data: T)
    begin
      Data := Value;
    end);
end;

constructor TProtectedValue<T>.Create(const Access: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  {$IFNDEF USE_SYNCHRO}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
end;

destructor TProtectedValue<T>.Destroy;
begin
  {$IFNDEF USE_SYNCHRO}
  FLock.Free;
  {$ENDIF}
  inherited;
end;

function TProtectedValue<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  Acquire;
  try
    result := FOptions;
  finally
    Release;
  end;
end;

procedure TProtectedValue<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  Acquire;
  try
    FOptions := Rights;
  finally
    Release;
  end;
end;

{$IFNDEF USE_SYNCHRO}
procedure TProtectedValue<T>.Acquire;
begin
  FLock.Acquire;
end;

procedure TProtectedValue<T>.Release;
begin
  FLock.Release;
end;
{$ENDIF}

procedure TProtectedValue<T>.Synchronize(const Entry: TProtectedValueEntry);
begin
  if assigned(Entry) then
  begin
    Acquire;
    try
      Entry(FData);
    finally
      Release;
    end;
  end;
end;

function TProtectedValue<T>.GetValue: T;
begin
  Acquire;
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedValue.CreateFmt('%s: Read not allowed error', [classname]);
  finally
    Release;
  end;
end;

procedure TProtectedValue<T>.SetValue(Value: T);
begin
  Acquire;
  try
    if (lvWrite in FOptions) then
      FData := Value
    else
      raise EProtectedValue.CreateFmt('%s: Write not allowed error', [classname]);
  finally
    Release;
  end;
end;

end.