Archive
Smart Pascal: Support for next generation Amiga A1222
If you are between 35 and 45 years old, chances are you remember the Commodore Amiga. This was the computer to have back in the 80’s and 90’s. And if you know your history, you also know that it was way ahead of all the other systems out there. Microsoft prides itself on inventing multi-tasking, but the truth is that Amiga OS had a full multitasking, window-oriented desktop way back in 1984.
Before you think this post is about the old computers, let me just say that it’s not; I realize that most of you will immediately think retro when you hear the word Amiga – but fact is that Amiga is doing somewhat of a comeback these days.
A brand new operating system
AmigaOS was awesome for its time and really was a system from the future. In an age where the average PC was a monochrome green DOS experience and Macs were ridiculous black-and-white boxes, the Amiga was like something out of a spaceship. PCs didn’t even have a mouse, let alone any form of multimedia – and here was a powerhouse of a machine with a full windowing desktop, bright and crisp colors, and co-processors dedicated to graphics, sound and dispatching. People had never seen anything like it.
Sadly, due to some borderline insane management decisions, Commodore went bankrupt back in 1994. You would think that the Amiga just went away after that, but it’s still going to this day. Two decades after Commodore went out of business, people still write software for the platform, and some of the coolest hardware you will ever see is released for it every year. It really is an astounding computer and community. As recently as 2016, at least 5 new games were released for the platform (!)
Needless to say, the old operating system that culminated in OS 3.9 is hopelessly trapped in the past. Which is why the people who bought the rights to the system have spent more than a few years bringing it up to speed (right now it’s at version 4.1). And the result speaks for itself. It really is one of the best looking desktop systems in the world.
Brand new hardware
Say what you want about Amiga hardware, but the quality was outstanding. Still, outstanding hardware from the early 90’s can’t hope to compete with modern computers. We are used to GPU chips that spin millions of pixels around the screen per second without breaking a sweat; 24-bit sound, monstrous 3d graphics processors, multi-threading, gigabytes of ram and terabytes of storage space (phew!).
With Commodore long dead and no company officially in charge of hardware, a man called Trevor Dickinson decided that he had had enough. Back in the day everyone was waiting for Commodore to release the Amiga 5000 (the Amiga 4000 was the Commodore flagship back in the 90s; sadly that was the end of it), including Trevor. This dream of the next Amiga is something everyone that loved the platform shares to this day. Well, Trevor decided to do something about the situation. He picked up the gauntlet, sharpened his sword, and his journey towards a new Amiga began.
Without going into too much nitty-gritty here, Trevor and his colleagues set out to create a PPC based motherboard capable of running Amiga OS 4. And yes, the operating system is presently PPC based. This is due to the fact that in the 90’s both the Amiga and Apple Mac’s used the MC68k line of processors. These were retired and replaced with PPC chips. This made perfect sense at the time. Since Apple used them in their computers – the Amiga could enjoy cheaper hardware and piggyback on Apples success.
Most people today think that PPC is an obsolete chipset, but the fact is that these chips are still in production – largely because they are popular for embedded systems. The notorious heating problems are no longer as pressing as they used to be, due to changes in production and materials – and as a consequence the new Amiga uses PPC even now.
I have two monster PPC-based Macs in my basement, so I was a bit timid when I heard that OS 4.x was using PPC. I mean, the top-of-the-line PowerMac is more cooling metal than content. But thankfully that line of PPC chips is a thing of the past.
Two exciting models
There have been a ton of Commodore related attempts to get the Amiga back on the market; every five years or so someone buys the Commodore name and tries to revive the glory days of the Amiga (or so it seems). All of them have failed, largely due to internal fighting between the many owners that have rights to various bits and pieces. If there is a reason that the Amiga never managed to get back onto the market – this is it. The rights to various parts of the system were auctioned off to the highest bidder, with the operating system going one way, the chipset going another, and the name and intellectual property yet another. It has been a complete cluster fuck since that horrible day back in 1994.
Trevor on the other hand has solved this the only way a reasonable and sane human being can: by simply leaving the old hardware, the Commodore name and the old machines’ peculiarities where they belong – in the past. So what if you can’t slap Commodore on the physical machine? The new Amiga was not designed to be a 30 year old computer. It was designed from scratch to run Amiga OS 4 and to be a system based on the same principles as the original; and that is (at least to me) what really counts. I enjoyed the old games and demo scene, but the desktop and programming aspects of the Amiga were always where my heart was.
Besides, kids that grew up in the aftermath of Commodore going under don’t have a clue what the Amiga was anyhow. Nor do they have any relationship to Commodore as a brand. And considering 20 years have passed, clinging to these old names is ultimately a complete waste of time and money. So Trevor being fed up and deciding to build a whole new machine for the updated and modernized Amiga operating system – that makes perfect sense to me.
Unlike previous attempts by various individuals and groups, Trevor’s work is more tangible. In fact, you can go out right now and order the high-end PPC based monster that is the Amiga x5000. This is the most expensive model and it’s going to set you back a whopping €1800. This may seem like a steep price but bear in mind the amount of work and cost of parts – so it’s actually not that bad.
As a modern developer my computer needs have grown exponentially every year. I no longer install my development tools on my actual PC, instead I use VMWare and have all my development platforms neatly organized as virtual machines. The result is that I can no longer buy the cheap or middle-range offerings. My latest PC cost me around €2500. So all things considered €1800 is in the same range as an Amiga 4000 went for back in the day. This is a work horse that professionals use, it’s not a low-end gaming model.

The A1222 “Tabor” motherboard is a cheaper, more affordable entry machine. This is the one I’m getting to start with
Thankfully there is a cheaper model in the pipeline. I have already pre-ordered the A1222, which retails at around 400€ (give or take VAT and shipping). This has a smaller CPU, less ram (you can of course stuff more in yourself) and you have to get your own mini-atx cabinet for it. As the name hints, this is the equivalent of a bog-standard A1200, which was my all-time favorite. My old gem of a machine had a 40 megabyte harddisk, 4 megabytes of ram and an external CD-ROM. These specs are ridiculous today; even my phone has more .. well, everything really; but 20 years ago that setup was the bomb.
When people talk retro like I do now, always remember that the Amiga operating system was able to deliver a close to Windows 95 experience. That should tell you something about how fast, well written and efficient the old Amiga was.
This was a machine that delivered an impressive and modern desktop experience even by today’s standards in 512 Kb (yes you read that right, kilobytes) of working ram. Imagine what an OS based on the same principles can do with 4 gigabytes of ram, 32 bit graphics, protected and paged memory, 24 bit audio and all the perks of modern computing.
Where emulation can’t go
Those that follow my blog know that I love Amiga emulation and retro computing. So whenever I have time I whip out my Raspberry PI 3b based Amiga and enjoy my Amiga games, desktop and even do a spot of programming. We even managed to get Freepascal 3.x to compile on it, which means I can write applications with all the cool features of modern object-oriented programming at my disposal – on an emulated Amiga 4000. The mind boggles if you think about it too long.

Here showing my retro-fitted A500 PI with sexy led keyboard, overclocked to the hilt running at 4 times the speed of an original Amiga 4000 (for $35!)
Sadly you can forget running Amiga OS 4 under emulation on the PI. Sure, it works fine on Windows or Linux – but emulation is only as good as the code doing the emulating. And while the PPC emulation layer is a monumental achievement and I applaud the original author, it’s not nearly as optimized as the older 68k layer. From what I know it’s still running ad-hoc with no JIT involved (or at least not cached). This means that you need a powerful PC to really enjoy Amiga OS 4. Emulation is a bit like bootleg movies: a poor bootleg will ruin the movie completely. You can get away with a mid-range x86 PC I suppose, but you can forget about ARM or x86 embedded boards.
Perhaps you remember that I tested various embedded boards a while back? Part of my motivation in doing that (and buying HW for €300 on a whim) was to find a reasonably priced x86 or ARM single board computer that could emulate OS 4 without problems. The most expensive and capable of the boards I tested was the UP x86 board that retails at around $150 (I got the biggest model they had on offer). And yes, it did manage to run Amiga OS 4, just not in a way that made it usable or enjoyable. On my kick-ass Intel i7 based PC it emulates just fine, but again – it becomes ridiculous not to buy a real Amiga when emulation costs me the same or more. I mean, why not go for the real deal if it’s affordable?
So if a €400 Amiga is what it takes to run OS 4 properly, then I have no problem supporting Trevor’s work and get the real deal.
Freepascal and Smart
While Trevor doesn’t know me personally, I am a huge fan of his endeavour. And one of the things I want to start porting to the shiny new OS 4.x platform is of course – Smart Pascal. If you have ever heard of Turbo Pascal and Delphi (with emphasis on the latter) then you get some idea of what Smart is. Except that we compile for node.js and the browser rather than machine-code.
So before I can port that over I will have to get Chromium running on OS 4, and then link that up with Freepascal. I do enjoy C++, but it’s not even close to the productivity of object pascal, so I prefer to work in that. And since Freepascal 3.x has thankfully been ported already, that is not a problem. So with a bit of work I think Delphi developers will be in for a treat, and people new to programming will love learning object pascal.

Porting the full might of Smart Mobile Studio to the Amiga is going to take an effort, but I think it will be worth it
But naturally, not everyone is used to building from the ground up. It will take some work and probably weeks of adaptation before the full might of Smart Mobile Studio runs on the Amiga. But when it does – it will have a huge impact. Because then you can use the Amiga to create cloud servers, mobile applications for ALL platforms and much, much more!
So if being a part of using an operating system from the grass-root and up sounds exciting, why not take a peek at the A1222 or Amiga x5000?
Head over to http://www.a-eon.com/ and have a gander
Smart Pascal: Download streams
Real, binary streams have been a part of the Smart Pascal RTL for quite some time now. As a Delphi developer you probably take that for granted, but truth be told – no other JavaScript framework even comes close to our implementation. So this is unique to Smart Pascal, believe it or not.
The same can be said about the ability to allocate, move and work with memory buffers. Sure you can write similar code by hand in pure JavaScript, but the amount of code you have to write will quickly remind you why object orientation is so important.
Binary data counts
So you got streams, what of it? I hear you say. But you are missing the point here. If there is one thing JavaScript sucks at, it’s dealing with binary data. It really has no concept of bytes versus 32-bit or 64-bit integers. There is no such thing as a pointer in JavaScript. So while we have enjoyed pointers, memory allocations, being able to manipulate memory directly and using streams to abstract away our binary data for decades in Delphi — all of this is brand new under JavaScript.
And binary data counts. The moment you want to write code that does something on any substantial level, the capacity for dealing with binary data in a uniform way is imperative. If you are into HTML5 game coding you will sooner or later get in contact with map editors (or level editors) that work best in binary format. If you plan on making a sound app that runs in the cloud, again, being able to read binary files (just like we do in Delphi) is really, really important. Just stop and think for a few seconds how poor Delphi and C++ Builder would be without streams.
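To make that concrete, here is a minimal classic Delphi-style sketch of the kind of binary work I mean – packing a small header into a stream. The record layout, names and values are made up purely for illustration; the Smart RTL exposes equivalent stream and buffer classes, though the helper names differ a little.

uses
  System.Classes;

type
  // hypothetical header for a binary map file
  TMapHeader = packed record
    Magic: Cardinal;   // file signature
    Width: Word;       // map width in tiles
    Height: Word;      // map height in tiles
  end;

procedure WriteMapHeader(const Target: TStream);
var
  LHeader: TMapHeader;
begin
  LHeader.Magic  := $3150414D;  // the bytes "MAP1" in little-endian order
  LHeader.Width  := 64;
  LHeader.Height := 64;
  // one raw binary write - exactly the kind of operation plain
  // JavaScript has no natural vocabulary for
  Target.WriteBuffer(LHeader, SizeOf(LHeader));
end;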
Making data available
One developer asked me an important question earlier: how do you get data out? And he meant that quite literally. What if my Smart app is the producer of binary data? What if I use SMS to create the fancy map editor, or the waveform generator code or whatever – what then?
Indeed that is a good question, but thankfully an easy one.
Most JavaScript objects, or data objects in general, inside a browser can be exported. What this means is that the browser can tag a specific object with an ID, and then make the data available as a normal link.
For instance, if you have a block of memory like a Uint8Array and you want that data exported, you would call url.createObjectURL() and it will create a URL you can use to get at that data. Let’s have a look at the code you need first:
function BinaryStreamToURLObject(Stream: TStream): String;
var
  mBlob: THandle;
begin
  if Stream <> nil then
  begin
    Stream.Position := 0;
    var mTemp := TDatatype.BytesToTypedArray(Stream.Read(Stream.Size));
    asm
      var encdec = window.URL || window.webkitURL;
      @mBlob = new Blob([@mTemp], { type: "application/octet-binary" } );
      @result = encdec.createObjectURL(@mBlob);
      console.log(@result);
    end;
  end;
end;

procedure ForceDownloadOf(FileName: string; Stream: TStream);
var
  LARef: TControlHandle;
begin
  if Stream <> nil then
  begin
    if Stream.Size > 0 then
    begin
      // Create node
      asm
        @LARef = document.createElement('a');
      end;

      // Setup values
      LARef.style := "display: none";
      LARef.href := BinaryStreamToURLObject(Stream);
      LARef.download := Filename;

      // Add to DOM
      asm
        document.body.appendChild(@LARef);
      end;

      // Wait for the obj to appear in the DOM
      LARef.readyExecute(procedure ()
        begin
          // Invoke click on link
          LARef.click();
        end);
    end;
  end;
end;
Note #1: Notice how I use a TControlHandle in the example above. Why? Because this handle has a helper class that gives us some perks, like readyExecute(), which fires when the element is safely in the DOM and is ready to be used.
Note #2: Since the built-in browser in Smart Mobile Studio doesn’t have download functionality, nothing will happen when you run this inside the IDE. So click “Open in browser” and run your app there to see it in action.
The first function takes a stream and converts it into a blob object. The second function creates an anchor object, and then calls the click() method on that anchor. Essentially kick-starting the download. It is the exact same as you clicking on a download link, except we do it purely through code.
Let’s go through the steps
- Grab all the data from the stream
- Convert from TByteArray to a typed browser array
- Fetch the browser’s URL object
- Call createObjectURL, passing the data
- Return the internal URL for the data, which is now kept safe
- Create an anchor link object
- Make sure the anchor is invisible
- Set the URL to our blob above
- Add the anchor to the DOM
- Call the Click() method on the anchor
Voila! Not too hard, was it? 🙂
So now you can just have a button, and in its onClick event you call ForceDownloadOf() and Bob’s your uncle 🙂
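For completeness, here is a tiny usage sketch. The form, the handler name and the FDataStream field are assumptions for the example, not part of the RTL:

procedure TForm1.HandleSaveClick(Sender: TObject);
begin
  // FDataStream is assumed to be a stream the application has already
  // filled with data (a saved dataset, a generated map, etc.)
  ForceDownloadOf('leveldata.bin', FDataStream);
end;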
Here is the internal link I got after saving a TW3Dataset to a stream and pushing it through the steps above: blob:http%3A//192.168.38.102%3A8090/9a351a97-5f6d-4a43-b23b-c81b77972e21
This link is relative to the content, so it will only work for as long as your Smart app is in memory (actually, only while the blob is managed, you can remove the blob as well).
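If you want to release the blob yourself once the download has completed, the browser API for that is URL.revokeObjectURL(). A small sketch of how that could be wrapped, using the same asm technique as the code above (the procedure name is just an example):

procedure ReleaseURLObject(const ObjectUrl: string);
begin
  asm
    var encdec = window.URL || window.webkitURL;
    encdec.revokeObjectURL(@ObjectUrl);
  end;
end;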
Understanding Smart Pascal
One of the problems you get when working pro-bono on a project, is a constant lack of time. You have a fixed amount of hours you can spare, and every day you have to make decisions about where to invest those hours. The result is that Smart Mobile Studio has a wealth of technical resources and depth, but lacks the documentation you expect such a product to have. This has been and continues to be a problem.
Documentation really is a chicken and egg thing. It doesn’t start out that way, but once the product is launched, you get trapped in this boolean dynamics: “Few people buy it because it lacks documentation; You can’t afford to write documentation because few people buy it“. Considering the size of our codebase I don’t blame people for being a bit overwhelmed.
Despite our shortcomings Smart Mobile Studio is growing. It has seen slow but steady growth as opposed to explosive growth. But all products need periods of explosive growth to build up resources, so that future evolution of the product can be financed. This lack of solid documentation acts almost like a filter: only those that are used to coding in Delphi or Lazarus at a certain level, writing their own classes and components, will feel comfortable using it.
It has become a kind of elite toolkit, used only by the most advanced coders.
Trying to explain
The other day I talked to a man who simply could not wrap his head around Smart Pascal at all. Compile for JavaScript? But.. how.. How do you get classes? He looked at me with a face of disbelief. I told him that we emit a VMT (virtual method table) in JavaScript itself. That way, you get real classes, real interfaces and real inheritance. But it was like talking to a wall.
In his defence, he understood conceptually what a VMT was, no doubt from having read about it in context with Delphi; but how it really works and that the principle is fundamental to object orientation at large, was alien to him.
var TObject = {
  $ClassName: "TObject",
  $Parent: null,
  ClassName: function (s) { return s.$ClassName },
  ClassType: function (s) { return s },
  ClassParent: function (s) { return s.$Parent },
  $Init: function () {},
  Create: function (s) { return s },
  Destroy: function (s) {
    for (var prop in s)
      if (s.hasOwnProperty(prop)) delete s.prop
  },
  Destroy$: function (s) { return s.ClassType.Destroy(s) },
  Free: function (s) { if (s !== null) s.ClassType.Destroy(s) }
}
Above: In object orientation the methods are only compiled once while the instance is cloned. This is why methods in OOP languages are compiled with a secret first parameter that is the instance. Inheritance never duplicates the code inherited from ancestors.
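To show the same point in plain object pascal terms: conceptually the compiler turns every method into an ordinary routine with the instance as a hidden first parameter. The class and the flattened routine below are just an illustration of that idea, not actual compiler output:

type
  TCounter = class
  public
    FValue: integer;
    procedure Step;
  end;

procedure TCounter.Step;
begin
  FValue := FValue + 1;
end;

// What the compiler conceptually generates: one shared routine,
// with "self" passed in explicitly. Every instance re-uses this code.
procedure TCounter_Step(Self: TCounter);
begin
  Self.FValue := Self.FValue + 1;
end;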
In retrospect I have concluded that it had more to do with “saving face” than this guy not understanding. He had just spent months writing a project in JavaScript that he could complete in less than a day using Smart Pascal – so naturally, he would look the fool to admit that he just wasted a ton of company money. The easiest way to dismiss any ignorance on his part, was to push our tech into obscurity.
But what really baked my noodle was his lack of vision. He had failed completely in understanding what cloud is, where the market is going and what that will mean to both his skill-set, his job prospects and the future of software development.
It’s not his fault. If anything it’s my fault for not writing about it earlier. In my own defense I took it for granted that everyone understood this and knew what was coming. But that is unfair because the ability to get a good overview of the situation depends greatly on where you are.
JavaScript, the most important language in the world
It may be hard for many people to admit this, but it is nonetheless true. JavaScript has become the single most important language on the planet. 50% of all software written right now, whether it’s for the server or the browser, is written in JavaScript.
I knew this would happen as early as 2008, all the signs pointed to it. In 2010 Delphi was in a really bad place and I had a choice: drop Delphi and throw 20 years of hard-earned skills out the window and seek refuge in C++ or C#; or adapt object pascal to the new paradigm and try to save as much of our collective knowledge and skills as I could.
Even before I started on Smart I knew that something like node.js would appear. It was inevitable. Not because I am so clever, but because emerging new technology follows a pattern. Once it reaches critical mass – universal adoption and adaptation will happen. It follows logical steps of evolution that apply to all things, regardless of what the product or solution may be.
What is going to happen, regardless of what you feel
Ask yourself: what are the implications of program code being virtual? What are the logical steps when your code is 100% abstracted from hardware and the underlying native operating system? What are the implications when script code enjoys the speed of native code (modern JavaScript engines JIT-compile to machine code, to the point that JavaScript now runs close to on par with native code), yet can be clustered, replicated, moved and even paused?
Let me say it like this: the next generation of rapid application development won’t deliver executable files or single-platform installers. You will deliver entire eco-systems. Products that can be scaled, moved between hosts, replicated – products that run and exist in the cloud purely as virtual instances.

Norwegian developed FriendOS is just one of the cloud based operating systems in development right now. It will have a massive impact on the world
Where Delphi developers today drag and drop components on a form, future developers will drag and drop entire service stacks. You won’t drop just a single picture on a form, but connectors to international media resource managers; services that guarantee accessibility across continents, 24 hours a day, seven days a week.
You think the Chromebook is where it ends? It’s just the beginning.
Right now there are 3 cloud-based operating systems in development. All of them with support for the new, distributed software model. They allow you to write both the back-end and front-end of your program, which in the new model is regarded as a single entity or eco-system. Things like storage have been distributed for well over a decade now, and you can pipe data between Dropbox, Google drive or any host that implements the REST storage interface.
Some of the most powerful companies in the world are involved in this. Now take a wild guess what language these systems want you to use.
I’m sorry, but the way we make programs today is about to go extinct.
Understanding the new software model
As a Delphi or Lazarus developer you are used to the notion of server-side applications and client-side applications. The distinction between the two has always been clear, but that is about to change. It’s still going to be around, at least for the next decade or so, but only for legacy purposes.
To backtrack just a second: Smart introduced support for node.js applications earlier, but at a very low level. In the next update we introduce a large number of high-level classes that are going to change the way you look at node completely.
Two new project types will be introduced in the future, giving you a very important distinction. Namely:
- Cloud service
- System service
To understand these concepts, you first have to understand the software model that next generation cloud operating systems work with. Superficially it may look almost identical to the good old two-tier model, but in the new paradigm it is treated as a single, portable, scalable, cluster capable entity.
The thing about clustering and scaling is what tends to confuse traditional developers. Because scaling in a native language is hard work. First you have to write your program in such a way that it can be scaled (e.g work as a member in a group, or cluster). Secondly you have to write a gate-keeper or head that delegates connections between the members of the cluster. If you don’t do this from the very beginning it will be a costly affair to retrofit a project with the required mechanisms.
Node.js is just awesome because it can cluster your code without you having to program for that. How? Because JavaScript is virtual. So you can fire up 50, 100 or 10.000 instances of the same process and the only thing you need to worry about is the gate-keeper process. You just park the cluster in another IP range that is only accessible by the gatekeeper, and that’s pretty much it.
When a software eco-system is installed on a cloud host, the entire architecture described by the package is created. So the backend part of your code is moved to an instance dedicated for that, the front end is installed where it belongs, same with database and so on. Forget about visual controls and TComponent, because on this level your building blocks are whole services; and the code you write scales from low-level socket coding to piping terabytes of data between processes.

PM2 is a node.js process manager that gives you clustering and pooled application behavior for free out of the box. You don’t even have to tailor your code for it
Services that physically move
While global accessibility is fine and good, speed is a factor. It goes without saying that having to fetch data from Asia when you are in the US is going to be less than optimal. But this is where cloud is getting smarter.
Your services will actually physically move to a host closer to where you are. So let’s say you take a business trip from the US to Hong-Kong. The service will notice this, find a host closer to where you are, and replicate itself to that server.
This is not science-fiction; it’s already implemented in Azure, and Google’s APIs are designed to account for this behavior. It’s pretty cool if you ask me.
Is node.js really that powerful?
Let me present you with a few examples. It’s important to understand that these examples don’t mean everyone has to operate on this scale. But we feel it’s important to show people just what you can achieve and what node is capable of.
Netflix is an online video streaming service that has become a household name in a very short time. Cloud can often be a vague term, but no other service demonstrates the potential of cloud technology as well as Netflix. In 2015 Netflix had more than 69 million paying customers in roughly 60 countries. It streams on average 100 million hours of media a day.
Netflix moved from a traditional, native software model to a 100% clustered Node.js powered model in 2014. The ability for Netflix to run nearly universally on all devices, from embedded computers to Smart TV’s is largely due to their JavaScript codebase.
PayPal is a long-standing online banking and payment service that was first established in 1998. In Q4 of 2016 PayPal had 192 million registered customers world-wide. The service’s annual mobile payment volume was 66 billion US dollars in 2016. More than triple that of the previous year. Paypal moved from a traditional, native server model to Node.js back in 2015, when their entire transaction service layer was re-written from scratch.
Uber is a world-wide taxi company that is purely cloud and web-based. Unlike traditional taxi companies Uber owns no cars and doesn’t employ drivers; instead it’s a service that anyone can partake in – either as a customer or a driver. As of 2016 Uber operates in 551 cities across 60 countries. It delivers more than one million rides daily and has an estimated 10 million customers.
Uber’s server technology is based purely on Node.js and exists purely as a cloud based service. Uber has several client applications for various mobile devices, the majority of these are HTML5 applications that use Cordova Phonegap (same as Smart applications).
Understanding Smart
While the RTL and full scope of the technology has been a bit of a “black box” for many people, hopefully the ideas and concepts around it have matured enough for people to open up to it. We were a bit early with it, and without the context that is showing up now, I do understand that it can be hard to grasp the full scope of it (not to mention the potential).
With the cloud and some of its potential (and no, it’s not going away), a sense of urgency should start to set in. Native programming is not going away, but it will be marginalized to the point where it goes back to its origins: as a discipline and part of engineering.
Public software and services will move to the cloud and as such, developers will be forced to use tools and languages better suited for that line of work.
We firmly believe that object pascal is one of the best languages ever created. Smart Pascal has been adapted especially for this task, and the time-saving aspects and “edge” you get by writing object pascal over vanilla JavaScript are unquestionable. Inheritance alone is helpful, but the full onslaught of high-level features Smart brings takes it to the next level.

The benefits of writing object-oriented, class-based code are readability, order and maintainability. The benefit of a large RTL is productivity, the most important aspect of all in the world of software development.
Hopefully the importance of our work will be easier to understand and more apparent now that cloud is becoming more visible and people are picking up on the full implications of this.
The next and obvious step is moving Smart itself to the cloud, which we are planning for now. It will mean you can code and produce applications regardless of where you are. You can be in Spain, France or Oklahoma USA – all you will need is a browser, your object pascal skills and you’re good to go.
With things like “one click” hosting and instance budgets for auto-scaling, the value for both developers and investors should be fairly obvious at this point.
Starting Monday we will actively look for investors.
Sincerely
Jon Lennart Aasenden
LDef try/catch support
Now this was a pickle: namely to support try/catch constructs in LDEF on assembly level. It may sound simple, but it all depends on how exactly the data-model stores individual instructions.
Since LDef has various blocks where code can be defined, abstracting the instruction buffers had to be done. With blocks I naturally mean code sections. A procedure for instance is such a block. A procedure contains instructions and calls to other code blocks. But – it can also contain sub-blocks. Consider the following:
/* block 1 */
public void SomeProc()
{
  a = 12;
  b = 24;
  c = a + b;

  if (c >= 27)
  {
    /* Block 2 */
  } else {
    /* block 3 */
  }
}
The code above, ridiculous in its simplicity, demonstrates a fundamental principle that all compilers must support, namely executing different blocks based on some value. In this example block #2 will execute if “c” is greater than or equal to 27, and block #3 if it’s not.
This is pretty straightforward, right? Well, not quite. It all depends on how you store bytecodes in the data model. The first question you should ask is: how do we execute block #2 and not block #3? Remember that in assembly language (or bytecode) this is all one big chunk. Had this been machine code, the compiler would have to calculate the offset of block #3, and also where block #3 ends. If the condition was false, a jump to block #3 must be performed (skipping over block #2). Well, you get the idea I think.
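To make the problem concrete, here is a tiny self-contained Delphi-style toy (not LDef itself, and not necessarily how the final emitter will work): emit a conditional branch with a placeholder offset, emit the block, then go back and patch the branch once the block’s size is known. The opcodes and values are invented for the example.

uses
  System.Generics.Collections;

procedure EmitToyIfBlock(const Output: TList<Byte>);
var
  LPatchPos: integer;
begin
  Output.Add($20);              // toy opcode: "branch if condition failed"
  LPatchPos := Output.Count;    // remember where the offset byte lives
  Output.Add(0);                // placeholder offset, patched below

  Output.Add($01);              // toy instructions making up block #2
  Output.Add($02);

  // Only now do we know how far the branch must jump to skip block #2,
  // so we go back and patch the placeholder.
  Output[LPatchPos] := Byte(Output.Count - (LPatchPos + 1));
end;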
Since LDef is very low-level, I have to come up with something similar. But I also wanted a solution that made things easier. Doing in-place forward calculations etc. is not hard – boring perhaps, but not a showstopper by any means. But could I come up with a more flexible solution?
The first step was to fragment the instruction blocks. So instead of having a single list of instructions associated with a procedure or function, these can now have as many instruction lists associated with them as memory can hold. The idea is that they are all glued together into a final list when the model is emitted to disk. But the ability to organize and work with chunks of code like this is really a step up from barebone assembly.
type
  TLDefModelParamType = (
    ptRegister,   // Parameter is a register
    ptVariable,   // Parameter is a variable (index follows in bytecode)
    ptConst,      // Parameter is a constant (index follows in bytecode)
    ptValue,      // Parameter is a direct value, raw data follows in bytecode
    ptDC          // Parameter is the data-control register
  );

  TLDefModelParam = class
  strict private
    FType:  TLDefModelParamType;  // Param type
    FIndex: integer;              // index (register only!)
    FData:  string;               // data (const + variable only!)
  public
    property ParamType: TLDefModelParamType read FType write FType;
    property Index: integer read FIndex write FIndex;
    property Data: string read FData write FData;
  end;

  TLDefModelParamList = TObjectList;

  TLDefModelInstruction = class(TLDefModelSymbol)
  strict private
    FInstr:  integer;               // Index of instruction in dictionary
    FParams: TLDefModelParamList;   // Parsed parameters
  public
    property Id: integer read FInstr write FInstr;
    property Params: TLDefModelParamList read FParams;
    constructor Create(const AParent: TParserModelObject); override;
    destructor Destroy; override;
  end;

  TLDefModelInstructionIfThen = class(TLDefModelInstruction)
  strict private
    FThen: TLDefModelInstructionList;
  public
    property ThenCode: TLDefModelInstructionList read FThen;
    constructor Create(const AParent: TParserModelObject); override;
    destructor Destroy; override;
  end;

  TLDefModelInstructionIfThenElse = class(TLDefModelInstructionIfThen)
  strict private
    FElse: TLDefModelInstructionList;
  public
    property ElseCode: TLDefModelInstructionList read FElse;
    constructor Create(const AParent: TParserModelObject); override;
    destructor Destroy; override;
  end;

  TLDefModelInstructionTryCatch = class(TLDefModelInstruction)
  strict private
    FTryCode:   TLDefModelInstructionList;
    FCatchCode: TLDefModelInstructionList;
  public
    property TryCode: TLDefModelInstructionList read FTryCode;
    property CatchCode: TLDefModelInstructionList read FCatchCode;
    constructor Create(const AParent: TParserModelObject); override;
    destructor Destroy; override;
  end;

  TLDefModelInstructionList = class(TLDefModelSymbol)
  strict protected
    function GetItem(index: integer): TLDefModelInstruction;
  public
    property Count: integer read ChildGetCount;
    property Item[index: integer]: TLDefModelInstruction read GetItem;

    function Add: TLDefModelInstruction; overload;
    function Add(const NewInstance: TLDefModelInstruction): TLDefModelInstruction; overload;
    function AddIfThen: TLDefModelInstructionIfThen;
    function AddIfThenElse: TLDefModelInstructionIfThenElse;
    function AddTryExcept: TLDefModelInstructionTryCatch;
  end;

  TLDefModelByteCodeChunk = class(TLDefCollectionSymbol)
  strict protected
    function GetSegment(index: integer): TLDefModelInstructionList; virtual;
  public
    property Count: integer read ChildGetCount;
    property Segment[index: integer]: TLDefModelInstructionList read GetSegment;
    function Add: TLDefModelInstructionList;
  end;
By splitting up TLDefModelInstructionList into these parts, especially the if/then and if/then/else classes (and so on), working with conditional execution is no longer problematic. A list will always know its own size and length, so there is not really that much work involved in emitting the jump instructions and the testing.
Exceptions
Exceptions are an intricate part of the virtual machine. How to deal with them, however, is something I have thought long and hard about. I finally ended up with a system that is easy to use. The ES register will be 0 (zero) if no exception has occurred; otherwise it will contain the exception identifier.
When an exception occurs, the type and message are pushed onto the stack by the virtual machine. A catch block then has to read them out and deal with them. You can also re-throw the exception via “rethrow;” or just throw a new one via “throw”.
try {
  /* calc longs */
  move r1, count;
  mod r1, 8;
  move r2, count;
  move _longs, r1;
} catch {
  /* The ES register contains the exception state,
     but the message will be on the stack */
  pop r0;       /* get type */
  pop r1;       /* get message */
  swap r0, r1;  /* Syntax for showmessage wants text in r0 */
  syscall -rtl_showmessage;
}
Well, fun times ahead! Can’t wait to finish the emitters and get this puppy running 🙂
LDef Intermediate Language
The LDEF bytecode engine is some time away from completion, but the IL source format that the assembler reads and turns into bytecode is getting there. At the moment there are only a few tidbits left to explore, like interfaces and generics, but those will be added.
It’s a real brain teaser, because some of the stuff that makes up a compiler is not really related to code. When you think about generics you often make the mistake of thinking it is a code feature, like inheritance or virtual methods; something that the code-emitter has to deal with or the runtime engine has to account for. But generics is actually implemented higher up. It exists between the parser and the code-emitter.
Interfaces are another mirage, or technological illusion. When you work with classes and interfaces you get a sense that they are solid things, you know – you write a class and create objects, and people imagine these objects as independent, material objects in a virtual space. I tend to think of instances as molecules.
But objects are ultimately an illusion. I’m not going to cover the intricate details of object orientation here, but OOP is essentially about separating the data (fields) from the code acting on that data (members). So the only things that really exist in the memory of a computer when you create instances are buffers representing the fields of your class – combined with references and a list mapping the entrypoints of the members that make up the instance (the VMT). Compilers actually mimic object orientation by adding a secret parameter to all your methods, namely the “self” parameter. This “self” is a pointer to a record that contains all the information pertaining to that instance.
Which is really cool, because then you can create as many instances as you like – and they all use the same code. Which of course is what object orientation is all about. But there is no such thing as an independent instance floating around computer memory. That is an optical illusion of sorts.
LDEF virtual machine
Virtual machines get to have all the fun. I mean, writing a compiler that emits real machine code is in itself not that hard, but generating a scheme that makes object orientation work and that keeps track of everything is. The x86 cpu architecture may be powerful, but it’s an absolute bitch to work with. It has few registers (compared to ARM and PPC), several instruction sets, and is known to be a somewhat unfriendly place. This is the reason that compiler makers tend to stick to a golden mean of instructions. Compilers like C++ Builder and Delphi could actually generate faster and more efficient code if they knew exactly what cpu and architecture you used. But since that’s not how PCs work, and there are some 20 different cpu models on the market at any given time – with huge differences between AMD and Intel – it makes sense to use a safe collection of instructions.
LDEF is a bytecode runtime engine. And one of the perks of bytecode is that it’s very much abstracted from the whole issue of real-life assembly. Most programmers think that if you make bytecodes then you don’t have to think about low level stuff, which is cheating. That may be the case for other languages and engines, but not LDEF. In fact, the whole point of writing a new engine is that I wanted a bytecode format that was closer to machine code. This is very important, because at some point myself or someone else will write a JIT compiler for this, and if the runtime is too high-level or abstract, that is going to be very hard.

LDEF testbed
The LDEF instruction-set represents a golden mean of the average processor instruction set. I have taken instructions that most processors have, typical stuff like add, subtract, divide, modulus, multiply (and so on). Testing is where I have given LDEF some advantages, for instance when testing a list of parameters for conditions. Instead of keeping that as two instructions (test, branch [condition]) I have isolated that in a single instruction.
Let’s look at the instruction set so far:
- move
- blit – move memory block
- valloc – allocate variable
- vfree – free variable
- malloc – allocate memory
- mfree – release memory
- add
- sub
- mul
- muldiv
- div
- mod
- moddiv
- lsr – logical bit-shift right
- lsl – logical bit-shift left
- cmp – compare
- tst – test
- bne – branch not equal
- beq – branch equal
- ble – branch less
- bgt – branch greater
- jsr – jump sub routine
- jmp – jump absolute
- push – push to stack
- pop – pop from stack
- IFnb – test and branch if not equal
- IFtb – test and branch if equal
- throw – cause exception
- syscall – invoke engine specific Delphi code
One of the benefits of a virtual machine, is that you can do some magic with variables. In a real machinecode compiler, variable allocation and doing read / write operations can be very complex. But in a virtual machine you thankfully get to do something about that.
So LDEF allows you to move data seamlessly between variables and registers, which really is a massive time saver. The code standard also supports two levels of resources, global and local. The latter meaning class in this case. So there is ample room for high-level languages to store their data and implement classical features (like “resourcestring” in Delphi).
You also have support for constants; these are stored separately in the bytecode and loaded into a lookup-table associated with a class. Constants are different from resources in that they are designed to be quickly referenced. Constants have more infrastructure, in the form of lookup-tables, because they are meant to be used with the instructions. Resources are more like resource-strings in Delphi: they are stored in their own place in the bytecode and are loaded on demand. Constants are not. They are loaded into memory and are instantly available.
Having said that, the bytecode compiler also supports in-place data. Meaning that you can choose to write constant data where it is used. So instead of having to load a string constant (for example) before working with it, you can compile the string directly into the move instruction (or any other instruction). So this is actually perfectly valid:
move r0, "Hello world";
Other virtual machines, like the Java engine, force you to do this in two steps, which is slower:
lda r0, const_id;   // reference const by id
ldc r1, (r0);       // get actual value into r1
You can also assign variables directly; you don’t need to load the effective address first. So there is no extra cost involved in moving data between a register and a variable, a variable and a variable, or a resource / const and a variable. This makes life much easier:
move V[$00F6], "String assignment";
move V[$01F0], 19875.32;
move V[$196], V[$197];
Registers
Another thing that is beneficial is that LDEF has 32 registers to work with (you can actually change that, so if you need 64 that’s no problem). How you use these is up to you, but it gives a lot of room for OOP schemes. For instance, if a language has a maximum limit of 6 parameters per method, you actually don’t need to pass values on the stack at all (like Java does) – you can map parameters directly to registers.
Delphi, C++ Builder and other native solutions tend to use stack-schemes to store information. So a pointer to some list is stored as the fourth item on the stack, and some other entry as the third (and so on). Which is perfectly fine and very effective on a real CPU (you really have no choice on a real x86). In LDEF you can now use specific registers instead, which is a lot easier. What scheme you choose to use is naturally up to you – but at least the option is there.
Here are the registers LDEF presently supports
- R0 – R31: general purpose registers
- SP: stack pointer
- PC: program control register
- BC: branch control register
- ES: exception state register
Optimization
If you are wondering why a virtual machine would support so much similar stuff – constants and resources, in-place data or not – this all has to do with optimization.
For example, if your compiler decides to collect all assignment values as constants, the code size of your program might be smaller; it all depends on what your code looks like. You will naturally consolidate identical strings, integer values and so on. So even if you have a program that writes “Hello world!” 10.000 times to the console, only one “Hello world!” will be stored in the bytecode file. So constants give you smaller code, but at a cost of speed – since constants need a lookup whenever they are used.
Now, constants here are not “const” declarations. Constants on this level are actually the values you write in plain text in your program, stuff like this:
FMyText := 'Welcome to my program';
FCount := 100;
Both “welcome to my program” and “100” are constants. These are values that will never change, and compilers usually have to deal with these values in one of two ways. Both of which I have explained above (in place or not).
Source format
The LDEF intermediate format looks pretty much like C++. C is fast and easier to parse than other languages, so it made sense to pick that. But this similarity is paper thin, in that only the constructs of C++ are used. Methods contain only LDEF assembly language code, and variables are isolated in alloc sections. The virtual machine takes care of the OOP layer for you, so you don’t have to worry about that.
Here is an example to give you a feel for it:
#include <stdio>;
#include <oop>;

struct CustomType
{
  uint32 ctMagic;
  uint32 ctSize;
  uint8  ctData[4096];
}

resources
{
  dc.s #welcome, "welcome to LDef";
}

class TBaseObject: object
{
  /* class-fields */
  alloc
  {
    uint8 temp;
    uint32 counter;
    CustomType handle;
  }

  /* Parser now handles register mapping */
  public void main(r0 as count, r1 as text)
  {
    enter { }
    leave { }

    alloc
    {
      /* method variables */
      uint32 _longs;
      uint32 _singles;
    }

    /* calc longs */
    move r1, count;
    mod r1, 8;
    move r2, count;
    move _longs, r1;

    /* calc singles */
    move r3, r1;
    mul r3, 8;
    sub r2, r3;
    move _singles, r2
  }

  /* test multi push to stack */
  private void _cleanup()
  {
    push [r0, r1, r2];
  }
}
Keep in mind that the format is not 100% finished yet, there are still a few tidbits that needs to be worked out. But the general system is firmly in place.
One of the cool things is that I get to add a few missing features from my favorite language, Delphi (object pascal). If you look at the first method (main), you will notice that it has two extra sections: enter and leave.
These sections can contain code that execute directly before and after the body of the method. In pascal it would look something like this:
procedure TMyClass.SomeProc;
before
  writeln('about to execute someproc');
end;
after
  writeln('Someproc finished');
end;
begin
end;
The purpose of these, especially the before() section, is to make sure the parameters are valid. So before is used to check that the parameters are within a legal range. After is naturally used to ensure that the result (for functions) is valid.
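Using the same hypothetical pascal syntax as the example above, a guard on a setter could look like this (the class, property and ranges are made up for the illustration):

procedure TMixerChannel.SetVolume(Value: integer);
before
  // reject illegal input before the body ever runs
  if (Value < 0) or (Value > 100) then
    raise Exception.Create('Volume out of range');
end;
after
  // runs when the body has finished - handy for logging
  writeln('Volume is now ' + IntToStr(FVolume));
end;
begin
  FVolume := Value;
end;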
It also opens up for some interesting logging and debugging aspects.
More to come
This is turning into a long rant, but hopefully you found it interesting. I’ll keep you posted as the engine progresses. There is still some way to go, but we are getting there. Once LDEF is implemented the fun starts – that’s when I’ll code high-level parsers that target the engine. First stop is naturally an object pascal compiler 🙂
Parsing fun with anonymous methods

Anonymous methods can save you days, weeks and months of coding. Bug hunting in a smaller codebase is always more effective
Don’t underestimate anonymous methods
Delphi has supported anonymous methods for many years now, but like in other languages it can take a while before a feature impacts a community’s codebase at large. Delphi is like a whale: it can take a while to turn, but when it turns it comes about full force. Thankfully we have now reached the stage where generics and anonymous procedures are almost everywhere. There is still tons of classical object pascal (Delphi 7 style) online – probably hundreds of thousands of examples and snippets people can look at. But the libraries, components and new code people post now make good use of the excellent features Delphi has to offer.
Take parsing. A common method that you find in almost all parsers is ReadTo(). It’s a function that keeps on reading, char by char, until some condition is met. Usually this condition is finding a specific character, or that the parser hits a character that it doesn’t expect.
For example like this:
type
  TCharSet = set of char;   // e.g. ['A'..'Z', 'a'..'z', '_']

function ReadTo(breakers: TCharSet; var text: string): boolean;
This is more than enough to read method names, variable names and ordinary plain-text elements. The TCharSet set acts as a filter, so when ReadTo() hits a character not in that set, the procedure exits from the read loop. There is naturally more to it than this, but it’s just to give you an idea.
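A typical call with the set-based version, assuming the declaration above, would look something like this:

// LName is assumed to be a local string variable.
// Letters, underscore and digits are allowed; anything else stops the read loop.
if ReadTo(['A'..'Z', 'a'..'z', '_', '0'..'9'], LName) then
  writeln('Identifier: ' + LName);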
With anonymous methods, we can write more flexible parsing code. While sets work just fine, it is sometimes easier to do ad-hoc checking. In a complex language or file-format the amount of set-types can grow quite large in the old model. So in-place testing is very welcome, and this is where anonymous methods are super handy.
Consider this parsing function:
function TParserBuffer.ReadTo(const Validator: TParserValidator): boolean;
begin
  SetLength(Text, 0);
  if not Empty then
  begin
    if assigned(Validator) then
    begin
      while not EOF do
      begin
        if not Validator(Current) then
          break
        else
          Text := Text + Current;
        if not Next() then
          break;
      end;
      result := Text.Length > 0;
    end else
      result := false;
  end else
    result := false;
end;
With that in place we can now do ad-hoc reading and testing like this:
type
  TParserValidator = reference to function (Value: char): boolean;

LParser.ReadTo(
  function (value: char): boolean
  begin
    result := CharInSet(Value, ['A'..'Z', 'a'..'z', '_', '0'..'9']);
  end,
  LText);
Advanced parsing always results in pretty much the same techniques being evolved. You can take a programmer in the US, another programmer in Germany – isolate them with no internet access and ask them to write a language parser; perhaps with a deadline like 6 months. When you audit the code you will discover that both programmers have “discovered” and “invented” more or less the same techniques. There are many ways to do parsing, but common to the topic in general are a few fundamental principles.
One of those principles is called the “parser context”. The idea is that you isolate information about a parsing session in a separate object (to avoid one massive, complex and nearly undecipherable class). So the current parser position (offset into the text buffer), the column and row info – but more importantly the “data model” that your parser has built up, referred to as an AST (abstract syntax tree) among other names – is instantly available via the context.
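What such a context object contains varies from parser to parser, but a rough sketch (with purely illustrative names) could look like this:

uses
  System.Generics.Collections;

type
  TParserContext = class
  public
    Row, Col: integer;          // current position, for error reporting
    Offset: integer;            // absolute offset into the text buffer
    Model: TObject;             // root of the AST / data-model being built
    Stack: TStack<TObject>;     // parent objects for recursive parsing
  end;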
Let’s dig into it
I’m not going to write too much on this (since it’s a huge topic), but OK, let’s dig into the benefits of the context object.
An important part of the context object is a stack mechanism. As the parser moves through the source-code, it creates objects that are stored in the data-model. Objects that have clear relationships. For example, a local variable will naturally be connected to a procedure or function. A method will likewise be connected to a class.
The context-stack helps you deal with recursive parsing. You will have a class parser, which in turn will invoke a method parser when it finds a method, a property parser when it finds a property, or a field parser (and so on).
To simplify recursive parsing, what people do is push the current model-object onto the context stack, and then call the parser-method in question. That parser will in turn do the same if it has sub-elements that should be parsed with itself as the parent. Using a stack saves a lot of time and allows you to write very clean, effective parsing methods.
Take a Delphi class, a simple class with some fields, a couple of properties and a few methods. The process for parsing that would look something like this:
- ParseClass
- Parse ancestor
- Parse fields
- Parse field declaration
- Parse datatype
- Parse method
- Parse parameters
- Parse datatype
- Parse result datatype (function)
- Parse parameters
- Parse properties
- Parse getter
- Parse setter
Instead of having to write code to look-up the parent for each “parse job”, or even worse – having to pass along the parent via parameters or setting properties (remember, we use a context to de-couple data from process), the stack allows us to define the target or parent for each child job:
// pass the "unit" on the stack
Context.Stack.Push(self.unitmodel);
try
  ParseClass();
finally
  Context.Stack.Pop();
end;
And the same style of code just continues in all the sub-elements:
// get the unit model-object from the stack
LUnit := TUnitModel(Context.Stack.Peek);

// Add a class object to the model
LClass := LUnit.AddClassModel();

// push that onto the stack for children
Context.Stack.Push(LClass);
try
  // Naturally more work here, but through the
  // context object the parser now always knows
  // the parent where new data should be stored
  ParseFields();
  ParseMethods();
  ParseProperties();
finally
  Context.Stack.Pop();
end;
Anonymous procedures and functions now simplify this even further. Being able to pass methods as parameters means that recursive and complex jobs can be written more effectively. Where I previously had to implement, say, a “4 step” parsing job (where each parser calls the other for sub-elements) as classes, I can now do pretty much the same in 4 calls (!)
Context.Buffer.Push();
try
  Context.Buffer.ReadTo(
    function (value: char): boolean
    begin
      Context.Buffer.Push();
      try
        Context.Buffer.ReadTo(
          function (value: char): boolean
          begin
            // if bla bla bla ...
          end);
      finally
        Context.Buffer.Pop();
      end;
    end);
finally
  Context.Buffer.Pop();
end;
Notice the context.buffer.push()? Well, that’s a feature for pushing the current position and offset of the parser onto a stack. So now I can easily read ahead through quite deep structures, and then return to the very position the declaration started at. This is very helpful when dealing with distinctions. For instance, when telling the difference between a method call locally or in another object:
myproc;
FObject.someproc;
In the above source, we would hit “.”, then do an anonymous reading session to pick up “someproc;”, return back without losing the current position – and evaluate what rules should be applied to the text ahead. Dealing with parameter brackets, be they empty or with data, also becomes a recursive and simple task.
Now I have just started to play with anonymous methods in my latest parser, so there are going to be a lot of exciting techniques to explore.
Cheers guys!
Thread safe values, version 1.2
A while back (a year perhaps?) I posted an article about protected values. In short: anyone who has ever worked with multi-threading should know that sharing values between your application and thread(s) is not always as straight forward as we like to believe.
The latest versions of Delphi have made threading a lot simpler, especially async coding. But even though Delphi has gotten better on the subject, there is no denying that archetypical languages like object pascal and c++ are intrinsically low-level. It’s not like Java and C# where they put training-wheels on everything and you can just ignore the laws of physics (because your application is the digital version of a padded cell).
Note: While we are on the subject you may want to check out my delayed callback unit, which implements TW3Dispatch from Smart Mobile Studio.
Protected values version 1.2
So what are protected values? Well, in short it’s a series of classes that automatically protects a piece of data (intrinsic datatypes like “string”, “integer”, “boolean” but also class instances) and will lock and unlock when you read or change the value.

Multithreading can be a total bitch if you don’t do it right
What is new in version 1.2 is that I have extended the model with an exclusive locking mechanism: you hand the container an anonymous procedure and it gives you exclusive access to the data for the duration of your callback. You also get read/write access-rights properties and a few other improvements.
It is, like many of my units, simple yet powerful. Some may argue that they don’t need it, that it’s not atomic – but that was never really the point. For atomic values you can dig into WinAPI but I personally like to keep my code as platform independent as possible.
The purpose of this library is, simply put, to have a uniform way of creating thread safe variables and fields. Values that can be safely changed from a thread without any extra padding or care, and likewise – values that a thread can expose without read/write synchronizing, instance checking and thread callbacks.
Examples
Creating a protected container is very simple. The unit uses generics, so you can take your pick of datatype:
var
  LData: TProtectedValue<string>;
begin
  LData := TProtectedValue<string>.Create;
  RegisterWithThreads(LData);
  LData.Value := 'This is a test';
end;
In the code above the object is locked while the assignment lasts, so it will never collide with another thread accessing it. If you want exclusive access for the duration of a process or task, use the synchronize call:
var
  LData: TProtectedValue<string>;
begin
  LData := TProtectedValue<string>.Create;
  RegisterWithThreads(LData);
  LData.Synchronize(
    procedure (var Data: string)
    begin
      // Access is blocked for the duration of this proc
      for item in GlobalList do
        Data := Data + item.ToString();
    end);
end;
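The access rights mentioned earlier can also be put to work. Here is a small sketch of my own (it assumes the two-parameter constructor honors the access set you pass in – see the unit source further down): a value created with [lvRead] only will raise EProtectedValue the moment something tries to write to it:

var
  LReadOnly: TProtectedValue<integer>;
begin
  // construct with read access only
  LReadOnly := TProtectedValue<integer>.Create(42, [lvRead]);
  try
    WriteLn(LReadOnly.Value);   // allowed, lvRead is in the access set
    try
      LReadOnly.Value := 101;   // lvWrite is not set, raises EProtectedValue
    except
      on E: EProtectedValue do
        WriteLn('Write rejected: ', E.Message);
    end;
  finally
    LReadOnly.Free;
  end;
end;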
The next feature I added was better support for object lists. In the previous version you had to create the list instance yourself and then wrap it in a container. Now the container is the object list, and we use generics to denote the item type. The same locked access mechanisms are available there.
var
  LData: TProtectedObjectList<TMyItem>;
begin
  LData := TProtectedObjectList<TMyItem>.Create;
  LData.ForEach(
    procedure (Item: TObject; var Cancel: boolean)
    begin
      // Set Cancel to True to abort the loop
      // ForEach() will iterate through the items stored in
      // the objectlist. Sadly you must typecast here due to scope
    end);
end;
And of course, the normal locking mechanisms for lists:
type
  TMyListType = TObjectList<TMyItem>;
var
  LData: TProtectedObjectList<TMyItem>;
  LList: TMyListType;
begin
  LData := TProtectedObjectList<TMyItem>.Create;
  LList := TMyListType( LData.Lock );
  try
    // Process list here
  finally
    LData.Unlock();
  end;
end;
While all of this is cool, the core value of the library (at least for me) is that I can safely move values between the main process and threads without the typical insulation involved. I can use protected values as fields in my classes and publish them as read-only properties, killing two birds with one stone (both protecting the value and conforming to the multiple-readers, single-writer rule of memory access).
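To illustrate that pattern, here is a minimal sketch of my own (the TWorker class and its members are hypothetical, only TProtectedValue comes from the unit below): the thread keeps its progress in a protected field, and the outside world only ever sees a plain, read-only integer property:

// needs System.Classes (TThread) and the locktypes unit below in the uses clause
type
  TWorker = class(TThread)
  strict private
    FProgress: TProtectedValue<integer>;   // thread-safe field
    function GetProgress: integer;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    // the main thread reads this freely; writes happen only inside Execute
    property Progress: integer read GetProgress;
  end;

constructor TWorker.Create;
begin
  inherited Create(true);   // created suspended, call Start when ready
  FProgress := TProtectedValue<integer>.Create(0);
end;

destructor TWorker.Destroy;
begin
  FProgress.Free;
  inherited;
end;

function TWorker.GetProgress: integer;
begin
  // locked read via the container
  result := FProgress.Value;
end;

procedure TWorker.Execute;
var
  x: integer;
begin
  for x := 1 to 100 do
  begin
    // .. do a slice of the actual work here ..
    FProgress.Value := x;   // locked write, safe to read from the main thread
  end;
end;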
The unit is tried and tested. It is presently used in 3 commercial products, so you can put a little faith in it. But as always, if you don’t need it then leave it alone.
If you like it, enjoy 🙂
The code
unit hex.log.locktypes;

interface

uses
  System.SysUtils,
  System.SyncObjs,
  System.Classes,
  System.Generics.Collections;

{$DEFINE USE_SYNCHRO}

type

  TProtectedValueAccessRights = set of (lvRead, lvWrite);

  EProtectedValue  = class(Exception);
  EProtectedObject = class(Exception);

  (* Thread safe intrinsic datatype container.
     When sharing values between processes, use this class
     to make read/write access safe and protected. *)
  {$IFDEF USE_SYNCHRO}
  TProtectedValue<T> = class(TCriticalSection)
  {$ELSE}
  TProtectedValue<T> = class(TObject)
  {$ENDIF}
  strict private
    {$IFNDEF USE_SYNCHRO}
    FLock:    TCriticalSection;
    {$ENDIF}
    FData:    T;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function  GetValue: T; virtual;
    procedure SetValue(Value: T); virtual;
    function  GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      TProtectedValueEntry = reference to procedure (var Data: T);
  public
    constructor Create(Value: T); overload; virtual;
    constructor Create(Value: T; const Access: TProtectedValueAccessRights = [lvRead, lvWrite]); overload; virtual;
    constructor Create(const Access: TProtectedValueAccessRights = [lvRead, lvWrite]); overload; virtual;
    destructor  Destroy; override;

    {$IFNDEF USE_SYNCHRO}
    procedure Acquire;
    procedure Release;
    {$ENDIF}

    procedure Synchronize(const Entry: TProtectedValueEntry);

    property AccessRights: TProtectedValueAccessRights read GetAccessRights;
    property Value: T read GetValue write SetValue;
  end;

  (* Thread safe object container.

     NOTE #1: This object container **CREATES** the instance
              and maintains it! Use Synchronize() to execute a
              protected block of code with access to the object.

     NOTE #2: SetValue() does not overwrite the object reference,
              but attempts to perform TPersistent.Assign(). If the
              instance does not inherit from TPersistent an
              exception is thrown. *)
  TProtectedObject<T: class, constructor> = class(TObject)
  strict private
    FData:    T;
    FLock:    TCriticalSection;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function  GetValue: T; virtual;
    procedure SetValue(Value: T); virtual;
    function  GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      TProtectedObjectEntry = reference to procedure (const Data: T);
  public
    property Value: T read GetValue write SetValue;
    property AccessRights: TProtectedValueAccessRights read GetAccessRights;

    function  Lock: T;
    procedure Unlock;
    procedure Synchronize(const Entry: TProtectedObjectEntry);

    constructor Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]); virtual;
    destructor  Destroy; override;
  end;

  (* TProtectedObjectList:
     This is a thread-safe object list implementation.
     It works more or less like TThreadList, except it
     deals with objects *)
  TProtectedObjectList = class(TInterfacedPersistent)
  strict private
    FObjects: TObjectList<TObject>;
    FLock:    TCriticalSection;
  strict protected
    function GetEmpty: boolean; virtual;
    function GetCount: integer; virtual;

    (* QueryObject proxy: TInterfacedPersistent allows us to act
       as a proxy for QueryInterface/GetInterface. Override and
       provide another child instance here to expose interfaces
       from that instead *)
    function GetOwner: TPersistent; override;
  public
    type
      TProtectedObjectListProc = reference to procedure (Item: TObject; var Cancel: boolean);
  public
    constructor Create(OwnsObjects: boolean = true); virtual;
    destructor  Destroy; override;

    function  Contains(Instance: TObject): boolean; virtual;
    function  Lock: TObjectList<TObject>; virtual;
    procedure Unlock;
    procedure Clear;
    procedure ForEach(Callback: TProtectedObjectListProc);

    property Count: integer read GetCount;
    property Empty: boolean read GetEmpty;
  end;

implementation

//############################################################################
// TProtectedObjectList
//############################################################################

constructor TProtectedObjectList.Create(OwnsObjects: boolean = true);
begin
  inherited Create;
  FObjects := TObjectList<TObject>.Create(OwnsObjects);
  FLock := TCriticalSection.Create;
end;

destructor TProtectedObjectList.Destroy;
begin
  FLock.Enter;
  FObjects.Free;
  FLock.Free;
  inherited;
end;

procedure TProtectedObjectList.Clear;
begin
  FLock.Enter;
  try
    FObjects.Clear;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetOwner: TPersistent;
begin
  result := nil;
end;

procedure TProtectedObjectList.ForEach(Callback: TProtectedObjectListProc);
var
  mItem:   TObject;
  mCancel: boolean;
begin
  if assigned(Callback) then
  begin
    FLock.Enter;
    try
      mCancel := false;
      for mItem in FObjects do
      begin
        Callback(mItem, mCancel);
        if mCancel then
          break;
      end;
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.Contains(Instance: TObject): boolean;
begin
  result := false;
  if assigned(Instance) then
  begin
    FLock.Enter;
    try
      result := FObjects.Contains(Instance);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.GetCount: integer;
begin
  FLock.Enter;
  try
    result := FObjects.Count;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetEmpty: boolean;
begin
  FLock.Enter;
  try
    result := FObjects.Count < 1;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.Lock: TObjectList<TObject>;
begin
  FLock.Enter;
  result := FObjects;
end;

procedure TProtectedObjectList.Unlock;
begin
  FLock.Leave;
end;

//############################################################################
// TProtectedObject
//############################################################################

constructor TProtectedObject<T>.Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FOptions := AOptions;
  FData := T.Create;
end;

destructor TProtectedObject<T>.Destroy;
begin
  FData.Free;
  FLock.Free;
  inherited;
end;

function TProtectedObject<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  FLock.Enter;
  try
    result := FOptions;
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  FLock.Enter;
  try
    FOptions := Rights;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObject<T>.Lock: T;
begin
  FLock.Enter;
  result := FData;
end;

procedure TProtectedObject<T>.Unlock;
begin
  FLock.Leave;
end;

procedure TProtectedObject<T>.Synchronize(const Entry: TProtectedObjectEntry);
begin
  if assigned(Entry) then
  begin
    FLock.Enter;
    try
      Entry(FData);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObject<T>.GetValue: T;
begin
  FLock.Enter;
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedObject.CreateFmt('%s: Read not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetValue(Value: T);
begin
  FLock.Enter;
  try
    if (lvWrite in FOptions) then
    begin
      if (TObject(FData) is TPersistent)
      or (TObject(FData).InheritsFrom(TPersistent)) then
        TPersistent(FData).Assign(TPersistent(Value))
      else
        raise EProtectedObject.CreateFmt(
          'Locked object assign failed, %s does not inherit from %s',
          [TObject(FData).ClassName, 'TPersistent']);
    end else
      raise EProtectedObject.CreateFmt('%s: Write not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

//############################################################################
// TProtectedValue
//############################################################################

constructor TProtectedValue<T>.Create(Value: T);
begin
  // delegate to the two-parameter constructor so the initial value is stored
  Create(Value, [lvRead, lvWrite]);
end;

constructor TProtectedValue<T>.Create(Value: T; const Access: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  Create(Access);
  Synchronize(
    procedure (var Data: T)
    begin
      Data := Value;
    end);
end;

constructor TProtectedValue<T>.Create(const Access: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  {$IFNDEF USE_SYNCHRO}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
end;

destructor TProtectedValue<T>.Destroy;
begin
  {$IFNDEF USE_SYNCHRO}
  FLock.Free;
  {$ENDIF}
  inherited;
end;

function TProtectedValue<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  Acquire;
  try
    result := FOptions;
  finally
    Release;
  end;
end;

procedure TProtectedValue<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  Acquire;
  try
    FOptions := Rights;
  finally
    Release;
  end;
end;

{$IFNDEF USE_SYNCHRO}
procedure TProtectedValue<T>.Acquire;
begin
  FLock.Acquire;
end;

procedure TProtectedValue<T>.Release;
begin
  FLock.Release;
end;
{$ENDIF}

procedure TProtectedValue<T>.Synchronize(const Entry: TProtectedValueEntry);
begin
  if assigned(Entry) then
  begin
    Acquire;
    try
      Entry(FData);
    finally
      Release;
    end;
  end;
end;

function TProtectedValue<T>.GetValue: T;
begin
  Acquire;
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedValue.CreateFmt('%s: Read not allowed error', [classname]);
  finally
    Release;
  end;
end;

procedure TProtectedValue<T>.SetValue(Value: T);
begin
  Acquire;
  try
    if (lvWrite in FOptions) then
      FData := Value
    else
      raise EProtectedValue.CreateFmt('%s: Write not allowed error', [classname]);
  finally
    Release;
  end;
end;

end.