Archive
Vector Containers For Delphi and FPC
Edit: Version 1.0.1 has been released, with a ton of powerful features. Read about it here and grab your fork: https://jonlennartaasenden.wordpress.com/2020/04/13/qtx-framework-for-delphi-and-fpc-is-available-on-bitbucket/
If you have looked at C++ and envied its std::vector classes, wishing Delphi had the same, or wanted to access untyped memory through a typed view (essentially turning a buffer into an array of <T>), then I have some good news for you!
Vector containers, unified storage model and typed views are just some of the highlights of my vector-library. I did an article on the subject at the Embarcadero community website, so head over and read up on how you can enjoy these features in your Delphi application!
I also added FreePascal support, so that the library can be used with TMS Web Framework.

Head over to the Embarcadero Community website to read the full article
Hydra, what’s the big deal anyway?
RemObjects Hydra is a product I have used for years in concert with Delphi, and like most developers that come into contact with RemObjects products – once the full scope of the components hits you, you never want to go back to not using Hydra in your applications.
Note: It’s easy to dismiss Hydra as a “Delphi product”, but Hydra for .Net and Java does the exact same thing, namely let you mix and match modules from different languages in your programs. So if you are a C# developer looking for ways to incorporate Java, Delphi, Elements or Freepascal components in your application, then keep reading.
But let’s start with what Hydra can do for Delphi developers.
What is Hydra anyways?
Hydra is a component package for Delphi, Freepascal, .Net and Java that takes plugins to a whole new level. Now bear with me for a second, because these plugins are in a completely different league from anything you have used in the past.
In short, Hydra allows you to wrap code and components from other languages, and use them from Delphi or Lazarus. There are thousands of really amazing components for the .Net and Java platforms, and Hydra allows you to compile those into modules (or “plugins” if you prefer that); modules that can then be used in your applications as if they were native components.

Hydra, here using a C# component in a Delphi application
But it doesn’t stop there; you can also mix VCL and FMX modules in the same application. This is extremely powerful since it offers a clear path to modernizing your codebase gradually rather than doing a time consuming and costly re-write.
So if you want to move your aging VCL codebase to Firemonkey, but the cost of having to re-write all your forms and business logic for FMX would break your budget – that’s where Hydra gives you a second option: you can continue to use your VCL code from FMX and refactor the application at your own pace, with minimal financial impact.
The best of all worlds
Not long ago RemObjects added support for Lazarus (Freepascal) to the mix, which once again opens a whole new ecosystem that Delphi, C# and Java developers can benefit from. Delphi has a lot of really cool components, but Lazarus has components that are not always available for Delphi. There are some really good developers in the Freepascal community, and you will find hundreds of components and classes (if not thousands) that are open-source; for example, Lazarus has a branch of Synedit that is much more evolved and polished than the fork available for Delphi. And with Hydra you can compile that into a module / plugin and use it in your Delphi applications.
This is also true for Java and C# developers. Some of the components available for native languages might not have similar functionality in the .Net world, and by using Hydra you can tap into the wealth that native languages have to offer.
As a Delphi or Freepascal developer, perhaps you have seen some of the fancy grids C# and Java coders enjoy? Developer Express has some of the coolest components available for any platform, but their focus these days is more on .Net than on Delphi. They do maintain the control packages they have, but compared to the amount of development done for C#, their Delphi offerings lag far behind. So with Hydra you can tap into the .Net side of things and use the latest components and libraries in your Delphi applications.
Financial savings
One of the coolest features of Hydra is that you can use it across Delphi versions. This has helped me soften the price-tag of updating to the latest Delphi.
It’s easy to forget that whenever you update Delphi, you also need to update the components you have bought. This was one of the reasons I was reluctant to upgrade my Delphi license until Embarcadero released Delphi 10.2: I had thousands of dollars invested in components, and updating all those licenses would cost a small fortune.
So to get around this, I put the components into a Hydra module and compiled that using my older Delphi. And then I simply used those modules from my new Delphi installation. This way I was able to cut costs by thousands of dollars and still enjoy the latest Delphi.

Using Firemonkey controls under VCL is easy with Hydra
A couple of years back I also took the time to wrap a ton of older components that work fine but are no longer maintained or sold. I used an older version of Delphi to get these components into a Hydra module – and I can now use those with Delphi 10.3 (!). In my case it was a component set for working closely with Active Directory that I have used in a customer’s project (and which is much faster than going the route via SQL). The company that made these no longer exists, and I have no source code for the components.
The only way I could have used these without Hydra would be to compile them into a .dll file and painstakingly export every single method (or use COM+ to cross the 32-bit / 64-bit barrier), which would have taken me a week since we are talking about a large body of quality code. With Hydra I was able to wrap the whole thing in less than an hour.
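To illustrate why the manual .dll route is so tedious, here is a minimal sketch of what a flat export layer looks like in Delphi. The library, routine and parameter names below are purely hypothetical; the point is that every call has to be exported by hand and reduced to simple types that survive the DLL boundary.

library ADWrapper;

// Hypothetical flat exports: every method you want to reach from the
// host application must be exposed manually with C-friendly types.

function ConnectToDirectory(const Host: PAnsiChar): Integer; stdcall;
begin
  // ...call into the wrapped component here and return a session id...
  Result := 0;
end;

function LookupUser(SessionId: Integer; const Name: PAnsiChar;
  Buffer: PAnsiChar; BufferLen: Integer): Integer; stdcall;
begin
  // ...copy the lookup result into the caller-supplied buffer...
  Result := 0;
end;

exports
  ConnectToDirectory,
  LookupUser;

begin
end.

Multiply that by a few hundred methods, and the week-long estimate above starts to look optimistic.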
I’m not advocating that people stop updating their components. But I am very thankful for the opportunity to delay having to update my entire component stack just to enjoy a modern version of Delphi.
Hydra gives me that opportunity, which means I can upgrade when my wallet allows it.
Building better applications
There is also another side to Hydra, namely that it allows you to design applications in a modular way. If you have the luxury of starting a brand new project and using Hydra from day one, you can isolate each part of your application as a module – and avoid the trap of monolithic applications.

Hydra for .Net allows you to use Delphi, Java and FPC modules under C#
This way of working has great impact on how you maintain your software, and consequently how you issue hotfixes and updates. If you have isolated each key part of your application as separate modules, you don’t need to ship a full build every time.
This also safeguards you from having all your eggs in one basket. If you have isolated each form (for example) as a separate module, there is nothing stopping you from rewriting some of these forms in another language – or crossing the VCL and FMX barrier. You have to admit that being able to use the latest components from Developer Express is pretty cool. There is not a shadow of a doubt that Developer Express makes the best damn components around for any platform. There are many grids for Delphi, but they can’t hold a candle to the latest and greatest from Developer Express.
Why can’t I just use packages?
If you are thinking “hey, this sounds exactly like packages, why should I buy Hydra when packages do the exact same thing?” – well, that’s actually not how packages work in Delphi.
Delphi packages are cool, but they are also severely limited. One of the reasons you have to update your components whenever you buy a newer version of Delphi, is because packages are not backwards compatible.

Delphi packages are great, but severely limited
A Delphi package must be compiled with the same RTL as the host (your program), and version information and RTTI must match. This is because packages use the same RTL and more importantly, the same memory manager.
Hydra modules are not packages. They are clean and lean library files (*.dll files) that include whatever RTL you compiled them with. In other words, you can safely load a Hydra module compiled with Delphi 7 into a Delphi 10.3 application without having to re-compile.
Once you start to work with Hydra, you gradually build up modules of functionality that you can recycle in the future. In many ways Hydra is a whole new take on components and RAD. This is how Delphi packages and libraries should have been.
I don’t mean to say anything bad about Delphi, which is a system I love very much; but having to update your entire component stack just to use the latest Delphi is sadly one of the factors that have led developers to abandon the platform. If you have USD 10,000 in dependencies, having to pay that on top of the Delphi upgrade can be difficult to justify; especially when comparing with other languages and ecosystems.
For me, Hydra has been a tremendous boon for Delphi. It has allowed me to keep current with Delphi and all its many new features, without losing the money I have already invested in component packages.
If you are looking for something to bring your product to the next level, then I urge you to spend a few hours with Hydra. The documentation is exceptional, the features and benefits are outstanding — and you will wonder how you ever managed to work without them.
External resources
Disclaimer: I am not a salesman by any stretch of the imagination. I realize that promoting a product made by the company you work for might come across as a sales pitch; but that’s just it: I started to work for RemObjects for a reason. And that reason is that I have used their products since they came on the market. I have worked with these components long before I started working at RemObjects.
BTree for Delphi
A few weeks back I posted an article on the RemObjects blog about universal code, and how, with a little bit of care, you can write code that compiles easily with Oxygene, Delphi and Freepascal alike – with emphasis on Oxygene.
The example I used was a BTree class that I originally ported from Delphi to Smart Pascal, and then finally to Oxygene to run under WebAssembly.
Long story short I was asked if I could port the code back to Delphi in its more or less universal form. Naturally there are small differences here and there, but nothing special that distinctly separates the Delphi version from Oxygene or Smart Pascal.
Why this version?
If you google BTree and Delphi you will find loads of implementations. They all operate more or less identically, using records and pointers for optimal speed. I decided to base my version on classes for convenience, but it shouldn’t be difficult to revert that to records if you absolutely need it.
What I like about this BTree implementation is that it’s very functional. It’s easy to traverse the nodes using the ForEach() method, and you can add items using a number as an identifier – but it also supports string identifiers.
I also changed the typical data reference. The data each node represents is usually a pointer; I changed this to a variant to make it more functional.
Well, here is the Delphi version as promised. Happy to help.
unit btree;

interface

uses
  System.Generics.Collections,
  System.SysUtils,
  System.Classes;

type

  // BTree leaf object
  TQTXBTreeNode = class(TObject)
  public
    Identifier: integer;
    Data:       variant;
    Left:       TQTXBTreeNode;
    Right:      TQTXBTreeNode;
  end;

  TQTXBTreeProcessCB = reference to procedure (const Node: TQTXBTreeNode; var Cancel: boolean);

  EBTreeError = class(Exception);

  TQTXBTree = class(TObject)
  private
    FRoot:    TQTXBTreeNode;
    FCurrent: TQTXBTreeNode;
  protected
    function  GetEmpty: boolean; virtual;
    function  GetPackedNodes: TList<TQTXBTreeNode>;
  public
    property  Root: TQTXBTreeNode read FRoot;
    property  Empty: boolean read GetEmpty;

    function  Add(const Ident: integer; const Data: variant): TQTXBTreeNode; overload; virtual;
    function  Add(const Ident: string; const Data: variant): TQTXBTreeNode; overload; virtual;

    function  Contains(const Ident: integer): boolean; overload; virtual;
    function  Contains(const Ident: string): boolean; overload; virtual;

    function  Remove(const Ident: integer): boolean; overload; virtual;
    function  Remove(const Ident: string): boolean; overload; virtual;

    function  Read(const Ident: integer): variant; overload; virtual;
    function  Read(const Ident: string): variant; overload; virtual;

    procedure Write(const Ident: string; const NewData: variant); overload; virtual;
    procedure Write(const Ident: integer; const NewData: variant); overload; virtual;

    procedure Clear; overload; virtual;
    procedure Clear(const Process: TQTXBTreeProcessCB); overload; virtual;

    function  ToDataArray: TList<variant>;
    function  Count: integer;

    procedure ForEach(const Process: TQTXBTreeProcessCB);

    destructor Destroy; override;
  end;

implementation

//#############################################################################
// TQTXBTree
//#############################################################################

destructor TQTXBTree.Destroy;
begin
  if FRoot <> nil then
    Clear();
  inherited;
end;

procedure TQTXBTree.Clear;
var
  lTemp: TList<TQTXBTreeNode>;
  x: integer;
begin
  if FRoot <> nil then
  begin
    // pack all nodes to a linear list
    lTemp := GetPackedNodes();
    try
      // release each node
      for x := 0 to lTemp.Count-1 do
        lTemp[x].Free;
    finally
      // dispose of list
      lTemp.Free;
      // reset pointers
      FCurrent := nil;
      FRoot := nil;
    end;
  end;
end;

procedure TQTXBTree.Clear(const Process: TQTXBTreeProcessCB);
begin
  ForEach(Process);
  Clear();
end;

function TQTXBTree.GetPackedNodes: TList<TQTXBTreeNode>;
var
  LData: TList<TQTXBTreeNode>;
begin
  LData := TList<TQTXBTreeNode>.Create();
  ForEach(
    procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      LData.Add(Node);
      Cancel := false;
    end);
  result := LData;
end;

function TQTXBTree.GetEmpty: boolean;
begin
  result := FRoot = nil;
end;

function TQTXBTree.Count: integer;
var
  LCount: integer;
begin
  LCount := 0;
  ForEach(
    procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      inc(LCount);
      Cancel := false;
    end);
  result := LCount;
end;

function TQTXBTree.ToDataArray: TList<variant>;
var
  Data: TList<variant>;
begin
  Data := TList<variant>.Create();
  ForEach(
    procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      Data.Add(Node.Data);
      Cancel := false;
    end);
  result := Data;
end;

function TQTXBTree.Add(const Ident: string; const Data: variant): TQTXBTreeNode;
begin
  result := Add(Ident.GetHashCode(), Data);
end;

function TQTXBTree.Add(const Ident: integer; const Data: variant): TQTXBTreeNode;
var
  LNode: TQTXBTreeNode;
begin
  LNode := TQTXBTreeNode.Create();
  LNode.Identifier := Ident;
  LNode.Data := Data;

  if FRoot = nil then
    FRoot := LNode;

  FCurrent := FRoot;

  while true do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      if (FCurrent.Left = nil) then
      begin
        FCurrent.Left := LNode;
        break;
      end else
      FCurrent := FCurrent.Left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      if (FCurrent.Right = nil) then
      begin
        FCurrent.Right := LNode;
        break;
      end else
      FCurrent := FCurrent.Right;
    end else
    break;
  end;

  result := LNode;
end;

function TQTXBTree.Read(const Ident: string): variant;
begin
  result := Read(Ident.GetHashCode());
end;

function TQTXBTree.Read(const Ident: integer): variant;
begin
  FCurrent := FRoot;
  while FCurrent <> nil do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      result := FCurrent.Data;
      break;
    end;
  end;
end;

procedure TQTXBTree.Write(const Ident: string; const NewData: variant);
begin
  Write(Ident.GetHashCode(), NewData);
end;

procedure TQTXBTree.Write(const Ident: integer; const NewData: variant);
begin
  FCurrent := FRoot;
  while (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      FCurrent.Data := NewData;
      break;
    end;
  end;
end;

function TQTXBTree.Contains(const Ident: string): boolean;
begin
  result := Contains(Ident.GetHashCode());
end;

function TQTXBTree.Contains(const Ident: integer): boolean;
begin
  result := false;
  if FRoot <> nil then
  begin
    FCurrent := FRoot;
    while (not result) and (FCurrent <> nil) do
    begin
      if (Ident < FCurrent.Identifier) then
        FCurrent := FCurrent.Left
      else
      if (Ident > FCurrent.Identifier) then
        FCurrent := FCurrent.Right
      else
      begin
        result := true;
        break;
      end;
    end;
  end;
end;

function TQTXBTree.Remove(const Ident: string): boolean;
begin
  result := Remove(Ident.GetHashCode());
end;

function TQTXBTree.Remove(const Ident: integer): boolean;
var
  LFound: boolean;
  LParent: TQTXBTreeNode;
  LReplacement, LReplacementParent: TQTXBTreeNode;
  LChildCount: integer;
begin
  FCurrent := FRoot;
  LFound := false;
  LParent := nil;
  LReplacement := nil;
  LReplacementParent := nil;

  while (not LFound) and (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.Left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.Right;
    end else
    LFound := true;

    if LFound then
    begin
      LChildCount := 0;
      if (FCurrent.Left <> nil) then
        inc(LChildCount);
      if (FCurrent.Right <> nil) then
        inc(LChildCount);

      if FCurrent = FRoot then
      begin
        case LChildCount of
        0:  FRoot := nil;
        1:  begin
              if FCurrent.Right = nil then
                FRoot := FCurrent.Left
              else
                FRoot := FCurrent.Right;
            end;
        2:  begin
              LReplacement := FRoot.Left;
              while (LReplacement.Right <> nil) do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.Right;
              end;

              if (LReplacementParent <> nil) then
              begin
                LReplacementParent.Right := LReplacement.Left;
                LReplacement.Right := FRoot.Right;
                LReplacement.Left := FRoot.Left;
              end else
              LReplacement.Right := FRoot.Right;

              FRoot := LReplacement;
            end;
        end;
      end else
      begin
        case LChildCount of
        0:  if (FCurrent.Identifier < LParent.Identifier) then
              LParent.Left := nil
            else
              LParent.Right := nil;

        1:  if (FCurrent.Identifier < LParent.Identifier) then
            begin
              if (FCurrent.Left = nil) then
                LParent.Left := FCurrent.Right
              else
                LParent.Left := FCurrent.Left;
            end else
            begin
              if (FCurrent.Left = nil) then
                LParent.Right := FCurrent.Right
              else
                LParent.Right := FCurrent.Left;
            end;

        2:  begin
              LReplacement := FCurrent.Left;
              LReplacementParent := FCurrent;
              while LReplacement.Right <> nil do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.Right;
              end;

              LReplacementParent.Right := LReplacement.Left;
              LReplacement.Right := FCurrent.Right;
              LReplacement.Left := FCurrent.Left;

              if (FCurrent.Identifier < LParent.Identifier) then
                LParent.Left := LReplacement
              else
                LParent.Right := LReplacement;
            end;
        end;
      end;
    end;
  end;

  result := LFound;
end;

procedure TQTXBTree.ForEach(const Process: TQTXBTreeProcessCB);

  function ProcessNode(const Node: TQTXBTreeNode): boolean;
  begin
    result := false;
    if Node <> nil then
    begin
      if Node.Left <> nil then
      begin
        result := ProcessNode(Node.Left);
        if result then
          exit;
      end;

      Process(Node, result);
      if result then
        exit;

      if (Node.Right <> nil) then
      begin
        result := ProcessNode(Node.Right);
        if result then
          exit;
      end;
    end;
  end;

begin
  ProcessNode(FRoot);
end;

end.
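For completeness, here is a small usage sketch based on the interface above; the keys and values are just examples, and the writeln calls assume a console application.

var
  lTree: TQTXBTree;
begin
  lTree := TQTXBTree.Create;
  try
    // Items can be keyed by number or by string (strings are hashed)
    lTree.Add(100, 'first value');
    lTree.Add('customer-42', 3.14);

    writeln(string(lTree.Read(100)));        // -> first value
    writeln(lTree.Contains('customer-42'));  // -> TRUE

    // Visit every node; set Cancel to true to stop early
    lTree.ForEach(
      procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
      begin
        writeln(Node.Identifier, ' = ', string(Node.Data));
        Cancel := false;
      end);
  finally
    lTree.Free;
  end;
end;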
Two new groups in the Developer family
Delphi Developer is a group on Facebook that has been going strong for 12+ years. It was one of the first groups on Facebook, created the same week that Facebook allowed groups. With that group well established, it’s time to expand and clean up the feed.
Last month I introduced a new group, RemObjects Developer, which is a group for developers that use RemObjects components, like the Remoting SDK, Data Abstract and/or Hydra – but more in particular, developers using Oxygene, C#, Swift, Java or Go via Elements (RemObjects compiler toolchain).
Two new groups
To further simplify syndication, and to clean up the feeds (which so far have been a potpourri of topics, dialects and products), an additional two groups are now in place:
Obviously there will be some overlap. Since FPC and Delphi have much in common and are for the most part compatible, some news will be shared between those groups. But all in all this is about cleaning up a newsfeed that has so far been a mix and match of everything.

Simple overview of the groups
Node.js Developer is not meant to be purely about vanilla JavaScript. Node.js is ultimately a JavaScript runtime-engine. Which means you can use it to run or host WebAssembly libraries (as produced by Oxygene), or generate code via DWScript or Freepascal. You can think of it as a service-host if you like.
So if you are writing WebAssembly applications using Elements, then the node.js group will no doubt be interesting too. Same goes for DWScript users, Smart Pascal users and Freepascal users – providing web tech is what they like.
What is this Quartex Components?
It’s easier to manage multiple groups if you attach them to a parent page. So if you wonder why all the groups say “by Quartex Components”, that is just a top-level page that helps me deal with syndication. For some reason Facebook’s API only works for pages, not groups, so it’s impossible to auto-import news (for example) without a page.
The name, “Quartex Components” is ultimately the name of my personal company. I used to produce security components for Delphi, but decided to open-source those for the community.
So Quartex Components is just an organizational element.
Generic protect for FPC/Lazarus
Freepascal is not frequently mentioned on my blog. I have written about it from time to time, not always in a positive light though. Just to be clear, FPC (the compiler) is fantastic; it was one particular fork of Lazarus I had issues with, involving a license violation.
On the whole, Freepascal and Lazarus are capable of great things. There are a few quirks here and there (if not oddities) that prevent mass adoption (the excessive use of include-files to “fake” partial classes being one), but as object-pascal compilers go, Freepascal is a battle-hardened, production-ready system.
Linux in particular is where I have used Freepascal. In 2015 Hydro Oil wanted to move their back-end from Windows to Linux, and I spent a few months converting Windows-only services into Linux daemons.
Today I find myself converting parts of the toolkit I came up with to Oxygene, but that’s a post for another day.
Generic protect
If you work a lot with multithreaded code, the unit I’m posting here might come in handy. Long story short: sharing composite objects between threads and the main process always means extra scaffolding. You have to make sure you don’t access the list (or its elements) at the same time as another thread, for example. To ensure this you can either use a critical-section, or you can deliver the data with a synchronized call. This is more or less universal for all languages, no matter if you are using Oxygene, C/C++, C# or Delphi.
When this unit came into being, I was writing quite elaborate classes with a lot of lists. These classes could not share a common ancestor, or I could have gotten away with just one locking mechanism. Instead I had to implement the same boilerplate code over and over again.
The unit below makes insulating (or protecting) classes easier. It essentially envelops the class you feed it and hands you back a proxy object. Whenever you want to access your instance, you have to unlock it first or use a synchronizer (see below).
Works in both Freepascal and Delphi
The unit works for both Delphi and Freepascal, but there is one little difference: for some reason Freepascal does not support anonymous procedures, so we compensate and use inline procedures instead. While not a huge deal, I really hope the FPC team adds anonymous procedures; it makes life a lot easier for generics-based code, and async programming without anonymous procedures is highly impractical too.
So if you are in Delphi you can write:
var
  lValue: TProtectedValue<integer>;

lValue.Synchronize( procedure (var Value: integer)
  begin
    Value := Value * 12;
  end);
But under Freepascal you must resort to:
var
  lValue: TProtectedValue<integer>;

  procedure _UpdateValue(var Data: integer);
  begin
    Data := Data * 12;
  end;

begin
  lValue.Synchronize(@_UpdateValue);
end;
On small examples like these the benefit of this style of coding might be lost; but if you suddenly have 40-50 lists that need to be shared between 100-200 active threads, it will be a time saver!
You can also use it on intrinsic datatypes:
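A minimal sketch of what that can look like, using the TProtectedValue<T> container from the unit below (the counter is just an example):

var
  lCounter: TProtectedValue<integer>;
begin
  lCounter := TProtectedValue<integer>.Create(0);
  try
    // Any thread can bump the value atomically
    lCounter.Synchronize(
      procedure (var Data: integer)
      begin
        inc(Data);
      end);
    writeln(lCounter.Value);
  finally
    lCounter.Free;
  end;
end;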
OK, here we go:
unit safeobjects;

// SafeObjects
// ==========================================================================
// Written by Jon-Lennart Aasenden
// Copyright Quartex Components LTD, all rights reserved
//
// This unit is a part of the QTX Patreon Library
//
// NOTES ABOUT FREEPASCAL:
// =======================
// Freepascal does not allow anonymous procedures, which means we must
// resort to inline procedures instead:
//
// Where we in Delphi could write the following for an atomic,
// thread safe alteration:
//
//   var
//     LValue: TProtectedValue<integer>;
//
//   LValue.Synchronize( procedure (var Value: integer)
//     begin
//       Value := Value * 12;
//     end);
//
// Freepascal demands that we use an inline procedure instead, which
// is more or less the same code, just organized slightly differently.
//
//   var
//     LValue: TProtectedValue<integer>;
//
//   procedure _UpdateValue(var Data: integer);
//   begin
//     Data := Data * 12;
//   end;
//
//   begin
//     LValue.Synchronize(@_UpdateValue);
//   end;
//

{$IFDEF FPC}
  {$MODE DELPHI}
  {$H+}
{$ENDIF}

interface

uses
  {$IFDEF FPC}
  SysUtils, Classes, SyncObjs, Generics.Collections;
  {$ELSE}
  System.SysUtils, System.Classes, System.SyncObjs,
  System.Generics.Collections;
  {$ENDIF}

type

  {$DEFINE INHERIT_FROM_CRITICALSECTION}

  TProtectedValueAccessRights = set of (lvRead, lvWrite);

  EProtectedValue = class(exception);
  EProtectedObject = class(exception);

  (* Thread safe intrinsic datatype container.
     When sharing values between processes, use this class to make
     read/write access safe and protected. *)
  {$IFDEF INHERIT_FROM_CRITICALSECTION}
  TProtectedValue<T> = class(TCriticalSection)
  {$ELSE}
  TProtectedValue<T> = class(TObject)
  {$ENDIF}
  strict private
    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    FLock:    TCriticalSection;
    {$ENDIF}
    FData:    T;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function  GetValue: T; virtual;
    procedure SetValue(Value: T); virtual;
    function  GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      {$IFDEF FPC}
      TProtectedValueEntry = procedure (var Data: T);
      {$ELSE}
      TProtectedValueEntry = reference to procedure (var Data: T);
      {$ENDIF}
  public
    constructor Create(Value: T); overload; virtual;
    constructor Create(Value: T; const Access: TProtectedValueAccessRights); overload; virtual;
    constructor Create(const Access: TProtectedValueAccessRights); overload; virtual;
    destructor  Destroy; override;

    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    procedure Enter;
    procedure Leave;
    {$ENDIF}

    procedure Synchronize(const Entry: TProtectedValueEntry);

    property AccessRights: TProtectedValueAccessRights read GetAccessRights;
    property Value: T read GetValue write SetValue;
  end;

  (* Thread safe object container.

     NOTE #1: This object container **CREATES** the instance and maintains
              it! Use Edit() to execute a protected block of code with
              access to the object.

     Note #2: SetValue() does not overwrite the object reference, but
              attempts to perform TPersistent.Assign(). If the instance
              does not inherit from TPersistent an exception is thrown. *)
  TProtectedObject<T: class, constructor> = class(TObject)
  strict private
    FData:    T;
    FLock:    TCriticalSection;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function  GetValue: T; virtual;
    procedure SetValue(Value: T); virtual;
    function  GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
      {$IFDEF FPC}
      TProtectedObjectEntry = procedure (const Data: T);
      {$ELSE}
      TProtectedObjectEntry = reference to procedure (const Data: T);
      {$ENDIF}
  public
    property Value: T read GetValue write SetValue;
    property AccessRights: TProtectedValueAccessRights read GetAccessRights;

    function  Lock: T;
    procedure Unlock;
    procedure Synchronize(const Entry: TProtectedObjectEntry);

    constructor Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]); virtual;
    destructor  Destroy; override;
  end;

  (* TProtectedObjectList:
     This is a thread-safe object list implementation.
     It works more or less like TThreadList, except it deals with objects *)
  TProtectedObjectList = class(TInterfacedPersistent)
  strict private
    FObjects: TObjectList<TObject>;
    FLock:    TCriticalSection;
  strict protected
    function GetEmpty: boolean; virtual;
    function GetCount: integer; virtual;

    (* QueryObject Proxy: TInterfacedPersistent allows us to act as a
       proxy for QueryInterface/GetInterface. Override and provide another
       child instance here to expose interfaces from that instead *)
  protected
    function GetOwner: TPersistent; override;
  public
    type
      {$IFDEF FPC}
      TProtectedObjectListProc = procedure (Item: TObject; var Cancel: boolean);
      {$ELSE}
      TProtectedObjectListProc = reference to procedure (Item: TObject; var Cancel: boolean);
      {$ENDIF}
  public
    constructor Create(OwnsObjects: Boolean = true); virtual;
    destructor  Destroy; override;

    function  Contains(Instance: TObject): boolean; virtual;
    function  Enter: TObjectList<TObject>; virtual;
    procedure Leave; virtual;
    procedure Clear; virtual;
    procedure ForEach(const CB: TProtectedObjectListProc); virtual;

    property Count: integer read GetCount;
    property Empty: boolean read GetEmpty;
  end;

implementation

//############################################################################
// TProtectedObjectList
//############################################################################

constructor TProtectedObjectList.Create(OwnsObjects: Boolean);
begin
  inherited Create;
  FObjects := TObjectList<TObject>.Create(OwnsObjects);
  FLock := TCriticalSection.Create;
end;

destructor TProtectedObjectList.Destroy;
begin
  FLock.Enter;
  FObjects.Free;
  FLock.Free;
  inherited;
end;

procedure TProtectedObjectList.Clear;
begin
  FLock.Enter;
  try
    FObjects.Clear;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetOwner: TPersistent;
begin
  result := nil;
end;

procedure TProtectedObjectList.ForEach(const CB: TProtectedObjectListProc);
var
  LItem: TObject;
  LCancel: Boolean;
begin
  LCancel := false;
  if assigned(CB) then
  begin
    FLock.Enter;
    try
      {$HINTS OFF}
      for LItem in FObjects do
      begin
        LCancel := false;
        CB(LItem, LCancel);
        if LCancel then
          break;
      end;
      {$HINTS ON}
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.Contains(Instance: TObject): boolean;
begin
  result := false;
  if assigned(Instance) then
  begin
    FLock.Enter;
    try
      result := FObjects.Contains(Instance);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.GetCount: integer;
begin
  FLock.Enter;
  try
    result := FObjects.Count;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetEmpty: Boolean;
begin
  FLock.Enter;
  try
    result := FObjects.Count < 1;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.Enter: TObjectList<TObject>;
begin
  FLock.Enter;
  result := FObjects;
end;

procedure TProtectedObjectList.Leave;
begin
  FLock.Leave;
end;

//############################################################################
// TProtectedObject
//############################################################################

constructor TProtectedObject<T>.Create(const AOptions: TProtectedValueAccessRights);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FLock.Enter();
  try
    FOptions := AOptions;
    FData := T.Create;
  finally
    FLock.Leave();
  end;
end;

destructor TProtectedObject<T>.Destroy;
begin
  FData.Free;
  FLock.Free;
  inherited;
end;

function TProtectedObject<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  FLock.Enter;
  try
    result := FOptions;
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  FLock.Enter;
  try
    FOptions := Rights;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObject<T>.Lock: T;
begin
  FLock.Enter;
  result := FData;
end;

procedure TProtectedObject<T>.Unlock;
begin
  FLock.Leave;
end;

procedure TProtectedObject<T>.Synchronize(const Entry: TProtectedObjectEntry);
begin
  if assigned(Entry) then
  begin
    FLock.Enter;
    try
      Entry(FData);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObject<T>.GetValue: T;
begin
  FLock.Enter;
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedObject.CreateFmt('%s:Read not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetValue(Value: T);
begin
  FLock.Enter;
  try
    if (lvWrite in FOptions) then
    begin
      if (TObject(FData) is TPersistent) or (TObject(FData).InheritsFrom(TPersistent)) then
        TPersistent(FData).Assign(TPersistent(Value))
      else
        raise EProtectedObject.CreateFmt('Locked object assign failed, %s does not inherit from %s',
          [TObject(FData).ClassName, 'TPersistent']);
    end else
    raise EProtectedObject.CreateFmt('%s:Write not allowed error', [classname]);
  finally
    FLock.Leave;
  end;
end;

//############################################################################
// TProtectedValue
//############################################################################

constructor TProtectedValue<T>.Create(const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
end;

constructor TProtectedValue<T>.Create(Value: T);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := [lvRead, lvWrite];
  FData := Value;
end;

constructor TProtectedValue<T>.Create(Value: T; const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
  FData := Value;
end;

destructor TProtectedValue<T>.Destroy;
begin
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock.Free;
  {$ENDIF}
  inherited;
end;

function TProtectedValue<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  Enter();
  try
    result := FOptions;
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  Enter();
  try
    FOptions := Rights;
  finally
    Leave();
  end;
end;

{$IFNDEF INHERIT_FROM_CRITICALSECTION}
procedure TProtectedValue<T>.Enter;
begin
  FLock.Enter;
end;

procedure TProtectedValue<T>.Leave;
begin
  FLock.Leave;
end;
{$ENDIF}

procedure TProtectedValue<T>.Synchronize(const Entry: TProtectedValueEntry);
begin
  if assigned(Entry) then
  begin
    Enter();
    try
      Entry(FData);
    finally
      Leave();
    end;
  end;
end;

function TProtectedValue<T>.GetValue: T;
begin
  Enter();
  try
    if (lvRead in FOptions) then
      result := FData
    else
      raise EProtectedValue.CreateFmt('%s: Read not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetValue(Value: T);
begin
  Enter();
  try
    if (lvWrite in FOptions) then
      FData := Value
    else
      raise EProtectedValue.CreateFmt('%s: Write not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

end.
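And a small sketch of how the thread-safe list can be used; the TStringList payload is just an example, and assumes System.Classes is in the uses clause:

var
  lList: TProtectedObjectList;
begin
  lList := TProtectedObjectList.Create(true); // the list owns its objects
  try
    // Writers briefly lock the list, mutate it, then release it
    lList.Enter.Add(TStringList.Create);
    lList.Leave;

    // Readers can walk the items without racing other threads
    lList.ForEach(
      procedure (Item: TObject; var Cancel: boolean)
      begin
        writeln(Item.ClassName);
        Cancel := false;
      end);
  finally
    lList.Free;
  end;
end;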
Hydra now supports Freepascal and Java
In case you guys missed it, RemObjects Hydra 6.2 now supports FreePascal!
This means that you can now use forms and units from .net and Java from your Freepascal applications – and (drumroll) also mix and match between Delphi, .net, Java and FPC modules! So if you see something cool that Freepascal lacks, just slap it in a Hydra module and you can use it across language barriers.
I have used Hydra for years with Delphi, and being able to use .net forms and components in Delphi is pretty awesome. It’s also a great framework for building modular applications that are easier to manage.
Being able to tap into Freepascal is a great feature. Or the other way around, with Freepascal showing forms from Delphi, .net or Java.
For example, if you are moving to Freepascal, you can isolate the forms or controls that are not available under Freepascal in a Hydra module, and voila – you can gradually migrate.
If you are moving to Oxygene Pascal the same applies, you can implement the immediate logic under .net, and then import and use the parts that can’t easily be ported (or that you want to wait with).
The best of four worlds — You gotta love that!
Check out Hydra here:
Delphi Developer Demo Competition votes
A month ago we set up a demo competition on Delphi Developer. It’s been a few years since we last did this, and demo competitions are always fun no matter what, so it was high time we arranged one again!

This years prizes are awesome!
Initially we required at least 10 contestants for the competition to go through, but I will make an exception this time. The prizes are great and worth a good effort. I was a bit surprised by the low number of contestants, since more than 60 developers signed our poll about the event – so I was honestly hoping for at least 20.
I think the timing was a bit off, we are closer to the end of the year and most developers are working under deadlines. So next year I think I’ll move the date to June or July.
Be that as it may – a demo competition is a tradition by now, so we proceed to the voting process!
The contestants
The contestants this year are:
- Christian Hackbart
- Mogens Lundholm
- Steven Chesser
- Jens Borrisholt
- Paul Nicholls
Note: Dennis is a moderator on Delphi Developer, as such he cannot partake in the voting process.
The code
Each contestant has submitted a project to the following repositories (in the same order as the names above), so make sure you check out each one and inspect them carefully before casting your vote.
- https://github.com/TetrisSQC/Galcon
- https://github.com/mogenslundholm/MidiAndMusicXmlPlayer
- https://github.com/jdredd87/PiDuinoDroidSystem
- https://github.com/JensBorrisholt/2048
- https://bitbucket.org/paul_nicholls/petasciifier/src
Voting
We use the poll function built into Facebook, so just visit us at Delphi Developer to cast your vote! You can only vote once, and there is a one-week deadline on this (so votes are due by the 10th of this month).
Using Smart Mobile Studio under Linux
Every now and then when I post something about Smart Mobile Studio, an individual or two wants to inform me how they cannot use Smart because it’s not available for Linux. While more rare, the same experience happens now and then with OS X.
While the request for Linux or OS X support is both valid and understandable (and something we take seriously), more often than not these questions are a pre-cursor to a larger discussion; one that touches on open-source and personal philosophical points of view, and frequently on pricing.
Truth be told, the price we ask for Smart Mobile Studio is close to symbolic. Especially if you take the time to look at the body of work Smart delivers. We are talking hundreds of hand-written units with thousands of classes, each specifically adapted for HTML5, Phonegap and Node.js. Not to mention ports of popular JavaScript frameworks.
If you compare this to native object pascal development tools with similar functionality, they can set you back thousands of dollars. At $149 for the pro edition, $399 for the enterprise edition, and a symbolic $42 for the basic edition, Smart is an affordable solution. Also keep in mind that this gives you access to updates for 12 months. When was the last time you bought a full development suite that allows you to write mobile applications, platform-independent servers, platform-independent system services and HTML5 web applications for less than $400?

Our price model is more than reasonable considering what you get
By platform independent we mean that in the true sense of the word: once compiled, no changes are required. You can write a system service on Windows and it will run just fine under Linux or OS X. No re-compile needed. You can take a server and copy it to Amazon or Azure, run it in a cluster or scale it from a single instance to 100 instances without any change. That has been unheard of for object pascal until now.
Smart Mobile Studio is the only object pascal development system that delivers a stand-alone IDE, a stand-alone compiler, and a vast object-oriented run-time library written from scratch to cover HTML5, Node.js and embedded systems that run JavaScript.
And yes, we know there are other systems in circulation, but none of them are even close to the functionality that we deliver. Functionality that has been polished for seven years now. And our RTL is growing every day to capture and expose more and more advanced functionality that you can use to enrich your products.

The RTL class browser shows the depth of our RTL
As of writing we have a team of six people working on Smart Mobile Studio. We have things in our labs that are going to change the way people build applications forever. We were the first to venture into this new landscape. There was nobody we could imitate, draw inspiration from or learn from. We literally had to make the path as we moved forward.
And our vision and goal remains the same today as it was seven years ago: to empower object pascal developers – and to secure their investment in the language and methodology that is object pascal.
Discipline and purpose
There is so much I would like to work on right now. Things I want to add to Smart Mobile Studio because I find them interesting, powerful and I know people are going to love them. But that style of development, the “Garage days” as people call it, is over. It does wonders in the beginning of a project maybe, but eventually you reach a stage where a formal timeline and business plan must be carved in stone.
And once defined, you have to stick to it. It would be an insult to our customers if we pivoted left and right on a whim. Like every company we have room for research, even a couple of skunkwork projects, but our primary focus is to make our foundation rock solid before further growth.

By tapping into established JavaScript frameworks you can cover more than 40 embedded systems and micro-controllers. More and more hardware supports JS for automation
Our “garage days” ended around three years ago, and through hard work we defined our timeline, business model and investor program. In 2017 we secured enough capital to continue full-time development.
Our timeline has been published earlier, but we can re-visit some core points here:
The visual components that shipped with Smart Mobile Studio in the beginning were meant more as examples than actual ready-to-use modules. This was common for other development platforms of the day, such as Xamarin’s C# / Mono toolchain, where you were expected to inherit from and implement aspects of a “partial class”. This is also why Smart Pascal has support for partial classes (neither Delphi nor Freepascal supports this wonderful feature).

One of our skunkwork projects is a custom linux distro that runs your Smart applications directly in the Linux framebuffer. No X or desktop, just your code. Here running “the smart desktop” as the only visual front-end under x86 vmware
Since developers coming from Delphi had different expectations, there was only one thing to do: completely re-write every single visual control (and add a few new controls) so that they matched our customers’ expectations. So the first stretch of our timeline has been 100% dedicated to the visual aspects of our RTL. While doing so we have made the RTL faster and more efficient, and added some powerful sub-systems:
- A dedicated theme engine
- Unified event delegation
- Storage device classes
- Focus and control tracking
- Support for relative position modes
- Support for all boxing models
- Inline linking ( {$R “file.js”} will now statically link an external library)
- And much, much more
So the past eight months have been all about visual components.

Theming is important
The second stretch, which we are in right now, is dedicated to the non-visual infrastructure. This means in particular Node.js but also touches on non-visual components, TAction support and things that will appear in updates this year.
Node.js is especially important since it allows you to write platform and chipset independent web servers, system services and command-line applications. This is pioneering work and we are the first to take this road. We have successfully tamed the alien landscape of JavaScript, both for client, mobile and server – and terraformed it into a familiar, safe and productive environment for object pascal developers.
I feel the results speak for themselves, and our next update brings Smart Mobile Studio to the next level: full stack cloud and web development. We now cover back-end, middle-ware and front-end technologies. And our RTL now stretches from micro-controllers to mobile application to clustered cloud services.
This is the same technology used by Netflix to process terabytes of data every second on a global scale. Which should tell you something about the potential involved.
Working on Linux
Since Smart Mobile Studio was designed to be a swiss army knife for Delphi and Lazarus developers, capable of reaching segments of the market where native code is unsuitable – our primary focus is Microsoft Windows. At least for now.
Delphi itself is a Windows-based development system, and even though it supports multiple targets, Windows is still the bread and butter of commercial software development.
Like I mentioned above, we have a timeline to follow, and until we have reached the end of that line, we are not prepared to refactor our IDE for Linux or OS X. This might change sooner than people think, but until our timeline for the RTL is concluded, we will not allocate time for making the IDE platform independent.
But, you can actually run Smart Mobile Studio on both Linux and OS X today.
Linux has a system called Wine. This is a system that implements the Windows API, but delegates all the calls to Linux. So when a Windows program calls a WinAPI function through Wine, it’s delegated to the Linux variation of the same call. This is a massive undertaking, but it has years of work behind it and functions extremely well.
So on linux you can install it by opening up a shell and typing:
sudo apt-get install wine
I take for granted here that your Linux flavour has APT installed (I’m using Ubuntu since that is easy to work with), which is the package manager that gives you the “apt-get” command. If you have some other system then just google how to install a package.
With Wine and its dependencies installed, you can run the Smart Mobile Studio installer. Wine will create a virtual, sandboxed disk for you – so that all the files end up where they should. Once finished you punch in the license serial number as normal, and voila – you can now use Smart Mobile Studio on Linux.
Note: in some cases you have to right-click the SmartMS.exe and select “run with -> Wine”, but usually you can just double-click the exe file and it runs.
Smart Mobile Studio on OSX
Wine has also been ported to OS X, and there the process is even more user-friendly. You download a program called WineBottler, which takes Smart and bundles it with Wine plus any dependencies it needs. You can then start Smart Mobile Studio like it was a normal OS X application.
Themes and look
The only problem with Wine is that it doesn’t support Windows themes out of the box. It would be illegal for them to ship those files. But you can manually copy over the Windows theme files and install them via the Wine config application. Once installed, Smart will look as it should.
By default the old Windows 95 look & feel is used by Wine. Personally I don’t care too much about this – it’s being able to code, compile and run the applications that matters to me – but if you want a more modern look then just copy over the Windows theme files and you are all set.
TextCraft 1.2 for Smart Pascal
TextCraft is a fast, generic object-pascal text parsing framework. It provides you with the classes you need to write fast text parsers that builds effective data models.
The Textcraft framework was recently moved up to version 1.2 and has been ported from Delphi to both Freepascal and Smart Pascal (the dialect used by Smart Mobile Studio). This is probably the only parsing framework that spans 3 compilers.
Smart Pascal coders can download the framework unit here and place it in the $Install/Library folder (where $Install is where Smart’s library and rtl folders are installed): BitBucket TextCraft Repository
Buffer, parser, model
Textcraft divides the job of parsing into 4 separate objects, each of them representing a concept familiar to people writing compilers; these are: buffer, parser, model and context. If you are parsing a programming language the “model” would be what people call the AST (short for “Abstract Syntax Tree”). This AST is later fed to the code generator, turning it into an executable program (Smart Pascal compiles to JavaScript so there really is no limit to the transformation, just the level of complexity).
Note: Textcraft is not a compiler for any particular language, it is a generic text parsing framework that is language-agnostic. Meaning that it makes it easy for you to make parsers with it. We recently used it to parse command-line parameters for Freepascal, so it doesn’t have to be about languages.
The buffer
The buffer has one of the most demanding jobs in the framework. In other frameworks the buffer is often just a memory allocation with a simple read method; but in TextCraft the buffer is responsible for a lot more. It has to expose functions that make text recognition simple and effective; it has to keep track of column and row position as you move through the buffer content – and much, much more. So in TextCraft the buffer is where text methodology is implemented in full.
The parser
As mentioned, the parser is responsible for using the buffer’s methods to recognize and make sense of a text. As it makes its way through the buffer content, it creates model objects that represent each element. Typical for a language would be structures (records), classes, enums, properties and so on. Each of these will be registered in the AST data model.
The Model
The model is a construct. It is made up of as many model-object instances as you need to express the text in symbolic form. It doesn’t matter if you are parsing a text document or source code, you would still have to define a model for it.
The model obviously reflects your needs. If you just need a superficial overview of the data then you create a simple model. If you need more elaborate information then you create that.
Note: When parsing a text document, a traditional organization would be to divide the model into: chapter, section, paragraph, line and individual words.
The Context
The context object is what links the parser to our model and buffer objects. By default the parser doesn’t know anything about the buffer or model. This helps us abstract away things that would otherwise turn our code into a haystack of references.
The way the context is used can be described like this:
When parsing complex data you often divide the job into multiple classes. Each class deals with one particular topic. For example: if parsing Delphi source code, you would write a class that parses records, a parser that handles classes, another that handles field declarations (and so on).
As a parser recognizes one of these constructs, say a record, it creates a record model object to hold the information. It then adds that to the context by pushing it onto the context’s reference stack.
The first thing a child parser does is to grab the model object from that reference stack. This way the child parsers always know where to store their model information. It doesn’t matter how deep or recursive something gets; the stack approach, and passing the context object to the child parsers, makes sure each parser “knows” where to store information. The sketch below illustrates the idea.
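Here is a minimal sketch of that stack-based context idea in plain object pascal. The class and member names (TParseContext, Push, Peek, Pop) are hypothetical illustrations of the pattern described above – not the actual TextCraft API.

uses
  System.Generics.Collections;

type
  // Hypothetical context object that links parsers to the model they build
  TParseContext = class
  private
    FStack: TStack<TObject>; // reference stack of model objects
  public
    constructor Create;
    destructor Destroy; override;
    procedure Push(Model: TObject); // parent parser registers its model object
    function  Peek: TObject;        // child parser asks "where do I store my data?"
    procedure Pop;                  // parent parser removes it when done
  end;

constructor TParseContext.Create;
begin
  inherited Create;
  FStack := TStack<TObject>.Create;
end;

destructor TParseContext.Destroy;
begin
  FStack.Free;
  inherited;
end;

procedure TParseContext.Push(Model: TObject);
begin
  FStack.Push(Model);
end;

function TParseContext.Peek: TObject;
begin
  result := FStack.Peek;
end;

procedure TParseContext.Pop;
begin
  FStack.Pop;
end;

A record parser would Push its record model, hand the context to whatever child parsers it spawns (which Peek to find their storage target), and Pop when it is done.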
Why is this important?
This is important because it’s cost-effective in computing terms. The TextCraft framework allows you to create parsers that can chew through complex data without turning your project into spaghetti.
So no matter if you are parsing phone numbers, zip codes or complex C++ source code, TextCraft will help you get the job done; in a way that is easy to understand and maintain.
Amiga OS 4, object pascal and everything
Those that read my blog know that I’m a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly the company had to file for bankruptcy after a series of absurd financial escapades by its management.

The original team before it fell prey to mismanagement
The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and the death of Commodore meant innovation of technology took a turn for the worse.
Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up on the story.
On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the 90’s, more than 30 years after the machine’s launch I still stumble over amazing source-code for this awesome computer. There are a few things in Amiga OS that hint at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1985 – and that much of what we regard as a modern desktop experience is actually inherited from the Amiga.

Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]
For instance: the realization of the new Amiga models has cost £1.2 million, so there are serious players involved in this.
The user-base is varied of course, it’s not all developers and engineers. You have gamers who love to kick back with some high quality retro-gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks; then use that to spice up otherwise flat and dull PC based tracks.
What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand punching hex-codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised just how capable the classic systems are.
And yes, people code games, demos and utility programs for the classical Amiga systems even today. I just installed a Dropbox cloud driver on my system and it works brilliantly.
The brand new Amiga
Classic Amiga machines are awesome, but this post is not about the old models; it’s about the new models that are coming out now. Yes, you read that right: next generation Amiga computers have finally become a reality. Having waited for 22 years I am thrilled to say that I just ordered a brand new Amiga 5000! (and can’t wait to install Freepascal and start coding).
It’s also quite affordable. The x5000 model (which is the power system) retails at around €1650, which is roughly half the price I paid for my Intel i7, Nvidia GeForce GTX 970 workstation. And the potential as a developer is enormous.
Just think about the onslaught of Delphi code I can port over, and how instrumental my software can become by getting in early. Say what you will about Freepascal but it tends to be the second compiler to hit a platform after GCC. And with Freepascal in place a Delphi developer can do some serious magic!
Right. So the first Amiga is the power model, the Amiga 5000. This can be ordered today. It costs about the same as a good PC (in the €1600 range depending on import tax and VAT). This is far less than I paid for my crap iMac (which I never use anymore).
The power model is best suited for people who do professional work on the machine. Software development doesn’t necessarily need all the firepower the x5000 brings, but more demanding tasks like 3d rendering or media composition will.
The next model is the A1222, which is due out around Christmas 2017 or the first quarter of 2018.

The A1222 “Tabour”
You would perhaps expect a mid-range model, something retailing at around €800 or thereabouts – but the A1222 is without a doubt a low-end model.
It should retail for roughly €450. I think this is a great idea, because AEON (who make the hardware) have different needs from Hyperion (who make the new Amiga OS [more about that further into the article]). AEON needs to get enough units out to secure the foundation – while Hyperion needs vertical market penetration (read: become popular and also hit other hardware platforms as well). These factors are mutually exclusive, just like they are for Windows and OS X – which is probably why Apple refuses to sell OS X without a Mac, or they could end up competing with themselves.
A brave new Amiga OS
But there is more to this “revival” than just hardware. Many would even say that hardware is the least interesting about the next generation systems, and that the true value at this point in time is the new and sexy operating system. Because what the world needs now more than hardware (in my opinion) is a lightweight alternative to Linux and Windows. A lean, powerful, easy to use, highly customizable operating system that will happily boot on a $35 Raspberry PI 3b, or a $2500 Intel i7 monster. Something that makes computing fun, affordable and most of all: portable!
And by lean I have to stress that the original Amiga operating system, the classic 3.x system that was developed all the way to the end, was initially created to thrive in as little as 512 KB of memory. At most I had 2 megabytes of RAM in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a multi-tasking, window-based OS a decade before Microsoft.
Naturally the next-generation systems are built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes. Not even Windows embedded can boot up on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.
What way to go?
OK so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has, believe it or not, done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior – and add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually Amiga OS 4 is absolutely gorgeous. Just stunning to look at.
Now there are many different theories and ideas about where a new Amiga should go. Sadly it's not as simple as “hey let's make a new amiga“; the old system is mired in patent and licensing issues. It is close to an investor's worst nightmare since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven't seen a new Amiga until now is that the owners have been fighting between themselves. The Amiga as we know it has been caught in limbo for close to two decades.
My stance on the whole subject is that Trevor Dickenson, the man behind the next-generation Amiga systems, has done the only reasonable thing a sane human being can when faced with a proverbial patent kebab: the old hardware is magical for those of us who grew up on it, but by today's standards it is an obsolete dinosaur. The same can be said about Amiga OS 3.9. So Trevor has gone for a full re-implementation, on brand new hardware.
The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform-independent (or at least written in a way that makes the code run on different hardware as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, which is a community-made Amiga OS clone. A project that has been continuously maintained for 20 years now.
While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out on both systems – and don’t get me wrong: I am thrilled that Aros is available, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion of course.
New markets
Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn't take long before a few thrilling opportunities present themselves.
The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same that haunts OS X and Windows, namely that size and complexity grow proportionally over time. I have seen Linux systems as small as 20 megabytes, but for running X-based full-screen applications that take advantage of hardware-accelerated graphics, you really need a bigger infrastructure. And the moment you start adding those packages, Linux puts on weight and dependencies fast!

The embedded market is one place where Amiga OS would do wonders
With embedded systems I'm not just talking about headless servers or single-application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi and C++ based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.
This price-tag is high considering the tasks you need to do in a POS terminal or ticket system. You usually have a touch-enabled screen, a network connection, and a local database that will cache information should the network be down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a Visa terminal is included then a USB driver must also be factored in.
These tasks are not heavy in themselves. So in theory a smaller system, properly adapted for the job, could do the same work if not better – at a much better price.
More for less, the Amiga legacy
Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of RAM for a heavy-duty visual application, a full network stack and database services – Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.
Already we have cut costs. Power ARM boards ship with 4 gigabytes of RAM, powered by a snappy ARM v9 CPU – and medium boards ship with 1 or 2 gigabytes of RAM and a less powerful CPU. The price difference is already a good 75€ on RAM alone. And if the CPU is a step down, from ARM v9 to ARM v8, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).
The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this since I don’t have access to the machine I have ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there is still plenty of embedded systems based on PPC. But yes, I would urge our good friend Trevor Dickenson to establish a migration plan to ARM because it would kill two birds with one stone: upgrading the faithful Amiga community while entering into the embedded market at the same time. Since the same hardware is involved these two factors would stimulate the growth and adoption of the OS.

The PPC platform gives you a lot of bang-for-the-buck in the A1222 model
But for the sake of argument, let's say that Amiga OS 4 scales exceptionally well, meaning that it will happily run on ARM v8 with 1 gigabyte of RAM. This would mean that it would run on systems like the Asus Tinkerboard, which retails at 60€ incl. VAT. This would naturally not be a high-performance system like the A5000, but embedded is not about that – it's about finding something that can run your application safely, efficiently and without problems.
So if the OS scales gracefully for ARM, we have brought the cost down from 300€ to 60€ for the hardware (I would round that up to 100€, something always comes up). If the customer's software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.
Think different means different
Already I can hear my friends that are into Linux yell that this is rubbish and that Linux can be scaled down from 8 gigabytes to 20 megabytes if so needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven't spent a considerable amount of time learning it. It's not a system you can just jump into and expect to have results the next day. Amiga OS has a much more friendly architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.
Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry PI. And based on personal experience I have to agree with this. In the past 20 years I have only seen 2 companies that use Linux as their primary OS both in products and in their offices. Everyone else uses Windows embedded for their products and day-to-day management.
So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few that use it on a hobby basis. We simply don't have time to dig into esoteric “man pages” or explore the intricate secrets of the kernel.
The end result is that companies go with what they know. They get Windows embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product — they are stuck with a 350€ baseline.
Be the change
The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those that have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource-hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest, Windows embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous and it won't perform well unless you throw at least 2 gigabytes at it (relative to the task of course, but in broad strokes that's the ticket).
But wouldn't it be nice with a clean, resource-friendly and extremely fast alternative? One where auto-starting applications in exclusive mode was a “one liner” in the startup-sequence file? A file which is actually called “startup-sequence” rather than some esoteric “init.d” alias that is neither a folder nor an archive but something reminiscent of the Windows registry? A system where the libraries and the whole folder structure that make up drivers, shell, desktop and services are intuitively named for what they are?

Amiga OS could piggyback on the wave of low-cost ARM SBCs that are flooding the market
You could learn how to use Amiga OS in two days tops, yet it holds enough depth that you can grow with the system as your needs become more complex. The architecture is so well-organized that even if you know nothing about settings, a folder named “prefs” doesn't leave much room for misinterpretation.
But the best thing about AmigaOS is by far how elegantly it is architected. You know, when software is planned right it tends to factor out the things that would otherwise be obstacles. It's like a well-oiled machine where each part makes perfect sense and you don't need a huge book to understand it.
From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC – but rather ARM and embedded systems.
It would take an effort to port over the code from a PPC architecture to ARM, but having said that – PPC and ARM have much more in common than say, PPC and x86.
I also think the time is ripe for a solid power ARM board for desktop computers. While the smaller boards get most of the attention – the Raspberry PI, the ODroid XU4 and the (S)Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch, and as expected it ships with 4 gigabytes of RAM out of the box.
While it’s impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered terrible performance compared to the A1222 chipset. I am also sure that if we beefed up the price to 700€, aimed at home computing rather than embedded – the ARM power boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn’t necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.
Another cool factor regarding ARM is that the BIOS gives you a great deal of features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is supported by the SoC itself and you don't need to write filesystem drivers for it. Most boards also support Ext2 and Ext3 filesystems, recognized automatically on boot. The rich BIOS/mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level tasks that took months to get right in the past.
Final words
This has been a long article, from the early years of Commodore – all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga perhaps pick up an idea or two from this material.
Either way I will support the Amiga with everything I've got – but we need a couple of smart ideas and concrete plans on behalf of management. And in my view, Trevor is doing exactly what is needed.
While we can debate the choice of PPC, it’s ultimately a story with a long, long background to it. But thankfully nothing is carved in stone and the future of the Amiga 5000 and 1222 looks bright! I am literally counting the days until I get one!
Understanding Smart Pascal
One of the problems you get when working pro bono on a project is a constant lack of time. You have a fixed amount of hours you can spare, and every day you have to make decisions about where to invest those hours. The result is that Smart Mobile Studio has a wealth of technical resources and depth, but lacks the documentation you expect such a product to have. This has been, and continues to be, a problem.
Documentation really is a chicken-and-egg thing. It doesn't start out that way, but once the product is launched, you get trapped in a boolean dynamic: “Few people buy it because it lacks documentation; you can't afford to write documentation because few people buy it“. Considering the size of our codebase I don't blame people for being a bit overwhelmed.
Despite our shortcomings Smart Mobile Studio is growing. It has slow but steady growth as opposed to explosive growth. But all products need periods of explosive growth to build up resources so that future evolution of the product can be financed. So this lack of solid documentation acts almost like a filter. Only those that are used to coding in Delphi or Lazarus at a certain level, writing their own classes and components, will feel comfortable using it.
It has become a kind of elite toolkit, used only by the most advanced coders.
Trying to explain
The other day I talked to a man who simply could not wrap his head around Smart Pascal at all. Compile for JavaScript? But.. how.. How do you get classes? He looked at me with a face of disbelief. I told him that we emit a VMT (virtual method table) in JavaScript itself. That way, you get real classes, real interfaces and real inheritance. But it was like talking to a wall.
In his defence, he understood conceptually what a VMT was, no doubt from having read about it in the context of Delphi; but how it really works, and that the principle is fundamental to object orientation at large, was alien to him.
var TObject = {
   $ClassName: "TObject",
   $Parent: null,
   ClassName: function (s) { return s.$ClassName },
   ClassType: function (s) { return s },
   ClassParent: function (s) { return s.$Parent },
   $Init: function () {},
   Create: function (s) { return s },
   Destroy: function (s) {
      for (var prop in s)
         if (s.hasOwnProperty(prop)) delete s[prop]
   },
   Destroy$: function (s) { return s.ClassType.Destroy(s) },
   Free: function (s) { if (s !== null) s.ClassType.Destroy(s) }
}
Above: In object orientation the methods are only compiled once, while the instance is cloned. This is why methods in OOP languages are compiled with a hidden first parameter that carries the instance (Self). Inheritance never duplicates the code inherited from ancestors.
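To make that concrete, here is a small hand-written sketch in the same style as the TObject structure above. It is not actual Smart compiler output – the TPoint name and its fields are invented for illustration – but it shows how methods live once in a shared VMT object and receive the instance (“s”, i.e. Self) as an explicit first parameter:
var TPoint = {
   $ClassName: "TPoint",
   $Parent: TObject,                          // reuses the TObject shown above; the parent VMT is referenced, never copied
   $Init: function (s) { s.X = 0; s.Y = 0; },
   MoveBy: function (s, dx, dy) { s.X += dx; s.Y += dy; }   // "s" is the hidden Self parameter
};
// An instance is just data; the methods above are compiled once and shared by every instance:
var p = { ClassType: TPoint };
TPoint.$Init(p);                              // constructor-style initialisation
TPoint.MoveBy(p, 10, 20);                     // Self is passed explicitly as the first argument
console.log(TPoint.$ClassName, p.X, p.Y);     // -> "TPoint" 10 20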
In retrospect I have concluded that it had more to do with “saving face” than this guy not understanding. He had just spent months writing a project in JavaScript that he could complete in less than a day using Smart Pascal – so naturally, he would look the fool to admit that he just wasted a ton of company money. The easiest way to dismiss any ignorance on his part, was to push our tech into obscurity.
But what really baked my noodle was his lack of vision. He had failed completely to understand what cloud is, where the market is going, and what that will mean for his skill-set, his job prospects and the future of software development.
It’s not his fault. If anything it’s my fault for not writing about it earlier. In my own defense I took it for granted that everyone understood this and knew what was coming. But that is unfair because the ability to get a good overview of the situation depends greatly on where you are.
JavaScript, the most important language in the world
It may be hard for many people to admit this, but it is nonetheless true. JavaScript has become the single most important language on the planet. 50% of all software written right now, regardless of whether it's for the server or the browser, is written in JavaScript.
I knew this would happen as early as 2008, all the signs pointed to it. In 2010 Delphi was in a really bad place and I had a choice: drop Delphi and throw 20 years of hard-earned skills out the window and seek refuge in C++ or C#; or adapt object pascal to the new paradigm and try to save as much of our collective knowledge and skills as I could.
Even before I started on Smart I knew that something like node.js would appear. It was inevitable. Not because I am so clever, but because emerging new technology follows a pattern. Once it reaches critical mass – universal adoption and adaptation will happen. It follows logical steps of evolution that apply to all things, regardless of what the product or solution may be.
What is going to happen, regardless of what you feel
Ask yourself, what are the implications of program code being virtual? What are the logical steps when your code is 100% abstracted from hardware and the underlying, native operating system? What are the implications when script code enjoys the speed of native code (modern JavaScript engines JIT-compile to machine code – JavaScriptCore's top tier even used LLVM for a time – so JavaScript now runs close to on par with native code), yet can be clustered, replicated, moved and even paused?
Let me say it like this: the next generation of rapid application development won't deliver executable files or single-platform installers. You will deliver entire eco-systems. Products that can be scaled, moved between hosts and replicated – products that run and exist in the cloud purely as virtual instances.

Norwegian-developed FriendOS is just one of the cloud-based operating systems in development right now. It will have a massive impact on the world
Where Delphi developers today drag and drop components on a form, future developers will drag and drop entire service stacks. You won't drop just a single picture on a form, but connectors to international media resource managers; services that guarantee accessibility across continents, 24 hours a day, seven days a week.
You think the Chromebook is where it ends? It's just the beginning.
Right now there are 3 cloud-based operating systems in development, all of them with support for the new, distributed software model. They allow you to write both the back-end and front-end of your program, which in the new model is regarded as a single entity or eco-system. Things like storage have been distributed for well over a decade now, and you can pipe data between Dropbox, Google Drive or any host that implements a REST storage interface.
Some of the most powerful companies in the world are involved in this. Now take a wild guess what language these systems want you to use.
I’m sorry, but the way we make programs today is about to go extinct.
Understanding the new software model
As a Delphi or Lazarus developer you are used to the notion of server-side applications and client-side applications. The distinction between the two has always been clear, but that is about to change. It's still going to be around, at least for the next decade or so, but only for legacy purposes.
To backtrack just a second: Smart introduced support for node.js applications earlier, but it was at a very low level. In the next update we introduce a large number of high-level classes that are going to change the way you look at node completely.
Two new project types will be introduced in the future, giving you a very important distinction. Namely:
- Cloud service
- System service
To understand these concepts, you first have to understand the software model that next-generation cloud operating systems work with. Superficially it may look almost identical to the good old two-tier model, but in the new paradigm it is treated as a single, portable, scalable, cluster-capable entity.
The thing about clustering and scaling is what tends to confuse traditional developers, because scaling in a native language is hard work. First you have to write your program in such a way that it can be scaled (e.g. work as a member of a group, or cluster). Second, you have to write a gate-keeper or head that delegates connections between the members of the cluster. If you don't do this from the very beginning, it will be a costly affair to retrofit a project with the required mechanisms.
Node.js is just awesome because it can cluster your code without you having to program for that. How? Because JavaScript is virtual. So you can fire up 50, 100 or 10,000 instances of the same process, and the only thing you need to worry about is the gate-keeper process. You just park the cluster in another IP range that is only accessible by the gatekeeper, and that's pretty much it.
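To show how little ceremony this takes, here is a minimal sketch using node's built-in cluster module. This is plain node.js (not Smart RTL code), and the port number and per-CPU worker count are arbitrary choices:
const cluster = require('cluster');
const http = require('http');
const os = require('os');
if (cluster.isMaster) {                        // "isPrimary" on newer node versions
   // The gate-keeper: fork one worker per CPU core and restart any worker that dies
   for (let i = 0; i < os.cpus().length; i++) {
      cluster.fork();
   }
   cluster.on('exit', function (worker) {
      console.log('worker ' + worker.process.pid + ' died, restarting');
      cluster.fork();
   });
} else {
   // Every worker runs the exact same application code, untouched by the clustering logic
   http.createServer(function (req, res) {
      res.end('handled by worker ' + process.pid + '\n');
   }).listen(8080);
}
The worker branch is ordinary application code; the only cluster-aware part is the gate-keeper block at the top.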
When a software eco-system is installed on a cloud host, the entire architecture described by the package is created. So the backend part of your code is moved to an instance dedicated for that, the front end is installed where it belongs, same with database and so on. Forget about visual controls and TComponent, because on this level your building blocks are whole services; and the code you write scales from low-level socket coding to piping terabytes of data between processes.

PM2 is a node.js process manager that gives you clustering and pooled application behavior for free out of the box. You don’t even have to tailor your code for it
Services that physically move
While global accessibility is fine and good, speed is a factor. It goes without saying that having to fetch data from Asia when you are in the US is going to be less than optimal. But this is where cloud is getting smarter.
Your services will actually physically move to a host closer to where you are. So let’s say you take a business trip from the US to Hong-Kong. The service will notice this, find a host closer to where you are, and replicate itself to that server.
This is not science fiction; it's already implemented in Azure, and Google's APIs account for this behavior as well. It's pretty cool if you ask me.
Is node.js really that powerful?
Let me present you with a few examples. It's important to understand that these examples don't mean everyone has to operate on this scale. But we feel it's important to show people just what you can achieve and what node is capable of.
Netflix is an online video streaming service that has become a household name in a very short time. Cloud can often be a vague term, but no other service demonstrates the potential of cloud technology as well as Netflix. In 2015 Netflix had more than 69 million paying customers in roughly 60 countries. It streams on average 100 million media hours a day.
Netflix moved from a traditional, native software model to a 100% clustered, Node.js powered model in 2014. The ability for Netflix to run nearly universally on all devices, from embedded computers to Smart TVs, is largely due to their JavaScript codebase.
PayPal is a long-standing online banking and payment service that was first established in 1998. In Q4 of 2016 PayPal had 192 million registered customers world-wide. The service's annual mobile payment volume was 66 billion US dollars in 2016 – more than triple that of the previous year. PayPal moved from a traditional, native server model to Node.js back in 2015, when their entire transaction service layer was re-written from scratch.
Uber is a world-wide taxi company that is purely cloud and web-based. Unlike traditional taxi companies, Uber owns no cars and doesn't employ drivers; instead it's a service that anyone can partake in – either as a customer or a driver. As of 2016 Uber operates in 551 cities across 60 countries. It delivers more than one million rides daily and has an estimated 10 million customers.
Uber's server technology is based on Node.js and exists purely as a cloud-based service. Uber has several client applications for various mobile devices; the majority of these are HTML5 applications that use Cordova/PhoneGap (same as Smart applications).
Understanding Smart
While the RTL and the full scope of the technology have been a bit of a “black box” for many people, hopefully the ideas and concepts around it have matured enough for people to open up to it. We were a bit early with it, and without the context that is showing up now I do understand that it can be hard to get the full scope of it (not to mention the potential).
With the cloud and some of its potential (and no, it's not going away), a sense of urgency should start to set in. Native programming is not going away, but it will be marginalized to the point where it goes back to its origins: as a discipline and part of engineering.
Public software and services will move to the cloud and as such, developers will be forced to use tools and languages better suited for that line of work.
We firmly believe that object pascal is one of the best languages ever created. Smart Pascal has been adapted especially for this task, and the time-saving aspects and “edge” you get by writing object pascal over vanilla JavaScript are unquestionable. Inheritance alone is helpful, but the full onslaught of high-level features Smart brings takes it to the next level.

The benefits of writing object-oriented, class-based code are readability, order and maintainability. The benefit of a large RTL is productivity – the most important aspect of all in the world of software development.
Hopefully the importance of our work will be easier to understand, and more apparent, now that cloud is becoming more visible and people are picking up the full implications of this.
The next and obvious step is moving Smart itself to the cloud, which we are planning for now. It will mean you can code and produce applications regardless of where you are. You can be in Spain, France or Oklahoma USA – all you will need is a browser, your object pascal skills and you’re good to go.
Things like “one click” hosting and instance budgets for auto-scaling – the value for both developers and investors should be fairly obvious at this point.
Starting Monday we will actively look for investors.
Sincerely
Jon Lennart Aasenden
Goodbye G+ Delphi Developers group
Looking into the actual data, with JavaScript leading and JavaScript libraries on client and server side (Angular.js, Node.js) on the rise, it was nice to see that while Delphi was not listed as an option, it was the most typed entry in the “others” category – Source: Marco Cantu
- Smart Mobile Studio is written 100% in Delphi
- Smart Mobile Studio was created from scratch to compliment Delphi
- Smart Mobile Studio supports Remobjects SDK out of the box
- Smart Mobile Studio supports Embarcadero Datasnap out of the box
- Smart Mobile Studio helps Delphi developers write the middleware or interfacing that sits between a native Delphi solution and a customer's node.js or purely web-based solution. This is 2016 after all.
- Where a Delphi developer would previously have to decline a job offering because the customer's existing infrastructure is based on node or emits data unsuitable for traditional Delphi components, developers can now relax and write the missing pieces in Smart Pascal, a dialect roughly 90% compatible with the language they know and love to begin with.
- Smart Mobile Studio ships with a vast RTL that greatly simplifies talking with Delphi. It also has a VCL-inspired component hierarchy where more and more complex behavior is introduced vertically. This gives your codebase a depth which is very hard to achieve under vanilla JavaScript or Typescript.
- Smart Mobile Studio is all about Delphi. It is even used to teach programming in the UK – to teenagers who, by consequence and association, are statistically more likely to buy Delphi as they mature.
- Decline the project
- Learn JavaScript or Typescript and do battle with its absurd idiosyncrasies, lack of familiar data types, lack of inheritance and lack of everything you are used to
- Use Smart Mobile Studio to write the middleware between your native solution and the customer existing infrastructure
If you pick option number two, it won't take many days before you realize just how alien JavaScript is compared to Delphi or C++ Builder. And you will consequently start to understand the value of our RTL, which is written to deal with anything from low-level coding (allocmem, reallocmem, fillmemory, move, buffers, direct memory access, streams and even threading) and up. Our RTL is written to make the JavaScript virtual machine palatable to Delphi developers.
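Just to illustrate that the JavaScript virtual machine really does expose the raw primitives such an RTL can wrap (this is plain JavaScript and standard typed-array APIs, not the Smart RTL itself):
// Plain JavaScript typed-array primitives - the kind of low-level building
// blocks a buffer/stream oriented RTL can wrap into familiar Delphi-style calls.
var buffer = new ArrayBuffer(16);        // raw, fixed-size block of bytes (think "allocmem")
var bytes  = new Uint8Array(buffer);     // typed view over the same memory
bytes.fill(0);                           // clear the block (think "fillmemory")
bytes.set([1, 2, 3, 4], 8);              // copy a few bytes to offset 8 (think "move")
var view = new DataView(buffer);
view.setUint32(0, 0xDEADBEEF, true);     // write a little-endian 32-bit value
console.log(view.getUint32(0, true).toString(16));   // -> "deadbeef"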
Banning dialects?
Once you start banning dialects of a language, or an auxiliary utility designed to empower Delphi and make sure developers can better interface with that aspect of the marketplace – where does it stop? I am honestly curious as to where exactly the Google+ Delphi group draws the line here.
Should RemObjects Oxygene likewise be banned since it helps Delphi developers target the dot net framework? That would be odd, since Delphi shipped with Oxygene for years.
Should script engines be banned? What about SQL? SQL is a language most Delphi developers know and use – but it is by no measure object pascal and never will be. Interestingly, Angular.js seems to be just fine for the Google+ Delphi group. Which is a bit hypocritical since that is JavaScript plain and simple.
What about report engines? Take FastReport for instance: FastReport has for the past decade or more bolted its own scripting engine into the product, a script engine that supports a subset of object pascal but also visual basic (blasphemy!). On iOS you are, if we are going to follow the Apple license agreement down to the letter, not even allowed to use FastReport. Apple is very clear on this: any application that embeds a scripting engine and at the same time downloads runnable code (which would be the case when downloading report files) is not allowed on the App Store. The idea here is that should some harmful code be downloaded, Apple will give you a world of hurt. And even if you consider it a report file, it does contain runnable code – and that is a violation.
So is FastReport Delphi enough? Or is that banned as well?
Where exactly do we draw the line? I can understand banning posts about dot net if it's all about C#, but if it's in relation to Delphi, or deals with Delphi talking to dot net (or an actual dialect like Oxygene), then I really don't see why it should be banned or deleted. Even Delphi itself uses dot net; it's one of the prerequisites for installing Delphi in the first place. I guess Delphi should also be banned from the Delphi group then?
In our group on Facebook, all are welcome. Embarcadero, Lazarus and Free Pascal, Elevate Software (be it their database engines or web builder), Pax compiler, DWScript, Smart Pascal, NewPascal (or whatever it's called these days) and even Turbo Pascal for that matter. And we sure as shit don't delete posts about Delphi talking to another system.
So goodbye G+ and good luck
Raspberry PI fun
Just got this box in the mail, now just waiting for my new touch-screens. Looks like I've got the weekend covered.
There was a small LCD screen that came with the kit. I will be putting that in my Amiga 500 retro-mod, and will try to wire it up so that every time I start a game or program, the title of the executable will scroll over the display.
This will involve an FPC-compiled signal bridge, but I may be able to do the whole thing in node.js. On the Amiga side of things a task dump running on interrupt should be enough to catch the filename, then broadcast it via UDP to localhost. That way the Linux side of things can pick up the data – and push the text to the display.
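For the Linux side, a node.js UDP listener only takes a handful of lines with the built-in dgram module. A rough sketch – the port number and the pushToDisplay helper are assumptions, not part of the actual setup:
const dgram = require('dgram');
const socket = dgram.createSocket('udp4');
socket.on('message', function (msg, rinfo) {
   const title = msg.toString('utf8').trim();          // the executable name sent from the Amiga side
   console.log('Amiga launched: ' + title + ' (from ' + rinfo.address + ')');
   // pushToDisplay(title);                             // hypothetical helper that scrolls the text on the LCD
});
socket.bind(4100);                                      // port 4100 is an arbitrary choice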
Nerdvana..