
Vector Containers For Delphi and FPC

April 11, 2020

Edit: Version 1.0.1 has been released, with a ton of powerful features. Read about it here and grab your fork: https://jonlennartaasenden.wordpress.com/2020/04/13/qtx-framework-for-delphi-and-fpc-is-available-on-bitbucket/


If you have been looking at C++ and envied its std::vector class, wanting the same for Delphi, or wishing you could access untyped memory through a typed view (basically turning a raw buffer into an array of <T>), then I have some good news for you!

Vector containers, a unified storage model and typed views are just some of the highlights of my vector library. I did an article on the subject for the Embarcadero community website, so head over and read up on how you can enjoy these features in your Delphi application!
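
To make the typed-view idea concrete, here is a minimal Delphi-style sketch of the concept. The names are hypothetical and not the library's actual API (read the article for the real thing); the point is simply that a view reinterprets a raw buffer in place:

{$POINTERMATH ON}

type
  // Hypothetical typed view: interprets a raw buffer as an array of <T>
  TTypedView<T: record> = record
  strict private
    FBuffer: PByte;
    FCount: integer;
    function GetItem(Index: integer): T;
    procedure SetItem(Index: integer; const Value: T);
  public
    constructor Create(Buffer: pointer; ByteSize: integer);
    property Items[Index: integer]: T read GetItem write SetItem; default;
    property Count: integer read FCount;
  end;

constructor TTypedView<T>.Create(Buffer: pointer; ByteSize: integer);
begin
  FBuffer := Buffer;
  FCount := ByteSize div SizeOf(T); // how many T's fit in the buffer
end;

function TTypedView<T>.GetItem(Index: integer): T;
begin
  Move(FBuffer[Index * SizeOf(T)], Result, SizeOf(T));
end;

procedure TTypedView<T>.SetItem(Index: integer; const Value: T);
begin
  Move(Value, FBuffer[Index * SizeOf(T)], SizeOf(T));
end;

With something along these lines, a plain TBytes buffer can be read and written as integers (or any other record type) without first copying it into a typed array.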

I also added FreePascal support, so that the library can be used with TMS Web Framework.


Head over to the Embarcadero Community website to read the full article

C/C++ porting, QTX and general status

March 15, 2020

C is a language that I used to play around with a lot back in the Amiga days. I think the last time I used a C compiler to write a library must have been in 1992 or something like that? I held on to my Amiga 1200 for as long as I could – but having fallen completely in love with Pascal, I eventually switched to x86 and went down the Turbo Pascal road.

Lately, however, C++ developers have been asking for their own developer group on Facebook. I run several groups on Facebook in the so-called “developer” family: Delphi Developer, FPC Developer, Node.js Developer and now C++Builder Developer. The groups more or less tend to themselves, and the Node.js and FPC groups are presently being seeded (meaning that the member count is being grown for a period).

The C++Builder group, however, has almost the same activity level as the Delphi group, thanks to some really good developers who post links and tips and help solve questions. I was also fortunate enough to have David Millington come on the admin team. David is leading the C++Builder project, so his insight and knowledge of both language and product are exemplary. Just like Jim McKeeth, he is a wonderful resource for the community and chimes in with answers to tricky questions whenever he has time to spare.

Getting back in the saddle

Having worked some 30 years with Pascal and Object Pascal, 25 of those years with Delphi, C/C++ is never far away. I have an article on the subject that I've written for the Idera Community website, so I won't dig too deep into that here; but needless to say, Rad Studio consists of two languages: Object Pascal and C/C++, so no matter how much you love either language, the other is never far away.

So I figured it was time for this old dog to learn some new tricks! I have always said that it's wise to learn a language immediately below and above your comfort zone. So if Delphi is your favorite language, then C/C++ is below you (meaning: more low-level and complex), while above you are languages like JavaScript and C#. Learning JavaScript makes strategic sense (or use DWScript to compile Pascal to JavaScript like I do).

When I started out, the immediate language below Object Pascal was never C, but assembler. So for the longest time I turned to assembler whenever I needed a speed boost; graphics manipulation and pixel processing in particular are fields where assembly makes all the difference.

But since C++Builder is indeed an integral part of Rad Studio, and Object Pascal and C/C++ are so intimately connected (they have evolved side by side), why not enjoy both assembly and C, right?

So I decided to jump back into the saddle and see what I could make of it.

C/C++ is not as hard as you think


I’m having a ball writing C/C++, and just like Delphi – you can start where you are.

While I'm not going to rehash the article I have already prepared for the Idera Community pages here, I do want to encourage people to give it a proper try. I have always said that if you know an archetypal language, you can easily pick up other languages, because the archetypal languages will benefit you for a lifetime. This has to do with archetypal languages operating according to how computers really work, whereas optimistic languages (a term borrowed from database work, as in optimistic locking), also called contextual languages, such as C#, Java and JavaScript, are based on how human beings would like things to be.

So I now had a chance to put my money where my mouth is.

When I left C back in the early 90s, I never bothered with OOP. I mean, I used C purely for shared libraries anyway, while the actual programs were done in Pascal or a hybrid language called Blitz Basic. The latter compiled to razor-sharp machine code, and you could use inline assembly, which I used a lot back then (very few programmers on those machines went without assembler; it was almost a given that you could use 68k in some capacity).

Without ruining the article about to be published, I had a great time with C++Builder. It took a few hours to get my bearings, but since both the VCL and FMX frameworks are there, you can approach C/C++ just like you would Object Pascal. So it's really just a matter of getting an overview.

Needless to say, I'll be porting a fair share of my libraries to C/C++ when I have time (those that make sense under that paradigm). It's always good to push yourself, and there are plenty of subtle differences that I found useful.

Quartex Media Desktop

When I last wrote about QTX, we were nearing the completion of the FileSystem and Task Management service. The prototype had all its file-handling directly in the core service (or server), which worked just fine, but it was linked to the Smart Pascal RTL. It has taken time to write a new RTL plus a full multi-user, platform-independent service stack and desktop (phew!), but we are seeing progress!


The QTX Baseline backend services are now largely done

The filesystem service is now largely done! There were a few synchronous calls I wanted to get rid of, but thankfully my framework has both async and sync variations of all file procedures, so that is now finished.

To make that clearer: first I have to wrap and implement the functionality in the RTL. Once the functions are in the RTL, I can use them to build the service functions. So yes, it's been extremely elaborate, but thankfully it has also become a rich, well-organized codebase (both the RTL and the Quartex Media Desktop codebases), so I think we are ready to get cracking on the core!
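
As a simplified illustration of those sync and async pairs, here is roughly what such a wrapper looks like. The class and method names are assumptions made for the example, not the actual RTL API:

type
  TQTXFileExistsCB = procedure (TagValue: variant; Exists: boolean; Error: Exception);

  TQTXFileUtilsSketch = class
  public
    class function FileExists(FileName: string): boolean; overload;
    class procedure FileExists(TagValue: variant; FileName: string;
      const CB: TQTXFileExistsCB); overload;
  end;

class function TQTXFileUtilsSketch.FileExists(FileName: string): boolean;
begin
  // Blocking variant: acceptable during startup, never in the event loop
  result := NodeJsFsAPI().existsSync(FileName);
end;

class procedure TQTXFileUtilsSketch.FileExists(TagValue: variant;
  FileName: string; const CB: TQTXFileExistsCB);
begin
  // Non-blocking variant: node signals "not found" through the error object
  NodeJsFsAPI().lStat(FileName,
  procedure (Error: JError; Stats: JStats)
  begin
    if assigned(CB) then
      CB(TagValue, Error = nil, nil);
  end);
end;

The service functions are then written against wrappers like these, never against the raw node bindings directly.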

The core is still operating with the older API. So our next step is to remove that from the core and instead delegate calls to the filesystem to our new service. So the core will simply be reduced to a post-office or traffic officer if you like. Messages come in from the desktops, and the core delegates the messages to whatever service is in charge of them.

But this also means that both the core and the desktop must use the new and fancy messages. And this is where I did something very clever.

While I was writing the service, I also wrote a client class to test it (obviously). And the way the core works means that the same client the core uses to talk to the services can be used by the desktop as well.

So our work in the desktop, to get file-access and drives running again, is to wrap the client in our TQTXDevice ancestor class. The desktop NEVER accesses the API directly; all it knows about are these device drivers (or object instances). This is how we solve things like Dropbox and Google Drive support: the desktop won't have the faintest clue that it's using Dropbox, or copying files between a local disk and Google Drive, because it only communicates with these device classes.
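
In broad strokes, the device layer looks something like this. This is a condensed sketch; the real TQTXDevice class has far more members, and the signatures below are assumptions:

type
  // Async read callback, following the TagValue + Error convention
  // used throughout the codebase
  TQTXDeviceReadCB = procedure (TagValue: variant; Data: array of UInt8; Error: Exception);

  // The desktop only ever talks to TQTXDevice descendants; where the
  // bytes actually come from is invisible to it
  TQTXDevice = class
  public
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); virtual; abstract;
  end;

  TQTXLocalDevice = class(TQTXDevice)
  public
    // Translates the path and delegates to the filesystem service
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); override;
  end;

  TQTXDropboxDevice = class(TQTXDevice)
  public
    // Same contract, but backed by the Dropbox REST API
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); override;
  end;

Swapping Google Drive in for a local disk then simply means instantiating a different descendant; the desktop code consuming the callback never changes.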

Recursive stuff

One thing that sucked about the node.js function for deleting a folder is that its recursive parameter doesn't work on Windows or OS X. So I had to implement a full recursive delete-folder routine manually. Not a big thing, but slightly more painful than expected under asynchronous execution. Thankfully, Object Pascal allows for inline defined procedures, so I didn't have to isolate it in a separate class.
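
To give you an idea of the shape of such a routine, here is a condensed sketch. It assumes node-style bindings (readdir, lStat, unlink, rmdir) on the NodeJsFsAPI() interface that appears in the listing below, and it uses the TQTXFileStateCB callback type declared there; the routine in the RTL does more bookkeeping, but the structure is the same:

procedure DeleteFolderRecursive(Path: string; const CB: TQTXFileStateCB);
begin
  NodeJsFsAPI().readdir(Path,
  procedure (Error: JError; Files: array of string)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(nil, EException.Create(Error.message));
      exit;
    end;

    var lPending := Files.Length;

    // Empty folder: remove it directly
    if lPending = 0 then
    begin
      NodeJsFsAPI().rmdir(Path, procedure (Error: JError)
      begin
        if assigned(CB) then
        begin
          if Error <> nil then
            CB(nil, EException.Create(Error.message))
          else
            CB(nil, nil);
        end;
      end);
      exit;
    end;

    // Fires once per entry; removes the folder itself when all are gone
    var EntryDone := procedure (EntryError: Exception)
    begin
      if EntryError <> nil then
      begin
        if assigned(CB) then
          CB(nil, EntryError);
        exit;
      end;
      dec(lPending);
      if lPending = 0 then
        NodeJsFsAPI().rmdir(Path, procedure (Error: JError)
        begin
          if assigned(CB) then
          begin
            if Error <> nil then
              CB(nil, EException.Create(Error.message))
            else
              CB(nil, nil);
          end;
        end);
    end;

    for var lEntry in Files do
    begin
      var lFull := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(Path) + lEntry;
      NodeJsFsAPI().lStat(lFull,
      procedure (Error: JError; Stats: JStats)
      begin
        if Error <> nil then
        begin
          EntryDone(EException.Create(Error.message));
          exit;
        end;

        if Stats.isDirectory() then
          // Sub-folder: recurse before reporting this entry as done
          DeleteFolderRecursive(lFull,
          procedure (TagValue: variant; Error: Exception)
          begin
            EntryDone(Error);
          end)
        else
          NodeJsFsAPI().unlink(lFull, procedure (Error: JError)
          begin
            if Error <> nil then
              EntryDone(EException.Create(Error.message))
            else
              EntryDone(nil);
          end);
      end);
    end;
  end);
end;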

Here is some of the code, a tiny speck compared to the full shebang, but it gives you an idea of what life is like under async conditions:

unit service.file.core;

interface

{.$DEFINE DEBUG}

const
  CNT_PREFS_DEFAULTPORT     = 1883;
  CNT_PREFS_FILENAME        = 'QTXTaskManager.preferences.ini';
  CNT_PREFS_DBNAME          = 'taskdata.db';

  CNT_ZCONFIG_SERVICE_NAME  = 'TaskManager';

uses
  qtx.sysutils,
  qtx.json,
  qtx.db,
  qtx.logfile,
  qtx.orm,
  qtx.time,

  qtx.node.os,
  qtx.node.sqlite3,
  qtx.node.zconfig,
  qtx.node.cluster,

  qtx.node.core,
  qtx.node.filesystem,
  qtx.node.filewalker,
  qtx.fileapi.core,

  qtx.network.service,
  qtx.network.udp,

  qtx.inifile,
  qtx.node.inifile,

  NodeJS.child_process,

  ragnarok.types,
  ragnarok.Server,
  ragnarok.messages.base,
  ragnarok.messages.factory,
  ragnarok.messages.network,

  service.base,
  service.dispatcher,
  service.file.messages;

type

  TQTXTaskServiceFactory = class(TMessageFactory)
  protected
    procedure RegisterIntrinsic; override;
  end;

  TQTXFileWriteCB = procedure (TagValue: variant; Error: Exception);
  TQTXFileStateCB = procedure (TagValue: variant; Error: Exception);

  TQTXUnRegisterLocalDeviceCB = procedure (TagValue: variant; DiskName: string; Error: Exception);
  TQTXRegisterLocalDeviceCB = procedure (TagValue: variant; LocalPath: string; Error: Exception);
  TQTXFindDeviceCB = procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception);
  TQTXGetDisksCB = procedure (TagValue: variant; Devices: JDeviceList; Error: Exception);

  TQTXGetFileInfoCB = procedure (TagValue: variant; LocalName: string; Info: JStats; Error: Exception);
  TQTXGetTranslatePathCB = procedure (TagValue: variant; Original, Translated: string; Error: Exception);

  TQTXCheckDevicePathCB = procedure (TagValue: variant; PathName: string; Error: Exception);

  TQTXServerExecuteCB = procedure (TagValue: variant; Data: string; Error: Exception);

  TQTXTaskService = class(TRagnarokService)
  private
    FPrefs:     TQTXIniFile;
    FLog:       TQTXLogEmitter;
    FDatabase:  TSQLite3Database;

    FZConfig:   TQTXZConfigClient;
    FRegHandle: TQTXDispatchHandle;
    FRegCount:  integer;

    procedure   HandleGetDevices(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetDeviceByName(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleCreateLocalDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleDestroyDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileRead(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileReadPartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetFileInfo(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileDelete(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   HandleFileWrite(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileWritePartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileRename(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   HandleMkDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleRmDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   ExecuteExternalJS(Params: array of string;
      TagValue: variant; const CB: TQTXServerExecuteCB);

    procedure   SendError(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage; Message: string);

  protected
    function    GetFactory: TMessageFactory; override;
    procedure   SetupPreferences(const CB: TRagnarokServiceCB);
    procedure   SetupLogfile(LogFileName: string;const CB: TRagnarokServiceCB);
    procedure   SetupDatabase(const CB: TRagnarokServiceCB);

    procedure   ValidateLocalDiskName(TagValue: variant; Username, DeviceName: string; CB: TQTXCheckDevicePathCB);
    procedure   RegisterLocalDevice(TagValue: variant; Username, DiskName: string; CB: TQTXRegisterLocalDeviceCB);
    procedure   UnRegisterLocalDevice(TagValue: variant; UserName, DiskName:string; CB: TQTXUnRegisterLocalDeviceCB);

    procedure   GetDevicesForUser(TagValue: variant; UserName: string; CB: TQTXGetDisksCB);
    procedure   FindDeviceByName(TagValue: variant; UserName, DiskName: string; CB: TQTXFindDeviceCB);
    procedure   FindDeviceByType(TagValue: variant; UserName: string; &Type: JDeviceType; CB: TQTXGetDisksCB);

    procedure   GetTranslatedPathFor(TagValue: variant; Username, FullPath: string; CB: TQTXGetTranslatePathCB);

    procedure   GetFileInfo(TagValue: variant; UserName: string; FullPath: string; CB: TQTXGetFileInfoCB);

    procedure   SetupTaskTable(const TagValue: variant; const CB: TRagnarokServiceCB);
    procedure   SetupOperationsTable(const TagValue: variant; const CB: TRagnarokServiceCB);
    procedure   SetupDeviceTable(const TagValue: variant; const CB: TRagnarokServiceCB);

    procedure   AfterServerStarted; override;
    procedure   BeforeServerStopped; override;
    procedure   Dispatch(Socket: TNJWebSocketSocket; Message: TQTXBaseMessage); override;

  public
    property    Preferences: TQTXIniFile read FPrefs;
    property    Database: TSQLite3Database read FDatabase;

    procedure   SetupService(const CB: TRagnarokServiceCB);

    constructor Create; override;
    destructor  Destroy; override;
  end;


implementation

//#############################################################################
// TQTXTaskServiceFactory
//#############################################################################

procedure TQTXTaskServiceFactory.RegisterIntrinsic;
begin
  writeln("Registering task interface");
  &Register(TQTXFileGetDeviceListRequest);
  &Register(TQTXFileGetDeviceByNameRequest);
  &Register(TQTXFileCreateLocalDeviceRequest);
  &Register(TQTXFileDestroyDeviceRequest);
  &Register(TQTXFileReadPartialRequest);
  &Register(TQTXFileReadRequest);
  &Register(TQTXFileWritePartialRequest);
  &Register(TQTXFileWriteRequest);
  &Register(TQTXFileDeleteRequest);
  &Register(TQTXFileRenameRequest);
  &Register(TQTXFileInfoRequest);
  &Register(TQTXFileDirRequest);
  &Register(TQTXMkDirRequest);
  &Register(TQTXRmDirRequest);
end;

//#############################################################################
// TQTXTaskService
//#############################################################################

constructor TQTXTaskService.Create;
begin
  inherited Create;
  FPrefs := TQTXIniFile.Create();
  FLog := TQTXLogEmitter.Create();
  FDatabase := TSQLite3Database.Create(nil);

  FZConfig := TQTXZConfigClient.Create();
  FZConfig.Port := 2292;

  self.OnUserSignedOff := procedure (Sender: TObject; Username: string)
  begin
    WriteToLogF("We got a service signal! User [%s] has signed off completely", [Username]);
  end;

  MessageDispatch.RegisterMessage(TQTXFileGetDeviceListRequest, @HandleGetDevices);
  MessageDispatch.RegisterMessage(TQTXFileGetDeviceByNameRequest, @HandleGetDeviceByName);
  MessageDispatch.RegisterMessage(TQTXFileCreateLocalDeviceRequest, @HandleCreateLocalDevice);
  MessageDispatch.RegisterMessage(TQTXFileDestroyDeviceRequest, @HandleDestroyDevice);

  MessageDispatch.RegisterMessage(TQTXFileReadRequest, @HandleFileRead);
  MessageDispatch.RegisterMessage(TQTXFileReadPartialRequest, @HandleFileReadPartial);

  MessageDispatch.RegisterMessage(TQTXFileWriteRequest, @HandleFileWrite);
  MessageDispatch.RegisterMessage(TQTXFileWritePartialRequest, @HandleFileWritePartial);

  MessageDispatch.RegisterMessage(TQTXFileInfoRequest, @HandleGetFileInfo);
  MessageDispatch.RegisterMessage(TQTXFileDeleteRequest, @HandleFileDelete);

  MessageDispatch.RegisterMessage(TQTXMkDirRequest, @HandleMkDir);
  MessageDispatch.RegisterMessage(TQTXRmDirRequest, @HandleRmDir);
  MessageDispatch.RegisterMessage(TQTXFileRenameRequest, @HandleFileRename);

  MessageDispatch.RegisterMessage(TQTXFileDirRequest, @HandleGetDir);
end;

destructor TQTXTaskService.Destroy;
begin
  // decouple logger from our instance
  self.logging := nil;

  // Release prefs + log
  FPrefs.free;
  FLog.free;
  FZConfig.free;
  FDatabase.free;
  inherited;
end;

procedure TQTXTaskService.SendError(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage; Message: string);
begin
  var reply := TQTXErrorMessage.Create(request.ticket);
  try
    reply.Code := CNT_MESSAGE_CODE_ERROR;
    reply.Routing.TagValue := Request.Routing.TagValue;
    reply.Response := Message;

    if Socket.ReadyState = rsOpen then
    begin
      try
        Socket.Send( reply.Serialize() );
      except
        on e: exception do
        WriteToLog(e.message);
      end;
    end else
      WriteToLog("Failed to dispatch error, socket is closed error");
  finally
    reply.free;
  end;
end;

procedure TQTXTaskService.ExecuteExternalJS(Params: array of string;
  TagValue: variant; const CB: TQTXServerExecuteCB);
begin
  var LTask: JChildProcess;

  var lOpts := TVariant.CreateObject();
  lOpts.shell := false;
  lOpts.detached := true;

  Params.insert(0, '--no-warnings');

  // Spawn a new process, this creates a new shell interface
  try
    LTask := child_process().spawn('node', Params, lOpts );
  except
    on e: exception do
    begin
      if assigned(CB) then
        CB(TagValue, e.message, e);
      exit;
    end;
  end;

  // Map general errors on process level
  LTask.on('error', procedure (error: variant)
  begin
    {$IFDEF DEBUG}
    writeln("error->" + error.toString());
    {$ENDIF}
    WriteToLog(error.toString());

    if assigned(CB) then
      CB(TagValue, "", EException.Create(error.toString()));
  end);

  // map stdout so we capture the output
  LTask.stdout.on('data', procedure (data: variant)
  begin
    if assigned(CB) then
      CB(TagValue, data.toString(), nil);
  end);

  // map stderr so we can capture exception messages
  LTask.stderr.on('data', procedure (error:variant)
  begin
    {$IFDEF DEBUG}
    writeln("stdErr->" + error.toString());
    {$ENDIF}

    if assigned(CB) then
      CB(TagValue, "", nil);

    WriteToLog(error.toString());
  end);
end;

function TQTXTaskService.GetFactory: TMessageFactory;
begin
  result := TQTXTaskServiceFactory.Create();
end;

procedure TQTXTaskService.SetupService(const CB: TRagnarokServiceCB);
begin
  SetupPreferences( procedure (Error: Exception)
  begin
    // No logfile yet setup (!)
    if Error <> nil then
    begin
      WriteToLog("Preferences setup: Failed!");
      WriteToLog(error.message);
      raise error;
    end else
    WriteToLog("Preferences setup: OK");

    // logfile-name is always relative to the executable
    var LLogFileName := TQTXNodeFileUtils.IncludeTrailingPathDelimiter( TQTXNodeFileUtils.GetCurrentDirectory );
    LLogFileName += FPrefs.ReadString('log', 'logfile', 'log.txt');

    // Port is defined in the ancestor, so we assign it here
    Port := FPrefs.ReadInteger('networking', 'port', CNT_PREFS_DEFAULTPORT);

    SetupLogfile(LLogFileName, procedure (Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog("Logfile setup: Failed!");
        WriteToLog(error.message);
        raise error;
      end else
      WriteToLog("Logfile setup: OK");

      SetupDatabase( procedure (Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog("Database setup: Failed!");
          WriteToLog(error.message);
          if assigned(CB) then
            CB(Error)
          else
            raise Error;
        end else
        WriteToLog("Database setup: OK");

        if assigned(CB) then
          CB(nil);
      end);

    end);
  end);
end;

procedure TQTXTaskService.SetupPreferences(const CB: TRagnarokServiceCB);
begin
  var lBasePath := TQTXNodeFileUtils.GetCurrentDirectory;
  var LPrefsFile := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + CNT_PREFS_FILENAME;

  if TQTXNodeFileUtils.FileExists(LPrefsFile) then
  begin
    FPrefs.LoadFromFile(nil, LPrefsFile, procedure (TagValue: variant; Error: Exception)
    begin
      if Error <> nil then
      begin
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end;

      if assigned(CB) then
        CB(nil);
    end);

  end else
  begin
    var LError := Exception.Create('Could not locate preferences file: ' + LPrefsFile);
    WriteToLog(LError.message);
    if assigned(CB) then
      CB(LError)
    else
      raise LError;
  end;
end;

procedure TQTXTaskService.SetupLogfile(LogFileName: string;const CB: TRagnarokServiceCB);
begin
  // Attempt to open logfile
  // Note: Log-object error options are set to throw exceptions
  try
    FLog.Open(LogFileName);
  except
    on e: exception do
    begin
      if assigned(CB) then
      begin
        CB(e);
        exit;
      end else
      begin
        writeln(e.message);
        raise;
      end;
    end;
  end;

  // We inherit from TQTXLogObject, which means we can pipe
  // all errors etc directly using built-in functions. So here
  // we simply glue our instance to the log-file, and its all good
  self.Logging := FLog as IQTXLogClient;

  if assigned(CB) then
    CB(nil);
end;

procedure TQTXTaskService.FindDeviceByType(TagValue: variant; UserName: string; &Type: JDeviceType; CB: TQTXGetDisksCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to lookup disk, username was invalid error");
    var lError := EException.Create("Failed to lookup devices, invalid username");
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  GetDevicesForUser(TagValue, Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    var x := 0;
    while x < Devices.dlDrives.Count do
    begin
      if Devices.dlDrives[x].&Type <> &Type then
      begin
        Devices.dlDrives.delete(x, 1);
        continue;
      end;
      inc(x);
    end;

    if assigned(CB) then
      CB(TagValue, Devices, nil);
  end);
end;

procedure TQTXTaskService.FindDeviceByName(TagValue: variant; Username, DiskName: string; CB: TQTXFindDeviceCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    var lLogText := "Failed to lookup device, username was invalid error";
    WriteToLog(lLogText);
    var lError := EException.Create(lLogText);
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  DiskName := DiskName.trim();
  if DiskName.length < 1 then
  begin
    var lLogText := "Failed to lookup device, disk-name was invalid error";
    WriteToLog(lLogText);
    var lError := EException.Create(lLogText);
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  GetDevicesForUser(TagValue, Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    DiskName := DiskName.trim().ToLower();
    var lDiskInfo: JDeviceInfo := nil;


    for var disk in Devices.dlDrives do
    begin
      if disk.Name.ToLower() = DiskName then
      begin
        lDiskInfo := disk;
        break;
      end;
    end;

    if assigned(CB) then
      CB(TagValue, lDiskInfo, nil);
  end);
end;

procedure TQTXTaskService.GetTranslatedPathFor(TagValue: variant; Username, FullPath: string; CB: TQTXGetTranslatePathCB);
begin
  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(TagValue, UserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          if assigned(CB) then
            CB(TagValue, FullPath, '', Error)
          else
            raise Error;
          exit;
        end;

        if Device.&Type <> dtLocal then
        begin
          var lError := EException.CreateFmt('Failed to translate path, device [%s] is not local error', [Device.Name]);
          if assigned(CB) then
            CB(TagValue, FullPath, '', lError)
          else
            raise lError;
          exit;
        end;

        // We want the path + filename, so we can append that to
        // the actual localized filesystem
        var lExtract := FullPath;
        delete(lExtract, 1, lInfo.MountPart.Length + 1);

        // Construct complete storage location
        var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

        // Return translated path
        if assigned(CB) then
          CB(TagValue, FullPath, lFullPath, nil);

      end);
    end else
    begin
      var lErr := EException.CreateFmt("Invalid path [%s] error", [FullPath]);
      if assigned(CB) then
        CB(TagValue, FullPath, '', lErr)
      else
        raise lErr;
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.GetFileInfo(TagValue: variant; UserName, FullPath: string; CB: TQTXGetFileInfoCB);
begin
  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(TagValue, UserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          if assigned(CB) then
            CB(TagValue, '', nil, Error)
          else
            raise Error;
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Call the underlying OS to get the file statistics
            NodeJsFsAPI().lStat(lFullPath,
            procedure (Error: JError; Stats: JStats)
            begin
              if Error <> nil then
              begin
                var lError := EException.Create(Error.message);
                if assigned(CB) then
                  CB(TagValue, lFullPath, nil, lError)
                else
                  raise lError;
                exit;
              end;

              // And deliver
              if assigned(CB) then
                CB(TagValue, lFullPath, Stats, nil);
            end);
          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lError := EException.Create("Cloud bindings not activated error");
            if assigned(CB) then
              CB(TagValue, '', nil, lError)
          end;
        end;
      end);
    end else
    begin
      var lErr := EException.CreateFmt("Invalid path [%s] error", [FullPath]);
      if assigned(CB) then
        CB(TagValue, '', nil, lErr)
      else
        raise lErr;
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.GetDevicesForUser(TagValue: variant; Username: string; CB: TQTXGetDisksCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to lookup devices, username was invalid error");
    var lError := EException.Create("Failed to lookup devices, invalid username");
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  var lTransaction: TQTXReadTransaction;
  if not TSQLite3Database(DataBase).CreateReadTransaction(lTransaction) then
  begin
    var lErr := EException.Create("Failed to create read-transaction error");
    if assigned(cb) then
      CB(TagValue, nil, lErr)
    else
      raise lErr;
    exit;
  end;

  var lQuery := TSQLite3ReadTransaction(lTransaction);
  lQuery.SQL := "select * from devices where owner=?";
  lQuery.Parameters.AddValueOnly(Username);

  lQuery.Execute(
  procedure (Sender: TObject; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    var lDisks := new JDeviceList();
    lDisks.dlUser := UserName;

    for var x := 0 to lQuery.datarows.length-1 do
    begin
      var lInfo := new JDeviceInfo();
      lInfo.Name := lQuery.datarows[x]["name"];
      lInfo.&Type := JDeviceType( lQuery.datarows[x]["type"] );
      lInfo.owner := lQuery.datarows[x]["owner"];
      lInfo.location := lQuery.datarows[x]["location"];
      lInfo.APIKey := lQuery.datarows[x]["apikey"];
      lInfo.APISecret := lQuery.datarows[x]["apisecret"];
      lInfo.APIPassword := lQuery.datarows[x]["apipassword"];
      lInfo.APIUser := lQuery.datarows[x]["apiuser"];
      lDisks.dlDrives.add(lInfo);
    end;

    try
      if assigned(CB) then
        CB(TagValue, lDisks, nil);
    finally
      lQuery.free;
    end;
  end);
end;

procedure TQTXTaskService.ValidateLocalDiskName(TagValue: variant; Username, DeviceName: string; CB: TQTXCheckDevicePathCB);
begin
  var Filename := 'disk.' + username + '.' + DeviceName + '.' + ord(JDeviceType.dtLocal).ToString();

  var LBasePath := TQTXNodeFileUtils.GetCurrentDirectory();
  LBasePath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + 'userdevices';

  // Make sure the device folder is there
  if not TQTXNodeFileUtils.DirectoryExists(LBasePath) then
  begin
    var lError := EException.CreateFmt("Directory not found: %s", [lBasePath]);
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  lBasePath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + Filename;

  if TQTXNodeFileUtils.DirectoryExists(LBasePath) then
  begin
    var lError := EException.CreateFmt("Path already exists error [%s]", [lBasePath]);
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  // OK, folder is not created yet, so it's good to go
  if assigned(CB) then
    CB(TagValue, Filename, nil);
end;

procedure TQTXTaskService.UnRegisterLocalDevice(TagValue: variant; UserName, DiskName: string; CB: TQTXUnRegisterLocalDeviceCB);
begin
  WriteToLogF("Removing local device [%s] for user [%s] ", [DiskName, Username]);

  // Check username string
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to unregister device, username was invalid error");
    var lError := EException.Create("Failed to unregister device, invalid username");
    if assigned(CB) then
      CB(TagValue, DiskName, lError)
    else
      raise lError;
    exit;
  end;

  // Check diskname string
  DiskName := DiskName.trim().ToLower();
  if DiskName.length < 1 then
  begin
    WriteToLog("Failed to unregister device, disk-name was invalid error");
    var lError := EException.Create("Failed to unregister device, invalid disk-name");
    if assigned(CB) then
      CB(TagValue, DiskName, lError)
    else
      raise lError;
    exit;
  end;

  FindDeviceByName(TagValue, Username, DiskName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    // Did the search fail?
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      if assigned(CB) then
        CB(TagValue, DiskName, Error)
      else
        raise Error;
      exit;
    end;

    // Make sure the device is local
    if Device.&Type <> dtLocal then
    begin
      var lError := EException.CreateFmt('Failed to translate path, device [%s] is not local error', [Device.Name]);
      if assigned(CB) then
        CB(TagValue, DiskName, lError)
      else
        raise lError;
      exit;
    end;

    // Delete record from database
    var lWriter: TQTXWriteTransaction;
    if FDatabase.CreateWriteTransaction(lWriter) then
    begin
      lWriter.SQL := "delete from devices where owner = ? and name = ?;";
      lWriter.Parameters.AddValueOnly(Username);
      lWriter.Parameters.AddValueOnly(DiskName);

      lWriter.Execute(
      procedure (Sender: TObject; Error: Exception)
      begin
        try

          if Error <> nil then
          begin
            if assigned(CB) then
              CB(TagValue, DiskName, Error)
            else
              raise Error;
            exit;
          end;

          // Construct complete storage location
          var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
          lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
          lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();

          // Now delete the disk-drive directory
          TQTXNodeFileUtils.DeleteDirectory(nil, lFullPath,
          procedure (TagValue: variant; Path: string; Error: Exception)
          begin
            if assigned(CB) then
              CB(TagValue, DiskName, Error)
          end);

        finally
          lWriter.free;
          lWriter := nil;
        end;
      end);
    end;
  end);
end;

procedure TQTXTaskService.RegisterLocalDevice(TagValue: variant; Username, DiskName: string; CB: TQTXRegisterLocalDeviceCB);
begin
  WriteToLogF("Adding local device [%s] for user [%s] ", [DiskName, Username]);

  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to register device, username was invalid error");
    var lError := EException.Create("Failed to register device, invalid username");
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  DiskName := DiskName.trim().ToLower();
  if DiskName.length < 1 then
  begin
    WriteToLog("Failed to register device, disk-name was invalid error");
    var lError := EException.Create("Failed to register device, invalid disk-name");
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  FindDeviceByName(TagValue, Username, DiskName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    // Did the search fail?
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      if assigned(CB) then
        CB(TagValue, '', Error)
      else
        raise Error;
      exit;
    end;

    // Does a device that match already exist?
    if Device <> nil then
    begin
      var lError := EException.CreateFmt("Failed to create device [%s], device already exists", [DiskName]);
      if assigned(CB) then
        CB(TagValue, '', lError)
      else
        raise lError;
      exit;
    end;

    //  make sure the device-folder does not exist, so we can create it
    ValidateLocalDiskName(TagValue, Username, DiskName,
    procedure (TagValue: variant; PathName: string; Error: Exception)
    begin
      if Error <> nil then
      begin
        if assigned(CB) then
          CB(TagValue, '', Error)
        else
          raise Error;
        exit;
      end;

      // ValidateLocalDiskName only returns the valid directory-name,
      // not a full path -- so we need to build up the full targetpath
      var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
      lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
      lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + PathName;

      TQTXNodeFileUtils.CreateDirectory(nil, lFullPath,
      procedure (TagValue: variant; Path: string; Error: exception)
      begin
        if Error <> nil then
        begin
          var lError := EException.CreateFmt("Failed to create device [%s] with path: %s", [DiskName, lFullPath]);
          if assigned(CB) then
            CB(TagValue, PathName, lError)
          else
            raise lError;
          exit;
        end;

        FDatabase.Execute(
          #'insert into devices (type, owner, name, location)
            values(?, ?, ?, ?);',
            [ord(JDeviceType.dtLocal), UserName, Diskname, PathName] ,
        procedure (Sender: TObject; Error: Exception)
        begin
          if Error <> nil then
          begin
            WriteToLog(Error.message);
            if assigned(CB) then
              CB(TagValue, PathName, Error)
            else
              raise Error;
            exit;
          end;

          WriteToLogF("Device [%s] added to database user [%s]", [DiskName, UserName]);
          if assigned(CB) then
            CB(TagValue, PathName, nil);
        end);

      end);

    end);
  end);
end;

procedure TQTXTaskService.SetupDeviceTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin

  FDatabase.Execute(
    #'
      create table if not exists devices
          (
            id integer primary key AUTOINCREMENT,
            type        integer,
            owner       text,
            name        text,
            location    text,
            apikey      text,
            apisecret   text,
            apipassword text,
            apiuser     text
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;

procedure TQTXTaskService.SetupTaskTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin

  FDatabase.Execute(
    #'
      create table if not exists tasks
          (
            id integer primary key AUTOINCREMENT,
            state     integer,
            username  text,
            created   real
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;


procedure TQTXTaskService.SetupOperationsTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin
  FDatabase.Execute(
    #'
      create table if not exists operations
          (
            id integer primary key AUTOINCREMENT,
            username text,
            taskid integer,
            name text,
            filename text
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;

procedure TQTXTaskService.SetupDatabase(const CB: TRagnarokServiceCB);
begin
  // Try to read database-path from preferences file
  var LDbFileToOpen := FPrefs.ReadString("database", "database_name", "");

  // Trim away spaces, check if there is a filename
  LDbFileToOpen := LDbFileToOpen.trim();
  if LDbFileToOpen.length < 1 then
  begin
    // No filename? Fall back on pre-defined file in CWD
    var LBasePath := TQTXNodeFileUtils.GetCurrentDirectory();
    LDbFileToOpen := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + CNT_PREFS_DBNAME;
  end;

  FDatabase.AccessMode := TSQLite3AccessMode.sqaReadWriteCreate;
  FDatabase.Open(LDbFileToOpen,
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end;

      WriteToLog("Initializing task table");
      SetupTaskTable(nil, procedure (Error: exception)
      begin
        if Error <> nil then
        begin
          WriteToLog("Tasks initialized: **failed");
          WriteToLog(error.message);
          if assigned(CB) then
            CB(Error)
          else
            raise Error;
          exit;
        end else
        writeToLog("Tasks initialized: OK");

        WriteToLog("Initializing operations table");
        SetupOperationsTable(nil, procedure (Error: exception)
        begin
          if Error <> nil then
          begin
            WriteToLog("Operations initialized: **failed");
            WriteToLog(error.message);
            if assigned(CB) then
              CB(Error);
            exit;
          end else
          writeToLog("Operations initialized: OK");

          WriteToLog("Initializing device table");
          SetupDeviceTable(nil, procedure (Error: exception)
          begin
            if Error <> nil then
            begin
              WriteToLog("Device-table initialized: **failed");
              WriteToLog(error.message);
              if assigned(CB) then
                CB(Error);
              exit;
            end else
            writeToLog("Device-table initialized: OK");

            if assigned(CB) then
              CB(nil);
          end);
        end);
      end);
    end);
end;


procedure TQTXTaskService.HandleFileRead(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileReadRequest(request);
  var lUserName := lRequest.UserName;
  var lFileName := lRequest.FileName;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Read failed, invalid filename error");
    exit;
  end;

  // Check for path-traversal sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lOptions: TReadFileOptions;
    lOptions.encoding := 'binary';

    NodeJsFsAPI().readFile(LocalFile, lOptions,
    procedure (Error: JError; Data: JNodeBuffer)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var lResponse := TQTXFileReadResponse.Create(Request.Ticket);
      lResponse.UserName := lUserName;
      lResponse.Routing.TagValue := request.routing.tagValue;
      lResponse.FileName := lFileName;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;

      // Convert filedata in one pass
      try
        var lConvert := TDataTypeConverter.Create();
        try
          lResponse.Attachment.AppendBytes( lConvert.TypedArrayToBytes(Data) );
        finally
          lConvert.free;
        end;
      except
        on e: exception do
        begin
          WriteToLog(e.message);
          SendError(Socket, Request, e.Message);
          exit;
        end;
      end;

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;
    end);
  end);
end;

procedure TQTXTaskService.HandleFileReadPartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileReadPartialRequest(request);
  var lUserName := lRequest.UserName;
  var lFileName := lRequest.FileName;
  var lStart := lRequest.Offset;
  var lSize := lRequest.Size;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Read failed, invalid filename error");
    exit;
  end;

  // Check for path-traversal sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if lSize < 1 then
  begin
    SendError(Socket, Request, "Read failed, invalid size error");
    exit;
  end;

  if lStart < 0 then
  begin
    SendError(Socket, Request, "Read failed, invalid offset error");
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if lStart > Info.size then
    begin
      SendError(Socket, Request, "Read failed, offset beyond filesize error");
      exit;
    end;

    NodeJsFsAPI().open(LocalFile, "r",
    procedure (Error: JError; Fd: THandle)
    begin
      if error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var Data := new JNodeBuffer(lSize);
      NodeJsFsAPI().read(Fd, Data, 0, lSize, lStart,
      procedure (Error: JError; BytesRead: integer; buffer: JNodeBuffer)
      begin
        if Error <> nil then
        begin
          NodeJsFsAPI().closeSync(Fd);
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        // Close the file-handle and return data
        NodeJsFsAPI().close(Fd, procedure (Error: JError)
        begin
          var lResponse := TQTXFileReadPartialResponse.Create(Request.Ticket);
          lResponse.UserName := lUserName;
          lResponse.Routing.TagValue := request.routing.tagValue;
          lResponse.FileName := lFileName;
          lResponse.Code := CNT_MESSAGE_CODE_OK;
          lResponse.Response := CNT_MESSAGE_TEXT_OK;

          // Only encode data if read
          if BytesRead > 0 then
          begin
            // Convert filedata in one pass
            try
              var lConvert := TDataTypeConverter.Create();
              try
                lResponse.Attachment.AppendBytes( lConvert.TypedArrayToBytes(buffer) );
              finally
                lConvert.free;
              end;
            except
              on e: exception do
              begin
                WriteToLog(e.message);
                SendError(Socket, Request, e.Message);
                exit;
              end;
            end;
          end;

          try
            Socket.Send( lResponse.Serialize() );
          except
            on e: exception do
              WriteToLog(e.message);
          end;

        end);
      end);
    end);
  end);
end;

procedure TQTXTaskService.HandleFileWrite(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest  := TQTXFileWriteRequest(request);
  var lFileName := lRequest.FileName.trim();
  var lUserName := lRequest.UserName.trim();

  var FullPath  := lFileName;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Write failed, invalid filename error");
    exit;
  end;

  // Check for path-traversal sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(nil, lUserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.Message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Extract data to be appended, if any
            // note: null bytes should be allowed, it should just create the file
            var lBytes: array of UInt8;
            if lRequest.attachment.Size > 0 then
              lBytes := lRequest.Attachment.ToBytes();

            // Write the data to the file
            NodeJsFsAPI().writeFile(lFullPath, lBytes,
            procedure (Error: JError)
            begin
              if Error <> nil then
              begin
                WriteToLog(Error.Message);
                SendError(Socket, Request, Error.Message);
                exit;
              end;

              // Setup response object
              var lResponse := TQTXFileWriteResponse.Create(lRequest.Ticket);
              lResponse.UserName := lUserName;
              lResponse.FileName := lFileName;
              lResponse.Code := CNT_MESSAGE_CODE_OK;
              lResponse.Response := CNT_MESSAGE_TEXT_OK;

              // Send success response
              try
                Socket.Send( lResponse.Serialize() );
              except
                on e: exception do
                  WriteToLog(e.message);
              end;

            end);

          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lErrorText := Format("Cloud bindings not active error [%s]", [lRequest.FileName]);
            WriteToLog(lErrorText);
            SendError(Socket, Request, lErrorText);
          end;
        end;
      end);
    end else
    begin
      SendError(Socket, Request, format("Invalid path [%s] error", [FullPath]));
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleFileWritePartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest  := TQTXFileWritePartialRequest(request);
  var lFileName  := lRequest.FileName.trim();
  var lUserName := lRequest.UserName.trim();
  var lFileOffset := lRequest.Offset;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Write failed, invalid filename error");
    exit;
  end;

  // Check for path-traversal sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var FullPath := lFileName;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(nil, lUserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.Message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Extract data to be appended, if any
            // note: null bytes should be allowed, it should just create the file
            var lBytes: array of UInt8;
            if lRequest.attachment.Size > 0 then
              lBytes := lRequest.Attachment.ToBytes();

            var lAccess := TQTXNodeFile.Create();
            lAccess.Open(lFullPath, TQTXNodeFileMode.nfWrite,
            procedure (Error: Exception)
            begin
              if Error <> nil then
              begin
                WriteToLog(Error.Message);
                SendError(Socket, Request, Error.Message);
                exit;
              end;

              lAccess.Write(lBytes, lFileOffset,
              procedure (Error: Exception)
              begin
                if Error <> nil then
                begin
                  WriteToLog(Error.Message);
                  SendError(Socket, Request, Error.Message);
                  exit;
                end;

                // Setup response object
                var lResponse := TQTXFileWriteResponse.Create(lRequest.Ticket);
                lResponse.UserName := lUserName;
                lResponse.FileName := lFileName;
                lResponse.Code := CNT_MESSAGE_CODE_OK;
                lResponse.Response := CNT_MESSAGE_TEXT_OK;

                // Send success response
                try
                  Socket.Send( lResponse.Serialize() );
                except
                  on e: exception do
                    WriteToLog(e.message);
                end;

              end);
            end);
          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lErrorText := Format("Cloud bindings not active error [%s]", [lRequest.FileName]);
            WriteToLog(lErrorText);
            SendError(Socket, Request, lErrorText);
          end;
        end;
      end);
    end else
    begin
      SendError(Socket, Request, format("Invalid path [%s] error", [FullPath]));
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleRmDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXRmDirRequest(request);
  var lUserName := lRequest.UserName.trim();
  var lDirPath := lRequest.DirPath.trim();

  if lDirPath.length < 1 then
  begin
    SendError(Socket, Request, "RmDir failed, invalid path error");
    exit;
  end;

  // Check for path-traversal sequences
  var lTemp := '..';
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lParser.Parse(lDirPath, lInfo) then
    begin
      GetTranslatedPathFor(nil, lUserName, lDirPath,
      procedure (TagValue: variant; Original, Translated: string; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        if not TQTXNodeFileUtils.DirectoryExists(Translated) then
        begin
          WriteToLogF("RmDir Failed, directory [%s] does not exist", [Translated]);
          SendError(Socket, Request, Format("RmDir failed, directory [%s] does not exist", [Original]));
          exit;
        end;

        TQTXNodeFileUtils.DeleteDirectory(nil, Translated,
        procedure (TagValue: variant; Path: string; Error: Exception)
        begin
          if error <> nil then
          begin
            WriteToLog(Error.message);
            SendError(Socket, Request, Error.Message);
            exit;
          end;

          // Setup response object
          var lResponse := TQTXRmDirResponse.Create(lRequest.Ticket);
          lResponse.UserName := lUserName;
          lResponse.DirPath := lDirPath;
          lResponse.Code := CNT_MESSAGE_CODE_OK;
          lResponse.Response := CNT_MESSAGE_TEXT_OK;
          lResponse.Routing.TagValue := lRequest.Routing.TagValue;

          // Send success response
          try
            Socket.Send( lResponse.Serialize() );
          except
            on e: exception do
              WriteToLog(e.message);
          end;
        end);
      end);
    end else
    begin
      var lText := format("RmDir failed, invalid path [%s] error", [lDirPath]);
      WriteToLog(lText);
      SendError(Socket, Request, lText);
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleMkDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXMkDirRequest(request);
  var lUserName := lRequest.UserName.trim();
  var lDirPath := lRequest.DirPath.trim();

  if lDirPath.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty path [%s] error", [lDirPath]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(lDirPath, lInfo) then
    begin
      GetTranslatedPathFor(nil, lUserName, lDirPath,
      procedure (TagValue: variant; Original, Translated: string; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        TQTXNodeFileUtils.DirectoryExists(nil, Translated,
        procedure (TagValue: variant; Path: string; Error: Exception)
        begin
          if Error <> nil then
          begin
            WriteToLogF("MkDir Failed, directory [%s] already exists", [Translated]);
            SendError(Socket, Request, Format("MkDir Failed, directory [%s] already exists", [Original]));
            exit;
          end;

          TQTXNodeFileUtils.CreateDirectory(nil, Translated,
          procedure (TagValue: variant; Path: string; Error: Exception)
          begin
            if Error <> nil then
            begin
              WriteToLogF("MkDir Failed, directory [%s] could not be created", [Original]);
              SendError(Socket, Request, Format("MkDir Failed, directory [%s] could not be created", [Translated]));
              exit;
            end;

            // Setup response object
            var lResponse := TQTXMkDirResponse.Create(lRequest.Ticket);
            lResponse.UserName := lUserName;
            lResponse.DirPath := lDirPath;
            lResponse.Code := CNT_MESSAGE_CODE_OK;
            lResponse.Response := CNT_MESSAGE_TEXT_OK;
            lResponse.Routing.TagValue := lRequest.Routing.TagValue;

            // Send success response
            try
              Socket.Send( lResponse.Serialize() );
            except
              on e: exception do
                WriteToLog(e.message);
            end;

          end);
        end);
      end);

    end else
    begin
      var lText := format("MkDir Failed, invalid path [%s] error", [lDirPath]);
      WriteToLog(lText);
      SendError(Socket, Request, lText);
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleFileDelete(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileDeleteRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();

  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty filename [%s] error", [lFileName]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if not Info.isFile then
    begin
      SendError(Socket, Request, "Filesystem object is not a file error");
      exit;
    end;

    NodeJsFsAPI().unlink(LocalFile,
    procedure (Error: JError)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.message);
        exit;
      end;

      var lResponse := new TQTXFileDeleteResponse(lRequest.Ticket);
      lResponse.Routing.TagValue := request.Routing.TagValue;
      lResponse.UserName := lUserName;
      lResponse.FileName := lFileName;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;
    end);
  end);
end;

procedure TQTXTaskService.HandleFileRename(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileRenameRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();
  var lNewName := lRequest.NewName.trim();

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty from-filename [%s] error", [lFileName]) );
    exit;
  end;

  // check newname length
  if lNewName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty to-filename [%s] error", [lNewName]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if pos(lTemp, lNewName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if pos(lTemp, lNewName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;


  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if not Info.isFile then
    begin
      SendError(Socket, Request, "Filesystem object is not a file error");
      exit;
    end;

    GetTranslatedPathFor(nil, lUsername, lNewName,
    procedure (TagValue: variant; Original, Translated: string; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      NodeJsFsAPI().rename(LocalFile, Translated,
      procedure (Error: JError)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.message);
          exit;
        end;

        var lResponse := new TQTXFileRenameResponse(lRequest.Ticket);
        lResponse.Routing.TagValue := request.Routing.TagValue;
        lResponse.UserName := lUserName;
        lResponse.FileName := lFileName;
        lResponse.Code := CNT_MESSAGE_CODE_OK;
        lResponse.Response := CNT_MESSAGE_TEXT_OK;

        try
          Socket.Send( lResponse.Serialize() );
        except
          on e: exception do
            WriteToLog(e.message);
        end;
      end);

    end);

  end);
end;

procedure TQTXTaskService.HandleGetDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileDirRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lPath := lRequest.Path.trim();

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetTranslatedPathFor(nil, lUserName, lPath,
  procedure (TagValue: variant; Original, Translated: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    //writeln("Translated path is:" + Translated);

    if not TQTXNodeFileUtils.DirectoryExists(Translated) then
    begin
      WriteToLogF("GetDir Failed, directory [%s] does not exist", [Translated]);
      SendError(Socket, Request, Format("GetDir failed, directory [%s] does not exist", [Original]));
      exit;
    end;

    var lWalker := TQTXFileWalker.Create();
    lWalker.Examine(Translated, procedure (Sender: TQTXFileWalker; Error: EException)
    begin
      if Error <> nil then
      begin
        WriteToLogF("GetDir Failed: %s", [Error.Message]);
        SendError(Socket, Request, Format("GetDir failed: %s", [Error.Message]));
        exit;
      end;

      // Get the directory data, swap out the path
      // record with the original [amiga] style path
      var lData := Sender.ExtractList();
      lData.dlPath := Original;

      var lResponse := new TQTXFileDirResponse(lRequest.Ticket);
      lResponse.Routing.TagValue := request.Routing.TagValue;
      lResponse.UserName := lUserName;
      lResponse.Path := lPath;
      lResponse.Assign( lData );

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;

      // release instance in 100ms
      TQTXDispatch.execute(procedure ()
      begin
        try
          lWalker.free
        except
          on e: exception do
          begin
            WriteToLogF("Failed to release file-walker instance: %s", [e.message]);
          end;
        end;
      end, 100);
    end);
  end);
end;

procedure TQTXTaskService.HandleGetFileInfo(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileInfoRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    // Collect the data
    var lData := new JFileItem();
    lData.diFileName := lFileName;
    lData.diFileType := if Info.isFile then JFileItemType.wtFile else JFileItemType.wtFolder;
    lData.diFileSize := Info.size;
    lData.diFileMode := IntToStr(Info.mode);
    lData.diCreated  := TDateUtils.FromJsDate( Info.cTime );
    lData.diModified := TDateUtils.FromJsDate( Info.mTime );

    var lResponse := new TQTXFileInfoResponse(lRequest.Ticket);
    lResponse.Routing.TagValue := request.Routing.TagValue;
    lResponse.UserName := lUserName;
    lResponse.FileName := lFileName;
    lResponse.Assign(lData);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
        WriteToLog(e.message);
    end;
  end);
end;

procedure TQTXTaskService.HandleDestroyDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileDestroyDeviceRequest(request);

  // This will also destroy any files + unregister the device in the
  // database table for the service -- do not mess with this!
  UnRegisterLocalDevice(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; LocalPath: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileDestroyDeviceResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.DeviceName := lMessage.DeviceName;
    lResponse.Routing.TagValue := Request.Routing.TagValue;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;
  end);
end;

procedure TQTXTaskService.HandleCreateLocalDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileCreateLocalDeviceRequest(request);

  // Attempt to register.
  // NOTE: This will automatically create a matching folder
  //       under $cwd/userdevices/[calculated_name_of_device]

  RegisterLocalDevice(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; LocalPath: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    FindDeviceByName(nil, lMessage.Username, lMessage.DeviceName,
    procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.Message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var lResponse := TQTXFileCreateLocalDeviceResponse.Create(request.ticket);
      lResponse.UserName := lMessage.UserName;
      lResponse.Routing.TagValue := Request.Routing.TagValue;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;
      if Device <> nil then
        lResponse.assign(Device);

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
        begin
          WriteToLog(e.message);
        end;
      end;

    end);
  end);
end;

procedure TQTXTaskService.HandleGetDeviceByName(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileGetDeviceByNameRequest(request);

  FindDeviceByName(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileGetDeviceByNameResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;
    if Device <> nil then
      lResponse.assign(Device);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;
  end);

end;

procedure TQTXTaskService.HandleGetDevices(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileGetDeviceListRequest(Request);
  GetDevicesForUser(nil, lMessage.Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileGetDeviceListResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;
    if Devices <> nil then
      lResponse.assign(Devices);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;

  end);
end;

procedure TQTXTaskService.AfterServerStarted;
begin
  inherited;

  // Check prefs if zconfig should be applied
  if self.FPrefs.ReadBoolean("zconfig", "active", false) then
  begin
    // ZConfig should only run on the master instance.
    // We don't want to register our endpoint for each worker
    if NodeJSClusterAPI().isWorker then
      exit;

    writeln("Setting up Zero-Configuration layer");
    FZConfig.port := FPrefs.ReadInteger('zconfig', 'bindport', 2109);
    FZConfig.address := GetMachineIP();
    FZConfig.Start(nil, procedure (Sender: TObject; TagValue: variant; Error: Exception)
    begin
      if FPrefs.ReadBoolean("zconfig", "broadcast", true) then
        FZConfig.Socket.setBroadcast(true);

      // Build up the endpoint (URL) for our websocket server
      var lEndpoint := '';

      if FPrefs.ReadBoolean('networking', 'secure', false) then
        lEndpoint := 'wss://'
      else
        lEndpoint := 'ws://';

      lEndpoint += GetMachineIP();
      lEndpoint += ':' + Port.ToString();

      // Ping the ZConfig service on interval, until our service is registered
      // We keep track of the interval handle so we can stop calling on interval later
      FRegHandle := TQTXDispatch.SetInterval( procedure ()
      begin
        inc(FRegCount);

        // Only output once to avoid overkill in the log
        if FRegCount = 1 then
          WriteToLogF("ZConfig registration begins [%s]", [lEndpoint]);

        FZConfig.RegisterService(nil, CNT_ZCONFIG_SERVICE_NAME, SERVICE_ID_TASKMANAGER, lEndpoint,
        procedure (TagValue: variant; Error: Exception)
        begin
          if Error = nil then
          begin
            WriteToLog("Service registered");
            TQTXDispatch.ClearInterval(FRegHandle);
            FRegCount := 0;
            exit;
          end;
        end);
      end, 1000);

    end);
  end;
end;

procedure TQTXTaskService.BeforeServerStopped;
begin
  inherited;
end;

procedure TQTXTaskService.Dispatch(Socket: TNJWebSocketSocket; Message: TQTXBaseMessage);
begin
  var LInfo := MessageDispatch.GetMessageInfoForClass(Message);
  if LInfo <> nil then
  begin
    try
      LInfo.MessageHandler(Socket, Message);
    except
      on e: exception do
      begin
        //Log error
        WriteToLog(e.message);
      end;
    end;
  end;
end;

end.


 

Hydra, what’s the big deal anyway?

October 29, 2019 7 comments

RemObjects Hydra is a product I have used for years in concert with Delphi, and like most developers who come into contact with RemObjects products, once the full scope of the components hits you, you never want to go back to not using Hydra in your applications.

Note: It’s easy to dismiss Hydra as a “Delphi product”, but Hydra for .Net and Java does the exact same thing, namely let you mix and match modules from different languages in your programs. So if you are a C# developer looking for ways to incorporate Java, Delphi, Elements or Freepascal components in your application, then keep reading.

But let’s start with what Hydra can do for Delphi developers.

What is Hydra anyways?

Hydra is a component package for Delphi, Freepascal, .Net and Java that takes plugins to a whole new level. Now bear with me for a second, because these plugins are in a completely different league from anything you have used in the past.

In short, Hydra allows you to wrap code and components from other languages, and use them from Delphi or Lazarus. There are thousands of really amazing components for the .Net and Java platforms, and Hydra allows you to compile those into modules (or "plugins" if you prefer that); modules that can then be used in your applications as if they were native components.

hydra-01-overview

Hydra, here using a C# component in a Delphi application

But it doesn’t stop there; you can also mix VCL and FMX modules in the same application. This is extremely powerful since it offers a clear path to modernizing your codebase gradually, rather than doing a time-consuming and costly rewrite.

So if you want to move your aging VCL codebase to Firemonkey, but the cost of having to rewrite all your forms and business logic for FMX would break your budget, that’s where Hydra gives you a second option: you can continue to use your VCL code from FMX, and refactor the application at your own pace with minimal financial impact.

The best of all worlds

Not long ago RemObjects added support for Lazarus (Freepascal) to the mix, which once again opens a whole new ecosystem that Delphi, C# and Java developers can benefit from. Delphi has a lot of really cool components, but Lazarus has components that are not always available for Delphi. There are some really good developers in the Freepascal community, and you will find hundreds of components and classes (if not thousands) that are open source. For example, Lazarus has a branch of Synedit that is much more evolved and polished than the fork available for Delphi. And with Hydra you can compile that into a module / plugin and use it in your Delphi applications.

This is also true for Java and C# developers. Some of the components available for native languages might not have similar functionality in the .Net world, and by using Hydra you can tap into the wealth that native languages have to offer.

As a Delphi or Freepascal developer, perhaps you have seen some of the fancy grids C# and Java coders enjoy? Developer Express has some of the coolest components available for any platform, but their focus these days is more on .Net than Delphi. They do maintain the control packages they have, but compared to the amount of development done for C#, their Delphi offerings are abysmal. So with Hydra you can tap into the .Net side of things and use the latest components and libraries in your Delphi applications.

Financial savings

One of the coolest features of Hydra is that you can use it across Delphi versions. This has helped me soften the price-tag of updating to the latest Delphi.

It’s easy to forget that whenever you update Delphi, you also need to update the components you have bought. This was one of the reasons I was reluctant to upgrade my Delphi license until Embarcadero released Delphi 10.2: I had thousands of dollars invested in components, and updating all those licenses would cost a small fortune.

So to get around this, I put the components into a Hydra module and compiled that using my older Delphi. Then I simply used those modules from my new Delphi installation. This way I was able to cut costs by thousands of dollars and enjoy the latest Delphi.

hydramix

Using Firemonkey controls under VCL is easy with Hydra

A couple of years back I also took the time to wrap a ton of older components that work fine but are no longer maintained or sold. I used an older version of Delphi to get these components into a Hydra module – and I can now use those with Delphi 10.3 (!). In my case it was a component-set for working closely with Active Directory that I have used in a customer’s project (and it is much faster than going the route via SQL). The company that made these components doesn’t exist anymore, and I have no source-code for them.

The only way I could have used these without Hydra would be to compile them into a .dll file and painstakingly export every single method (or use COM+ to cross the 32-bit / 64-bit barrier), which would have taken me a week since we are talking about a large body of quality code. With Hydra I was able to wrap the whole thing in less than an hour.

I’m not advocating that people stop updating their components. But I am very thankful for the opportunity to delay having to update my entire component stack just to enjoy a modern version of Delphi.

Hydra gives me that opportunity, which means I can upgrade when my wallet allows it.

Building better applications

There is also another side to Hydra, namely that it allows you to design applications in a modular way. If you have the luxury of starting a brand new project and using Hydra from day one, you can isolate each part of your application as a module, avoiding the trap of monolithic applications.

img_517046

Hydra for .Net allows you to use Delphi, Java and FPC modules under C#

This way of working has great impact on how you maintain your software, and consequently how you issue hotfixes and updates. If you have isolated each key part of your application as separate modules, you don’t need to ship a full build every time.

This also safeguards you from having all your eggs in one basket. If you have isolated each form (for example) as separate modules, there is nothing stopping you from rewriting some of these forms in another language – or cross the VCL and FMX barrier. You have to admit that being able to use the latest components from Developer Express is pretty cool. There is not a shadow of a doubt that Developer-Express makes the best damn components around for any platform. There are many grids for Delphi, but they cant hold a candle to the latest and greatest from Developer Express.

Why can’t I just use packages?

If you are thinking “hey, this sounds exactly like packages, why should I buy Hydra when packages does the exact same thing?“. Actually that’s not how packages work for Delphi.

Delphi packages are cool, but they are also severely limited. One of the reasons you have to update your components whenever you buy a newer version of Delphi, is because packages are not backwards compatible.

delphi-500

Delphi packages are great, but severely limited

A Delphi package must be compiled with the same RTL as the host (your program), and version information and RTTI must match. This is because packages use the same RTL and, more importantly, the same memory manager.

Hydra modules are not packages. They are clean and lean library files (*.dll files) that include whatever RTL you compiled them with. In other words, you can safely load a Hydra module compiled with Delphi 7 into a Delphi 10.3 application, without having to re-compile.
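To give you an idea of how little hosting code is involved, here is a rough sketch of loading a module and creating a visual plugin from it. Treat the component and method names (THYModuleManager, LoadModule, CreateVisualPlugin) as approximations from memory, and check the Hydra documentation for the exact signatures:

procedure TMainForm.LoadLegacyModule;
begin
  // FModuleManager: a THYModuleManager dropped on the form at design time.
  // The module can be compiled with a completely different Delphi version
  // than the host application; that is the whole point of Hydra
  FModuleManager.LoadModule('legacy_components.dll');

  // Create the visual plugin by name and host it inside a panel
  FModuleManager.CreateVisualPlugin('ADBrowser', FPlugin, pnlHost);
end;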

Once you start to work with Hydra, you gradually build up modules of functionality that you can recycle in the future. In many ways Hydra is a whole new take on components and RAD. This is how Delphi packages and libraries should have been.

I don’t mean to say anything bad about Delphi, because Delphi is a system that I love very much; but having to update your entire component stack just to use the latest Delphi is sadly one of the factors that has led developers to abandon the platform. If you have USD 10,000 in dependencies, having to pay for those as well as buying Delphi can be difficult to justify; especially when comparing with other languages and ecosystems.

For me, Hydra has been a tremendous boon for Delphi. It has allowed me to keep current with Delphi and all its many new features, without losing the money I have already invested in component packages.

If you are looking for something to bring your product to the next level, then I urge you to spend a few hours with Hydra. The documentation is exceptional, the features and benefits are outstanding — and you will wonder how you ever managed to work without them.

External resources

Disclaimer: I am not a salesman by any stretch of the imagination. I realize that promoting a product made by the company you work for might come across as a sales pitch; but that’s just it: I started to work for RemObjects for a reason. And that reason is that I have used their products since they came on the market. I have worked with these components long before I started working at RemObjects.

Using multiple languages in the same project

August 21, 2019 1 comment

Most compilers can only handle a single syntax for any project, but the Elements compiler from RemObjects deals with 5 (five!) different languages – even within the same project. That’s pretty awesome and opens up for some considerable savings.

I mean, it’s not always easy to find developers for a single language, but when you can approach your codebase from C#, Java, Go, Swift and Oxygene (object pascal) at the same time (inside the same project even!), you suddenly have some options. Especially since you can pick exotic targets like WebAssembly. Or what about compiling Java to .net bytecodes? Or using the VCL from C#? It’s pretty awesome stuff!

Check out Marc Hoffman’s article on the Elements compiler toolchain and how you can mix and match between languages, picking the best from each, while still compiling to a single binary of llvm-optimized code:

mixins

Click on the picture to be redirected

 

Check out RemObjects Remoting SDK

July 22, 2019 3 comments

RemObjects Remoting SDK is one of those component packages that have become more than the sum of their parts. Just as Project JEDI has almost become standard equipment, Remoting SDK is a system that all Delphi and Freepascal developers should have in their toolbox.

ro_logo
In this article I’m going to present the SDK in broad strokes, from the viewpoint of someone who hasn’t used the SDK before. There are still a large number of Delphi developers who don’t know it even exists – hopefully this post will shed some light on why the system is worth every penny and what it can do for you.

I should also add that this is a personal blog. This is not an official RemObjects presentation, but a piece written by me based on my subjective experience and notions. We have a lot of running dialog at Delphi Developer on Facebook, so if I come across as overly harsh on a subject, that is my personal view as a Delphi Developer.

Stop re-inventing the wheel

Delphi has always been a great tool for writing system services. It has accumulated a vast ecosystem of non-visual components over the years, both commercial and non-commercial, and this allows developers to quickly aggregate and expose complex behavior — everything from graphics processing to databases, file processing to networking.

The challenge for Delphi is that writing large composite systems, where you have more than a single service doing work in concert, is not factored into the RTL or project types. Delphi provides a bare-bones project type for system services, and that’s it. Depending on how you look at it, that’s either a blessing or a curse. You essentially start at C level.

So fundamental things like IPC (inter-process communication) are something you have to deal with yourself. If you want multi-tenancy, that is likewise not supported out of the box. And all of this is before we venture into protocol standards, message formats and async vs synchronous execution.

The idea behind Remoting SDK is to get away from this style of low-level hacking. Without sounding negative, it provides the pieces that Delphi lacks, including the stuff that C# developers enjoy under .net (and then some). So if you are a Delphi developer who looks over at C# with a smudge of envy, then you are going to love Remoting SDK.

Say goodbye to boilerplate mistakes

Writing distributed servers and services is boring work. For each function you expose, you have to define the parameters and data-types in a portable way, then you have to implement the code that represents the exposed function, and finally the interface itself that can be consumed by clients. The latter must be defined in a way that works with other languages too, not just Delphi. So while server tech in its essential form is quite simple, it’s the infrastructure that sets the stage for how quickly you can apply improvements and adapt to change.

For example, let’s say you have implemented a wonderful new service. It exposes 60 awesome functions that your customers can consume in their own work. The amount of boilerplate code for 60 distributed functions, especially if you operate with composite data types, is horrendous. It is a nightmare to manage and opens the door to sloppy, unnecessary mistakes.
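To make that concrete: every single exposed function means keeping three pieces in sync by hand. The sketch below is generic object pascal, not actual SDK code, but it shows the kind of triple bookkeeping involved; now multiply it by 60:

type
  // 1. The portable data type the function operates on
  TUserInfo = record
    Name: string;
    Age: integer;
  end;

  // 2. The interface the clients consume
  IUserService = interface
    function GetUserInfo(const UserName: string): TUserInfo;
  end;

  // 3. The server-side class that implements it
  TUserService = class(TInterfacedObject, IUserService)
  public
    function GetUserInfo(const UserName: string): TUserInfo;
  end;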

ide_int

After you install Remoting SDK, the service designer becomes a part of the IDE

This is where Remoting SDK truly shines. When you install the software, it integrates its editors and wizards closely with the Delphi IDE. It adds a ton of new project types, components and whatnot – but the most important feature is without a doubt the service designer.

bonjour

Start the service-designer in any server or service project and you can edit the methods, data types and interfaces your system exposes to the world

As the name implies, the service designer allows you to visually define your services. Adding a new function is a simple click; the same goes for datatypes and structures (record types). These datatypes are exposed too and can be consumed from any modern language. So a service you make in Delphi can be used from C#, C/C++, Java, Oxygene and Swift (and vice versa).

Auto generated code

"A service designer is all well and good", I hear you say, "but what about that boilerplate code?" Well, Remoting SDK takes care of that too (that is kind of the point). Whenever you edit your services, the designer will auto-generate a new interface unit for you. This contains the classes and definitions that describe your service. It will also generate an implementation unit with empty functions; you just need to fill in the blanks.

The designer is also smart enough not to remove code. So if you go in and change something, it won’t just delete the older implementation procedure. Only the params and names will be changed if you have already written some code.

bonjour_source

Having changed a service, hitting F9 re-generates the interface code automatically. Your only job is to fill in the code for each method in the implementation units. The SDK takes care of everything else for you

The service information, including the type information, is stored in a special file format called “rodl”. This format is very close to Microsoft’s WSDL format, but it holds more information. It’s important to underline that you can import the service directly from your servers (optional, naturally) as WSDL. So if you want to consume a Remoting SDK service using Delphi’s ordinary RIO components, that is not a problem. Visual Studio likewise imports and consumes services – so Remoting SDK behaves identically regardless of platform or language used.

Remoting SDK is not just for Delphi, just to be clear on that. If you are presently using both Delphi and C# (which is a common situation), you can buy a license for both C# and Delphi and use whatever language you feel is best for a particular task or service. You can even get Remoting SDK for Javascript and call your service-stack directly from your website if you like. So there are a lot of options for leveraging the technology.

Transport is not content

OK so Remoting SDK makes it easy to define distributed services and servers. But what about communication? Are we boxed into RemObjects way of doing things?

The remoting framework comes with a ton of components, divided into 3 primary groups:

  • Servers
  • Channels (clients)
  • Messages

The reason for this distinction is simple: the ability to transport data is never the same as the ability to describe data. For example, a message is always connected to a standard. Its job is ultimately to serialize (represent) and de-serialize data according to a format. The server’s job is to receive a request and send a response. So these concepts are neatly decoupled for maximum agility.

As of writing the SDK offers the following message formats:

  • Binary
  • Post
  • SOAP
  • JSON

If you are exposing a service that will be consumed from JavaScript, throwing in a TROJSONMessage component is the way to go. If you expect messages to be posted from your website using ordinary web forms, then TROPostMessage is a perfect match. If you want XML then TROSOAPMessage rocks, and if you want fast, binary messages – well then there is TROBinaryMessage.

What you must understand is that you don’t have to pick just one! You can drop all 4 of these message formats and hook them up to your server or channel. The SDK is smart enough to recognize the format and use the correct component for serialization. So creating a distributed service that can be consumed from all major platforms is a matter of dropping components and setting a property.

channels

If you double-click on a server or channel, you can link message components with a simple click. No messy code snippets in sight.

Multi-tenancy out of the box

With the release of Rad-Server as a part of Delphi, people have started to ask what exactly multi-tenancy is and why it matters. I have to be honest and say that yes, it does matter if you are creating a service stack where you want to isolate the logic for each customer in compartments – but the idea that this is somehow new or unique is not accurate. Remoting SDK has given users multi-tenancy support for 15+ years, which is also why I haven’t been too enthusiastic about Rad-Server.

Now don’t get me wrong, I don’t have an axe to grind with Rad-Server. The only reason I mention it is because people have asked how I feel about it. The tech itself is absolutely welcome, but it’s the licensing and throwing Interbase in there that rubs me the wrong way. If it could run on SQLite3 and was free with Enterprise, I would have felt differently about it.

mt-models

There are various models for multi-tenancy, but they revolve around the same principles

To get back on topic: multi-tenancy means that you can dynamically load services and expose them on demand. You can look at it as a form of plugin functionality. The idea in Rad-Server is that you can isolate a customer’s service in a separate package – and then load the package into your server whenever you need it.

ro_comps

Some of the components that ship with the system

The reason I dislike Rad-Server in this respect is that it forces you to compile with packages. So if you want to write a Rad-Server system, you have to compile your entire project as package-based, and ship a ton of .dpk files with your system. Packages are not wrong or bad per se, but they open your system up on a fundamental level. There is nothing stopping a customer from rolling his own spoof package and potentially bypassing your security.

There is also an issue with unloading a package, where right now the package remains in memory. This means that hot-swapping packages without killing the server won’t work.

Rad-Server is also hardcoded to use Interbase, which suddenly brings in licensing issues that rub people the wrong way. Considering the price of Delphi in 2019, Rad-Server stands out as a bit of an oddity. And hardcoding a database into it, with the licensing issues that brings, just rendered the whole system moot for me. Why should I pay more to get less? Especially when I have been using multi-tenancy with RemObjects for some 15 years?

With Remoting SDK you have something called DLL servers, which does the exact same thing – but using ordinary DLL files (not packages!). You don’t have to compile your system with packages, and it takes just one line of code to make your main dispatcher aware of the loaded service.

This actually works so well that I use Remoting SDK as my primary “plugin” system. Even when I write ordinary desktop applications that have nothing to do with servers or services, I always try to compartmentalize features that could be replaced in the future.

For example, I’m a huge fan of ElevateDB, which is a native Delphi database engine that compiles directly into your executable. By isolating that inside a DLL as a service, my application is now engine agnostic – and I get a break from buying a truckload of components every time Delphi is updated.

Saving money

The thing about DLL services is that you can save a lot of money. I’m actually using an ElevateDB license that was for Delphi 2007. I compiled the engine using D2007 into a DLL service, and then I consume that DLL from my more modern Delphi editions. I have no problem supporting or paying for components, that is right and fair, but having to buy new licenses for every single component each time Delphi is updated? This is unheard of in other languages, and I would rather ditch the platform altogether than fork out $10k every time I update.

dll_project

A DLL server can be used for many things if you are creative about it

While we are on the subject – Hydra is another great money saver. It allows you to use .net and Java libraries (both visual and non-visual) with Delphi. With Hydra you can design something in .net, compile it into a DLL file, and then use that from Delphi.

But you can also compile things from Delphi, and use them in newer versions of Delphi. I’m not forking out for a Developer Express update just to use what I have already paid for in the latest Delphi. I have one license; I compile the forms and components into a Hydra module, and then use it from newer Delphi editions.

hydra

Hydra, which is a separate product, allows you to stuff visual components and forms inside a vanilla DLL. It allows cross-language use, so you can finally use Java and .net components inside your Delphi application

Bonjour support

Another feature I love is the zero configuration support. This is one of those things that you often forget, but that suddenly becomes important once you deploy a service stack on cluster level.

Remoting SDK comes with support for Apple Bonjour, so if you want to use that functionality you have to install the Bonjour library from Apple. Once installed on your host machines, your RemObjects services can find each other.

ZeroConfig is not that hard to code manually. You can roll your own using UDP or vanilla messages, but getting service discovery right can be fiddly. One thing is broadcasting a UDP message saying “here I am”; it’s something else entirely to allow service discovery at cluster level.
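To make the point concrete, here is a minimal sketch of the easy half, the “here I am” broadcast, using Indy’s TIdUDPClient. This is not how the SDK does it internally; identity, versioning and cluster awareness are the hard parts it layers on top:

uses
  IdUDPClient;

procedure AnnouncePresence(Port: word);
var
  lClient: TIdUDPClient;
begin
  lClient := TIdUDPClient.Create(nil);
  try
    // Naive LAN broadcast; every listener on the port will see it
    lClient.BroadcastEnabled := true;
    lClient.Host := '255.255.255.255';
    lClient.Port := Port;
    lClient.Send('here I am');
  finally
    lClient.Free;
  end;
end;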

If Bonjour is not your cup of tea, the SDK provides a second option, which is RemObjects own zero-config hub. You can dig into the documentation to find out more about this.

What about that IPC stuff you mentioned?

I mentioned IPC (inter-process communication) at the beginning here, which is a must-have if you are making a service stack where each member is expected to talk to the others. In a large server-system the services might not live on the same physical hardware either, so you want to account for that.

With the SDK this is just another service. It takes 10 minutes to create a DLL server with the functionality to send and receive messages – and then you just load and plug that into all your services. Done. Finished.

Interestingly, Remoting SDK supports named pipes. So if you are running on a Windows network, it’s even easier. Personally I prefer to use a vanilla TCP/IP based server and channel; that way I can make use of my Linux blades too.

Building on the system

There is nothing stopping you from expanding the system that RemObjects have established. You are not forced to only use their server types, message types and class framework. You can mix and match as you see fit – and also derive your own variations if you need something special.

For example, WebSocket is an emerging standard that has become wildly popular. Remoting SDK does not support it out of the box, partly because the standard is practically identical to the RemObjects super-server, and partly because there must be room for third-party vendors.

Andre Mussche took the time to implement a WebSocket server for Remoting SDK a few years back. Demonstrating in the process just how easy it is to build on the existing infrastructure. If you are already using Remoting SDK or want WebSocket support, head over to his github repository and grab the code there: https://github.com/andremussche/DelphiWebsockets

I could probably write a whole book covering this framework. For the past 15 years, RemObjects Remoting SDK has been the first product I install after Delphi. It has become standard equipment for me and remains an integral part of my toolkit. Other packages have come and gone, but this one remains.

Hopefully this post has tickled your interest in the product. No matter if you are maintaining a legacy service stack or thinking about re-implementing your existing system in something future-proof, this framework will make your life much, much easier. And it won’t break the bank either.

You can visit the product page here: https://www.remotingsdk.com/ro/default.aspx

And you can check out the documentation here: https://docs.remotingsdk.com/

Calling node.js from Delphi

July 6, 2019 1 comment

We got a good question about how to start a node.js program from Delphi on our Facebook group today (third one in a week?). When you have been coding for years you often forget that things like this might not be immediately obvious. Hopefully I can shed some light on the options in this post.

Node or chrome?

Just to be clear: node.js has nothing to do with Chrome or Chromium Embedded. Chrome is a web-browser, a completely visual environment and ecosystem.

Node.js is the complete opposite. It is purely a shell based environment, meaning that it’s designed to run services and servers, with emphasis on the latter.

The only thing node.js and Chrome have in common is that they both use the V8 JavaScript runtime engine to load, JIT compile and execute scripts at high speed. Beyond that, they are utterly alien to each other.

Can node.js be embedded into a Delphi program?

Technically there is nothing stopping a C/C++ developer from compiling the node.js core system as C++Builder compatible .obj files; files that can then be linked into a Delphi application through references. But this also requires a bit of scaffolding, like adding support for malloc_, free_ and a few other procedures, so that your .obj files use the same memory manager as your Delphi code. But until someone does just that and publishes it, I’m afraid you are stuck with two options:

  • Use a library called Toby, which keeps node.js in a single DLL file. This is the most practical way if you insist on hosting your own version of node.js
  • Add node.js as a prerequisite and give users the option to locate the node.exe in your application’s preferences. This is the way I would go, because you really don’t want to force users to stick with your potentially outdated or buggy build.

So yes, you can use Toby and just add the Toby DLL file to your program folder, but I have to strongly advise against that. There is no point setting yourself up to maintain a whole separate programming language just because you want JavaScript support.

“How many in your company can write high quality WebAssembly modules?”

If all you want to do is support JavaScript in your application, then I would much rather install Besen into Delphi. Besen is a JavaScript runtime engine written in Freepascal. It is fully compatible with Delphi and follows the ECMA standard to the letter. So it is extremely standards-compliant, fast and easy to use.

Like all Delphi components Besen is compiled into your application, so you have no dependencies to worry about.

Starting a node.js script

The easiest way to start a node.js script is to simply shell-execute out of your Delphi application. This can be done as easily as:

ShellExecute(Handle, 'open', PChar('node.exe'), pchar('script.js'), nil, SW_SHOW);

This is more than enough if you just want to start a service, server or do some work that doesn’t require that you capture the result.
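Wrapped up with a minimal error check, it could look like this (a sketch; the path to node.exe is assumed to come from your application’s preferences, as suggested above):

uses
  Winapi.Windows, Winapi.ShellApi, System.SysUtils;

procedure StartNodeScript(const NodeExePath, ScriptFile: string);
begin
  // ShellExecute returns a value greater than 32 on success
  if ShellExecute(0, 'open', PChar(NodeExePath),
    PChar(ScriptFile), nil, SW_SHOW) <= 32 then
    raise Exception.CreateFmt('Failed to start %s', [NodeExePath]);
end;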

If you need to capture the result – the data that your node.js program emits on stdout – there is a nice component in the Jedi Component Library, and plenty of examples online on how to do that.

If you need even further communication, you need to look for a shell-execute that supports pipes. All node.js programs have something called a message-channel in the JavaScript world. In reality though, this is just a named pipe that is automatically created when your script starts (with the same moniker as the PID [process identifier]).

If you opt for the latter, you have a direct, full-duplex message channel into your node.js application. You just have to agree with yourself on a protocol, so that your Delphi code understands what node.js is saying, and vice versa.
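If you go down the pipe route, the Win32 side is plain CreateFile and WriteFile against the pipe name. Here is a bare-bones sketch; the pipe name is just a placeholder, since the actual name follows the PID-based moniker described above:

uses
  Winapi.Windows, System.SysUtils;

procedure WriteToPipe(const PipeName, Data: string);
var
  lPipe: THandle;
  lBytes: TBytes;
  lWritten: DWORD;
begin
  // PipeName would be something like '\\.\pipe\1234' (placeholder)
  lPipe := CreateFile(PChar(PipeName), GENERIC_WRITE, 0, nil,
    OPEN_EXISTING, 0, 0);
  if lPipe = INVALID_HANDLE_VALUE then
    RaiseLastOSError;
  try
    lBytes := TEncoding.UTF8.GetBytes(Data);
    if not WriteFile(lPipe, lBytes[0], Length(lBytes), lWritten, nil) then
      RaiseLastOSError;
  finally
    CloseHandle(lPipe);
  end;
end;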

UDP or TCP

If you don’t want to get your hands dirty with named pipes and rolling your own protocol, you can just use UDP to let your Delphi application communicate with your node.js process. UDP is practically without cost since it’s fundamental to all networking stacks, and in your case you will be shipping messages purely between processes on localhost. Meaning: packets are never sent on the network, but rather delegated between processes on the same machine.

In that case, I suggest you pass in the port you want your UDP server to listen on, so that your node.js service acts as the server. A simple command-line statement like:

node.exe myservice.js 8090

Inside node.js you can set up a UDP server with very little fuss:


function setupServer(port, broadcast) {
  var dgram = require("dgram");
  var socket = dgram.createSocket("udp4");

  var MULTICAST_HOST = "224.0.0.236";
  var MULTICAST_TTL = 1; // Local network only

  socket.bind(port);
  socket.on('listening', function() {
    socket.setMulticastLoopback(true);
    socket.setMulticastTTL(MULTICAST_TTL);
    socket.addMembership(MULTICAST_HOST);
    if (broadcast) { socket.setBroadcast(true); }
  });
  socket.on('message', parseMessage);
}

function parseMessage(message, rinfo) {
  try {
    var messageObject = JSON.parse(message);
    var eventType = messageObject.eventType;
    // dispatch on eventType here
  } catch(e) {
    // ignore packets that are not valid JSON
  }
}

Note: the code above assumes a JSON text message.

You can then use any Delphi UDP client to communicate with your node.js server: Indy works fine, and Synapse is a good library with less overhead – there are many options here.
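As a rough sketch of the Delphi side, here is what sending a message with Indy could look like, assuming the JSON protocol from the node.js snippet above (the eventType field is just the example from parseMessage):

uses
  System.SysUtils, IdUDPClient;

procedure SendEventToNode(Port: word; const EventType: string);
var
  lClient: TIdUDPClient;
begin
  lClient := TIdUDPClient.Create(nil);
  try
    // Same machine, different process: packets never leave localhost
    lClient.Host := '127.0.0.1';
    // The port we passed to node.exe on the command-line
    lClient.Port := Port;
    lClient.Send(Format('{"eventType":"%s"}', [EventType]));
  finally
    lClient.Free;
  end;
end;

Calling SendEventToNode(8090, 'refresh') would then surface in parseMessage on the node.js side.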

Do I have to learn Javascript to use node.js?

If you download DWScript you can hook up the JS-codegen library (see the library folder in the DWScript repository), and use that to compile DWScript (object pascal) to kick-ass JavaScript. This is the same compiler that was used in Smart Mobile Studio.

“Adding WebAssembly to your resume is going to be a hell of a lot more valuable in the years to come than C# or Java”

Another alternative is to use Freepascal; they have a pas2js project where you can compile ordinary object pascal to JavaScript. Naturally there are a few things to keep in mind, both for DWScript and Freepascal – like avoiding pointers. But clean object pascal compiles just fine.
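The kind of code that survives the trip to JavaScript untouched is plain, pointer-free object pascal. A trivial example of the style that compiles cleanly with both toolchains:

type
  TGreeter = class
  public
    function Greet(const Name: string): string;
  end;

function TGreeter.Greet(const Name: string): string;
begin
  // No pointers, no manual memory tricks, no inline assembly:
  // this style of object pascal maps directly onto JavaScript
  result := 'Hello, ' + Name;
end;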

If JavaScript is not your cup of tea, or you simply don’t have time to learn the delicate nuances between the DOM (document object model, used by browsers) and the 100% package-oriented approach deployed by node.js – then you can jump straight to WebAssembly.

RemObjects Software has a kick-ass WebAssembly compiler, perfect if you don’t have the energy or time to learn JavaScript. As of writing, this is the fastest and most powerful toolchain available. And I have tested them all.

WebAssembly, no Javascript needed

You might remember Oxygene? It used to be shipped with Delphi as a way to target Microsoft CLR (common language runtime) and the .net framework.

Since then Oxygene and the RemObjects toolchain has evolved dramatically and is now capable of a lot more than CLR support.

  • You can compile to raw, llvm optimized machine code for 8 platforms
  • You can compile to CLR/.Net
  • You can compile to Java bytecodes
  • You can compile to WebAssembly!

WebAssembly is not Javascript, it’s important to underline that. WebAssembly was created especially for developers using traditional languages, so that traditional compilers can emit web friendly, binary code. Unlike Javascript, WebAssembly is a purely binary format. Just like Delphi generates machine-code that is linked into a final executable, WebAssembly is likewise compiled, linked and emitted in binary form.

If that sounds like a sales pitch, it’s not. It’s a matter of practicality.

  • WebAssembly is completely barren out of the box. The runtime environment, be it V8 for the browser or V8 for node.js, gives you nothing to start with. You don’t even have WriteLn() to emit text.
  • Google expects compiler makers to provide their own RTL functions, from the fundamental to the advanced. The only thing V8 gives you is a barebone way of referencing objects and functions on the other side, meaning the JS and DOM world. And that’s it.

So the reason I’m talking a lot about Oxygene and RemObjects Elements (Elements is the name of the compiler toolchain RemObjects offers) is that it ships with an RTL. So you are not forced to start at actual, literal assembly level.

studio

If you don’t want to study JavaScript, Oxygene and Elements from RemObjects is the solution

RemObjects also delivers a DelphiVCL compatibility framework. This is a clone of the Delphi VCL / Freepascal LCL. Since WebAssembly is still brand new, work is being done on this framework on a daily basis, with updates being issued all the time.

Note: The Delphi VCL framework is not just for WebAssembly. It represents a unified framework that can work anywhere. So if you switch from WebAssembly to say Android, you get the same result.

The most important part of the above is actually not the visual stuff. I mean, having HTML5 visual controls is cool – but chances are you want to use a library like Sencha, SwiftUI or jQueryUI to compose your forms, right? Which means you just want to interface with the widgets in the DOM to set and get values.

jQuery UI Bootstrap

You probably want to use a fancy UI library, like jQuery UI. This works perfectly with Elements because you can reference the controls from your WebAssembly module. You don’t have to create TButton, TListbox etc. manually

The more interesting stuff is actually the non-visual code you get access to. Hundreds of familiar classes from the VCL, painstakingly re-created, and usable from any of the 5 languages Elements supports.

You can check it out here: https://github.com/remobjects/DelphiRTL

Skipping JavaScript altogether

I don’t believe in single languages. Not anymore. There was a time when all you needed was Delphi and a diploma and you were set to conquer the world. But those days are long gone, and a programmer needs to be flexible and have a well-stocked toolbox.

At least try the alternatives before you settle on a phone

Knowing where you want to be is half the journey

The world really doesn’t need yet another C# developer. There are millions of C# developers in India alone. C# is just “so what?”. Which is also why C# jobs pay less than Delphi or node.js system service jobs.

What you want, is to learn the things others avoid. If JavaScript looks alien and you feel uneasy about the whole thing – that means you are growing as a developer. All new things are learned by venturing outside your comfort zone.

How many in your company can write high quality WebAssembly modules?

How many within one hour’s driving distance from your office or home are experts at WebAssembly? How many are capable of writing industrial-scale, production-ready system services for node.js that can scale from a single instance to 1000 instances in a large, clustered cloud environment?

Any idiot can pick up node.js and knock out a service, but with your background from Delphi or C++Builder you have a massive advantage. All those places that can throw an exception that JS devs usually ignore? As a Delphi or Oxygene developer, you know better. And when you re-apply that experience under a different language, suddenly you can do stuff others can’t. Which makes your skills valuable.

qtx

The Quartex Media Desktop has made even experienced node / web developers gasp. They are not used to writing custom controls and large-scale systems, which is my advantage

So would you learn JavaScript or just skip to WebAssembly? Honestly? Learn a bit of both. You don’t have to be an expert in JavaScript to complement WebAssembly. Just get a cheap book, like “Node.js for beginners” and “JavaScript: The Good Parts” ($20 apiece), and that should be more than enough to cover the JS side of things.

Adding WebAssembly to your resume and having the material to prove you know your stuff, is going to be a hell of a lot more valuable in the years to come than C#, Java or Python. THAT I can guarantee you.

And, we have a wicked cool group on Facebook you can join too: Click here to visit RemObjects Developer.

 

Getting into Node.js from Delphi

July 1, 2019 1 comment

Delphi is one of the best development toolchains for Windows. I have been an avid fan of Delphi since it was first released, and before that – Turbo Pascal too. Delphi has a healthy following – and despite popular belief, Delphi scores quite well on the Tiobe Index.

As cool and efficient as Delphi might be, there are situations where native code won’t work. Or at the very least, be less efficient than the alternatives. Delphi has a broad wingspan, from low-level assembler all the way to classes and generics. But JavaScript and emerging web technology are based on a completely different philosophy, one where native code is regarded as negative since it binds you to hardware.

Getting to grips with the whole JavaScript phenomenon, be it for mobile, embedded or back-end services, can be daunting if all you know is native code. But thankfully there are alternatives that can help you become productive quickly, something I will touch on in this post.

JavaScript without JavaScript

Before we dig into the tools of the trade, I want to cover alternative ways of enjoying the power of node.js and Javascript. Namely by using compilers that can convert code from a traditional language – and emit fully working JavaScript. There are a lot more options than you think:

qtx

Quartex Media Desktop is a complete environment written purely in JavaScript. Server, cluster and front-end are all pure JavaScript. A good example of what can be done.

  • Swift compiles for JavaScript, and Apple is doing some amazing things with the new and sexy SwiftUI toolkit. If you know your way around Swift, you can compile for JavaScript
  • Go can likewise be compiled to JS:
    • RemObjects Elements supports the Go language. Elements can target both native (llvm), .Net, Java and WebAssembly.
    • Go2Js
    • GopherJs
    • TARDISgo
  • C/C++ can be compiled to asm.js courtesy of EmScripten. It uses clang to first compile your code to llvm bitcode, and then it converts that into asm.js. You have probably seen games like Quake run in the browser? That was asm.js, a kind of precursor to WebAssembly.
  • NS Basic compiles to JavaScript; this is a Visual Basic 6-style environment, with its own IDE even

For those coming straight from Delphi, there are a couple of options to pick from:

  • Freepascal (pas2js project)
  • DWScript compiles code to JavaScript; this is the same compiler that we used in Smart Pascal earlier
  • Oxygene, the next-generation Object Pascal from RemObjects, compiles to WebAssembly. This is by far the best option of them all.
studio

I strongly urge you to have a look at Elements, here running in Visual Studio

JavaScript, Asm.js or WebAssembly?

Asm.js is by far the most misunderstood technology in the JavaScript ecosystem, so let me just cover that before we move on:

A few years back JavaScript gained support for memory buffers and typed arrays. This might not sound very exciting, but in terms of speed – the difference is tremendous. The default variable type in JavaScript is what Delphi developers know as Variant. It assumes the datatype of the values you assign to it. Needless to say, there is a lot of overhead when working with variants – so JavaScript suddenly getting proper typed arrays was a huge deal.
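
To make that concrete, here is a small sketch of the typed-array machinery (standard JavaScript APIs, shown as TypeScript): one raw buffer, viewed both as floats and as bytes — exactly the "typed view on untyped memory" idea:

    const buffer = new ArrayBuffer(16);        // 16 bytes of raw memory
    const asFloats = new Float32Array(buffer); // view the same bytes as 4 floats
    const asBytes = new Uint8Array(buffer);    // ...or as 16 individual bytes

    asFloats[0] = 3.14;
    // prints the raw IEEE-754 bytes behind the float we just wrote
    console.log(asBytes[0], asBytes[1], asBytes[2], asBytes[3]);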

It was then discovered that JavaScript could manipulate these arrays and buffers at high speed, providing it only used a subset of the language. A subset that the JavaScript runtime could JIT compile more easily (turn into machine-code).

So what the Emscripten team did was use this subset as a compilation target: clang first compiles your C/C++ to LLVM bitcode, and that is then converted into this typed, JIT-friendly subset of JavaScript – asm.js. I know, it’s a huge project, but the results speak for themselves — before WebAssembly, this was as fast as JavaScript got.
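
Hand-written asm.js gives a feel for what that output looks like. This is purely an illustrative sketch (shown as TypeScript, but the pattern is plain JavaScript): a "use asm" pragma plus integer coercions (x | 0) that tell the engine what can be compiled ahead of time:

    function AsmModule(stdlib: object, foreign: object, heap: ArrayBuffer) {
      "use asm";
      function add(a: number, b: number) {
        a = a | 0;          // declare parameter a as a 32-bit integer
        b = b | 0;          // declare parameter b as a 32-bit integer
        return (a + b) | 0; // the result is a 32-bit integer too
      }
      return { add: add };
    }

    const mod = AsmModule(globalThis, {}, new ArrayBuffer(0x10000));
    console.log(mod.add(2, 3)); // 5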

WebAssembly

WebAssembly is different from both vanilla JavaScript and asm.js. First of all, it’s executed at high speed by the browser itself; not like asm.js, where the generated code is still JavaScript that the engine has to parse and compile.

water

Water is a fast, slick and platform independent IDE for Elements. The same IDE for OS X is called Fire. You can use RemObjects Elements from either Visual Studio or Water

Secondly, WebAssembly is completely JIT compiled by the browser or node.js when loading. It’s not like asm.js, where some parts are compiled and others are interpreted. WebAssembly runs at full speed and has nothing to do with traditional JavaScript. It’s actually a completely separate engine.

Out of all the options on the table, WebAssembly is the technology with the best performance.
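
Loading a module is equally straightforward. Here is a minimal sketch for node.js (the file name "module.wasm" and its "main" export are assumptions for illustration):

    import { readFile } from "fs/promises";

    async function run(): Promise<void> {
      const bytes = await readFile("module.wasm");
      // the engine itself compiles the bytes; JavaScript never interprets them
      const { instance } = await WebAssembly.instantiate(bytes, {});
      const main = instance.exports.main as () => number;
      console.log(main()); // exported functions run at near-native speed
    }

    run();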

Kits and strategies

The first thing you need to be clear about, is what you want to work with. The needs and requirements of a game developer will be very different from a system service developer.

Here are a couple of kits to think about:

  • Mobile developer
    • Implement your mobile applications using Oxygene, compiling for WebAssembly (Elements)
    • RemObjects Remoting SDK for client / server communication
    • Use Freepascal for vanilla JavaScript scaffolding when needed
  • Service developer
    • Implement libraries in Oxygene to benefit from the speed of WebAssembly
    • Use RemObjects Data Abstract to make data-access uniform and fast
    • Use Freepascal for boilerplate node.js logic
  • Desktop developer
    • For platform-independent desktop applications, WebAssembly is the way to go. You will need some scaffolding (plain JavaScript) to communicate with the application host – but 99.9% of your code will be better off under WebAssembly.
    • Use Cordova / Phonegap to “bundle” your WebAssembly, HTML5 files and CSS styling into a single, final executable.

The most important part to think about when getting into JavaScript, is to look closely at the benefits and limitation of each technology.

WebAssembly is fast, wicked fast, and lets you write code like you are used to from Delphi. Things like pointers are supported in Elements, which means ordinary code that uses pointers will port over with ease. You are also not bound hand-and-foot to a particular framework.

For example, Emscripten for C/C++ has almost nothing in terms of UI functionality. The visual part is a custom build of SDL (Simple DirectMedia Layer), which renders the graphics onto an ordinary HTML5 canvas. This makes Emscripten a good candidate for porting games written in C/C++ to the web — but it’s less than optimal for writing serious applications.

Setting up the common tools

So far we have looked at a couple of alternatives for getting into the wonderful world of JavaScript in lieu of other languages. But what if you just want to get started with the typical tools JS developers use?

vscode

Visual Studio Code is a pretty amazing code-editor

The first “must have” is Visual Studio Code. This is actually a great example of what you can achieve with JavaScript, because the entire editor is itself written in JavaScript (well, TypeScript). And I want to stress that this editor is THE editor to get. The way you work with files in JS is very different from Delphi, C# and Java. JavaScript projects are often more fragmented, with less code in each file – organized by name.

typescript

TypeScript was invented by Anders Hejlsberg, who also made Delphi and C#

The next “must have” is without a doubt TypeScript. Personally I’m not too fond of TypeScript, but if ordinary JavaScript makes your head hurt and you want classes and ordinary inheritance, then TypeScript is a step up.
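
For Delphi developers, the appeal is easy to demonstrate. A small sketch of what classical OOP looks like in TypeScript (hypothetical class names, just for illustration):

    class Control {
      constructor(public name: string) {}
      describe(): string {
        return `${this.name} (${this.constructor.name})`;
      }
    }

    class Button extends Control {
      caption = "OK";
      click(): void {
        console.log(`${this.describe()} clicked: ${this.caption}`);
      }
    }

    new Button("btnOk").click(); // btnOk (Button) clicked: OK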

assemblyscript

Next on the list is AssemblyScript. This is a post-processor for TypeScript that converts your code into WebAssembly. It lacks much of the charm and elegance of Oxygene, but I suspect that has to do with old habits. When you have been reading Object Pascal for 20 years, you feel more at home there.

node

You will also need to install node.js, which is the runtime engine for running JavaScript as services. Node.js is heavily optimized for writing server software, and it’s actually a brilliant way to write services that are multi-platform, because node.js delivers the same behavior regardless of the underlying operating system.
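
To show just how little ceremony a node.js service needs, here is a minimal sketch (standard library only; port 8080 is an arbitrary choice) that runs unchanged on x86, ARM or anything else node supports:

    import { createServer } from "http";

    createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end(`You asked for ${req.url}\n`);
    }).listen(8080, () => console.log("listening on :8080"));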

phonegap

And finally, since you definitely want to convert your JavaScript and/or WebAssembly into a stand-alone executable: you will need Adobe PhoneGap.

Visual Studio

No matter if you want to enter JavaScript via Elements or something else, Visual Studio will save you a lot of time, especially if you plan on targeting Azure or Amazon services. Downloading and installing the community edition is a good idea, and you can use that while exploring your options.

dotnet-visual-studio

When it comes to writing system services, you also want to check out NPM, the node.js package manager. The JavaScript ecosystem is heavily package-oriented – and npm gives you some 800.000 packages to play with free of charge.

Just to be clear, npm is a shell command you use to install or remove packages. NPM is also an online repository of said packages, where you can search and find what you need. Most packages are hosted on GitHub, but when you install a package locally into your application folder – npm figures out dependencies etc. automatically for you.
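
As an example of how far a single package takes you: after running "npm install express" (express being just one of those 800.000 packages, and a very popular web framework), a working web service is a handful of lines:

    import express from "express";

    const app = express();
    app.get("/", (_req, res) => res.send("hello from npm-land"));
    app.listen(3000); // npm already resolved express and all its dependencies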

Books, glorious books

Last but not least, get some good books. Seriously, it will save you so much time and frustration. Amazon has tons of great books, be it vanilla JavaScript, TypeScript or Node.js — pick some good ones and take the time to consume the material.

And again, I strongly urge you to have a look at Elements when it comes to WebAssembly. WebAssembly is a harsh and barren canvas, and being able to use the Elements RTL is a huge boost.

But regardless of which path you pick, you will always benefit from learning vanilla JavaScript.

 

Delphi AST, XML and weekend experiments

April 29, 2019 1 comment

One of the benefits of the Delphi IDE is that it’s a very rich eco-system that component writers and technology partners can tap into for their own products. I know that writing your own components is not something everyone enjoys, but knowing that you can in fact write tools that expand the IDE using just Delphi or C++Builder opens the door to some interesting tools.

Ye old compiler bible

Ye old compiler bible

Delphi has a long tradition of “IDE enhancement” software and elaborate third-party tools that automate or delivers some benefit right in the environment. RemObjects SDK is probably the best example of how flexible the IDE truly is. RemObjects SDK integrates a whole service designer, which will generate source-code for you, update the code if you change something – and even generate service manifests for you.

There are also other tools that show off the flexibility of the IDE, ranging from code migration to advanced code refactoring and optimization.

It was with the last bit, namely code refactoring, that a third-party open-source library received a lot of well-deserved attention a couple of years back: a package called DelphiAST. This is a low-level syntax parser that reads Delphi source-code, applies fundamental syntax checks, and transforms the code into XML. A wet dream for anyone interested in writing advanced tooling that operates directly on the source-code level.

Delphi AST

Like mentioned above, DelphiAST is a parser. Its job is very simple: parse the code, perform language level syntax checking, and convert each aspect of the code to a valid XML element. We are not talking about stuffing source-code into a CDATA segment here, but rather breaking each statement into separate tags (begin, end, if, procedure, param) so you can apply filtering, transformations and everything XML has to offer.

Back when Roman first started on DelphiAST, I got thinking — could we follow this idea further, and apply XML transformation to produce something more interesting? Would it actually be possible to approach the notion of compiling from a whole new angle? Perhaps convert between languages in a more effective way?

The short answer is: yes, everything is possible. But as always there are caveats and obstacles to overcome.

First of all, DelphiAST, despite its name, doesn’t actually generate a fully functional abstract syntax tree (AST). It generates a data model that is very suitable for AST generation, but not an actual AST. Everything in a programming language that can be referenced – a method, a class, a global variable, a local variable, a parameter – is called a “symbol”. And before you can even think about processing the code, a fast and reliable AST with a proper symbol table must be in place.

Who cares?

Before I continue, you might be wondering why re-inventing the wheel is even a thing here. Why would anyone research compilers in 2019, when the world is awash with compilers for a multitude of languages?

Because the world of computing is about to be hit by a tsunami, that’s why.

Quartex Pascal

Quartex Pascal

In the next 8-10 years the world of computing will be turned on its head. NVIDIA and roughly 100 tech companies have invested in open-source CPU designs, making it very clear that playing by Intel’s rules and bleeding royalties will no longer be tolerated. IBM has woken up from its “patent-induced slumber” and is set to push its P9 CPU architecture, targeting both the high-end server and embedded markets (see my article last year on PPC). At the same time, Microsoft and Apple have both signaled that they are moving to ARM (an estimate of 5 years is probably reasonable). Laptop betas are said to be rolling already, with a commercial version expected Q3 this year (I think it won’t arrive before Christmas, but who knows).

Intel has remained somewhat silent about any long-term plans, but everyone that keeps an eye on hardware knows they are working like mad on next-gen FPGA. A tech that has the potential to disrupt the whole industry. Work is also being done to bridge FPGA coding with traditional code; there is no way of predicting the outcome of that though.

Oh, and AMD is taking Intel market share at a steady rate — so we are in for a fight to the death.

The rise of C/C++

Those that keep tabs on languages have no doubt noticed the spike in C/C++ popularity lately. And the cause of this is that developers are safeguarding themselves against the storm to come. C as a language might not be the most beautiful out there, but truth be told, its tooling requires the least amount of work to target a new platform. When a new architecture is released, C/C++ is always the first language available. You won’t see C#, Flutter or Rust shipping with the latest and greatest; it’s always GCC or Clang.

Note: GCC is not just C, it’s actually a family of languages, so ironically, Gnu Basic hits a platform at the same time.

Those that have followed my blog for the past 10 years should be more than aware of my experiments: from compiling to JavaScript and generating bytecodes – to, right now, moving the whole development paradigm to the browser. Hopefully my readers also recognize why this is important.

But to make you understand why I am so passionate about my compiler experiments, let’s do a little thought experiment:

Rethinking tooling

Let’s say we take Delphi, implement a bytecode format and streamline the RTL to be platform agnostic. What would the consequences of that be?

Well, first of all the compiler process would be split in two. The traditional compilation process would still be there, but it would generate bytecodes rather than machine code. The machine-code stage would be isolated in a completely separate process; a process that, just like with the Delphi IDE’s infrastructure, could be outsourced to component-writers and technology partners. This in turn would provide the community with a high degree of safety, since the community itself could approach new targets without waiting for Embarcadero.

Even more, such an architecture would not be limited to machine-code. There is no law that says “you must convert bytecodes to machine code”. Since C/C++ is the foundation that modern operating-systems rest on, generating C/C++ source-code that can be built by existing compilers is a valid strategy.

There is also another factor to include in all of this, and that is Linux. Borland was correct in their assessment of Linux (the Kylix project), but they failed miserably with regards to timing. They also gravely underestimated Linux users’ sense of quality, depending on Wine (a Windows compatibility layer) to even function. They also underestimated Freepascal and Lazarus, because Linux is something FPC does exceptionally well. Competing financially against free products won’t work unless you bring outstanding abilities to the table. And Linux has development tools that rival Visual Studio in quality, yet cost nothing.

But no matter how financially tricky Linux might be, we have reached the point in time where Linux is becoming mainstream. 10 years ago I had to set up my own Linux machine. There were no retailers locally that shipped a Linux box. Today I can walk into two major chains and pick up dedicated Linux machines. Ubuntu in particular is well established and delivers LTS releases.

So for me personally, compiler tech has never been more important. And even more important is that the tooling is universal and unbound by any specific API or CPU instruction-set. Firemonkey is absolutely a step in the right direction, but I think it’s a disaster to focus on native UIs beyond a system-level binding, because replicating the same level of support and functionality for ARM, P9, RISC-V and whatever monstrosity Intel comes up with through FPGA will take forever.

Transformation based conversion

We have wandered far off topic now, so let’s bring it back to this weekends experiment.

In short, XML transformations to convert code do work, but the right tooling has to be there to make it viable. I implemented a poor man’s symbol table, just collecting classes, types and methods – and yeah, it works just fine. What worries me a bit though is the XML parser. Microsoft has put a lot of money into XML file handling at the enterprise level. When working with massive XML files (read: gigabytes) you really can’t be bothered to load the file into conventional RAM and then old-school traverse the XML character by character. Microsoft operates with pure memory mapping so that you can process gigabytes like they were megabytes — but sadly, there is nothing similar for Linux, Unix or Android, and that abruptly ends the fascination for me.

The only place I see using XML transformations to process source-code, is when converting to another language on source-level.

So the idea, although technically sound, gives zero benefits over the traditional process. I am however very interested in using DelphiAST to analyze and convert Delphi code directly from the IDE. But that will have to be an experiment for 2020; I’m booked 24/7 with Quartex Media Desktop right now.

But it was great fun playing around with DelphiAST! I loved how clean and neat the codebase has become. So if you need to work with source-code, DelphiAST is just the ticket!

Edit: You don’t have to emit the code as XML. DelphiAST is perfectly happy to act as a clean parser, just saying.

Quartex Web OS: A cloud OS takes form

January 19, 2019 Leave a comment

It’s been a while since I’ve posted now. I have 3 articles in escrow, and every time I think I will finish them, I end up writing more. But yes, more Delphi articles are coming, and I have lined up both components and rich code that everyone will be happy about.

Please look before shooting

Before we dig into the new stuff, I want to clear up a misconception. We programmers often forget that not everyone knows what we do, and we take it for granted that everyone will instantly understand something we talk about. Which is rarely the case.

I have noticed that quite a few have misjudged the project radically, thinking that the first version (cloud ripper) is just a toy, a mock desktop or even worse: just a remake of a legacy system that “has no role in modern computing”.

It is true that I have taken more than a little from Amiga OS in terms of architecture, but I have exclusively taken ideas that are good and work well under the ASYNC execution model. I have also replicated the way the filesystem is organized, things like REXX (which was added to OS X in 2015), the menu system – these are indeed built on how Amiga OS did things. The same can be said about library functions. Not because they are old, but because they make sense. Many of the functions appear in other systems too, like GTK on Linux and WinAPI for Windows. There are only so many ways to open a window, change the title, define scrollbars and execute processes.

kiosk-systems

Kiosk systems like this are great targets for the Quartex Web OS

While there are clear architectural aspects taken from older systems, that doesn’t mean the system itself is old in any way. This system is designed to run as WebAssembly, ASM.js and vanilla JavaScript – which is ASYNC by nature. It is designed to run and share payload over several machines, not a single outdated CPU and chipset. You have swarm-based task solving – which is quite cutting edge if I might say so. None of these things were invented back in the day.

Some have also asked why this is even needed. Well, let me give you a simple use case.

One of my customers is doing work for Jensen, a Danish producer of IT hardware. They make mostly routers, wifi USB dongles and similar devices. But like many hardware vendors, their web interface leaves a lot to be desired. Router web interfaces are usually quite annoying and poorly written. Something that should have taken 5 minutes can end up taking 30, just because the design of the interface is rubbish.

With my solution these vendors will be able to drop a whole infrastructure into their products; an infrastructure that provides all the things they need to quickly build a great control panel and router interface. Things like file system mapping, and being able to store data to the filesystem through an established websocket protocol; all of it wrapped up in a simple but powerful API. Their settings and features can be represented as programs, which run in windows that are intuitively styled and easy to understand. They will also cut development time dramatically by calling the Quartex soft-kernel, rather than having to re-invent everything from scratch.

That is just a tiny, tiny use-case where the desktop and services make perfect sense. But also keep in mind that the same system can scale up to a 1000-instance Amazon supercomputer if you need it to, providing software for your offices and development teams.

In 8 months the desktop will be complete (probably before), and I start building the first purely web-powered software development toolchain. Everything has been transformed into JavaScript (as in compilers, linkers – the whole lot): Freepascal, Clang C/C++ and much more. And developers will be able to log in and start producing applications out of the box. The fact that the entire system is chipset and platform independent is quite unique. People tend to use native code behind a facade of HTML5. Not here. Here you have over 4000 classes, 800.000 lines of code just for the desktop client, looking back at you.

Hopefully this has shed some light on the project, and people will stop looking at this as “old junk”. As a person who loves older computers, the Amiga especially, I am quite frankly astounded by the ignorance regarding that platform. A juiced-up 30-year-old Amiga will give any modern computer a run for its money when it comes to ease of use, quality software and pure productivity. 10 years before Windows even existed, Europeans enjoyed a colorful, window-based desktop with full multitasking. When we had to switch to the PC it was like going back to the 1500s in terms of functionality – and it wasn’t until Windows 7 that Microsoft caught up with Commodore. So if I have managed to get over even 1% of the spirit of that machine – then I will be very happy indeed.

But to dismiss a clustered, 40-CPU-core architecture built from modern, off-the-shelf parts and running a multitude of node services as “old junk” is nothing short of an intellectual emergency. Please read, digest and look more closely before passing judgement.

Right then, so what’s new?

48365835_10155890849180906_6431235229611982848_n

The Quartex “Cloud Ripper”

Where to begin! Like I mentioned in my previous post, Amibian.js is a cluster system. As such the project now has its first real hardware sorted! I have gone for a 5 x ODroid XU4 model, neatly tucked inside a PICO 5H case. The budget was set at USD 400, but with shipping and taxes it ended up costing around USD 600. But that is not a bad price for the firepower you get (40 CPU cores, 20 GPU cores and 16 GB RAM); the ODroid is a powerful, stable and reliable ARM SBC (single board computer). In benchmarks the Raspberry PI 3b scored 830 Dhrystones, while the ODroid scored 5500. And my architecture uses five of them, so this is a $600 super-computer built from off-the-shelf parts.

The back-end server has had several bugs fixed, especially the problems with paths and databases. You can now edit the settings.ini file and tell the system where the database should be created or accessed from, set the port for the server, and choose whether it should use SSL + secure WebSocket, or ordinary HTTP + WebSocket.

50511885_10155952491120906_1059229155276619776_o

40 ARM CPU cores, that is a lot of firepower for USD 200!

I am also ditching the TW3NodeFileSystem driver for server logic and using ordinary node.js calls there. The TW3NodeFileSystem driver is mounted as you perform a login – and it acts as a sandbox, mounting your folder as a device (and making sure you can’t ever touch files outside your “home” server folder). We still need to implement a proper UNIX directory parser, but that is easy enough.

Quartex Pascal

Yes, I have picked up Quartex Pascal again, a project which originally started in 2014. I have started writing a new RTL for DWScript, which is an alternative to Smart Mobile Studio. It is different from the Smart RTL, and closer to FMX than the VCL.

Eventually the Quartex Web OS and all its services will compile without code from Smart Mobile Studio.

Hosted applications, messages and our soft-kernel

The biggest news, which is also the trickiest to get right, is getting hosted applications (applications are hosted in IFrame containers) to communicate with the desktop. As you probably know, browsers have rigid security measures, and the rules for threads (web workers) and separate processes (frames) are severe, to say the least.

50407351_795409364151096_4870092648481816576_n

The LDEF assembler is the first application to grace the system

A secondary application hosted in a frame has absolutely no access to the rest of the DOM, meaning that the code has no way of calling functions or manipulating elements outside its own DOM in the frame container. This is a good system, because we don’t want rogue applications causing havoc.

The only way an application can talk to the desktop is through messages. And while this sounds easy, remember: we are doing this as a solid system, not just slapping something together.

  • After loading a hosted application, the desktop will send a handshake request. It will do this at intervals until the application accepts.
  • When the application replies with a handshake message, the desktop sends a special message-channel object to the app. All communication with the desktop must happen on that secure channel.
  • With the channel obtained, the application has to provide the application manifest file. This is a special INI-File containing information about the program, including access rights. None of the soft-kernel API functions will execute until a valid manifest-file has been delivered.
  • Once the manifest has been sent and accepted, the hosted application is free to call the soft-kernel functions (the whole exchange is sketched in code below).
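
For clarity, here is a rough sketch of what the desktop side of such a handshake can look like, using the browser’s standard postMessage and MessageChannel APIs. The message names and the "#hostedApp" frame id are hypothetical; this shows the mechanics, not the actual Ragnarok protocol:

    const frame = document.querySelector<HTMLIFrameElement>("#hostedApp")!;
    const channel = new MessageChannel();

    // step 1: poll the hosted application until it answers
    const timer = setInterval(() => {
      frame.contentWindow?.postMessage({ type: "handshake" }, "*");
    }, 250);

    window.addEventListener("message", (ev) => {
      if (ev.data?.type === "handshake-ack") {
        clearInterval(timer);
        // step 2: hand the app its private channel (port2 is transferred)
        frame.contentWindow?.postMessage({ type: "channel" }, "*", [channel.port2]);
      }
    });

    channel.port1.onmessage = (ev) => {
      // step 3: expect the application manifest before enabling any
      // soft-kernel calls
      if (ev.data?.type === "manifest") {
        // validate access rights here
      }
    };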

The above might sound simple, but it requires several sub-technologies to be in place first:

  • Call Stack: a class that keeps track of sent messages and their callbacks. When a response arrives, it executes the correct callback to deliver the response. This is a kind of “promises” engine for message delivery (sketched in code after this list).
  • Message factory: matches message-data to the correct message class, creates the instance and de-serializes the data automatically for you
  • Message dispatcher: Allows you to register a message with a handler procedure. When a message arrives the dispatcher calls the message-factory, then calls the correct handler.
  • Base64 encoding on byte-array, stream and buffer level (the browser only ships string-based atob/btoa out of the box)
  • String to UTF8 Byte-Array encoding
  • UTF8 Byte-Array to String encoding
  • escape and unescape for byte-array, stream and buffer
  • URI-encoder for byte-array, stream and buffer
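
To illustrate the first of these, here is a compact sketch of the callback-tracking idea (hypothetical names, not the actual Quartex classes): every outgoing message is tagged with an id and its callback stored; when a response with the same id comes back, the callback fires once, much like a promise:

    type Response = { id: number; payload: unknown };

    class MessageTracker {
      private nextId = 1;
      private pending = new Map<number, (payload: unknown) => void>();

      send(port: MessagePort, payload: unknown,
           cb: (payload: unknown) => void): void {
        const id = this.nextId++;
        this.pending.set(id, cb);          // remember who to call back
        port.postMessage({ id, payload }); // tag the message with its id
      }

      handleResponse(msg: Response): void {
        const cb = this.pending.get(msg.id);
        if (cb) {
          this.pending.delete(msg.id); // one-shot, like a promise
          cb(msg.payload);
        }
      }
    }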

But that was just the beginning. I also had to introduce an object that I have been dreading to even start on, namely the “process” class. The process is not just a simple reference to the frame container; it has to keep track of the websocket endpoint, application manifest, error handling, message routing and much more.

50077678_10155951521540906_6068161951656050688_o

CLANG compiled to webassembly, meaning we can now compile proper C/C++ in the browser

Since Amibian.js supports not just JavaScript, but also bytecode applications – the process object also contains the LDEF runtime engine; not to mention all the system resources a process can own.

The cool part is that things work exactly like I planned! There is plenty of room to optimize, but all in all the architecture is sound. And it was quite a hallelujah moment when the first API call went through at 00:00 on 19.01.2019! A call to SetWindowTitle() where the hosted application set the caption of its main window purely via code. Cross-domain communication at its very best.

The LDEF Assembler

Yes LDEF Bytecodes are fantastic, and the first program I have made is a traditional assembler. I went all in and implemented a full text-editor to get better control, and also to get rid of the ACE code editor, which was a massive dependency. So glad we got rid of that.

So now you can write assembly code, assemble it, run it, dis-assemble it and even dump the bytecodes to the window. You will be able to save the bytecodes to disk by the end of this weekend, and then run the bytecode programs from shell or the desktop. So we are really making progress here.

49938355_1169526123220996_502291013608407040_o

A good shell / pipe infrastructure is the key to a powerful desktop

LDEF is the bytecode system that will be used to build high-level languages like Basic and Pascal. Since Freepascal is now able to compile itself to JavaScript, I will naturally add that to the IDE next fall; the same is true for Clang, which has been compiled to WebAssembly — and which itself generates WebAssembly.

So C/C++ and object pascal are already working and waiting for the IDE.

LDEF is a grander system though, because libraries can be loaded and used by Delphi, C++Builder, C# or whatever you fancy. It can be post-processed to real machine code, or converted to pure WebAssembly. It has a much wider scope than stack machines like the CLR and the JVM, and it’s more natural for assembly programmers – because it’s based on real CPUs. It’s a register-based virtual machine, not a stack machine.
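
The difference is easy to show in a toy dispatch loop. This is purely illustrative (it is not LDEF): where a stack machine pushes and pops operands, a register machine addresses a register file directly, one instruction per operation:

    enum Op { LoadConst, Add, Halt }

    type Instr = { op: Op; dst?: number; src?: number; value?: number };

    function run(program: Instr[]): number[] {
      const regs = new Array<number>(8).fill(0); // the register file
      for (const i of program) {
        switch (i.op) {
          case Op.LoadConst: regs[i.dst!] = i.value!; break;
          case Op.Add: regs[i.dst!] += regs[i.src!]; break; // no push/pop
          case Op.Halt: return regs;
        }
      }
      return regs;
    }

    // r0 = 2; r1 = 3; r0 = r0 + r1
    console.log(run([
      { op: Op.LoadConst, dst: 0, value: 2 },
      { op: Op.LoadConst, dst: 1, value: 3 },
      { op: Op.Add, dst: 0, src: 1 },
      { op: Op.Halt },
    ])); // [5, 3, 0, 0, 0, 0, 0, 0]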

More?

Tons, but you have to visit my Patreon page to keep track. I try to publish as much as possible there rather than here. I post a bit on both, but the proper channel for Amibian.js (or “Quartex Web OS”, which is its official name) will always be Patreon.

50108015_314551789176307_8213345524409958400_n

The picture viewer now has momentum scrolling in full-mode.

Also, I fixed more bugs in the Smart RTL than I can count, and re-made window movement. Window movement now uses the GPU, so windows are silky smooth everywhere. Resizing will be optimized next, and then you won’t be able to tell it’s not native code at all.

Delphi Component updates

Yes, Delphi is also a huge part of the Patreon project, and you will be happy to hear that the form designer (which shares a codebase with the graphics application components) has seen more work!

You can check out some of the changes to the form-designer here:

These changes will be in the January update (end of month) together with all the changes to Amibian.js, HexLicense, the Tween library and all the rest 🙂

Cheers!

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

intro

In a life-preserver no less 😀

But, with any new technology or invention there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustments, especially for traditional programmers that haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine code using state-of-the-art techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining this early – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined creates a portable operating-system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

smart_ass

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to the other without any changes. So if you have a running Amibian.js system on your x86 PC, and copy all the files to an ARM computer – you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup”, please remember what I said about JavaScript: the JavaScript of 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It delivers a flawless 60 fps (the browser caps rendering at 60 fps) and makes full use of standard browser technologies (WebGL).

utube

Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • A HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between the traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

smartdesk

Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I’m trying to avoid a bare-metal OS, otherwise I would have written the system using a native programming language like C or Object Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the Amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs that are written for Amibian.js can call on these for a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (ODroid XU4, Raspberry PIs and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above comes in.

As a software developer, I have many customers who work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock-solid application model suitable for distributed computing. You might have 1 ticket booth, or 10.000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered, I’m confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some ideas of the economic possibilities.

Updated: Please note that we are talking javascript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router – you login to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it’s connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch-capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need, but on a smaller scale; especially the larger hotels, where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on the size they can need anything from a single to a hundred nodes.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5-based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past with the machine that defined multimedia, namely the Commodore Amiga. So the relation is more than just visual: Amibian.js uses the same system architecture – because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all I’m not selling anything. It’s not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). And I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly, is because I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).

amibian_shell

Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the object-pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data-driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50.000 lines of object pascal code than 500.000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have OOP, it has something called prototypes. With Smart Pascal I get to bring in code from the object-pascal community, components and libraries written in Delphi or Freepascal – which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (and Pascal itself is even a couple of years older than C).

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS; it’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform-independent, chipset-agnostic system. Something that doesn’t care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Normally you would just embed an external website in an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

patron_asm2

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the worlds first cloud based development platform. A full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, with “no native code allowed”. This is why the entire back-end and service layer targets node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is that Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed-in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud, on cheap SBC’s. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad; it actually means that it has adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

image2

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js as the name implies, borrows architectural elements en-mass from Amiga OS. Quite simply because the way Amiga OS is organized and the way you approach computing on the Amiga is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It’s a system that you could learn how to use fully with just a couple of days exploring; and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM Kernel functions as possible. Naturally I can’t port all of it, because it’s not really relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have setup a dedicated machine with Amibian.js then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence which deals with your account, your preferences and adaptations – that boots when you login to the system.

When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket
  2. Login is validated by the server
  3. The client starts loading preferences files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script will setup configurations, create symbolic links (assigns), mount external devices (dropbox, google drive, ftp locations and so on)
  6. When finished the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.

As you can see, Amibian.js is not a mockup or “fake” desktop. It implements the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced: file-data is loaded via special drivers – drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).
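Purely to illustrate the idea, a homebrew storage driver could take a shape like the following. The method names are my own invention for this sketch, not the published driver standard:

// Illustrative sketch only: the method names are invented,
// not the actual Amibian.js driver standard.
function MyStorageDriver(endpoint) {
  this.endpoint = endpoint; // e.g. an FTP host or REST service
}

MyStorageDriver.prototype.list = function (path, callback) {
  // return directory entries from the backing store
};

MyStorageDriver.prototype.read = function (path, callback) {
  // fetch file data from the backing store (FTP, Dropbox, ...)
};

MyStorageDriver.prototype.write = function (path, data, callback) {
  // push file data back to the backing store
};

As long as every driver exposes the same set of operations, the desktop can treat a Dropbox folder, an FTP site and a local disk identically.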

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly, and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, they will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a Neo Geo port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEMU. This allows you to run x86 instances directly in the browser too. You can boot Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point), but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation executes server-side and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don't have to be on the same machine — you can place them on separate machines and spread the workload, which makes everything faster.


Above: The official Amibian.js cluster, 4 x ODroid XU4 SBCs in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work between. The cool thing about Amibian.js is that it doesn't care about the underlying CPU. As long as node.js is available, it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning that they don't have an HDMI port – they are designed to be logged into, with software set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture the machine that deals with the desktop clients doesn't have to do all the grunt work. It accepts tasks from the user and hosted applications, and then delegates the tasks between the 4 other machines.
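The “copy itself to all cores” part is a standard node.js technique using the built-in cluster module. A minimal sketch, where the service module name is just a placeholder:

// Minimal sketch using node's built-in cluster module. The real
// Amibian.js services are more elaborate; this only shows the forking.
const cluster = require("cluster");
const os = require("os");

if (cluster.isMaster) {
  // spawn one worker per CPU core (8 on an ODroid XU4)
  os.cpus().forEach(function () {
    cluster.fork();
  });
} else {
  // each worker runs its own copy of the service
  require("./my-service"); // placeholder for the actual service module
}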

Note: The number of SBCs is not fixed. Depending on your use, you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40-core monster shipping at around $250.

But like mentioned, you don't have to buy this to use Amibian.js. You can install it on a spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBC’s in the initial design all have GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes on our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40-core cluster I use has more computing power than Northern Europe had in the early 80s; that's something to think about. And the price tag is under $300 (!). I don't know about you, but I always wanted a proper mainframe: a distributed computing platform that you can log in to, and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words: run the emulation at full speed server-side, and just stream the display and sound back to the Amibian.js client. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multithreading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world wide web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but that is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it's wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and produce applications for it. It's always a bit of work to reach that point and create critical mass.


Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and an ordinary internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

 

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of products: from a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can't wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

New article series on Delphi and C++ builder

August 7, 2018 4 comments

An army of Delphi developers

It’s been a while since I’ve done some hardcore Delphi articles, and since that is now my job I am happy that I can finally allocate a good chunk of time for that work. Dont worry, there will be plenty of Smart Pascal content too – but I think it’s time to clean up the blog situation a bit. This blog is personal and thus contains a pot-pourri of topics, from programming to 3d printing, embedded hardware to retro-gaming. It’s a fun blog, I enjoy being able to write about things I’m passionate about, but having one blog for each topic makes more sense.

So in the near future I think it's best that I publish Smart Mobile Studio content (except random stuff and drive-by posts) to http://www.smartmobilestudio.com, and Delphi content to Embarcadero's blog server. If nothing else it will be easier for the readers to deal with; if you only want to read about my Delphi escapades, then embedded and retro stuff is not always interesting.

Deep dive into Delphi and C++ builder

So what would be cool to write about? I spent the better part of last weekend pondering this. Delphi articles have a little blind spot between beginner and advanced that I would like to focus on. There are plenty of “learn Delphi” articles out there, and likewise a lot of very advanced topics. So hopefully my first series will hit where it should, and be interesting for those in between.

We need a light database

Let’s peek under the hood!

Right, so the last time I read about database coding – and I mean “making your own database engine” – was at least 10 years ago. The Delphi community has always been blessed with a large group of insightful and productive people, people who share their knowledge and help others. But everyone is working on something, and finding the time to deep dive into subjects like this is not always easy. So hopefully my series on this will at least inspire people to experiment, try new things and fall in love with Delphi like I did.

The second article series that I am working on right now is getting to grips with C++Builder. This is actually a very fun experiment, since it serves more than a single function: I mean, just how hard is it for a Delphi developer to learn C++? What can Embarcadero do to help developers feel comfortable on both platforms? What are the benefits for a Delphi developer in learning C/C++?

 


C++Builder Community Edition rocks!

And yes, I have had more than one episode where the new concepts drove me up the wall. It would be the world's shortest article series if Delphi Developer didn't have my back and I didn't buy books. Say what you will about modern programming, but sometimes you just need to sit down, turn off the computer, and read. Old school, but effective.

Reflections

Embarcadero is very different from what I expected. Before I worked here (which is still a bit surreal) I envisioned a stereotypical American company, located in some tall office building, utterly remote from its users and the needs of the punters in the field. This past week has forced me to reflect more than I would have liked, and my armour of strong opinions (if not arrogance) has a very visible dent; because the company that has welcomed me with open arms is anything but that imaginary stereotype.


Et in Borland ego sum

The core of Embarcadero turned out to be a team of dedicated developers who are literally bending over backwards to help as many developers as possible. I left yesterday's meeting with a taste of shame in my mouth, because on this blog I have given at least two of the people who now welcomed me a less than fortunate overhaul in the past. Yet they turned out to be human beings with the exact same interests, passions and goals as myself.

Building large-scale development tools is really hard work. Seriously. As a developer you forget things like marketing, the sales apparatus, the level of support a developer will need, documentation, tutorials. The amount of requests – conflicting requests, that is – from users is overwhelming. You have users who focus on mobile and don't care about legacy VCL support; then you have people who very much need VCL legacy support and don't care at all about mobile platforms. It's a huge list of groups, topics and goals that is constantly shifting and needs prioritization.

But all in all, the Delphi community and Embarcadero are in good shape. They have worked through a lot of old baggage that simply had to be transitioned, and the result is the change we see now: community editions and better dialog with the users. Compare that to the situation we had five years ago, or eight years ago for that matter. The changes have been many and the road long – but with a purpose: Delphi is growing at a healthy rate again.

What will you need and what will we do?

The goal of the Delphi articles is to implement the underlying mechanics of a database. I'm not talking about a “file of record” here or something like that, but a page and sequence based filestream and its support apparatus for managing blocks and available resources. This forms the basis of all databases, large or small. So we will be coding the nitty-gritty that has to be in place before you venture into expression parsing.
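To give a feel for the core concept before the articles land: the database file is treated as a long sequence of fixed-size pages that are read and written by index, never as one big blob. The series itself will build this in Delphi with streams; the fragment below is just a node.js illustration of the page idea, with the page size picked arbitrarily:

// Concept sketch only: a database file viewed as fixed-size pages.
// The article series implements this properly in Delphi.
const fs = require("fs");
const PAGE_SIZE = 4096; // arbitrary for this example

function readPage(fd, pageIndex) {
  const buffer = Buffer.alloc(PAGE_SIZE);
  fs.readSync(fd, buffer, 0, PAGE_SIZE, pageIndex * PAGE_SIZE);
  return buffer;
}

function writePage(fd, pageIndex, buffer) {
  fs.writeSync(fd, buffer, 0, PAGE_SIZE, pageIndex * PAGE_SIZE);
}

// usage: const fd = fs.openSync("test.db", "r+");

Everything else – block management, sequences, free lists – is bookkeeping built on top of this one primitive.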

If time allows I will implement support for filters, but naturally a full SQL parser would be over the top. The techniques demonstrated should be more than enough for a budding young developer to take the ball and run with it. The filter function is somewhat close to a “select” statement – and the essence of expression parsing will be in the filter code.

Note: I will skip memory mapping techniques, for one reason only: they can get in the way of understanding the core principles. Once you have the principles under wraps, memory mapping is the natural next step and evolution of the same ideas, so it will fall into place in due time.

You won't need anything special, just Delphi. Most of the code will be classical Object Pascal, but the parser will throw in some generics and operators, so this is a good time to download the Community Edition or upgrade to a compiler from this century.

The C/C++ articles will likewise have zero dependencies except the Community Edition of C++Builder. I went out and bought two books: C++ Primer (fifth edition), and The C++ Programming Language by Bjarne Stroustrup himself – which should be available on prescription, because I fell asleep reading it.

My frontal lobe is already reduced to jello at the sight of these books, but let's jump in with both feet and see what we make of it from a Delphi developer's point of view. I can't imagine it being more of a mess than raw WebAssembly, but C/C++ has a wingspan that rivals even Delphi, so it's wise not to underestimate the curriculum.

OK, let’s get cracking! I will see you all shortly and post the first Delphi article.

Nano PI Fire 3, part two

July 18, 2018 Leave a comment

If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.

Solving the power problem

Like mentioned in the previous article, a normal mobile charger (5 volt, 2 amps) is not enough to power the Nano-PI. Since I have misplaced my original PI power-supply (5 volt, 3 amps), I decided to cheat and plugged the power USB into my PC, which will deliver as much juice as the device needs. I don't have time to wait for a new PSU to arrive, so this will have to do.

But for the record (and underlined): a proper PSU with at least 2.5 amps is essential for using this board. I suggest you order the official Raspberry PI 3b power-supply. But if you find one with 3 amps, that would be even better.

Web performance

The question on everyone’s mind (or at least mine) is: how does the Nano-PI fire 3 perform when rendering cutting edge, hardcore HTML5? Is this little device a potential candidate for running “The Smart Desktop” (a.k.a Amibian.js for those of you coming from the retro-computing scene)?

Like I suspected earlier, X (the Linux windowing system) doesn't have drivers that deliver hardware acceleration at all.


Lubuntu is a sexy desktop, no doubt, but it's overkill for this device

This is quite easy to test: select a rectangle on the Lubuntu desktop and move the mouse-cursor around (holding down the left mouse button at the same time). If it lags terribly, that is a clear indicator that no acceleration exists.

And I was right on the money, because there is no acceleration whatsoever in this Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.

But hardware acceleration is not just about the desktop. It's not some flag you enable that magically affects everything; rather, it consists of several APIs at either the kernel level or the immediate driver level (modules the kernel loads), each affecting different aspects of a system.

So while the desktop “2D blitting” is clearly CPU driven, other aspects of the system can still be accelerated (although that would be weird and rare; but considering how Asus messed up the Tinkerboard, I guess anything goes these days).

Asking Chrome for the hard facts

I fired up Google Chrome (which is the default browser, thank God) and entered the magic URL:

chrome://gpu

This is a built-in page that provides a detailed report of what Chrome learns about the current system, right down to specific GPU features used by OpenGL.

As expected, there was NO acceleration whatsoever. So I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.

FriendlyCore vs Lubuntu? QT for the win

Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least: no hardware acceleration, impressive CPU results, but still – what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or host services, then you don't need a beefy GPU. But here is the twist:

Turns out the makers of the board have a second, QT oriented distro called FriendlyCore. And this image has OpenGL ES support and all the acceleration missing from Lubuntu.

I was pretty annoyed with how Asus gave users the run-around with Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. Friendly-elec might want to learn from Asus' mistakes in this area.


QT has a rich history, but it’s being marginalized by node.js and Delphi these days

Alas, the FriendlyCore Xenial 4.4 Arm64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). So this is for QT developers who want to use the board as a single-application system: they write the code on Windows or Linux, compile, and it's all transported to the board with live debugging back to the devtools they use. In other words: not very useful for non C/C++ QT developers.

Android Lollipop

I have only used Android on a pad and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the SD-card and booted up.

After 20 minutes with a blank screen, I gave up.

I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn't even show up — and that should be visible almost immediately, regardless of whether it's a network install or not.

This is really a great shame, because I wanted to test some Delphi Firemonkey applications on it, to see how well it scales the more demanding GPU tasks. And yes, I did try a different SD-card to be sure it wasn't a disk error. Same result.

Back to Lubuntu

Having spent a considerable amount of time trying to find a “wow” factor for this board, I have to just surrender to the fact that it's not there. This is not a “PI” any more than the Tinkerboard is a PI. And appending “pi” to a product name will never change that.

I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and live-debug directly on the hardware. But for general DIY hacking, for native Android development with Delphi, for node.js development with Smart Mobile Studio – or for just kicking back with emulators like Mame, UAE or whatever tickles your fancy — it's just too rough around the edges. Which is really a shame!

So at the end of the day I re-installed Lubuntu, and figure I just have to wait until Friendly-elec get their act together and issue proper drivers for the Mali GPU. So it's $35 straight out the window — but I can live with that. It was a risk, but at that price it's not going to break the bank.

The positive thing

The Nano-PI Fire 3 is yet another SBC in a long list that falls short of its potential. Like many others, they try to use the word “PI” to channel some of the Raspberry PI enthusiasm their way – but the quality of the actual system is not even close.

In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.


The Nano rendered Amibian.js running some very demanding demos 4 times as fast as the PI 3b; one can only speculate what the board could do with proper drivers for the GPU.

The only clear positive feature the Fire-3 has to offer is abundantly more CPU power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive. This is a $35 board after all, the same price as the PI.

But without proper drivers for the Mali, it's a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you compiled the latest UAE4Arm with SDL as its primary display framework, it wouldn't help, because SDL depends on the graphics drivers. So it's back to square one.

But the CPU packs a punch, that is without question.

Final verdict

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

There are a lot of stable and excellent options out there, take your time

I was planning to test UAE next, but as I have outlined above: without drivers that properly expose and delegate the power of the Mali, it would be a complete disaster. I'm not even sure it would build.

As such I will just leave this board as-is. If it matures at some point, that would be great; but my advice to people looking for a great SBC experience is: get the new Raspberry PI 3b+ and enjoy learning and exploring there.

And if you are into Amibian.js or making high quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. Or, if you pay $55, you can pick up the Asus Tinkerboard, which is blistering fast and great value for money, despite its turbulent introduction.

Note: You cannot go wrong with the ODroid XU4. It's affordable, stable and fast. So for beginners it's either the Raspberry PI 3b+ or the ODroid. These are the most mature in terms of software, drivers and stability.

Amiga OS 4, object pascal and everything

August 2, 2017 2 comments

Those who read my blog know that I'm a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly the company had to file for bankruptcy after a series of absurd financial escapades by its management.


The original team before it fell prey to mismanagement

The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and with the death of Commodore, technological innovation took a turn for the worse.

Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up on the story.

On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the 90s, today, 30 years after the fact, I still stumble over amazing source-code for this awesome computer. There are a few things in Amiga OS that “hint” at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1984 – and that much of what we regard as a modern desktop experience is actually inherited from the Amiga.


Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]

As I type this, the Amiga is going through a form of revival. It's actually remarkable to be a part of this, because the scope of such an endeavour is monumental. But even more impressive is just how many people are involved. It's not like some tiny “computer cult” where a bunch of misfits hang out in sad corners of the internet. Nope, we are talking about thousands of educated and technical people who still use their Amiga computers on a daily basis.

For instance: the realization of the new Amiga models has cost £1.2 million, so there are serious players involved in this.

The user-base is varied of course; it's not all developers and engineers. You have gamers who love to kick back with some high quality retro-gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks, then use them to spice up otherwise flat and dull PC based tracks.

What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand punching hex-codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised just how capable the classic systems are.

And yes, people code games, demos and utility programs for the classical Amiga systems even today. I just installed a Dropbox cloud driver on my system and it works brilliantly.

The brand new Amiga

Classic Amiga machines are awesome, but this post is not about the old models; it's about the new models that are coming out now. Yes, you read that right: next generation Amiga computers that have finally become a reality. Having waited for 22 years, I am thrilled to say that I just ordered a brand new Amiga 5000! (And can't wait to install Freepascal and start coding.)

It’s also quite affordable. The x5000 model (which is the power system) retails at around €1650, which is roughly half the price I paid for my Intel i7, Nvidia GeForce GTX 970 workstation. And the potential as a developer is enormous.

Just think about the onslaught of Delphi code I can port over, and how instrumental my software can become by getting in early. Say what you will about Freepascal but it tends to be the second compiler to hit a platform after GCC. And with Freepascal in place a Delphi developer can do some serious magic!

Right. So the first Amiga is the power model, the Amiga 5000. This can be ordered today. It costs the same as a good PC (in the €1600 range, depending on import tax and VAT). This is far less than I paid for my crap iMac (which I never use anymore).

The power model is best suited for people who do professional work on the machine. Software development doesn’t necessarily need all the firepower the x5000 brings, but more demanding tasks like 3d rendering or media composition will.

The next model is the A1222, which is due out around x-mas 2017 / first quarter 2018.


The A1222 “Tabour”

You would perhaps expect a mid-range model, something retailing at around €800 or thereabouts – but the A1222 is without a doubt a low-end model.

It should retail for roughly €450. I think this is a great idea, because AEON (who make the hardware) have different needs from Hyperion (who make the new Amiga OS [more about that further into the article]). AEON needs to get enough units out to secure the foundation, while Hyperion needs vertical market penetration (read: becoming popular and hitting other hardware platforms as well). These factors are mutually exclusive, just like they are for Windows and OS X. Which is probably why Apple refuses to sell OS X without a Mac; they could end up competing with themselves.

A brave new Amiga OS

But there is more to this “revival” than just hardware. Many would even say that hardware is the least interesting part of the next generation systems, and that the true value at this point in time is the new and sexy operating system. Because what the world needs now, more than hardware (in my opinion), is a lightweight alternative to Linux and Windows. A lean, powerful, easy to use, highly customizable operating system that will happily boot on a $35 Raspberry PI 3b or a $2500 Intel i7 monster. Something that makes computing fun, affordable and most of all: portable!


My setup of Amiga OS 4, with FPC and Storm C/C++

And by lean I have to stress that the original Amiga operating system – the classic 3.x system that was developed all the way to the end – was initially created to thrive in as little as 512kb. At most I had 2 megabytes of ram in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a multi-tasking, window based OS a decade before Microsoft.

Naturally the next-generation systems are built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes. Not even Windows Embedded can boot on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.

What way to go?

OK, so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has – believe it or not – done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior, and to add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually, Amiga OS 4 is absolutely gorgeous. Just stunning to look at.

Now there are many different theories and ideas about where a new Amiga should go. Sadly it's not as simple as “hey, let's make a new Amiga”; the old system is literally boiled in patent and legislation issues. It is close to an investor's worst nightmare, since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven't seen a new Amiga until now is that the owners have been fighting between themselves. The Amiga as we know it has been caught in limbo for close to two decades.

My stance on the whole subject is that Trevor Dickenson, the man behind the next generation Amiga systems, has done the only reasonable thing a sane human being can do when faced with a proverbial patent kebab: go for a full re-implementation of both OS and hardware. The old hardware is magical for those of us who grew up on it – but by today's standards these machines are obsolete dinosaurs. The same can be said about Amiga OS 3.9.

The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform independent (or at least written in a way that makes the code run on different hardware, as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, a community made Amiga OS clone. A project that has been perpetually maintained for 20 years now.


Aros, a community re-implementation of Amiga OS for x86

While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out on both systems – and don’t get me wrong: I am thrilled that Aros is available, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion of course.

New markets

Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn't take long before a few thrilling opportunities present themselves.

The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same that haunts OS X and Windows, namely that size and complexity grow proportionally over time. I have seen Linux systems as small as 20 megabytes, but for running X based full screen applications, taking advantage of hardware accelerated graphics, you really need a bigger infrastructure. And the moment you start adding those packages, Linux puts on weight and dependencies fast!


The embedded market is one place where Amiga OS would do wonders

With embedded systems I'm not just talking about headless servers or single application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows Embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi and C++ based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.

This price-tag is high considering the tasks you need to do in a POS terminal or ticket system. You usually have a touch enabled screen, a network connection, and a local database that caches information should the network be down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a Visa terminal is included, then a USB driver must also be factored in.

These tasks are not heavy in themselves. So in theory a smaller system, if properly adapted, could do the same if not a better job – at a much better price.

More for less, the Amiga legacy

Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of ram for a heavy-duty visual application, full network stack and database services, Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.

Already we have cut costs. Power ARM boards ship with 4 gigabytes of ram, powered by a snappy ARM v9 cpu – while medium boards ship with 1 or 2 gigabytes of ram and a less powerful cpu. The price difference is already a good 75€ on ram alone. And if the CPU is a step down, from ARM v9 to ARM v8, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).

The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this, since I don't have access to the machine I ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there are still plenty of embedded systems based on PPC. But yes, I would urge our good friend Trevor Dickenson to establish a migration plan to ARM, because it would kill two birds with one stone: upgrading the faithful Amiga community while entering the embedded market at the same time. Since the same hardware is involved, these two factors would stimulate the growth and adoption of the OS.


The PPC platform gives you a lot of bang-for-the-buck in the A1222 model

But for the sake of argument, let's say that Amiga OS 4 scales exceptionally well, meaning that it will happily run on ARM v8 with 1 gigabyte of ram. This would mean that it would run on systems like the Asus Tinkerboard, which retails at 60€ incl. VAT. This would naturally not be a high performance system like the A5000, but embedded is not about that – it's about finding something that can run your application safely, efficiently and without problems.

So if the OS scales gracefully on ARM, we have brought the cost down from 300€ to 60€ for the hardware (I would round that up to 100€; something always comes up). If the customer's software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.

Think different means different

Already I can hear my friends who are into Linux yell that this is rubbish, and that Linux can be scaled down from 8 gigabytes to 20 megabytes if so needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven't spent a considerable amount of time learning it. It's not a system you can just jump into and expect to have results the next day. Amiga OS has a much friendlier architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.

Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry PI. And based on personal experience I have to agree: in the past 20 years I have only seen 2 companies that use Linux as their primary OS, both in products and in their offices. Everyone else uses Windows Embedded for their products and day-to-day management.

So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few who use it on a hobby basis. We simply don't have time to dig into esoteric “man pages” or explore the intricate secrets of the kernel.

The end result is that companies go with what they know. They get Windows embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product — they are stuck with a 350€ baseline.

Be the change

The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those who have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest, Windows Embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous, and it won't perform well unless you throw at least 2 gigabytes at it (relative to the task of course, but in broad strokes that's the ticket).

But wouldn’t it be nice with a clean, resource friendly and extremely fast alternative? One where auto-starting applications in exclusive mode was a “one liner” in the startup-sequence file? A file which is actually called “startup-sequence” rather than some esoteric “init.d” alias that is neither a folder or an archive but something reminiscent of the Windows registry? A system where libraries and the whole folder structure that makes up drivers, shell, desktop and service is intuitively named for what they are?


Amiga OS could piggyback on the wave of low-cost ARM SBCs that are flooding the market

You could learn how to use Amiga OS in 2 days tops; and it holds great depth, so you can grow with the system as your needs become more complex. The architecture is so well-organized that even if you know nothing about settings, a folder named “prefs” doesn't leave much room for misinterpretation.

But the best thing about Amiga OS is by far how elegantly it has been architected. You know, when software is planned right, it tends to factor out things that would otherwise be an obstacle. It's like a well oiled machine where each part makes perfect sense and you don't need a huge book to understand it.

From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC – but rather ARM and embedded systems.

It would take an effort to port over the code from a PPC architecture to ARM, but having said that – PPC and ARM have much more in common than say, PPC and x86.

I also think the time is ripe for a solid power ARM board for desktop computers. While the smaller boards get most of the attention – the Raspberry PI, the ODroid XU4 and the (S)Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch, and as expected it ships with 4 gigabytes of ram out of the box.

While it’s impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered terrible performance compared to the A1222 chipset. I am also sure that if we beefed up the price to 700€, aimed at home computing rather than embedded – the ARM power boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn’t necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.

Another cool factor regarding ARM is that the BIOS gives you a great deal of features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is supported by the SoC itself, and you don't need to write filesystem drivers for it. Most boards also support the Ext2 and Ext3 filesystems, recognized automatically on boot. The rich BIOS/mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level tasks that took months to get right in the past.

Final words

This has been a long article, from the early years of Commodore – all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga perhaps pick up an idea or two from this material.

Either way I will support the Amiga with everything I've got – but we need a couple of smart ideas and concrete plans on behalf of management. And in my view, Trevor is doing exactly what is needed.

While we can debate the choice of PPC, it’s ultimately a story with a long, long background to it. But thankfully nothing is carved in stone and the future of the Amiga 5000 and 1222 looks bright! I am literally counting the days until I get one!

LDef parser done

July 21, 2017 Leave a comment

Note: For a quick introduction to LDef click here: Introduction to LDef.

Great news guys! I finally finished the parser and model builder for LDef!

That means we just need to get the assembler ported. This is presently running fine under Smart Pascal (I like to prototype things there since it's faster) – and it will be easy to port it over to Delphi and Freepascal after the model has gone through the steps.

I’m really excited about this project and while I sadly don’t have much free time – this is a project I truly enjoy working on. Perhaps not as much as Smart Pascal which is my baby, but still; its turning into a fantastic system.

Thoughts on the architecture

One of the things I added support for – something I have hoped Embarcadero would add to Delphi for a number of years now – is contract coding. This is a huge topic that I'm not jumping into here, but one of the features it requires is support for entry and exit sections. Essentially, you can define code that executes before the method body, and directly after it has finished (before the result is returned, if it's a function).

This opens up some very clever means of preventing errors, or at the very least giving the user better information about what went wrong. Automated tests also benefit greatly from this.

For example, a normal Object Pascal method looks like this:

procedure TForm1.MySpecialMethod;
begin
  writeln('You called my-special-method');
end;

Contract design builds on this classical form and expands it like this:

procedure TForm1.MySpecialMethod;
  Before()
  begin
    writeln('Before my-special-method');
  end;

  After()
  begin
    writeln('After my-special-method');
  end;

begin
  writeln('You called my-special-method');
end;

Note: contract design is a huge system and this is just a fragment of the full infrastructure.

What is cool about the before/after snippets is that they allow you to verify parameters before the body is even executed, and likewise work on the result before the value is returned (if any).

You might ask: why not just write such tests directly, like people do all the time? Well, that is true. But there will also be methods that you have no control over, like a wrapper method that calls a system library for instance. Being able to attach before/after code to externally defined procedures helps take the edge off error testing.

Secondly, if you are writing a remoting framework where variant data and multi-threaded invocation are involved, being able to check things as they are dispatched means catching potential errors faster – leading to better performance.

As always, coding techniques are a source of argument – so I'm not going into that now. I have added support for it, and if people don't need it, then fine, just leave it be.

Under LDef assembly it looks like this:

public void main() {
  enter {
  }

  leave {
  }
}

Well, I guess that's all for now. Hopefully my next LDef post will be about the assembler being ready – leaving just the linker. I need to experiment a bit with the codegen and linker before the unit format is complete.

The bytecode format needs to include enough information so that the linker can glue things together. So every class, member, field etc. must be emitted in a way that allows the linker to quickly look things up. It also needs to write the actual, resulting method offsets into the bytecode.
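Conceptually, that means each compiled unit carries a small symbol table the linker can index quickly. Something along these lines – where the layout is of course just an illustration and not the final LDef unit format:

// Illustration only; not the actual LDef unit format.
// Each emitted symbol records enough for the linker to patch call sites.
var symbolTable = {
  "TForm1.MySpecialMethod": {
    kind: "method",
    unit: "mainform",
    codeOffset: 0x01a4 // resolved method offset in the bytecode
  }
};

// The linker walks unresolved references and writes in real offsets:
function resolve(reference) {
  var symbol = symbolTable[reference.name];
  if (!symbol) {
    throw new Error("Unresolved symbol: " + reference.name);
  }
  reference.patch(symbol.codeOffset); // hypothetical patch helper
}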

Have a happy weekend!

Smart Pascal: Amibian vs. FriendOS

July 20, 2017 Leave a comment

This is not a new question, and despite my earlier post I still get hammered with these on a weekly basis – so let's dig into this subject and clean it up.

I fully understand that for non-developers, suddenly having two Amiga-like web desktops can be a bit confusing; especially since, superficially at least, they do many of the same things. But there is actually a lot of coincidence surrounding all this, as well as natural evolution of the general topic. People who work in a field will naturally come up with the same ideas from time to time.

But OK, let's dig into this and clear away any confusion.

You know about FriendOS right? It looks a lot like Amibian


Custom native web servers has been a part of Delphi for ages, so it’s not that exciting for us

“A lot” is probably stretching it. But OK: FriendOS is a custom server system with a sexy desktop front-end written in HTML5. So you have a server that is custom written to interact with the browser in a special way. This might sound like a revolution to non-developers, but it's actually established technology; it's been a part of Delphi and C++Builder for at least 12 years now (Intraweb being the best example, Raudus another). So if you are wondering why I'm not dazzled, it's because this has been there for a while.

The whole point of Amibian.js is to demonstrate a different path: to get away from the native back-end and to make the whole system portable and platform independent. So in that regard the systems are diametrically different.


Custom web servers that talk to your web-app are old news. Delphi developers have done this for a decade at least, and it's not really interesting at this point. Node.js holds much greater promise.

What FriendOS has done that is unique, and that I think is super cool, is to couple their server with RDP (remote desktop protocol) and some nice video streaming for smooth video chat. Again, these are off-the-shelf parts that anyone can add once you have a native back-end; it's not really hard to code, but it is time-consuming, especially when you are potentially dealing with a large number of users spawning threads all over the place. I think Friend Labs have done an exceptionally good job here.

When you combine these features it creates the impression of an operating-system-like environment. And this is perfectly fine for ordinary users. It all depends on your needs and what exactly you use the computer for.

And just to set the war-mongers straight: FriendOS is not going up against Amibian. It's going up against ChromeOS, Nayu and a ton of similar systems, all of them with deep pockets and an established software portfolio. We focus on software development. Not even in the same ballpark.

To be perfectly frank: I see no real purpose for a web desktop except when connected to a context. There has to be an advantage beyond isolating web functions in one place. You need something special that your system does better or differently than others. Amibian has been about UAE.js and running retro games in a familiar environment – creating a base that Amiga lovers can build on and play with; again based on our prefab for customers that make embedded systems and use our compiler and RTL for that.

If you have a hardware product like a NAS, a ticket system or a retro-game machine and want a nice web front-end for it: then it makes sense. But there is absolutely nothing in either of our systems that you can't whip up using Intraweb or Raudus in a few weeks. If you have the luxury of a native back-end, then adding Active Directory support is a matter of dropping a component. You can even share printers and USB devices over the wire if you like; this has been available to Delphi and C++ developers for ages. The “new” factor here, which FriendOS does very well I might add, is connectivity.

This might sound like criticism, but it's really not. It's honesty and facts. They are going to need some serious cash to take on Google, Samsung, LG and the various other players that have been doing similar things for a long time (or are about to jump on the same concepts) — Amibian.js is for Amiga fans and people who use Smart Pascal to write embedded applications. We don't see anything to compete with, because Amibian is a prefab connected to a programming language. FriendOS is a unification system.

A programming language doesn't have the aspirations of a communications company. So the whole “oh, who is best” or “are you the same” debate is just wrong.

Ok you say it’s not competing, but why not?

To understand Amibian.js you first need to understand Smart Pascal (see Wikipedia article on Smart Pascal). Smart Pascal (smartmobilestudio.com) is a software development studio for writing software using web technology rather than native machine-code. It allows you to create whatever you like, from games to servers, or kiosk software to the next Facebook clone.

Our focus is on enabling our customers to quickly program robust mobile applications, servers, kiosk software, games or large JavaScript projects; products that would otherwise be hard to manage if all you have is vanilla JavaScript. I mean, why spend 2 years coding something when you can do it in 2 months using Smart? So a web desktop seems almost trivial when you understand how large our codebase is and the scope of the product.


Under Smart Pascal, what people know as Amibian.js is just a project type. There is no competition between FriendOS and Amibian, because a web desktop represents a ridiculously small piece of our examples; it's literally mistaking the car for the factory. Amibian is not our product; it is a small demo and prefab (pre-fabricated system that others can download and build on) that people use to save time. So under Smart, creating your own web desktop is a piece of cake; it's a click, and then you can brand it, expand it and do whatever you like with it. Just like you would with any project you create in Visual Studio, Delphi or C++Builder.

So we are not in competition with FriendOS, because we create and deliver development tools. Our customers use Smart Pascal to create web environments both large and small, and naturally we deliver what they need. You could easily create a FriendOS clone in Smart if you have the skill, but again – that is but a tiny particle in our codebase.

Really? Amibian.js is just a project under Smart Pascal?

Indeed. Our product delivers a full object-oriented pascal compiler, debugger and IDE. So you can write classes, use inheritance and enjoy all the perks of a high-level language — and then compile this to JavaScript.

You can target node.js, the browser, and 90+ embedded devices out of the box. The whole point of Smart Pascal is to avoid the PITA that is writing large applications in JavaScript. And we do this by giving you a classical programming language that was made especially for application authoring, and then compiling that to JavaScript instead.


Amibian.js is just a tiny, tiny part of what Smart Pascal is all about

This is a massive undertaking that started back in 2009/2010 and involves a high-quality compiler, linker, debugger and code generator; a full IDE with a ton of capabilities and last but not least: a huge run-time library that allows you to work with the DOM (document object model, or HTML) and node.js from the vantage point of a programmer.

Most people approach web development as designers: they write HTML and then style it using a stylesheet. They work with colors, aspects and pages. Which means people who traditionally write programs fall between two chairs: first they must learn HTML and CSS, and secondly a language which is ill equipped for large scale applications (imagine writing Adobe Photoshop in nothing but JS. Sure, it's possible, but wouldn't you rather spend a month coding it than a year? In a language that actually makes sense?).

With Smart you approach web development like you do writing programs. You work with visual controls, change properties and write code in response to events. Even writing your own visual controls that you can re-use and inherit from later is both fun and easy. So rather than ending up with a huge mass of spaghetti code, which sadly is the fate of most large-scale JavaScript projects — Smart lets you work like you are used to, in a language better suited for the task.
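As a concrete (but hedged) example of what "writing your own visual control" looks like – the class and method names below are from my memory of the Smart RTL, so treat them as assumptions rather than gospel:

type
  TMyPanel = class(TW3CustomControl)
  protected
    procedure InitializeObject; override;
    procedure HandleMyClick(sender: TObject);
  end;

procedure TMyPanel.InitializeObject;
begin
  inherited;
  // wire up an event just like you would in Delphi
  OnClick := HandleMyClick;
end;

procedure TMyPanel.HandleMyClick(sender: TObject);
begin
  writeln('panel clicked');
end;

Drop that on a form, and the compiler turns it into a DOM element with a JavaScript click handler – no hand-written HTML or CSS required.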

And yes, I was not kidding when I said this was a huge undertaking. The source code in our codebase is close to 2.5 gigabytes. And keep in mind that this is source-code and libraries. So it’s not something you slap together over the weekend.


The Smart source-code is close to 2.5 gigabytes. It has taken years to complete

But why do Amibian and FriendOS both focus on the Amiga?

That is pure coincidence. The guys over at Friend Labs started out on the Amiga just like we did. So when I updated our desktop project (previously called Quartex Media Desktop) the Amiga look and feel came naturally to me.

I'm a huge retro-computing fan that loves the Amiga. When I sat down to rewrite our window manager I loved the way Amiga OS 4.x looked, so I decided to implement a UI inspired by that.

People have to remember that the Amiga was a huge success in Scandinavia, so finding developers that are in their late 30s or early 40s that didn’t own an Amiga is harder than you think.

So the fact that we all root our ideas back to the Amiga is both coincidence and a mutual passion for a great platform. One that really should have survived the financial onslaught of fat CEOs and their minions in the board.

But Amibian does a lot of what FriendOS does?

Probably. JavaScript is multi-tasking by default, so if loading external URLs into window containers, doing live resize and other things is what you refer to, then yes. But that is the nature of web programming. It's like creating a bucket if you want to carry water; it is a natural first step of an evolutionary pattern. It's not like FriendOS is copying us, I would imagine.


For the record, Smart started back in 2010 and the media desktop came in with the first hotfix, so it has been available for years before Friend-Labs even existed. Creating a desktop has never been a huge part of what we do: mobile applications, building a rich and solid run-time library with hundreds of classes for our customers – and making an IDE that is great to use – that is our primary job.

We didn’t even know FriendOS existed. Let alone that it was a Norwegian product.

But you posted that you worked for FriendOS earlier?

Yes I did, very briefly. I was offered a position and I worked there for a month. It was a chance to work side by side with legends like David John Pleasance, ex-head of Commodore for Europe; and also my childhood hero François Lionet, author of AMOS Basic for the Amiga back in the 80s and 90s.

blastfromthepast

We never forget our childhood heroes

Sadly we had our wires crossed. I am an awesome object pascal developer, while the guys at Friend-Labs are awesome C developers. I work primarily on Windows while they work mostly on Linux. So in essence they hired a Delphi developer to work in a language he doesn't know, on a platform he hasn't used.

They simply took for granted that I worked in C/C++, while I took for granted that they used object pascal. It's an easy mistake to make and it's not the first time; and probably not the last.

Needless to say, the learning curve would be extremely steep for any developer (learning a new operating system and a new programming language at the same time as you are supposed to be productive).

When my girlfriend suddenly faced a life-threatening illness the situation became worse. It was impossible for me to commute or leave her side for the foreseeable future; so when you add the six-month learning curve to this situation – six months of not being able to contribute at the level I am used to – well, I am old enough to know how that ends. So I did what was best for everyone and resigned.

Besides, I am a damn good Delphi developer with standing invitations from many companies, so it made more sense to just take a step backwards. Which was not fun, because I really enjoyed the short time I was there. But it was not meant to be.

And that is basically all there is to it.

Ok. But if Smart is a development tool, will it support Friend-OS?

This is something that I really want to do. But since The Smart Company is a proper company with stocks, shareholders and investors – it’s not a decision I can take on my own. It is something that must be debated by the board. But personally yeah, I would love that.

friend

As they grow, so does the need for proper development tools

One of the reasons I hope FriendOS succeeds is that it's a win-win situation. The more they expand, the more relevant Smart becomes. Say what you will about JavaScript, but writing large and complex applications in it is not easy by any measure.

So the moment we introduce Smart Pascal for Friend, their users will be able to write large applications rapidly, with better time-to-market and consequent ROI. So it's a win-win. If they succeed then we get a bigger market; if they don't, we haven't lost anything.

This may sound extremely self-serving, but Friend-Labs have had the same chance as everyone else to invest in Smart; our investor plans have been available for quite some time, and we have to do what is best for our company.

But what about Amibian, was it just a short thing?

Not at all. It is put on hold for a few months while we release the next-generation RTL, which is probably the biggest update in the history of Smart Pascal. We have a very clear agenda ahead of us and Amibian.js is (as underlined) a very small part of what we do.

But Amibian is written using our next-generation RTL, and without that our customers can't really do much with it. So it's important to get the RTL out first and then work on the IDE to reflect its many new features. After that – Amibian.js development will continue.

The primary target for Amibian.js is embedded devices and kiosk systems, coupled with full-screen web applications and hardware front-ends (NAS and backup devices being great examples). So the desktop will run on affordable, off-the-shelf hardware starting at $40 and all the way up to the most powerful and expensive x86 boards on the market. Cheap solutions like the Raspberry PI, ODroid XU4 and Tinkerboard will deliver what today requires a dedicated $120 x86 board to achieve.

kiosk-systems

Our desktop will run on many targets and is platform independent by design

This means that our desktop has a wildly different modus operandi. We will not require a constant connection to a remote server. Amibian will happily boot up on a single device, regardless of processor type.

Had we coded our backend using Delphi or C++Builder (native, like FriendOS have done) we would have been finished months ago. And I could have caught up with FriendOS in a couple of months if I wanted to. But that is not on our agenda. We have written our server framework for node.js as we coded the desktop – which means it's platform and OS agnostic by design. If node.js runs, Amibian will run. It won't care if you are running on a $40 embedded board or the latest Intel i9 CPU.

Last words

I really hope this has helped clear up the confusion between Amibian.js and our agenda, versus what Friend-Labs is doing.

Amibian666

From Norway with love

I wish Friend-Labs the very best and hope they are successful in their endeavour. They have worked very hard on the product and deserve that. And while I might come over as arrogant at times, I'm really not.

Web desktops have been around for a long time now (Asustor's is my favorite), also built with Delphi and C++Builder, and that is just a fact. But that doesn't mean you can't put things together in new and interesting ways! Smart itself was first put together from existing technology. It was said to be impossible by many because JavaScript and object pascal are unthinkable companions. But it turned out to be a perfect match.

As for the future – personally I don't believe in the web-desktop outside a specific context, something to give it purpose if you like. I believe, for instance, that Amibian.js will be awesome for Amiga users when it's running on a $99 ARM laptop; where the system boots straight into a full-screen desktop and where UAE.js is fully integrated into the core, making retro-gaming and running old programs close to seamless. That I can believe in.

But it would make no sense running Amibian or FriendOS in a browser on top of a Windows desktop or a full Ubuntu X session. Unless the virtual desktop functions as your corporate window with access to company mail, documents and essentially what every web-based intranet already does. So once again we end up with the fact that this has already been done. And unless you create a unique context for it, it just won't have any appeal. This is also why I haven't pursued the same tech Friend-Labs have, because that's not where the exciting stuff is happening.

But I will happily be proven wrong, because that means an even bigger market for us should we decide to support the platform.

LDef and bytecodes

July 14, 2017 Leave a comment

LDef, short for Language Definition format, is a standard I have been formulating for a couple of years. I have taken my experience with writing various compilers and parsers, and also my experience of writing RTLs, and combined it all into a standard.

LDef is a way for anyone to create their own programming language. Just like popular libraries and packages deal with the low-level stuff – like Gr32, which is an excellent graphics library – LDef deals with the hard stuff and leaves you with the pleasant job of defining what the language should look like.

The idea is to make a language construction kit, if you like, where the underlying engine is flexible enough to express the languages we know and love today – and also powerful enough to express new ideas. For example: let's say you want to create an awesome new game system (just as an example; it applies to any system that can be automated). You have the means and skill to create the actual engine – but how are you going to market it? You will be up against monoliths like Unity and simple "click and play" engines like ClickTeam Fusion, Game Maker and the like.

Well, the only way to make good games is hard work. There are no two ways about it. You can fake your way only so far – so at the end of the day you want to give your users something solid.

In our example of publishing a game-engine, I think that you would stand a much better chance of attracting users if you hooked that engine up to a language. A language that is easy to use, easy to learn and with commands that are both specific and non-specific to your engine.

There are some flavours of Basic that have produced knock-out games for decades, like BlitzBasic. That language alone has produced hundreds of titles for PC, Xbox and even Nintendo. So it's insanely fast and not a pushover.

And here is the cool part about LDEF: it makes it easy for you to design your own languages. You can use one of the pre-defined languages, like object pascal or visual basic if that is what you like – but ultimately the fun begins when you start to experiment with new ideas and language features. And it's fun when you get to that point, because all the nitty-gritty is handled. You get to focus on the superficial stuff like syntax and high-level functions. So you can shave off quite a bit of development time and make coding fun again!

The paradox of faster bytecodes

Bytecodes used to be too slow for anything substantial. On 16-bit machines bytecodes were used in maybe one language (that I know of) and that was the 'E' compiler. The E language was maybe 30 years ahead of its time and is probably the only language I can think of that fits cloud programming like hand in glove. But it was also an excellent system automation language (scripting) and really turned some heads back in the late 80s and early 90s. REXX was only recently added to OS X, some 28 years after the Amiga line of computers introduced it to the general public.

ldef_bytecodes

Bytecode dump of a program compiled with the node.js version of the compiler

In modern times bytecodes have resurfaced through Java and the .NET framework, which for some reason caused a stir in the whole development community. I honestly never bought into the hype, but I am old enough to remember the whole story – so I'm probably not the Microsoft demographic anyway. Java hyped their virtual machine opcodes to the point of exhaustion, and man did they do a number on CEOs and heads of R&D around the world.

Anyway, the end of the story was that Intel and AMD went with it and did some optimizations that could help bytecodes run faster. The stack handling was optimized with Java in mind, because let's face it – the Java model is the proverbial assault on the hardware. And the cache was expanded on command from the emper.. eh, Microsoft. Also (if I remember correctly) the "jump to pointer" and various branch instructions were made to execute faster. I remember reading about this in Dr. Dobb's Journal and Microsoft Developer Magazine; granted, it was a few years ago. What was interesting is the symbiotic relationship that exists between Intel and Microsoft; I really didn't know just how closely knit these guys were.

Either way, bytecodes in 2017 are capable of a lot more than they ever were on 16-bit and early 32-bit systems. A CPU like the Intel i5 or i7 will chew through bytecodes like a warm knife through butter. It depends on how you orchestrate the opcodes and how much work you delegate to the various instructions.

Modeled instructions

Bytecodes are cool, but they have to be modeled right or it's all going to end up as a bloated, slow and limited system. You don't want to be too low-level, otherwise what is the point of bytecodes? Bytecodes should be a part of a bigger picture, one that could some day be modeled using FPGAs for instance.

The LDef format is very flexible. Each instruction is ultimately a single 32-bit longword (4 bytes) where each byte holds key information about the command, what data follows in the cache and how it should be read.

The byte organization is:

  • 0 – Actual opcode
  • 1 – Instruction layout

Depending on the instruction layout, the next two bytes can hold different values. The instruction layout is a simple value that defines how the data for the instruction is passed.

  • Constant to register
  • Variable to register
  • Register to register
  • Register to variable
  • Register to stack
  • Stack to register
  • Variable to variable
  • Constant to variable
  • Stack to variable
  • Program counter (PC) to register
  • Register to Program counter
  • ED (exception data) to register
  • Register to exception-data
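To make the encoding concrete, here is a small Pascal sketch of how such a 32-bit instruction word could be unpacked. The field names and the exact byte order are my own illustration, not the normative LDef layout:

type
  TLDefInstruction = record
    Opcode:   byte;  // byte 0 - the actual opcode
    Layout:   byte;  // byte 1 - how the operand data is passed
    OperandA: byte;  // bytes 2 and 3 - meaning depends on Layout
    OperandB: byte;
  end;

function DecodeInstruction(const Value: longword): TLDefInstruction;
begin
  result.Opcode   := Value and $FF;
  result.Layout   := (Value shr 8) and $FF;
  result.OperandA := (Value shr 16) and $FF;
  result.OperandB := (Value shr 24) and $FF;
end;

With the layout byte in hand, the runtime knows whether the data that follows is a constant, a variable reference, a register index and so on.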

As you can probably work out from the layout list above, it hints at some architectural features. Variables are first-class citizens in LDef; they are allocated, managed and released using instructions. Constants can either be handled automatically and referenced by id (a resource chunk is linked to the class binary) or placed "in place" and compiled directly into the assembly as part of the instruction. For example:

load R[0], "this is a test"

This line of code will take the constant “this is a test” and move it into register #0. You can choose to have the text-data stored as a proper resource which is appended to the compiled bytecode (all classes and modules have a resource chunk) or just compile “as is” and have the data read directly. The first option is faster and something you can adjust with compiler optimization options. The second option is easier to work with when you debug since you can see the data directly as a part of the debug memory dump.

And last but not least there are the registers – 32 in number (so the low-level coders out there should have few limitations with regard to register mapping). All operations (like divide, multiply etc.) operate on registers only. So to multiply two variables, they first have to be moved into registers and the multiplication is executed there – then you can move the result back to a variable afterwards.
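So conceptually, a statement like c := a * b breaks down into something along these lines (the mnemonics are extrapolated from the load example above and are purely illustrative, not the final LDef instruction set):

load R[0], a      ; variable to register
load R[1], b      ; variable to register
mul  R[0], R[1]   ; multiply, result kept in R[0]
move c, R[0]      ; register back to variable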

ldef_asm

LDef assembly code. Simple but extremely effective

The reason registers are used in my runtime system is that you will not be able to model an FPGA with high-level concepts like "variables", should someone ever try to implement this as hardware. Things like registers, however, are very easy to model and are how actual processors work. You move things from memory into a CPU register, perform an action, and then move the result back into memory.

This is where Java made a terrible mistake. They move all data onto the stack and then call the operation. This simplifies execution of instructions since there are never any registers to keep track of, but it just murders stack-space and renders Java useless on mobile devices. The reason Google threw out classical Java (e.g. Java as bytecodes) is due to this fact (and more). After the first Android devices came out they quickly switched to a native compiler – because Java was too slow, too power-hungry and required too much memory (especially stack space) to function properly. Battery life was close to useless and the only way to save Java was to go native. Which is laughable, because the entire point of Java was mobility, "compile once, run everywhere" — yeah well, that didn't turn out too well did it 😀

Dot net improved on this by adding a "load resource" type instruction, where each method loads in its constant data by number – into pre-defined slots (the variables you have used, naturally). Then you can execute operations in typical "A + B to C" style (actually most of that is implicit, since the compiler already knows A, B and C). This is much more stack-friendly and places the performance penalty on the common language runtime (CLR).

Sadly Microsoft's platform, like everything Microsoft does, requires a pretty large infrastructure. It's not simple, elegant and fast – it's monolithic, massive and resource-hungry. You don't see .net being the first thing ported to a new platform. You typically see GCC followed by Freepascal.

LDef takes the bytecode architecture one step further. On assembly level you reference data using identifiers just like .net, and each instruction is naturally executed by the runtime-engine – but data handling is kept within the virtual realm. You are expected to use the registers as temporary holding slots for your information. And no operations are ever done directly on a variable.

The benefit of this is:

  • Better payload balancing
  • Easier to JIT since the architecture is closer to real assembly
  • Retains important aspects of how real hardware works (with FPGA in mind)

So there are good reasons for the standard – all of them practical.

C-like intermediate language

With assembler so clearly defined, you would expect assembly to be the way you work. In essence that is what you do; but since OOP is built into the system and there are structures you are expected to populate – structures that would be tedious to write in raw, unbridled assembler – I have opted for a C++ inspired intermediate language.

ldef_app

The LDEF assembler kitchen sink

You would half expect me to implement pascal, but truth be told pascal parsing is more complex than C parsing, and C allows you to recycle parsers more easily; so dealing with sub-structures and nested regions is less maintenance and easier to write code for.

So there is no specific reason why I picked C++ as an intermediate language. I would prefer pascal, but I also think it would cause a lot of confusion since object pascal will be the prime citizen of LDef languages. My other language, N++, also used curly brackets, so I'm honestly not strict about what syntax people prefer.

Intermediate language features supported are:

  • Class declarations
  • Struct declarations
  • Parameter to register mapping
  • Before method code (enter)
  • After method code (leave)
  • Alloc section for class fields
  • Alloc section for method variables

The before and after code for methods is very handy. It allows you to define code that executes before and after the actual method body. On a higher level, when designing a new language, this is where you would implement custom allocation, parameter testing and the like.

So if you call this method:

function testcode() {
  enter {
    writeln("this is called before the method entry");
  }
  leave {
    writeln("this is called after the method exits");
  }
  writeln("this is the method body");
}

Results in the following output:

this is called before the method entry
this is the method body
this is called after the method exits

When you work on designing your own language, these enter/leave hooks are where you will eventually place most of the plumbing your syntax needs.

Truly portable

Now, I have no aspirations of going into competition with Oracle, Microsoft or anyone in between. Like most geeks I do things I find interesting, and enjoy working within a field of computing that is stimulating and personally rewarding.

Programming languages are an area where things haven't really changed that much since the golden 80s. Sure, we have gotten a ton of fancy new software, and the way people use languages has changed – but at the end of the day the languages we use haven't really changed that much.

JavaScript is probably the only language that came out of the blue and took the world by storm, but that is due to the central role the browser holds for the internet. I sincerely doubt JavaScript would even have made a dent in the market otherwise.

LDef is the type of toolkit that can change all this. It’s not just another language, and it’s not just another bytecode engine. A lot of thought has gone into its architecture, not just notions of “how can we do this or that”, but big ideas about the future of computing and how IOT will sculpt the market within 5-8 years. And the changes will be permanent and irrevocable.

Being able to define new languages will be of utmost importance in the decade ahead. We don't even know the landscape yet, but we can extrapolate some ideas based on where technology is going. All of it in broad strokes of course, but still – there are some fundamental facts about computers that are timeless and haven't aged a day. It's like mathematics: the Pythagorean theorem may be 2500 years old, but it's just as valid today as it was back then. Principles never die.

I took the example of a game engine at the start of this article. That might have been a poor choice for some, but hopefully the general reader got the message: the nature of control requires articulation. Regardless if you are coding an invoice system or a game engine, factors like time, portability and ease of use will be just as valid.

There is also automation to keep your eye on. While most of it is just media hype at this point, there will be some form of AI automation. The media always exaggerates things, so I think we can safely disregard a walking, self-aware Terminator-type robot replacing you at work. In my view you can disregard as much as 80% of what the media talks about (regardless of topic). But some industries will see vast improvements from automation. The oil and gas sector is the most obvious. At the moment security is as good as humans can make it – which means it is flawed, and something goes wrong every day around the globe. But smart pumping stations and clever pressure measurement and handling will make a huge difference for the people who work with oil. And safer oil pipelines means lives saved and better environmental control.

The question is, how do we describe programs 20 years from now? Are our current tools up to the reality of IOT and billions of connected devices? Do we even have a language that runs equally well as a 1000-instance server-cluster as it does as a stand-alone program on your desktop? When you start to look into parallel computing and multi-cluster data processing farms – languages like C# and C++ make little sense. Node.js is close, very close, but dealing with all the callbacks and odd limitations of JavaScript is tedious (which is why we created Smart Pascal to begin with).

The future needs new things. And for that to happen we first need tools to create them. Which is where my passion is.

Node, native and beyond

When people create compilers and programming languages they often do so for a reason. It could be that their own tools are lacking (which was my initial motivation), or that they have thought of a better way to achieve something; the reasons can be many. In Microsoft's case it was revenge and spite, since they were unsuccessful in stealing Java away from Sun Microsystems (Oracle now owns Java).

LDEF

LDef binaries are fairly straightforward. The less fluff the better

Point is, you implement your idea using the language you know – on the platform you normally use. So for me that is object pascal on Windows. I'm writing it in object pascal because, while the native compiler and runtime are written in Delphi, they are made to compile under Freepascal for Linux and OS X.

But the primary work is done in Smart Pascal and compiled to JavaScript for node.js. So the native part is actually a back-port from Smart. And there is a good reason I’m doing it this way.

First of all, I wanted a runtime and compiler system that would require very little to run. Node.js has grown fat with features over the past couple of years – but out of the box node.js is fast, portable and available almost anywhere these days. You can write some damn fast and scalable cloud servers with node (and by fast I mean FAST, as in handling thousands of online gamers all playing complex first-person worlds) and you can also write some stable and rock solid system services.

Node is turning into a jack of all trades, capable of scaling and clustering way beyond what native software can do. Netflix actually re-wrote their entire service stack using node back in 2014. The old C++ and ASP approach was not able to handle the payload. And every time they did a small change it took 45 minutes to compile and get a binary to test. So yeah, node.js makes so much more sense when you start looking at big data!

So I wanted to write LDef in a way that made it portable and easy to implement, regardless of platform, language and features. Out of the box, JavaScript is pretty naked stuff, and the most advanced high-level feature LDef uses is buffers to deal with memory. Everything else is forced to be simple and straightforward. No huge architecture or global system services; just a small, fast runtime and your binaries. And that's all you need to run your compiled applications.

Ultimately, LDef will be written in LDef itself and will compile itself, needing only a small executable stub to be ported to a new platform. Most of mono C# for Linux is written in C# itself – again making it super easy to move mono between distros and operating systems. You can't do that with Visual Studio, at least not until Microsoft wants you to. Neither would you expect that from Apple's XCode. Just saying.

The only way to achieve the same portability that mono, freepascal and C/C++ have to offer is naturally to design the system as such from the beginning. Keep it simple, avoid (operating-system) globalization at all cost, and never-ever use platform-bound APIs except in the runtime. Be Posix but for everything!

Current state of standard and licensing

The standard is currently being documented and a lot of work has been done in this department already. But it's a huge project to document, since it covers not only LDEF as a high-level toolkit, but stretches from the compiler and the source-code it is designed to compile, down to the very binary output. The standard documentation is close to a book at this stage, but that's the way it has to be to ensure every part is understood correctly.

But the question most people have is often “how are you licensing this?”.

Well, I really want LDEF to be a free standard. However, to protect it against hijacking and abuse, a license must be obtained by financial entities (as in companies) using the LDEF toolkit and standard in commercial products.

I think the way Unreal handles their open-source business is a great example of how things should be done. They never charge the little guy or the indie developer – until they are successful enough to afford it. So once sales hit a defined sum, you are expected to pay a small percentage in royalties. Which is only fair, since the Unreal engine is central to the software to begin with.

So LDef is open source, free to use for all types of projects (with an obligation to pay a 3% royalty for commercial products that exceed $4999 in revenue). Emphasis is on open source development. As long as the financial obligations of companies and developers using LDEF to create successful products are respected, only creativity sets the limit.

If you use LDEF to create a successful product where you make NOK 50,000 (roughly USD 5000), you are legally bound to pay 3% of your product revenue monthly for the duration of the product. Which is extremely little (3% of $5000 is $150, a lot less than you would pay for a Delphi license, the latter costing upwards of USD 3000).

FMX4Linux is coming, and we can't wait!

May 3, 2017 1 comment

When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years, had Embarcadero done what many said would be impossible?

I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these "impossible" things on a weekly basis, yet people are just as shocked every single time. But this time – I was the one in a state of excitement.

As an outspoken (an understatement perhaps) active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it’s commercially available and mainstream – so it takes more for me to be swayed and dazzled. And you grow a healthy instinct for separating bullshit from true technical achievements too.

Tokyo

Delphi Tokyo is probably one of the finest Delphi editions to date

But yes, I admit it – this time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.

Now, I know what you are going to say; everyone knew about this, right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn't catch the release buzz from the closed forums (well, "closed" is a matter of perspective; I have friends from Russia to the United States, and from China to Sudan) and for the first time since the Borland days – Embarcadero got the drop on me.

I'm the kind of guy that runs on passion. Delphi, Smart Pascal and even object pascal as a general language is not just work for me – it's something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message with "Download Tokyo and give me a report". It was close to midnight, but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into the Embarcadero Developer Network's download section.

From hero to zero in 2 seconds

I think it was around 07:00 the next morning, one hour before I was due at work, that the magic phrase "command line and system services [daemons] only" hit home. And yes, I "kinda" knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications at the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

linuxproj

Now that is a beautiful thing

Either way, it was quite the anti-climax after all that work in VMWare installing Delphi from scratch, waiting, hoping and praying. Not least because I remember watching an in-depth technical review about Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail – especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is, paradoxically enough, its simplicity. A simplicity achieved through abstracting the UI from the rendering functionality. At least as much as possible; you still have to deal with OS-level windowing, security and all of that – so it's no walk in the park either.

In the presentation (sadly I don't have a link to this one) David went to great lengths to explain that regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs – Firemonkey would run as long as the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the "bridge class" talking with the operating system is there – Firemonkey can run on a toaster if need be.

So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it's Windows you are running on, DirectX is used; if it's OS X, then Apple's implementation of OpenGL runs the show – and if you are on Linux you can pick between OpenGL and Cairo. I must admit I haven't looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.
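The bridge idea itself can be boiled down to a few lines of Pascal. This is purely my own illustration of the concept, not the actual FMX driver interface:

type
  // everything the UI framework needs from the platform, behind one class
  TPlatformBridge = class
  public
    procedure FillRect(x, y, w, h: integer; color: cardinal); virtual; abstract;
    procedure Present; virtual; abstract; // flush / flip the surface
  end;

  TCairoBridge = class(TPlatformBridge)
  public
    procedure FillRect(x, y, w, h: integer; color: cardinal); override;
    procedure Present; override;
  end;

procedure TCairoBridge.FillRect(x, y, w, h: integer; color: cardinal);
begin
  // a real driver would call into Cairo here; writeln stands in for the call
  writeln('cairo: rect(', x, ',', y, ',', w, ',', h, ')');
end;

procedure TCairoBridge.Present;
begin
  writeln('cairo: flush surface');
end;

The controls above the bridge never know (or care) which descendant they are talking to – which is why porting becomes a matter of writing one new bridge class rather than rewriting the framework.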

Why Tokyo didn't ship with visual application support is beyond me, but considering the timing of what happened next, I have made my own conclusions. It doesn't really matter to be honest – the point is we got it and it rocks!

FMX4Linux to the rescue!

Remember the wager I mentioned with Jim McKeeth? It wasn't a serious wager; I just commented and said "a Linux FMX solution will appear within 24hrs, you can bet on it" and added a smiley. I had no idea who or how, I just knew it was going to turn up.

Because one thing is a sure bet: the Delphi community is a group made up of highly resourceful, inventive and clever people. And I was pretty sure that it wouldn't take many hours before someone came up with a patch or framework to fill the void. And right I was. Less than 24 hours later it was fact rather than conjecture.

fmxlinux

This is just a must have. There is no debate.

So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented "FMX for Linux", giving you both the missing rendering back-end that talks to the system – and some kind of "widget mapping" (as far as I can understand; I won't pretend to know how they did it) that renders your UI according to GTK. So it's not just a simple "patch", theme or class that calls a handful of external routines; it's a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!

Let us explore!

Over the next few days I will be giving you an in-depth look at how this system works. I'm also going to test-drive HTML Components and see if we can get those running under Linux as well. Since FMX for Linux is still in development we have to allow for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK – if that works out of the box I will dance the jig and upload it to YouTube, I swear to cow!

Remobjects_linux

Linux is about to feel the full onslaught of object pascal

There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML Components, Remobjects SDK and TMS into one – you have serious firepower to play with; enough to give even the most hardened C/C++ Qt developer reason to be scared. And that is before the onslaught of Data Abstract and Remobjects' native C# compiler.

Oh man next weekend is going to be the best ever!

Let’s not forget ARM targets

asus

The “PI killer” has arrived!

On a second note, we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It's going to be exciting to see how it fares against the Raspberry PI, ODroid XU4 and the Intel Atom based UP board.

We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!

To make things even more interesting, I will be pitching the Tinkerboard against the ODroid XU4 (the original version, not the passively-cooled, save-the-environment edition they push now) and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and Tinker are overclocked to blood-lust mode!

smartdesk

The Smart desktop has a powerful node.js back-end that packs a punch

So, when LLVM-optimized JavaScript runs MC68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.

So let’s do another “that’s impossible” shall we 🙂

Smart-Pascal: A brave new world, 2022 is here

April 29, 2017 6 comments

Trying to explain what Smart Mobile Studio does, and the impact it can have on your development cycle, is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped-up, one-click "app makers" more than once.

I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.

Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don’t own any cars, yet they are now the
biggest taxi company in the world. -Source: R.M.Goldman, Ph.d

So I had enough. Instead of trying to tell people what I can do, I decided I'm going to show them instead. As the Americans say: "talk is cheap". And a working demonstration is worth a thousand words.

Care to back that up with something?

A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMWare. The Amiga forums went off the charts!

vmware

For those that haven't followed my blog or know nothing about the desktop I'm talking about, here is a short summary of the events so far:


Smart Mobile Studio is a compiler that takes pascal, like that made popular in Delphi or Lazarus, and compiles it to JavaScript instead of machine-code.

This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.

You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.

Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board units with off-the-shelf $35 Raspberry Pi mini-computers. They then used Smart Pascal to write their client software and ran it in a full-screen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.

The result? They were able to cut production cost by $265 per unit.


Right, back to the desktop. I mentioned the Amiga community. This is a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (it just took 20+ years) – and the look and feel of the new operating-system, Amiga OS 4.1, is the look and feel I have used in The Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512KB of memory (!). So this is my "ode to OS 4" if you will.

And the desktop has caused quite a stir both in the Delphi community, cloud community and retro community alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their nose all this time.

And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both the visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job) I would have knocked it out in a week.

A desktop as a project type

All programming languages have project types. If you open up Delphi and click “new” you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

delphistuff

Delphi offers a wide range of project types you can create

The same is true for Visual Studio. You click "new solution" and can pick from a wide range of different projects. Web projects, servers, desktop applications and services.

Smart Pascal is the only system where you click “new project” and there is a type called “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.

And the desktop is unique to you. You get to shape it, brand it and make it your own!

Let us take a practical example

Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.

The application itself is large and complex, littered with legacy code and "quick fixes" going back decades. Updating such a project is itself a monumental task – but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity; all of that before you even begin porting the invoice system itself? The cost is astronomical.

And it happens every single day!

In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives him a complete desktop environment that is unique to his project and company.

A desktop that he or she can shape, adjust, alter and adapt according to the needs of the employer. Things like file-type recognition, storage and getting that database up – all of these things are taken care of already. The developer can focus on the task at hand, namely to deliver a modern implementation of their invoice and credit software – not waste months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.

Once the desktop has the look and feel in order, he would have to make a simple choice:

  • Should the whole desktop represent the invoice system or ..
  • Should the invoice system be implemented as a secondary application running on the desktop?

If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.

If however the customer would like to expand the system later – perhaps add team management, third-party web-services or OpenOffice-like productivity (a unified intranet if you like) – then the second option makes more sense.

On the brink of a revolution

The developer of 2022 is not limited to the desktop. He is not restricted to a particular operating system or chip-set. Fact is, cloud has already reduced these to a matter of preference. There is no strategic advantage of using Windows over Linux when it comes to cloud software.

Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux) – cloud developers deliver whole ecosystems; constellations of software constructed from many parts, both micro-services developed in-house and services from others, like Amazon or Azure.

All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.

smartdesk

The Smart Desktop, codename “Amibian.js”

Access to products written in Smart is through the browser, or sometimes through a "paper thin" native host (Cordova Phonegap, Delphi and C/C++) that exposes system-level functionality. These hosts wrap your application in a native, executable container ready for Appstore or Google Play.

Now the visual content is typically the same, and is only adapted for a particular device. The real work is divided between the client (which is now very much capable) and your server back-end.

So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.

asmjs

Above: One of my asm.js prototype compilers. Let's just say it runs fast!

Scaling a solution from processing 100 invoices a minute to handling 100,000 invoices a minute is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js are infinitely more capable.

What has emerged up until now is just the tip of the iceberg.

Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

The Smart Desktop back-end running as a system service on a Raspberry PI

As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.

Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/

Smart Puppy: Smart Pascal meets Linux!

April 21, 2017 Leave a comment

One of my absolute favorite operating-systems in the whole world has to be Puppy Linux. I discovered it just a few days ago and I have fallen completely in love with this thing. I can vaguely remember giving it a test drive a few years back, but I didn't know much about Linux in general at the time, so I didn't understand what it represented.

So if you are looking for a friendly, small, fast and easy to use Linux system – then Puppy is about as friendly as it gets. The Facebook user group with the same name is a warm and friendly place to be. Much like Delphi Developer, the admins take pride in keeping things orderly – and the people who hang out there engage, care and help each other out.

Before you run out and download Puppy, which I hope you do later – please understand that Puppy is very different from Linux in general. You could almost say that it’s a whole alternative to mainstream Linux as we know it.

But, once you know about the differences then you are in for a treat! I will explain them in the article, so please be patient and take the time to digest.

Puppies hate fluff

One of the reasons I never converted wholesale to Linux (and yes, I did try) – is that the average Linux distro is unbearably and unnecessarily cryptic. For some reason Linux architects suffer from a terrible affliction, namely a shortage of characters. This sickness means that Linux doesn't have enough characters for everyone, so programmers must use a maximum of five letters when naming their software. If coders ignore this shortage and blatantly name something directly or intuitively – then Richard Stallman and Lady Gaga will order a "drive-by pony-tail cut" on the dude. And a Linux administrator without his pony-tail is finished (the nerd equivalent of flipping burgers at McDonald's).

Puppy Linux does contain its fair share of the classical Linux software (that goes without saying). But the man behind this wonderful Linux flavour is also a level-headed, clever and resourceful man (or woman) – so he has thankfully broken with what can only be described as archaic thinking.

puppy01

Puppy Linux is not exactly software impaired

So even with my minimal Linux experience I was able to navigate around the filesystem and locate documents (which here are called "Documents" and even "My Documents"). There is a whole bunch of these tiny differences, small things that make all the difference – from the way he (or she) has named things, to where things are stored and placed.

And it’s so small! The basic install is less than 300 megabytes in size (!) Yes you read that right. The generic Puppy Linux installation with desktop and a few popular applications is less than 300 megabytes.

In my case I can have a fully loaded development studio, featuring GCC, FPC (freepascal), the Lazarus IDE, the CodeBlocks IDE, the KDevelop IDE, the Anjuta developer studio – and last but never least Smart Mobile Studio – on a 2 gigabyte USB stick (!) I don't think you can even get USB sticks that small any more (?) The smallest I have is 32 gigabyte and the largest is 256 gigabyte.

But before we go on with the wonders of Puppy Linux – let's look at what Linux did wrong. Why is Linux, even to this day, considered hard to use? Or to put it another way: what have Windows and OS X done right to be considered easier to use, yet capable of the same (and often more)?

Naming, what Linux did wrong

One of the tenets of professional programming is to ensure that classes, members and functions have meaningful names. There was a time when you would get away with single-character class, variable and method names — but that won't fly in 2017. Your QA department would have you for breakfast if you checked in code like that. Classes, name-spacing and packages should be descriptive. End of story.

The reason this has become an almost sacred law, should be obvious: it may not be you that maintains the software 5, 10 or 15 years down the road. A piece of code should always be written in such a way that it can be understood and thus maintained by others within a reasonable time-frame (which also means plenty of comments and good documentation). This is not a matter of preference, but of time and money. And when you pay out salaries these factors are one and the same.

So naming elements of software in 2017 has a lot of criteria attached to it. The most obvious so far being:

  • Always name things clearly, because that
    • ensures ease of use
    • simplifies maintenance
    • removes doubt as to "what is what"
    • causes fewer user-mistakes
  • The fewer mistakes, either in understanding something or using something, the less money a business throws out the window. Money that could be spent paying you to make something cool instead (or fixing critical bugs).
  • The fewer user-mistakes caused by customers, the more your service department can focus on quality of service. When a company starts out it usually has outstanding support, but as it grows, its service-desk slowly turns into robots.
  • The easier and more intuitive a system is, the more users it will attract. If people can pick something up and just naturally figure out how things work, they will most likely continue using it through thick and thin.

Right. With these rules in mind – what happens if you take them but apply them to Linux instead? Not Linux code or libraries or stuff like that, but Linux the user-experience from top to bottom?

And don’t get me wrong, I think Linux is awesome so this is not an attack on Linux; I’m simply pointing out factors that could help make Linux even better.

I mean, just look at the Linux filesystem. Again you have this absurd shortage of characters. Why would anyone abbreviate the word "user[s]" into "usr"? It makes no sense. Same with "lib" – would it have killed you to call it "libraries"? And so it continues with "dev", because calling it "devices" would cause the space-time continuum to break.

Shell shocked

The shell (or command-line under Windows) and its commands are really the thing that annoys me the most. There is a fine line between use and abuse, and the level of abbreviation here is beyond whimsical and harmless – and well into the realm of silly and absurd.

Who in their right mind would name a command “ps”? What could it possibly mean? The first thing that comes to mind is “print spool”. If you come from any other platform than Linux (and perhaps Unix, I don’t know) you would never imagine that “ps” actually means “list all running processes and their states”.

ps_command

“ps” lists the running processes and their states

Above: running “ps” from the shell lists the running processes. Would it have killed the coders to just call it, oh perhaps, “listprocesses” or “showrunningprograms”?

The “ps” command is just one in a long, long list of commands that really should be brought into the twenty-first century. The benefits should be obvious. It should not be necessary for a 43-year-old man to blog about this, because it’s been a problem for the better part of three decades.

  • Kids and teenagers are the bread and butter for all operating systems. The faster a kid or teenager can do something with a system, the more loyal that individual will be to the platform in the future.
  • Linux needs developers and users from other platforms. When someone who has been a successful developer for almost 30 years finds a system cryptic and hard to use, how much harder will it be for a non-technical user?
  • Standards are important. The location of files, libraries and settings should be uniform. As of writing, Linux seems to have 3 different standards (again, I am no expert): systemd, init.d and "systemx". The latter is just a name I made up, because no-one really knows what it's called. We are now in the realm of PlayStation, ChromeOS, WebOS and systems that build on Linux – but deviate the moment the drivers have loaded.

Again, I'm not writing this in a negative mindset. I have been using Ubuntu for a while as an alternative to Windows and OS X. But this has been a purely user-centric experience. I have not done any programming except random bits of Freepascal and node.js experiments. I have enjoyed Ubuntu purely as a user: writing documents, checking email, browsing the web, IRC, reading newsgroups – ordinary stuff.

So I am very positive about Linux, but I have yet to find "my" flavour of it. A Linux distro I feel at home with and that appeals to my way of working.

Until today that is..

Enter Puppy Linux

Puppy is a flavour of Linux that just demolishes some of Linux's holiest concepts. Everyone will tell you never to run as root; leave the root account in peace and keep it under lock and key, just in case someone gets into your day-to-day account, right?

Well, not Puppy. Here you are expected to run as root, and you can, if you for some reason must, jump out into a secondary user which is fake. So indeed – Puppy Linux is a single-user Linux system. It's the rebel, the scoundrel and rogue of the Linux world – the distro that couldn't care less what the other guys are doing.

gcc.png

Fancy a spot of coding? GCC is an SFS module away ..

Secondly, and this is very cool, Puppy is highly modular. No, I'm not talking about packages; all Linux distros have that in some form or another. I'm talking about something called SFS files, short for squashed file-system.

To make a long story short, Puppy allows you to mount compressed files as disks, and they become a part of the system. It's a bit like the virtual-drive API on Windows (if you have ever coded against that?). You may have noticed in Windows how you can double-click on a .ISO file and suddenly the file is mounted in the file-explorer, and stays mounted until you manually dis-mount the damn thing.

Well, SFS is that, but also much more. Because when you mount the SFS file, whatever applications it contains register on the start-menu, add themselves to the global path and essentially become one with the whole system. It took me a while to wrap my head around this (good features usually come with a price, so I kept waiting for the catch. But there was none!). The people I talked to about this were not coders, so they had some very colorful explanations of how it all worked. But once I realized an SFS was just a zip-file (or tarball or whatever) with a fixed structure (including mount and dis-mount scripts) I got the picture.

Size and speed matter

Before I started using a PC back in the early 90s I was a huge Amiga fan. I still am (as you no doubt have noticed). The first difference between Amiga computing and PC computing that hit me was how wasteful PCs were. I remember being shocked at how much space and CPU power the average programmer just wasted – because on the Amiga, everyone strove to be as resourceful and efficient as possible.

We would spend days optimizing even the smallest parts of our applications, just to ensure that they ran at top speed and produced as little bloat as possible. This was baked into us; it was the way of the force, as common as your grandfather's work ethic. Quality and achievement went hand in hand.

Code::Blocks is an excellent IDE 🙂

When you fire up Puppy Linux you are instantly reminded that there are people to this day who care about size and speed. And that maybe, just maybe, consumerism has tricked you into throwing away perfectly usable technology year after year – machines that actually had more than enough CPU power for your tasks, but were slowed down by bloated operating systems, poor programming and lazy code generators.

Puppy Linux is the fastest bloody Linux you will ever run. The only operating system I have tried that runs faster is AROS compiled for ARM (a distro called AEROS – AROS being a reverse-engineered edition of Amiga OS). But as far as x86 and the Linux kernel go – Puppy Linux is the bomb.

I know I'm repeating myself here, but: less than 300 megabytes for a fully loaded Linux distro with word processor, browser, devkit, music player, video player and all the "typical" applications you would use for daily tasks? It truly is the fastest hunk of junk in the galaxy, without question.

Amiga coders and the cult of joy

When I started to snoop around the Puppy environment and community, I noticed a couple of tell-tale signs. Tiny, subtle things that only an Amiga coder would pick up on. Enough to give you a hunch, a gut feeling – but not enough to blatantly say it out loud. "Amiga guys did this," I would whisper to myself. And it's not really such a big surprise to find coders, now in their 40s, who used to be Amiga coders.

In 30 years' time there will be company owners and CEOs who grew up with PlayStation and have fond memories of that. But they won't recognize each other by their craftsmanship – that is the difference.

The cult of joy lives on, albeit in new forms

The Amiga was special because it was not just a games machine. It was also a complete rewrite of what constituted the power operating system of its time: Unix. In other words, its designers copied the best stuff from Unix (which, by the way, had the same absurd filesystem that Linux still has) but cleaned it up. The first thing to be cleaned was (drumroll) the filesystem. But that's another story altogether.

When I entered the Puppy Linux forum I naturally mentioned that I was a complete Linux novice, and that my favorite machine before x86 was an Amiga. And what do you think happened? Let's just say that more than a few greeted me with open arms. These were the Amiga users who went to Linux when Commodore went under all those years ago – and they have been active in shaping Linux ever since (!)

So yeah, I had a great time on their forums – it was like running into a long-lost cousin. Like when you haven't seen a family member in 30 years and suddenly meet them face to face.

Tired of 30-gigabyte operating systems?

Puppy Linux is not for everyone. It's the kind of system you either love or hate; I have yet to find anyone on middle ground. Either you use it and are thrilled about it, or you never install it at all.

It has a lot of good things going for it:

  • It is built to be one of the smallest working desktop environments you can get
  • It is built according to "the old ways", where speed, efficiency and size matter
  • It runs fine on older hardware (my test machine is an 8-year-old laptop) and makes hardware you would otherwise throw away valuable again
  • Storage is abstracted, meaning you can keep all your personal files inside a single SFS archive (easier to back up) while the operating system remains on a USB stick
  • You don't have to permanently install it (again, it boots from a USB stick)
  • It is single-user by default, which is perfect for IoT projects and devices!
  • It supports ARM, so you can now enjoy this awesome thing on a Raspberry Pi 3!
  • It's Linux, so it benefits from the kernel's rich driver support
  • The latest Puppy is binary-compatible with Ubuntu (in practice: it can install packages from the Ubuntu repositories)
  • There are three different desktops for it (to my knowledge), so if you don't like the default one, just install another
  • It is the perfect rescue USB stick. At less than 300 MB it fits on any old USB stick you have around the house – I think the smallest you can even buy now is 4 GB
  • It has a warm, helpful, friendly and international group of users

Oh, and it's free!

As a final note: I installed Wine, the compatibility layer that makes it possible to run Windows software on Linux (not an emulator – more of an API-call middle-man and dispatcher). I was quite surprised to see it run Smart Mobile Studio on the first try!

So, fancy a bit of hacking this weekend? Why not give Puppy a go?

Check it out here: http://puppylinux.org/main/Download%20Latest%20Release.htm