
Quartex Pascal, convergence is near

July 16, 2020

A Quartex cluster of 5 x ODroid XU4 boards: a $400 supercomputer running the Quartex Media Desktop. Enough to power a school.

I only have the weekends to work on Quartex Pascal, but I have spent the past 18 months tinkering away, making up for wasted time. So I’m just going to leave some pictures here for you to enjoy.

Note: I was asked on LinkedIn if this has anything to do with Smart Mobile Studio, and the answer is a resounding no. I have nothing to do with Smart any more. QTX Pascal is a completely separate project that is written from scratch by yours truly.

The QTX Framework was initially a library I created back in 2014, but it has since been completely overhauled and turned into a full RTL. It is not compatible with Smart Pascal and has a completely different architecture.

QTX Pascal is indirectly funded by the Amiga Retro Community (which might sound strange, but the technical level of that community is beyond anything I have encountered elsewhere), since QTX is central to the creation of the Quartex Media Desktop. It is a shame that Embarcadero decided not to back the project; the compiler and toolchain would have been a part of Delphi by now, and I wouldn't have to write a separate IDE. But when they see what this system can deliver in terms of services, database work, mobile and embedded – they might regret it. The project only accepts donation funding; I am not interested in investors or partners. If you want a vision turned into reality, you gotta do it yourself. Everything else just gets in the way.

For developers by developers

Quartex Pascal is made for the community. It will be free for students and open-source projects, and a commercial license will never exceed $300. It is a shareware license, and the financial aspect is purely there to help fund further research and development of the desktop cloud platform. The final goal (CloudForge) is to compile the IDE itself to JavaScript, so people only need a browser to write enterprise-level applications via the Quartex Media Desktop. When that is finished, my work is done – and people have a clear path to the future.


Unlike other systems, QTX started with the non-visual stuff, so the system has a well-implemented infrastructure for writing universal services and servers, using node.js as a deployment host. Services are also Docker-friendly, and they run without change on Windows, Mac OS, Linux and a wealth of embedded systems and SBCs (single-board computers).


A completely new RTL, written from scratch, generates close-to-native-speed JavaScript that is highly compatible (even with legacy browsers) and rock solid.


There are several display modes for QTX forms, from dynamic to absolute positioning. You can mix and match between HTML and QTX code, including an HTML5-compliant WYSIWYG editor and style manager, which makes content handling a lot easier.


Write Object Pascal, JavaScript, HTML, LDEF (WebAssembly) or node.js services – or mix and match between them all for maximum potential. Writing mobile applications is now ridiculously easy compared to “other tools” out there.

Oh, and for the proverbial frosting: the full clustered Quartex Media Desktop and services is a project type. That's right. A complete cloud infrastructure suitable for teams, kiosks, embedded devices, schools, intranets – and even a replacement for ChromeOS. You don't need to interface with Amazon; you get your own Amazon (optional, naturally).


Filesystem over websocket, IPC between hosted apps and the desktop, and full back-end services that are clustered and run on anything from a Raspberry Pi 4 to low-cost ARM SBCs.


WebAssembly made easy, both for Delphi and QTX.


Let there be rock

Oh, and documentation. Loads and loads of documentation.


Proper documentation, with both a class overview and explanations written by a human being, is paramount for learning and getting up to speed quickly.

I don't have a vacation this year, which means I only have weekends to tinker away. But I have spent the past 18-ish months preparing and slowly finishing the pieces I needed, from vector containers to form-design controls, to a completely re-written RTL. So yeah, this time I'm doing it my way.

C/C++ porting, QTX and general status

March 15, 2020

C is a language that I used to play around with a lot back in the Amiga days. I think the last time I used a C compiler to write a library must have been in 1992 or something like that? I held on to my Amiga 1200 for as long as I could – but having fallen completely in love with Pascal, I eventually switched to x86 and went down the Turbo Pascal road.

Lately, however, C++ developers have been asking for their own developer group on Facebook. I run several groups on Facebook in the so-called “developer” family: Delphi Developer, FPC Developer, Node.JS Developer and now C++Builder Developer. The groups more or less tend to themselves, and the node.js and FPC groups are presently being seeded (meaning that the member count is being grown for a period).

The C++Builder group, however, has almost the same activity level as the Delphi group, thanks to some really good developers who post links and tips and help solve questions. I was also fortunate enough to have David Millington join the admin team. David is leading the C++Builder project, so his insight and knowledge of both language and product are exemplary. Just like Jim McKeeth, he is a wonderful resource for the community and chimes in with answers to tricky questions whenever he has time to spare.

Getting back in the saddle

Having worked some 30 years with Pascal and Object Pascal, 25 of those years in Delphi, C/C++ is never far away. I have written an article on the subject for the Idera Community website, so I won't dig too deep into that here – but needless to say, RAD Studio consists of two languages, Object Pascal and C/C++, so no matter how much you love either language, the other is never far away.

So I figured it was time for this old dog to learn some new tricks! I have always said that it's wise to learn a language immediately below and above your comfort zone. So if Delphi is your favorite language, then C/C++ is below you (meaning: lower level and more complex). Above you are languages like JavaScript and C#. Learning JavaScript makes strategic sense (or use DWScript to compile Pascal to JavaScript like I do).

When I started out, the immediate language below Object Pascal was never C, but assembler. So for the longest time I turned to assembler whenever I needed a speed boost; graphics manipulation and pixel processing in particular are fields where assembly makes all the difference.

But since C++Builder is indeed an integral part of RAD Studio, and Object Pascal and C/C++ are so intimately connected (they have evolved side by side), why not enjoy both assembly and C, right?

So I decided to jump back into the saddle and see what I could make of it.

C/C++ is not as hard as you think


I’m having a ball writing C/C++, and just like Delphi – you can start where you are.

While I'm not going to rehash the article I have already prepared for the Idera Community pages here, I do want to encourage people to give it a proper try. I have always said that if you know an archetypal language, you can easily pick up other languages, because the archetypal languages will benefit you for a lifetime. This is because archetypal languages operate according to how computers really work, as opposed to optimistic languages (a term borrowed from database work, as in optimistic locking), also called contextual languages – C#, Java, JavaScript and so on – which are based on how human beings would like things to be.

So I now had a chance to put my money where my mouth is.

When I left C back in the early 90s, I never bothered with OOP. I mean, I used C purely for shared libraries anyway, while the actual programs were done in Pascal or a hybrid language called Blitz Basic. The latter compiled to razor-sharp machine code, and you could use inline assembly – which I used a lot back then (very few programmers on those machines went without assembler; it was almost a given that you could use 68k in some capacity).

Without ruining the article about to be published, I had a great time with C++Builder. It took a few hours to get my bearings, but since both the VCL and FMX frameworks are there, you can approach C/C++ just like you would Object Pascal. So it's really a matter of getting an overview.

Needless to say, I'll be porting a fair share of my libraries to C/C++ when I have time (those that make sense under that paradigm). It's always good to push yourself, and there are plenty of subtle differences that I found useful.

Quartex Media Desktop

When I last wrote about QTX we were nearing the completion of the FileSystem and Task Management service. The prototype had all its file-handling directly in the core service (or server), which worked just fine – but it was linked to the Smart Pascal RTL. It has taken time to write a new RTL plus a full multi-user, platform-independent service stack and desktop (phew!), but we are seeing progress!


The QTX baseline back-end services are now largely done.

The filesystem service is now largely done! There are a few synchronous calls I want to get rid of, but thankfully my framework has both async and sync variations of all file procedures – so that is now finished.
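To give a feel for what that pairing looks like, here is a minimal sketch of a file-exists wrapper in the same style. The names (QTXFileExists, the callback type, and the existsSync call on the fs binding) are illustrative assumptions, not the actual QTX RTL API:

type
  // Illustrative callback type, mirroring the style used in the service code below
  TQTXFileExistsCB = procedure (TagValue: variant; Exists: boolean; Error: Exception);

// Synchronous variant - blocks until node.js has answered
// (assumes the binding also exposes fs.existsSync)
function QTXFileExistsSync(FileName: string): boolean;
begin
  result := NodeJsFsAPI().existsSync(FileName);
end;

// Asynchronous variant - returns immediately, the answer arrives in the callback
procedure QTXFileExists(TagValue: variant; FileName: string; const CB: TQTXFileExistsCB);
begin
  NodeJsFsAPI().lStat(FileName,
    procedure (Error: JError; Stats: JStats)
    begin
      // Any error (including "not found") is treated as "does not exist" here
      if assigned(CB) then
        CB(TagValue, Error = nil, nil);
    end);
end;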

To make that clearer: first I have to wrap and implement the functionality for the RTL. Once those functions are in the RTL, I can use them to build the service functions. So yeah, it has been extremely elaborate – but thankfully it has also become a rich, well-organized codebase (both the RTL and the Quartex Media Desktop codebases), so I think we are ready to get cracking on the core!

The core is still operating with the older API, so our next step is to remove that from the core and instead delegate filesystem calls to our new service. The core will simply be reduced to a post office – or a traffic officer, if you like. Messages come in from the desktops, and the core delegates them to whatever service is in charge of them.
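Conceptually, the dispatch inside the core then boils down to something like the sketch below. This is not the actual core code, just the shape of the post-office idea; TQTXCoreServer, FServices, FindServiceFor and Forward are placeholder names:

// Hypothetical sketch: the core owns no file logic, it only looks up which
// service has registered a given message class and forwards the message
procedure TQTXCoreServer.Dispatch(Socket: TNJWebSocketSocket; Message: TQTXBaseMessage);
begin
  var lService := FServices.FindServiceFor(Message);

  if lService = nil then
  begin
    SendError(Socket, Message, 'No service registered for message error');
    exit;
  end;

  // Forward the message and route the reply back to the originating desktop
  lService.Forward(Message,
    procedure (Response: TQTXBaseMessage; Error: Exception)
    begin
      if Error <> nil then
      begin
        SendError(Socket, Message, Error.Message);
        exit;
      end;
      Socket.Send( Response.Serialize() );
    end);
end;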

But, this also means that both the core and the desktop must use the new and fancy messages. And this is where I did something very clever.

While I was writing the service, I also wrote a client class to test it (obviously). And the way the core works means that the same client the core uses to talk to the services can be used by the desktop as well.

So our work in the desktop, to get file access and drives running again, is to wrap the client in our TQTXDevice ancestor class. The desktop NEVER accesses the API directly; all it knows about are these device drivers (or object instances). This is how we solve things like Dropbox and Google Drive support: the desktop won't have the faintest clue that it's using Dropbox, or copying files between a local disk and Google Drive for example, because it only communicates with these device classes.
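As a rough illustration of that abstraction - with made-up member names, not the actual TQTXDevice declaration - the idea is simply a common ancestor with one descendant per storage backend:

type
  // Illustrative callback type for an asynchronous read
  TQTXDeviceReadCB = procedure (TagValue: variant; Data: array of UInt8; Error: Exception);

  // The desktop only ever sees this ancestor
  TQTXDevice = class
  public
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); virtual; abstract;
  end;

  // Local disks go through the websocket client that talks to the
  // filesystem service (the same client class the core uses)
  TQTXServiceDiskDevice = class(TQTXDevice)
  public
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); override;
  end;

  // Dropbox (or Google Drive) implements the exact same contract, so the
  // desktop never knows which storage backend it is actually talking to
  TQTXDropboxDevice = class(TQTXDevice)
  public
    procedure ReadFile(TagValue: variant; FileName: string;
      const CB: TQTXDeviceReadCB); override;
  end;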

Recursive stuff

One thing that sucked about the node.js function for deleting a folder is that its recursive parameter doesn't work on Windows or OS X. So I had to implement a full recursive delete-folder routine manually. Not a big thing, but slightly more painful than expected under asynchronous execution. Thankfully, Object Pascal allows for inline defined procedures, so I didn't have to isolate it in a separate class.
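For the curious, the shape of such a routine is roughly the sketch below. This is a condensed illustration, not the actual QTX implementation; readdir, rmdir, unlink and isDirectory are assumed to be exposed by the same fs binding that lStat and readFile come from:

type
  // Illustrative callback and procedure types (not part of the real RTL)
  TQTXDeleteDoneCB = procedure (Error: Exception);
  TQTXWalkProc     = procedure (Current: string; Done: TQTXDeleteDoneCB);
  TQTXNextProc     = procedure (Index: integer);

// Asynchronous, recursive folder delete built with inline procedures
procedure DeleteFolder(Path: string; const CB: TQTXDeleteDoneCB);
begin
  var Walk: TQTXWalkProc;

  Walk := procedure (Current: string; Done: TQTXDeleteDoneCB)
  begin
    // List the folder, handle each entry in turn, then remove the folder itself
    NodeJsFsAPI().readdir(Current, procedure (Error: JError; Items: array of string)
    begin
      if Error <> nil then
      begin
        Done(EException.Create(Error.message));
        exit;
      end;

      var Next: TQTXNextProc;
      Next := procedure (Index: integer)
      begin
        if Index >= Items.Length then
        begin
          // The folder is empty now, remove it and report back
          NodeJsFsAPI().rmdir(Current, procedure (Error: JError)
          begin
            if Error <> nil then
              Done(EException.Create(Error.message))
            else
              Done(nil);
          end);
          exit;
        end;

        var lChild := Current + '/' + Items[Index];
        NodeJsFsAPI().lStat(lChild, procedure (Error: JError; Stats: JStats)
        begin
          if Error <> nil then
          begin
            Done(EException.Create(Error.message));
            exit;
          end;

          if Stats.isDirectory() then
            // Recurse into sub-folders before touching the next sibling
            Walk(lChild, procedure (Error: Exception)
            begin
              if Error <> nil then
                Done(Error)
              else
                Next(Index + 1);
            end)
          else
            NodeJsFsAPI().unlink(lChild, procedure (Error: JError)
            begin
              if Error <> nil then
                Done(EException.Create(Error.message))
              else
                Next(Index + 1);
            end);
        end);
      end;

      Next(0);
    end);
  end;

  Walk(Path, CB);
end;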

Here is some of the code, a tiny speck compared to the full shebang, but it gives you an idea of what life is like under async conditions:

unit service.file.core;

interface

{.$DEFINE DEBUG}

const
  CNT_PREFS_DEFAULTPORT     = 1883;
  CNT_PREFS_FILENAME        = 'QTXTaskManager.preferences.ini';
  CNT_PREFS_DBNAME          = 'taskdata.db';

  CNT_ZCONFIG_SERVICE_NAME  = 'TaskManager';

uses
  qtx.sysutils,
  qtx.json,
  qtx.db,
  qtx.logfile,
  qtx.orm,
  qtx.time,

  qtx.node.os,
  qtx.node.sqlite3,
  qtx.node.zconfig,
  qtx.node.cluster,

  qtx.node.core,
  qtx.node.filesystem,
  qtx.node.filewalker,
  qtx.fileapi.core,

  qtx.network.service,
  qtx.network.udp,

  qtx.inifile,
  qtx.node.inifile,

  NodeJS.child_process,

  ragnarok.types,
  ragnarok.Server,
  ragnarok.messages.base,
  ragnarok.messages.factory,
  ragnarok.messages.network,

  service.base,
  service.dispatcher,
  service.file.messages;

type

  TQTXTaskServiceFactory = class(TMessageFactory)
  protected
    procedure RegisterIntrinsic; override;
  end;

  TQTXFileWriteCB = procedure (TagValue: variant; Error: Exception);
  TQTXFileStateCB = procedure (TagValue: variant; Error: Exception);

  TQTXUnRegisterLocalDeviceCB = procedure (TagValue: variant; DiskName: string; Error: Exception);
  TQTXRegisterLocalDeviceCB = procedure (TagValue: variant; LocalPath: string; Error: Exception);
  TQTXFindDeviceCB = procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception);
  TQTXGetDisksCB = procedure (TagValue: variant; Devices: JDeviceList; Error: Exception);

  TQTXGetFileInfoCB = procedure (TagValue: variant; LocalName: string; Info: JStats; Error: Exception);
  TQTXGetTranslatePathCB = procedure (TagValue: variant; Original, Translated: string; Error: Exception);

  TQTXCheckDevicePathCB = procedure (TagValue: variant; PathName: string; Error: Exception);

  TQTXServerExecuteCB = procedure (TagValue: variant; Data: string; Error: Exception);

  TQTXTaskService = class(TRagnarokService)
  private
    FPrefs:     TQTXIniFile;
    FLog:       TQTXLogEmitter;
    FDatabase:  TSQLite3Database;

    FZConfig:   TQTXZConfigClient;
    FRegHandle: TQTXDispatchHandle;
    FRegCount:  integer;

    procedure   HandleGetDevices(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetDeviceByName(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleCreateLocalDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleDestroyDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileRead(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileReadPartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetFileInfo(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileDelete(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   HandleFileWrite(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileWritePartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleFileRename(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleGetDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   HandleMkDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
    procedure   HandleRmDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);

    procedure   ExecuteExternalJS(Params: array of string;
      TagValue: variant; const CB: TQTXServerExecuteCB);

    procedure   SendError(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage; Message: string);

  protected
    function    GetFactory: TMessageFactory; override;
    procedure   SetupPreferences(const CB: TRagnarokServiceCB);
    procedure   SetupLogfile(LogFileName: string;const CB: TRagnarokServiceCB);
    procedure   SetupDatabase(const CB: TRagnarokServiceCB);

    procedure   ValidateLocalDiskName(TagValue: variant; Username, DeviceName: string; CB: TQTXCheckDevicePathCB);
    procedure   RegisterLocalDevice(TagValue: variant; Username, DiskName: string; CB: TQTXRegisterLocalDeviceCB);
    procedure   UnRegisterLocalDevice(TagValue: variant; UserName, DiskName:string; CB: TQTXUnRegisterLocalDeviceCB);

    procedure   GetDevicesForUser(TagValue: variant; UserName: string; CB: TQTXGetDisksCB);
    procedure   FindDeviceByName(TagValue: variant; UserName, DiskName: string; CB: TQTXFindDeviceCB);
    procedure   FindDeviceByType(TagValue: variant; UserName: string; &Type: JDeviceType; CB: TQTXGetDisksCB);

    procedure   GetTranslatedPathFor(TagValue: variant; Username, FullPath: string; CB: TQTXGetTranslatePathCB);

    procedure   GetFileInfo(TagValue: variant; UserName: string; FullPath: string; CB: TQTXGetFileInfoCB);

    procedure   SetupTaskTable(const TagValue: variant; const CB: TRagnarokServiceCB);
    procedure   SetupOperationsTable(const TagValue: variant; const CB: TRagnarokServiceCB);
    procedure   SetupDeviceTable(const TagValue: variant; const CB: TRagnarokServiceCB);

    procedure   AfterServerStarted; override;
    procedure   BeforeServerStopped; override;
    procedure   Dispatch(Socket: TNJWebSocketSocket; Message: TQTXBaseMessage); override;

  public
    property    Preferences: TQTXIniFile read FPrefs;
    property    Database: TSQLite3Database read FDatabase;

    procedure   SetupService(const CB: TRagnarokServiceCB);

    constructor Create; override;
    destructor  Destroy; override;
  end;


implementation

//#############################################################################
// TQTXTaskServiceFactory
//#############################################################################

procedure TQTXTaskServiceFactory.RegisterIntrinsic;
begin
  writeln("Registering task interface");
  &Register(TQTXFileGetDeviceListRequest);
  &Register(TQTXFileGetDeviceByNameRequest);
  &Register(TQTXFileCreateLocalDeviceRequest);
  &Register(TQTXFileDestroyDeviceRequest);
  &Register(TQTXFileReadPartialRequest);
  &Register(TQTXFileReadRequest);
  &Register(TQTXFileWritePartialRequest);
  &Register(TQTXFileWriteRequest);
  &Register(TQTXFileDeleteRequest);
  &Register(TQTXFileRenameRequest);
  &Register(TQTXFileInfoRequest);
  &Register(TQTXFileDirRequest);
  &Register(TQTXMkDirRequest);
  &Register(TQTXRmDirRequest);
end;

//#############################################################################
// TQTXTaskService
//#############################################################################

constructor TQTXTaskService.Create;
begin
  inherited Create;
  FPrefs := TQTXIniFile.Create();
  FLog := TQTXLogEmitter.Create();
  FDatabase := TSQLite3Database.Create(nil);

  FZConfig := TQTXZConfigClient.Create();
  FZConfig.Port := 2292;

  self.OnUserSignedOff := procedure (Sender: TObject; Username: string)
  begin
    WriteToLogF("We got a service signal! User [%s] has signed off completely", [Username]);
  end;

  MessageDispatch.RegisterMessage(TQTXFileGetDeviceListRequest, @HandleGetDevices);
  MessageDispatch.RegisterMessage(TQTXFileGetDeviceByNameRequest, @HandleGetDeviceByName);
  MessageDispatch.RegisterMessage(TQTXFileCreateLocalDeviceRequest, @HandleCreateLocalDevice);
  MessageDispatch.RegisterMessage(TQTXFileDestroyDeviceRequest, @HandleDestroyDevice);

  MessageDispatch.RegisterMessage(TQTXFileReadRequest, @HandleFileRead);
  MessageDispatch.RegisterMessage(TQTXFileReadPartialRequest, @HandleFileReadPartial);

  MessageDispatch.RegisterMessage(TQTXFileWriteRequest, @HandleFileWrite);
  MessageDispatch.RegisterMessage(TQTXFileWritePartialRequest, @HandleFileWritePartial);

  MessageDispatch.RegisterMessage(TQTXFileInfoRequest, @HandleGetFileInfo);
  MessageDispatch.RegisterMessage(TQTXFileDeleteRequest, @HandleFileDelete);

  MessageDispatch.RegisterMessage(TQTXMkDirRequest, @HandleMkDir);
  MessageDispatch.RegisterMessage(TQTXRmDirRequest, @HandleRmDir);
  MessageDispatch.RegisterMessage(TQTXFileRenameRequest, @HandleFileRename);

  MessageDispatch.RegisterMessage(TQTXFileDirRequest, @HandleGetDir);
end;

destructor TQTXTaskService.Destroy;
begin
  // decouple logger from our instance
  self.logging := nil;

  // Release prefs + log
  FPrefs.free;
  FLog.free;
  FZConfig.free;
  inherited;
end;

procedure TQTXTaskService.SendError(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage; Message: string);
begin
  var reply := TQTXErrorMessage.Create(request.ticket);
  try
    reply.Code := CNT_MESSAGE_CODE_ERROR;
    reply.Routing.TagValue := Request.Routing.TagValue;
    reply.Response := Message;

    if Socket.ReadyState = rsOpen then
    begin
      try
        Socket.Send( reply.Serialize() );
      except
        on e: exception do
        WriteToLog(e.message);
      end;
    end else
      WriteToLog("Failed to dispatch error, socket is closed error");
  finally
    reply.free;
  end;
end;

procedure TQTXTaskService.ExecuteExternalJS(Params: array of string;
  TagValue: variant; const CB: TQTXServerExecuteCB);
begin
  var LTask: JChildProcess;

  var lOpts := TVariant.CreateObject();
  lOpts.shell := false;
  lOpts.detached := true;

  Params.insert(0, '--no-warnings');

  // Spawn a new process, this creates a new shell interface
  try
    LTask := child_process().spawn('node', Params, lOpts );
  except
    on e: exception do
    begin
      if assigned(CB) then
        CB(TagValue, e.message, e);
      exit;
    end;
  end;

  // Map general errors on process level
  LTask.on('error', procedure (error: variant)
  begin
    {$IFDEF DEBUG}
    writeln("error->" + error.toString());
    {$ENDIF}
    WriteToLog(error.toString());

    if assigned(CB) then
      CB(TagValue, "", nil);
  end);

  // map stdout so we capture the output
  LTask.stdout.on('data', procedure (data: variant)
  begin
    if assigned(CB) then
      CB(TagValue, data.toString(), nil);
  end);

  // map stderr so we can capture exception messages
  LTask.stderr.on('data', procedure (error:variant)
  begin
    {$IFDEF DEBUG}
    writeln("stdErr->" + error.toString());
    {$ENDIF}

    if assigned(CB) then
      CB(TagValue, "", nil);

    WriteToLog(error.toString());
  end);
end;

function TQTXTaskService.GetFactory: TMessageFactory;
begin
  result := TQTXTaskServiceFactory.Create();
end;

procedure TQTXTaskService.SetupService(const CB: TRagnarokServiceCB);
begin
  SetupPreferences( procedure (Error: Exception)
  begin
    // No logfile yet setup (!)
    if Error <> nil then
    begin
      WriteToLog("Preferences setup: Failed!");
      WriteToLog(error.message);
      raise error;
    end else
    WriteToLog("Preferences setup: OK");

    // logfile-name is always relative to the executable
    var LLogFileName := TQTXNodeFileUtils.IncludeTrailingPathDelimiter( TQTXNodeFileUtils.GetCurrentDirectory );
    LLogFileName += FPrefs.ReadString('log', 'logfile', 'log.txt');

    // Port is defined in the ancestor, so we assigns it here
    Port := FPrefs.ReadInteger('networking', 'port', CNT_PREFS_DEFAULTPORT);

    SetupLogfile(LLogFileName, procedure (Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog("Logfile setup: Failed!");
        WriteToLog(error.message);
        raise error;
      end else
      WriteToLog("Logfile setup: OK");

      SetupDatabase( procedure (Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog("Database setup: Failed!");
          WriteToLog(error.message);
          if assigned(CB) then
            CB(Error)
          else
            raise Error;
        end else
        WriteToLog("Database setup: OK");

        if assigned(CB) then
          CB(nil);
      end);

    end);
  end);
end;

procedure TQTXTaskService.SetupPreferences(const CB: TRagnarokServiceCB);
begin
  var lBasePath := TQTXNodeFileUtils.GetCurrentDirectory;
  var LPrefsFile := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + CNT_PREFS_FILENAME;

  if TQTXNodeFileUtils.FileExists(LPrefsFile) then
  begin
    FPrefs.LoadFromFile(nil, LPrefsFile, procedure (TagValue: variant; Error: Exception)
    begin
      if Error <> nil then
      begin
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end;

      if assigned(CB) then
        CB(nil);
    end);

  end else
  begin
    var LError := Exception.Create('Could not locate preferences file: ' + LPrefsFile);
    WriteToLog(LError.message);
    if assigned(CB) then
      CB(LError)
    else
      raise LError;
  end;
end;

procedure TQTXTaskService.SetupLogfile(LogFileName: string;const CB: TRagnarokServiceCB);
begin
  // Attempt to open logfile
  // Note: Log-object error options is set to throw exceptions
  try
    FLog.Open(LogFileName);
  except
    on e: exception do
    begin
      if assigned(CB) then
      begin
        CB(e);
        exit;
      end else
      begin
        writeln(e.message);
        raise;
      end;
    end;
  end;

  // We inherit from TQTXLogObject, which means we can pipe
  // all errors etc directly using built-in functions. So here
  // we simply glue our instance to the log-file, and its all good
  self.Logging := FLog as IQTXLogClient;

  if assigned(CB) then
    CB(nil);
end;

procedure TQTXTaskService.FindDeviceByType(TagValue: variant; UserName: string; &Type: JDeviceType; CB: TQTXGetDisksCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to lookup disk, username was invalid error");
    var lError := EException.Create("Failed to lookup devices, invalid username");
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  GetDevicesForUser(TagValue, Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    var x := 0;
    while x < Devices.dlDrives.Count do
    begin
      if Devices.dlDrives[x].&Type <> &Type then
      begin
        Devices.dlDrives.delete(x, 1);
        continue;
      end;
      inc(x);
    end;

    if assigned(CB) then
      CB(TagValue, Devices, nil);
  end);
end;

procedure TQTXTaskService.FindDeviceByName(TagValue: variant; Username, DiskName: string; CB: TQTXFindDeviceCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    var lLogText := "Failed to lookup device, username was invalid error";
    WriteToLog(lLogText);
    var lError := EException.Create(lLogText);
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  DiskName := DiskName.trim();
  if DiskName.length < 1 then
  begin
    var lLogText := "Failed to lookup device, disk-name was invalid error";
    WriteToLog(lLogText);
    var lError := EException.Create(lLogText);
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  GetDevicesForUser(TagValue, Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    DiskName := DiskName.trim().ToLower();
    var lDiskInfo: JDeviceInfo := nil;


    for var disk in Devices.dlDrives do
    begin
      if disk.Name.ToLower() = DiskName then
      begin
        lDiskInfo := disk;
        break;
      end;
    end;

    if assigned(CB) then
      CB(TagValue, lDiskInfo, nil);
  end);
end;

procedure TQTXTaskService.GetTranslatedPathFor(TagValue: variant; Username, FullPath: string; CB: TQTXGetTranslatePathCB);
begin
  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(TagValue, UserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          if assigned(CB) then
            CB(TagValue, FullPath, '', Error)
          else
            raise Error;
          exit;
        end;

        if Device.&Type <> dtLocal then
        begin
          var lError := EException.CreateFmt('Failed to translate path, device [%s] is not local error', [Device.Name]);
          if assigned(CB) then
            CB(TagValue, FullPath, '', lError)
          else
            raise lError;
          exit;
        end;

        // We want the path + filename, so we can append that to
        // the actual localized filesystem
        var lExtract := FullPath;
        delete(lExtract, 1, lInfo.MountPart.Length + 1);

        // Construct complete storage location
        var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
        lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

        // Return translated path
        if assigned(CB) then
          CB(TagValue, FullPath, lFullPath, nil);

      end);
    end else
    begin
      var lErr := EException.CreateFmt("Invalid path [%s] error", [FullPath]);
      if assigned(CB) then
        CB(TagValue, FullPath, '', lErr)
      else
        raise lErr;
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.GetFileInfo(TagValue: variant; UserName, FullPath: string; CB: TQTXGetFileInfoCB);
begin
  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(TagValue, UserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          if assigned(CB) then
            CB(TagValue, '', nil, Error)
          else
            raise Error;
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Call the underlying OS to get the file statistics
            NodeJsFsAPI().lStat(lFullPath,
            procedure (Error: JError; Stats: JStats)
            begin
              if Error <> nil then
              begin
                var lError := EException.Create(Error.message);
                if assigned(CB) then
                  CB(TagValue, lFullPath, nil, lError)
                else
                  raise lError;
                exit;
              end;

              // And deliver
              if assigned(CB) then
                CB(TagValue, lFullPath, Stats, nil);
            end);
          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lError := EException.Create("Cloud bindings not activated error");
            if assigned(CB) then
              CB(TagValue, '', nil, lError)
          end;
        end;
      end);
    end else
    begin
      var lErr := EException.CreateFmt("Invalid path [%s] error", [FullPath]);
      if assigned(CB) then
        CB(TagValue, '', nil, lErr)
      else
        raise lErr;
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.GetDevicesForUser(TagValue: variant; Username: string; CB: TQTXGetDisksCB);
begin
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to lookup devices, username was invalid error");
    var lError := EException.Create("Failed to lookup devices, invalid username");
    if assigned(CB) then
      CB(TagValue, nil, lError)
    else
      raise lError;
    exit;
  end;

  var lTransaction: TQTXReadTransaction;
  if not TSQLite3Database(DataBase).CreateReadTransaction(lTransaction) then
  begin
    var lErr := EException.Create("Failed to create read-transaction error");
    if assigned(cb) then
      CB(TagValue, nil, lErr)
    else
      raise lErr;
    exit;
  end;

  var lQuery := TSQLite3ReadTransaction(lTransaction);
  lQuery.SQL := "select * from devices where owner=?";
  lQuery.Parameters.AddValueOnly(Username);

  lQuery.Execute(
  procedure (Sender: TObject; Error: Exception)
  begin
    if Error <> nil then
    begin
      if assigned(CB) then
        CB(TagValue, nil, Error)
      else
        raise Error;
      exit;
    end;

    var lDisks := new JDeviceList();
    lDisks.dlUser := UserName;

    for var x := 0 to lQuery.datarows.length-1 do
    begin
      var lInfo := new JDeviceInfo();
      lInfo.Name := lQuery.datarows[x]["name"];
      lInfo.&Type := JDeviceType( lQuery.datarows[x]["type"] );
      lInfo.owner := lQuery.datarows[x]["owner"];
      lInfo.location := lQuery.datarows[x]["location"];
      lInfo.APIKey := lQuery.datarows[x]["apikey"];
      lInfo.APISecret := lQuery.datarows[x]["apisecret"];
      lInfo.APIPassword := lQuery.datarows[x]["apipassword"];
      lInfo.APIUser := lQuery.datarows[x]["apiuser"];
      lDisks.dlDrives.add(lInfo);
    end;

    try
      if assigned(CB) then
        CB(TagValue, lDisks, nil);
    finally
      lQuery.free;
    end;
  end);
end;

procedure TQTXTaskService.ValidateLocalDiskName(TagValue: variant; Username, DeviceName: string; CB: TQTXCheckDevicePathCB);
begin
  var Filename := 'disk.' + username + '.' + DeviceName + '.' + ord(JDeviceType.dtLocal).ToString();

  var LBasePath := TQTXNodeFileUtils.GetCurrentDirectory();
  LBasePath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + 'userdevices';

  // Make sure the device folder is there
  if not TQTXNodeFileUtils.DirectoryExists(LBasePath) then
  begin
    var lError := EException.CreateFmt("Directory not found: %s", [lBasePath]);
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  lBasePath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + Filename;

  if TQTXNodeFileUtils.DirectoryExists(LBasePath) then
  begin
    var lError := EException.CreateFmt("Path already exist error [%s]", [lBasePath]);
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  // OK, folder is not created yet, so its good to go
  if assigned(CB) then
    CB(TagValue, Filename, nil);
end;

procedure TQTXTaskService.UnRegisterLocalDevice(TagValue: variant; UserName, DiskName: string; CB: TQTXUnRegisterLocalDeviceCB);
begin
  WriteToLogF("Removing local device [%s] for user [%s] ", [DiskName, Username]);

  // Check username string
  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to unregister device, username was invalid error");
    var lError := EException.Create("Failed to register device, invalid username");
    if assigned(CB) then
      CB(TagValue, DiskName, lError)
    else
      raise lError;
    exit;
  end;

  // Check diskname string
  DiskName := DiskName.trim().ToLower();
  if DiskName.length < 1 then
  begin
    WriteToLog("Failed to unregister device, disk-name was invalid error");
    var lError := EException.Create("Failed to register device, invalid disk-name");
    if assigned(CB) then
      CB(TagValue, DiskName, lError)
    else
      raise lError;
    exit;
  end;

  FindDeviceByName(TagValue, Username, DiskName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    // Did the search fail?
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      if assigned(CB) then
        CB(TagValue, DiskName, Error)
      else
        raise Error;
      exit;
    end;

    // Make sure the device is local
    if Device.&Type <> dtLocal then
    begin
      var lError := EException.CreateFmt('Failed to translate path, device [%s] is not local error', [Device.Name]);
      if assigned(CB) then
        CB(TagValue, DiskName, lError)
      else
        raise lError;
      exit;
    end;

    // Delete record from database
    var lWriter: TQTXWriteTransaction;
    if FDatabase.CreateWriteTransaction(lWriter) then
    begin
      lWriter.SQL := "delete from profiles where user = ? and name = ?;";
      lWriter.Parameters.AddValueOnly(Username);
      lWriter.Parameters.AddValueOnly(DiskName);

      lWriter.Execute(
      procedure (Sender: TObject; Error: Exception)
      begin
        try

          if Error <> nil then
          begin
            if assigned(CB) then
              CB(TagValue, DiskName, Error)
            else
              raise Error;
            exit;
          end;

          // Construct complete storage location
          var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
          lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
          lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();

          // Now delete the disk-drive directory
          TQTXNodeFileUtils.DeleteDirectory(nil, lFullPath,
          procedure (TagValue: variant; Path: string; Error: Exception)
          begin
            if assigned(CB) then
              CB(TagValue, DiskName, Error)
          end);

        finally
          lWriter.free;
          lWriter := nil;
        end;
      end);
    end;
  end);
end;

procedure TQTXTaskService.RegisterLocalDevice(TagValue: variant; Username, DiskName: string; CB: TQTXRegisterLocalDeviceCB);
begin
  WriteToLogF("Adding local device [%s] for user [%s] ", [DiskName, Username]);

  UserName := username.trim().ToLower();
  if Username.length < 1 then
  begin
    WriteToLog("Failed to register device, username was invalid error");
    var lError := EException.Create("Failed to register device, invalid username");
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  DiskName := DiskName.trim().ToLower();
  if DiskName.length < 1 then
  begin
    WriteToLog("Failed to register device, disk-name was invalid error");
    var lError := EException.Create("Failed to register device, invalid disk-name");
    if assigned(CB) then
      CB(TagValue, '', lError)
    else
      raise lError;
    exit;
  end;

  FindDeviceByName(TagValue, Username, DiskName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    // Did the search fail?
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      if assigned(CB) then
        CB(TagValue, '', Error)
      else
        raise Error;
      exit;
    end;

    // Does a device that match already exist?
    if Device <> nil then
    begin
      var lError := EException.CreateFmt("Failed to create device [%s], device already exists", [DiskName]);
      if assigned(CB) then
        CB(TagValue, '', lError)
      else
        raise lError;
      exit;
    end;

    //  make sure the device-folder does not exist, so we can create it
    ValidateLocalDiskName(TagValue, Username, DiskName,
    procedure (TagValue: variant; PathName: string; Error: Exception)
    begin
      if Error <> nil then
      begin
        if assigned(CB) then
          CB(TagValue, '', Error)
        else
          raise Error;
        exit;
      end;

      // ValidateLocalDiskName only returns the valid directory-name,
      // not a full path -- so we need to build up the full targetpath
      var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
      lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
      lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + PathName;

      TQTXNodeFileUtils.CreateDirectory(nil, lFullPath,
      procedure (TagValue: variant; Path: string; Error: exception)
      begin
        if Error <> nil then
        begin
          var lError := EException.CreateFmt("Failed to create device [%s] with path: %s", [DiskName, lFullPath]);
          if assigned(CB) then
            CB(TagValue, PathName, lError)
          else
            raise lError;
          exit;
        end;

        FDatabase.Execute(
          #'insert into devices (type, owner, name, location)
            values(?, ?, ?, ?);',
            [ord(JDeviceType.dtLocal), UserName, Diskname, PathName] ,
        procedure (Sender: TObject; Error: Exception)
        begin
          if Error <> nil then
          begin
            WriteToLog(Error.message);
            if assigned(CB) then
              CB(TagValue, PathName, Error)
            else
              raise Error;
            exit;
          end;

          WriteToLogF("Device [%s] added to database user [%s]", [DiskName, UserName]);
          if assigned(CB) then
            CB(TagValue, PathName, nil);
        end);

      end);



    end);
  end);
end;

procedure TQTXTaskService.SetupDeviceTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin

  FDatabase.Execute(
    #'
      create table if not exists devices
          (
            id integer primary key AUTOINCREMENT,
            type        integer,
            owner       text,
            name        text,
            location    text,
            apikey      text,
            apisecret   text,
            apipassword text,
            apiuser     text
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;

procedure TQTXTaskService.SetupTaskTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin

  FDatabase.Execute(
    #'
      create table if not exists tasks
          (
            id integer primary key AUTOINCREMENT,
            state     integer,
            username  text,
            created   real
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;


procedure TQTXTaskService.SetupOperationsTable(const TagValue: variant; const CB: TRagnarokServiceCB);
begin
  FDatabase.Execute(
    #'
      create table if not exists operations
          (
            id integer primary key AUTOINCREMENT,
            username text,
            taskid integer,
            name text,
            filename text
          );
          ', [],
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end else
      if assigned(CB) then
        CB(nil);
    end);
end;

procedure TQTXTaskService.SetupDatabase(const CB: TRagnarokServiceCB);
begin
  // Try to read database-path from preferences file
  var LDbFileToOpen := FPrefs.ReadString("database", "database_name", "");

  // Trim away spaces, check if there is a filename
  LDbFileToOpen := LDbFileToOpen.trim();
  if LDbFileToOpen.length < 1 then
  begin
    // No filename? Fall back on pre-defined file in CWD
    var LBasePath := TQTXNodeFileUtils.GetCurrentDirectory();
    LDbFileToOpen := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(LBasePath) + CNT_PREFS_DBNAME;
  end;

  FDatabase.AccessMode := TSQLite3AccessMode.sqaReadWriteCreate;
  FDatabase.Open(LDbFileToOpen,
    procedure (Sender: TObject; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        if assigned(CB) then
          CB(Error)
        else
          raise Error;
        exit;
      end;

      WriteToLog("Initializing task table");
      SetupTaskTable(nil, procedure (Error: exception)
      begin
        if Error <> nil then
        begin
          WriteToLog("Tasks initialized: **failed");
          WriteToLog(error.message);
          if assigned(CB) then
            CB(Error)
          else
            raise Error;
          exit;
        end else
        writeToLog("Tasks initialized: OK");

        WriteToLog("Initializing operations table");
        SetupOperationsTable(nil, procedure (Error: exception)
        begin
          if Error <> nil then
          begin
            WriteToLog("Operations initialized: **failed");
            WriteToLog(error.message);
            if assigned(CB) then
              CB(Error);
            exit;
          end else
          writeToLog("Operations initialized: OK");

          WriteToLog("Initializing device table");
          SetupDeviceTable(nil, procedure (Error: exception)
          begin
            if Error <> nil then
            begin
              WriteToLog("Device-table initialized: **failed");
              WriteToLog(error.message);
              if assigned(CB) then
                CB(Error);
              exit;
            end else
            writeToLog("Device-table initialized: OK");

            if assigned(CB) then
              CB(nil);
          end);
        end);
      end);
    end);
end;


procedure TQTXTaskService.HandleFileRead(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileReadRequest(request);
  var lUserName := lRequest.UserName;
  var lFileName := lRequest.FileName;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Invalid filename error");
    exit;
  end;

  // Check for unsupported path sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lOptions: TReadFileOptions;
    lOptions.encoding := 'binary';

    NodeJsFsAPI().readFile(LocalFile, lOptions,
    procedure (Error: JError; Data: JNodeBuffer)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var lResponse := TQTXFileReadResponse.Create(Request.Ticket);
      lResponse.UserName := lUserName;
      lResponse.Routing.TagValue := request.routing.tagValue;
      lResponse.FileName := lFileName;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;

      // Convert filedata in one pass
      try
        var lConvert := TDataTypeConverter.Create();
        try
          lResponse.Attachment.AppendBytes( lConvert.TypedArrayToBytes(Data) );
        finally
          lConvert.free;
        end;
      except
        on e: exception do
        begin
          WriteToLog(e.message);
          SendError(Socket, Request, e.Message);
          exit;
        end;
      end;

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;
    end);
  end);
end;

procedure TQTXTaskService.HandleFileReadPartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileReadPartialRequest(request);
  var lUserName := lRequest.UserName;
  var lFileName := lRequest.FileName;
  var lStart := lRequest.Offset;
  var lSize := lRequest.Size;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Invalid filename error");
    exit;
  end;

  // Check for unsupported path sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if lSize < 1 then
  begin
    SendError(Socket, Request, "Read failed, invalid size error");
    exit;
  end;

  if lStart < 0 then
  begin
    SendError(Socket, Request, "Read failed, invalid offset error");
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if lStart > Info.size then
    begin
      SendError(Socket, Request, "Read failed, offset beyond filesize error");
      exit;
    end;

    NodeJsFsAPI().open(LocalFile, "r",
    procedure (Error: JError; Fd: THandle)
    begin
      if error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var Data := new JNodeBuffer(lSize);
      NodeJsFsAPI().read(Fd, Data, 0, lSize, lStart,
      procedure (Error: JError; BytesRead: integer; buffer: JNodeBuffer)
      begin
        if Error <> nil then
        begin
          NodeJsFsAPI().closeSync(Fd);
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        // Close the file-handle and return data
        NodeJsFsAPI().close(Fd, procedure (Error: JError)
        begin
          var lResponse := TQTXFileReadPartialResponse.Create(Request.Ticket);
          lResponse.UserName := lUserName;
          lResponse.Routing.TagValue := request.routing.tagValue;
          lResponse.FileName := lFileName;
          lResponse.Code := CNT_MESSAGE_CODE_OK;
          lResponse.Response := CNT_MESSAGE_TEXT_OK;

          // Only encode data if read
          if BytesRead > 0 then
          begin
            // Convert filedata in one pass
            try
              var lConvert := TDataTypeConverter.Create();
              try
                lResponse.Attachment.AppendBytes( lConvert.TypedArrayToBytes(buffer) );
              finally
                lConvert.free;
              end;
            except
              on e: exception do
              begin
                WriteToLog(e.message);
                SendError(Socket, Request, e.Message);
                exit;
              end;
            end;
          end;

          try
            Socket.Send( lResponse.Serialize() );
          except
            on e: exception do
              WriteToLog(e.message);
          end;

        end);
      end);
    end);
  end);
end;

procedure TQTXTaskService.HandleFileWrite(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest  := TQTXFileWriteRequest(request);
  var lFileName := lRequest.FileName.trim();
  var lUserName := lRequest.UserName.trim();

  var FullPath  := lFileName;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Invalid filename error");
    exit;
  end;

  // Check for unsupported path sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(nil, lUserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.Message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Extract data to be appended, if any
            // note: null bytes should be allowed, it should just create the file
            var lBytes: array of UInt8;
            if lRequest.attachment.Size > 0 then
              lBytes := lRequest.Attachment.ToBytes();

            // Write the data to the file
            NodeJsFsAPI().writeFile(lFullPath, lBytes,
            procedure (Error: JError)
            begin
              if Error <> nil then
              begin
                WriteToLog(Error.Message);
                SendError(Socket, Request, Error.Message);
                exit;
              end;

              // Setup response object
              var lResponse := TQTXFileWriteResponse.Create(lRequest.Ticket);
              lResponse.UserName := lUserName;
              lResponse.FileName := lFileName;
              lResponse.Code := CNT_MESSAGE_CODE_OK;
              lResponse.Response := CNT_MESSAGE_TEXT_OK;

              // Send success response
              try
                Socket.Send( lResponse.Serialize() );
              except
                on e: exception do
                  WriteToLog(e.message);
              end;

            end);

          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lErrorText := Format("Cloud bindings not active error [%s]", [lRequest.FileName]);
            WriteToLog(lErrorText);
            SendError(Socket, Request, lErrorText);
          end;
        end;
      end);
    end else
    begin
      SendError(Socket, Request, format("Invalid path [%s] error", [FullPath]));
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleFileWritePartial(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest  := TQTXFileWritePartialRequest(request);
  var lFileName  := lRequest.FileName.trim();
  var lUserName := lRequest.UserName.trim();
  var lFileOffset := lRequest.Offset;

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, "Invalid filename error");
    exit;
  end;

  // Check for unsupported path sequences
  var lTemp := '..';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var FullPath := lFileName;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(FullPath, lInfo) then
    begin
      // Locate the device for the path belonging to the user
      FindDeviceByName(nil, lUserName, lInfo.MountPart,
      procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.Message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        case Device.&Type of
        dtLocal:
          begin
            // We want the path + filename, so we can append that to
            // the actual localized filesystem
            var lExtract := FullPath;
            delete(lExtract, 1, lInfo.MountPart.Length + 1);

            // Construct complete storage location
            var lFullPath := TQTXNodeFileUtils.GetCurrentDirectory();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + 'userdevices';
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + Device.location.trim();
            lFullPath := TQTXNodeFileUtils.IncludeTrailingPathDelimiter(lFullPath) + lExtract;

            // Extract data to be appended, if any
            // note: null bytes should be allowed, it should just create the file
            var lBytes: array of UInt8;
            if lRequest.attachment.Size > 0 then
              lBytes := lRequest.Attachment.ToBytes();

            var lAccess := TQTXNodeFile.Create();
            lAccess.Open(lFullPath, TQTXNodeFileMode.nfWrite,
            procedure (Error: Exception)
            begin
              if Error <> nil then
              begin
                WriteToLog(Error.Message);
                SendError(Socket, Request, Error.Message);
                exit;
              end;

              lAccess.Write(lBytes, lFileOffset,
              procedure (Error: Exception)
              begin
                if Error <> nil then
                begin
                  WriteToLog(Error.Message);
                  SendError(Socket, Request, Error.Message);
                  exit;
                end;

                // Setup response object
                var lResponse := TQTXFileWriteResponse.Create(lRequest.Ticket);
                lResponse.UserName := lUserName;
                lResponse.FileName := lFileName;
                lResponse.Code := CNT_MESSAGE_CODE_OK;
                lResponse.Response := CNT_MESSAGE_TEXT_OK;

                // Send success response
                try
                  Socket.Send( lResponse.Serialize() );
                except
                  on e: exception do
                    WriteToLog(e.message);
                end;

              end);
            end);
          end;
        dtDropbox, dtGoogle, dtMsDrive:
          begin
            var lErrorText := Format("Clound bindings not active error [%s]", [lRequest.FileName]);
            WriteToLog(lErrorText);
            SendError(Socket, Request, lErrorText);
          end;
        end;
      end);
    end else
    begin
      SendError(Socket, Request, format("Invalid path [%s] error", [FullPath]));
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleRmDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXRmDirRequest(request);
  var lUserName := lRequest.UserName.trim();
  var lDirPath := lRequest.DirPath.trim();

  if lDirPath.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty path [%s] error", [lDirPath]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lParser.Parse(lDirPath, lInfo) then
    begin
      GetTranslatedPathFor(nil, lUserName, lDirPath,
      procedure (TagValue: variant; Original, Translated: string; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        if not TQTXNodeFileUtils.DirectoryExists(Translated) then
        begin
          WriteToLogF("RmDir Failed, directory [%s] does not exist", [Translated]);
          SendError(Socket, Request, Format("RmDir failed, directory [%s] does not exist", [Original]));
          exit;
        end;

        TQTXNodeFileUtils.DeleteDirectory(nil, Translated,
        procedure (TagValue: variant; Path: string; Error: Exception)
        begin
          if error <> nil then
          begin
            WriteToLog(Error.message);
            SendError(Socket, Request, Error.Message);
            exit;
          end;

          // Setup response object
          var lResponse := TQTXRmDirResponse.Create(lRequest.Ticket);
          lResponse.UserName := lUserName;
          lResponse.DirPath := lDirPath;
          lResponse.Code := CNT_MESSAGE_CODE_OK;
          lResponse.Response := CNT_MESSAGE_TEXT_OK;
          lResponse.Routing.TagValue := lRequest.Routing.TagValue;

          // Send success response
          try
            Socket.Send( lResponse.Serialize() );
          except
            on e: exception do
              WriteToLog(e.message);
          end;
        end);
      end);
    end else
    begin
      var lText := format("RmDir failed, invalid path [%s] error", [lDirPath]);
      WriteToLog(lText);
      SendError(Socket, Request, lText);
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleMkDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXMkDirRequest(request);
  var lUserName := lRequest.UserName.trim();
  var lDirPath := lRequest.DirPath.trim();

  if lDirPath.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty path [%s] error", [lDirPath]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lDirPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  var lParser := TQTXPathParser.Create();
  try
    var lInfo: TQTXPathData;
    if lparser.Parse(lDirPath, lInfo) then
    begin
      GetTranslatedPathFor(nil, lUserName, lDirPath,
      procedure (TagValue: variant; Original, Translated: string; Error: Exception)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.Message);
          exit;
        end;

        TQTXNodeFileUtils.DirectoryExists(nil, Translated,
        procedure (TagValue: variant; Path: string; Error: Exception)
        begin
          if Error <> nil then
          begin
            WriteToLogF("MkDir Failed, directory [%s] already exists", [Translated]);
            SendError(Socket, Request, Format("MkDir Failed, directory [%s] already exists", [Original]));
            exit;
          end;

          TQTXNodeFileUtils.CreateDirectory(nil, Translated,
          procedure (TagValue: variant; Path: string; Error: Exception)
          begin
            if Error <> nil then
            begin
              WriteToLogF("MkDir Failed, directory [%s] could not be created", [Original]);
              SendError(Socket, Request, Format("MkDir Failed, directory [%s] could not be created", [Translated]));
              exit;
            end;

            // Setup response object
            var lResponse := TQTXMkDirResponse.Create(lRequest.Ticket);
            lResponse.UserName := lUserName;
            lResponse.DirPath := lDirPath;
            lResponse.Code := CNT_MESSAGE_CODE_OK;
            lResponse.Response := CNT_MESSAGE_TEXT_OK;
            lResponse.Routing.TagValue := lRequest.Routing.TagValue;

            // Send success response
            try
              Socket.Send( lResponse.Serialize() );
            except
              on e: exception do
                WriteToLog(e.message);
            end;

          end);
        end);
      end);

    end else
    begin
      var lText := format("MkDir Failed, invalid path [%s] error", [lDirPath]);
      WriteToLog(lText);
      SendError(Socket, Request, lText);
    end;
  finally
    lParser.free;
  end;
end;

procedure TQTXTaskService.HandleFileDelete(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileDeleteRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();

  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty filename [%s] error", [lFileName]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if not Info.isFile then
    begin
      SendError(Socket, Request, "Filesystem object is not a file error");
      exit;
    end;

    NodeJsFsAPI().unlink(LocalFile,
    procedure (Error: JError)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.message);
        exit;
      end;

      var lResponse := new TQTXFileDeleteResponse(lRequest.Ticket);
      lResponse.Routing.TagValue := request.Routing.TagValue;
      lResponse.UserName := lUserName;
      lResponse.FileName := lFileName;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;
    end);
  end);
end;

procedure TQTXTaskService.HandleFileRename(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileRenameRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();
  var lNewName := lRequest.NewName.trim();

  // Check filename length
  if lFileName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty from-filename [%s] error", [lFileName]) );
    exit;
  end;

  // check newname length
  if lNewName.length < 1 then
  begin
    SendError(Socket, Request, Format("Invalid or empty to-filename [%s] error", [lNewName]) );
    exit;
  end;

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if pos(lTemp, lNewName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  if pos(lTemp, lNewName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;


  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    if not Info.isFile then
    begin
      SendError(Socket, Request, "Filesystem object is not a file error");
      exit;
    end;

    GetTranslatedPathFor(nil, lUsername, lNewName,
    procedure (TagValue: variant; Original, Translated: string; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      NodeJsFsAPI().rename(LocalFile, Translated,
      procedure (Error: JError)
      begin
        if Error <> nil then
        begin
          WriteToLog(Error.message);
          SendError(Socket, Request, Error.message);
          exit;
        end;

        var lResponse := new TQTXFileRenameResponse(lRequest.Ticket);
        lResponse.Routing.TagValue := request.Routing.TagValue;
        lResponse.UserName := lUserName;
        lResponse.FileName := lFileName;
        lResponse.Code := CNT_MESSAGE_CODE_OK;
        lResponse.Response := CNT_MESSAGE_TEXT_OK;

        try
          Socket.Send( lResponse.Serialize() );
        except
          on e: exception do
            WriteToLog(e.message);
        end;
      end);

    end);

  end);
end;

procedure TQTXTaskService.HandleGetDir(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileDirRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lPath := lRequest.Path.trim();

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lPath) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetTranslatedPathFor(nil, lUserName, lPath,
  procedure (TagValue: variant; Original, Translated: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    //writeln("Translated path is:" + Translated);

    if not TQTXNodeFileUtils.DirectoryExists(Translated) then
    begin
      WriteToLogF("GetDir Failed, directory [%s] does not exist", [Translated]);
      SendError(Socket, Request, Format("GetDir failed, directory [%s] does not exist", [Original]));
      exit;
    end;

    var lWalker := TQTXFileWalker.Create();
    lWalker.Examine(Translated, procedure (Sender: TQTXFileWalker; Error: EException)
    begin
      if Error <> nil then
      begin
        WriteToLogF("GetDir Failed: %s", [Error.Message]);
        SendError(Socket, Request, Format("GetDir failed: %s", [Error.Message]));
        exit;
      end;

      // Get the directory data, swap out the path
      // record with the original [amiga] style path
      var lData := Sender.ExtractList();
      lData.dlPath := Original;

      var lResponse := new TQTXFileDirResponse(lRequest.Ticket);
      lResponse.Routing.TagValue := request.Routing.TagValue;
      lResponse.UserName := lUserName;
      lResponse.Path := lPath;
      lResponse.Assign( lData );

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
          WriteToLog(e.message);
      end;

      // release instance in 100ms
      TQTXDispatch.execute(procedure ()
      begin
        try
          lWalker.free
        except
          on e: exception do
          begin
            WriteToLogF("Failed to release file-walker instance: %s", [e.message]);
          end;
        end;
      end, 100);
    end);
  end);
end;

procedure TQTXTaskService.HandleGetFileInfo(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXFileInfoRequest(Request);
  var lUserName := lRequest.UserName.trim();
  var lFileName := lRequest.FileName.trim();

  // prevent path escape attempts
  var lTemp := "../";
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  lTemp := './';
  if pos(lTemp, lFileName) > 0 then
  begin
    SendError(Socket, Request, Format("Unsupported path sequence [%s] detected error", [lTemp]) );
    exit;
  end;

  GetFileInfo(lRequest, lUserName, lFileName,
  procedure (TagValue: variant; LocalFile: string; Info: JStats; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    // Collect the data
    var lData := new JFileItem();
    lData.diFileName := lFileName;
    lData.diFileType := if Info.isFile then JFileItemType.wtFile else JFileItemType.wtFolder;
    lData.diFileSize := Info.size;
    lData.diFileMode := IntToStr(Info.mode);
    lData.diCreated  := TDateUtils.FromJsDate( Info.cTime );
    lData.diModified := TDateUtils.FromJsDate( Info.mTime );

    var lResponse := new TQTXFileInfoResponse(lRequest.Ticket);
    lResponse.Routing.TagValue := request.Routing.TagValue;
    lResponse.UserName := lUserName;
    lResponse.FileName := lFileName;
    lResponse.Assign(lData);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
        WriteToLog(e.message);
    end;
  end);
end;

procedure TQTXTaskService.HandleDestroyDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileDestroyDeviceRequest(request);

  // This will also destroy any files + unregister the device in the
  // database table for the service -- do not mess with this!
  UnRegisterLocalDevice(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; LocalPath: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileDestroyDeviceResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.DeviceName := lMessage.DeviceName;
    lResponse.Routing.TagValue := Request.Routing.TagValue;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;
  end);
end;

procedure TQTXTaskService.HandleCreateLocalDevice(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileCreateLocalDeviceRequest(request);

  // Attempt to register.
  // NOTE: This will automatically create a matching folder
  //       under $cwd/userdevices/[calculated_name_of_device]

  RegisterLocalDevice(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; LocalPath: string; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    FindDeviceByName(nil, lMessage.Username, lMessage.DeviceName,
    procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
    begin
      if Error <> nil then
      begin
        WriteToLog(Error.Message);
        SendError(Socket, Request, Error.Message);
        exit;
      end;

      var lResponse := TQTXFileCreateLocalDeviceResponse.Create(request.ticket);
      lResponse.UserName := lMessage.UserName;
      lResponse.Routing.TagValue := Request.Routing.TagValue;
      lResponse.Code := CNT_MESSAGE_CODE_OK;
      lResponse.Response := CNT_MESSAGE_TEXT_OK;
      if Device <> nil then
        lResponse.assign(Device);

      try
        Socket.Send( lResponse.Serialize() );
      except
        on e: exception do
        begin
          WriteToLog(e.message);
        end;
      end;

    end);
  end);
end;

procedure TQTXTaskService.HandleGetDeviceByName(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileGetDeviceByNameRequest(request);

  FindDeviceByName(nil, lMessage.Username, lMessage.DeviceName,
  procedure (TagValue: variant; Device: JDeviceInfo; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileGetDeviceByNameResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;
    if Device <> nil then
      lResponse.assign(Device);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;
  end);

end;

procedure TQTXTaskService.HandleGetDevices(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lMessage := TQTXFileGetDeviceListRequest(Request);
  GetDevicesForUser(nil, lMessage.Username,
  procedure (TagValue: variant; Devices: JDeviceList; Error: Exception)
  begin
    if Error <> nil then
    begin
      WriteToLog(Error.Message);
      SendError(Socket, Request, Error.Message);
      exit;
    end;

    var lResponse := TQTXFileGetDeviceListResponse.Create(request.ticket);
    lResponse.UserName := lMessage.UserName;
    lResponse.Code := CNT_MESSAGE_CODE_OK;
    lResponse.Response := CNT_MESSAGE_TEXT_OK;
    if Devices <> nil then
      lResponse.assign(Devices);

    try
      Socket.Send( lResponse.Serialize() );
    except
      on e: exception do
      begin
        WriteToLog(e.message);
      end;
    end;

  end);
end;

procedure TQTXTaskService.AfterServerStarted;
begin
  inherited;

  // Check prefs if zconfig should be applied
  if self.FPrefs.ReadBoolean("zconfig", "active", false) then
  begin
    // ZConfig should only run on the master instance.
    // We dont want to register our endpoint for each worker
    if NodeJSClusterAPI().isWorker then
      exit;

    writeln("Setting up Zero-Configuration layer");
    FZConfig.port := FPrefs.ReadInteger('zconfig', 'bindport', 2109);
    FZConfig.address := GetMachineIP();
    FZConfig.Start(nil, procedure (Sender: TObject; TagValue: variant; Error: Exception)
    begin
      if FPrefs.ReadBoolean("zconfig", "broadcast", true) then
        FZConfig.Socket.setBroadcast(true);

      // Build up the endpoint (URL) for our websocket server
      var lEndpoint := '';

      if FPrefs.ReadBoolean('networking', 'secure', false) then
        lEndpoint := 'wss://'
      else
        lEndpoint := 'ws://';

      lEndpoint += GetMachineIP();
      lEndpoint += ':' + Port.ToString();

      // Ping the ZConfig service on interval, until our service is registered
      // We keep track of the interval handle so we can stop calling on interval later
      FRegHandle := TQTXDispatch.SetInterval( procedure ()
      begin
        inc(FRegCount);

        // Only output once to avoid overkill in the log
        if FRegCount = 1 then
          WriteToLogF("ZConfig registration begins [%s]", [lEndpoint]);

        FZConfig.RegisterService(nil, CNT_ZCONFIG_SERVICE_NAME, SERVICE_ID_TASKMANAGER, lEndpoint,
        procedure (TagValue: variant; Error: Exception)
        begin
          if Error = nil then
          begin
            WriteToLog("Service registered");
            TQTXDispatch.ClearInterval(FRegHandle);
            FRegCount := 0;
            exit;
          end;
        end);
      end, 1000);

    end);
  end;
end;

procedure TQTXTaskService.BeforeServerStopped;
begin
  inherited;
end;

procedure TQTXTaskService.Dispatch(Socket: TNJWebSocketSocket; Message: TQTXBaseMessage);
begin
  var LInfo := MessageDispatch.GetMessageInfoForClass(Message);
  if LInfo <> nil then
  begin
    try
      LInfo.MessageHandler(Socket, Message);
    except
      on e: exception do
      begin
        //Log error
        WriteToLog(e.message);
      end;
    end;
  end;
end;

end.


 

Nodebuilder, QTX and the release of my brand new social platform

January 2, 2020 9 comments

First, let me wish everyone a wonderful new year! With xmas and the silly season firmly behind us, and my batteries recharged – I feel my coding fingers itch to get started again.

2019 was a very busy year for me. Exhausting, even. I have juggled a full-time job, kids and family, as well as our community project, Quartex Media Desktop. And since that project has grown considerably, I had to stop and work on the tooling, which is what NodeBuilder is all about.

I have also released my own social media platform (see further down). This was initially scheduled for Q4 2020, but Facebook royally pissed me off, so I set it up in December instead.

NodeBuilder

node

For those of you who read my blog, you probably remember the message system I made for the Quartex Desktop, called Ragnarok. It is a system for message dispatching and RPC, except that the handling is decoupled from the transport medium. In other words, it doesn't care how you deliver a message (WebSocket, UDP, REST); it takes care of serialization, binary data and security.

All the back-end services that make up the desktop system are constructed around Ragnarok. So each service exposes a set of methods that the core can call, much like a normal RPC / SOAP service would. Each method is represented by a request and response object, which Ragnarok serializes to a JSON message envelope.

In our current model I use WebSocket, which is a full-duplex, long-lived connection (to avoid the overhead of having to connect and perform a handshake for each call). But there is nothing in the way of implementing a REST transport layer (UDP is already supported; it is used by the Zero-Config system, where the services automatically find each other and register as long as they are connected to the same router or switch). For the public service I think REST makes more sense, since it will better utilize the software clustering that node.js offers.
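
To make that a little more concrete, here is a rough sketch of what a Ragnarok-style handler looks like, modeled on the task-service code shown earlier in this archive. The GetStatus request/response classes and the GetServiceUpTime() helper are made up purely for illustration, but the Ticket, Routing, Code/Response fields and the Serialize() call mirror what the real messages expose.

type
  // Hypothetical request/response pair for a "GetStatus" method.
  // Real messages descend from TQTXBaseMessage and already carry
  // Ticket, Routing, Code and Response members
  TQTXGetStatusRequest = class(TQTXBaseMessage)
  public
    UserName: string;
  end;

  TQTXGetStatusResponse = class(TQTXBaseMessage)
  public
    UserName: string;
    UpTime: integer;
  end;

procedure TQTXTaskService.HandleGetStatus(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  var lRequest := TQTXGetStatusRequest(Request);

  // Build the response, echoing the ticket so the caller can match it
  var lResponse := TQTXGetStatusResponse.Create(lRequest.Ticket);
  lResponse.Routing.TagValue := lRequest.Routing.TagValue;
  lResponse.UserName := lRequest.UserName;
  lResponse.UpTime := GetServiceUpTime(); // hypothetical helper
  lResponse.Code := CNT_MESSAGE_CODE_OK;
  lResponse.Response := CNT_MESSAGE_TEXT_OK;

  // Ragnarok turns the object into a JSON envelope; the transport
  // (here a websocket) just ships the text
  try
    Socket.Send( lResponse.Serialize() );
  except
    on e: exception do
      WriteToLog(e.message);
  end;
end;

The handler never touches JSON or the wire format directly; that is Ragnarok's job, which is also why generating these request/response pairs with NodeBuilder is such a time-saver.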

nodebuilder

Node Builder is a relatively simple service designer, but highly effective for our needs

 

Now for small services that expose just a handful of methods (like our chat service), writing the message classes manually is not really a problem. But the moment you start having 20 or 30 methods – and need to implement up to 60 message classes manually – this approach quickly becomes unmanageable and prone to errors. So I simply had to stop before xmas and work on our service designer. That way we can generate the boilerplate code in seconds rather than days and weeks.

While I don't have time to evolve this software beyond a simple service designer (well, I kinda did already), I have no problem seeing this system as the beginning of a wonderful, universal service authoring system; one that includes coding, libraries and the full scope of the QTX runtime-library.

In fact, most of the needed parts are in the codebase already, but not everything has been activated. I don’t have time to build both a native development system AND the development system for the desktop.

nodebuilder4

NodeBuilder already has a fully functional form designer and code editor, but it is dormant for now due to time restrictions. Quartex Media Desktop comes first.

But right now, we have bigger fish to fry.

Quartex Media Desktop

We have made tremendous progress on our universal desktop environment, to the point where the baseline services are very close to completion. A month should be enough to finish this unless something unforeseen comes up.

desktop

Quartex Media Desktop provides an ecosystem for advanced web applications

You have to factor in that this project has only had weekends and the odd after-work hour allocated to it. So even though we have been developing this for 12 months, the actual number of days is roughly half of that.

So all things considered, I think we have produced a massive amount of code in a very short time. Even simple 2D games usually take 2 years of daily development, and that includes a team of at least 5 people! I'm a single developer working in my spare time.

So what exactly is left?

The last thing we did before xmas was upon us was to throw out the last remnants of Smart Mobile Studio code. The back-end services are now completely implemented in our own QTX runtime-library, which has been written from scratch. There is not a line of code from Smart Mobile Studio in QTX, which means we no longer have to care what that system does or where it goes.

To sum up:

  • Push all file handling code out of the core
  • Implement file-handling as its own service

Those two steps might seem simple enough, but you have to remember that the older code was based on the Linux path system, and was read-only.

So when pushing that code out of the core, we also have to add all the functionality that was never implemented in our prototype.

nodebuilder2

Each class actually represents a separate “mini” program, and there are still many more methods to go before we can put this service into production.

Since JavaScript does not support threads, each method needs to be implemented as a separate program. So when a method is called, the file/task manager literally spawns a new process just for that task, and the result is swiftly returned to the caller in an async manner.

So what is ultimately simple becomes more elaborate if you want to do it right. This is the price we pay for universality and a cluster-enabled service-stack.
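
As a rough sketch of the pattern (not the actual implementation), spawning a worker per method could look something like the snippet below. The NodeJSChildProcessAPI() accessor and the worker script name are assumptions made for illustration, in the same style as the NodeJsFsAPI() and NodeJSClusterAPI() accessors used elsewhere in the service code; the real QTX RTL may expose this differently.

// A rough sketch only: NodeJSChildProcessAPI() and 'qtx_heavy_task.js'
// are assumptions, used to illustrate the spawn-per-method pattern
procedure TQTXTaskService.HandleSomeHeavyMethod(Socket: TNJWebSocketSocket; Request: TQTXBaseMessage);
begin
  // Spawn a dedicated node.js process for this one task
  var lChild := NodeJSChildProcessAPI().fork('qtx_heavy_task.js');

  // Hand the serialized request over to the child
  lChild.send( Request.Serialize() );

  // The child does the blocking work and posts its result back,
  // which we forward to the caller asynchronously
  lChild.on('message', procedure (Data: variant)
  begin
    try
      Socket.Send(Data);
    except
      on e: exception do
        WriteToLog(e.message);
    end;
  end);
end;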

This is also why I have put the service development on pause until we have finished the NodeBuilder tooling. I did this because I know from experience that the moment the baseline is ready, both myself and the users of the system are going to go "oh, we need this, and that, and those". Being able to quickly design and auto-generate all the boilerplate code will save us months of work. So I would rather spend a couple of weeks on NodeBuilder than waste months manually writing all that boilerplate code down the line.

What about the QTX runtime-library?

Writing an RTL from scratch was not something I could have anticipated before we started this project. But thankfully the worst part of this job is already finished.

The RTL is divided into three parts:

  • Non-visual code. Classes and methods that make QTX an RTL
  • Visual code. Custom Controls + standard controls (buttons, lists etc)
  • Visual designer

As you can see, the non-visual aspect of the system is finished and working beautifully. It's a lot faster than the code I wrote for Smart Mobile Studio (roughly twice as fast on average). I also implemented a full visual designer, both as a Delphi visual component and as a QTX visual component.

nodebuilder3

Quartex Media Desktop makes running on several machines [cluster] easy and seamless

So fundamental visual classes like TCustomControl are already there. What I haven't had time to finish are the standard controls, like TButton, TListBox, TEdit and those types of visual components. Those will be added after the release of QTX, at which point we throw out the absolute last remnants of Smart Mobile Studio from the client (HTML5 part) software too.

Why is the QTX Runtime-Library important again?

When the desktop is out the door, the true work begins! The desktop has several roles to play, but the most important aspect of the desktop is to provide an ecosystem capable of hosting web-based applications, offering features and methods traditionally only found in Windows, Linux or OS X. It truly is a complete cloud system that can scale from a single affordable SBC (single board computer) to a high-end cluster of powerful servers.

Clustering and writing distributed applications have always been difficult, but Quartex Media Desktop makes it simple. It is no more difficult for a user to work on a clustered system than it is to work on a normal, single OS. The difficult part has already been taken care of, and as long as people follow the rules, there will be no issues beyond ordinary maintenance.

And the first commercial application to come out of Quartex Components is Cloud Forge, which is the development system for the platform. It has the same role as Visual Studio on Windows, or Xcode on Apple's OS X.

78498221_438784840394351_7041317054627971072_n

The Quartex Media Desktop Cluster cube. A $400 super computer

I have prepared 3 compilers for the system already. First there is C/C++, courtesy of Clang, so C developers will be able to jump in and get productive immediately. The second compiler is freepascal, or more precisely pas2js, which allows you to compile ordinary freepascal code (which is highly Delphi compatible) to both JavaScript and WebAssembly.

And last but not least, there is my fork of DWScript, which is the same compiler that Smart Mobile Studio uses. Except that my fork is based on the absolute latest version, and I have modified it heavily to better match special features in QTX. So right out of the door CloudForge will have C/C++, two Object Pascal compilers, and vanilla JavaScript and TypeScript. TypeScript also has its own WebAssembly compiler, so doing hard-core development directly in a browser or HTML5 viewport is where we are headed.

Once the IDE is finished I can finally, finally continue on the LDEF bytecode runtime, which will be used in my BlitzBasic port and ultimately replace clang, freepascal and DWScript. As a bonus it will emit native code for a variety of systems, including x86, ARM, 68k [including 68080] and PPC.

This might sound incredibly ambitious, if not impossible. But what I'm ultimately doing here is moving existing code that I already have into a new paradigm.

The beauty of object pascal is the sheer size and volume of available components and code. Some refactoring must be done due to the async nature of JS, but when needed we fall back on WebAssembly via Freepascal (WASM executes linearly, just like ordinary native code does).

A brand new social platform

During December, Facebook royally pissed me off. I cannot underline enough how much I loathe A.I. censorship, and the mistakes that A.I. makes – where you are utterly powerless to complain or be heard by a human being. In my case I posted a gif from their own mobile application, of a female body builder doing push-ups while doing hand-stands. In other words, a completely harmless gif with strength as the punchline. The A.I. was not able to distinguish between a leotard and bare skin, and just like that I was muted for over a week. No human being would make such a ruling. As an admin of a fairly large set of groups, I also see many cases where bans are the result of disgruntled members acting out of revenge, reporting technical posts about coding as porn or offensive. Again, you are helpless, because there is nobody you can talk to about resolving the issue. And this time I had enough.

It was always planned that we would launch our own social media platform, an alternative to Facebook aimed at adult geeks rather than kids (Facebook operates with an age limit of 12 years). So instead of waiting, I rushed out and set up a brand new social network; one where the banal restrictions Facebook has conditioned us with do not apply.

Just to underline, this is not some simple and small web forum. This is more or less a carbon copy of Facebook the way it used to be 8-9 years ago. So instead of having a single group on Facebook, we can now have as many groups as we like, on a platform that looks more or less identical to Facebook – but under our control and human rules.

AD1

Amigadisrupt.com is a brand new social media platform for geeks

You can visit the site right now at https://www.amigadisrupt.com. Obviously the content on the platform right now is dominated by retro computing – but groups like Delphi Developer and FPC Developer have already been set up and are in use. But if you are expecting thousands of active users, that will take time. We are now closing in on 250 active users, which is pretty good for such a short period of time. I don't want a platform anywhere near as big as FB. The goal is to get 10k users and have a stable community of coders, retro geeks, builders and creative individuals.

AD (Amiga Disrupt) will be a standard application that comes with Quartex Media Desktop. This is the beauty of web technology, in that it can unify different resources under one roof. And we will have our cake and eat it come hell or high water.

Disclaimer: Amiga Disrupt has a lower age limit of 18 years. This is a platform meant for adults, which means there will be profanity, jokes that would get you banned on Facebook, and content that is not meant for kids. This is hacker-land, and political correctness is considered toilet paper. So if you need the social toffery that FB and Twitter deal with, you will be kicked by one of the admins.

After you sign up your feed will be completely empty. Here is how to get it started. And feel free to add me to your friends-list!

Quartex “Cloud Ripper” hardware

November 10, 2019 Leave a comment

For close to a year now I have been busy on a very exciting project, namely my own cloud system. While I have written about this project quite a bit these past months, mostly focusing on the software aspect, not much has been said about the hardware.

74238389_10156646805205906_1728576808808349696_o

Quartex “Cloud Ripper” running neatly on my home-office desk

So let’s have a look at Cloud Ripper, the official hardware setup for Quartex Media Desktop.

Tiny footprint, maximum power

Despite its complexity, the Quartex Media Desktop architecture is surprisingly lightweight. The services that make up the baseline system (read: essential services) barely consume 40 megabytes of RAM per instance (!). And while there is a lot of activity going on between these services, most of that activity is message-dispatching. Sending messages costs practically nothing in CPU and network terms. This will naturally change the moment you run your cloud as a public service, or set up the system in an office environment for a team. The more users, the more signals are shipped between the processes – but with the exception of reading and writing large files, messages are delivered practically instantaneously and hardly use CPU time.

CloudRipper

Quartex Media Desktop is based on a clustered micro-service architecture

One of the reasons I compile my code to JavaScript (Quartex Media Desktop is written from the ground up in Object Pascal, which is compiled to JavaScript) has to do with the speed and universality of node.js services. As you might know, Node.js is powered by the Google V8 runtime engine, which means the code is first converted to bytecode and then compiled into highly optimized machine code. When coded right, such JavaScript-based services execute just as fast as those implemented in a native language. There simply are no perks to be gained from using a native language for this type of work. There are, however, plenty of perks from using Node.js as a service-host:

  • Node.js delivers the exact same behavior no matter what hardware or operating-system you are booting up from. In our case we use a minimal Linux setup with just enough infrastructure to run our services. But you can use any OS that supports Node.js. I actually have it installed on my Android based Smart-TV (!)
  • We can literally copy our services between different machines and operating systems without recompiling a line of code. So we don’t need to maintain several versions of the same software for different systems.
  • We can generate scripts “on the fly”, physically ship the code over the network, and execute it on any of the machines in our cluster. While possible to do with native code, it’s not very practical and would raise some major security concerns.
  • Node.js supports WebAssembly, so you can use the Elements Compiler from RemObjects to write service modules that execute blazingly fast yet remain platform- and chipset-independent.

The Cloud-Ripper cube

The principal design goal when I started the project was that it should be a distributed system. This means that instead of having one large service that does everything (read: a typical "native" monolithic design), we operate with a microservice cluster design: services that run on separate SBCs (single board computers). The idea is to spread the payload over multiple micro-computers that combined become more than the sum of their parts.

IMG_4644_Product_1024x1024@2x

Cloud Ripper – Based on the Pico 5H case and fitted with 5 x ODroid XU4 SBC’s

So instead of buying a single, dedicated x86 PC to host Quartex Media Desktop, you can buy cheap, off-the-shelf, easily available single-board computers and daisy-chain them together. So instead of spending $800 (just to pin a number) on x86 hardware, you can pick up $400 worth of cheap ARM boards and get better network throughput and identical processing power (*). In fact, since Node.js is universal you can mix and match between x86, ARM, MIPS and PPC as you see fit. Got an older PPC Mac-Mini collecting dust? Install Linux on it and get a few extra years out of these old gems.

(*) A single XU4 is hopelessly underpowered compared to an Intel i5 or i7 based PC. But in a cluster design there are more factors than just raw computational power. Each board has 8 CPU cores, bringing the total number of cores to 40. You also get 5 ARM Mali-T628 MP6 GPUs running at 533MHz. Only one of these will be used to render the HTML5 display, leaving 4 GPUs available for video processing, machine learning or compute tasks. Obviously these GPUs won’t hold a candle to even a mid-range graphics card, but the fact that we can use these chips for audio, video and computation tasks makes the system incredibly versatile.

Another design goal was to implement a UDP-based Zero-Configuration mechanism. This means that the services will find and register with the core (read: master service) automatically, provided the machines are all connected to the same router or switch.
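
The task-service source earlier in this archive shows how that plays out in practice. Boiled down, the registration loop looks roughly like this (the constants and fields are taken from that listing; the surrounding setup is omitted):

// Ping the ZConfig service on an interval until our endpoint is registered
FRegHandle := TQTXDispatch.SetInterval( procedure ()
begin
  FZConfig.RegisterService(nil, CNT_ZCONFIG_SERVICE_NAME, SERVICE_ID_TASKMANAGER, lEndpoint,
  procedure (TagValue: variant; Error: Exception)
  begin
    if Error = nil then
    begin
      WriteToLog("Service registered");
      TQTXDispatch.ClearInterval(FRegHandle);
    end;
  end);
end, 1000);

Because the announcement goes out over UDP, services hosted on other machines can locate the core without knowing its IP up front.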

IMG_4650_Product_1024x1024@2x

Put together your own supercomputer for less than $500

The first "official" hardware setup is a cluster based on 5 cheap ARM boards, namely the ODroid XU4. The entire setup fits inside a Pico Cube, which is a special case designed to house this particular model of single-board computer. Pico offers several different designs, ranging from 3 boards to a 20-board super-cluster. You are not limited to ODroid XU4 boards if you prefer something else. I picked the XU4 boards because they represent the lowest possible specs you can run the Quartex Media Desktop on. While the services themselves require very little, the master board (the board that runs the QTXCore.js service) is also in charge of rendering the HTML5 display. And having tested a plethora of boards, the ODroid XU4 was the only model that could render the desktop properly (at that low a price range).

Note: If you are thinking about using a Raspberry PI 3B (or older) as the master SBC, you can pretty much forget it. The media desktop is a piece of very complex HTML5, and anything below an ODroid XU4 will only give you a terrible experience (!). You can use smaller boards as slaves, meaning that they can host one of the services, but the master should preferably be an ODroid XU4 or better. The ODroid N2 [with 4Gb Ram] is a much better candidate than a Raspberry PI v4. A Jetson Nano is an even better option due to its extremely powerful GPU.

Booting into the desktop

One of the things that confuses people when they read about the desktop project is how it is possible to boot into the desktop itself and use Quartex Media Desktop as a ChromeOS alternative.

How can a “cloud platform” be used as a desktop alternative? Don’t you need access to the internet at all times? If it’s a server based system, how then can we boot into it? Don’t we need a second PC with a browser to show the desktop?

73475069_10156646805615906_2668445017588105216_o

Accessing the desktop like a “web-page” from a normal Linux setup

To make a long story short: the "master" in our cluster architecture (read: the single-board computer defined as the boss) is set up to boot into a Chrome browser display under "kiosk mode". When you start Chrome in kiosk mode, this removes all traces of the ordinary browser experience. There will be no toolbars, no URL field, no keyboard shortcuts, no right-click popup menus etc. It simply starts in full-screen, and whatever HTML5 you load has complete control over the display.

What I have done is to set up a minimal Linux boot sequence. It contains just enough Linux to run Chrome. So it has all the drivers etc. for the device, but instead of starting the ordinary Linux desktop (X or Wayland), we start Chrome in kiosk mode.

74602781_10156646805300906_6294526665393438720_o

Booting into the same desktop through Chrome in Kiosk Mode. In this mode, no Linux desktop is required. The Linux boot sequence is altered to jump straight into Chrome

Chrome is set to load from 127.0.0.1 (a special address that always means "this machine"), which is where our QTXCore.js service resides, with its own HTTP/S and WebSocket servers. The client (HTML5 part) is loaded in under a second from the core — and the experience is more or less identical to starting your ChromeBook or NAS box. Most modern NAS (network attached storage) devices are much more than a file-server today. NAS boxes like those from Asustor Inc have HDMI out, ship with a remote control, and are designed to act as a media center. So you connect the NAS directly to your TV, and can watch movies and listen to music without any manual conversion etc.

In short, you can set up Quartex Media Desktop to do the exact same thing ChromeOS does, booting straight into the web-based desktop environment — the same desktop environment that is available over the network. So you are not limited to visiting your Cloud-Ripper machine via a browser from another computer; nor are you limited to using a dedicated machine. You can set up the system as you see fit.

Why should I assemble a Cloud-Ripper?

Getting a Cloud-Ripper is not forced on anyone. You can put together whatever spare hardware you have (or just run it locally under Windows). Since the services are extremely lightweight, any x86 PC will do. If you invest in an ODroid N2 board ($80 range) then you can install all the services on that if you like. So if you have no interest in clustering or building your own supercomputer, then any PC, laptop or IoT single-board computer(s) will do, provided it yields equal or greater power than the XU4 (!)

What you will experience with a dedicated cluster, regardless of putting the boards in a nice cube, is that you get excellent performance for very little money. It is quite amazing what $200 can buy you in 2019. And when you daisy chain 5 ODroid XU4 boards together on a switch, those 5 cheap boards will deliver the same serving power as an x86 setup costing twice as much.

Jetson-Nano_3QTR-Front_Left_trimmed

The NVidia Jetson Nano SBC, one of the fastest boards available at under $100

Pico is offering 3 different packages. The most expensive option is the pre-assembled cube. This is for some reason priced at $750, which is completely absurd. If you can operate a screwdriver, then you can assemble the cube yourself in less than an hour. So the starter-kit case, which costs $259, is more than enough.

Next, you can buy the XU4 boards directly from Hardkernel for $40 apiece, which will set you back $200. If you order the Pico 5H case as a kit, that brings the sub-total up to $459. But that price-tag includes everything you need except SD cards. So the kit contains the power-supply, the electrical wiring, a fast gigabit Ethernet switch [built into the cube], active cooling, network cables and power cables. You don't need more than 8GB SD cards, which cost practically nothing these days.

Note: The Quartex Media Desktop “file-service” should have a dedicated disk. I bought a 256Gb SSD disk with a USB 3.0 interface, but you can just use a vanilla USB stick to store user-account data + user files.

As a bonus, such a setup is easy to recycle should you want to do something else later. Perhaps you want to learn more about Kubernetes? What about a docker-swarm? A freepascal build-server perhaps? Why not install FreeNas, Plex, and a good backup solution? You can set this up as you can afford. If 5 x ODroid XU4 is too much, then get 3 of them instead + the Pico 3H case.

So should Quartex Media Desktop not be for you, or you want to do something else entirely — then having 5 ODroid XU4 boards around the house is not a bad thing.

Oh, and if you want some serious firepower, then order the Pico 5H kit for the NVidia Jetson Nano boards. Graphically those boards are beyond any other SoC on the market (in its price range). But as a consequence the Jetson Nano starts at $99, so for a full kit you will end up with $500 for the boards alone. But man, those are the proverbial Ferrari of IoT.

Quartex Media Desktop, new compiler and general progress

September 11, 2019 3 comments

It's been a few weeks since my last update on the project. The reason I don't blog that often about Quartex Media Desktop (QTXMD) is that the official user-group has grown to 2000+ members, so it's easier for me to post developer updates directly to that audience rather than writing articles about it.

desktop_01

Quartex Media Desktop ~ a complete environment that runs on every device

If you haven’t bothered digging into the project, let me try to sum it up for you quickly.

Quick recap on Quartex Media Desktop

To understand what makes this project special, first consider the relationship between Microsoft Windows and a desktop program. The operating system, be it Windows, Linux or OSX – provides an infrastructure that makes complex applications possible. The operating-system offers functions and services that programs can rely on.

The most obvious being:

  • A filesystem and the ability to save and load data
  • A windowing toolkit so programs can be displayed and have a UI
  • A message system so programs can communicate with the OS
  • A service stack that takes care of background tasks
  • Authorization and identity management (security)

I have just described what the Quartex Media Desktop is all about. The goal is simple:

to provide for JavaScript what Windows and OS X provide for ordinary programs.

Just stop and think about this. Every "web application" you have ever seen has lacked these fundamental features. Sure, there are libraries that give you a windowing environment for JavaScript, like Embarcadero Sencha; but I'm talking about something a bit more elaborate. Creating windows and buttons is easy, but what about ownership? A runtime environment has to keep track of the resources a program allocates, and make sure that security applies at every step.

Target audience and purpose

Take a second and think about how many services you use that have a web interface. In your house you probably have a router, and all routers can be administered via the browser. Sadly, most routers operate with a crude design and that leaves much to be desired.

router

Router interfaces for web are typically very limited and plain looking. Imagine what NetGear could do with Quartex Media Desktop instead

If you like to watch movies you probably have a Plex or Kodi system running somewhere in your house; perhaps you access that directly via your TV – or via a modern media system like PlayStation 4 or Xbox One. Both Plex and Kodi have web-based interfaces.

Netflix is now omnipresent and has practically become an institution in its own right. Netflix is often installed as an app – but the app is just a thin wrapper around a web-interface. That way they don't have to code apps for every possible device and OS out there.

If you commute via train in Scandinavia, chances are you buy tickets on a kiosk booth. Most of these booths run embedded software and the interface is again web based. That way they can update the whole interface without manually installing new software on each device.

plex-desktop-movies-1024x659

Plex is a much loved system. It is based on a mix of web and native technologies

These are just examples of web based interfaces you might know and use; devices that leverage web technology. As a developer, wouldn’t it be cool if there was a system that could be forked, adapted and provide advanced functionality out of the box?

Just imagine a cheap Jensen router with a Quartex Media Desktop interface! It could provide a proper UI with applications that run in a windowing environment. They could disable ordinary desktop functionality and run their single application in kiosk mode, taking full advantage of the underlying functionality without loss of security.

And the same is true for you. If you have a great idea for a web based application, you can fork the system, adjust it to suit your needs – and deploy a cutting edge cloud system in days rather than months!

New compiler?

Up until recently I used Smart Mobile Studio. But since I have left that company, the matter became somewhat pressing. I mean, QTXMD is an open-source system and can't really rely on third-party intellectual property. Eventually I fired up Delphi, forked the latest DWScript, and used that to roll a new command-line compiler.

desktop_02

Web technology has reached a level of performance that rivals native applications. You can pretty much retire Photoshop in favour of web based applications these days

But with a new compiler I also need a new RTL. Thankfully I have been coding away on the new RTL for over a year, but there is still a lot of work to do. I essentially have to implement the same functionality from scratch.

There will be more info on the new compiler / codegen when it's production ready.

Progress

If I was to list all the work I have done since my last post, this article would be a small book. But to sum up the good stuff:

  • Authentication has been moved into it’s own service
  • The core (the main server) now delegates login messages to said service
  • We no longer rely on the Smart Pascal filesystem drivers, but use the raw node.js functions instead  (faster)
  • The desktop now uses the Smart Theme engine. This means we can style the desktop however we like. The OS4 theme that was hardcoded will be moved into its own proper theme-file, meaning the user can select between OS4, iOS, Android and Ubuntu styling. Creating your own theme-files is also possible. The Smart theme-engine will be replaced by a more elaborate system in QTX later
  • Ragnarok (the message api) messages now support routing. If a routing structure is provided, the core will relay the message to the process in question (providing security allows said routing for the user)
  • The desktop now checks for .info files when listing a directory. If a file is accompanied by an .info file, the icon is extracted and shown for that file
  • Most of the service layer now relies on the QTX RTL files. We still have some dependencies on the Smart Pascal RTL, but we are making good progress on QTX. Eventually  the whole system will have no dependencies outside QTX – and can thus be compiled without any financial obligations.
  • QTX has its own node.js classes, including server and client base-classes
  • Http(s) client and server classes are added to QTX
  • Websocket and WebSocket-Secure are added to QTX
  • TQTXHybridServer unifies http and websocket, meaning that this server type can handle both ordinary http requests and websocket connections on the same network socket. This is highly efficient for websocket-based services (see the sketch after this list)
  • UDP classes for node.js are implemented, both client and server
  • Zero-Config classes are now added. This is used by the core for service discovery. Meaning that the child services hosted on another machine will automatically locate the core without knowing the IP. This is very important for machine clustering (optional, you can define a clear IP in the core preferences file)
  • Fixed a bug where the scrollbars would corrupt widget states
  • Added API functions for setting the scrollbars from hosted applications (so applications can tell the desktop that it needs scrollbar, and set the values)
  • .. and much, much more
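
As promised in the hybrid-server item above, here is a minimal sketch of what that class is meant to look like in use. The property and event names (Port, OnHttpRequest, OnWebSocketMessage, Start) are assumptions, since the QTX classes are not published yet; the point is simply that one listening socket serves both protocols.

// Hypothetical usage of TQTXHybridServer: one network socket, and the
// server decides per connection whether it is plain http or websocket.
// All member names below are assumed for the sake of illustration.
var lServer := TQTXHybridServer.Create();
lServer.Port := 8090;

// Ordinary http requests, e.g. serving the HTML5 client and its assets
lServer.OnHttpRequest := procedure (Sender: TObject; Request, Response: variant)
begin
  Response.Send('<html><body>Quartex Media Desktop</body></html>');
end;

// Websocket traffic, where the Ragnarok JSON envelopes arrive
lServer.OnWebSocketMessage := procedure (Sender: TObject; Socket: TNJWebSocketSocket; Data: string)
begin
  Socket.Send(Data); // echo, just to show the flow
end;

lServer.Start();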

I will keep you all posted about the progress — the core (the fundamental system) is set for release in December – so time is of the essence! I'm allocating more or less all my free time to this, and it will be ready to rock around xmas.

When the core is out, I can focus solely on the applications. Everything from Notepad to Calculator needs to be there, and more importantly — the developer tools. The CloudForge IDE for developers is set for 2020. With that in place you can write applications for iOS, Android, Windows, OS X and Linux directly from Quartex Media Desktop. Nothing to install, you just need a modern browser and a QTX account.

The system is brilliant for small teams and companies. They can set up their own instance, communicate directly via the server (text chat and video chat are scheduled) and work on their products in concert.

New job, new office, new adventures

May 12, 2019 5 comments

It's been roughly 4 weeks since I posted a status report on Amibian.js. I normally keep people up to date on Facebook (the "Amiga Disrupt" and "Delphi Developer" groups). It's been a very hectic month, so I fully understand that people are asking. So let's look at where the project is at and where we are on the timeline.

For those that might not know, I decided to leave Embarcadero a couple of months ago. I will be working out May before I move on. I wanted to write about that myself in a clean fashion, but sadly the news broke on Facebook prematurely.

Long story short, I have been very fortunate to work at Embarcadero. I am not leaving because there is anything wrong or something like that. I was hired as SC for the EMEA regions, which basically made me the support and presenter for most of Europe, parts of Asia and the Middle East. It's been a great adventure, but ultimately I had to admit that my passion is coding and community work. Sales is a very important part of any company, but it's not really my cup of tea; my passion has always been research and development.

So, come the first of June, I start in a new position at RemObjects; a company that has deep roots with Delphi and C++Builder users – and one that continues to produce a wealth of high-quality, high-performance frameworks for Delphi and C++Builder. RemObjects also has a strong focus on modern languages, and has a strong portfolio of new and exciting compilers and languages to offer. The Oxygene compiler should be no stranger to Delphi developers: a powerful object-pascal dialect that can target a variety of platforms and chipsets.

Since compiler technology and run-time systems have been my main focus for well over a decade now, I feel RemObjects is a better match.

Quartex Components

Quartex Components has been an officially registered Norwegian company for a while now, so perhaps not news. What is news is that it’s now directly connected with the development of the Quartex Media Desktop (codename “Amibian.js”). While Amibian.js is an open source endeavour, there will be both free and commercial products running on top of that platform. I have written at length about CloudForge in the past, so I won’t rehash that again. But 2020 will see a paradigm shift in how teams and companies approach software development.

quartex

Company logo professionally milled and on its way to my new office

I will also, once there is more time, continue to sell and support software license components.

Quartex Media Desktop

The “Amibian.js” project is moving along nicely. The deadline is Q4 2019, but I’m hoping to wrap up the core functionality before that. So we are on track and kicking ass 🙂

amibian_01

More and more elaborate functionality is being implemented for the desktop

Here is an overview of work done this month:

  • TSystemService application type has been created (node.js)
    • TApplication now holds IPC functions (inter-process communication)
    • Running child processes + sending messages is now simplicity itself (a raw node.js sketch of the plumbing follows this list)
    • Database drivers are 90% done. Delete() and DeleteTable() functionality needs to be implemented in a uniform way
  • Authentication is now a separate service
    • Service database layer is finished (using SQLite3 driver by default)
    • Authentication protocol has been designed
    • Server protocol and JSON message envelopes are done
    • Presently working on the client interface
  • LDEF bytecode assembler has been improved
    • Faster symbolic lookup
    • Smarter register recognition
    • Early support for stack-frames
    • Fixed bug in parser (comma-list parse)
  • QTX framework has seen a lot of work
    • Large parts of the RTL sub-strata have been implemented
    • UTF16 codec implemented
    • QTX versions of common controls:
      • TQTXButton
      • TQTXLabel
      • TQTXToolbar
        • TQTXToolButton
        • TQTXToolSeparator
        • TQTXToolElement
      • TQTXPanel
      • TQTXCheckBox
      • .. and much, much more
  • Desktop changes
    • Link Maker functionality has been added
    • Handshake process between desktop and child app now runs on a separate timer, ensuring better conformity and a more robust initialization
    • The Quartex Editor control has been optimized
      • All redraw calls are now synchronized
      • Canvas is created on demand, avoids flicker during initial redraw
      • Support for DEL key + behavior
      • Gutter is now rendered to an offscreen bitmap and blitted into the control’s canvas. The gutter is only fully rendered when the cursor forces the view to change
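
To illustrate the child-process part in raw node.js terms, here is a small sketch. This is not the TApplication IPC API itself (my assumption is that the QTX wrappers sit on top of node’s built-in IPC channel), and the message fields are made up rather than the actual Ragnarok envelope.

```javascript
// parent.js - fork a child service and exchange JSON messages over
// node's built-in IPC channel (plain node.js, not the QTX wrappers).
const { fork } = require('child_process');

const child = fork('./worker.js');

child.on('message', (msg) => {
  console.log('reply from child:', msg);
});

// simple JSON envelope; the field names are illustrative only
child.send({ cmd: 'ping', payload: { time: Date.now() } });
```

```javascript
// worker.js - the child end of the same IPC channel
process.on('message', (msg) => {
  if (msg.cmd === 'ping') {
    process.send({ cmd: 'pong', payload: msg.payload });
  }
});
```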

I will continue to keep everyone up to date about the project. As you can understand, it’s a bit hectic right now so please be patient – it is turning into an EPIC environment!

Repository updates

February 25, 2019 2 comments

As most know by now, I was running a successful campaign on Patreon until recently. I know that some are happy with Patreon, but hopefully my experience will be a wakeup call about the total lack of rights you as a creator have – should Patreon decide they don’t understand what you are doing (which I can only presume was the case, because I was never given a reason at all). You can read more about my experience with Patreon by clicking here.

Setting up repositories

Having to manually build a package for each tier that I have backers for would be a disaster. It was time-consuming and repetitive enough to create packages on Patreon, and I don’t have time to reverse engineer Patreon either. Which I might do in the future and release as open-source just to give them a kick in the groin back.

To make it easier for my backers to get the code they want, I have isolated each project and sub-project in separate repositories on BitBucket. This covers Delphi, Smart Pascal, LDEF and everything else.

cloud_ripper

The CloudRipper architecture is coming along nicely. Here running on ODroid XU4

I’m just going to continue with the Tiers I originally made on Patreon, and use my blog as the news-center for everything. Since I tend to blog about things from a personal point of view, be it for Delphi, JavaScript or Smart Pascal — I doubt people will notice the difference.

So far the following repositories have been setup:

  • Amibian.js Server (Quartex Web OS)
  • Amibian.js Client
  • HexLicense
  • TextCraft (source-code parser for Delphi and Smart Pascal)
  • UAE.js (a fork of SAE, the JS implementation of UAE)

I need to clean up the server repository a bit, because right now it contains both the server-code and various sub projects. The LDEF assembler program for example, is also under that repository — and it belongs in its own repository as a unique sub-project.

The following repositories will be setup shortly:

  • Tweening library for Delphi and Smart Pascal
  • PixelRage graphics library
  • ByteRage buffer library
  • LDEF (containing both Delphi and Smart Pascal code)
  • LDEF Assembler

It’s been extremely busy days lately so I need to do some thinking about how we can best organize things. But rest assured that everyone that backs the project, or a particular tier, will get access to what they support.

Support and backing

I have been looking at various ways to do this, but since most backers have just said they want PayPal, I decided to go for that. So donations can be done directly via PayPal. One of the newer features in PayPal is recurring payments, so setting up a backer-plan should be easy enough. I am notified whenever someone gives a donation, so it’s pretty easy to follow up on.

Updates used to be monthly, but with the changes they will be ad-hoc, meaning that I will commit directly. I do have local backups and a local git server, so for parts of the project the commits will be issued at the end of each month.

While all support is awesome, here are the tiers I used on Patreon:

  • $5 – “high-five”, I’m not a coder but I support the cause
  • $10 – Tweening animation library
  • $25 – License management and serial minting components
  • $35 – Rage libraries: 2 libraries for fast graphics and memory management
  • $45 – LDef assembler, virtual machine and debugger
  • $50 – Amibian.js (pre compiled) and Ragnarok client / server library
  • $100 – Amibian.js binaries, source and setup
  • $100+ – All the above and pre-made disk images for ODroid XU4 and x86 on completion of the Amibian.js project (12 month timeline).

So to back the project like before, all you do is:

  1. Register with Bitbucket (free user account)
  2. Setup donation and inform me of your Bitbucket user-name
  3. I add you on BitBucket so you are granted access rights

Easy. Fast and reliable.

The QTX RTL

Those that have been following the Amibian.js project might have noticed that a fair number of QTX units have appeared in the code. QTX is a run-time library compatible with Smart Mobile Studio and DWScript. Eventually the code that makes up Amibian.js will become a whole new RTL. This RTL has nothing to do with Smart Mobile Studio and ships with its own license.

Amigian_display

QTX approaches the DOM in a more efficient way. It’s faster, smaller and more powerful

Backers at $45 or beyond get access to this code automatically. If you use Smart Mobile Studio then this is a must. It introduces a ton of classes that don’t exist in Smart Pascal, and also a much faster and cleaner visual component framework.

If you want to develop visual applications using QTX and DWScript, then that is OK, provided the license is respected (LGPL, non-commercial use).

Well, stay tuned for more info and news!

Quartex: Mali GPU glitches

February 20, 2019 Leave a comment

EDIT: I did further testing after this article was written, and I believe the source of this to be heat. Even with extra fans, running games like Tyrian (asm.js) that are extremely demanding, plus constantly resizing a graphics-intensive window, the temperature reached 71 degrees C very quickly. And this was with two cabinet fans helping the built-in fan to cool the device. It is thus not unthinkable that when running solo (no extra fans) the kernel shut the device down to avoid cooking the chipset. Which also explains why the device won’t boot properly afterwards (the device is still hot).

Glitches

Something really strange is happening on Chrome and Firefox for ARM. JavaScript is not supposed to be able to take down a system, and in this case there is no such attempt either — yet for some reason I have managed to take down the ODroid XU4 with both Chrome and Firefox lately.

ODroid XU4

I guess I should lead with the fact that I’m not able to replicate this on x86. One of the things I really love about the ODroid XU4 is that it’s affordable, powerful and probably the only SBC I have used that runs stable on the Mali GPU. As you probably know I tested at least 10 different SBCs back in 2018, and whenever there was a Mali GPU involved, the product was either haunted by instabilities or lacked drivers altogether.

amibian

Since the codebase for Chrome (and I presume Firefox) is ultimately the same between platforms, it leaves a question-mark about the ODroid. It is by far the most stable SBC I have tested so far (except for the PI, which is sadly underpowered for this task), but stable doesn’t mean flawless. And to be honest, Amibian.js is pushing web tech to the very limits.

Not Mali again

The reason I suspect the Mali to be the culprit behind all this is that the “bug”, if we can call it that, happens exclusively during resize. So if there is a lot going on inside a desktop-window, you can sometimes provoke the ODroid to cold-crash and reboot. You actually have to power the board down and switch it back on for it to boot properly.

50431451_10155954273110906_8776790185049325568_n

Cloudripper ~ 5x ODroid XU4 [40 cores] in a PICO 5h cube

The resizing and moving of windows uses CSS transforms, which in modern browsers make use of the GPU. Chrome talks directly with OpenGL (or GLES), so the operations are proxied through that. And again, since OpenGL is pretty rock solid elsewhere, we are only left with one common denominator: the Mali GPU.
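
To make that concrete, this is the general pattern (not the actual Amibian.js source): window movement only touches a CSS transform, and the browser hands the compositing of that transform to the GPU driver – which is exactly where a flaky Mali driver would hurt. The element id below is hypothetical.

```javascript
// General pattern, not the actual Amibian.js code: moving a window by
// updating a CSS transform. The browser composites transforms on the
// GPU, so layout and repaint stay cheap - but it also puts the GPU
// driver in the hot path for every move/resize.
function moveWindow(el, x, y) {
  // translate3d() typically promotes the element to its own layer
  el.style.transform = 'translate3d(' + x + 'px,' + y + 'px,0)';
}

const win = document.getElementById('demo-window'); // hypothetical element
let pos = 0;

function animate() {
  pos = (pos + 2) % 400;
  moveWindow(win, pos, pos);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```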

The challenge is that there is no way to debug or catch this error, because when it occurs the whole system literally goes down. There is no exception thrown, nor is the browser process terminated (not even a log entry, so it’s a clean-cut) — the system reboots on the spot. Since it fails on reboot when opening X (setting a screen-mode) I again point the finger at the GPU. Somehow a flag or lock survives the cold-reboot and that’s why you have to manually switch it off and on again.

This is the exact problem that made the NanoPI Fire useless. It only shipped with Android embedded drivers. The X drivers could hardly open a display without crashing. Such a waste of a good cpu.

x86 as head

ODroid is perfect for a low-cost Amibian.js experience, but I was unsure if it would handle the payload. Interestingly it handles it just fine, and even with a high-speed action game running plus background tasks we are not even using 50% of the CPU.

RAM is holding up too; memory consumption while running Tyrian plus a few graphics viewers open is at a reasonable 700 MB (of 2 GB in total).

51398321_10155998598505906_8984850199142727680_o

Tyrian jogs along at 45 fps ~ that is not bad for a $45 SBC

Right now this strange error is rare, but if it continues or grows into a problem (Chrome is hardly usable at all, only Firefox) then I have no option but to replace the master SBC in the cluster with something else. The x86 UP board is more than capable, but it would be a shame to break the price range because of that (excuse my language) crap Mali GPU. I honestly don’t understand why board makers insist on using a Mali. Every board that has a Mali is haunted by problems and gets poor reviews.

It will be exciting to check out the DragonBoard, although I fear 1 GB of memory will not be enough for smooth operation. Not without a SATA interface and a good swap-file.

Android and Delphi

One alternative is to switch to Android and use Delphi to code a custom Chromium Embedded webview. I am hoping to avoid the overhead of Android, but Delphi would definitely be a bonus with Android embedded (“Android of things”).

We will see.

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

intro

In a life-preserver no less 😀

But, with any new technology or invention there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” means machine code, like that produced by a C/C++ compiler, Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustment, especially for traditional programmers that haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled (“just in time”, which means the compilation takes place inside the browser) to real machine-code using state-of-the-art compiler techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining this – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that, combined, create a portable operating-system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

smart_ass

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to the other without any changes. So if you have a running Amibian.js system on your x86 PC, and copy all the files to an ARM computer – you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup” please remember what I said about JavaScript: the JavaScript in 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” – into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It delivers 120 fps flawlessly (browser is limited to 60 fps) and makes full use of standard browser technologies (WebGL).

utube

Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • A HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between the traditional, machine code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

smartdesk

Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I’m trying to avoid a bare-metal OS, otherwise I would have written the system using a native programming language like C or Object-Pascal. So I am not using JavaScript because I lack skill in native languages, I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the Amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs that are written for Amibian.js can call on these to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (ODroid XU4, Raspberry PIs and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above comes in.

As a software developer many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, ticket systems for trains and busses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered, I’m confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some ideas of the economic possibilities.

Updated: Please note that we are talking javascript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router – you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update since it’s connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need but on a smaller scale, especially the larger hotels where the lobby has information booths, and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on the size they can need anything from a single to a hundred nodes.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past with the machine that defined multimedia, namely the Commodore Amiga. So the relation is more than just visual; Amibian.js uses the same system architecture – because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all I’m not selling anything. It’s not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done Amibian.js will be free for everyone (LGPL). And I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly, is because I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).

amibian_shell

Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++Builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have classical OOP, it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community, components and libraries written in Delphi or FreePascal – which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (Pascal is in fact a couple of years older than C).
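
A heavily simplified illustration of the VMT idea follows. This is not the code the Smart compiler actually emits; it only shows how a table of method slots gives you virtual dispatch on top of plain JavaScript, with all names invented for the example.

```javascript
// Simplified illustration only - not the actual compiler output.
// A "VMT" is just a table of method slots that a descendant class can
// override, which is what gives you virtual methods on top of JS.
var TObjectVMT = {
  ClassName: 'TObject',
  ToString: function (self) { return '[' + this.ClassName + ']'; }
};

// "inherit" by cloning the parent table and overriding selected slots
var TWindowVMT = Object.create(TObjectVMT);
TWindowVMT.ClassName = 'TWindow';
TWindowVMT.ToString = function (self) { return 'Window: ' + self.caption; };

function CreateWindow(caption) {
  // every instance carries a reference to its class VMT
  return { vmt: TWindowVMT, caption: caption };
}

var w = CreateWindow('About');
console.log(w.vmt.ToString(w)); // virtual call resolved through the VMT
```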

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS. It’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn’t care if you are using ARM, x86, PPC or MIPS as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Normally you would just embed an external website into an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
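
The description above matches the standard browser mechanism for this kind of thing: a MessageChannel whose port is transferred to the hosted application via postMessage. The sketch below uses only those standard browser APIs; the element id, message names and file path are made up and do not reflect the actual Ragnarok protocol.

```javascript
// Desktop side (standard browser APIs, not the actual Amibian.js code):
// create a channel and hand one port to the hosted app's iframe.
const channel = new MessageChannel();
const appFrame = document.getElementById('hosted-app'); // hypothetical id

appFrame.contentWindow.postMessage({ type: 'handshake' }, '*', [channel.port2]);

channel.port1.onmessage = (event) => {
  // JSON envelope from the hosted application; real messages must
  // conform to the Ragnarok protocol - these fields are made up.
  console.log('request from hosted app:', event.data);
};

// Hosted application side: wait for the handshake and keep the port.
window.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'handshake') {
    const port = event.ports[0];
    port.postMessage({ command: 'fs.read', path: 'home:readme.txt' });
  }
});
```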

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

patron_asm2

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world’s first cloud-based development platform. A full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, and “no native code allowed”. This is why the entire back-end and service layer is targeting node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is: Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud, on cheap SBC’s. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

image2

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js as the name implies, borrows architectural elements en-mass from Amiga OS. Quite simply because the way Amiga OS is organized and the way you approach computing on the Amiga is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It’s a system that you could learn how to use fully with just a couple of days exploring; and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM Kernel functions as possible. Naturally I can’t port all of it, because it’s not really relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have setup a dedicated machine with Amibian.js then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence which deals with your account, your preferences and adaptations – that boots when you login to the system.

When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket (see the sketch after this list)
  2. Login is validated by the server
  3. The client starts loading preferences files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script will setup configurations, create symbolic links (assigns), mount external devices (dropbox, google drive, ftp locations and so on)
  6. When finished the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.
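
Steps 1 and 2 look roughly like this in raw JavaScript (standard browser WebSocket API). The message fields are illustrative only – the real client speaks the Ragnarok protocol, which is not reproduced here.

```javascript
// Steps 1-2 sketched with the plain browser WebSocket API.
// The envelope fields are made up; the real protocol is Ragnarok.
const socket = new WebSocket('ws://127.0.0.1:8090');

socket.onopen = () => {
  socket.send(JSON.stringify({
    command: 'login',                               // hypothetical name
    payload: { username: 'quartex', password: 'secret' }
  }));
};

socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.command === 'login-ok') {
    // from here the client would load preferences, run the
    // startup-sequence and launch the ~/WbStartup programs (steps 3-6)
    console.log('session token:', msg.payload.token);
  }
};
```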

As you can see Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, a FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).
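
To give a feel for the driver idea (the actual Amibian.js driver standard is not reproduced here), a storage driver can be pictured as a small class with a uniform surface, regardless of what sits behind it. The class and method names below are hypothetical.

```javascript
// Hypothetical shape of a filesystem driver - illustration only, not the
// real Amibian.js driver standard. The point is the uniform interface:
// the desktop and hosted apps never see what storage sits behind it.
class DropboxDriver {
  constructor(token) { this.token = token; }

  async getFileList(path)     { /* call the Dropbox REST API */ return []; }
  async readFile(path)        { /* return file data */ return null; }
  async writeFile(path, data) { /* upload the data */ }
  async deleteFile(path)      { /* remove the remote file */ }
}

// A local-disk, FTP or homebrew driver would expose the same methods,
// so it can be swapped in without touching the desktop or applications.
```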

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (scripted Amiga Emulator) that has been heavily optimized for web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, that will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are ported to JavaScript; you have Nes, SNes, N64, PSX I & II, Sega Megadrive and even a NEO GEO port. So playing your favorite console games right in the browser is pretty straight forward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up in Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point) but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side, and only the graphics and sound is streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine — you can place them on separate machines and thus the system is able to work faster.

47470965_10155861938320906_4959664457727868928_n

Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning that they don’t have an HDMI port – and they are designed to be logged into and have software set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC for a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture the machine that deals with the desktop clients doesn’t have to do all the grunt work. It will accept tasks from the user and hosted applications, and then delegate the tasks between the 4 other machines.
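
Whether the services use node’s built-in cluster module or their own mechanism is an assumption on my part, but the built-in module is the standard way for a node.js service to “copy itself” across every core on a board:

```javascript
// Standard node.js technique for using all CPU cores on one SBC.
// Whether Amibian.js uses this exact module is an assumption - this
// only illustrates the "copy itself per core" idea.
const cluster = require('cluster');
const os = require('os');
const http = require('http');

if (cluster.isMaster) {
  // fork one worker per core (8 on an ODroid XU4)
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork()); // respawn a crashed worker
} else {
  // every worker runs the same service; incoming sockets are balanced
  http.createServer((req, res) => res.end('handled by pid ' + process.pid))
      .listen(8090);
}
```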

Note: The number of SBC’s is not fixed. Depending on your use you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster shipping at around $250.

But like mentioned, you don’t have to buy this to use Amibian.js. You can install it on a single spare X86 PC you have, or daisy chain a couple of older PC’s on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBC’s in the initial design all have GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes on our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40 core cluster I use has more computing power than northern Europe had in the early 80s; that’s something to think about. And the pricetag is under $300 (!). I don’t know about you, but I always wanted a proper mainframe, a distributed computing platform that you can login to and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant to emulate an Amiga directly in the browser – a more interesting design is to decouple the emulation from the output. In other words, run the emulation at full speed server-side, and just stream the display and sounds back to the Amibian.js display. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi threading) and fully utilize the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE is in place, then people will start using it en-mass and produce applications for it. It’s always a bit of work to reach that point and create critical mass.

tomes

Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet line are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of systems. From a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can’t wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Mirroring groups on the MeWe network

November 18, 2018 2 comments

Following my “Administrator woes on Facebook” post, I have had a look at alternative places to run a forum. I realized that Facebook is getting pretty intrinsic in society around the world, so I know everyone won’t be interested in a new venue. But honestly, MeWe is very simple to use and has a UI experience very close to the Facebook app.

amibian_shell

This picture was flagged as “hateful” on Facebook, which has rendered my account frozen for the next 30 days. While I agree to the strict rules that FB advocates, they really must deploy more human beings if they intend to have success in this endeavour. And that means really investigating what is flagged, reading threads in all languages etc. Because the risk of flagging the wrong guy is just too high. Admins get flagged all the time for kicking out bullies, and the use of reporting tools as a revenge strategy *must* carry a penalty.

MeWe is thankfully not like G+, which (in my personal opinion) was counter-intuitive and downright intrusive. We all remember the G+ auto-upload feature, where some 3 million users had their family photos, vacation photos and .. ehrm, “explicitly personal” photos uploaded without consent.

Well, the MeWe app is very simple, and registration is as easy as it should be. You make a user name, a password, and type in your email; then you verify your email and that’s it!

Besides, my main use for Facebook or MeWe is to run the groups – I spend very little of my time socializing anyways. With the amount of groups and media I push on a daily basis, it’s quite frankly their loss.

mewe

The MeWe group functionality is very good, and almost identical to Facebook

The alternative to MeWe is to setup a proper web forum instead. I have bought 6 domains that are now collecting dust so yes, I will look into that – but the whole purpose of a social platform is that you don’t have to do maintenance beyond daily management – so MeWe saves us some time.

So head over to MeWe and register! Here are the two main groups I manage these days. The main groups are on Facebook, but I have now registered the same groups on MeWe.

MeWe doesn’t cost anything and takes less than 5 minutes to join. Just like G+ and Facebook, MeWe can be installed as an app on your phone (both iOS and Android). So as far as alternatives go, it’s a good alternative. One more app won’t do much harm I imagine.

Note: I will naturally keep my Facebook account for the sake of the groups, but having experienced this 4 times in 9 years, my tolerance of Mr. Suckerberg is quickly reaching its limits. If I have blurted something out I have no problems standing for that and taking the penalty, but posting a picture of software development? In a group dedicated to software development? That takes some impressive mental acrobatics to accept.

Smart Mobile Studio presentation in Oslo

September 28, 2018 Leave a comment

Yesterday evening I traveled to Oslo and held a presentation on Smart Mobile Studio. The response was very positive and I hope that everyone who attended left with some new ideas regarding JavaScript, the direction the world of software is heading – and how Smart Mobile Studio can be of service to Delphi.

Smart Pascal is especially exciting in concert with Rad-Server, where it opens the doors to Node based, platform independent services and sub clustering. With relatively little effort Rad-Server can absorb the wealth that node has to offer through Smart – but on your terms, and under Delphi’s control. The best of both worlds.

You get the stability and structure that makes Delphi so productive, and then infuse that with the flamboyance, flair and async brilliance that JavaScript represents.

More important than technology is the community! It’s been a few years since I took part in the Oslo Delphi Club’s meetups, so it was great to chat with Halvard Vassbotten, Trond Grøntoft, Alf Christoffersen, Torgeir Amundsen and Robin Bakker face to face again. I also had the pleasure of meeting some new Delphi developers.

prespic

Presentation at ABG Sundal Collier’s offices in Oslo

Thankfully the number of attendees was a moderate 14, considering this was my first presentation ever. Last time I visited was when our late Paweł Głowacki presented FMX, and the turnout was in the ballpark of a hundred. So it was an easy-going, laid-back atmosphere throughout the evening.

Conflict of interest?

Some might wonder why a person working for Embarcadero will present Smart Mobile Studio, which some still regard as competition. Smart is not in competition with Delphi and never will be. It is written by Delphi developers for Delphi developers as a means to bridge two worlds. It’s a project of loyalty and passion. We continue because we love what it enables us to do.

The talks on Smart that I am holding now, including the November talk in London, were booked before I started at Embarcadero (so it’s not a case of me promoting Smart in lieu of Embarcadero). I also made it perfectly clear when I accepted the job that my work on Smart will continue in my spare time. And Embarcadero is fine with that. So I am free to spend my after-work hours and weekend time as I see fit.

smart_desktop

The Smart Desktop, codename Amibian.js, is a solid foundation for building large-scale web front-ends. Importing Sencha’s JS API’s can be done via our TypeScript wizard

So, after my presentation in London in November, Smart Mobile Studio presentations (at least hosted by me) can only take place during weekends. Which is fair and the way it should be.

Recording the English version

Since the presentation last evening was in Norwegian, there was little point in recording it. Norway has a healthy share of Delphi developers, but a programming language available internationally must be presented in English.

tech

A couple of months back, before I started working for Embarcadero, I promised to do a video presentation that would be available on Delphi Developer and YouTube. I would very much like to keep that promise. So I will re-do the presentation in English as soon as possible. I would have done it today after work, but buying tech from the US has changed quite dramatically in just a couple of years.

In short: I haven’t received the remaining equipment I ordered for professional video recording and audio podcasting (which is a part of my Patreon offering as well), as such there will be no live video-feed /slash/ webinar – and questions will be limited to either the comment-section on Delphi Developer; or perhaps more appropriate, the Smart Mobile Studio Forums.

I’m hoping to get the HD camera, mic table-arm and various bits-and-bobs I ordered from the US sometime next week. I have no idea why FedEx has become so difficult lately, but the package is apparently at LaGuardia, and I have to send receipts that document that these items are paid for before they ship them abroad (so the package manifest listing me as the customer, my address, phone number and receipt from the seller is somehow not enough). This is a first for me.

Interestingly they also stopped a package from Embarcadero with giveaways for my upcoming Delphi presentation in Sweden – at which point I had to send them a copy of my work contract to prove that I indeed work for an American company.

But a promise is a promise, so come rain or shine it will be done. Worst case scenario we can put Samsung’s claims to the test and hook up a mic + photo lens and see if their commercials have any merit.

Nano PI Fire 3, part two

July 18, 2018 Leave a comment

If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.

Solving the power problem

pi-power

Like mentioned in the previous article, a normal mobile charger (5 volt, 2 amps) is not enough to support the nano-pi. Since I have misplaced my original PI power-supply with 5 volt / 3 amps I decided to cheat. So I plugged the power USB into my PC which will deliver as much juice as the device needs. I don’t have time to wait for a new PSU to arrive so this will have to do.

But for the record (and underlined) a proper PSU with at least 2.5 amps is essential to using this board. I suggest you order the official Raspberry PI 3b power-supply. But if you should find one with 3 amps that would be even better.

Web performance

The question on everyone’s mind (or at least mine) is: how does the Nano-PI fire 3 perform when rendering cutting edge, hardcore HTML5? Is this little device a potential candidate for running “The Smart Desktop” (a.k.a Amibian.js for those of you coming from the retro-computing scene)?

Like I suspected earlier, X (the Linux windowing framework) doesn’t have drivers that deliver hardware acceleration at all.

shot_desktop-1024x819-2-1024x819

Lubuntu is a sexy desktop no doubt there, but it’s overkill for this device

This is quite easy to test: select a rectangle on the Lubuntu desktop and move the mouse-cursor around while holding down the left mouse button. If it lags terribly, that is a clear indicator that no acceleration exists.

And I was right on the money, because there is no acceleration whatsoever for this Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.

But hardware acceleration is not just about the desktop. It’s not some flag you enable that magically affects everything, but rather several APIs at either the kernel level or immediate driver level (modules the kernel loads), each affecting different aspects of a system.

So while the desktop “2d blitting” is clearly cpu driven, other aspects of the system can still be accelerated (although that would be weird and rare. But considering how Asus messed up the Tinkerboard I guess anything goes these days).

Asking Chrome for the hard facts

I fired up Google Chrome (which is the default browser, thank god) and entered the magic URL:

chrome://gpu

This is a built-in page that provides a detailed report of what Chrome learns about the current system, right down to specific GPU features used by OpenGL.

As expected, there was NO acceleration whatsoever. So I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.
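
For reference, the usual way to get an fps number like this in the browser (not necessarily how the particle demo measures it) is simply to count requestAnimationFrame callbacks per second:

```javascript
// Count requestAnimationFrame callbacks per second - a common way to
// estimate fps in the browser (illustration, not the demo's own code).
let frames = 0;
let last = performance.now();

function tick(now) {
  frames++;
  if (now - last >= 1000) {
    console.log('fps:', frames);
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```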

FriendlyCore vs Lubuntu? QT for the win

Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least. No hardware acceleration, impressive CPU results, but still: what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or service host then you don't need a beefy GPU. But here is the twist:

Turns out the makers of the board have a second, QT oriented distro called Friendly-core. And this image has OpenGL ES support and all the acceleration that is missing from Lubuntu.

I was pretty annoyed with how Asus gave users the run-around with the Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. Friendly-elec might want to learn from Asus' mistakes in this area.

Qtwebenginebrowser

QT has a rich history, but it’s being marginalized by node.js and Delphi these days

Alas, the Friendly-core Xenial 4.4 ARM64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). So this is for QT developers who want to use the board as a single-application system, where they write the code on Windows or Linux, compile, and have it all transported to the board with live debugging back to the devtools they use. In other words: not very useful for non-C/C++ QT developers.

Android Lollipop

2000px-Android_robot.svg

I have only used Android on a pad and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the SD card and booted up.

After 20 minutes with a blank screen I gave up.

I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn't even show up, and that should be visible almost immediately, network install or not.

This is really a great shame, because I wanted to test some Delphi Firemonkey applications on it, to see how well it handles the more demanding GPU tasks. And yes, I did try a different SD card to be sure it wasn't a disk error. Same result.

Back to Lubuntu

Having spent considerable time trying to find a "wow" factor for this board, I have to surrender to the fact that it's just not there. This is not a "PI" any more than the Tinkerboard is a PI. And appending "pi" to a product name will never change that.

I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and do live debugging directly on the hardware. But for general DIY hacking, native Android development with Delphi, node.js development with Smart Mobile Studio, or just kicking back with emulators like MAME, UAE or whatever tickles your fancy, it's just too rough around the edges. Which is really a shame!

So at the end of the day I re-installed Lubuntu and figure I just have to wait until Friendly-elec get their act together and issue proper drivers for the Mali GPU. So it’s $35 straight out the window — but I can live with that. It was a risk but at that price it’s not going to break the bank.

The positive thing

The Nano-PI Fire 3 is yet another SBC in a long list of boards that fall short of their potential. Like many others they try to use the word "PI" to channel some of the Raspberry PI enthusiasm their way, but the quality of the actual system is not even close.

In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.

37013365_10155541149775906_3122577366065348608_o

The Nano rendered Amibian.js running some very demanding demos 4 times as fast as the PI 3b; one can only speculate what the board could do with proper drivers for the GPU.

The only positive feature the Fire-3 clearly has to offer is abundantly more CPU power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive, and this is a $35 board after all, the same price as the PI.

But without proper drivers for the Mali, it's a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you compile the latest UAE4Arm with SDL as its primary display framework, it wouldn't work, because SDL depends on the graphics drivers. So it's back to square one.

But the CPU packs a punch, that is beyond question.

Final verdict

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

There are a lot of stable and excellent options out there, take your time

I was planning to test UAE next, but as I have outlined above: without drivers that properly expose and delegate the power of the Mali, it would be a complete disaster. I'm not even sure it would build.

As such I will just leave this board as is. If it matures at some point that would be great, but my advice to people looking for a great SBC experience — get the new Raspberry PI 3b+ and enjoy learning and exploring there.

And if you are into Amibian.js or making high-quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. If you pay $55 you can pick up the Asus Tinkerboard, which is blisteringly fast and great value for money, despite its turbulent introduction.

Note: You cannot go wrong with the ODroid XU4. It's affordable, stable and fast. So for beginners it's either the Raspberry PI 3b+ or the ODroid. These are the most mature in terms of software, drivers and stability.

The Amiga ARM project

April 19, 2018 15 comments

This has been quite the turbulent week. Without getting into all the details, a post that I made with thoughts and ideas for an Amiga inspired OS for ARM escaped the safe confines of our group, Amiga Disrupt, and took on a life of its own.
This led to a few critical posts being issued publicly, which all boiled down to a misunderstanding. Thankfully this has been resolved and things are back to normal.

The question on everyone's lips now seems to be: did Jon mean what he said, or was it just venting frustration? I thought I made my points clear in my previous post, but sadly Commodore USA formulated a title open to interpretation (which is understandable considering the mayhem at the time). So let's go through it and put this to rest.

Am I making an ARM based Amiga inspired OS?

Hopefully I don’t have to. My initial post, the one posted to the Amiga Disrupt comment section (and mistaken for a project release note), had a couple of very clear criteria attached:

If nothing has been done to improve the Amiga situation [with regards to ARM or x86] by the time I finish Amibian.js (*), I will take matters into my own hand and create my own alternative.

(*) As you probably know, Amibian.js is a cloud implementation of Amiga OS, designed to bring Amiga to the browser. It is powered by a node.js application server; a server that can be hosted either locally (on the same machine as the html5 client) or remotely. It runs fine on popular embedded devices such as Tinkerboard and ODroid, and when run in a full-screen browser with no X or Windows desktop behind it – it is practically indistinguishable from the real thing.

We have customers who use our prototype to deliver cloud based learning for educational institutions. Shipping ready to use hardware units with pre-baked Amibian.js installed is perfect for schools, libraries, museums, routers and various kiosk projects.

smart_desktop

Amibian.js, here running Quake 3 at 60 fps in your browser

Note: This project started years before FriendOS, so we are not a clone of their work.

Obviously this is a large task for one person, but I have written the whole system in Smart Mobile Studio, which is a product our company started some 7 years ago, and that now has a team of six people behind it. In short, it takes object pascal code, like that of Delphi and Freepascal, and compiles it to JavaScript, suitable for both the browser and NodeJS. It gives you a full IDE with form designer, drag & drop visual components and a vast and rich RTL (run-time library), which naturally saves me a lot of time and gives me an edge over other companies working with similar technology. So while it's a huge task, it's leveraged considerably by the toolchain I made for it.

So am I making a native OS for ARM or x86? The short answer: I will if the situation hasn't dramatically improved by the time Amibian.js is finished.

Instead of wasting years trying to implement everything from scratch, Pascal Papara took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded

If you are thinking “so what, who the hell do you think you are?” then perhaps you should take a closer look at my work and history.

I am an ex Quartex member, which was one of the most infamous hacking cartels in Europe. I have 30 years of software development behind me, having worked as a professional developer since the age of 17. I have a history of taking on "impossible" projects and finding ways to deliver them. Smart Mobile Studio itself was deemed impossible by most Delphi developers; it was close to heresy, triggering an avalanche of criticism for even entertaining the idea that object pascal could be compiled to JavaScript, let alone thrive on a JSVM (JavaScript virtual machine).

assembler

Amibian.js runs JavaScript, but also bytecodes. Here showing the assembler prototype

You can imagine the uproar when our generated JavaScript code (compiled from object pascal) actually bested native code. I must admit we didn’t expect that at all, but it changed the way Delphi and object pascal developers looked at the world – for the better I might add.

What I am good at is taking ordinary off-the-shelf parts and assembling them in new and exciting ways, often ways the original authors never intended, in order to produce something unique. My faith is not in myself, but in the ability and innate capacity of human beings to find solutions. The biggest obstacle to progress is ultimately pride and the fear of losing face; something my Buddhist training beat out of me ages ago.

So this is not an ego trip, it’s simply a coder that is completely fed-up with the perpetual mismanagement that has held Amiga OS in captivity for two decades.

Amiga OS is a formula, and formulas are bulletproof

People love different aspects of the same thing – and the Amiga is no different. For some the Amiga is the games. Others love it for its excellent sound capabilities, while some love it for the ease of coding (the 68k is the most friendly cpu ever invented in my book). And perhaps all of us love the Amiga for the memories we have. A harmless yet valuable nostalgia of better times.

image3

Amiga OS 3.1 pimped up, running on Amibian [native] Raspberry PI 3b

But for me the love was always the OS itself. The architecture of Amiga OS is so elegant and dare I say, pure, compared to other systems. And I’m comparing against both legacy and contemporary systems here. Microsoft Windows (WinAPI) comes close, but the sheer brilliance of Amiga OS is yet to be rivaled.

We are talking about a design that delivers a multimedia driven, window based desktop 10 years before the competition. A desktop that would thrive in as little as 512 KB of RAM, with fast and reliable pre-emptive multitasking.

I don't think people realize or understand the true value of Amiga OS. It's not in the games (although games are definitely a huge part of the experience), the hardware or the programs. The reason people have been fighting bitterly over Amiga OS for a lifetime is that the operating system architecture, or "formula", is unmatched to this very day.

Can you imagine what a system that thrives under 512 KB would do to the desktop market? Or even better, what it could bring to the table for embedded and server technology?

And this is where my frustration boils over. Even though we have OS 4.1, we have been forced to stand idly by and watch as mistake after mistake is made. Opportunities that were ripe for the taking (some of them literally placed on Hyperion's doorstep) have been thrown away time and time again.

And they are not alone. Aros and Morphos have likewise missed a lot of opportunities; both opportunities to generate income and secure development, and opportunities to embrace new technology. Although I must stress that I sympathize with Aros, since they lack any official funding. Morphos is doing much better using a normal, commercial license.

Frustration, the mother of invention

When the Raspberry PI was first released I jumped for joy. Finally an SBC (single board computer) with enough power to run a light version of Amiga OS 4.1, with a price tag that everyone can live with. I rushed over to Hyperion to see if they had issued a statement about the PI, but nothing could be found. The AEON site was likewise empty.

The PI version 2 came and went, still no sign that Hyperion would capitalize on the situation. I expected them to issue an "Amiga OS 4.1 light" edition for ARM, which would put them on the map and help them establish a user base. Without a user base and fresh blood there is no chance in hell of selling next generation machines in large enough quantities to justify future development. But once again, opportunity after opportunity came and went.

Sexy, fast and modern: Amiga OS 4.1 would do wonders on ARM

Faster and better suited SBCs started to turn up in droves: the ODroid, the Beaglebone Black, the Tinkerboard, the Banana PI and many, many others. When the Snapdragon IV CPUs shipped on a $120 SBC, the same processor used by the Samsung Galaxy S6, I was sure Hyperion would wake up and bring Amiga OS to the masses. But not a word.

Instead we were told to wait for the Amiga x5000, which is based on PPC. I have no problem with PPC, it's a great platform and packs a serious punch. But since PPC no longer sells to mainstream computer companies like it used to, the price penalty would be nothing short of astronomical. There is also the question of longevity and being able to maintain a PPC based system for the foreseeable future. Where exactly will PPC be in 15 years?

Note: One of the reasons PPC was selected has to do with coding infrastructure. PPC has an established standard, something ARM lacked at the time (this was first established for ARM in 2014). PPC also has an established set of development platforms that you can build on, with libraries and pre-fab modules (pre fabricated modules, think components that you can use to quickly build what you need) that have been polished for two decades now. A developer who knows PPC from the Amiga days will naturally feel more at home with PPC. But sadly PPC is the past and modern development takes place almost exclusively on ARM and x86. Even x86 is said to have an expiration date now.

The only group that genuinely tried to bring Amiga OS to ARM has been the Aros team. They got their system compiled, implemented some rudimentary drivers (information on this has been thin to say the least) and had it booting natively on the Raspberry PI 3b. Sadly they lacked a USB stack (remember I mentioned pre-fab modules above? Well, this is a typical example. PPC devtools ship with modules like this out of the box) so things like mouse, keyboard and external peripherals wouldn’t work.

3

Aeros, the fastest Amiga you will ever play with. Running on the Raspberry PI 3b

And like always, which is the curse of the Amiga, "something came up", and the whole Raspberry PI / ARM initiative was left for dead. The details around this are sketchy, but the lead developer had a personal issue that forced him to set a new direction in life. And for some reason the other Aros developers have just continued with x86, even though a polished ARM version could have made them some money and helped finance future development. It's the same story, again and again.

But then something amazing happened! Out of the blue came Pascal Papara with a new take on Aros, namely AEROS. This is a distro after my own heart. Instead of wasting years trying to implement everything from scratch, Pascal took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded. And the result? It is the fastest desktop you will ever experience on ARM. Seriously, it runs so fast and smooth on the Raspberry PI that you could easily mistake it for a $450 Intel i3.

Sadly, Pascal has been more or less alone in this development. And truth be told, he has molded it to suit his own needs rather than the consumer's. Since his work includes a game machine and some Linux services, the whole Linux system is exposed to the Aros desktop. This is a huge mistake.

Using the Linux kernel to capitalize on the thousands of man-hours invested in it, not to mention the massive Linux driver database, is a great idea. It's also the first thing that came to my mind when contemplating the issue.

But when running Aros on top of this, the Linux aspect of the system should be abstracted away. Much like what Apple did with Unix. You should hardly notice that Linux is there unless you open a shell and start to investigate. The Amiga filesystem should be the only filesystem you see when accessing things from the desktop, and a nice preferences option for showing / hiding mounted Linux drives.

My plans for an ARM based Amiga inspired OS

Building an OS is not a task for the faint of heart. Yes, there are a lot of embedded / pre-fab based systems to pick from out there, but you also have to be sensible. You are not going to code a better kernel than Linus Torvalds, so instead of wasting years trying to catch up with something you cannot possibly catch up with, just grab the kernel and make it work for us.

The Linux kernel solves things such as process contexts, “userland” vs “kernel space” (giving the kernel the power to kill a task and reclaim resources), multitasking / threading, thread priorities, critical sections, mutexes and global event objects; it gives us IPC (inter process communication), disk IO, established and rock solid sound and graphics frameworks; and last but perhaps most important: free access to the millions of drivers in the Linux repository.

Screenshot

Early Amibian.js login dialog

You would have to be certified insane to ignore the Linux kernel, thinking you will somehow be the guy (or group) that can teach Linus Torvalds a lesson. This is a man who has been writing kernels for 20+ years, and he does nothing else. He is surrounded by a proverbial army of developers who code, test, refactor and strive to deliver optimal performance, safety and quality assurance. So sorry if I push your buttons here, but you would be a moron to take him on. Instead, absorb the kernel and gain access to the benefits it has given Linux (technically the kernel is "Linux", the rest is GNU, but you get what I mean).

With the Linux kernel as a foundation, as much as 50% of the work involved in writing our OS is finished already. You don't have to invent a driver API. You don't have to invent a new executable format (or write your own ELF parser if you stick with the Linux executable). You can use established compilers like GCC / Clang and Freepascal. And you can even cherry-pick some low-level packages for your own native API (like SDL, OpenGL and other things that would take years to finish).

But while we want to build our house on rock, we don’t want it to be yet another Linux distro. So with the kernel in place and a significant part of our work done for us, that is also where the similarities end.

The end product is Amiga OS, which means that we need compatibility with the original Amiga ROM libraries (read: the API). Had we started from scratch that would have been a tremendous effort, which is also why Aros is so important: Aros gives us a blueprint of how these APIs can be implemented.

But our main source of inspiration is not Aros, but Amithlon. What we want to do is naturally to pipe as much as we can from the Amiga APIs back to the Linux kernel. Things like device detection, memory allocation, file IO, pipes, networking: our library files will mostly be thin wrappers that expose Amiga compatible calls; methods that call the Linux kernel to do the job. So our Amiga library files will be proxy objects whenever possible.

AmithlonQEmu

Amithlon, decades ahead of its time

The hard work is when we get to the window manager, or Intuition. Here we can't cheat by pushing things back to Linux. We don't want to install X either (although we can render our system into the X framebuffer if we like), so we have to code a window manager. This is not as simple as it sounds, because our system must operate with multiple cores, be multi-threaded by design and tap into the grand scheme of things. Things like messages (which are used by applications to respond to input) must be established, and all the event codes from the original Amiga OS must be replicated.
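To make the "messages and event codes" point concrete, here is a purely conceptual sketch of an IDCMP-style message port. It is written in JavaScript only for brevity (a native implementation would obviously live in C or Pascal); the IDCMP flag names and the WaitPort()/GetMsg() pattern come from the original Amiga API, while the class and function names around them are invented for illustration:

```javascript
// Conceptual sketch of Intuition-style message delivery; not a real implementation.
const IDCMP_MOUSEBUTTONS = 1 << 3;   // original Amiga IDCMP event classes
const IDCMP_CLOSEWINDOW  = 1 << 9;
const IDCMP_RAWKEY       = 1 << 10;

class MsgPort {
  constructor() { this.queue = []; }
  put(msg) { this.queue.push(msg); }
  get()    { return this.queue.shift() || null; }
}

class IntuitionWindow {
  constructor(title, idcmpFlags) {
    this.title = title;
    this.idcmpFlags = idcmpFlags;   // which event classes this window asked for
    this.userPort = new MsgPort();  // where the window manager delivers messages
  }
}

// The window manager translates raw input into Amiga-style IntuiMessages
// and only delivers the classes each window subscribed to.
function deliver(win, msgClass, code) {
  if (win.idcmpFlags & msgClass) {
    win.userPort.put({ class: msgClass, code: code, seconds: Date.now() / 1000 });
  }
}

// An application then drains its port, much like WaitPort()/GetMsg() on the Amiga.
const win = new IntuitionWindow('Shell', IDCMP_CLOSEWINDOW | IDCMP_RAWKEY);
deliver(win, IDCMP_RAWKEY, 0x45);   // 0x45 is the raw key code for Esc
let msg;
while ((msg = win.userPort.get()) !== null) {
  if (msg.class === IDCMP_CLOSEWINDOW) { /* tear down the window */ }
}
```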

So this work won't be easy, but with the Linux kernel as a foundation, the hardest task of all is already taken care of. The magic of a kernel is process management and task switching. This is about as hard-core as it gets; without that you can almost forget the rest. But since we base our system on the Linux kernel, we can focus 100% on the real task, namely to deliver a modern Amiga experience; one that is platform independent (read: conforms to standard Linux and can thus be recompiled and run anywhere Linux can run), preserves as much of the initial formula as possible, and can be successfully maintained far into the future.

By pushing as much of our work as possible into user-space (the process space where ordinary programs run; the kernel runs outside this space and is thus unaffected when a program crashes) and adhering to the Linux kernel beneath the bonnet, we get a system that can be re-compiled anywhere Linux runs, and it can be done without any change to our codebase. Linux takes care of things like drivers, OpenGL and sound, and presents to us a clean API that is identical on every platform. It doesn't matter if it's ARM, PPC, 68k, x86 or MIPS. As long as we follow the standards we are home free.

Last words

I hope all of this clears up the confusion that has surrounded the subject this week. Again, the misunderstanding that led to some unfortunate posts has been resolved. So there is no negativity, no drama and we are all on the same page.

amidesk

Early Amibian.js prototype, running 68k in the browser via an optimized uae.js

Just remember that I have set some restrictions for my involvement here. I sincerely hope Hyperion and the Aros development group can focus on ARM, because the community needs this. While the Raspberry PI might seem too small a form-factor to run Aros, projects like Aeros have proven just how effective the Amiga formula is. I’m sure Hyperion could find a powerful ARM SOC in the price range of $120 and sell a complete package with profit for around $200.

What the Amiga community needs now, is not expensive hardware. The userbase has to be expanded horizontally across platforms. Amiga OS / Aros has much to offer the embedded market which today is dominated by overly complex Linux libraries. The Amiga can grow laterally as a more user-friendly alternative, much like Android did for the mobile market. Once the platform is growing and established – then custom hardware could be introduced. But right now that is not what we need.

I also hope that the Aros team drops whatever they are working on, forks Pascal Papara's codebase, and spends a few weeks polishing the system. Abstract away the Linux foundation like Apple has done, get those sexy 32-bit OS4 icons (note: the icons used by Amiga OS 4 are available for free download from the designer's website) and a nice theme that looks similar to OS 4 (but not too similar). Get Lazarus (the Freepascal IDE) going and ship the system with ready-to-use Pascal, C/C++ and Basic development environments. Bring back the fun in computing! The code is already there, use it!

page2-1036-full

Aeros interfaces directly with Linux; I would propose a less direct approach

Just take something simple, like a compatible browser. It's actually not that simple, both for reasons of complexity and because of how memory is handled by PPC. With a Linux foundation, something like Chromium Embedded could be linked into the Amiga side of things and we would have a native, fast, established and up-to-date browser.

At the same time, since we have API level compatibility, people can recompile their Aros and Morphos applications and they would run more or less unchanged.

I really hope that my little protest here, if nothing else, helps people realize that there are viable options readily at hand. Commodore is not coming back, and the only future this platform has – is the one we make. So people have to ask themselves how much they want a future.

If the OS gains momentum then there will be grounds for investors to look at custom hardware. They can then choose inexpensive off-the-shelf parts to cover the normal functionality you expect in a modern computer, while more resources can go into the custom hardware that sets the system apart. But we can't start there. It has to be built up brick by brick, standing on the shoulders of giants.

OK, rant over 🙂

Why buy a Vampire accelerator?

August 24, 2017 4 comments

With the Amiga about to re-enter the consumer market, a lot of us "old timers" are busy knocking the dust off our old machines. And I love my old machines, even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved in this; there is no point in lying about any of it.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But like you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But the vintage Amigas? Sadly they lack the power to do anything useful [in the "modern" sense].

Enter the vampire

The Vampire is a product that started shipping about a year ago. It's an FPGA based accelerator, and it's quite frankly turning the retro scene on its head! Technically it's a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

vanpire

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I'm not going to get into the argument about FPGA not being "real", because that's not what FPGA is about. Nor am I negative towards classic hardware, because I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which is again divided into different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won't even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single board computer): you just hook up power, video, storage and a mouse, and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people they will eventually break down and stop working. There is also price to consider because getting your 20-year-old A500 fixed is not easy. First of all you need a specialist that knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far, well – a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used more expensive FPGA modules, 68k based Amigas could compete with modern processors (Source: https://www.generationamiga.com/2017/08/06/arria-10-based-vampire-could-reach-600mhz/). Can you imagine a 68k Amiga running side by side with the latest Intel processors? Sounds like a lot of fun if you ask me!

20934076_1512859975447560_99984790080699660_o

Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That feeling has been greatly exaggerated. I recently bought a sexy A1000, which is the first model that was ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get, but hey, it was my first ever Amiga, so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can't get new, working floppy disks anymore). Seriously. I love the machine to bits, but it's just damn tedious to work on in 2017. It belongs to the 80s, and no-one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3D rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don't care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today's standards, is painful on the Raspberry PI. Here showing my retro-fitted A500 PI with its sexy LED keyboard. It will soon get a makeover with a UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus comes along with the Tinkerboard. A board that I hated when it first came out (read part 1 here, part 2 here) due to its shabby drivers. The boards have been collecting dust on my office shelf for six months or so, and it was blind luck that I downloaded and tested a new disk image. If you missed that part you can read the full article here.

And I’m glad I did because man – the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NKR – or 62€. So yes, it’s about twice the price of the PI – but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can't be overclocked to the extent that models 1 and 2 could. You can over-volt it and tweak the GPU and memory to make it run faster. But people don't call that "overclocking" in the true sense of the word, because that means the CPU is set to run at speeds beyond the manufacturing specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that's it. The Tinkerboard can be overclocked to the hilt.

A1222

The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU, you can overclock it to 2.6 GHz. And like the PI you can also tweak the memory and GPU. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 harddisk you will also beef up IO speeds by 100 megabytes a second, which makes a huge difference. Linux does memory paging, and paging slows down everything if you just use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for some Linux services and drivers that have to run in the background, 3.2 x 3 = 9.6; let's round that off to 9, since there will be performance hits from the background services. But still: 70€ for an Amiga that runs 9 times faster than an A4000 with an MC68040 CPU? That should blow your mind!

I'm sorry, but there has to be something wrong with you if that doesn't get your juices flowing. I rarely game on my classic Amiga setup. I'm a coder, but with this kind of firepower you can run some of the biggest and best Amiga titles ever made, and the Tinkerboard won't even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€  to get my A1000 shipped from Belgium to Norway. Had tax been added on the original price, I would have looked at something in the 700€ range. Still – 500€ for a 20-year-old computer that can hardly run Workbench 1.2? Unless you add the word “collector” here you are in fact barking mad!

Perhaps you are looking to get an Amiga for old times' sake, or you have an A500 and wonder if you should fork out for the Vampire. Will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis I can't imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea. But the accelerator? 300€?

I mean, you can pay 70€ and get the fastest Amiga that has ever existed. Not a bit faster, not something in second place; no, THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files is cool with the Vampire, then you are in for a treat here, because the same software will work. You can safely download the latest patches and updates to various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than on the Vampire.

14269402_10153769032280906_1149985553_n

My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can’t be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. Morphos is stable and good on the G5 PPC — there has never been a time when there were so many options for Amiga enthusiasts! Not even during the golden days between 1989-1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the FPGA to a faster model, they have in fact re-created a viable computer platform. A 68080 FPGA based CPU that can go head to head with x86? That is quite an achievement, and I support it wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can actually, right now, go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How can that be negative? I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and attract new, young members if the average price of a 20-year-old machine is 500€? Which incidentally is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it's black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no-one sees the potential in that technology beyond game playing; but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I'm happy that I have the A1000, but that's where it stops for me. I am looking forward to the latest Amiga x5000 PPC and can't wait to get coding on that; but unless the Apollo crew upgrades to a faster FPGA I see little reason to buy anything. I would gladly pay 500 – 1000 € for something that can kick modern computers in the behind, and I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option, since it gives you both 68k and the new OS 4 platform for one price. And for affordable Amiga computing, emulation is now of such quality that you won't really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

The Tinkerboard Strikes Back

August 20, 2017 1 comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it; in fact I bought two of the boards because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.

asus

The initial release was worse than bad, it was horrible

Since my initial review some months ago, good things have happened. Asus seems to have listened to the "poonami" of negative feedback and adapted their website accordingly. Unlike the first time I visited, when you literally had to dig into recursive menus (which was less than intuitive in this case) just to download the software, the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get hold of; downloading it is reduced to a "one liner" for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever popular Python and Scratch.

I am a bit disappointed that they don't provide Freepascal units. A lot of developers use object pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good pascal programmer, but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release was odd; Asus is a huge, multi-national corporation, yet their release had "three-man basement band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. With embedded applications I mean things that run on kiosk systems, cash machines and stuff like that; basically anything with a touch-screen that does something.

smarts

The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix is written 100% in Node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with an HTML5 front-end. This is a ready to use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more are all implemented for you. You don't have to use it of course, you can write your own system from scratch if you like. We created "Amibian" to demonstrate just how powerful Smart Pascal can be in the right hands.
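To make the client/server split concrete, here is a minimal, hand-written node.js back-end of the kind described above. This is not the actual Amibian.js server (that one is generated from Smart Pascal); the folder name and the /api/time endpoint are made up purely for illustration:

```javascript
// Minimal sketch of a node.js back-end serving an HTML5 front-end plus one
// JSON endpoint. Demo only: no caching, no security, paths are placeholders.
const http = require('http');
const fs = require('fs');
const path = require('path');

const WWW_ROOT = path.join(__dirname, 'wwwroot'); // compiled HTML5 client goes here

const server = http.createServer((req, res) => {
  if (req.url === '/api/time') {
    // A tiny JSON service the front-end can call
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ now: new Date().toISOString() }));
    return;
  }
  // Everything else is treated as a static file from the front-end bundle
  const file = path.join(WWW_ROOT, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, (err, data) => {
    if (err) { res.writeHead(404); res.end('Not found'); return; }
    res.writeHead(200);
    res.end(data);
  });
});

server.listen(8090, () => console.log('Desktop back-end on http://localhost:8090'));
```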

With this in mind, my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single core, event-driven runtime system; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing, because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost whole ecosystems in themselves. With JIT compilers and LLVM optimization, it's a whole new ballgame.
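The clustering point is easy to demonstrate with node's built-in cluster module: fork one worker per CPU core and let them all share the same listening socket. A minimal sketch (the port number is arbitrary):

```javascript
// Minimal node.js clustering sketch: one HTTP worker per CPU core,
// all sharing the same listening socket.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // The master only forks workers and replaces any that die
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(8080);
}
```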

Making a scale

To give you a better context to see where the Tinkerboard is on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and see how each of them perform on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and last but not least: the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled with the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b

bassoon

Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (UAE4Arm contains hand optimized assembler, so it's not exactly a fair fight), but when it comes to modern HTML5 the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics and CPU intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high quality IoT, unless it's headless code and node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box; in fact you can target 40 embedded systems without any change to your code. But for large, high quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4

XU4CloudShellAssemble29

The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. And it's only 10€ more than the Raspberry PI, which is a toy at best. But the ODroid XU4 delivers a good Linux desktop, and it's well worth the extra 10€ when compared to the PI.

Personally I don't understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can't handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now, it's not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That's pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation though, because Asus really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the board in reviews; it's been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding. One might even say it’s become bloated. But it does deliver a near Microsoft Windows like experience which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus have cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting with lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes; hell, when you move the mouse around the system bloody freezes! Well, that is not the case with the Tinkerboard, that's for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless if you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

As already mentioned it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state-of-the-art product built with off-the-shelf web components. It's in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinkerboard I felt cheated. It was so frustrating because the specs were so good, and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. As I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum LLVM optimization and ran as few services as possible, the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact it's so good that I'm donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and YouTube, buy a nice touch-screen display and wall mount it in her room.

Well done Asus!

Amibian.js on bitbucket

August 1, 2017 Leave a comment

The Smart Pascal driven desktop known as Amibian.js is available on Bitbucket. It was hosted in a normal GitHub repository earlier, so make sure you clone from this one.

About Amibian.js

Amibian is a desktop environment written in Smart Pascal. It compiles to JavaScript and can be used through any modern HTML5-compliant browser. The project consists of both a client and a server, both written in Smart Pascal. The server is executed by node.js (note: please install PM2 to have better control over scaling and task management: http://pm2.keymetrics.io/).
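If you go the PM2 route, a small ecosystem file is usually all you need to keep the server process supervised. The file below is only a generic sketch; the script name and the instance count are assumptions, not something taken from the repository:

```javascript
// ecosystem.config.js - generic PM2 sketch for supervising a node.js server.
// "server.js" and the instance count are placeholders; adjust to your setup.
module.exports = {
  apps: [{
    name: 'amibian-server',   // label shown by "pm2 list"
    script: 'server.js',      // entry point of the node.js back-end
    instances: 2,             // run two workers...
    exec_mode: 'cluster',     // ...load-balanced by PM2's cluster mode
    watch: false,             // set to true to restart on file changes
    env: { NODE_ENV: 'production' }
  }]
};
```

Start it with "pm2 start ecosystem.config.js"; "pm2 save" followed by "pm2 startup" will bring the process back after a reboot.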

smartdesk

Amibian.js is best suited for embedded projects, such as kiosk systems. It has been used in tutoring software for schools, custom routers and a wide range of different targets. It can easily be molded into a rich environment for SAD (single application device) based software, but also made to act more like a real operating system:

  • Class driven filesystem, easy to target external services
    • Ram device-type
    • Browser cache device-type
    • ZIPfile device-type
    • Node.js device-type
  • Cross domain application hosting
    • Traditional IPC protocol between hosted application and desktop
    • Shared resources
      • css styling
      • glyphs and images
    • Event driven visual controls
  • Windowing manager makes it easy to implement custom applications
  • Support for fullscreen API

Amibian ships with UAE.js (based on the SAE.js codebase) making it possible to run Amiga software directly on the desktop surface.

The bitbucket repository is located here: https://bitbucket.org/hexmonks/client

 

Smart-Pascal: A brave new world, 2022 is here

April 29, 2017 6 comments

Trying to explain what Smart Mobile Studio does and the impact it can have on your development cycle is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped-up, one-click "app makers" more than once.

I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.

Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don’t own any cars, yet they are now the
biggest taxi company in the world. -Source: R.M.Goldman, Ph.d

So I had enough. Instead of trying to tell people what I can do, I decided I'm going to show them instead. As the Americans say: "talk is cheap". And a working demonstration is worth a thousand words.

Care to back that up with something?

A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMware. The Amiga forums went off the charts!

vmware

For those that haven't followed my blog or know nothing about the desktop I'm talking about, here is a short summary of the events so far:


Smart Mobile Studio is a compiler that takes pascal, like that made popular in Delphi or Lazarus, and compiles it to JavaScript instead of machine code.

This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.

You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.

Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board unit with off-the-shelf $35 Raspberry Pi mini-computers. They then used Smart Pascal to write their client software and ran it in a fullscreen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.

The result? They were able to cut production cost by $265 per unit.


Right, back to the desktop. I mentioned the Amiga community. This is a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (it just took 20+ years), and the look and feel of the new operating system, Amiga OS 4.1, is the look and feel I have used in The Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512 KB of memory (!). So this is my "ode to OS 4" if you will.

And the desktop has caused quite a stir in the Delphi community, the cloud community and the retro community alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their noses all this time.

And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both the visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job) I would have knocked it out in a week.

A desktop as a project type

All development environments have project types. If you open up Delphi and click "new" you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

delphistuff

Delphi offers a wide range of project types you can create

The same is true for Visual Studio. You click "new solution" and can pick from a wide range of different projects: web projects, servers, desktop applications and services.

Smart Pascal is the only system where you click “new project” and there is a type called “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.

And the desktop is unique to you. You get to shape it, brand it and make it your own!

Let us take a practical example

Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.

legacy2

The application itself is large and complex, littered with legacy code and "quick fixes" going back decades. Updating such a project is itself a monumental task; but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity, all of that before you even begin porting the invoice system itself? The cost is astronomical.

And it happens every single day!

In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives them a complete desktop environment that is unique to their project and company.

A desktop that they can shape, adjust, alter and adapt according to the needs of their employer. Things like file-type recognition, storage and database connectivity are already taken care of. The developer can focus on the task at hand, namely delivering a modern implementation of the invoice and credit software – not wasting months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.

Once the desktop’s look and feel is in order, the developer has a simple choice to make:

  • Should the whole desktop represent the invoice system or ..
  • Should the invoice system be implemented as a secondary application running on the desktop?

If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.

If however the customer would like to expand the system later, perhaps add team management, third-party web-services or open-office like productivity (a unified intranet if you like) – then the second option makes more sense.

On the brink of a revolution

The developer of 2022 is not limited to the desktop, nor restricted to a particular operating system or chipset. Fact is, cloud has already reduced these to a matter of preference. There is no strategic advantage in using Windows over Linux when it comes to cloud software.

Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux) – cloud developers deliver whole ecosystems; constellations of software constructed from many parts, both micro-services developed in-house and services from others, like Amazon or Azure.

All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.

smartdesk

The Smart Desktop, codename “Amibian.js”

Access to products written in Smart is through the browser, or sometimes through a “paper thin” native host (Cordova/PhoneGap, Delphi or C/C++) that exposes system-level functionality. These hosts wrap your application in a native, executable container ready for the App Store or Google Play.

Now the visual content is typically the same, only adapted for a particular device. The real work is divided between the client (which is now very capable) and your server back-end.

So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.

asmjs

Above: One of my asm.js prototype compilers. Let’s just say it runs fast!

Scaling a solution from processing 100 invoices a minute to handling 100,000 invoices a minute is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js scale far more gracefully.

What has emerged up until now is just the tip of the iceberg.

Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

The Smart Desktop back-end running as a system service on a Raspberry PI
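
Just to illustrate on a systemd-based image – the unit name, user and paths below are examples only, the point being that the node.js back-end produced by Smart Pascal is an ordinary script that systemd can supervise:

# /etc/systemd/system/smartdesk.service  (example name and paths)
[Unit]
Description=Smart Desktop back-end
After=network.target

[Service]
# the node.js server script compiled from Smart Pascal
ExecStart=/usr/bin/node /home/pi/smartdesk/server.js
WorkingDirectory=/home/pi/smartdesk
Restart=always
User=pi

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable smartdesk followed by sudo systemctl start smartdesk, and the back-end starts at boot and is restarted if it ever crashes.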

As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.

Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/

Smart Desktop: Inter frame communication

April 24, 2017 Leave a comment

What is a process in a web desktop environment? I have been thinking a bit about this, and it’s actually not that easy to pin down an external script or website running in a frame as a solid “concept” (I use the word entity in my research notes). Visually it appears as a secondary, isolated web application – but ultimately (or technically) it’s just another web object with its own rendering context.

amibian_cortana

What I’m talking about is programs. Amibian.js or Smart desktop can already run a wealth of applications – which ultimately are just external or internal web-pages (a very simple term in some cases, but it will have to do). But when you want complex behavior from an ecosystem, you must also provide an equally complex means of communication.

And this brings me neatly to the heart of today’s task – namely how the desktop can talk to running web applications and vice versa. And there are ultimately only two real options to choose from here: pure client, or client-server.

Unlike other embedded / web desktops I want to do as much as possible client-side. It would take me five minutes to implement message dispatching if I did this server-side, but the less dependent we are on a server – the more flexible amibian.js will be in terms of integration.

Screenshot

The first Desktop-Application type running in the desktop! And it’s aware that it is running under Amibian. This frame/host awareness helps establish the parent/child relationship

Turns out the browser has an API for this: a message API that allows a main website to talk with other websites embedded in IFrames. Which is just what we need.

But it’s very crude and not as straightforward as you might think. You have commands like postMessage() and you can set up an event-handler that fires whenever something comes through. But you also have security measures to think about. What if you interface with a web application designed to kill the system? Without the safety of an abstraction layer that would indeed be a very easy hack.
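
To show what we are wrapping, here is a rough sketch of the bare browser calls, written the same way the RTL does it (plain JavaScript inside asm blocks). The log text and the “*” origin are placeholders – real code should check and name origins explicitly:

procedure ListenForRawMessages;
begin
  asm
    // any window or frame can listen for incoming messages like this
    window.addEventListener('message', function (event) {
      // always inspect event.origin before trusting event.data
      console.log('message from ' + event.origin + ': ' + event.data);
    }, false);
  end;
end;

procedure SendRawMessageToParent(const Text: string);
begin
  asm
    // "*" accepts any origin; production code should pass the exact target origin
    window.parent.postMessage(@Text, "*");
  end;
end;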

The Smart message API

This has been in the RTL for quite some time but I haven’t really used it for much. I had one demo a couple of years back that used the message API to handle off-screen rendering in a separate IFrame (sort of a poor man’s thread), but that was it.

Well, time to update and bring it into 2017! So I give you the new and improved inter-frame, inter-window and inter-domain message framework! (ta da!)

It should be self-evident how to use it, but a little context won’t hurt:

  • You inherit from TW3MessageData and implement whatever data-fields you need
  • You send messages using the W3_PostMessage() procedure
  • You broadcast messages (i.e. send the message to all frames and windows listening) using the W3_BroadcastMessage procedure

But here comes the twist, because I hate having to write a bunch of if/then/else switches for messages. So instead I created a simple subscription system – super small, yet very efficient!

msgbroker

In other words, you inform the system which messages you support, and that becomes your subscription. Whenever the local dispatcher encounters that particular message type, it will forward the message object to you.

Here is the code. Notice the helper functions that deal with finding the current domain and so on – this will come in handy when establishing a parent/child relationship.

Also note: you are expected to inherit from these classes and shape them to your message types. I have implemented the bare bones that mimic the Windows message API (a small usage sketch follows after the unit). And yes, please google inter-frame communication while testing this.

unit SmartCL.Messages;

interface

uses
  w3c.dom,
  System.Types,
  System.Time,
  System.JSON,
  Smartcl.Time,
  SmartCL.System;

const
  CNT_QTX_MESSAGES_BASEID = 1000;

type

  TW3MessageData         = class;
  TW3CustomMsgPort       = class;
  TW3OwnedMsgPort        = class;
  TW3MsgPort             = class;
  TW3MainMessagePort     = class;
  TW3MessageSubscription = class;

  TW3MessageSubCallback  = procedure (Message: TW3MessageData);
  TW3MsgPortMessageEvent = procedure (Sender: TObject; EventObj: JMessageEvent);
  TW3MsgPortErrorEvent   = procedure (Sender: TObject; EventObj: JDOMError);

  (* The TW3MessageData represents the actual message-data which is sent
     internally by the system. Notice that it inherits from JObject, which
     means it supports 1:1 mapping from JSON.
     This is why Deserialize is a member, and Serialize is a class member. *)
  TW3MessageData = class(JObject)
  public
    property    ID: Integer;
    property    Source: String;
    property    Data: String;

    function Deserialize: string;
    class function Serialize(const ObjText: string): TW3MessageData;

    constructor Create;
  end;

  (* A "mesage port" is a wrapper around the [obj]->Window[]->OnMessage
     event and [obj]->Window[]->PostMessage API.
     Where [obj] can be either the main window, or an embedded
     IFrame->contentWindow. This base-class implements the generic behavior.
     You must inherit or use a decendant class if you dont know exactly what
     you are doing! *)
  TW3CustomMsgPort = Class(TObject)
  private
    FWindow:    THandle;
  protected
    procedure   Releasewindow;virtual;
    procedure   HandleMessageReceived(eventObj: JMessageEvent);
    procedure   HandleError(eventObj: JDOMError);
  public
    Property    Handle:THandle read FWindow;
    Procedure   PostMessage(msg:Variant;targetOrigin:String);virtual;
    procedure   BroadcastMessage(msg:Variant;targetOrigin:String);virtual;
    Constructor Create(WND: THandle);virtual;
    Destructor  Destroy;Override;
  published
    property    OnMessageReceived: TW3MsgPortMessageEvent;
    Property    OnError: TW3MsgPortErrorEvent;
  end;

  (* This is a baseclass for window-handles that already exist.
     You are expected to provide that handle via the constructor *)
  TW3OwnedMsgPort = Class(TW3CustomMsgPort)
  end;

  (* This message-port type is very different from both the
     ad-hoc baseclass and "owned" variation above.
     This one actually creates its own IFrame instance, which
     means it's a completely stand-alone entity which doesn't need an
     existing window to dispatch and handle messages *)
  TW3MsgPort = Class(TW3CustomMsgPort)
  private
    FFrame:     THandle;
    function    AllocIFrame: THandle;
    procedure   ReleaseIFrame(aHandle: THandle);
  protected
    procedure   ReleaseWindow; override;
  public
    constructor Create; reintroduce; virtual;
  end;

  (* This message port represent the "main" message port for any
     application that includes this unit. It will connect to the main
     window (deriving from the "owned" base class) and has a custom
     message-handler which dispatches messages to any subscribers *)
  TW3MainMessagePort = Class(TW3OwnedMsgPort)
  protected
    procedure HandleMessage(Sender: TObject; EventObj: JMessageEvent);
  public
    constructor Create(WND: THandle); override;
  end;

  (* Information about registered subscriptions *)
  TW3SubscriptionInfo = Record
    MSGID:    integer;
    Callback: TW3MessageSubCallback;
  end;

  (* A message subscription is an object that allows you to install
     X number of event-handlers for messages you want to receive. It's
     important to note that all subscribers to a message will get the
     same message -- there are no blocking or ownership concepts
     involved. This system is a huge improvement over the older WinAPI *)
  TW3MessageSubscription = class(TObject)
  private
    FObjects:   Array of TW3SubscriptionInfo;
  public
    function    ValidateMessageSource(FromURL: string): boolean; virtual;
    function    SubscribesToMessage(const MSGID: integer): boolean; virtual;
    procedure   Dispatch(const Message: TW3MessageData); virtual;
    function    Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
    procedure   Unsubscribe(const Handle: THandle);

    Constructor Create; virtual;
    Destructor  Destroy; override;
  end;

  (* Helper functions which simplify message handling *)
  //function  W3_MakeMsgData:TW3MessageData;
  procedure W3_PostMessage(const msgValue: TW3MessageData);
  procedure W3_BroadcastMessage(const msgValue: TW3MessageData);

  (* Audience returns true if a message-ID has any
     subscriptions assigned to it *)
  function  W3_Audience(msgId: integer): boolean;

  (* This returns true if the current smart application is embedded in a frame.
     That can be handy to know when establishing parent/child relationships
     between main-application and child "programs" running in frames *)
  function AppRunningInFrame: boolean;
  function GetUrlLocation: string;
  function GetDomainFromUrl(URL: string; const IncludeSubDomain: boolean): string;

implementation

uses SmartCL.System;

var
_mainport:    TW3MainMessagePort = NIL;
_subscribers: Array of TW3MessageSubscription;

function AppRunningInFrame: boolean;
var
  LMyFrame: THandle;
begin
  result := false;
  // window.frameElement returns the element (such as <iframe> or <object>)
  // in which the window is embedded, or null if the window is top-level.
  try
    // Note: attempting to read frameElement will throw a SecurityError
    // exception in cross-origin iframes. It should just return null,
    // but not all browsers play by the rules -- hence the try/catch.
    // If it does throw, the application is by definition inside a frame.
    asm
      @LMyFrame = window.frameElement;
    end;
  except
    on e: exception do
    begin
      result := true;
      exit;
    end;
  end;
  if (LMyFrame) then
    result := true;
end;

function GetUrlLocation: string;
begin
  asm
    @result = window.location.hostname;
  end;
end;

function GetDomainFromUrl(Url: string; const IncludeSubDomain: boolean): string;
begin
  url := Url.trim();
  if url.length < 1 then
  begin
    asm
      @Url = window.location.hostname;
    end;
  end;
  asm
    @url = (@url).replace(/^(https?:\/\/)?(www\.)?/i, '');
    if (!@IncludeSubDomain) {
        @url = (@url).split('.');
        @url = (@url).slice((@url).length - 2).join('.');
    }
    if ((@url).indexOf('/') !== -1) {
      @result = (@url).split('/')[0];
    } else {
      @result = @url;
    }
  end;
end;

//#############################################################################
//
//#############################################################################

procedure QTXDefaultMessageHandler(sender:TObject;EventObj:JMessageEvent);
var
  x:      Integer;
  mItem:  TW3MessageSubscription;
  mData:  TW3MessageData;
begin
  mData := TW3MessageData.Serialize(EventObj.Data);
  //mData := new TW3MessageData();
  //mData.Serialize(EventObj.Data);

  for x:=0 to _subscribers.count-1 do
  Begin
    mItem := _subscribers[x];

    // Check that the message is subscribed to
    if mItem.SubscribesToMessage(mData.ID) then
    begin
      // Make sure subscriber validates the caller-domain
      if mItem.ValidateMessageSource( TVariant.AsString(EventObj.source) ) then
      begin
        (* We execute with a minor delay, allowing the browser to
         exit the function before we dispatch our data *)
        TW3Dispatch.Execute(procedure ()
        begin
          mItem.Dispatch(mData);
        end, 10);
      end;
    end;
  end;
end;

{ function W3_MakeMsgData: TW3MessageData;
begin
  result := new TW3MessageData();
  result.Source := "*";
end;      }

function W3_GetMsgPort: TW3MainMessagePort;
begin
  if _mainport = NIL then
  _mainport := TW3MainMessagePort.Create(BrowserAPI.Window);
  result := _mainport;
end;

procedure W3_PostMessage(const msgValue:TW3MessageData);
begin
  if msgValue<>NIL then
  W3_GetMsgPort().PostMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Postmessage failed, message object was NIL error');
end;

procedure W3_BroadcastMessage(const msgValue:TW3MessageData);
Begin
  // like W3_PostMessage above, the object is sent in its JSON form so the
  // receiving dispatcher can parse it back into a TW3MessageData
  if msgValue<>NIL then
  W3_GetMsgPort().BroadcastMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Broadcastmessage failed, message object was NIL error');
end;

function  W3_Audience(msgId: Integer): boolean;
var
  x:  Integer;
  mItem:  TW3MessageSubscription;
begin
  result:=False;
  for x:=0 to _subscribers.count-1 do
  Begin
    mItem:=_subscribers[x];
    result:=mItem.SubscribesToMessage(msgId);
    if result then
    break;
  end;
end;

//#############################################################################
// TW3MessageData
//#############################################################################

Constructor TW3MessageData.Create;
begin
  self.Source:="*";
end;

function  TW3MessageData.Deserialize: string;
begin
  result := JSON.Stringify(self);
end;

class function TW3MessageData.Serialize(const ObjText: string): TW3MessageData;
begin
  result := TW3MessageData(JSON.parse(ObjText));
end;

//#############################################################################
// TW3MainMessagePort
//#############################################################################

constructor TW3MainMessagePort.Create(WND: THandle);
begin
  inherited Create(WND);
  OnMessageReceived := QTXDefaultMessageHandler;
end;

procedure TW3MainMessagePort.HandleMessage(Sender: TObject; EventObj: JMessageEvent);
begin
  QTXDefaultMessageHandler(self, eventObj);
end;

//#############################################################################
// TW3MessageSubscription
//#############################################################################

Constructor TW3MessageSubscription.Create;
begin
  inherited Create;
  _subscribers.add(self);
end;

Destructor TW3MessageSubscription.Destroy;
Begin
  _subscribers.Remove(self);
  inherited;
end;

function TW3MessageSubscription.Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
var
  LObj: TW3SubscriptionInfo;
begin
  LObj.MSGID := MSGID;
  LObj.Callback := @CB;
  FObjects.add(LObj);
  result := LObj;
end;

procedure TW3MessageSubscription.Unsubscribe(const Handle: THandle);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  begin
    if variant(FObjects[x]) = Handle then
    Begin
      FObjects.delete(x,1);
      break;
    end;
  end;
end;

function TW3MessageSubscription.ValidateMessageSource(FromURL: string): boolean;
begin
  // by default we accept messages from anywhere
  result := true;
end;

function TW3MessageSubscription.SubscribesToMessage(const MSGID: integer): boolean;
var
  x:  Integer;
begin
  result:=False;
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = MSGID then
    Begin
      result:=true;
      break;
    end;
  end;
end;

procedure TW3MessageSubscription.Dispatch(const Message:TW3MessageData);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = Message.ID then
    Begin
      if assigned(FObjects[x].Callback) then
      FObjects[x].Callback(Message);
      break;
    end;
  end;
end;

//#############################################################################
// TW3MsgPort
//#############################################################################

Constructor TW3MsgPort.Create;
begin
  FFrame:=allocIFrame;
  if (FFrame) then
  inherited Create(FFrame.contentWindow) else
  Raise Exception.Create('Failed to create message-port error');
end;

procedure TW3MsgPort.ReleaseWindow;
Begin
  ReleaseIFrame(FFrame);
  FFrame:=unassigned;
  Inherited;
end;

Procedure TW3MsgPort.ReleaseIFrame(aHandle:THandle);
begin
  If (aHandle) then
  Begin
    asm
      document.body.removeChild(@aHandle);
    end;
  end;
end;

function TW3MsgPort.AllocIFrame: THandle;
Begin
  asm
    @result = document.createElement('iframe');
  end;

  if (result) then
  begin
    // if no style property exists, we provide one
    if not (result['style']) then
    result['style'] := TVariant.createObject;

    // set the display style to hidden so the frame never shows
    result['style'].display := 'none';

    asm
      document.body.appendChild(@result);
    end;

  end;
end;

//#############################################################################
// TW3CustomMsgPort
//#############################################################################

Constructor TW3CustomMsgPort.Create(WND: THandle);
Begin
  inherited Create;
  FWindow := WND;
  if (FWindow) then
  begin
    FWindow.addEventListener('message', @HandleMessageReceived, false);
    FWindow.addEventListener('error', @HandleError, false);
  end;
End;

Destructor TW3CustomMsgPort.Destroy;
begin
  if (FWindow) then
  begin
    FWindow.removeEventListener('message', @HandleMessageReceived, false);
    FWindow.removeEventListener('error', @HandleError, false);
    ReleaseWindow;
  end;
  inherited;
end;

procedure TW3CustomMsgPort.HandleMessageReceived(eventObj:JMessageEvent);
Begin
  if assigned(OnMessageReceived) then
  OnMessageReceived(self,eventObj);
End;

procedure TW3CustomMsgPort.HandleError(eventObj:JDOMError);
Begin
  if assigned(OnError) then
  OnError(self,eventObj);
end;

procedure TW3CustomMsgPort.Releasewindow;
begin
  FWindow := Unassigned;
end;

Procedure TW3CustomMsgPort.PostMessage(msg:Variant;targetOrigin:String);
begin
  if (FWindow) then
  FWindow.postMessage(msg,targetOrigin);
end;

// This will send a message to all frames
procedure TW3CustomMsgPort.BroadcastMessage(msg:Variant;targetOrigin:String);
var
  x:  Integer;
  mLen: Integer;
begin
  mLen := TVariant.AsInteger(browserAPI.Window.frames.length);
  for x:=0 to mLen-1 do
  begin
    browserAPI.window.frames[x].postMessage(msg,targetOrigin);
  end;
end;

finalization
begin
  if assigned(_mainport) then
  _mainport.free;
end;

end.
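
And to close the circle, a small usage sketch built purely on the declarations above – the message ID, payload and handler body are made up for illustration:

const
  MSG_HELLO = CNT_QTX_MESSAGES_BASEID + 1;

var
  HelloSub: TW3MessageSubscription;

procedure SetupHelloSubscription;
begin
  // register interest in MSG_HELLO; the dispatcher calls us back whenever
  // a message with that ID arrives from any frame or window
  HelloSub := TW3MessageSubscription.Create;
  HelloSub.Subscribe(MSG_HELLO,
    procedure (Message: TW3MessageData)
    begin
      writeln('Hello received from ' + Message.Source + ': ' + Message.Data);
    end);
end;

procedure BroadcastHello;
var
  LMsg: TW3MessageData;
begin
  // broadcast to every frame and window that is listening
  LMsg := TW3MessageData.Create;
  LMsg.ID := MSG_HELLO;
  LMsg.Data := 'Hi from the desktop';
  W3_BroadcastMessage(LMsg);
end;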

Amibian.js and the Narcissus hack

April 22, 2017 Leave a comment

Wow, I must admit that I never really thought Amibian.js would become even remotely as popular as it has – yet people respond with incredible enthusiasm to our endeavour. I was just told that an article at Commodore USA mentioned us – that’s exposure to 37,000 readers. Add that to the roughly 40,000 people that subscribe to my feeds around the world and I must say: I hope I code something worthy of your time!

amibian_cortana

This is going to look and behave like the bomb when I’m done!

But there is a lot of stuff on the list before it’s even remotely finished. This is due to the fact that I’m not just juggling one codebase here – I’m juggling 5 separate yet interconnected codebases at the same time (!). First there is the Smart Mobile Studio RTL (run-time library), which represents the foundation. This gives me object-oriented, fully inheritance-driven visual controls. It has roughly 5 years of work behind it.

138-Quake_III_Arena-1479915663

Still #1 after all these years

On top of that you have the actual visual controls, like buttons, scrollbars, lists, css3 effect engines, tweening, database storage and a ton of low-level stuff. The browser has no idea what a window is, for example, let alone how it should look or respond to users. So every little piece has to be coded by someone. And well, that’s what I do.

Next you have the workbench and operating system itself. What you know as Amibian.js, Smart Workbench or Quartex media desktop – take your pick – is already a substantial codebase spanning some 40 units with thousands of lines of code. It is divided into two parts: the web front-end that you have all seen, and the node.js back-end that has not yet been made public.

And on top of that you have the external stuff. Quake III didn’t spontaneously self-assemble inside the desktop; someone had to do some coding and make the two interface. Same with all the other features you have.

The worst so far (as in damn hard to get right) is the Ace text editor. Ace by itself is super easy to work with – but you may have noticed that we have removed its scrollbars and replaced them with Amiga scrollbars instead? That is a formidable challenge in its own right.

patch2

Aced it! Kidnapping signals was a challenge

What’s on the menu this weekend?

I noticed that on Linux, text selection was utterly messed up, so when you moved a window around it would suddenly start selecting the title text of other elements around the desktop. This is actually a bug in the browser – not my code – but I still have to code around it. Which I have now done.
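
For context, a common way to code around this class of problem (not necessarily the exact fix used in Amibian.js) is to switch off text selection on the desktop element while a drag is in progress:

procedure SetDesktopSelectable(const Desktop: THandle; const Allow: boolean);
var
  LValue: string;
begin
  if (Desktop) then
  begin
    if Allow then
      LValue := ''
    else
      LValue := 'none';
    // modern engines
    Desktop['style'].userSelect := LValue;
    // prefixed variants still found on older / embedded browsers
    Desktop['style'].webkitUserSelect := LValue;
    Desktop['style'].MozUserSelect := LValue;
  end;
end;

Call it with Allow = false on mouse-down over a title-bar, and with Allow = true again when the drag ends.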

I also solved selection for the console window (or any “text” container – a window is made up of many parts, and the content region can be inherited from and replaced), so that should now work fine regardless of browser and platform. Ace theming also works, and the vertical scrollbar responds as expected. There are still a few tweaks needed to get the movement right, but that is easy stuff. The hard part is behind us, thankfully.

patch

Selection – works

Right now I’m working on ScummVM so that should be in place later today 🙂

That’s cool, but what motivates you?

2jd3x2q

Cult of Joy

Retro gaming is important, and we have to make it easy for people to enjoy their retro gear without patent trolls ruining the fun. I’m just so tired of how ruined the Amiga scene is by these people (3 companies in particular). Thieves is the only word I can find that fits.

So fine! I will make my own. Come hell or high water. Free as a bird and untouchable.

So I have made some tools that will make it ridiculously easy for you to share, download and play your games online. Whenever you want, hosted wherever you need, and there is not a god damn thing people can do about it. When you realize how simple the hack is it will make you laugh. I came up with this ages ago and dubbed it Narcissus.

To understand the Narcissus hack, consider the following:

PNG is a lossless compression format, meaning that it doesn’t lose any information when compressed. It’s not like JPEG, which scrambles the original and saves a facsimile that tricks the eye. Nope, if you compress data into a PNG image you get the exact same data back out when you decode it (read: show it).

But who said we have to store pixels? Pixels are just bytes after all. In fact, why can’t we take a whole game disk or rom and store that inside a picture? Sure you can!

cool

This tool is now built into Amibian.js

It’s amusing, I came up with this hack years ago. It has been a part of Smart Mobile Studio since the beginning.

You have to remember that retro games are super small compared to modern games. The average ADF file is what, 880kb or something like that? Well hold on to your hat buddy, because a single PNG can hold 64 megabytes of data! You can encode a decent Amiga hard disk image in 64 megabytes.
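
The actual encoder ships with Smart Mobile Studio, but the gist of the trick can be sketched in a handful of lines. This is not the shipped implementation, just the idea – a real version would also prepend the payload length and encrypt the bytes:

function EncodeBytesToPngUrl(const Data: array of integer): string;
begin
  // Pack raw bytes into the RGB channels of a canvas and let the browser
  // serialize it as a loss-less PNG data-url. Alpha is kept at 255 so the
  // browser never premultiplies (and thereby corrupts) the colour bytes.
  asm
    var bytes  = @Data;
    var pixels = Math.ceil(bytes.length / 3);
    var side   = Math.ceil(Math.sqrt(pixels));
    var canvas = document.createElement('canvas');
    canvas.width  = side;
    canvas.height = side;
    var ctx = canvas.getContext('2d');
    var img = ctx.createImageData(side, side);
    var p = 0;
    for (var i = 0; i < bytes.length; i += 3) {
      img.data[p]     = bytes[i];
      img.data[p + 1] = bytes[i + 1] || 0;
      img.data[p + 2] = bytes[i + 2] || 0;
      img.data[p + 3] = 255;
      p += 4;
    }
    ctx.putImageData(img, 0, 0);
    @result = canvas.toDataURL('image/png');
  end;
end;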

eatme

Can you guess what the picture on the right contains? This picture is actually ALL the Amiga rom files packed into a single image. Don’t worry, I converted it to JPEG to mess up the data before uploading. But yes, you can now host not just the games as normal picture files, but also roms and whatever you like.

And the beauty of it – who the hell is going to find them? You can host them on Github, Google Drive, Dropbox or right on your blog – if you don’t have the encryption key the file is useless.

Snap, crackle and pop!

RSS Filesystem

You know RSS feeds, right? If you sign up for a blog you automatically get an RSS feed. It’s basically just a list of your recent posts – perhaps with an extract from each article, a thumbnail picture and links to each post. RSS has been around for a decade or more. It’s a great way to keep track of news.

The second hack is that, using the data-to-image encoder, you can store a whole read-only filesystem as a normal RSS feed. Always think outside the box!

Let’s say you have a game collection for your Amiga, right? Let’s say 200 games. Wouldn’t it be nice to have all those games online? Just readily available regardless of where you may be? Without “you know who” sending you a nasty email?

Well, just encode your games as described above, include the data-picture in your WordPress post, and do that for each of your games. Since you can encrypt these images they will be worthless to others. But for you it’s a neat way of hosting all your games online for free (on WordPress or Blogger) and playing them via Amibian or the patched UAE4Arm (oops, did I share that? sorry Dirk *grin*) – and you’re home free.
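
Reading the data back out is the same trick in reverse: draw the picture onto a canvas and harvest the channel bytes. Again just a sketch, not the shipped code (the callback type and helper are made up) – and note that getImageData only works if the picture is same-origin or served with CORS headers, and that JPEG re-compression destroys the payload:

type
  TBytesCallback = procedure (Data: variant);

procedure DecodePngUrlToBytes(const PngUrl: string; const OnDone: TBytesCallback);
begin
  asm
    var img = new Image();
    img.crossOrigin = 'anonymous';   // needed for cross-domain hosts that send CORS headers
    img.onload = function () {
      var canvas = document.createElement('canvas');
      canvas.width  = img.width;
      canvas.height = img.height;
      var ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      var rgba  = ctx.getImageData(0, 0, img.width, img.height).data;
      var bytes = [];
      for (var i = 0; i < rgba.length; i += 4) {
        bytes.push(rgba[i], rgba[i + 1], rgba[i + 2]);  // alpha is only padding
      }
      // a real implementation would trim the padding using a stored length header
      (@OnDone)(bytes);
    };
    img.src = @PngUrl;
  end;
end;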

You know what’s really cool? For this part Amibian doesn’t even need a server. So you can just save the Amibian.js html page on your phone and that’s all you need.

RumoredPirates

Drink up me hearties yo ho!

Delphi developer on its own server

April 4, 2017 Leave a comment

While the Facebook group will naturally continue exactly like it has these past years, we have set up a forum for active Delphi developers on my Quartex server.

This has a huge benefit: first of all, those who want to test the Smart Desktop can do so from the same domain – and people who want to test Smart Mobile Studio can do so with myself just a PM away. Error reports etc. will still need to be sent to the standard e-mail, but now I can take a more active role in supervising the process and help clear up whatever misunderstandings may occur.

casebook

Always good with a hardcore Smart, Laz, amibian.js forum!

Besides that we are building a lively community of Delphi, Lazarus, Smart and Oxygene/RemObjects developers! Need a job? Have work you need done? Post an ad – won’t cost you a penny.

So why not sign up? It’s free, it won’t bite you, and we can continue regardless of Facebook’s uptime this year.

You enter here and just fill out user/pass and that’s it: http://quartexhq.myasustor.com/sharetronix/